China’s answer to ChatGPT? Alibaba refuses to be left behind in the global AI development race

Tech giant produces a generative AI chatbot of its own - will China have the edge over more regulated Western markets? We spoke to AI experts about the implications…

Chinese e-commerce giant Alibaba has entered the world of generative AI, introducing a chatbot of its own to rival competitors like OpenAI’s ChatGPT.

‘Tongyi Qianwen’ – which roughly translates to “seeking truth from a thousand questions” – will reportedly work in a similar fashion to competing chatbots, taking text-based prompts from users and generating responses by using statistical probability to determine which word should come next.
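That “statistical probability” amounts to repeatedly sampling the next word from a learned probability distribution, one word at a time. A minimal sketch of the idea, using a hand-made toy word table rather than anything resembling Tongyi Qianwen’s actual model (real chatbots learn distributions over tens of thousands of tokens):

```python
import random

# Toy next-word probabilities, hard-coded for illustration only.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"barked": 0.6, "sat": 0.4},
    "market": {"grew": 1.0},
}

def generate(prompt_word, max_words=5, seed=0):
    """Extend the prompt by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if probs is None:  # no known continuation: stop generating
            break
        choices, weights = zip(*probs.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

A large language model does the same loop at vastly greater scale, with the probability table replaced by a neural network trained on huge volumes of text.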

East vs West?

Supposedly set to be incorporated into the company’s full suite of apps in the near future, Alibaba’s move into the AI space comes as tensions mount between the Chinese and Western governments, with all parties attempting to stay ahead of the technology’s rapid expansion.

Some experts believe a development arms race is on the cards, and that the entry of Chinese companies into the market could see them speed ahead of Western-based competition.

AI regulation is far more stringent in the West than in China, which may create a competitive advantage for companies like Alibaba and Baidu - a Chinese tech company that also recently released an AI-powered chatbot.

Simon Reed, CRO at Multilocal, who worked at Microsoft for several years, said: “An AI arms race is well and truly in play. The major global tech companies are all vying for the top spot and the real leader is unknown as what has been developed by each player is a closely guarded secret. China is, of course, famed for being more secretive than most countries.

“No single territory has been able to legislate fast enough or knows precisely how to legislate for this new world to protect its citizens from this potential game-changer to human existence.”

Dangerous development or positive progress?

This is the first significant piece of tech development from Alibaba since the reappearance of its founder, Jack Ma, last month. Ma’s return came just as the Chinese government enforced a restructuring of the company, breaking it down into six parts.

The government’s crackdown on private enterprises was viewed by some as a way of keeping wealthy people like Ma in check, but it doesn’t seem to have slowed down Alibaba’s thirst for innovation.

Dan Moseley, Managing Director of North America at Automated Creative, is less concerned about an arms race, believing the current state of AI development to be reminiscent “of when Amazon released Alexa and everyone rushed in with a 'me too' product”.

He said: “Brands should continue to get excited and experimental in the space, but continue to refer back to their data on how AI is benefiting them. There is a tendency to see this as an arms race, whereas one day most major businesses and platforms will have their own large language models.”

Peter Wallace, General Manager of EMEA at GumGum, understands people’s anxiety about the potential for over-development. He said that because AI’s ramifications are global, it has created “an increasingly acute need for greater regulation”, and the responsibility for safe innovation lies at the regulators’ doors.

He added: “It’s only right that these unprecedented capabilities have sparked concern along with excitement. Hopefully there will be some globally agreed standards to enable us to harness the rapidly growing power of AI productively and safely.”

But with the stakes high and much about the technology still unknown, will global governments be able to rally together and establish agreed-upon standards? Only time will tell.

“It’s unaffordable to take eyes off the ball”

The news comes as the US government begins to establish rules for artificial intelligence tools.

Andreas Rindler, Managing Director at BCG Platinion, said: "As generative AI tools such as OpenAI's ChatGPT and Google's Bard continue to make waves, it's natural to ask tough questions. We've seen the havoc misinformation and bias can wreak on AI technology, so we can't just assume these tools are inherently ethical and safe. But it would be hasty to call for a blanket suspension; what’s really needed is a responsible and ethical approach that permeates every aspect of our society.

"There are indeed many critical risks when dealing with AI. From unexpected capabilities upon deployment to its potential use as a powerful tool for phishing and fraud activities, like deepfakes, it’s unaffordable to take eyes off the ball. To mitigate this, it’s critical to raise awareness among public institutions, businesses and educational settings, to understand and guide the continued development of AI without being overwhelmed by it.”