Wozniak, Musk and 1,000 tech leaders urge global pause on ‘dangerous’ AI development race

Open letter warns of 'profound risk to society and humanity' that could have 'catastrophic' effects if AI expansion goes unchecked.

The Future of Life Institute has published an open letter arguing that humankind doesn't yet know the full scope of the risks involved in advancing AI technology at its current rate. 

The letter, issued by the non-profit group, has been signed by more than 1,000 people, including Apple co-founder Steve Wozniak, Twitter CEO Elon Musk, Yoshua Bengio, often referred to as one of the "godfathers of AI", and Stuart Russell, a pioneer of research in the field, as well as researchers at Alphabet-owned DeepMind.

The missive warns of potential risks to society and civilisation posed by human-competitive AI systems, in the form of economic and political disruption.

“Only once we are confident that their effects will be positive” 

"AI systems with human-competitive intelligence can pose profound risks to society and humanity," the letter warns.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

The signatories call on AI labs to pause the training of the most powerful systems for at least six months. If such a halt cannot be enacted quickly, the letter says, governments should step in and institute a moratorium.

"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," the letter continues.

"These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt."

The Future of Life Institute is primarily funded by the Musk Foundation, the London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation, according to the European Union's transparency register.

Since its release last year, Microsoft-backed OpenAI's ChatGPT has prompted rivals to launch similar products and companies to build the technology into their apps and services. Musk was one of the founders of OpenAI, the company that created ChatGPT, in 2015.

Governments step in with AI regulation

The letter comes as the EU's police agency, Europol, on Monday voiced ethical and legal concerns over advanced AI such as ChatGPT, warning of its potential misuse in phishing attempts, disinformation and cybercrime.

The UK government has also issued its first white paper aimed at promoting AI growth without putting the public at risk. 

The white paper rests on five principles: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
