ChatGPT banned in Italy over potential data protection breaches

OpenAI's bot gets its first western government ban and a potential $21m fine in response to data concerns. But similar privacy fears have been growing across the world, including in the US.

ChatGPT has been provisionally blocked in Italy following concerns that the artificial intelligence tool violated the country's data protection rules.

The chatbot, which is operated by OpenAI and financially backed by Microsoft, has amassed more than 100 million monthly active users since launching late last year.

In a statement last Friday, the Italian data protection agency announced that it would immediately block the chatbot from collecting Italian users' data while authorities investigate OpenAI, the California company behind ChatGPT.

The investigation comes after the chatbot experienced a data breach on 20 March, which put at risk some users' personal data, such as their chat history and payment information.

The Italian Data Protection Authority, Garante per la Protezione dei Dati Personali (GPDP), officially announced a ChatGPT ban, effective immediately, that would last until OpenAI brings the service into compliance with the EU's GDPR.

According to OpenAI, the error that caused the leak has been patched.

However, the Italian agency also questioned OpenAI's data collection practices and whether the breadth of data being retained is legal – taking issue with the lack of an age verification system to prevent minors from being exposed to inappropriate answers. The software is supposed to be reserved for people aged 13 and over.

As a result, OpenAI has been given 20 days to respond to the agency's concerns, or the company could face a fine of up to €20m (about $21m) or 4% of its global annual turnover, whichever is greater.

It is the first time a national regulator has made such a move against ChatGPT. While the tech has wowed many with its ability to write computer code and pass tough exams, its development has left some concerned.

"Users aren't being given the information to allow them to make an informed decision"

The ban comes after EU law enforcement agency Europol warned that ChatGPT could be exploited by criminals and used to spread disinformation online.

Data, privacy, and cyber security lawyer Edward Machin, of Ropes & Gray, said "it wouldn't be surprising" to see more regulators follow Italy's lead.

"It's easy to forget ChatGPT has only been widely used for a matter of weeks," he said. "Most users won't have stopped to consider the privacy implications of their data being used to train the algorithms that underpin the product. Although they may be willing to accept that trade, the allegation here is users aren't being given the information to allow them to make an informed decision, and more problematically, that in any event there may not be a lawful basis to process their data."

Trouble brewing in the US for OpenAI

Earlier this week, the US-based Center for AI and Digital Policy filed a complaint with the Federal Trade Commission (FTC) over GPT-4, the latest version of the technology behind ChatGPT, describing it as having the ability to "undertake mass surveillance at scale".

The group asked the FTC to halt OpenAI from releasing future versions until appropriate regulations are established.

"We recognise a wide range of opportunities and benefits that AI may provide," the group wrote in a statement. "But unless we are able to maintain control of these systems, we will be unable to manage the risk that will result or the catastrophic outcomes that may emerge."

Elon Musk this week joined a group of AI experts in signing an open letter calling for a pause in the training of large language models more powerful than GPT-4, the kind of system that underpins ChatGPT.

The move follows the release of OpenAI's GPT-4, an upgraded version of the technology behind its chatbot. GPT-4 already powers the chat feature in Microsoft's Bing search engine and is being added to Office apps such as Teams and Outlook.

OpenAI's AI technology, best known through its chatbot, has become a global phenomenon thanks to its wide range of capabilities, from crafting realistic art to passing academic exams to working out someone's taxes.
