Italy has become the first Western country to block advanced chatbot ChatGPT.
The Italian data-protection authority said there were privacy concerns relating to the model, which was created by US start-up OpenAI and is backed by Microsoft.
The regulator said it would ban and investigate OpenAI “with immediate effect”.
Millions of people have used ChatGPT since it launched in November 2022.
It can answer questions using natural, human-like language and can mimic other writing styles, drawing on the internet as it was in 2021 as its database.
Microsoft has spent billions of dollars on the technology, which was added to Bing last month.
It has also said that it will embed a version of the technology in its Office apps, including Word, Excel, PowerPoint and Outlook.
There have been concerns over the potential risks of artificial intelligence (AI), including its threat to jobs and the spreading of misinformation and bias.
Earlier this week key figures in tech, including Elon Musk, called for the development of these types of AI systems to be suspended, amid fears that the race to build them was out of control.
The Italian watchdog said that not only would it block OpenAI’s chatbot but it would also investigate whether it complied with the General Data Protection Regulation (GDPR).
GDPR governs the way in which we can use, process and store personal data.
The watchdog said that on 20 March the app had experienced a data breach involving user conversations and payment information.
It said there was no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”.
It also said that since there was no way to verify the age of users, the app “exposes minors to absolutely unsuitable answers compared to their degree of development and awareness”.
Bard, Google’s rival artificial-intelligence chatbot, is now available, but only to specific users over the age of 18 – because of those same concerns.
The Italian data-protection authority said OpenAI had 20 days to say how it would address the watchdog’s concerns, under penalty of a fine of €20 million ($21.7m) or up to 4% of annual revenues.
Dan Morgan, from cybersecurity ratings provider Security Scorecard, said the ban showed the importance of regulatory compliance for companies operating in Europe.
“Businesses must prioritise the protection of personal data and comply with the stringent data protection regulations set by the EU – compliance with regulations is not an optional extra.”
Consumer advocacy group BEUC also called on EU and national authorities – including data-protection watchdogs – to investigate ChatGPT and similar chatbots, following the filing of a complaint in the US.
Although the EU is currently working on the world’s first legislation on AI, BEUC’s concern is that it would take years before the AI Act could take effect, leaving consumers at risk of harm from a technology that is not sufficiently regulated.
Ursula Pachl, deputy director general of BEUC, warned that society was “currently not protected enough from the harm” that AI can cause.
“There are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people. These AI systems need greater public scrutiny, and public authorities must reassert control over them,” she said.
ChatGPT is already blocked in a number of countries, including China, Iran, North Korea and Russia.
OpenAI has not yet responded to the BBC’s request for comment.