Snapchat may have failed to adequately identify and assess the risks its artificial intelligence (AI) chatbot posed to millions of users, particularly children, says the UK data watchdog.
The Information Commissioner’s Office (ICO) has issued Snap, Snapchat’s parent company, a preliminary enforcement notice over the privacy risks posed by its generative AI chatbot ‘My AI’.
As a result, Snapchat could face a multi-million pound fine or even be forced to shut down the feature in the UK.
The ICO issued the notice after investigating My AI, which launched in spring 2023. Powered by OpenAI’s GPT technology, the chatbot was the first example of generative AI embedded into a major messaging platform in the UK.
However, the ICO has found that the risk assessment Snap conducted before it launched My AI did not “adequately assess the data protection risks” to users, particularly children aged 13 to 17.
“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching My AI,” information commissioner John Edwards said.
“We have been clear that organisations must consider the risks associated with AI, alongside the benefits. Today’s preliminary enforcement notice shows we will take action in order to protect UK consumers’ privacy rights.”
The US company said it was “closely reviewing” the provisional findings.
“In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available,” Snap added.
Snapchat has 21 million monthly active users in the UK, of whom 48 per cent are aged 24 or under, according to market research company Insider Intelligence. About 18 per cent of UK users are aged 12 to 17.
The findings of the investigation are provisional and Snap has been given until 27 October to make representations before a final decision is made.