Snap AI Chatbot Investigation Launched in UK Over Teen-Privacy Concerns


Snap, the parent company of popular social media platform Snapchat, is facing an investigation in the UK regarding potential privacy risks associated with its generative artificial intelligence chatbot, “My AI”.

The Information Commissioner’s Office (ICO), the UK’s data protection regulator, issued a preliminary enforcement notice on Friday, expressing concerns about the potential risks the chatbot may pose to Snapchat users, particularly those aged between 13 and 17.

Information Commissioner John Edwards stated, “The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI’.”

While the findings are not yet conclusive, Snap will be given an opportunity to address the provisional concerns before a final decision is made. If the ICO’s provisional findings lead to an enforcement notice, Snap may be required to halt the offering of the AI chatbot to UK users until the privacy concerns are resolved.

A spokesperson from Snap responded, “We are closely reviewing the ICO’s provisional decision. Like the ICO, we are committed to protecting the privacy of our users. In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available.”

Snap has said it will work closely with the ICO to ensure the regulator is comfortable with its risk-assessment procedures. The AI chatbot, powered by OpenAI’s ChatGPT, includes features designed to alert parents if their children have been using it. Snap has also established general guidelines for its bots to follow so that they refrain from making offensive comments.

The ICO refrained from providing additional comments, citing the provisional nature of the findings. The agency had previously issued a “Guidance on AI and data protection” and followed up with a general notice in April, outlining questions that developers and users should ask about AI.

Snap’s AI chatbot has faced scrutiny since its debut earlier this year, particularly over instances of inappropriate conversations. For example, it was reported that the chatbot advised a 15-year-old on how to hide the smell of alcohol and marijuana. In Snap’s most recent earnings report, it was revealed that over 150 million people have used the AI bot.

Other forms of generative AI have also drawn criticism recently. Bing’s image-generating AI, for instance, has been used by the extremist messaging board 4chan to create racist images.
