Artificial intelligence (AI) has rapidly become part of everyday life. Tools such as ChatGPT and Google Gemini can write summaries, explain legal and medical concepts, and help businesses draft documents. Despite their impressive capabilities, these systems are not perfect. AI chatbots can get things wrong and occasionally make very serious mistakes, including inventing allegations of criminal behaviour and professional misconduct. When this happens, it raises a difficult question: if an AI chatbot makes a false allegation, who is responsible? The answer is by no means straightforward, because AI is not a legal person and cannot be sued. Instead, the people and organisations that create the AI, use it, or share its output can be held liable. Here we explore the legal issues raised by AI false allegations and the ways to reduce the risks.

AI generates text by predicting likely sequences of words; it does not verify facts. When those predictions go wrong, a chatbot can produce errors known as hallucinations, which may include false allegations of crime, fraud, or other professional misconduct. Notably, MP Peter Wishart was falsely accused of enabling grooming gangs by Elon Musk's AI chatbot, Grok, although the chatbot subsequently issued a public apology and retracted the false statement on its platform.

However, even where a retraction follows, false statements that are repeated and cause serious harm to an individual's or business's reputation can be profoundly damaging.

Who bears the risk of liability?

AI-generated defamation is an evolving legal area, and liability typically rests with the humans or corporations that develop, deploy, or use the system. In the UK, a statement is defamatory only if its publication has caused, or is likely to cause, serious harm to the claimant's reputation (section 1 of the Defamation Act 2013). Where an AI system publishes content that meets this threshold, the operator or publisher of that content could be liable. Similarly, a person who uses an AI tool and shares an AI-generated false allegation may be liable as a publisher, even if they did not intend to defame. Organisations that use AI tools within their business should be aware that they can be liable if defamatory output is shown to clients, customers or the public.

Strategies to reduce AI risk

Businesses and individuals can limit their exposure to AI liability risks by adding monitoring layers that filter AI outputs before they are shared, and by requiring human review of responses on sensitive topics, such as questions about an individual's reputation. On the compliance side, AI developers can regularly audit and test the models they have created to check the accuracy and quality of their output. Taking these measures helps show that businesses, individuals, and AI developers have taken steps to prevent AI generating defamatory statements, which reduces their liability risk.
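
To illustrate what a simple monitoring layer might look like in practice, here is a minimal sketch in Python. It is an assumption-laden example, not a production system: the allegation keyword list, the crude name-detection heuristic, and the review-queue handling are all hypothetical choices for illustration. The idea is simply that output mentioning a named person together with allegation-style language is held for human review rather than published automatically.

```python
# Minimal sketch of an output "monitoring layer": before an AI response is
# shown to a user, scan it for allegation-style language about a named
# person and route any match to human review instead of publishing.
# The keyword list, name heuristic, and handling below are illustrative
# assumptions, not a production moderation system.

import re

# Words that often signal an allegation of crime or misconduct (assumed list).
ALLEGATION_TERMS = [
    "fraud", "criminal", "convicted", "arrested", "misconduct",
    "embezzled", "grooming", "assault", "bribery",
]

# Very rough proper-noun heuristic: two adjacent capitalised words.
NAME_PATTERN = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")


def needs_human_review(ai_output: str) -> bool:
    """Return True if the output names a person and uses allegation language."""
    mentions_person = NAME_PATTERN.search(ai_output) is not None
    text = ai_output.lower()
    mentions_allegation = any(term in text for term in ALLEGATION_TERMS)
    return mentions_person and mentions_allegation


def publish_or_hold(ai_output: str) -> str:
    if needs_human_review(ai_output):
        # In practice this would go to a review queue, not be discarded.
        return "Held for human review before publication."
    return ai_output


if __name__ == "__main__":
    print(publish_or_hold("The weather in London is mild today."))
    print(publish_or_hold("John Smith was convicted of fraud last year."))
```

A real deployment would use far more robust named-entity detection and a proper review workflow, but even a gate this simple demonstrates the principle: sensitive outputs are intercepted before they reach clients, customers or the public.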

Contact us

At Nath Solicitors, we offer expert legal advice on defamation, online reputation management, and media law. If you need assistance with managing user-generated comments, call us on 0203 983 8278 or email us at enquiries@nathsolicitors.co.uk.
