The growing prominence of AI tools like ChatGPT and Anthropic's Claude in our business and personal lives is plain to see. With it, the threat of defamation by AI-generated content has emerged as a real challenge for individuals and businesses alike. Nath Solicitors in London is a boutique law firm specialising in all aspects of defamation. Here we look at existing defamation laws – a mix of legislation and court decisions – and how they can be applied to some of the issues raised when AI-generated content defames an individual or business.

Is AI Capable of Defaming Someone?

Content generated by AI can be defamatory because the tools that produce text and images do not verify what they say. They simply produce information from the data that they are given or ‘fed’. If the data is incorrect, the results produced can be defamatory in various ways. For example:

  • Inaccurate medical and legal claims
  • False descriptions of case law, legal actions and regulatory results
  • Fabricated statements about companies or individuals
  • Out of context links, accusations or quotes

Ways AI Can Commit Defamation

As we’ve seen, AI tools can get it wrong, and the results can be devastating for individuals and businesses identified or affected by the AI material. Defamation can arise in multiple scenarios. For example:

  • AI providing outdated or incorrect data and presenting it as up-to-date fact
  • Chatbots producing false allegations that a public figure has engaged in illegal activities, such as tax evasion
  • AI referring to lawsuits or litigation that never occurred

It is not just text-focused AI programs that are capable of defamation, but also those that solely produce images and videos, such as deepfakes.

Even if the defamatory AI generated content was not created with the intention to cause harm, liability may still arise. Given the evolving nature of the technology however, the question of who is responsible – who the victim can sue – is not always an easy one to answer.

Who Is Responsible for AI Defamation?

The UK government has not amended defamation law specifically to deal with AI content that could be defamatory. However, the current rules on online defamation apply to AI-generated content in the same way as they do to human or user-generated content. That means if AI content – whether a statement or an image (such as a deepfake) – is defamatory, the individual or business harmed can sue for defamation.

To be defamatory the AI content must lower an individual or business in the opinion of ‘right-thinking members of society’. Will people think less of you or your business because of the statement?

In addition, anyone suing for defamation must show that:

  • The material actually identifies them
  • It was published to a third party
  • The statement caused them serious harm or, in the case of a business, serious financial loss

When it comes to AI, the following issues arise when deciding liability:

  • Is the user who inputs the prompt to the AI tool responsible?
  • Is the platform provider, such as OpenAI or Google, responsible?
  • Is there joint liability if the material is reposted?

Nath Solicitors Case Study

We recently advised a UK client who had been the victim of AI-generated content which falsely stated that he was ‘convicted of fraud’ based on a fictitious regulatory fine.

The statements provided were:

  • Baseless and completely false
  • Spreading in investor networks
  • Indexed on Google

We carefully analysed the case according to:

  • The Defamation Act 2013
  • The platform’s response and moderation
  • The nature and repetition of the material

Consequently, we issued pre-action notices, made an application for injunctive relief and applied for Norwich Pharmacal orders.

Through our methods, we successfully:

  • Prepared a defamation claim against a party who knowingly repeated the falsehoods
  • Managed the full takedown of the material and retraction from the AI provider
  • Removed indexed content from various search engines

Relevance and Significance

From our experience in helping clients affected by defamatory AI content, we are aware of the unique risks that AI poses. These include:

  • Anonymity: Users can prompt AI to generate harmful content behind untraceable prompts
  • Persistence: Once created and indexed online, the content can remain accessible indefinitely
  • Speed: The content can be generated and spread within seconds

Given the dangers of AI, it is important to act decisively, strategically and promptly.

Contact Us

At Nath Solicitors, we specialise in handling defamation cases, including those involving anonymous online reviews. If you are facing issues related to defamatory content or need legal guidance to protect your reputation, contact us at 0203 983 8278 or get in touch with Shubha Nath online. Our experienced team is here to help you navigate these complex issues and safeguard your reputation.
