What Happens When AI Gets It Wrong?

Artificial Intelligence is developing at an unprecedented rate, and consumers and businesses across all sectors are placing ever greater reliance on AI tools such as ChatGPT. The technology brings huge advantages in cost and other efficiencies, and it is dramatically changing the way we work and live. At the same time, this new AI landscape is not without risk. How do governments regulate the technology to protect users from financial loss or physical harm?

Here we look at some of the legal issues raised by the increased use of AI technology. What if, for example, you seek legal or technical advice from a recognised AI tool and that information is misleading or incorrect? If you suffer financial, reputational or personal harm as a result of relying on that information, do you have any legal recourse?

We outline possible legal options below. We also discuss the EU’s recent decision to withdraw its much-anticipated AI liability directive. That legislation sought to give those harmed by AI the same protection as those harmed by other technologies. It would also have made it easier for victims to claim compensation for damage caused by AI.

Are Chatbots Liable For Incorrect Information?

Many would say it’s ill-advised to place too much reliance on chatbots and other AI tools when engaged in significant commercial transactions or personal matters. Doesn’t the end user of the technology have a responsibility to consider and assess the limitations of the technology before relying on the information?

However, as we increasingly use machine-generated information to make decisions, businesses and consumers can – in appropriate circumstances – expect to rely on information provided in this manner, and to seek compensation if they rely on that information to their detriment. In 2024, for example, Air Canada was forced to honour a discount offered to a traveller by a chatbot on its website – despite the airline arguing that the chatbot was a separate legal entity and therefore responsible for its own actions.

How Do We Establish Legal Liability For AI Mistakes?

As with all things AI, the legal landscape is in a state of flux. Governments around the world are grappling with how to apply traditional legal concepts and practice to the new technology. This uncertainty was highlighted in February 2025, when the EU Commission withdrew what would have been a groundbreaking directive on AI liability. The directive would have streamlined the issue of liability for AI-caused harms and offered significant protection to users and consumers. It seems the legislation’s demise was due to long-standing disagreements among Member States and others, as well as intense lobbying from large tech firms concerned at the obstacles they said the directive would place in the path of development and innovation.

Whether a claim arising from an AI system’s error will succeed depends on a range of factors, including the nature of the mistake, the loss suffered, how the claimant engaged with the AI system and who caused the error. Potential claims may be brought against:

  • The developer of the AI tool – was there a design error or a flaw in the way the system was installed? If so, the maker of the tool could be held liable in the same way a manufacturer would be under product liability legislation. If the developer failed to take reasonable steps to ensure the integrity and safety of the system, there may also be a claim in negligence.
  • A user of the AI who does not understand the tool’s limitations and relies too heavily on it could bear some liability for damage caused by incorrect information.

Sometimes there may be a degree of joint liability, where poor design, negligent implementation of the system and user error combine.

Comment

When discussing the legal liability attaching to AI tools, it’s important to go back to basics and remember that during their development, AI applications are trained to behave in a certain way. But once deployed, these tools continue to learn from new data, so their behaviour and reactions will change over time. Boiled down, this means that even the most sophisticated AI tool will, at times, behave in an unexpected way. It is this unpredictability that makes effectively regulating AI liability, and ensuring users can seek appropriate legal redress, such a difficult task. The EU’s withdrawal of its AI liability directive illustrates this complexity and suggests a tension between safeguarding the rights of consumers and the might of Big Tech interests.

Contact Us

If you need advice on AI litigation, contact us on +44 (0) 203 983 8278 or get in touch with the firm online.

 
