Artificial Intelligence is developing at an unprecedented rate, and consumers and businesses across all sectors are relying increasingly on AI tools such as ChatGPT. The technology brings huge advantages in cost and other efficiencies, and it is dramatically changing the way we work and live. At the same time, this new AI landscape is not without risk. How do governments regulate the technology to protect users from financial loss or physical harm?
Here we look at some of the legal issues raised by the increased use of AI technology. What if, for example, you seek legal or technical advice from a recognised AI tool and that information is misleading or incorrect? If you suffer financial, reputational or personal harm as a result of relying on that information, do you have any legal recourse?
We outline possible legal options below. We also discuss the EU’s recent decision to withdraw its much-anticipated AI liability directive. That legislation sought to give those harmed by AI the same protection as those harmed by other technologies. It would also have made it easier for victims to claim compensation for damage caused by AI.
Many would say it’s ill-advised to place too much reliance on chatbots and other AI tools when engaged in significant commercial transactions or personal matters. Doesn’t the end user of the technology have a responsibility to consider and assess the limitations of the technology before relying on the information?
However, as we increasingly use machine-generated information to make decisions, businesses and consumers can – in appropriate circumstances – expect to rely on information provided in this way, and to seek compensation if they rely on it to their detriment. In 2024, for example, Air Canada was forced to honour a discount offered to a traveller by a chatbot on its website – despite the airline arguing that the chatbot was a separate legal entity and therefore responsible for its own actions.
As with all things AI, the legal landscape is in a state of flux. Governments around the world are grappling with how to apply traditional legal concepts and practice to the new technology. This uncertainty was highlighted in February 2025, when the EU Commission withdrew what would have been a groundbreaking directive on AI liability. The directive would have streamlined the issue of liability for AI-caused harms and offered significant protection to users and consumers. The legislation's demise appears to have been due to long-standing disagreements among Member States and others, as well as intense lobbying from large tech firms concerned about the obstacles they said the directive would have placed in the path of development and innovation.
Whether a claim arising from an AI system's error will succeed depends on a range of factors, including the nature of the mistake, the loss suffered, how the claimant engaged with the AI system and who caused the error. Potential claims may be brought against the developer of the AI system, the business that implemented or deployed it, or both.
Sometimes there may be a degree of joint liability when there is a combination of poor design, negligent implementation of the system and user error.
When discussing the legal liability attaching to AI tools, it's important to go back to basics and remember that during their development, AI applications are trained or taught to behave in a certain way. Once deployed, however, these tools continue to learn from new data, so their behaviour and responses will change over time. Boiled down, this means that even the most sophisticated AI tool will, at times, behave in an unexpected way. It is this unpredictability that makes effectively regulating AI liability, and ensuring users can seek appropriate legal redress, such a difficult task. The EU's withdrawal of its AI liability directive illustrates this complexity and points to a tension between safeguarding the rights of consumers and the interests of Big Tech.
If you need advice on AI litigation, contact us on +44 (0) 203 983 8278 or get in touch with the firm online.