Human resources at a click

The Era of AI

Felicity Harber v The Commissioners for HMRC [2023] UKFTT 1007 (TC)

In this recent case, the First-tier Tribunal (Tax Chamber) gave a stark warning to litigants about the use of AI in litigation. Ms Harber, a litigant in person, had failed to notify HMRC of, and pay, the Capital Gains Tax (CGT) due following the sale of a property. A penalty of £3,265.11 was subsequently imposed, which Ms Harber appealed, arguing that she had a reasonable excuse for her failure to pay the CGT.

In support of her appeal, Ms Harber cited nine fictitious cases generated by ChatGPT, a generative artificial intelligence (AI) tool. When she put the cases to the Tribunal at the hearing, the judges were unable to locate any of them in any available database. Ultimately, Ms Harber lost her appeal: the Tribunal concluded that she did not have a reasonable excuse for her failure to notify HMRC of her CGT liability. Although the Tribunal accepted that she had relied on the nine decisions innocently, it nonetheless gave a strong caution against litigants using AI-generated authorities, highlighting the danger they pose.

The dangers of AI

As we witness the inevitable expansion and adoption of AI, we are reminded that such a significant tool is not without its risks. Generative AI produces a range of content, including images, text, videos and other media, from the information and data it receives. AI can be a legitimate legal research tool, especially for litigants in person who may not have access to professional legal databases. However, as the above case shows, one of its key problems is accuracy. Generative AI tools can "hallucinate", producing plausible but entirely fabricated material, and the data they are trained on may contain bias and false patterns.

Whether you are a litigant in person or a legal professional, a cautious approach must be taken when using AI for legal research. Relying on false information will not only affect the outcome for the parties concerned, but will also damage their credibility and reputation. We anticipate that judges will take an increasingly tough approach towards the incorrect or inappropriate use of AI in future.


Other Areas

Legal research is not the only area in which AI poses a threat going forward. Other areas to watch out for include:

  • Privacy and confidentiality. To generate content, AI relies on the data and information it is given; in turn, it may reproduce private or confidential data in its output.
  • Fraud. AI tools are, and will continue to be, used in fraudulent activities, for example by combining real and/or fake data to create convincing scams.
  • Intellectual property. Given the wealth of data and content that AI tools draw on, we are highly likely to see an increase in intellectual property litigation, for example over who owns the rights to AI-generated content.

If you have any further questions in relation to legal research and/or litigation more generally, please contact our Dispute Resolution solicitors.

 

 

Disclaimer: This information is for guidance purposes only and should not be regarded as a substitute for taking professional and legal advice. Please refer to the full General Notices on our website.
Madeleine Harding
Trainee Solicitor

