
The Era of AI


Felicity Harber v The Commissioners for HMRC [2023] UKFTT 1007 (TC)

In this recent case, the First-tier Tribunal (Tax Chamber) gave a stark warning to litigants about the use of AI in litigation. Ms Harber, a litigant in person, had failed to notify HMRC of, and pay, the Capital Gains Tax (CGT) due following the sale of a property. This led to a penalty of £3,265.11 being imposed, which Ms Harber appealed, seeking to rely on a reasonable excuse for her failure to notify HMRC.

In an attempt to win the appeal, Ms Harber relied on nine fictitious cases she had found using ChatGPT, a generative Artificial Intelligence (AI) tool. When she put the cases to the Tribunal at the hearing, the judges were unable to locate any of them on any available database. Ultimately, Ms Harber lost her case: the Tribunal concluded that she did not have a reasonable excuse for her failure to notify HMRC of her CGT liability. Although the Tribunal accepted that she had relied on the nine decisions innocently, it nonetheless cautioned litigants strongly against using AI technology in this way, highlighting the dangers it poses.

The dangers of AI

As we witness the inevitable expansion and adoption of AI, we are reminded that such a powerful tool is not without its risks. Generative AI produces a range of content, including images, text, video and other media, from the information and data it receives. AI can be a legitimate legal research tool, especially for litigants in person who may not have access to professional legal databases. However, as the above case shows, one of its key problems is accuracy. Drawing on the wealth of information it absorbs, AI can distort facts and produce plausible but invented material (often referred to as "hallucinations"), and the data it is trained on may itself contain bias and false patterns.

Whether you are a litigant in person or a legal professional, a cautious approach must be taken when using AI for research in legal cases. Relying on false information will not only affect the outcome for the parties concerned, but will also damage credibility and reputation. We anticipate that judges will take an increasingly tough approach to the incorrect or inappropriate use of AI technology in the future.

 


Other Areas

Legal research is not the only area in which AI poses a threat going forward. Other areas to watch out for include:

  • Privacy and confidentiality. To generate content, AI relies on data and information; in turn, it may generate content which includes private or confidential data.
  • Fraud. AI tools are being used, and will continue to be used, in fraudulent activities, for example by using real and/or fake data to create realistic scams.
  • Intellectual property. Given the wealth of data and content that AI tools draw on, we are highly likely to see an increase in intellectual property litigation, for example over who owns the rights to AI-generated content.

If you have any further questions in relation to legal research and/or litigation more generally, please contact our Dispute Resolution solicitors.

 

 

Disclaimer
This information is for guidance purposes only and should not be regarded as a substitute for taking professional and legal advice. Please refer to the full General Notices on our website.

Madeleine Harding
Trainee Solicitor

