We are all aware from the media of the strides Artificial Intelligence (AI) technology is making, and how it is beginning to permeate every industry, from AI-generated art being used as credits for A-list TV shows, to ChatGPT being used to write a court defence in the USA. For some industries, this seemingly unstoppable progression is daunting, and it seems very much to be a case of catch up or be left behind. For others, it offers exciting opportunities for streamlining workloads and using time more effectively to give better value to clients.
Whether you are concerned or excited about these advancements, it is clear that this technology is here to stay and will soon play a pivotal role in many industries. With all these changes, it is important for employers to start considering how to regulate their employees' use of AI in the workplace, and how to manage the risks around when and how it is used.
With that in mind, here are some key considerations that we think employers should start thinking about now, to make sure they are ready when the future comes knocking.
Privacy and Data Protection
This is probably one of the primary concerns with AI systems, and as they become more commonplace in businesses, employers need to be certain that they are being used responsibly and that employee and client data is being protected effectively.
For those using AI systems for HR-related activities such as recruitment or performance analysis, steps need to be taken to ensure that the system analysing and storing this data is secure, and that privacy notices are updated to reflect the changes in how this data is being processed.
Where an AI system is being used to draft documents, letters, emails or policies, employers need to make sure that their employees understand that the data provided to the AI system may not be held securely. Even where assurances are given about the privacy of data storage, there are concerns that because the AI is trained on the data provided to it, it may later reproduce that data out of context, resulting in a data breach.
Employers need to ensure that their data protection notices and policies are up to date, that their employees are appropriately trained on these and receive regular refresher training particularly where the technology develops, and that they are appropriately trained on any software they are expected, or allowed, to use in the workplace.
Recruitment and Discrimination
Unfortunately, one of the most striking problems with using AI systems in the workplace, particularly in areas such as recruitment, is that they are trained on a mass of historical data which may itself contain discriminatory bias. Where this occurs, the AI system can perpetuate, or even exacerbate, the biased behaviour it has learned.
Employers therefore have a duty when using these systems to ensure that they are not inadvertently allowing discriminatory decisions to be made, particularly against those with protected characteristics, including race, gender, and age.
Similarly, where AI systems are used for algorithmic decision making, such as decisions regarding promotions and performance evaluations, the need for transparency becomes paramount. The UK GDPR limits the circumstances in which you can make solely automated decisions, including those based on profiling, that have a legal or similarly significant effect on individuals. Employees have a right to understand how these decisions are made and to challenge them if they suspect unfairness, and employers utilising these tools need to understand the data they are presented with, and why the system has reached the recommendations it has, in order to justify decisions to employees.
At the time of writing, there is no specific legislation which holds employers accountable for mitigating bias in AI-driven decisions. However, it is possible that by choosing to use software which produces such an outcome, an employer could be held accountable under existing legislation.
Capability and Performance
As employees increasingly use and rely on AI for their daily work, employers will need to reassess capability markers and expectations. For example, if a marketing assistant is using ChatGPT to draft social media posts based on specific criteria, which used to take her 2 hours per post, and now takes less than 30 minutes, her output increases substantially. In turn, her employer would need to consider the markers used for assessing her performance, and perhaps consider training her on other duties to ensure that her time is being utilised productively.
An issue that may arise in these scenarios, though, is quality control. Whilst the speed at which AI can output work is useful, tools such as ChatGPT work by using probability to guess the next most likely word in a phrase. In many ways they are a more advanced form of predictive text, and most people are aware of the potential problems with that.
AI systems cannot be relied on for factual accuracy, or even for logical and coherent writing. If employers are looking to allow their employees to rely on AI systems, they need to ensure that the work is properly quality checked before being used.
One further issue, which has been particularly prominent in the art world, is that of copyright infringement and plagiarism. Employers need to be wary of the potential pitfalls of an AI system that has learned to replicate existing works but has no understanding of the law around plagiarism or copyright infringement. These issues are gaining attention, and it would be prudent to ensure that employees are trained on how to use such software, and on what is acceptable under the law.
Automation and Job Displacement
AI-driven automation has the potential to transform job markets across industries. While this innovation can lead to increased efficiency and productivity, it also raises concerns about job displacement. Employers may need to adapt by providing retraining programs or transitioning affected employees to new roles.
We recommend that employers start to look now at where they think their workplaces will most benefit from this automation, and in turn which employees might be most affected, so that they can start upskilling and acting proactively to protect their employees and benefit the business.
Whilst there are as yet no AI-specific changes to legislation that employers need to be aware of, it is likely that such changes, and case law, are on the horizon. A prudent employer will make sure that they are acting in the best interests of both their employees and the business, to avoid potential claims relating to displacement caused by AI integration.
So is it worth the risk?
This article has set out some of the potential risks and pitfalls associated with AI systems. Some employers reading this might find themselves thinking that it cannot be worth the risk to use these systems when there are so many potential problems, and so much that is still legally uncertain. Whilst it is true that there are risks, and we advise that these systems are adopted cautiously, with appropriate risk assessments and policies in place, the potential benefits for industries are huge. Regardless of where you stand on the risk versus merit line, the simple fact is that this technology is coming, and fast. In order not to be left behind, employers should act now to put appropriate measures in place to mitigate these risks whilst still taking advantage of the improvements this technology offers.
Our team is on hand to assist with drafting appropriate use policies, as well as offering training on issues such as data protection and diversity and inclusion.