Over the last year we have witnessed a rise in the use of AI in recruitment, with the aim of increasing efficiency, minimising human administration and scaling hiring processes.
However, whilst the benefits are clear, there are significant risks with using these tools which are starting to be identified, particularly in terms of unintended bias and discrimination. To assist employers who are using, or considering the use of, AI in recruitment, we have put together a summary of the key risks that employers should be aware of.
AI in Recruitment: Use Cases
Before jumping into the risks of using these tools, it's good to have a basic understanding of how they are being utilised in recruitment, and the benefits this offers:
- Sourcing: Employers are utilising AI tools to create job adverts, identify potential candidates, and filter CVs.
- Screening: AI tools have been created which can evaluate candidate profiles against predefined criteria. This streamlines the administrative process of narrowing the pool for interview.
- Interview: Some employers are using AI tools to draft interview questions, and in some cases even to conduct initial interviews. AI can also be used to analyse written and video interviews to identify strong responses against predefined criteria.
- Selection: AI tools are being used to identify and select the best candidates based on qualifications and skills.
Legal Risks and Challenges
As AI develops, the use cases above will expand, bringing even more tempting benefits for streamlining and simplifying the recruitment process. Whilst these leaps forward are impressive, and certainly useful for employers, it is imperative for anyone using these tools to be aware of the legal risks.
1. Unintended Bias and Discrimination:
The biggest risk of utilising AI in recruitment is that of unintended bias and discrimination. As a learning model, AI draws on large historic datasets to learn from and to make its decisions. This model necessitates a large volume of historic data, which, particularly in recruitment, will often reflect historic discriminatory hiring practices favouring majority groups and men. Where the AI has learnt a pattern of characteristics that make up a suitable candidate for a role, its decisions can inadvertently perpetuate these biases. For example, it may see that successful candidates for a CFO role in a large company over the last 30 years have been white men from private school backgrounds. This learned data may therefore result in it favouring similar candidates, even if these are not criteria that the employer has requested.
As a result, discrimination against applicants based on gender, age, race or other protected characteristics is a significant concern.
2. Digital Exclusion:
One area of concern that has arisen as companies start to utilise these tools more is the risk of unfairly excluding candidates who lack proficiency in, or access to, technology. This creates increased risks of discrimination on the grounds of age and disability.
Employers need to ensure that, where an AI recruitment process is utilised, they are able to make reasonable adjustments where required so that those with disabilities can engage in the process, and that potential candidates are not unfairly excluded.
3. Data Protection and Privacy:
Another issue that has been raised regarding AI processes is their compliance with the UK GDPR and data protection laws. Recruitment processes require these AI systems to process personal data, and as such they must comply with data protection laws.
Key tenets of these laws include transparency, informed consent, and secure data handling. To be compliant with the UK GDPR, employers need to understand how the AI is processing this data, and need to carefully consider the use case for, and necessity of, the programme. If, as part of the process, employers are processing sensitive personal data, such as medical information, this will require even more safeguards.
Employers are encouraged to discuss the security systems of AI programmes with the third-party supplier, to carry out their own checks, and to ensure that employees using these processes are suitably trained.
Mitigating Risks
While the benefits of AI processes are considerable, employers must strike a balance between efficiency and their legal and ethical obligations. Steps employers can take to mitigate risks include:
- Assess AI Systems: Evaluate AI tools for fairness, transparency, and compliance.
- Track performance: Use metrics and processes to monitor AI performance and track areas of high risk.
- Choose Trustworthy Suppliers: Verify claims made by AI system providers and ensure you are aware of how data is being processed and safeguarded.
This is a complicated area, but our team are available to provide support and advice, whether in the initial stages of considering risks or once risks have been identified. Please do reach out to our employment lawyers if you would like more information; we would be happy to help.