Using AI technologies in recruitment: is it fair and transparent?

Published on: 03/06/2024

#Data Protection

In a rapidly evolving digital landscape, where artificial intelligence (AI) plays an increasingly pivotal role in HR and recruitment processes, ensuring responsible and ethical implementation is paramount. Recognising this imperative, the Department for Science, Innovation and Technology (DSIT) released comprehensive guidance tailored to the HR and recruitment sector on 25 March 2024.

Developed collaboratively with key stakeholders including the Information Commissioner’s Office (ICO), Equality and Human Rights Commission (EHRC), and Recruitment and Employment Confederation (REC), this guidance addresses the pressing need for clearer frameworks amidst evolving regulatory landscapes, as it identifies potential ethical risks when using AI in recruitment.

Understanding the landscape

The guidance begins by elucidating the multifaceted landscape of AI application in HR and recruitment. It underscores the pervasive influence of AI-enabled technologies throughout the recruitment process, from candidate sourcing to selection, while highlighting inherent risks such as unfair bias and digital exclusion.

Core principles and assurance mechanisms

According to DSIT’s guidance, AI systems should achieve the five regulatory principles identified in the government’s AI white paper:

  1. Safety, security, and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

To operationalise these principles, DSIT delineates a comprehensive suite of assurance mechanisms across various stages of AI adoption:

  • AI governance framework
  • Impact assessment (algorithmic and equality)
  • Data protection impact assessment (DPIA)
  • Bias audit
  • Performance testing
  • Risk assessment
  • Model cards
  • Training and upskilling
  • User feedback
  • Ensuring transparency

Procurement, deployment and operation

The guidance provides detailed recommendations for each phase of AI adoption:

  1. Before procurement: before an organisation goes out to tender, it should identify the kind of AI system it wants to procure and why. It should consider the system’s integration with the organisation’s existing processes, and address accessibility requirements. Assurance mechanisms such as impact assessment, DPIA and AI governance frameworks are recommended. Organisations should consult the ICO’s AI guidance and are advised to seek independent legal advice on whether their use of AI is compliant with the UK’s data protection laws.
  2. During procurement: emphasis is placed on evaluating the accuracy, transparency, and risks associated with AI suppliers. Suppliers may have completed a DPIA for their AI model, and this should be shared with the procuring organisation. Assurance mechanisms include bias audit, performance testing, risk assessment and model cards. Organisations should identify and plan for a range of potential risks. For example, if the AI system scores candidates based on their CVs, and that scoring system is known to disadvantage candidates with disabilities, organisations should plan accordingly: invite candidates to disclose a disability, then arrange for the CVs of those candidates to be reviewed manually rather than passed through the scoring system. Listen to our informative AI, Discrimination and Automated Decision-making podcast here to understand this area further.
  3. Before deployment: pilot testing is recommended to assess employee understanding and model performance, and to accommodate reasonable adjustments. Assurance mechanisms therefore include performance testing, employee training, impact assessment and transparency. In some cases, implementing reasonable adjustments for a candidate (which employers have a legal obligation to make in certain situations) is only possible if the AI system is removed from the recruitment process.
  4. Live operation: continuous testing and monitoring are advised to ensure ongoing efficacy and fairness. Assurance mechanisms include performance testing, bias audits and user feedback. Both employees and applicants who interact with the system should be able to give feedback.
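The CV-scoring workaround described under "during procurement" is, in essence, a triage step: candidates who disclose a disability bypass the automated scorer and go to manual review. The sketch below is a minimal, hypothetical illustration of that routing logic; `score_cv` stands in for a supplier's scoring model (not a real API), and the candidate fields are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    cv_text: str
    disclosed_disability: bool = False  # disclosed via a voluntary question


def score_cv(cv_text: str) -> float:
    # Placeholder for a supplier's AI scoring model (hypothetical).
    return min(len(cv_text) / 100, 1.0)


def triage(candidates):
    """Route candidates: manual review where a disability is disclosed,
    automated scoring for the rest."""
    manual_review, scored = [], []
    for c in candidates:
        if c.disclosed_disability:
            manual_review.append(c)  # bypass the scoring system entirely
        else:
            scored.append((c, score_cv(c.cv_text)))
    return manual_review, scored
```

The key design point the guidance implies is that the bypass happens before scoring, so the known-biased model never sees the affected CVs.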
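One common way to operationalise the bias audits recommended for live operation is to compare selection rates across candidate groups. The sketch below is an illustrative example only, using the well-known "four-fifths" rule of thumb (a ratio of lowest to highest group selection rate below 0.8 flags potential adverse impact); the DSIT guidance does not prescribe this specific test, and the function names are our own.

```python
def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        if was_selected:
            selected[group] = selected.get(group, 0) + 1
    return {g: selected.get(g, 0) / totals[g] for g in totals}


def adverse_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    A value below 0.8 (the 'four-fifths' rule of thumb) flags
    potential bias for further investigation."""
    best = max(rates.values())
    return min(rates.values()) / best if best else 0.0
```

Run over a rolling window of recruitment outcomes, a check like this gives the continuous monitoring signal the guidance asks for, alongside qualitative user feedback.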


DSIT’s guidance marks a significant milestone in fostering responsible AI adoption in the HR and recruitment sector. By equipping stakeholders with actionable insights and robust assurance mechanisms, it paves the way for ethical AI integration while mitigating risks of bias and discrimination. As organisations navigate the evolving AI landscape, adherence to these principles and practices will be instrumental in fostering trust, transparency, and equity in recruitment processes.

If your organisation uses AI in recruitment, you should consider whether this amounts to automated decision-making under the UK GDPR, and whether you must complete a DPIA to ensure your AI system is legally compliant and to demonstrate that you have mitigated any high risks.

If you need help with any of the matters mentioned in this article, speak to one of our experienced data protection team members by contacting us here.


This information is for guidance purposes only and should not be regarded as a substitute for taking professional and legal advice. Please refer to the full General Notices on our website.