On November 6, 2024, the UK Information Commissioner’s Office (ICO) released its AI tools in recruitment audit outcomes report (“Report”). This Report documents the ICO’s findings from a series of consensual audit engagements conducted with AI tool developers and providers. The goal of this process was to assess compliance with data protection law, identify any risks or room for improvement, and provide recommendations for AI providers and recruiters. The audits covered sourcing, screening, and selection processes in recruitment, but did not include AI tools used to process biometric data, or generative AI. This work follows the publication of the Responsible AI in Recruitment guide by the Department for Science, Innovation, and Technology (DSIT) in March 2024.
Background
The ICO conducted a series of voluntary audits from August 2023 to May 2024. During the audits, the ICO made 296 recommendations, all of which were accepted or partially accepted by the organisations involved. These recommendations address areas such as:
- Fair processing of personal data,
- Data minimisation and lawful retention of data, and
- Transparency in explaining AI logic.
Areas for Improvement
Based on its findings during the audits, the ICO identified several areas for improvement for both AI recruiters and AI providers. The key areas for improvement across both were:
- Data minimisation and purpose limitation: The ICO expressed concern that not all of the organisations audited limited data collection to that which is strictly necessary for AI functionality, nor did they provide clearly defined and transparent data retention schedules.
- Quality of training data: While the ICO found that most AI providers were aware of the risk of bias in AI caused by “imbalances in training and testing data”, the ICO said “not all had used sampling techniques to ensure datasets were diverse and representative”.
- Fairness, accuracy, and bias: The ICO found that the majority of AI providers tested accuracy periodically after launch, helping to ensure high standards. The ICO encouraged providers to adopt formal testing methodologies prior to launch and to conduct regular assessments of fairness, accuracy, and bias throughout the project lifecycle. This could involve, for instance, engaging cognitive and behavioural or psychometric experts to test and review AI logic, scoring and outputs. The ICO noted that accuracy should generally exceed “better than random” thresholds, especially before processing personal information.
- Transparency and explainability: The ICO reiterated the importance of AI developers and recruiters being open and transparent about how they process personal information when using AI to make recruitment decisions or produce outputs. The ICO linked to its “Explaining decisions made with AI” guidance and noted that this information must be clear, detailed, and cover all key rights, including the right of access.
- Human oversight: The ICO recommended that AI providers conduct “robust and meaningful human reviews or quality checks” to address output accuracy or bias issues at an early stage, and maintain records and policies on human reviews.
- Data Protection Impact Assessments (DPIAs): The ICO highlighted the necessity of completing comprehensive DPIAs at all stages of AI development, and especially prior to processing rather than retrospectively.
- Security measures: The ICO found variable implementation of security controls across providers and emphasised the need to implement robust technical and organisational security measures, subject to regular testing.
- Lawful basis for processing: The ICO identified inconsistent application and understanding of the lawful basis for processing personal data, particularly for special category data.
- Controller vs. Processor roles: The ICO emphasised the need for AI providers to accurately identify whether they are a controller, joint controller, or processor, so that they can comply with their applicable data protection obligations.
Balancing Challenges
The ICO acknowledged that trade-offs must be made between statistical accuracy and data minimisation, between accuracy and explainability, and between transparency and understandability. To address these tensions, the ICO recommended:
- Conducting comprehensive DPIAs to assess and document trade-offs.
- Regularly monitoring and reporting AI accuracy and bias metrics.
- Engaging experts in fairness and ethics to refine AI models.
- Implementing any relevant compensating controls.
- Communicating any limitations to users clearly.
Legal Implications
For companies using AI recruitment tools, key considerations moving forward will include:
- Reviewing Documentation: Ensuring comprehensive records of processing activities, DPIAs, and decision rationales.
- Updating Contracts: Scrutinising controller/processor arrangements and data processing agreements to ensure accuracy and consistent record-keeping.
- Ongoing Monitoring: Establishing regular review mechanisms for testing accuracy, fairness, and reliability.
- Managing Risk: Implementing proper controls for special category data handling.
The Covington team continues to monitor developments in AI regulation and guidance. Please feel free to reach out to a member of the team if you have any questions.
This post was written with the assistance of Erin Lynch, a trainee solicitor in the London Office.