This section is aimed at technical specialists and those in compliance-focused roles. It explains why meaningful human review is important in the AI lifecycle.
Control measure: AI decisions involve meaningful human review and checks, where appropriate, to mitigate the erosion of privacy through selection bias and attempts to spoof controls or circumvent privacy measures. Human reviewers have appropriate knowledge, experience, authority and independence to challenge decisions.
Risk: Without a structured testing process in place, there is a risk that human review will not be undertaken, or will not be completed effectively enough to provide an independent assessment of AI system outputs and of whether the service is being used safely. Human review can be rendered non-meaningful by automation bias or a lack of interpretability. As a result, people may not be able to exercise their right not to be subject to solely automated decision-making with legal or similarly significant effects. This may breach UK GDPR articles 5(1)(a), 13(2)(f), 14(2)(g), 15(1)(h) or 22.
Ways to meet our expectations:
- Undertake regular assessments to determine where a human review is most appropriate and beneficial.
- Document the methodology human reviewers will use when testing the system for statistical accuracy and bias.
- Document the testing process or test plan to outline:
- the criteria and requirements for testing;
- the sampling method and size;
- the target accuracy rates and acceptable tolerance; and
- the checks required to ensure that error rates in data outputs, or statistical errors, remain within acceptable and documented tolerances (a minimal check of this kind is sketched after this list).
- Maintain a log of when AI decisions are overridden by a human reviewer, including the reasons why (a sketch of such a log entry also follows this list).
- Report the results of the testing to senior management and key stakeholders.
- If the human review is outsourced to a third party, check that reviews are done at appropriate stages and as directed.
- Ensure that human reviewers have appropriate qualifications.
- Assign human reviewers a manageable caseload and ensure there is sufficient resource in place for them to give appropriate time to their tasks.
- Provide human reviewers with adequate training.
- Ensure human reviewers are independent and are able to influence senior-level decision making. Reflect the reporting lines both in job descriptions and in the overall organisational framework.
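To make the test-plan checks above concrete, here is a minimal sketch in Python. It assumes AI decisions are held as dictionaries with hypothetical `ai_output` and human-verified `human_label` fields; the sample size, target accuracy and tolerance values are illustrative placeholders, not recommendations.

```python
import random

# Illustrative test-plan parameters (placeholders, not recommendations).
SAMPLE_SIZE = 200        # sampling size from the documented test plan
TARGET_ACCURACY = 0.95   # documented target accuracy rate
TOLERANCE = 0.02         # documented acceptable tolerance

def sample_for_review(decisions: list[dict], size: int = SAMPLE_SIZE) -> list[dict]:
    """Draw a simple random sample of AI decisions for human review."""
    return random.sample(decisions, min(size, len(decisions)))

def within_tolerance(reviewed: list[dict]) -> bool:
    """Check observed accuracy on a reviewed sample against the target.

    Each item is assumed to carry the AI output and the label a human
    reviewer verified ('ai_output' and 'human_label' are hypothetical).
    """
    correct = sum(1 for d in reviewed if d["ai_output"] == d["human_label"])
    return correct / len(reviewed) >= TARGET_ACCURACY - TOLERANCE
```

A failed check would feed the reporting step above, and could trigger the fallback options in the next list.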
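In the same spirit, a minimal sketch of an override log entry, assuming an append-only JSON Lines file; every field name here is hypothetical and would need to map onto your own case records.

```python
import json
from datetime import datetime, timezone

def log_override(log_path: str, decision_id: str, reviewer_id: str,
                 ai_output: str, final_decision: str, reason: str) -> None:
    """Append a record of a human reviewer overriding an AI decision,
    including the reason why, to an append-only JSON Lines log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision_id": decision_id,
        "reviewer_id": reviewer_id,
        "ai_output": ai_output,
        "final_decision": final_decision,
        "reason": reason,  # the reviewer's stated reason for the override
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```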
Options to consider:
- Have a 'fall back' option in case reviewers find an issue that calls the competency of the system into question.
- Agree a 'stand in' time for the alternative option, to allow time for developers to rectify the issues with the AI system.
- Be flexible, so that in case of service failure, there is the option to move to a hybrid or manual model (eg automated processing first, then manual check).
- Conduct manual reviews if accuracy levels for auto-processed decisions fall below the acceptable tolerances that have been set (see the routing sketch after this list).
- Create standardised review procedures for human reviewers to follow when evaluating AI system outputs. This includes clear guidelines, checklists, and protocols to ensure consistent and accurate assessments of AI-generated decisions.
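The hybrid fall-back described above could be sketched as follows, assuming a monitoring process supplies a current accuracy figure; the threshold value and the `ai_decide` callable are illustrative assumptions.

```python
from typing import Callable

# Illustrative threshold: in practice, the documented tolerance from the test plan.
MIN_ACCEPTABLE_ACCURACY = 0.93

def route_decision(case: dict,
                   ai_decide: Callable[[dict], str],
                   manual_review_queue: list[dict],
                   current_accuracy: float) -> str | None:
    """Hybrid model: automated processing first, manual review as a fall back.

    If monitored accuracy has fallen below the acceptable level, the case is
    queued for manual ('stand in') handling instead of an automated decision.
    """
    if current_accuracy < MIN_ACCEPTABLE_ACCURACY:
        manual_review_queue.append(case)
        return None  # no automated decision is issued
    return ai_decide(case)
```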
Useful links
ICO AI and data protection guidance: Guidance on AI and data protection | ICO
Control measure: There are documented controls in place to prevent human review practices from introducing deficiencies or errors into the AI system's future decision making.
Risk: Inappropriate actions by human reviewers may result in the corruption of the AI system, and in inaccuracies or errors being introduced which would not have existed otherwise.
Ways to meet our expectations:
- Maintain separate provider and deployer human review processes.
- Log human review decisions, including actions taken by a human reviewer to challenge or override automated decision making and the considerations behind their final decision.
- Ensure the process is appropriately designed, such as by testing on a sample of decisions.
- Implement processes to support re-review or overturning of decisions (eg if there is one rogue human reviewer; see the sketch after this list).
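As one illustrative way to spot the 'rogue reviewer' case above (not a prescribed method): scan the logged review decisions for reviewers whose override rate is a statistical outlier against their peers, and queue those reviewers' decisions for re-review. The field names and z-score threshold are assumptions.

```python
from collections import defaultdict
from statistics import mean, pstdev

def flag_for_re_review(review_log: list[dict], z_threshold: float = 3.0) -> set[str]:
    """Return reviewer IDs whose override rate is an outlier versus their peers.

    Each log entry is assumed to carry 'reviewer_id' and a boolean 'overrode'
    (both hypothetical field names). Decisions by flagged reviewers can then
    be queued for re-review.
    """
    overrides: dict[str, list[int]] = defaultdict(list)
    for entry in review_log:
        overrides[entry["reviewer_id"]].append(1 if entry["overrode"] else 0)
    if not overrides:
        return set()

    rates = {rid: mean(vals) for rid, vals in overrides.items()}
    mu, sigma = mean(rates.values()), pstdev(rates.values())
    if sigma == 0:
        return set()  # no variation between reviewers, nothing to flag
    return {rid for rid, rate in rates.items() if abs(rate - mu) / sigma > z_threshold}
```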
Options to consider:
- Deploy error detection and correction tools to identify and rectify deficiencies or errors introduced by human reviewers during the decision-making process (one possible approach is sketched after this list).
- Conduct mystery shopping exercises, where deliberately misleading information is provided, to test whether the human reviewer disagrees with the AI. This helps ensure human reviewer input is meaningful.
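One way to implement the error detection point above, sketched under assumptions: quarantine reviewer overrides until a second, independent reviewer confirms them, so that a single reviewer's mistakes cannot feed back into the AI system's future decision making. The `second_opinion` callable is a hypothetical hook, not a real library API.

```python
from typing import Callable

def gate_feedback(override: dict,
                  second_opinion: Callable[[dict], bool],
                  training_queue: list[dict],
                  quarantine: list[dict]) -> None:
    """Quarantine a reviewer override until a second reviewer confirms it.

    Only confirmed overrides join the queue of examples that may influence
    future model behaviour; unconfirmed ones are held for investigation, so
    one reviewer's errors cannot silently corrupt future decision making.
    """
    if second_opinion(override):  # hypothetical second-reviewer hook
        training_queue.append(override)
    else:
        quarantine.append(override)
```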