
This section is aimed at technical specialists and those in compliance-focused roles, to help them understand the risks associated with security in AI.

Control measure: There has been a thorough assessment of security risks to or in the AI system before its implementation to reduce the likelihood of an attack or breach.

Risk: If a full assessment has not been undertaken, there is a greater likelihood of an attack or breach. This may breach UK GDPR article 32.

Ways to meet our expectations:

  • Include a thorough assessment of the security risks in your DPIA and the mitigating controls to reduce the likelihood and impact of an attack or breach.
  • Implement technical controls to mitigate any security risks in the system design and build phases, where appropriate.
  • Consult with skilled technical experts as part of the risk assessment (eg traditional software engineers, systems administrators, data scientists, statisticians, as well as domain experts).
  • Commission regular external security audits.
  • Assess security risks specific to AI models (eg data poisoning, model inversion, data leakage and model theft).
  • Conduct comprehensive vulnerability assessments of the AI system's architecture, components and dependencies to identify potential security weaknesses and entry points for attackers. This involves scanning for known vulnerabilities, misconfigurations and outdated software versions.
  • Perform penetration testing to simulate real-world attacks and assess the system's resilience to various threat scenarios. This includes conducting controlled attacks to identify potential security flaws, vulnerabilities and exploitation vectors that could be leveraged by attackers.
  • Conduct code reviews and static analysis of the AI system's source code to identify security vulnerabilities, coding errors, and logic flaws that may introduce security risks. This involves reviewing code for common security issues such as injection flaws, authentication bypass and data exposure.
  • Review the AI system's security architecture and design to ensure that security controls are appropriately implemented and aligned with security best practices and industry standards. This includes assessing the effectiveness of access controls, encryption mechanisms and authentication mechanisms.
  • Implement secure configuration management practices to ensure that the AI system is configured according to security best practices and hardened against potential attacks. This involves applying security configurations, disabling unnecessary services and limiting privileged access.
  • Complete threat modelling exercises to identify potential threats and attack vectors that could compromise the security of the AI system. This involves analysing system components, information flows and trust boundaries to identify potential security risks and prioritise mitigation efforts (a minimal risk-register sketch follows this list).
  • Consider the impact on the security of any connected systems if you need to integrate existing systems, and put appropriate controls in place as part of the design and build phases.
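
As a minimal, illustrative sketch of how the outputs of such a threat modelling exercise could be recorded, the Python snippet below scores hypothetical AI-specific risks (eg data poisoning, model inversion) by likelihood and impact so they can be prioritised in a DPIA. The field names, scoring scale and example entries are assumptions, not a prescribed format.

# Minimal sketch of a risk register entry for AI-specific security risks.
# Field names, the 1-5 scoring scale and the example risks are illustrative
# assumptions; align them with your own DPIA and risk management templates.
from dataclasses import dataclass, field


@dataclass
class RiskEntry:
    threat: str                       # eg "data poisoning", "model inversion"
    component: str                    # affected component or trust boundary
    likelihood: int                   # 1 (rare) to 5 (almost certain)
    impact: int                       # 1 (negligible) to 5 (severe)
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood x impact score, used only for prioritisation.
        return self.likelihood * self.impact


register = [
    RiskEntry("data poisoning", "training pipeline", 3, 4,
              ["validate and version training data", "restrict write access"]),
    RiskEntry("model inversion", "prediction API", 2, 5,
              ["rate limiting", "suppress confidence scores in outputs"]),
]

# Review the highest-scoring risks first and record the outcome in the DPIA.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.threat}: score {risk.score}, mitigations {risk.mitigations}")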

Options to consider:

  • Run ‘bug bounty’ programmes to identify vulnerabilities and possible personal data breaches.
  • Deliver security awareness training to staff involved in developing, deploying, and operating the AI system to educate them about security best practices, policies, and procedures. 

 

Control measure: Security measures are in place to prevent privacy attacks on AI models through model inversion, membership inference or adversarial examples.

Risk: In a model inversion attack, if attackers already have access to some personal information belonging to specific people included in the training data, there is a risk they can infer further personal information about those same people by observing the AI inputs and outputs. What the information attackers can learn goes beyond generic inferences about people with similar characteristics.

Membership inference does not go as far; however, there is a risk that malicious actors could work out whether a given person was present in the training data of an AI model.

Ways to meet our expectations:

  • If processing biometric data (eg facial images), consider how easily an attacker could probe the model and reconstruct the image. Also consider whether it is necessary to provide 'confidence' information to the end user (as this could be used to exploit the system).
  • Check the system for possible attacks in which a large series of inputs, information or queries is entered into the system by a single source with the aim of identifying or extracting personal information (eg by monitoring the queries people make through the API).
  • Implement measures to prevent the unauthorised extraction of information from either the main system or training datasets.
  • Implement measures to reduce the number of queries a particular person can perform in a given time limit (rate limiting); a minimal sketch follows this list.
  • Apply different measures for black box and white box attacks. (Note: white box attacks are those where the attacker has complete access to the model itself and can inspect the underlying code, whereas in black box attacks the attacker can only query the model and observe relationships between inputs and outputs.)
  • Implement measures to ensure that AI models are not vulnerable to privacy attacks through 'overfitting' (ie where the model pays too much attention to the details of the training data, memorising particular examples rather than learning general patterns).
  • Implement secure model training techniques to protect against privacy attacks during the training phase. 
  • Ensure secure deployment of AI models in production environments by implementing secure coding practices. This helps prevent the exploitation of vulnerabilities in deployed models and mitigates the risk of privacy attacks in operational settings.
  • Restrict access to the underlying code and properties of the system or model.
  • Closely monitor access to the model or training data by third parties and restrict access as necessary.
  • Assess the trade-off between explainability of the model and the risk of a security breach (ie the more explainable the model, the greater the risk of model inversion and membership inference attacks).
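
The following is a minimal sketch of rate limiting on model queries, as referenced above. It uses a simple sliding-window counter per caller; the query limit, window length and caller identifier are illustrative assumptions, and a production service would usually enforce this at the API gateway.

# Minimal sketch of per-caller rate limiting for model queries. The query
# limit, window length and caller identifier are illustrative assumptions.
import time
from collections import defaultdict


class QueryRateLimiter:
    def __init__(self, max_queries: int, window_seconds: float):
        self.max_queries = max_queries
        self.window_seconds = window_seconds
        self._history = defaultdict(list)   # caller id -> query timestamps

    def allow(self, caller_id: str) -> bool:
        now = time.monotonic()
        window_start = now - self.window_seconds
        # Keep only the timestamps that fall inside the current window.
        self._history[caller_id] = [t for t in self._history[caller_id]
                                    if t > window_start]
        if len(self._history[caller_id]) >= self.max_queries:
            return False                    # refuse the query and log the event
        self._history[caller_id].append(now)
        return True


limiter = QueryRateLimiter(max_queries=5, window_seconds=60)
for i in range(7):
    print(i, limiter.allow("api-key-123"))  # the last two calls are refused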

Options to consider:

  • Stay informed about emerging privacy threats and vulnerabilities affecting AI models through threat intelligence sources, research publications, and community forums. 
  • Share insights and best practices with relevant parties to collectively address privacy risks.

 

Control measure: There is ongoing monitoring of the AI system for vulnerabilities and regular testing, assessment and evaluation of information security measures (eg through techniques such as penetration testing). Security fixes are applied where appropriate.

Risk: The infrastructure and architecture of AI systems increases the likelihood of unauthorised access, alteration or destruction of personal information. Without regular testing of all security measures, there is no assurance that they remain effective in preventing a security incident or breach.

Ways to meet our expectations:

  • If you make changes to the software stack (and possibly hardware), review whether there are any new security risks.
  • Document technical security controls within system operating procedures.
  • Subject the software to a security review where one or more people view and read parts of its source code. At least one of the reviewers must not be the author of the code.
  • Implement appropriate system vulnerability monitoring and testing tools or software.
  • Log the outputs of system vulnerability monitoring and conduct proactive analysis on any anomalies.
  • Ensure there is a solid patching and updating process in place so that available security fixes are applied in a timely manner; a minimal version-checking sketch follows this list.
  • Undertake independent internal reviews of the information security management system, including internal audits and internal IT health checks (ITHC). 
  • Commission external technical compliance reviews of key systems, including vulnerability assessments, IT health checks and penetration testing.
  • Capture all issues and risks identified as part of any internal or external testing on an action plan and risk register and mitigate or treat, as appropriate.
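
The sketch below illustrates one way a patching process might flag components running below a minimum patched version, as referenced above. The component names and version table are hypothetical; in practice they would be driven by the security advisories for the software you actually use.

# Minimal sketch of flagging components that are running below a minimum
# patched version. The component names and version table are hypothetical.
MINIMUM_PATCHED = {
    "model-server": (2, 4, 1),    # lowest version containing the security fix
    "feature-store": (1, 9, 0),
}

DEPLOYED = {
    "model-server": (2, 3, 0),
    "feature-store": (1, 9, 2),
}


def find_outdated(deployed, minimum):
    """Return components whose deployed version is below the patched minimum."""
    return {name: (version, minimum[name])
            for name, version in deployed.items()
            if name in minimum and version < minimum[name]}


for name, (current, required) in find_outdated(DEPLOYED, MINIMUM_PATCHED).items():
    print(f"{name}: running {current}, security fixes require {required} or later")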

Options to consider:

  • Subscribe to security advisories to receive alerts of vulnerabilities.
  • Use security advisory services.

 

Control measure: The AI development environment is separated from the rest of the IT network and infrastructure. There is evidence that the separation is adhered to.

Risk: If AI development is not undertaken in a separate environment from the main network, there is a risk to the security and integrity of the main network.

Ways to meet our expectations:

  • Detail in policy how the AI development environment is segregated from the main IT network.
  • Include separation plans in the system design documents.
  • Keep the AI system in a suitably secure environment.
  • Isolate the AI development environment from the production network and other critical infrastructure by network segmentation, physically or logically. This involves creating separate network zones, subnets or VLANs for development activities, with restricted access controls and firewall rules to prevent unauthorised communication between environments (a minimal segregation-verification sketch follows this list).
  • Create isolated development environments that are independent of the production infrastructure, using virtualisation or containerisation technologies. 
  • Maintain a backup of the AI system in case the main system becomes unavailable. Keep the backup in a separate location.
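
As referenced above, the following is a minimal sketch of how segregation could be evidenced from inside the development environment: attempt connections to production endpoints and confirm they are refused or time out. The host names and ports are placeholders, and such a check complements (rather than replaces) firewall and VLAN configuration reviews.

# Minimal sketch of evidencing network segregation: run from the development
# environment and confirm that production endpoints are unreachable. The host
# names and ports are placeholders for illustration.
import socket

PRODUCTION_ENDPOINTS = [
    ("prod-db.internal.example", 5432),
    ("prod-api.internal.example", 443),
]


def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False                # refused, timed out or name not resolved


for host, port in PRODUCTION_ENDPOINTS:
    if is_reachable(host, port):
        print(f"{host}:{port} reachable - segregation breached, investigate")
    else:
        print(f"{host}:{port} unreachable - segregation holding")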

Options to consider:

  • Use secure development tools and environments that are specifically designed for AI development, with built-in security features and controls to prevent common vulnerabilities and exploits. 

 

Control measure: There is active monitoring of network activity to detect suspicious requests and take action as a result.

Risk: Without active monitoring and resulting action, suspicious requests or activity could be missed which could threaten the functioning and effectiveness of the system.

Ways to meet our expectations:

  • Undertake active monitoring of API requests for suspicious activity; a minimal log-analysis sketch follows this list.
  • Log, investigate and escalate all issues detected, where necessary.
  • Deploy external and internal firewalls and intrusion detection systems to strengthen the security of information in your networks and systems and prevent unauthorised access or attack (eg denial of service (DoS) attacks).
  • Monitor network traffic for unusual or malicious incoming or outgoing activity.
  • Have a response plan ready to manage an attack, minimising disruption to legitimate service users.
  • Have a robust supply chain risk management programme in place and apply a process for monitoring, managing, and reviewing systems, processes and access throughout your supply chain.
  • Reduce the risk of digital supply chain attacks by limiting the use of commonly used third-party libraries to those needed to enable a function in the AI application.
  • Maintain an awareness of possible threats and act swiftly to implement corrective measures.
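
The sketch below shows, in minimal form, how API access logs might be analysed to flag sources generating an unusually high volume of requests, as referenced above. The log format, source identifiers and alerting threshold are illustrative assumptions; real monitoring would run continuously against your logging pipeline.

# Minimal sketch of flagging suspicious sources from an API access log. The
# log format (timestamp, source, endpoint) and the threshold are assumptions.
from collections import Counter

access_log = [
    ("2024-01-01T10:00:01", "10.0.0.5", "/predict"),
    ("2024-01-01T10:00:02", "10.0.0.5", "/predict"),
    ("2024-01-01T10:00:02", "10.0.0.9", "/predict"),
    ("2024-01-01T10:00:03", "10.0.0.5", "/predict"),
]

REQUESTS_PER_SOURCE_THRESHOLD = 3   # assumed threshold for raising an alert

counts = Counter(source for _, source, _ in access_log)
for source, count in counts.items():
    if count >= REQUESTS_PER_SOURCE_THRESHOLD:
        # In practice: log the alert, investigate and escalate if necessary.
        print(f"ALERT: {source} made {count} requests in the sampled window")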

Options to consider:

  • Use appropriate state-of-the-art hardware to help classify information before traffic reaches the server or network.

 

Control measure: When collecting personal information, there are effective measures in place to ensure the information gathered is secured at the point of collection and in transit and to mitigate any security and integrity risks associated with the information gathering.

Risk: Without measures to protect the information collected, the outputs of the AI system may be inaccurate or unusable. If effective security is not in place, information collection avenues may become a site of attack and result in a security breach. This may breach UK GDPR articles 5(1)(d) and (f).

Ways to meet our expectations:

  • Encrypt information across networks, where required.
  • Encrypt information in storage (at rest) in line with risk; a minimal encryption sketch follows this list.
  • Implement measures to secure information collection sites or web forms from malicious attacks or corruption (eg DoS attacks).
  • Undertake information accuracy and integrity testing on personal information sourced or collected indirectly from third parties as part of the build and testing phases of the system development.
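
As referenced above, the following is a minimal sketch of encrypting collected information at rest using symmetric encryption from the Python cryptography package (Fernet). Key generation and storage are deliberately simplified; in practice keys must be managed securely (eg in a key management service), and protection in transit (eg TLS) is handled separately.

# Minimal sketch of encrypting collected information at rest with symmetric
# encryption (Fernet, from the cryptography package). Key handling is
# deliberately simplified for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                  # store securely, never alongside data
fernet = Fernet(key)

record = b"name=J Smith;dob=1990-01-01"      # illustrative personal data record
ciphertext = fernet.encrypt(record)          # persist only the ciphertext

# Decrypt only when there is a legitimate need to process the record.
assert fernet.decrypt(ciphertext) == record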

Options to consider:

  • Use a trusted and verified encryption algorithm and keep the products you use under regular review, as technology develops over time.
  • Ensure the encryption key size is sufficiently large to defend against an attack over the lifetime of the information, taking into account the state of the art.

 

Control measure: There are effective mechanisms in place to prevent unauthorised access (read or write), or inappropriate changes being made to datasets.

Risk: If unauthorised or inappropriate changes are made to personal information, there is a risk to the effectiveness of the outputs of the AI system. This may breach UK GDPR articles 5(1)(a) and (d).

Ways to meet our expectations:

  • Encrypt information across networks, where required.
  • Grant access to the AI system only if there is a legitimate need. 
  • Limit access to personal information to authorised staff only.
  • Implement a formal access provisioning process to assign access rights to staff.
  • Restrict and control the allocation and use of privileged access rights.
  • Review user access rights at regular intervals.
  • Remove access rights in a timely fashion when staff leave your organisation.
  • Adjust access rights when there is a change of assignment or role. 

Options to consider:

  • Make people accountable for safeguarding their own authentication information.
  • Create access level profiles based on job roles to ensure you grant access rights consistently; a minimal sketch follows this list.
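
The sketch below is a minimal illustration of access level profiles based on job roles, with access granted only where a user's role profile includes the required permission. The roles, permissions and user assignments are assumptions for illustration only.

# Minimal sketch of access level profiles based on job roles. The roles,
# permissions and user assignments are illustrative; map them to your own
# access provisioning process.
ROLE_PROFILES = {
    "data_scientist":  {"read_training_data", "run_experiments"},
    "ml_engineer":     {"read_training_data", "deploy_model"},
    "support_analyst": {"view_predictions"},
}

USER_ROLES = {"alice": "data_scientist", "bob": "support_analyst"}


def is_authorised(user: str, permission: str) -> bool:
    """Grant access only where the user's role profile includes the permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PROFILES.get(role, set())


print(is_authorised("alice", "read_training_data"))   # True
print(is_authorised("bob", "deploy_model"))            # False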

 

Control measure: There are business continuity and disaster recovery plans in place.

Risk: Failure to effectively implement business continuity and disaster recovery may result in loss of access to personal information and the risk that it may not be processed in compliance with the law, resulting in regulatory action or reputational damage. This may breach UK GDPR article 5(1)(f).

Ways to meet our expectations:

  • Allocate responsibility for assessing, managing and reporting on business continuity (BC) and disaster recovery (DR) risks in a structured hierarchy.
  • Take proactive steps to identify, record and manage risks to BC and DR.
  • Put measures in place to safeguard against physical and environmental disruption.
  • Determine the requirements for information security (IS) and IS management in the event of a disaster (ie information continues to remain secure, by default if necessary).
  • Put in place a documented BC and DR policy and procedures to manage high impact incidents.
  • Deliver specialised training for the Incident and Emergency Response team(s).
  • Put in place provisions for a temporary physical space in the event of loss of access to the primary site. 
  • Implement a pre-determined restoration strategy appropriate to the importance of the system and information.
  • Back up key systems, applications and information to protect against loss of personal information; a minimal backup-verification sketch follows this list.
  • Build BC and DR arrangements into all third-party relationships.
  • Analyse and report on BC and DR level events and near misses and their resolutions.
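
As referenced above, the following is a minimal sketch of backing up key information with an integrity check, so that the restoration strategy can verify a backup is intact. The file paths are placeholders, and real backups should also be encrypted, access controlled and kept in a separate location.

# Minimal sketch of backing up a key file with an integrity check so the
# restoration strategy can verify the copy. The paths are placeholders.
import hashlib
import shutil
from pathlib import Path

SOURCE = Path("data/ai_system/records.db")   # placeholder source file
BACKUP = Path("backups/records.db.bak")      # placeholder backup location


def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


BACKUP.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(SOURCE, BACKUP)

# Verify the backup matches the source before recording it as successful.
if sha256(SOURCE) == sha256(BACKUP):
    print("backup verified")
else:
    print("backup corrupted - investigate and re-run")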

Options to consider:

  • Include general BC and DR awareness and escalation training in your data protection training programme.