Project ExplAIn is a collaboration between the Information Commissioner’s Office (ICO) and The Alan Turing Institute (The Turing) to create practical guidance to assist organisations with explaining artificial
intelligence (AI) decisions to the individuals affected.

As part of this project, the ICO and The Turing conducted public and industry engagement research. This helped us understand different points of view on this complex topic.

This report sets out the methodology and findings of this research. Key
findings are:

  • the relevance of context for the importance, purpose and
    expectations of explanations;
  • the need for improved education and awareness around the use of
    AI for decision-making; and
  • challenges to deploying explainable AI, such as cost and the pace of
    innovation.

The possible interpretations of these findings and their implications for the development of the guidance are discussed, including:

  • the lack of a one-size-fits-all approach to explanations, including the potential for a list of explanation types to support organisations in making appropriate choices;
  • the need for board-level buy-in on explaining AI decisions; and
  • the value of a standardised approach to internal accountability to help assign responsibility for explainable AI decision-systems and foster an organisational culture of responsible innovation.

We acknowledge the limitations of the research. A conclusion summarises the findings and sets out their value to the project and beyond.

The report ends with next steps for the project, including a summary of the planned guidance.

The ICO and The Turing gratefully acknowledge the support and input given to this project by Citizens’ Juries c.i.c., the Jefferson Center, the Greater Manchester Patient Safety Translational Research Centre, techUK, and all the industry representatives and members of the public that took part in our engagement.