
In detail

Why is this important?

You must determine whether your content moderation involves solely automated decisions that have legal or similarly significant effects on people. You should assess this for each stage of your content moderation workflow. This is because Article 22 of the UK GDPR restricts when you can carry out this type of processing.

When does content moderation involve solely automated decision-making?

Content moderation systems can extensively use automation to support content analysis and moderation actions.

In most cases, this also means processing people’s personal information because the content you are analysing is linked to a particular user’s account (see the section on ‘What personal data does content moderation involve?’).

This can involve systems making solely automated decisions. “Solely automated” decisions are those taken without any meaningful human involvement. For example, a system that automatically decides whether a piece of content breaches your content policies and what type of moderation action follows.

This is particularly likely if you use an AI-based content moderation tool to classify and take action on content without a human being involved in those decisions.

Example

A service deploys an AI-based content classification system that automatically removes all content that scores above a certain confidence score in a particular category of prohibited content. For example, the system removes all content classified as ‘violence’ that has a confidence score of X% or greater.

The system decides whether the content meets this classification, and where it does, removes the content.

As there is no human involvement, this is solely automated.
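
A minimal sketch of how such a decision might look in code is set out below. The names are hypothetical and the threshold value is illustrative (standing in for the ‘X%’ in the example); the point is that the classification score alone triggers the removal, with no human reviewing the individual decision.

```python
# Minimal sketch (hypothetical names) of a solely automated removal decision.
# The classifier output and threshold are illustrative; the threshold stands
# in for the "X%" in the example above.

VIOLENCE_REMOVAL_THRESHOLD = 0.90


def moderate(content_id: str, classifier_scores: dict[str, float]) -> str:
    """Apply a moderation action based only on the classifier's output."""
    if classifier_scores.get("violence", 0.0) >= VIOLENCE_REMOVAL_THRESHOLD:
        remove_content(content_id)  # removal happens with no human review
        return "removed"
    return "no_action"


def remove_content(content_id: str) -> None:
    # Placeholder for the service's own removal mechanism.
    print(f"content {content_id} removed automatically")
```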

However, not all content moderation involves solely automated decision-making. For example, systems that use exact database matching tools may not be making solely automated decisions. These tools compare user-generated content to a database of known material that humans have already determined to be prohibited. Content that matches this database is typically removed from the service.

These types of decisions won’t necessarily be solely automated, because the moderation tool is operating according to specific, pre-defined parameters representing things that humans have already decided on. The tool isn’t making a decision based on an analysis of the likelihood of something happening, unlike with classification tools.

Generally, if you intend a moderation system to go beyond exact matches of pre-defined content, it is more likely to be making solely automated decisions. This includes tools such as perceptual hash matching or machine learning classifiers, which analyse additional information and make their own predictions based on context and circumstances.

Example

A content moderation tool detects and removes links to known child sexual abuse material (CSAM).

This involves exact matching against a pre-defined list of URLs where CSAM is present. Humans have determined that these links contain such material and have added them to the database.

The decision about the nature of the content and the action to remove it is taken before the system operates; the system only functions according to these parameters. In this sense, the system is not making decisions, even though it’s operating automatically.

Since this is not solely automated decision-making, it does not fall under Article 22. However, the service using this tool needs to make sure it complies with all the other requirements of data protection law set out in this guidance.
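
A rough illustration of why a tool like the one in the example above is operating within pre-defined parameters is sketched below. The data and names are hypothetical, not a description of any real matching service: the tool simply checks submitted links against a fixed list of URLs that humans have already judged. A predictive tool, by contrast, would compute a similarity score or classification probability for material it has never seen and compare it to a threshold.

```python
# Minimal sketch (hypothetical data) of exact matching against a pre-defined
# list of URLs that humans have already determined to be prohibited.

KNOWN_PROHIBITED_URLS = {
    "https://example.invalid/known-prohibited-page",
}


def matches_known_prohibited_url(url: str) -> bool:
    """The tool only applies a decision humans have already made:
    is this URL identical to one on the pre-defined list?"""
    return url.strip() in KNOWN_PROHIBITED_URLS

# By contrast, a perceptual hash or machine learning classifier would make
# its own prediction about previously unseen content (for example, a
# similarity score compared against a threshold), which is more likely to
# count as a solely automated decision.
```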

When do solely automated content moderation decisions have a legal or similarly significant effect?

A ‘legal effect’ is something that affects someone’s legal status or their legal rights. A ‘similarly significant effect’ is something that has an equivalent impact on someone’s circumstances, behaviour or choices.  

Examples of legal and similarly significant effects include decisions that:

  • affect someone’s financial circumstances; or
  • lead to someone being excluded or discriminated against.

The impact of solely automated content moderation decisions can depend on the person, the service, and how that person uses that service. Understanding the full context in which automated decisions take place will help you identify whether Article 22 applies.

You must determine whether solely automated decisions taken in your content moderation systems are going to have a legal or similarly significant effect. If they do, you need to consider the Article 22 exceptions (see next section). See our guidance on automated decision-making and profiling for more information about what types of decision have a legal or similarly significant effect.

Example

A video sharing service uses a solely automated moderation system that results in a user’s video being removed from the service.

The moderation action is taken as a result of a solely automated analysis that classifies the video as violating the service’s content policies.

The service is making a solely automated decision about that particular user based on analysis of their personal information.

The user is a content creator and revenue from video content is their primary source of income. Removal of the video has a significant impact on their income.

This is a solely automated decision based on the user’s personal information that has a legal or similarly significant effect on the user.

The service has a mechanism to identify which of its solely automated content moderation decisions have a legal or similarly significant effect on its users. Therefore it can determine which decisions Article 22 applies to.

The service identifies a relevant Article 22 exception for these decisions. It also implements the required safeguards and provides users with the required information about the decision-making (see next section for more information).


What do we need to do when Article 22 applies to our content moderation decision-making?

Consider what exception applies

Article 22 means that you must only take solely automated decisions that have legal or similarly significant effects if they are:

  • authorised by domestic law;
  • necessary for a contract; or
  • based on a person’s explicit consent.

Where the decision is authorised by law

This exception applies where domestic law (including under the OSA and accompanying codes of practice) authorises solely automated decision-making with legal or similarly significant effects, but only where the law contains suitable measures to safeguard a user’s rights, freedoms and legitimate interests.

It is your responsibility to determine whether this exception applies.

You should document and be able to justify which part of the legislation authorises your use of solely automated decision-making.

If you're carrying out solely automated decision-making under this exception, you must also comply with the requirements of Section 14 of the DPA 2018. This means you must:

  • tell people that you've made the decision as soon as reasonably practicable; and
  • be prepared for any request they may make for you to reconsider the decision, or take a new one that's not solely automated.

If someone does request that you reconsider, you must also:

  • consider the request and any other relevant information the person provides;
  • comply with the request; and
  • inform the person, in writing, of the steps you've taken and the outcome.

Where the decision is necessary for a contract

This exception may apply if you’re carrying out solely automated decision-making that’s necessary to perform the contract between you and your users. For example, if you are using solely automated decision-making to enforce the terms of service that your users sign up to.

You must ensure that your processing is necessary for the performance of the contract. This doesn’t mean it must be absolutely essential, but it must be more than just useful.

Where the decision is based on someone’s explicit consent

This exception applies if you have explicit consent from someone to carry out solely automated decision-making based on their personal information.

It is unlikely that the explicit consent exception applies to content moderation, because consent is unlikely to be freely given. In addition, it may be impractical for you to gather explicit consent from users.

Provide users with transparency about decisions

If your content moderation involves solely automated decision-making with legal or similarly significant effects, then you must proactively tell your users about this. You must:

  • say that you’re making these types of decisions;
  • give them meaningful information about the logic involved in any decisions your system makes; and
  • tell them about the significance and envisaged consequences the decisions may have.

For example, you could include this information in your privacy policy or terms of service.

You must also provide this information to any user who makes a subject access request (SAR) to you.

The OSA requires regulated services to set out in their terms of service if they are using ‘proactive technology’ to comply with their online safety duties. Services are also required to explain the kind of proactive technology they use, when they use it, and how it works. Complying with this duty may help you provide the transparency that the UK GDPR requires. However, you must still ensure that you provide the transparency that data protection law requires.

Implement appropriate safeguards

If you’re relying on the contract or explicit consent exceptions, you must implement appropriate safeguards to protect people’s rights, freedoms and legitimate interests, including enabling people to:

  • obtain human intervention;
  • express their point of view; and
  • contest the decision.

You should have an appeals process for content moderation decisions that users can easily use, understand and find on your service.
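
One possible shape for such a process is sketched below, under assumptions of our own (hypothetical names; not a prescribed design): a contested solely automated decision is placed in a queue for human review, together with the user’s point of view, so that a person can uphold or overturn it.

```python
# Minimal sketch (hypothetical names, not a prescribed design) of routing a
# contested solely automated decision to human review, recording the user's
# point of view alongside it.

from dataclasses import dataclass


@dataclass
class Appeal:
    content_id: str
    original_decision: str   # e.g. "removed"
    user_statement: str      # the user's point of view
    outcome: str = "pending" # set by a human reviewer


appeal_queue: list[Appeal] = []


def submit_appeal(content_id: str, original_decision: str, user_statement: str) -> Appeal:
    """User contests an automated decision; it now awaits human intervention."""
    appeal = Appeal(content_id, original_decision, user_statement)
    appeal_queue.append(appeal)
    return appeal


def review_appeal(appeal: Appeal, reviewer_decision: str) -> None:
    """A human reviewer upholds or overturns the original automated decision."""
    appeal.outcome = reviewer_decision  # e.g. "upheld" or "overturned"
```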

Under the OSA, all user-to-user services have a duty to operate complaints processes, including for users who have generated, uploaded or shared content on the service. Among other things, they should be allowed to complain if their content is taken down (or if they are given a warning, suspended or banned from using the service) on the basis that their content is illegal. There are also complaints obligations for services likely to be accessed by children and category 1 services.

Where you are complying with your OSA complaints duties, these processes may help you provide the safeguards that Article 22 of the UK GDPR requires, in particular by ensuring your users can contest a content moderation decision. However, you must still implement the safeguards that data protection law requires.

What about special category information?

You must also consider whether your solely automated decision-making is likely to involve special category information.

You must not base your decisions on special category information unless you:

  • have explicit consent; or
  • can meet the substantial public interest condition in Article 9.

In addition, you must implement safeguards to protect users’ rights, freedoms and legitimate interests.

As noted above, you are unlikely to seek explicit consent for your content moderation processing. This means that if you intend to process special category information, you must consider the substantial public interest condition. (See the section on ‘What if our content moderation involves special category information?’ for more information.)

What if Article 22 does not apply?

If Article 22 doesn’t apply to your content moderation processing, you must still comply with data protection law and ensure that users can exercise their rights.

You could tell people about any automated decision-making your content moderation involves, even if it has meaningful human involvement.

You should tell people what information you’re using and where it came from. This helps you be more transparent, particularly if your processing won’t necessarily be obvious to people.