The UK’s independent authority set up to uphold information rights in the public interest, promoting openness by public bodies and data privacy for individuals.

How can we make sure that we meet the ‘best interests of the child’ standard?

You need to think about what is appropriate and in the best interests of children visiting or using your service, and make sure that you are not using their data in ways that work against those interests.

You should consider how, in your use of personal data, you can:

  • keep them safe from exploitation risks, including the risks of commercial or sexual exploitation and sexual abuse;
  • protect and support their health and wellbeing;
  • protect and support their physical, psychological and emotional development;
  • protect and support their need to develop their own views and identity;
  • protect and support their right to freedom of association and play;
  • support the needs of children with disabilities in line with your obligations under the relevant equality legislation for England, Scotland, Wales and Northern Ireland;
  • recognise the role of parents in protecting and promoting the best interests of the child and support them in this task; and
  • recognise the evolving capacity of the child to form their own view, and give due weight to that view.

To implement this standard, you need to consider the needs of child users and work out how best to support those needs in the design of your online service when you process their personal data. In doing this, you should consider the age of the user.

How will my company know how old our users are, to meet the ‘age appropriate application’ standard?

How far you need to go in establishing age depends on what you are doing with children’s data and what the impact might be.

The code allows services the flexibility to adopt an approach to age assurance that works for their context. Options available include:

  • Self-declaration.
  • Artificial intelligence.
  • Third-party age verification services.
  • Account holder confirmation.
  • Technical measures.
  • Hard identifiers.

The level of certainty you require depends on the risks associated with your data processing but, generally speaking, the higher the risks, the greater your confidence needs to be.

Don’t forget, another option is to apply the standards in the code to all your users, regardless of age.
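This risk-based approach can be pictured as a simple lookup from an assessed risk level to the assurance methods that provide enough certainty. The risk tiers, method rankings and function below are illustrative assumptions for this sketch only; the code does not prescribe any such mapping:

```python
# Illustrative sketch only: the strength scores and risk tiers below are
# assumptions, not taken from the Children's code.

# Methods ordered roughly from lowest to highest certainty (assumed).
ASSURANCE_STRENGTH = {
    "self_declaration": 1,
    "account_holder_confirmation": 2,
    "technical_measures": 2,
    "ai_estimation": 3,            # depends on dataset quality and algorithms
    "third_party_verification": 4,
    "hard_identifiers": 4,
}

# Assumed minimum strength required for each assessed risk level.
MIN_STRENGTH_FOR_RISK = {"low": 1, "medium": 2, "high": 4}

def acceptable_methods(risk_level: str) -> list:
    """Return the methods that meet the assumed minimum certainty."""
    minimum = MIN_STRENGTH_FOR_RISK[risk_level]
    return sorted(m for m, s in ASSURANCE_STRENGTH.items() if s >= minimum)

print(acceptable_methods("high"))
```

Under these assumptions, a high-risk service would be limited to the highest-certainty methods, while a low-risk one could rely on self-declaration; applying the code's standards to all users sidesteps the need for a high-certainty check entirely.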

Does the standard on ‘detrimental use of data’ mean the ICO is now going to police what content is recommended to young users of social media platforms?

No. Personal data often drives the content that children see, and the ICO is responsible for regulating the use of personal data.

If you are using children’s personal data to automatically recommend content to them based on their past usage or browsing history, then you have a responsibility for the recommendations you make.

Data protection law doesn’t make you responsible for third-party content, but it does make you responsible for the content you serve to children who use your service, based on your use of their personal data. This use of personal data is what the ICO regulates.

Organisations can’t use personal data in ways that are detrimental to children or that go against industry codes of practice. So, you need to keep up to date with relevant advice and recommendations on children’s welfare in the digital context. Relevant advice and codes are likely to include marketing, broadcasting and gaming regulations.

What do we need to do to meet the ‘privacy by default’ standard?

Your default position for each individual should be privacy-enhancing or ‘high privacy’. This means that children’s personal data is only visible or accessible to other users of the service if the child amends their settings to allow this.

This also means that, unless they change the setting, your own use of the children’s personal data is limited to what’s essential for you to provide the core service.

If a user does change their settings, you should generally give them the option to do so permanently or to return to the high privacy defaults when they end the current session. You should not ‘nudge’ them towards taking a lower privacy option.
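As a sketch, a ‘high privacy by default’ settings model might look like the following. The field names, defaults and session behaviour are hypothetical examples, not taken from the code:

```python
# Illustrative sketch of 'high privacy by default' for a child user.
# Field names and defaults are hypothetical, not from the code itself.
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    profile_visible_to_others: bool = False    # high privacy default
    personalised_recommendations: bool = False
    data_shared_with_partners: bool = False

@dataclass
class ChildSession:
    settings: PrivacySettings = field(default_factory=PrivacySettings)
    # If the child lowered a setting only for this session, restore the
    # high privacy defaults when the session ends.
    revert_on_session_end: bool = True

    def end_session(self):
        if self.revert_on_session_end:
            self.settings = PrivacySettings()

session = ChildSession()
session.settings.profile_visible_to_others = True  # child opts in
session.end_session()                              # defaults restored
```

The key design points the sketch illustrates are that every setting starts at its most privacy-protective value, and that a session-only change reverts automatically rather than silently persisting.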

My app relies on geolocation to provide its service. Will the code require me to turn it off?

No. If you have to process any geolocation data in order to provide your core service, you don’t need a privacy setting for this.

You should offer children control over whether and how their personal data is used, whenever you can.

However, any geolocation services that go over and above your core service should be subject to a privacy setting. For example, enhanced mapping services that make recommendations for places to visit based on location would need a privacy setting.
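The split between core and enhanced geolocation use could be sketched as follows. The function names and behaviours are illustrative assumptions, not an implementation prescribed by the code:

```python
# Illustrative sketch: geolocation needed for the core service runs
# without a privacy setting, while enhanced features are gated behind
# an off-by-default setting. All names here are hypothetical.

def deliver_core_service(lat: float, lon: float) -> str:
    return f"map centred on ({lat}, {lon})"   # placeholder core behaviour

def recommend_places(lat: float, lon: float) -> list:
    return ["hypothetical place to visit"]    # placeholder enhancement

def handle_location(lat: float, lon: float, enhanced_opt_in: bool) -> dict:
    result = {"core_service": deliver_core_service(lat, lon)}
    # Enhanced, non-core use (e.g. recommending places to visit) runs
    # only if the privacy setting has been turned on.
    if enhanced_opt_in:
        result["recommendations"] = recommend_places(lat, lon)
    return result

print(handle_location(51.5, -0.1, enhanced_opt_in=False))
```

With the setting off (the default), only the core mapping runs; the recommendation feature never receives the location data.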

What does the code say about age assurance?

Standard 3 of the Children’s code discusses age assurance. It says that online services need to establish the age of each individual user with a level of certainty appropriate to the risks arising from their data processing. If you can’t or choose not to do this, you should apply the code to all your users instead.

Some of the standards, including those concerning privacy settings (default settings, profiling, data sharing and geolocation), require you to know only whether a user is or isn’t a child. This is so you can decide who gets high privacy by default. However, other standards (on transparency, parental controls, nudge techniques and online tools) specifically recommend different design and content for distinct age ranges. This means that you need to know not just if a user is a child, but what age range they fall into.
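Where a standard calls for age-range-specific design, a service needs to map an established age onto the code’s developmental age bands. The bands below follow the groupings in the code; the function and labels are an illustrative sketch:

```python
# Sketch: mapping an established age to the age ranges used in the
# Children's code (0-5, 6-9, 10-12, 13-15, 16-17). The function and
# labels are illustrative, not part of the code itself.

AGE_BANDS = [
    (5,  "0-5: pre-literate and early literacy"),
    (9,  "6-9: core primary school years"),
    (12, "10-12: transition years"),
    (15, "13-15: early teens"),
    (17, "16-17: approaching adulthood"),
]

def age_band(age: int) -> str:
    """Return the code's age range for a user, or 'adult' for 18+."""
    for upper, label in AGE_BANDS:
        if age <= upper:
            return label
    return "adult"

print(age_band(11))
```

A binary child/adult check is enough to decide who gets high privacy by default, but a lookup like this is what the transparency, parental controls, nudge and online tools standards implicitly require.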

How can we establish the age of our users?

The code suggests various ways that services might establish how old a user is. These include:

  • self-declaration;
  • hard identifiers;
  • account holder confirmation;
  • age verification; and
  • artificial intelligence.

It also suggests combining self-declaration with “technical measures”, such as discouraging false declarations or closing down underage accounts.

The code does not provide a comprehensive list of methods to determine age, as these change frequently. We provide further information on establishing the age of users in the Commissioner’s Opinion on age assurance. This Opinion is the Commissioner’s current view of these issues. Our expectations of reasonable methods to use will change over time as age assurance practices and capabilities develop. This is why part of the requirement to keep your service and associated DPIA under review involves monitoring age assurance developments. We are reviewing the Opinion in 2022, as part of the planned overall review of the Children’s code in September 2022.

What level of certainty do different methods provide?

We cannot define precisely what level of certainty different methods of age assurance provide. It is always for controllers to demonstrate how they have assessed what is an appropriate level of certainty for the particular needs of children using their service, as per the Article 5(2) accountability principle of the UK GDPR.

We can say that hard identifiers, such as a passport, provide a high level of certainty, although they also carry a higher risk of discrimination. Self-declaration provides a much lower level of certainty. Combining different methods is likely to increase certainty. We cannot tell you the level of certainty provided by any method that uses artificial intelligence; this depends on the quality of the dataset and the efficacy of the algorithms, and should be considered on a case-by-case basis.

How do we decide what level of certainty is appropriate to the risk?

We do not intend to map certainty levels to risks because every service is different. Carrying out a data protection impact assessment (DPIA), as required by the UK GDPR and the code, would help you identify and assess the risks in your service. We produced a best interests framework to help you identify the risks, which can help you decide on the appropriate level of certainty. However, you should not adopt a method of age assurance if the risks associated with the data processing for that purpose exceed the risks associated with the original processing. That would not be an appropriate level of certainty.

Can we just ask for proof of identity?

Yes, if that is the right approach for your processing. Standard 3 of the code includes “hard identifiers” in the list of possible methods you might use to establish a user’s age. This means asking for proof of identity. However, the code says to do so only if “the risks inherent in your processing really warrant such an approach”. This is because there is a high risk of discrimination and exclusion for children (and indeed some adults) who don’t have access to identity documents. The code also says that “requiring hard identifiers may also have a disproportionate impact on the privacy of adults.” The Commissioner’s Opinion on age assurance in the context of the Children’s code provides further information on the use of hard identifiers.

When can I use third-party age verification?

We recognise that the current age verification ecosystem was designed and is set up to determine whether someone is over 18, not to determine the age of a child. You can use third-party age verification if you want, or are legally required, to prevent children from accessing your service altogether. You can also use it if you intend to offer two versions of your service: one for children, which conforms with the code, and one for adults. However, this is unlikely to be a solution for age-appropriate design elements such as tailored transparency or nudging.

Third-party age verification involves using a third party to determine a user’s identity and then accessing officially held data to confirm their age. It provides a high level of certainty, but there’s a risk of indirectly discriminating against people who lack the necessary documentation or data, such as credit history. Therefore, you need to use age verification in proportion to the identified risks to children.

An advantage of third-party age assurance services is that they can, depending on the approach they take, provide an age assurance decision without the need for additional personal data. You can find more information on the use of third-party age verification in the Commissioner’s Opinion on age assurance.

What do you mean by “artificial intelligence” (AI) methods?

The code specifically refers to using AI to estimate a user’s age by analysing the way they interact with the service. This is a type of profiling. However, any method which uses data to estimate a user’s age is likely to make use of AI. Examples of AI methods include:

  • content analysis;
  • analysis of the language or phrases people use; or
  • the use of biometric characteristics such as facial images.

If you do use AI to estimate a user’s age, you must comply with the UK GDPR and take into account possible risks of unreliability, bias and unfairness. You can find further information on the use of AI as part of age assurance in the Commissioner’s Opinion on age assurance. You can also read more about how to ensure your use of AI is compliant with the UK GDPR in our guidance on AI and data protection and our guidance on automated decision making.

What about using facial images to estimate age?

Using AI to estimate a user’s age from an image of their face may, in principle, be a reasonable way to establish age with a level of certainty appropriate to the risk. However, we recognise that currently much of the work in this area is still at the research and development stage, which means there are few commercially available products. Therefore, if you use this technology you must ensure that it provides an appropriate level of certainty. The processing must also comply with the UK GDPR and, where appropriate, the code.

How can I use profiling to estimate age if the code requires me to turn off profiling for children?

The code states that profiling for the purpose of age assurance is acceptable. It also recognises that profiling can be harmful to children and requires it to be turned off by default, unless it is core to the service or there is a compelling reason for it to be turned on. Any profiling used for the purpose of age assurance must not create risks that are higher than the risks the age assurance is being used to address.

Some services may already be able to estimate a user’s age, with a high level of certainty, from the data they collect about them. For these services, when a user’s age is estimated and assessed to be over 18, the Children’s code does not apply, although these services still need to comply with UK GDPR and the Data Protection Act 2018. Where a user’s age is estimated to be below 18 or where it has not yet been estimated, non-core processing may continue only for the purpose of age estimation. It can also continue where you can demonstrate that it is in the best interests of the child, eg for safeguarding purposes.
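The permissible purposes described above could be sketched as a simple gate on non-core profiling. This is an assumption-laden illustration of the logic in this section, not a compliance mechanism, and all names are hypothetical:

```python
# Illustrative sketch of this section's logic: where a user is under 18
# or their age has not yet been estimated, non-core profiling runs only
# for age estimation, or where the service can demonstrate it is in the
# child's best interests (e.g. safeguarding). Hypothetical names only.
from typing import Optional

def may_profile(estimated_age: Optional[int], purpose: str,
                best_interests_shown: bool = False) -> bool:
    if estimated_age is not None and estimated_age >= 18:
        return True   # code does not apply (UK GDPR and DPA 2018 still do)
    if purpose == "age_estimation":
        return True   # permitted in order to establish age
    return best_interests_shown   # e.g. demonstrated safeguarding purpose

print(may_profile(None, "content_recommendation"))  # False
```

So in this sketch, recommendation profiling is blocked until age is established, while profiling whose sole purpose is age estimation is allowed to proceed.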

How are app developers able to implement age verification when data collection is specified as an option by app stores?

App developers falling under the scope of the code are responsible for ensuring they conform with the age-appropriate application standard. This is independent of any app store requirements for developers. The standard outlines a number of approaches that developers could use for age assurance, including:

  • AI;
  • self-declaration;
  • third-party verification services;
  • account holder confirmation;
  • technical measures; and
  • hard identifiers.

Where app stores or other third parties do provide age assurance, online services can factor this into their overall assessment about how confident they are about the age of child users. You can find more information on the use of age assurance in the context of the Children's code in the Commissioner’s Opinion on age assurance.

Is there further guidance on specific age verification options, including what is likely to be proportionate in specific circumstances?

Yes. The Commissioner’s Opinion on age assurance explains that any use of age assurance, including age verification, must be proportionate to the risks to children posed by the online service. Section two provides an overview of age assurance methods and Section three gives examples of processing that are likely to be high risk. The Opinion also outlines considerations for online services to ensure their use of age assurance is compliant with data protection legislation. We recommend that any service considering the use of age assurance refers to this Opinion for further guidance that supplements standard 3 of the code.

How should services ensure ‘age-appropriate application’ when this can mean different things for different children, particularly vulnerable children?

We expect online services to take a risk-based approach to conform with the code. This includes taking into account the age and developmental needs of child users. The code provides guidance in several areas around how online services could apply this approach in practice:

  • The DPIA standard recommends consulting with parents and children to understand their specific needs.
  • The Detrimental use of data standard clarifies that online services must not use data in ways that demonstrably contravene other regulators’ standards. Here we will consider relevant standards relating to vulnerable children and children's developmental needs from bodies, including:
    • the Children's Commissioner;
    • the Equality and Human Rights Commission;
    • Public Health England; and
    • the National Institute for Health and Care Excellence.

The Commissioner’s Opinion on age assurance provides further information on how we expect organisations to assess the risks to children of various ages and developmental needs. Ultimately, it is for controllers to demonstrate how they have assessed what is appropriate for the particular needs of children, as per the Article 5(2) accountability principle of the UK GDPR. This includes children of varying ages and vulnerabilities. The Commissioner’s Opinion highlights that data controllers must ensure that any use of age assurance does not lead to undue discrimination.