FAQs on the 15 standards of the Children’s code

How can we make sure that we meet the ‘best interests of the child’ standard?

You need to think about what is appropriate and in the best interests of children visiting or using your service, and how you can make sure that you are not using their data in ways that work against those interests.

You should consider how, in your use of personal data, you can:

  • keep them safe from exploitation risks, including the risks of commercial or sexual exploitation and sexual abuse;
  • protect and support their health and wellbeing;
  • protect and support their physical, psychological and emotional development;
  • protect and support their need to develop their own views and identity;
  • protect and support their right to freedom of association and play;
  • support the needs of children with disabilities in line with your obligations under the relevant equality legislation for England, Scotland, Wales and Northern Ireland;
  • recognise the role of parents in protecting and promoting the best interests of the child and support them in this task; and
  • recognise the evolving capacity of the child to form their own view, and give due weight to that view.

To implement this standard, you need to consider the needs of child users and work out how best to support those needs in the design of your online service when you process their personal data. In doing this, you should take the age of the user into account. The ICO’s best interests of the child self-assessment provides tools, templates and guidance to assist you.

How will my company know how old our users are, to meet the ‘age appropriate application’ standard?

How far you need to go in establishing age depends on what you are doing with children’s data and what the impact might be.

The code allows services the flexibility to adopt an approach to age assurance that works for their context. Available options include:

  • self-declaration;
  • artificial intelligence;
  • third-party age verification services;
  • account holder confirmation;
  • technical measures; and
  • hard identifiers.

The level of certainty you require depends on the risks associated with your data processing, but generally speaking the higher the risks the greater your confidence needs to be.

Don’t forget, another option is to apply the standards in the code to all your users, regardless of age.

Does the standard on ‘detrimental use of data’ mean the ICO is now going to police what content is recommended to young users of social media platforms?

No. Personal data often drives the content that children see and the ICO is responsible for regulating the use of personal data.

If you are using children’s personal data to automatically recommend content to them based on their past usage or browsing history, then you have a responsibility for the recommendations you make.

Data protection law doesn’t make you responsible for third-party content, but it does make you responsible for the content you serve to children who use your service, based on your use of their personal data. This use of personal data is what the ICO regulates.

Organisations can’t use personal data in ways that are detrimental to children or that go against industry codes of practice. So, you need to keep up to date with relevant advice and recommendations on children’s welfare in the digital context. Relevant advice and codes are likely to include marketing, broadcasting and gaming regulations.

What do we need to do to meet the ‘privacy by default’ standard?

Your default position for each individual should be privacy-enhancing or ‘high privacy’. This means that children’s personal data is only visible or accessible to other users of the service if the child amends their settings to allow this.

This also means that, unless they change the setting, your own use of the children’s personal data is limited to what’s essential for you to provide the core service.

If a user does change their settings, you should generally give them the option to do so permanently or to return to the high privacy defaults when they end the current session. You should not ‘nudge’ them towards taking a lower privacy option.
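
As an illustration only, the TypeScript sketch below shows one way a service might represent these defaults; the setting names and helper functions are assumptions, not terms from the code.

  // Hypothetical sketch of high-privacy defaults for child users.
  // The setting names and helpers are illustrative, not taken from the code.

  interface UserSettings {
    profileVisibleToOthers: boolean;          // other users can see the child's profile
    activitySharedWithThirdParties: boolean;  // onward sharing of usage data
    personalisedRecommendations: boolean;     // non-essential use of personal data
  }

  // "High privacy": nothing is visible, shared or personalised unless the child changes it.
  const HIGH_PRIVACY_DEFAULTS: UserSettings = {
    profileVisibleToOthers: false,
    activitySharedWithThirdParties: false,
    personalisedRecommendations: false,
  };

  function createChildAccountSettings(): UserSettings {
    // New child accounts always start from the high-privacy baseline.
    return { ...HIGH_PRIVACY_DEFAULTS };
  }

  function endSession(settings: UserSettings, keepChangesPermanently: boolean): UserSettings {
    // If the child chose a temporary change, revert to high privacy when the session ends.
    return keepChangesPermanently ? settings : { ...HIGH_PRIVACY_DEFAULTS };
  }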

My app relies on geolocation to provide its service. Will the code require me to turn it off?

No. If you have to process any geolocation data in order to provide your core service, you don’t need a privacy setting for this.

You should offer children control over whether and how their personal data is used, whenever you can.

However, any geolocation services that go over and above your core service should be subject to a privacy setting. For example, enhanced mapping services that make recommendations for places to visit based on location would need a privacy setting.
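
As a purely illustrative sketch, assuming a mapping-style app, the example below separates core geolocation (no privacy setting needed) from an enhanced, recommendation-style feature that stays off unless the child switches it on; all names are made up for illustration.

  // Hypothetical sketch: core geolocation vs an enhanced, setting-gated feature.
  // The functions and setting names are illustrative assumptions.

  interface GeoSettings {
    enhancedLocationFeatures: boolean; // e.g. nearby-place recommendations
  }

  const CHILD_GEO_DEFAULTS: GeoSettings = {
    enhancedLocationFeatures: false, // off by default for children
  };

  function planRoute(from: [number, number], to: [number, number]): string {
    // Core service: routing needs location data, so no privacy setting is required here.
    return `route from ${from} to ${to}`;
  }

  function recommendNearbyPlaces(position: [number, number], settings: GeoSettings): string[] {
    // Enhanced feature: only runs if the child has switched the setting on.
    if (!settings.enhancedLocationFeatures) {
      return [];
    }
    return [`places near ${position}`]; // placeholder for a real recommendation call
  }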

What does the code say about age assurance?

Standard 3 of the Children’s code discusses age assurance. It says that you need to establish the age of each individual user with a level of certainty appropriate to the risks that arise from your data processing. If you can’t, or choose not to, do this, you should apply the code to all your users instead.

Some of the standards, including those concerning privacy settings (default settings, profiling, data sharing and geolocation), require you to know only whether a user is or isn’t a child. This is so you can decide who gets high privacy by default. However, other standards (on transparency, parental controls, nudge techniques and online tools) specifically recommend different design and content for distinct age ranges. This means that you need to know not just if a user is a child, but what age range they fall into.
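
As a minimal sketch, assuming you have already established an age with appropriate certainty, the example below maps that age onto the developmental age ranges described in the code so that design elements can be tailored; the type and function names are illustrative.

  // Hypothetical sketch: mapping an established age to the code's age ranges
  // so design elements (transparency, parental controls, nudges) can be tailored.

  type AgeRange = '0-5' | '6-9' | '10-12' | '13-15' | '16-17' | 'adult';

  function toAgeRange(age: number): AgeRange {
    if (age <= 5) return '0-5';
    if (age <= 9) return '6-9';
    if (age <= 12) return '10-12';
    if (age <= 15) return '13-15';
    if (age <= 17) return '16-17';
    return 'adult';
  }

  // The privacy-setting standards only need the child/adult distinction:
  const isChild = (age: number): boolean => toAgeRange(age) !== 'adult';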

How can we establish the age of our users?

The code suggests various ways that services might establish how old a user is. These include:

  • self-declaration;
  • hard identifiers;
  • account holder confirmation;
  • age verification; and
  • artificial intelligence.

It also suggests using “technical measures” alongside self-declaration, such as discouraging false declarations or closing down underage accounts.

The code does not provide a comprehensive list of methods to determine age, as these change frequently. We provide further information on establishing the age of users in the Commissioner’s Opinion on age assurance. This Opinion is the Commissioner’s current view of these issues. Our expectations of reasonable methods to use will change over time as age assurance practices and capabilities develop. This is why part of the requirement to keep your service and associated DPIA under review involves monitoring age assurance developments. We are reviewing the Opinion in 2022, as part of the planned overall review of the Children’s code in September 2022.

What level of certainty do different methods provide?

We cannot define precisely what level of certainty different methods of age assurance provide. It is always for controllers to demonstrate how they have assessed what is an appropriate level of certainty for the particular needs of children using their service, as per the Article 5(2) accountability principle of the UK GDPR.

We can say, however, that hard identifiers, such as a passport, provide a high level of certainty, although they also carry a higher risk of discrimination. Self-declaration provides a much lower level of certainty. Combining different methods is likely to increase certainty. We cannot tell you the level of certainty provided by any method which uses artificial intelligence; this depends on the quality of the dataset and the efficacy of the algorithms, and should be considered on a case-by-case basis.
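
The sketch below is purely illustrative of the idea that combining methods can increase certainty; the confidence values and the combination rule (treating signals as independent) are assumptions for illustration, not an ICO-prescribed calculation.

  // Purely illustrative: combining independent age assurance signals.
  // The confidence figures and the combination rule are assumptions, not ICO guidance.

  interface AgeSignal {
    method: string;     // e.g. 'self-declaration', 'account holder confirmation'
    confidence: number; // 0..1, the service's own assessed confidence in the signal
  }

  function combinedConfidence(signals: AgeSignal[]): number {
    // 1 minus the product of (1 - confidence), assuming the signals are independent.
    return 1 - signals.reduce((acc, s) => acc * (1 - s.confidence), 1);
  }

  // Example: self-declaration (0.3) plus account holder confirmation (0.6)
  // gives roughly 0.72 under this assumed model.
  const example = combinedConfidence([
    { method: 'self-declaration', confidence: 0.3 },
    { method: 'account holder confirmation', confidence: 0.6 },
  ]);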

How do we decide what level of certainty is appropriate to the risk?

We do not intend to map certainty levels to risks because every service is different. Carrying out a data protection impact assessment (DPIA), as required by the UK GDPR and the code, would help you identify and assess the risks in your service. We produced a best interests self-assessment to help you identify the risks, which can help you decide on the appropriate level of certainty. However, you should not adopt a method of age assurance if the risks associated with the data processing for that purpose exceed the risks associated with the original processing. That would not be an appropriate level of certainty.

Can we just ask for proof of identity?

Yes, if that is the right approach for your processing. Standard 3 of the code includes “hard identifiers” in the list of possible methods you might use to establish a user’s age. This means asking for proof of identity. However, the code says to do so only if “the risks inherent in your processing really warrant such an approach”. This is because there is a high risk of discrimination and exclusion for children (and indeed some adults) who don’t have access to identity documents. The code also says that “requiring hard identifiers may also have a disproportionate impact on the privacy of adults.” The Commissioner’s Opinion on age assurance in the context of the Children’s code provides further information on the use of hard identifiers.

When can I use third-party age verification?

We recognise that the current age verification ecosystem was designed and is set up to determine whether someone is over 18, not to determine the age of a child. You can use third-party age verification if you want, or are legally required, to prevent children from accessing your service altogether. You can also use it if you intend to offer two versions of your service: one for children, which conforms with the code, and one for adults. However, it is unlikely to be a solution for age-appropriate design elements such as tailored transparency or nudging.

Third-party age verification involves using a third party to determine a user’s identity and then accessing officially held data to confirm their age. It provides a high level of certainty, but there’s a risk of indirectly discriminating against people who lack the necessary documentation or data, such as credit history. Therefore, you need to use age verification in proportion to the identified risks to children.

An advantage of third-party age assurance services is that they can, depending on the approach they take, provide an age assurance decision without the need for additional personal data. You can find more information on the use of third-party age verification in the Commissioner’s Opinion on age assurance.

What do you mean by “artificial intelligence” (AI) methods?

The code specifically refers to using AI to estimate a user’s age by analysing the way they interact with the service. This is a type of profiling. However, any method which uses data to estimate a user’s age is likely to make use of AI. Examples of AI methods include:

  • content analysis;
  • analysis of the language or phrases people use; or
  • the use of biometric characteristics such as facial images.

If you do use AI to estimate a user’s age, you must comply with the UK GDPR and take into account possible risks of unreliability, bias and unfairness. You can find further information on the use of AI as part of age assurance in the Commissioner’s Opinion on age assurance. You can also read more about how to ensure your use of AI is compliant with the UK GDPR in our guidance on AI and data protection and our guidance on automated decision making.
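
As one illustration of taking unreliability into account, the hypothetical sketch below treats a user as a child unless the AI estimate minus its error margin is still 18 or over; the estimator, margin and threshold are assumptions, not requirements of the code.

  // Hypothetical sketch: applying an error margin to an AI age estimate so that
  // borderline users are treated as children. The margin and rule are assumptions.

  interface AgeEstimate {
    years: number;         // estimated age from the model
    marginOfError: number; // the model's known error margin, in years
  }

  function treatAsChild(estimate: AgeEstimate): boolean {
    // Only treat a user as an adult if the estimate minus its margin of error
    // is still 18 or over; otherwise apply the code's protections.
    return estimate.years - estimate.marginOfError < 18;
  }

  // A 20-year-old estimate with a +/- 3-year margin is still treated as a child here.
  const cautious = treatAsChild({ years: 20, marginOfError: 3 }); // true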

What about using facial images to estimate age?

Using AI to estimate a user’s age from an image of their face may, in principle, be a reasonable way to establish age with a level of certainty appropriate to the risk. However, we recognise that much of the work in this area is currently still at a research and development stage, which means there are few products commercially available. Therefore, if you use this technology, you must ensure that it provides an appropriate level of certainty. The processing must also comply with the UK GDPR and, where appropriate, the code.

How can I use profiling to estimate age if the code requires me to turn off profiling for children?

The code states that profiling for the purpose of age assurance is acceptable. It also recognises that profiling is harmful to children and requires it to be turned off by default, unless it is core to the service or there is a compelling reason for it to be turned on. Any profiling used for the purpose of age assurance must not create risks that are higher than the risks the age assurance is being used to address.

Some services may already be able to estimate a user’s age, with a high level of certainty, from the data they collect about them. For these services, when a user’s age is estimated and assessed to be over 18, the Children’s code does not apply, although these services still need to comply with UK GDPR and the Data Protection Act 2018. Where a user’s age is estimated to be below 18 or where it has not yet been estimated, non-core processing may continue only for the purpose of age estimation. It can also continue where you can demonstrate that it is in the best interests of the child, eg for safeguarding purposes.

How are app developers able to implement age verification when data collection is specified as an option by app stores?

App developers falling under the scope of the code are responsible for ensuring they conform with the age-appropriate application standard. This is independent of any app store requirements for developers. The standard outlines a number of approaches that developers could use for age assurance, including:

  • AI;
  • self-declaration;
  • third-party verification services;
  • account holder confirmation;
  • technical measures; and
  • hard identifiers.

Where app stores or other third parties do provide age assurance, online services can factor this into their overall assessment about how confident they are about the age of child users. You can find more information on the use of age assurance in the context of the Children's code in the Commissioner’s Opinion on age assurance.

Is there further guidance on specific age verification options, including what is likely to be proportionate in specific circumstances?

Yes. The Commissioner’s Opinion on age assurance explains that any use of age assurance, including age verification, must be proportionate to the risks to children posed by the online service. Section two provides an overview of age assurance methods and Section three gives examples of processing that are likely to be high risk. The Opinion also outlines considerations for online services to ensure their use of age assurance is compliant with data protection legislation. We recommend that any service considering the use of age assurance refers to this Opinion for further guidance that supplements standard 3 of the code.

How should services ensure ‘age-appropriate application’ when this can mean different things for different children, particularly vulnerable children?

We expect online services to take a risk-based approach to conform with the code. This includes taking into account the age and developmental needs of child users. The code provides guidance in several areas around how online services could apply this approach in practice:

  • The DPIA standard recommends consulting with parents and children to understand their specific needs.
  • The Detrimental use of data standard clarifies that online services must not use data in ways that demonstrably contravene other regulators’ standards. Here we will consider relevant standards relating to vulnerable children and children's developmental needs from bodies, including:
    • the Children's Commissioner;
    • the Equality and Human Rights Commission;
    • Public Health England; and
    • the National Institute for Health and Care Excellence.

The Commissioner’s Opinion on age assurance provides further information on how we expect organisations to assess the risks to children of various ages and developmental needs. Ultimately, it is for controllers to demonstrate how they have assessed what is appropriate for the particular needs of children, as per the Article 5(2) accountability principle of the UK GDPR. This includes children of varying ages and vulnerabilities. The Commissioner’s Opinion highlights that data controllers must ensure that any use of age assurance does not lead to undue discrimination.

Gaming

Does an online game provider have to comply with the code if the game has an ‘18’ rating?

If your game has a PEGI 18 rating, then your focus should be on how you prevent access by underage users rather than on making it code compliant. You should have sufficient age assurance systems in place so that children cannot access the game. We provide further detail on the use of age assurance in the Commissioner’s Opinion on age assurance.

If you offer games or services with a PEGI rating of 16 or lower, you would need to conform to the code.

How do you propose to get gaming designers to incorporate privacy by design into their work?

We have created Children's code design guidance to help designers ensure data protection by design and default, often known as privacy by design. We also created gaming-specific worked examples for a range of our tools, such as a gaming DPIA, privacy moments and age-appropriate mindsets. We continue to engage with the design and gaming communities through workshops and blogs to assist with embedding data protection considerations into designing gameplay.

When the service is actually a user-generated content (UGC) platform, who is responsible for the content being compliant?

The data controller, or joint data controllers, for the service that the child accesses are responsible for conformance with the code. All controllers, joint controllers and processors should appropriately assess their relationship based upon the processing taking place. Platforms allowing UGC should consider how their terms and conditions govern the way users add content to their service, so that developers build privacy by design and code conformance into their games.

Will the code also be applied retrospectively to existing games? Won’t that just result in a lot of games and services being shelved or just restricted so UK players cannot play?

The code applies to all information society services (ISS) likely to be accessed by children, including both new and existing services. We acknowledge this is a challenge for legacy services. However, the aim of the code is to protect children’s data within the digital world, by ensuring that the services they use have their best interests as a primary concern.

We recommend that you review your existing services and DPIAs as soon as possible, to bring your processing into line with the code’s standards. You should focus on assessing conformance with the standards, then identifying and actioning any additional measures necessary to conform.

Online tools

To meet the online tools standard, can we just 're-direct' users to external resources, such as those developed by children’s associations?

‘Online tools’ in the context of the code refers to mechanisms available to children to exercise their rights under UK data protection law. This includes the right to request a copy of their data or to have inaccurate data corrected. These online tools should be accessible and prominent. Pointing users to external guidance explaining their data protection rights could support this requirement. However, ISS should be able to demonstrate that they have systems in place to allow children to exercise their data rights. Online tools should link to internal processes to support users to exercise their rights.
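
As an illustrative sketch, an online tool might take a child’s rights request and route it into an internal process rather than only signposting external guidance; every name below is an assumption.

  // Hypothetical sketch: an online tool that routes a child's rights request
  // into an internal process. All names here are illustrative assumptions.

  type RightsRequestType = 'access' | 'rectification' | 'erasure';

  interface RightsRequest {
    userId: string;
    type: RightsRequestType;
    details?: string;
  }

  function submitRightsRequest(request: RightsRequest): string {
    // In a real service this would open a case in an internal workflow and
    // notify the team responsible for handling data rights requests.
    const caseId = `${request.type}-${request.userId}-${Date.now()}`;
    console.log(`Rights request received: ${request.type} for user ${request.userId}`);
    return caseId; // returned so the child (or their parent) can track progress
  }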

Broader ‘online tools’, for example those for users to report problematic content or flag a general problem with a service, are beyond the scope of the code.

Nudge techniques

Are service notifications excluded from the need to restrict or default push notifications to off when sending to under 18s?

Yes. General service notifications, pushed out to all users without using personal data to target them, fall outside the scope of the code. Where you base push notifications on personal data (for example, sending them only to users who are under 18), you need to consider the purpose of the notification when deciding whether it should be off by default.

If the push notification is essential to the service’s core function (for example, an app for sending out diary reminders) you can set these notifications to on by default. Where push notifications are non-essential (for example, encouraging users to return to the app after a period of inactivity), you need to turn these settings off by default for child users.
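
As a hedged sketch, assuming a simple split between essential and non-essential notification types, the example below keeps notifications that are core to the service on by default while defaulting everything else to off for child users; the category and function names are illustrative.

  // Hypothetical sketch: non-essential push notifications default to off for children.
  // The notification categories and function are illustrative assumptions.

  type NotificationKind = 'diary-reminder' | 'inactivity-nudge' | 'marketing';

  const ESSENTIAL_KINDS: NotificationKind[] = ['diary-reminder'];

  function defaultEnabled(kind: NotificationKind, userIsChild: boolean): boolean {
    if (!userIsChild) return true;          // adult defaults are a separate decision
    return ESSENTIAL_KINDS.includes(kind);  // children: only essential kinds on by default
  }

  console.log(defaultEnabled('diary-reminder', true));   // true  - core to the service
  console.log(defaultEnabled('inactivity-nudge', true)); // false - off by default for children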

Geolocation

What should universities do about geolocation when speaking to prospective students, such as international students, who may wish to speak to someone in their language of choice?

The geolocation standard within the code does not prohibit the use of geolocation data or location tracking. Where it is used, online services should ensure that:

  • they set these functions to off by default (unless it is essential for the functioning of a service);
  • it is obvious to children where the tracking is active; and
  • they switch geolocation tracking off when not in use.

Default settings

What would a 'compelling reason' be for not using high-privacy default settings?

The default settings standard states that privacy settings should be high-privacy and off by default, unless organisations "can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child". In practice, "compelling reasons" may be that data processing is required in order to meet a legal obligation (such as a child protection requirement) or to prevent child sexual exploitation and abuse online. A compelling reason could also be that the data processing is essential to deliver the service’s core function (for example, a banking app that is reliant on spending data). However, you must ensure that any processing complies with the purpose limitation principle of the UK GDPR.

In the case of geolocation services, you could keep services on if you are able to demonstrate that geolocation is part of your core service. You could also make a case that the metrics needed to measure demand for regional services are sufficiently unintrusive to be considered necessary. However, the onward sharing of this data or its use for targeting adverts to the area the user is in would not be a compelling reason. This would not be compliant with the purpose limitation principle.

Should children be able to opt out of profiling?

The code states that profiling should be off by default, unless it’s in the child’s best interests for the profiling to be on. Therefore, profiling should automatically be off and, where appropriate, services can offer children the option to opt in by switching it on.

Profiling which relies on cookies or similar technology should also adhere to PECR requirements. Where you require consent, you need to consider Article 8 of the UK GDPR. As children under the age of 13 cannot provide consent, you need to receive it from their parent or guardian instead.
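
As a minimal sketch of this opt-in logic, the example below keeps profiling off by default and, for users under 13, only allows it where the opt-in has come from a parent or guardian; the types and function names are assumptions.

  // Hypothetical sketch: profiling off by default, opt-in only, with parental
  // consent required for under-13s (Article 8 UK GDPR). Names are assumptions.

  interface ProfilingConsent {
    optedIn: boolean;
    givenBy: 'user' | 'parent-or-guardian';
  }

  function profilingAllowed(age: number, consent?: ProfilingConsent): boolean {
    if (!consent || !consent.optedIn) return false;              // off by default
    if (age < 13) return consent.givenBy === 'parent-or-guardian';
    return true;                                                 // 13+ can consent themselves
  }

  console.log(profilingAllowed(15));                                                    // false (default off)
  console.log(profilingAllowed(11, { optedIn: true, givenBy: 'user' }));                // false (needs parental consent)
  console.log(profilingAllowed(11, { optedIn: true, givenBy: 'parent-or-guardian' }));  // true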

DPIA

Do businesses need to create data maps around any interactions with children's data?

Yes. Understanding where, how and why you use children's data is a fundamental step in completing a data protection impact assessment (DPIA). All organisations falling under the scope of the code must complete a DPIA. Step 2 within the data protection impact assessment standard provides more details on this mapping process.
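
As an illustrative sketch only, a data-map record for a DPIA might capture what children’s data is processed, why, and where it goes; the field names below are assumptions, not prescribed by the code.

  // Hypothetical sketch of one entry in a DPIA data map for children's data.
  // The field names are illustrative, not prescribed by the code.

  interface DataMapEntry {
    dataCategory: string;     // e.g. 'geolocation', 'account details'
    purpose: string;          // why the data is processed
    lawfulBasis: string;      // e.g. 'consent', 'contract', 'legitimate interests'
    recipients: string[];     // internal systems and any third parties
    retentionPeriod: string;  // how long the data is kept
    childAgeRanges: string[]; // which age ranges the processing affects
  }

  const exampleEntry: DataMapEntry = {
    dataCategory: 'geolocation',
    purpose: 'show nearby content in the app',
    lawfulBasis: 'consent',
    recipients: ['internal analytics service'],
    retentionPeriod: '30 days',
    childAgeRanges: ['13-15', '16-17'],
  };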