Children’s Code Strategy progress update - December 2025
Due to the Data (Use and Access) Act coming into law on 19 June 2025, this guidance is under review and may be subject to change. The Plans for new and updated guidance page will tell you which guidance will be updated and when this will happen.
Latest updates - updated 1 December 2025
1 December 2025 - this progress update was published.
Introduction
Safeguarding children’s privacy is a key priority for us. In April 2024, we launched our Children’s code strategy to look closely at social media platforms (SMPs) and video sharing platforms (VSPs).
This update sets out the progress we’ve made since March 2025 [1] and the impact we’ve had on SMPs and VSPs, including:
- securing improvements to, or confirming good practice in, 10 platforms’ approaches to children’s privacy settings, including ensuring that Twitch, Viber and Hoop implemented their commitments to improve default privacy settings;
- initiating engagement with Snap and Meta about their processing of children’s geolocation information, particularly the compliance of their map functions on Snapchat and Instagram;
- issuing notices of intent to impose monetary penalties on MediaLab (Imgur) and Reddit, following our investigations into how both platforms use UK children’s personal information and their age assurance measures;
- reviewing the age assurance practices of 17 platforms popular with children in the UK, including Discord, Pinterest and X. We will continue this work with a monitoring programme to drive the adoption of more robust and proportionate age assurance methods on high-risk platforms; and
- defending TikTok’s appeal against the information notice we issued as part of our investigation into how it uses children’s personal information in its recommender systems.
Alongside this progress update, we have published a detailed review of all that we have achieved through our strategy since it was launched.
Around 98% of children (aged three to 17) have access to the internet at home. We’re making sure their privacy rights are respected. Our work has the potential to improve online privacy for up to 11.7 million children. Our findings show that improvements introduced between the launch of our Children’s code strategy in April 2024 and October 2025 have already affected over three million child users across several platforms [2].
Throughout the rest of this update, we set out in detail the progress we’ve achieved since March 2025 and how we will continue our work into 2026. That includes building on our impact and extending our focus to the mobile games sector, which is another space where children spend significant time online and where risks to their privacy may exist.
Default privacy settings
Our Children’s data lives research shows that children often share their information because of social pressure and fear of missing out. That’s why having privacy-friendly default settings matters. It makes it easier for children to stick with safer choices, even when they feel that pressure, and helps shift the norm towards valuing privacy.
Our Children’s code sets out that:
- privacy settings are a practical way to offer children choice over how their personal information is used; and
- these settings should be ‘high privacy’ by default.
This offers children greater protection by limiting who can view and interact with their content. This helps to prevent bullying, emotional harm and unwanted contact from strangers.
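As a purely illustrative sketch, and not a description of any platform’s actual implementation or a requirement of the code itself, the example below shows what ‘high privacy’ defaults could look like in a service’s account-creation logic. The setting names and the age threshold are assumptions made for illustration only.

```python
# Illustrative sketch only: hypothetical setting names showing how a platform
# might apply 'high privacy' defaults to accounts held by under-18s.
from dataclasses import dataclass


@dataclass
class PrivacySettings:
    profile_private: bool            # limits who can view and interact with content
    precise_location_sharing: bool   # off means location is never shared automatically
    personalised_advertising: bool   # profiling for targeted advertising
    visible_in_search: bool          # discoverability by users outside existing contacts


def default_settings_for(age: int) -> PrivacySettings:
    """Return account defaults, with high-privacy settings for child users."""
    if age < 18:
        # Child accounts default to the most protective configuration;
        # any less private option would need an active, informed choice.
        return PrivacySettings(
            profile_private=True,
            precise_location_sharing=False,
            personalised_advertising=False,
            visible_in_search=False,
        )
    # Adult accounts may start from broader defaults.
    return PrivacySettings(
        profile_private=False,
        precise_location_sharing=False,
        personalised_advertising=True,
        visible_in_search=True,
    )
```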
Following the launch of our strategy, we assessed 32 SMPs’ and VSPs’ approaches to privacy settings and prioritised our intervention based on the highest risk to children. We had questions about the approaches of 12 platforms. Since March 2025, we’ve confirmed good practice or secured changes to nine platforms’ approaches to default privacy settings:
- Soda, Vimeo, and Wizz App showed us they have good practice on privacy settings, such as using just-in-time notices and only allowing child users to interact with users of the same age group.
- Dailymotion, Hoop, Twitch, and Viber implemented the changes we asked of them, including just-in-time privacy notices, private profiles by default, and restricting visibility of child users on platforms.
- Vero and WeAre8 implemented partial changes, with a continued programme to roll out their commitments, including making profiles private by default and content filtering. We will monitor and review these outstanding changes.
Of the remaining three platforms, BeFriend changed its terms of service to restrict access to users aged 18 and older, and Flickr updated its terms of service to restrict platform access to users aged 18 and older, applying uniform privacy settings to users over 18. We continue to have questions about Triller’s practices; however, the platform is currently unavailable in the UK. If the platform becomes available to UK users and we are unable, through our continued engagement, to gain assurance that it has appropriate default privacy settings in place, we will consider formal information gathering.
Default geolocation settings
The code explains that geolocation information is particularly sensitive because there are potential risks if a child’s physical location can be tracked.
Our work has focused on the public sharing of children’s geolocation information, recognising that location sharing can bring both benefits and risks.
The benefits could include:
- enabling parents to balance their desire to keep their children safe with the need to give them more independence; and
- allowing children to stay connected with friends and family.
However, sharing geolocation information can introduce significant risks when it enables others to find out or track a child's location.
When location sharing is turned off by default, children can choose to share it only when they want to. This way, they still benefit from sharing their location but keep control of it, making it easier to manage any risks.
As part of our review of SMPs and VSPs, we also examined wider geolocation processing practices that may pose risks to children. These include:
- collecting IP addresses;
- mechanisms that nudge users to enable location sharing;
- clarity and transparency of privacy policies; and
- using geolocation information for marketing or advertising purposes.
This broader assessment has helped us better understand the landscape of geolocation risks and inform our approach to protecting children’s privacy online.
Out of the 32 platforms we assessed, 13 SMPs and VSPs allow children to share their geolocation. In our March update, we set out that most platforms we engaged with had precise geolocation sharing off by default and did not share children’s locations automatically with other users. However, we had engaged with and secured changes from BeReal, Sendit, Soda and X, including removing location information from profiles by default and restricting the use of precise location information on posts. Since March 2025, we have prioritised our engagement with four platforms, which resulted in:
- Vero satisfying us that its approach to geolocation settings is appropriate;
- BeFriend no longer nudging users to enable precise location sharing;
- Flickr updating its terms of service to restrict platform access to users aged 18 and older, applying uniform geolocation settings to users over 18; and
- Frog deciding to exit the UK market.
We assessed the risk of harm and found that these changes, along with our ongoing engagement, have lowered the risk to children in this area. We’ll continue to monitor the changes made by these platforms to ensure they are sustained and effective.
We’re now engaging with Snap and Meta about their processing of children's geolocation information, particularly the compliance of their map functions on Snapchat and Instagram. Snap has committed to completing a comprehensive review of its approach to processing geolocation information by the end of December 2025. We’ll consider the outcome of this review and decide whether we need to take any further regulatory action.
Profiling children for targeted advertisements
Our code sets out that profiling children for targeted advertising should be off by default for most services.
Our Children’s data lives research has revealed that some children can identify potential risks, such as their personal information being used for targeted advertising. However, many remain unaware of how their information is collected and used by companies for this purpose.
In our March update, we explained that we had engaged with a number of platforms that confirmed they made limited use of children’s data for advertising purposes. However, following our intervention, X stopped serving advertisements to children and we gained commitments from Viber to turn off personalised advertising for under-18s by default. Since March 2025, our engagement has resulted in five platforms showing us good practice or implementing changes:
- Soda, Vimeo and Wizz App showed us good practice.
- Viber implemented the changes we asked of it, including turning off personalised advertising for users under 18 years old.
- WeAre8 is currently rolling out positive changes.
Of the remaining three platforms, as noted earlier, BeFriend changed its terms of service to restrict access to users aged 18 and older, Flickr updated its terms of service to restrict platform access to users aged 18 and older (applying uniform advertising practices to users over 18), and Triller is currently unavailable in the UK.
Using children’s information in recommender systems
SMPs and VSPs use profiling to recommend content to users through recommender systems, as well as for targeted advertising. Our Children’s data lives research shows that children widely understand the personalisation of content through algorithms and recommender systems. They recognise that what they see online reflects their past likes, views and searches.
While this can make platforms more engaging and tailored to their interests, some children reported encountering inappropriate content. For example, they saw explicit, violent or disturbing media that they didn’t want to see and couldn’t understand why it had been shown to them.
Organisations using profiling should ensure that they put appropriate measures in place to protect children from any harmful effects that occur as a result of processing children’s personal information.
In February 2025, we opened an investigation into how TikTok processes the personal information of 13–17-year-olds in its recommender systems. We issued an information notice to TikTok, requiring it to provide information and documents to progress our investigation.
TikTok has appealed against the information notice to the First-tier Tribunal on the grounds that it relates to the processing of personal data for artistic, academic, literary and journalistic purposes (the ‘special purposes’). TikTok is not required to provide any of the requested information or documents, pending determination or withdrawal of the appeal [3]. We are defending the appeal.
We continue to further our understanding of how the UK GDPR and the children’s code apply to recommender systems. We’re considering further intervention in this area. We’re also engaging with online safety authorities and other data protection authorities about how best to ensure children are protected from their personal information being used in harmful algorithms.
Using information of children under 13 years old
Age assurance helps platforms either to prevent or remove access by underage users, or to tailor people’s online experience appropriately. Our code says that online services should:
- take a risk-based approach to assuring the age of their users, and apply appropriate protections for children; or
- apply all standards of the code to all users.
We are taking action where platforms in scope of the strategy are likely to carry out high-risk processing of children’s data and we have concerns that adequate age assurance is not in place. We opened investigations into Imgur’s and Reddit’s approaches to the processing of children’s personal information in the UK, with a particular focus on each platform’s approach to age assurance.
On 8 July 2025, after reaching our provisional findings, we issued a notice of intent to impose a monetary penalty on Reddit. We are carefully considering the representations received from Reddit before taking a final decision.
On 10 September 2025, after reaching our provisional findings, we issued a notice of intent to impose a monetary penalty on MediaLab about its Imgur platform. We’ll carefully consider any representations from MediaLab before taking a final decision on whether to issue a monetary penalty. We’re aware that Imgur is currently not available in the UK. Exiting the UK doesn’t allow an organisation to avoid responsibility for any prior infringement of data protection law. Our investigation remains ongoing.
We’re also interested in the approaches of services that use age assurance methods to detect children, either to:
- prevent access by underage users; or
- tailor their services to offer better protections.
In our March progress update, we explained that due to the limited information we received through the call for evidence, we’d be contacting organisations directly to find out more about how they use profiling for age assurance. Since March, we’ve engaged with services to explore how effective and proportionate their age assurance measures are. This includes age profiling to detect children under the age of 13, and using self-declaration as the primary method of age assurance.
Profiling for age assurance
The code says that profiling options should be off by default for children, unless services can demonstrate a compelling reason for profiling to be on by default, taking into account the best interests of the child.
We contacted six organisations to find out:
- how their profiling works to infer users’ ages;
- whether profiling is used to remove under 13s; and
- how effective the profiling is.
Only two of those organisations used profiling to remove under 13s from their platforms. On these services, profiling supplemented self-declaration, where users were asked to state their own age.
The profiling models on these platforms relied on information that users provided actively on the services, or that was inferred from their behaviour. On services where profiling models were used to flag under-13 accounts for potential removal, text uploaded by users was analysed to identify accounts suspected of belonging to under-13s, which were then sent for human review.
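The sketch below is a hypothetical illustration of the kind of pipeline described above, in which a text-based model flags accounts suspected of belonging to under-13s and routes them to human reviewers rather than acting automatically. The scoring function, threshold and names are assumptions for illustration, not a description of any platform’s actual system.

```python
# Hypothetical sketch of profiling that supplements self-declaration:
# a text-based model scores accounts and escalates likely under-13s for human review.
from typing import Callable, Dict, List

REVIEW_THRESHOLD = 0.8  # assumed confidence above which an account is escalated


def flag_for_human_review(
    accounts: Dict[str, List[str]],           # account id -> text posted by that user
    score_under_13: Callable[[str], float],   # model returning P(author is under 13) for one text
) -> List[str]:
    """Return account ids to send to human reviewers; no automated removal happens here."""
    flagged = []
    for account_id, texts in accounts.items():
        if not texts:
            continue  # nothing to analyse for this account
        # Average the per-post scores; a real system would weigh recency, volume, etc.
        average_score = sum(score_under_13(text) for text in texts) / len(texts)
        if average_score >= REVIEW_THRESHOLD:
            flagged.append(account_id)  # human moderators make the final decision
    return flagged
```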
Self-declaration used in isolation is not appropriate for services likely to pose high risks to children. We acknowledge that using additional age assurance measures instead of relying solely on self-declaration can help organisations to detect under 13s more effectively. However, based on current evidence, we’re of the view that profiling cannot replace a robust age gate where it is required.
There could be scope for profiling to be used as part of an age assurance system (eg alongside a robust age-gate), if organisations can evidence that their use of children’s information is necessary and proportionate, and their system is effective.
We’ll continue to monitor developments in this technology. For further information, read about what we think and what we learned about profiling for age assurance.
Relying on self-declaration
We’ve been examining the continued reliance on self-declaration by SMPs and VSPs. This typically involves users entering a date of birth when creating an account, without any reliable means of verifying its accuracy. Our research has shown that self-declaration remains a common approach.
This method presents significant risks to child users, as it allows children below the minimum age required by the terms of service to easily misrepresent their age and access platforms. Findings from our Children’s data lives research over the last two years reinforce this, showing that self-declaration measures are often easy to bypass. Children told us that in many cases platforms either didn’t ask for birth dates or only allowed entry of ages above the minimum threshold. This nudges children to misrepresent their age.
Children bypassing age checks may be exposed to high-risk activities, such as targeted advertising and behavioural profiling. Ineffective identification of children also makes it more difficult for platforms to apply appropriate protections for children aged 13-18 who are permitted on a service.
To address this, we will be prioritising high-risk services for further regulatory engagement. Those who continue to rely primarily on self-declaration will be a key focus in our next phase of work, as we seek to drive the adoption of more robust and proportionate age assurance methods.
We’ve identified 17 platforms that we will include in this next phase of our work.
Collaborating with Ofcom
Ofcom also plays an important role in ensuring children are protected online through its responsibility for the Online Safety Act (OSA). Under Ofcom’s Protection of Children Codes, online service providers that allow harmful content, as well as dedicated pornography services, must use highly effective age assurance to prevent children from encountering harmful content online.
We continue to collaborate with Ofcom on age assurance to promote a safer internet, where children’s information is protected.
Our recent work together involved:
- providing input and data protection expertise to Ofcom’s statement on protection of children and guidance on highly effective age assurance; and
- promoting a strong international regulatory culture through presentations at the Global Age Assurance Standards Summit in April 2025.
In response to an increase in the use of age assurance measures after Ofcom’s protection of children codes came into force in July 2025, we reminded organisations about their data protection obligations. This included the availability of ICO-approved certification schemes [4] through which they can demonstrate compliance with data protection law and the children’s code. We’re preparing to start a data protection risk review of age assurance providers to understand their current compliance.
We plan to issue a third joint statement with Ofcom in early 2026 to further clarify how services can meet their age assurance obligations under both the data protection regime and OSA.
International engagement
International collaboration continues to be crucial for our learning in the age assurance space. We chair the International Age Assurance Working Group (IAAWG) and have organised three technology teach-ins, at which six age assurance providers presented their measures to the working group’s membership to encourage knowledge exchange. These included both age verification and age estimation techniques from providers based in different jurisdictions. This has contributed to our understanding of the age assurance options available in the market.
What’s next
We are concluding our work with SMPs and VSPs about default privacy settings, default location settings and targeted advertising. As we’ve set out in this update, we’ve secured positive improvements in these areas across a range of SMPs and VSPs. We’re satisfied that the overall risk to children’s privacy on the platforms we identified has been significantly reduced. However, we’ll continue our engagement with Snap and Meta about their geolocation features, and monitor developments in this area. Where we have ongoing concerns, we’ll consider using our enforcement powers where necessary.
This will help us progress our ongoing work into how SMPs and VSPs use information of children under 13, as well as how information is used in recommender systems. In addition, this conclusion will allow us to focus on expanding our work to include mobile games. This work is necessary and will help to ensure that children are protected from data protection harms in this sector. We will also be putting the Data (Use and Access) Act into practice and continuing to use our position to advocate for children.
Together, these steps will ensure we target our regulatory action where it is most needed. This will help us to focus our direct engagement on the services that present the greatest risks or opportunities. At the same time, this will support our efforts to raise the overall standard of online protection for children. We set out further details of our next steps below.
Progressing our ongoing work
We will be strengthening our focus on driving improvements in how SMPs and VSPs use information of children under 13, as well as how they use information in recommender systems.
To ensure that platforms are protecting children's wellbeing online by effectively identifying children under 13, we aim to secure improvements by:
- reviewing and driving improvements in services’ approaches to age assurance to identify under-13s on their platforms. We will begin with a monitoring programme to drive the adoption of more robust and proportionate age assurance methods on 17 identified high-risk platforms that currently appear to rely primarily on self-declaration; and
- using our regulatory Sandbox to assess the development of a social media platform, Tribela, targeted at children, which will use age assurance measures such as facial age estimation.
We will continue to ensure platforms are protecting children’s information in recommender systems and consider further interventions in this area. We will also continue to engage other relevant authorities to share information and insights.
Mobile games
Around 90% of children in the UK play games on digital devices [5], using them as a popular way to connect and spend time with friends. Our early review suggests that some mobile games’ design features can be especially intrusive. This raises important questions about how these games are designed and experienced, and how well they adhere to the code’s standards.
Our work on mobile games will start with a monitoring programme focusing on 10 popular mobile game platforms used by children in the UK. These are games that are intended for leisure and entertainment, designed to be played on smartphones and tablets. They range from casual puzzle games to complex multiplayer experiences.
Our monitoring programme will review game design and user experience to assess compliance with default privacy settings, default geolocation settings, and targeted advertising.
We will also use this review to identify whether we need to extend our compliance assessment work to other areas of the code, or other areas of the games sector in the future. We’re also keeping abreast of other sectors or issues that may be causing harm to children’s privacy online and may look to expand our work further in the coming months.
Data (Use and Access) Act
The government has passed new legislation, the Data (Use and Access) Act, which strengthens protections for children online. Starting in 2026, we will update our guidance and consider any implications of the new legislation for our children’s code, setting out our expectations of how organisations should best support and protect children online.
We’re continuing to engage with government on the:
- sequencing and delivery of our guidance;
- implications for existing codes; and
- development of secondary legislation that will require us to produce a new code about processing children’s personal information in educational technology (Ed tech).
Advocating for children
We will continue to influence further changes in how companies use children’s information. We will continue to advocate for children by:
- launching an awareness campaign to empower parents to make online privacy part of their day-to-day conversations with their children;
- continuing international engagement to influence international policy discussions [8];
- commissioning new research to understand children’s experiences online. This builds on our existing Children’s data lives project, which continues to follow how children think about and engage with data and privacy. New research will include an annual tracking survey with children and parents, as well as a yearly study with adults who help keep children safe on social media, video-sharing platforms, and gaming platforms;
- auditing providers of Ed tech services used to process children’s information in an educational setting;
- publishing guidance to help people and organisations in the UK who work in the education sector feel confident in sharing children’s information for safeguarding purposes; and
- collaborating with the Department for Education and the National Crime Agency to ensure schools recognise unauthorised system access by students as cybercrime and report incidents where necessary. This will help divert young people from future criminality and ensure school communities are protected from related security and information breaches.
[1] Children’s code strategy progress update – March 2025
[2] Measuring the direct impact on this population can be challenging. As children often use more than one platform, the figures do not necessarily represent individual children in the population.
[3] In accordance with s142(6) DPA 2018
[4] Age Check Certification Scheme, and the Age Appropriate Design Certification Scheme
[6] 2025 Global Privacy Enforcement Network sweep focuses on the protection of children’s privacy | Global Privacy Enforcement Network
2025 G7 Data Protection and Privacy Authorities Roundtable Statement - Office of the Privacy Commissioner of Canada