A new phase in regulating children’s digital lives?

Isabella Shepherd

Account Executive

The UK’s approach to children’s online access is entering a new phase. Policymakers are increasingly focused not solely on safety, but also on reducing children’s exposure to digital platforms.  

The Online Safety Act 2023 has laid the groundwork, introducing a broad regulatory framework to protect users – particularly children – from harmful content. Now, as implementation continues, it is clear that the Act marks the beginning, not the endpoint, of a wider policy agenda.  

This shift is already being tested in practice through several routes. New pilot schemes have been announced that will explore restrictions on smartphone and social media use among children, particularly in school settings. These trials signal a growing willingness to move beyond platform regulation and into more direct interventions aimed at reducing overall exposure.  

This has been followed by new guidance for under-fives, encouraging strict limits on screen time, discouraging social media entirely, and advising against early use of AI tools and chatbots. The guidance reflects the Government’s intent to give parents practical support on how, when and whether children engage with digital technology at all, further emphasising that the agenda is moving beyond platform safety towards actively shaping children’s access to online content. Further guidance for older children will follow in the coming months. 

In addition to this, amendments to the Children’s Wellbeing and Schools Bill in the House of Lords have sought to introduce stricter age-based limits on social media access, including through an under-16 ban. Although this was rejected by the Commons in the most recent consideration of the bill’s amendments, it demonstrates the strength of feeling on both sides of the debate around screens and social media. 

Meanwhile, the Crime and Policing Bill expands the scope of online harms captured in law, with growing scrutiny on platform design itself. Features such as infinite scroll, autoplay and algorithm-driven feeds are increasingly viewed as drivers of excessive use rather than neutral tools. 

This direction of travel is reinforced by the Government’s recently published consultation on children’s online experiences. It seeks views on measures including restricting addictive platform features and strengthening age verification. Notably, its scope extends beyond social media to include gaming platforms and AI tools, signalling a focus on children’s overall digital environments rather than individual services alone. 

Taken together, these developments point to a clear move from regulating harmful content towards actively shaping how – and how often – children engage with the online world. 

The concerns underpinning this go further than screen time. Misinformation is an increasingly prominent factor in the debate, particularly as social media becomes a primary news source for younger audiences. Content is more often delivered through algorithms rather than through traditional channels, raising questions about how effectively younger users can navigate misleading or low-quality information. Pressure is growing on the Government to consider not just harmful content, but the broader structure of digital environments. 

Ofcom has begun to step up its monitoring and enforcement role under the Online Safety Act, including early investigations into major platforms. At the same time, the Government faces a balancing act: supporting the growth of digital technologies, including AI in education and public services, while responding to concerns about young people’s exposure to online harms and weighing the potential impact of additional regulation on the UK’s AI and tech sector, whose growth the Government has signalled it supports. 

There are also important political considerations at play for this government. Labour’s commitment to extending voting rights to 16-year-olds, combined with political parties’ reliance on social media to campaign and communicate with the electorate, raises questions about how potential restrictions on access could intersect with the views of young voters. 

Practical challenges in enforcement – such as age verification – remain, and raise doubts about how effective further restrictions could be in practice. Overly restrictive measures risk pushing young users towards less regulated platforms, potentially undermining policy objectives. The outcome of the current consultation – closing on 26 May 2026 – will be critical in determining how these risks are managed. 

A full ban on under-16s using social media appears unlikely in the near term, particularly given the latest rejection in the House of Commons. However, the direction of travel is clear: further regulatory intervention is coming. The key question is therefore not whether the UK will go further, but how far and how effectively the Government can strike the balance between innovation, online protection, and participation from all age groups.  

For any organisations operating online or working with young people, this policy area continues to be a challenging and changeable one to navigate. 

For more information or to discuss how to engage in this evolving policy landscape, please contact us today: info@plmr.co.uk 
