UK Plans New Social Media Restrictions For Under-16s

Social media restrictions for under-16s are moving closer to reality in the UK as ministers commit to action following a major consultation, signalling a significant change in how young people access digital platforms.

Why The UK Is Moving Towards Social Media Restrictions

The UK government has made it clear that some form of restriction on social media use for under-16s will be introduced, even if a full ban is not adopted, with ministers now focused on deciding how those measures should work in practice.

This change comes after growing concern about the impact of social media on children’s mental health, behaviour, and safety, alongside mounting political pressure from campaigners, parents, and members of Parliament. The Children’s Wellbeing and Schools Bill is central to this process, as it gives ministers the power to introduce restrictions through regulation rather than requiring entirely new legislation.

The consultation, which closes later this month, is designed to gather evidence on what combination of measures would be most effective. Ministers have emphasised that the objective is not simply to act quickly but to ensure that any changes are workable and enforceable at scale, and that the approach should be “evidence-led, with input from independent experts”.

What Type Of Restrictions Are Being Considered?

Rather than focusing solely on an outright ban, the government is currently exploring a range of targeted interventions aimed at reducing harm while preserving some level of access.

One key area is the design of platforms themselves, with proposals to limit or remove features that encourage prolonged use, such as infinite scrolling, autoplay, and algorithm-driven content feeds. These features have come under increasing scrutiny for keeping users engaged for extended periods, often without clear stopping points.

Age verification is another major focus, with stronger enforcement expected to play a central role in any future framework, particularly given evidence that many children already bypass existing age limits by registering with false dates of birth.
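To see why self-declared dates of birth are such a weak control, consider this minimal Python sketch of a sign-up age gate (the function name and the 16-year threshold are illustrative assumptions, not any platform’s actual code). The check is only as reliable as the date the user chooses to type in.

    from datetime import date

    MINIMUM_AGE = 16  # illustrative threshold matching the proposals above

    def is_old_enough(claimed_dob: date, today: date | None = None) -> bool:
        # Age in whole years, based entirely on what the user claims.
        today = today or date.today()
        age = today.year - claimed_dob.year - (
            (today.month, today.day) < (claimed_dob.month, claimed_dob.day)
        )
        return age >= MINIMUM_AGE

    print(is_old_enough(date(2012, 6, 1)))  # False: a real 13-year-old is blocked
    print(is_old_enough(date(2000, 6, 1)))  # True: the same child, entering a false date

Nothing in such a check can distinguish a genuine date from an invented one, which is why attention is turning to verification methods that do not rely on self-declaration.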

The consultation is also examining the potential for time-based controls, including overnight curfews, as well as restrictions on access to AI chatbots and other emerging technologies that may expose children to inappropriate or harmful interactions. This forms part of a broader effort “to examine the most effective ways to ensure that children have ‘healthy online experiences’”.
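As a rough sketch of how a time-based control could work in practice (the hours and names below are purely illustrative assumptions; the consultation has not defined a specific window), an overnight curfew has one small implementation wrinkle: because the window crosses midnight, it has to be tested as two comparisons rather than one.

    from datetime import time

    # Illustrative window only; no specific hours have been proposed.
    CURFEW_START = time(22, 0)  # 10pm
    CURFEW_END = time(6, 0)     # 6am

    def in_curfew(now: time) -> bool:
        # The window wraps past midnight, so "inside" means after the start
        # OR before the end, not between the two.
        return now >= CURFEW_START or now < CURFEW_END

    print(in_curfew(time(23, 30)))  # True: late evening falls in the window
    print(in_curfew(time(7, 15)))   # False: morning access is allowed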

Taken together, these measures point to a more granular approach, where specific features and behaviours are regulated rather than applying a single blanket rule across all platforms.

The Evidence Driving The Debate

The policy push is underpinned by a growing body of data and research highlighting both the scale of social media use among young people and the risks associated with it.

For example, recent figures show that social media use is nearly universal among teenagers, with around 95 per cent of 13- to 15-year-olds actively using platforms and the vast majority holding their own accounts. At the same time, a significant proportion of children report exposure to harmful or distressing content, including material linked to self-harm, bullying, and unrealistic body image expectations.

The Online Safety Act 2023 already requires platforms to take steps to protect children from harmful content, including enforcing age limits and removing illegal material. However, ongoing enforcement actions and investigations suggest that compliance has been uneven and that further intervention may be needed to achieve meaningful improvements.

Concerns have also been raised about the underlying design of platforms, particularly features that drive prolonged engagement, with policymakers pointing to risks from “design features that encourage them to spend more time on screens, while also serving up content that can harm their health and wellbeing”.

How Other Countries Are Approaching The Issue

Several countries have already introduced, or are actively weighing, restrictions similar to those the UK is now considering.

For example, Australia has taken the most direct approach, introducing a nationwide ban on social media access for under-16s, with platforms required to take reasonable steps to prevent children from creating or maintaining accounts. Early enforcement efforts led to millions of accounts being removed, demonstrating that large-scale intervention is technically possible, although questions remain about long-term effectiveness and circumvention.

Spain has signalled its intention to follow a similar path, while France has already introduced measures requiring parental consent for younger users and is exploring tighter controls. Across the European Union, regulators have also focused on platform design, with actions taken against companies over addictive features and insufficient child protection measures.

These international examples show that governments are increasingly willing to intervene directly in platform access, but also that enforcement remains difficult, particularly where young people find alternative routes to the same services.

What Challenges Still Need To Be Addressed

Implementing effective restrictions is likely to prove complex, particularly given the global nature of social media platforms and the ease with which users can bypass controls.

Age verification remains one of the most difficult issues, as systems must be robust enough to prevent misuse while also protecting user privacy and remaining practical for widespread adoption. Even with improved verification methods, there is a risk that children will migrate to less regulated platforms or use shared accounts to maintain access.

There are also broader questions about how restrictions might affect positive uses of social media, including communication, education, and community building, particularly for young people who rely on online spaces for support and connection.

These competing factors explain why the government has opted for a consultation-led approach, aiming to weigh safety and practicality against the risk of unintended consequences before finalising its strategy.

What Does This Mean For Your Business?

For UK businesses, the immediate impact will depend on how directly they interact with younger audiences, but the broader implications extend well beyond youth-focused platforms.

Changes to social media regulation are likely to influence how digital platforms operate more widely, particularly in areas such as content moderation, user verification, and the design of engagement features. Businesses that rely on social media for marketing, customer engagement, or recruitment may see shifts in platform behaviour, audience reach, and compliance requirements over time.

Stronger age verification and feature restrictions could also affect advertising strategies, especially where campaigns currently reach mixed-age audiences, requiring more careful targeting and clearer segmentation.

There is also a wider regulatory signal that digital products are increasingly being judged not just on functionality and growth, but on their impact on users, particularly vulnerable groups. This trend is already visible in areas such as data protection and online safety, and it is likely to extend further as governments respond to public concern about digital harms.

Organisations involved in technology, digital services, education, or safeguarding should be paying close attention, as the outcome of this consultation will help shape the next phase of UK digital regulation. Businesses that understand how these changes affect platform design, user behaviour, and compliance expectations will be better placed to adapt as new rules are introduced and enforced.

Mike Knight