The UK’s Department for Science, Innovation and Technology has launched a six-week pilot involving 300 teenagers aged 13 to 17, testing four different approaches to limiting social media use. One group will have apps blocked entirely; a second will face a daily time cap of one hour; a third will have access cut off between 9 pm and 7 am. A control group carries on as before. Researchers will interview parents and children at the start and end to assess the effect on family dynamics, sleep, and academic performance.
The trial is the UK government’s most concrete step yet toward either legislating a social media ban for under-16s or providing an evidence base for rejecting that approach. It follows a broader digital wellbeing consultation launched in 2026 that had received 30,000 responses by the time the pilot was announced. Ofcom and the Information Commissioner’s Office have both urged social media platforms to improve age verification and restrict stranger contact with minors — measures that amount to a softer version of the same underlying concern.
What the Pilot Can and Cannot Tell Us
The trial’s design is straightforward, and that is both its strength and its limitation. Three hundred teenagers, six weeks, four conditions. The results will tell policymakers something genuine about whether parental enforcement of social media restrictions affects family life, sleep quality, and academic performance in the short term.
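To see how narrow that window is, consider a back-of-envelope power calculation. With 300 teenagers split across four conditions, each arm holds roughly 75 participants. Under a standard two-sample normal approximation (our own illustration, not anything published in the trial protocol), the smallest effect a pairwise comparison between arms could reliably detect is around half a standard deviation:

```python
from statistics import NormalDist

def min_detectable_effect(n_per_arm: int, alpha: float = 0.05, power: float = 0.8) -> float:
    """Smallest standardized effect size (Cohen's d) detectable in a
    two-sample comparison, via the usual normal-approximation formula:
    d = (z_{1-alpha/2} + z_{power}) * sqrt(2 / n_per_arm)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * (2 / n_per_arm) ** 0.5

# ~300 teenagers across four arms -> about 75 per arm
print(round(min_detectable_effect(75), 2))  # ≈ 0.46
```

In other words, under these assumptions only medium-sized effects (roughly 0.46 standard deviations or larger) would register as statistically significant in any single between-arm comparison; subtler shifts in sleep or schoolwork would likely go undetected.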
What the trial cannot tell policymakers is whether a platform-level ban — applied to all UK users under a certain age through age verification at the app or operating system level — would produce similar effects at scale. The difference is significant. A family-level restriction affects how teenagers use devices at home, supervised by parents who have opted into the experiment. A platform-level ban affects the supply of the service itself, regardless of parental engagement, and requires technical enforcement mechanisms that are still being debated.
The Register observed that the trial also cannot account for substitution effects: teenagers blocked from Instagram or TikTok by parental controls may migrate to platforms without those controls, to private messaging, or to other online spaces where adult supervision is even lower. The restriction, in other words, may change where young people are on the internet without changing how much time they spend there or what they encounter.
The Political Context
The UK government is under sustained public pressure on children’s online safety following a series of high-profile cases in which social media algorithms recommended harmful content to minors. The Online Safety Act, passed in 2023, gave Ofcom new powers to fine platforms failing to protect children — but the regulation is still being implemented and contested. The social media ban trial represents a parallel track: direct behavioural intervention rather than platform accountability enforcement.
Australia moved further than the UK in 2025, passing legislation banning social media for under-16s and requiring platforms to verify ages rather than relying on parental controls. The Australian approach shifts the compliance burden to the platform rather than the family — a significant structural difference from what the UK is currently testing.
The POV
The UK trial is genuinely useful if it is understood correctly. It is testing whether parental restriction of social media access affects teenage wellbeing in measurable ways over six weeks. That is a narrow and answerable question. The political pressure on the government is to answer a much broader question: whether banning social media for all teenagers through platform-level age gates would be safe, enforceable, and net positive for children’s wellbeing. The six-week pilot of 300 families cannot answer that. The risk is that it will be used as though it can — that positive findings will justify a policy leap from “parental controls help” to “ban the platforms,” ignoring the substitution effects, the enforcement challenges, and the possibility that teenagers excluded from mainstream platforms end up somewhere less visible and less safe.
The UK trial exists within a much larger global conversation about youth mental health and platform accountability. Australia has passed a hard ban on social media for under-16s. France has attempted screen-time restrictions. The United States is mired in congressional debates while state-level laws get challenged in court. What makes the UK approach unusual is its reliance on behavioural research rather than legislative mandate — a signal that policymakers are not yet confident enough in the evidence base to impose blanket rules. That tentativeness may be appropriate, but it also means that the 300 teenagers in this trial are essentially proxies for a policy question that affects hundreds of millions of young people worldwide.
Platform companies have watched each of these national experiments carefully. None has materially altered its core recommendation algorithms in response. Until the financial consequences of targeting young users outweigh the revenue those users generate, voluntary compliance will remain limited.
Sources
- UK government to trial social media ban for hundreds of teens — CNBC, March 25, 2026
- UK’s teen social media ban gets first reality check — The Register, March 26, 2026
- Children and parents to pilot social media bans — GOV.UK
- Hundreds of UK teenagers to be banned from social media as part of pilot — ITV News