Can We Govern the Algorithm?: Netflix’s Adolescence and Children’s Online Safety
13/05/25
In the final episode of Adolescence, two parents ruminate on what more they could have done to protect their child from online radicalisation. “He was in his room, weren't he? We thought he was safe, didn't we? ... You know, what harm can he do in there?”¹ Their whole lives have been rattled by harms to their child that they had not remotely perceived.
This is far from pure fiction. Ofcom has reported that over half of boys aged 11 to 14 are aware of and have engaged with influencers tied to the ‘manosphere.’² The show alerted viewers to the complex and multifaceted problems we face in protecting children online, but ultimately left it to us, with newfound insight, to find, explore and debate the solutions. Every so often, a tragedy such as the death of Molly Russell in 2017³ sparks national conversation and turns our attention to children’s online safety. It makes us aware of a reality we often ignore – that young children can become both victims and perpetrators when left unprotected from, and unequipped to navigate, the content they are exposed to online. In the Prevention of Future Deaths report following Molly Russell’s death, HM Coroner Mr Andrew Walker found that the sites Molly accessed had “normalised her [depressive illness] focusing on a limited and irrational view without any counterbalance of normality” and that such material “contributed to her death in a more than minimal way”.⁴
When children create social media accounts, they can tap into the vast internet – a place “designed by adults, for adults”.⁵ While age protections exist, they are rarely stringent and “enforcement is laughable”,⁶ with over 25% of UK children aged 5 to 7 using TikTok.⁷ Cambridge Mind Technologies (CMT) aims to provide an extra layer of support for children exploring vast online worlds, and to help them develop robust emotional maturity in the face of social media feeds that can deliver increasingly extreme posts.
But Adolescence has also left us to ruminate – to question whether social media companies themselves, and the algorithms they build to keep us hooked, bear any responsibility for these tragedies. As we gain a greater understanding of the harm that can be inflicted when children become entirely immersed in online worlds, governments globally are looking to impose new regulations on Big Tech. This article examines the different measures governments are implementing, and asks whether any are well suited to prevent children – just like Jamie – from falling into toxic and damaging online spaces.
Blanket Bans or Safety by Design?
Governments across the world are responding to the growing concern and outcry surrounding online safety by proposing and implementing new measures. These responses can generally be classed into two categories (Table 1).
Table 1 - Differing approaches to online safety concerns
With hopes of making the UK “the safest place in the world to be online”,⁸ the Online Safety Act (OSA) was passed into law by Parliament on 26 October 2023. Its provisions are being implemented in phases over the next year, placing new duties on online service providers to take steps to protect children. While age assurance measures form part of this work, the Act does not impose a blanket ban; instead, it pressures service providers to create safer platforms, allowing children to have “age appropriate experiences”.⁹
The OSA’s illegal harms provisions recently came into force, empowering Ofcom to fine companies that fail to identify and prevent illegal content from appearing on their platforms. Ofcom has since launched its first investigation under these powers, into an online suicide forum.¹⁰ Before this, Ofcom fined the provider of OnlyFans £1.05 million for failing to disclose accurate information about its age assurance measures – measures intended to prevent children from accessing pornographic material.¹¹
While this is undoubtedly beneficial, the question remains whether the OSA goes far enough. Beyond targeting the most serious illegal content, can it effectively protect children from toxic, but legal, spaces? The OSA directly addresses legal content through its regulation of ‘content that is harmful to children’, which it categorises into two types¹² (as seen in Table 2 below):
Table 2 - Harmful Content Under the OSA
Both types of content appear to be extremely prevalent – Ofcom has found that three in five UK children aged 13 to 17 reported encountering potentially harmful content over a four-week period.¹³ But with such a wide range of severity encompassed by a word like “bullying”, the OSA’s ‘harmful content’ provisions must be enforced in a way that carries weight, without placing an unreasonable burden on companies to detect even the most subtle cyberbullying. The nuance of these issues is highlighted in Netflix’s Adolescence: the adults and police officers investigating the case mistakenly read emojis left on Jamie’s Instagram page as friendly or flirtatious, while the children involved instantly recognise them as insults and taunts accusing Jamie of being an incel.
While the OSA marks a step forward, will it prove sufficient to spur change and create a safer environment for children? The 5Rights Foundation, an NGO working to build a digital world designed for children, argues that “the success of the act hinges on robust implementation.”¹⁴ If tech companies are now required to offer children a high standard of protection online, we must require them to demonstrate their successes, rejecting empty promises and tick-box solutions.¹⁵ The OSA can aspire to make the UK ‘the safest place in the world to be online’, but it must be effectively implemented, and leveraged against tech companies, if it is to achieve those lofty aims.
Australia’s recent Online Safety Amendment (Social Media Minimum Age) Bill 2024¹⁶ represents the alternative approach. Its rationale is that if tech companies resist and dodge regulation, the harms of social media and smartphones on children must be targeted with blanket bans, as described in Table 1. A similar rationale underlies the Conservative Party’s recent push to ban phones in schools in the UK. While there is currently no statutory mandate in place, a national survey has found that over 90% of secondary schools in England already have a phone ban.¹⁷
However, many who agree that social media is often harmful to children nonetheless oppose a blanket ban. The Youth Select Committee’s report on Youth Violence and Social Media points out that stripping access could deprive under-16s of the internet’s potential benefits – learning digital skills and finding community.¹⁸ The Committee also notes the ease with which children can circumvent such bans, as they already do with the current age restrictions for under-13s. Sonia Livingstone, Professor in the Department of Media and Communications at the London School of Economics, argues that such bans can allow companies to abdicate their responsibility to create safe spaces.¹⁹ This ultimately leads to greater danger when children do bypass age restrictions, as more harm awaits them in an unregulated space. These concerns were echoed in an open letter to the Australian government from 140 mental health experts.²⁰
Increasing media literacy is one option: we can invest in promoting healthy tech habits and in raising awareness of online dangers among both parents and children. Google’s initiative Be Internet Legends, recently promoted on Google’s home page (as shown below in Figure 1), centres on educating children and equipping them with basic internet safety tools through an interactive, gamified experience.²¹
Figure 1 - Screenshot of Google’s ‘Be Internet Legends’ advertised on the Home Page (April 25, 2025)
However, this places the burden on individuals to cope better with harmful content and online danger, rather than on tech companies to restrict it. Sonia Livingstone argues that such emphasis on media literacy is insufficient when children encounter “algorithms closely tailored to their preferences and also their vulnerabilities, to sustain their undivided and unending attention.”²² Expecting education alone to prevent the harms these algorithms create for children is not a viable strategy.
The Shifting Tides
With an estimated one in three internet users globally being a child (under 18),²³ children deserve platforms designed with them in mind. However, the challenge of pressuring tech companies to build safe spaces by design will be hard to overcome without international effort. Although the OSA has extra-territorial scope (it can apply to companies based abroad with significant UK user bases), the UK alone cannot prompt a total shift in the attitudes of the social media sector. Ofcom itself has conceded that small sites based abroad with anonymous user bases are likely beyond its reach.²⁴
In September 2024, Meta introduced Teen Accounts for Instagram, which automatically apply built-in protections limiting the content users under 16 can access. Teen Accounts also carry default privacy settings that prevent strangers from contacting children or seeing what they post. Meta recently announced that it is expanding the scheme to Facebook and Facebook Messenger.²⁵
However, with the AI race heating up and the tides of US politics changing, the focus is shifting away from regulation. Meta’s move to expand Teen Accounts sits uneasily with other measures it has taken to ‘restore free expression’ at the expense of online safety, such as replacing fact-checking with Twitter/X-style Community Notes.²⁶ With the current US administration and Silicon Valley CEOs united in the view that deregulating social media amounts to defending free speech, the efficacy of policy in this area is increasingly uncertain. In this heated debate, the detrimental implications for children of an unregulated and instantly accessible online world often fall out of the picture.
Additionally, as the UK pushes to become an AI superpower²⁷ and an attractive location for technology companies, there is a perceived risk that excessive regulation and fines will drive new start-ups away. Rachel Reeves recently met major UK regulators and emphasised the importance of “tearing down the regulatory barriers that hold back growth.”²⁸ Such rhetoric may limit Ofcom’s ability to realise the full potential of the Online Safety Act, blunting its efficacy against key issues.
Ultimately, the UK is only a small piece of the much larger international effort required to regulate service providers such as Meta and X, and to hold them to account when they fail to address children’s safety. Perhaps creative works like Adolescence will put the spotlight back on these issues and revive public calls for change.
Where Cambridge Mind Technologies Comes In
In a time of sporadic and piecemeal regulation, where the balance between prizing free speech and increasing oversight is ever-changing, CMT’s conversational agent, Cami, can play a pivotal role in bridging the gap between reality and online spaces for young people.
Social media has been designed without consideration of its long-term effects on young users, from addictive interfaces to algorithms that amplify extreme and hateful rhetoric. As young people isolate themselves and retreat further into online spaces, Cami can provide a partner – a ‘thought buddy’ – for navigating hard times and encouraging offline conversations. Cami can help guide young people through the messy and unpredictable sea of online content because, unlike social media, it was designed specifically with them in mind.
What are your thoughts on this? Please feel free to email hello@cambridgemindtechnologies with any opinions on this topic – we’d love to hear from you!
References
‘Episode 4’ (2025) Adolescence. Netflix. Available at: Netflix (Accessed: May 12, 2025).
Ofcom (2025) New rules for a safer generation of children online. Available at: https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online (Accessed: April 29, 2025).
Crawford, A. (2024) “The Online Safety Act is one year old. Has it made children any safer?,” BBC News, 27 October. Available at: https://www.bbc.co.uk/news/articles/c5y38z4pk9lo (Accessed: April 29, 2025).
H.M. Coroner Mr Andrew Walker (2022) Molly Russell: Prevention of future deaths report. London. Available at: https://www.judiciary.uk/prevention-of-future-death-reports/molly-russell-prevention-of-future-deaths-report/ (Accessed: May 12, 2025).
Betancourt, L., Ringmar Sylwander, K. and Livingstone, S. (2025) “Being heard: Shaping digital futures for and with children,” Media@LSE. Available at: https://blogs.lse.ac.uk/medialse/2025/03/05/being-heard-shaping-digital-futures-for-and-with-children/ (Accessed: May 12, 2025).
Campbell, C. (2025) “Australia’s Leader Takes On Social Media. Can He Win?,” TIME, 3 April. Available at: https://time.com/7273443/australia-social-media-ban-anthony-albanese/ (Accessed: April 29, 2025).
Ofcom (2024) A window into young children’s online worlds. Available at: https://www.ofcom.org.uk/media-use-and-attitudes/media-habits-children/a-window-into-young-childrens-online-worlds (Accessed: April 29, 2025).
Department for Science, Innovation and Technology (2022) Online Safety Bill: supporting documents. Available at: https://www.gov.uk/government/publications/online-safety-bill-supporting-documents#what-the-online-safety-bill-does (Accessed: April 29, 2025).
Department for Science, Innovation and Technology (2025) Online Safety Act: explainer, GOV.UK. Available at: https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer (Accessed: April 29, 2025).
5Rights Foundation (2025) UK’s online safety regulator launches first investigation under Online Safety Act, 5Rights Foundation. Available at: https://5rightsfoundation.com/uks-online-safety-regulator-launches-first-investigation-under-online-safety-act/ (Accessed: April 29, 2025).
Ofcom (2025) Ofcom fines provider of OnlyFans £1.05 million. Available at: https://www.ofcom.org.uk/online-safety/protecting-children/ofcom-fines-provider-of-onlyfans-1.05-million (Accessed: April 29, 2025).
Department for Science, Innovation and Technology (2025) Online Safety Act: explainer, GOV.UK. Available at: https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer (Accessed: April 29, 2025).
Ofcom (2025) New rules for a safer generation of children online. Available at: https://www.ofcom.org.uk/online-safety/protecting-children/new-rules-for-a-safer-generation-of-children-online (Accessed: April 29, 2025).
5Rights Foundation (2024) Enforcing the Online Safety Act for Children, 5Rights Foundation. Available at: https://5rightsfoundation.com/resource/enforcing-online-safety-act-for-children/ (Accessed: April 29, 2025).
Ibid.
Online Safety Amendment (Social Media Minimum Age) Bill 2024 (The Parliament of the Commonwealth of Australia)
Adams, R. (2025) “More than 90% of schools in England ban mobile phone use, survey shows,” The Guardian, 10 April. Available at: https://www.theguardian.com/education/2025/apr/10/majority-of-schools-in-england-ban-mobile-phone-use-survey-shows (Accessed: May 12, 2025).
Youth Select Committee (2025) Youth Violence and Social Media. Available at: https://www.parliament.uk/globalassets/documents/youth-select-committee/hc-999---youth-violence-and-social-media-online.pdf (Accessed: April 29, 2025).
Livingstone, S. (2025) “Child online safety – next steps for regulation, policy and practice,” LSE British Politics and Policy Blog. Available at: https://blogs.lse.ac.uk/politicsandpolicy/child-online-safety-next-steps-for-regulation-policy-and-practice/ (Accessed: April 29, 2025).
Australian Child Rights Taskforce (2024) Open letter regarding proposed social media bans for children. Available at: https://apo.org.au/node/328608 (Accessed: April 29, 2025).
Google (2022) Helping children be safe and confident explorers of the online world. Available at: https://blog.google/around-the-globe/google-europe/united-kingdom/helping-uk-children-be-safe-and-confident-explorers-online-world/ (Accessed: April 29, 2025).
Livingstone, S. (2025). Ibid.
Livingstone, S., Carr, J. and Byrne, J. (2015) “One in Three: Internet Governance and Children’s Rights,” Global Commission on Internet Governance Paper Series, 22. Available at: https://www.cigionline.org/publications/one-three-internet-governance-and-childrens-rights/ (Accessed: April 29, 2025).
Crawford, A. (2024). Ibid.
McMahon, L. (2025) “Meta expands restrictions for teen users to Facebook and Messenger,” BBC News, 8 April. Available at: https://www.bbc.co.uk/news/articles/cvgqe6yv0yzo (Accessed: May 12, 2025).
Kaplan, J. (2025) More Speech and Fewer Mistakes, Meta Newsroom. Available at: https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/ (Accessed: April 29, 2025).
Department for Science, Innovation and Technology (2025) AI Opportunities Action Plan: government response, GOV.UK. Available at: https://www.gov.uk/government/publications/ai-opportunities-action-plan-government-response/ai-opportunities-action-plan-government-response (Accessed: April 29, 2025).
Makoroff, K. (2025) “Rachel Reeves summons regulators to Downing Street to push growth agenda,” The Guardian, 16 January. Available at: https://www.theguardian.com/business/2025/jan/16/rachel-reeves-summons-regulator-chiefs-to-downing-street-to-push-growth-agenda (Accessed: April 29, 2025).
Author: Charlotte Westwood, Cambridge Mind Technologies Volunteer
Charlotte is a First Class Law graduate from the University of Cambridge and currently works as a paralegal in a boutique firm that specialises in advising families and individuals. She is passionate about AI ethics and interested in how the growing prevalence of AI will impact both our legal systems and personal lives. Charlotte volunteers at Cambridge Mind Technologies because she is inspired by how the project is harnessing innovative technology and technical expertise to create a tool that will provide mental health support to young people as they navigate troubling times. Through contributions to the blog, she aims to shed light on potential legal reforms regarding AI and their impact on start-ups, showcasing how Cambridge Mind Technologies is uniquely positioned to thrive in this rapidly evolving space.