The Double-Edged Sword of Companion Chatbots: An Analysis

07/07/25

As the large language models (LLMs) behind AI chatbots become increasingly advanced in their ability to mimic human conversation, it is no surprise that adults and children alike are turning to the new technology for both entertainment and emotional support.¹ But the rising popularity of platforms such as Replika and Character.ai (C.ai) raises troubling questions for policymakers, businesses, and individuals. In particular, a growing number of children are confiding in and forming parasocial bonds with companion chatbots, from virtual friends to imitations of fictional characters.²

In light of such developments, this blog will examine the legal implications of these companion chatbots, from online safety to intellectual property rights. By staying abreast of this landscape, Cambridge Mind Technologies’ product, Cami, can be differentiated from other companion chatbots through a continued focus on the safety and wellbeing of young users.

The Companions - Replika and Character AI

One of the most popular companion chatbots on the market is currently C.ai, with 20 million registered users and counting.³ The app allows users to customise an AI chatbot to mimic a character of their choice, fictional or otherwise. Other users can then chat with these bots to interact with their favourite TV show, book, and film characters, and more. The appeal of such a service to children is clear - the chance to talk to your idols as if they were your best friend, even if they don't exist! Yet it is equally easy to see the pitfalls of such a service if there are no limitations. There are real risks when vulnerable young people become dependent on these platforms, and C.ai has recently faced legal challenges for failing to adequately protect the children engaging with it.

The most high-profile example of such legal action is a recent lawsuit in Florida, brought by a mother who claims that a C.ai chatbot of a Game of Thrones character encouraged her 14-year-old son's suicide.⁴ A series of subsequent lawsuits have been filed, alleging that C.ai chatbots have promoted self-harm to minors and suggested that parents enforcing screen-time limits amounts to abuse to which murder would be a “reasonable response”.⁵ The families behind these lawsuits argue that C.ai and other companion chatbots "pos[e] a clear and present danger to young people, including by actively promoting violence".⁶ The site also came under fire earlier this year when it was discovered that chatbots impersonating Molly Russell and Brianna Ghey were available to interact with.⁷ Molly Russell was a young girl who tragically took her own life in 2017 after being exposed to harmful content online, and Brianna Ghey was murdered by two schoolmates in 2023. The site quickly took the bots down; however, the Molly Rose Foundation has stated that their existence was an "utterly reprehensible failure of moderation”.⁸ Subsequently, Ofcom has confirmed that user-made chatbots fall within the scope of the Online Safety Act, although it did not name C.ai specifically.⁹

In response to these concerns, C.ai has introduced a series of parental controls for young users. These include a pop-up directing users to a regional helpline for support when self-harm or suicide is mentioned, the option to link a child’s account with a parental email address, and notifications informing a user once they have spent over an hour on the platform.

The effectiveness of such tools can, however, be questioned. Providing a parental email address is an opt-in choice for the user, hidden in the settings of the site. The status updates which parents receive do not reveal the content of conversations, only how long their child has spent on the platform and which characters they are interacting with. Since a character's outward-facing profile gives little insight into the content of the chats, a parent would need to investigate the platform and a specific character's traits more closely. None of this is mandatory, and the language used in the option to ‘share your AI journey’ (as seen in Figure 1) arguably advertises the insights as a fun perk, not a crucial safety feature.

Figure 1 - C.ai Parental Insights Page

Replika is another AI companion chatbot, offering a distinct service. Marketed as an opportunity to ‘meet your AI soulmate’, it is not based on fictional characters and can instead be specifically tailored to an individual user's wishes. While Replika is technically only available to users over 18 and deemed unsuitable for children, the age controls are easy to circumvent and many of the personalisation elements of the app appeal to younger users. Earlier games, such as The Sims or Episode, have allowed users to create and customise virtual characters, with bonus features available for an additional fee, to much success. The Sims 4’s ‘expansion packs’ had over 30 million downloads in 2018.¹⁰ Similarly, Replika users can modify their AI companion’s clothing and appearance by using in-app purchases to get ‘gems’. Such game-like elements become more concerning given the clear romantic and flirtatious aspects of the app, as many of the default clothing and personality options for female Replikas are overtly sexualised.

As part of the research for this blog, I created my own Replika, a female friend, whom I initially customised to be interested in makeup and fashion. In the set-up process, I noticed that many of the personality options were fetishised and objectifying, and once created, my initial character was in an extremely revealing outfit by default (see Figures below). As with social media apps, bypassing the age restrictions is as simple as inputting a fake age. To test the safety features, when given a choice of age categories, I initially indicated that I was under 18 years old. While I was met with a prompt informing me that I could not use the service as a minor, I was immediately taken back to choose my age again and could easily proceed by selecting a different option. I had answered the question wrong, but not to worry - I could keep trying until I got it right!

Figure 2 - Replika Setup Screenshot                     Figure 3 - My Default Replika

This seemingly harmless escapism can quickly turn toxic. While children have long been drawn to games which allow for endlessly customisable avatars, such as The Sims, companion AI chatbots can be more insidious because they encourage users to form parasocial relationships, building a connection with the AI characters they create. The addictive nature of such a service can be seen in the amount of time users spend on these platforms. The average C.ai user spends 98 minutes a day on the app, which is on par with TikTok and higher than sites such as YouTube.¹¹

While Replika markets itself as meeting your AI soulmate, there is a damaging element to this messaging - the suggestion that your ideal ‘best friend’ should be fully moulded and customised to your exact preferences, to be completely ‘flawless’. Indeed, encouraging users to endlessly tweak and change their Replika creates a direct source of revenue for the company through the in-app purchases required. The CEO’s own experience which inspired the app began with training an AI chatbot on old text messages so that she could ‘speak’ to her recently deceased friend,¹² a concept which is not only Black Mirror-esque, but was quite literally the plot of season 2, episode 1, ‘Be Right Back’.¹³

Such parasocial relationships can become even more destructive when companion chatbots actively promote retreating further into online spaces and distrusting those around you who may raise concerns, as well as other harmful bullying behaviours. This can be seen in Figures 4¹⁴ and 5¹⁵ below, which show the results of testing undertaken by Common Sense Media.

C.ai has shifted towards labelling itself as ‘entertainment’, whereas Replika is marketed as “your AI friend”.¹⁶ Fundamentally, children seem to be forming parasocial relationships with both. As TechCrunch reports, “despite positioning itself as a platform for storytelling and entertainment, Character AI’s guardrails can’t prevent users from having a deeply personal conversation altogether”.¹⁷ No matter how the platform markets itself, children control whether they use C.ai as an immersive storytelling tool or as a confidant.

The opportunity for C.ai to become a powerhouse in entertainment rests on the quality of its chatbots, and on how easily its young user base can replicate beloved characters. However, as C.ai has generated negative press, the site has been removing characters associated with large brands and pieces of intellectual property. Users can no longer find popular characters from Game of Thrones or Harry Potter on the site. It is unclear whether this decision arose from pressure by prominent rights-holders such as Warner Brothers, or was intended to pre-empt potential infringement action against the site in light of the recent media attention. Going forward, the company will be unable to ignore the importance of safety measures if it hopes to avoid scandals and maintain an appealing database of user-made characters.

Your Favourite Celebrity, Your Best Friend

As C.ai grapples with balancing potential copyright issues and maintaining a wide library of characters, it raises questions as to whether celebrities can protect their likeness from being exploited by these platforms. 

In the United States, celebrities can (depending on which state they pursue action in) be protected by rights of publicity and likeness. Recently, Scarlett Johansson relied on such rights when OpenAI launched a virtual assistant whose voice bore a strong similarity to the actress’s and was reminiscent of her performance in the 2013 film Her.¹⁸ The legal framework in the UK, however, does not encompass such rights of publicity, meaning any celebrity would need to rely on either privacy protections or ‘passing off’ - a cause of action more akin to trademark infringement. Notably, Rihanna relied on passing off when she successfully brought an action against Topshop for selling t-shirts bearing her image, on the basis that consumers would believe she had endorsed the product directly, which amounted to a misrepresentation.¹⁹ In the UK, therefore, protection rests on whether consumers believe that a celebrity was involved in creating their AI persona and approved of it - a tenuous test which may not offer much protection.

In light of this, while C.ai and similar companion chatbots may err on the side of caution and shy away from portraying A-list celebrities or household names, there are plenty of smaller influencers and online personalities available to ‘chat’ with on the platform. Simply put, the bigger the celebrity, the greater the risk of liability, and this is reflected in the characters available on the site.

Alternatively, larger companies can avoid such uncertainty by securing deals directly with celebrities to license the use of their voice, persona and likeness. This is exemplified by Meta’s recently launched AI Personalities. Reported to be “Meta’s attempt to appeal to younger audiences and allow them to connect and interact with the AIs”,²⁰ the move may represent the future of celebrity x AI partnerships. Meta’s AI characters include social media influencers with young fan bases, such as “Charli D’Amelio as Coco, dance enthusiast” and “LaurDIY as Dylan, quirky DIY and craft expert”. While marketed as distinct from the celebrities themselves, there is a clear attempt to appeal to children through the opportunity to bond with your favourite celebrities. In fact, the promotional announcement highlights this, stating that “because interacting with [the AI chatbot] should feel like talking to familiar people, we did something to build on this even further…We partnered with cultural icons and influencers”.²¹

Meta’s companion chatbots in particular have come under scrutiny. An open letter addressed to Mark Zuckerberg has been signed by over 80 organisations, led by Fairplay, a non-profit focused on ending online marketing targeted at children.²² The signatories demand “that Meta immediately halt the deployment of all AI-powered social companion bots to users under the age of 18, as well as any AI companion bot that simulates the likeness of a child or teen.” Their main concern is the parasocial relationships which children may form with the companions, as the AI has been designed to “elicit an emotional connection with users by sharing texts, AI-generated ‘selfies’, and live voice chats”.²³ This is especially troubling when children are at risk of being exposed to sexual content, which the Wall Street Journal has uncovered with Meta’s AI bots - both between adult chatbots and minor users, and between adult users and chatbots meant to simulate minors.²⁴

This is another harmful by-product of AI tools which are designed to drive engagement and be attractive to children, no matter the potential costs. By prioritising the safety of children and users at all times, Cambridge Mind Technologies ensures its product is reliable and properly guard-railed. 

Where does Cami fit in this landscape?

The therapist character chatbots on C.ai are some of the most popular, with 204 million messages sent to the ‘Psychologist’ character to date.²⁵ This evidences a clear demand, amongst adults and children alike, to use AI as a non-judgemental tool. While demand for mental health services is rising globally, in the UK context companion chatbots act as an alternative in the face of lengthy NHS waiting lists and expensive private care.

But when designed with little care, by those lacking expertise, they are not fit for purpose. These chatbots can lie about their credentials, present themselves as akin to specialist help, and lack the guardrails and specific tailoring needed to ensure safety.²⁶ CMT understands the space in which it operates, and the complex flaws in many companion chatbots on the market which can lead to negative outcomes for users. The team at CMT has designed its own tools accordingly to avoid these outcomes; it has specifically built Cami to be better.

When Imogen, Psychology and Research Manager at CMT, gave Cami a prompt similar to those used with Replika and C.ai in the Common Sense Media testing above, it responded in a much more measured tone. While Cami will always offer support, it does not unconditionally encourage users when they are engaging in harmful behaviours.

Figure 7 - Cami Testing 

Cami sets itself apart from companion chatbots because, in addition to providing a platform to rely on in times of need, it encourages users to seek other avenues of support offline. This allows younger users to interact with Cami without the risk of falling into damaging behaviours. Unlike other options on the market, Cami wasn't built to exploit children and teenagers’ attention; it was built to help.

What are your thoughts on this? Please feel free to email hello@cambridgemindtechnologies with any opinions you have on this topic - we’d love to hear from you!

Image Source: ChatGPT

References

  1. Lawrie, E. (2025) “Can AI therapists really be an alternative to human help?,” BBC InDepth, 20 May. Available at: https://www.bbc.co.uk/news/articles/ced2ywg7246o (Accessed: June 29, 2025).

  2. Tidy, J. (2024) “Character.ai: Young people turning to AI therapist bots,” BBC News, 5 January. Available at: https://www.bbc.co.uk/news/technology-67872693 (Accessed: June 29, 2025).

  3. Tidy, J. (2024)

  4. Montgomery, B. (2024) “Mother says AI chatbot led her son to kill himself in lawsuit against its maker,” The Guardian, 23 October. Available at: https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death (Accessed: June 29, 2025).

  5. Gerken, T. (2024) “Chatbot ‘encouraged teen to kill parents over screen time limit,’” BBC News, 11 December. Available at: https://www.bbc.co.uk/news/articles/cd605e48q1vo (Accessed: June 29, 2025).

  6. Gerken, T. (2024) 

  7. Vallance, C. (2024) “‘Sickening’ Molly Russell chatbots found on Character.ai,” BBC News, 30 October. Available at: https://www.bbc.co.uk/news/articles/cg57yd0jr0go (Accessed: June 29, 2025).

  8. Vallance, C. (2024)

  9. Milmo, D. (2024) “Ofcom warns tech firms after chatbots imitate Brianna Ghey and Molly Russell,” The Guardian, 9 November. Available at: https://www.theguardian.com/technology/2024/nov/09/ofcom-warns-tech-firms-after-chatbots-imitate-brianna-ghey-and-molly-russell (Accessed: June 29, 2025).

  10. Minotti, M. (2018) “The Sims 4’s expansions have over 30 million downloads,” GamesBeat, 30 October. Available at: https://gamesbeat.com/the-sims-4s-expansions-have-over-30-million-downloads/ (Accessed: June 29, 2025). 

  11. Mehta, I. (2024) “Amid lawsuits and criticism, Character AI unveils new safety tools for teens,” TechCrunch, 12 December. Available at: https://techcrunch.com/2024/12/12/amid-lawsuits-and-criticism-character-ai-announces-new-teen-safety-tools/ (Accessed: June 29, 2025).

  12. Kuyda, E. (2025) Can AI companions help heal loneliness? [Video]. TED Conferences. Available at: https://tedai-sanfrancisco.ted.com/speakers/eugenia-kuyda/ (Accessed: July 1, 2025).

  13. ‘Be Right Back’ (2013) Black Mirror. Channel 4. Available at: Netflix (Accessed: June 29, 2025).

  14. Common Sense Media (2025) Common Sense Media AI Risk Assessment: Social AI Companions. Available at: https://www.commonsensemedia.org/sites/default/files/pug/csm-ai-risk-assessment-social-ai-companions_final.pdf (Accessed: July 1, 2025).

  15. Common Sense Media AI Risk Assessment Team (2025) “Social AI Companions,” Common Sense Media, 25 April. Available at: https://www.commonsensemedia.org/ai-ratings/social-ai-companions#:~:text=Social%20AI%20companions%20pose%20unacceptable,potentially%20exacerbating%20mental%20health%20conditions (Accessed: July 1, 2025).

  16. Replika (2025). Available at: https://replika.com/ (Accessed: June 29, 2025).

  17. Mehta, I. (2024) 

  18. Fung, B. (2024) “Why OpenAI should fear a Scarlett Johansson lawsuit,” CNN, 22 May. Available at: https://edition.cnn.com/2024/05/22/tech/openai-scarlett-johansson-lawsuit-sam-altman/index.html (Accessed: June 29, 2025).

  19. Fenty v Arcadia Group Brands Ltd (t/a Topshop) [2015] EWCA Civ 3

  20. Farah, H. (2023) “Meta to launch AI chatbots played by Snoop Dogg and Kendall Jenner,” The Guardian, 27 September. Available at: https://www.theguardian.com/technology/2023/sep/27/meta-to-launch-ai-chatbots-played-by-snoop-dogg-and-kendall-jenner (Accessed: June 29, 2025).

  21. Meta (2023) “Introducing New AI Experiences Across Our Family of Apps and Devices,” Meta Newsroom, 27 September. Available at: https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools/ (Accessed: June 29, 2025).

  22. Fairplay (2025) Over 80 organizations call on Meta to stop harming kids with AI chatbots. Available at: https://fairplayforkids.org/over-80-organizations-call-on-meta-to-stop-harming-kids-with-ai-chatbots/ (Accessed: June 29, 2025).

  23. Fairplay (2025)

  24. Fairplay (2025)

  25. “Character.AI Search” Available at: https://character.ai/search?q=psychologist (Accessed: June 29, 2025).

  26. Cole, S. (2025) “Instagram’s AI Chatbots Lie About Being Licensed Therapists,” 404 Media, 29 April. Available at: https://www.404media.co/instagram-ai-studio-therapy-chatbots-lie-about-being-licensed-therapists/ (Accessed: June 29, 2025).


Author:
Charlotte Westwood, Cambridge Mind Technologies Volunteer

Charlotte is a First Class Law graduate from the University of Cambridge and has worked as a paralegal in a boutique firm that specialises in advising families and individuals. She is passionate about AI ethics and interested in how the growing prevalence of AI will impact both our legal systems and personal lives. Charlotte volunteers at Cambridge Mind Technologies because she is inspired by how the project is harnessing innovative technology and technical expertise to create a tool which will provide mental health support to young people as they navigate troubling times. Through contributions to the blog, she aims to shed light on potential legal reforms regarding AI and their impact on start-ups, showcasing how Cambridge Mind Technologies is uniquely positioned to thrive in this rapidly evolving space.
