De-Villainising AI: Charting a Responsible Path in Mental Health Care
20/05/24
The concept of Artificial Intelligence (AI) often finds itself at the centre of controversy and scepticism. Surveys from 17 countries show that 61% of people are hesitant or refuse to trust AI, with 73% highlighting potential risks such as cybersecurity threats, misuse of AI technology, and job displacement¹.
This apprehension is more acute in areas like mental health care, where personal connection and trust are crucial. One study found that only 14.1% of participants (aged 18 and over, M = 31.76 years) trusted AI-based psychotherapy systems to protect their data securely². Yet 55% of the same sample preferred AI-based psychotherapy, appreciating the freedom to talk openly about sensitive topics, the around-the-clock availability, and the option of remote interaction². This juxtaposition highlights the need to blend AI into mental health care carefully, balancing the advantages of AI-enhanced support with a firm commitment to addressing its risks.
Cambridge Mind Technologies is committed to ethical practices and strong data privacy in mental health care. This blog will explore our dedication to responsible AI and user privacy protection, showcasing our approach to responsibly enhancing mental health support.
Using AI in the Field of Mental Health
In the realm of mental health, AI plays a crucial role by giving us tools to test and refine better ways of offering support under scientifically controlled conditions.
AI can offer support 24/7, helping to address the shortage of human therapists available to provide timely care. Furthermore, the ability to type rather than speak aloud to a therapist, whether in person or via video conferencing, offers greater privacy and helps mitigate fears of judgement or stigma. Additionally, Cam AI’s service is free for young people, making therapeutic support accessible to a broader segment of the younger population.
However, the primary aim is not to replace therapists with AI, but to foster a complementary relationship that leverages the strengths of both. By integrating AI, we make the therapeutic alliance and approach widely available to the young people who will benefit from being helped to navigate in-the-moment issues, while still upholding the professional and empathetic standards of human therapists.
This approach creates a supplemental service that enriches traditional therapy, making high-quality mental health care more adaptable and widely attainable.
What are your thoughts on this? Please feel free to email hello@cambridgemindtechnologies.com with any opinions you have on this topic; we’d love to hear from you!
Image Source: ChatGPT (DALL-E)
References
Trust in Artificial Intelligence | Global Insights 2023. (2023, February 22). KPMG. Retrieved March 8, 2024, from https://kpmg.com/au/en/home/insights/2023/02/trust-in-ai-global-insights-2023.html
Aktan, M. E., Turhan, Z., & Dolu, İ. (2022). Attitudes and perspectives towards the preferences for artificial intelligence in psychotherapy. Computers in Human Behavior, 133, 107273. https://doi.org/10.1016/j.chb.2022.107273
Author: Julie Wang, Cambridge Mind Technologies Volunteer
Julie is a second-year undergraduate student at Cambridge University studying Psychological and Behavioural Sciences. She volunteered as an assistant and blog writer in 2024, reading papers about AI and mental health, engaging in outreach activities, and writing blog posts. She is curious about the ways in which AI can enhance the mental health services provided to humans.