Demystifying AI: Harnessing its Potential and Ensuring Responsible Use

02/05/24

Following our previous discussion on the fundamentals of AI, it is important to consider how we can harness the full potential of AI responsibly. Responsible AI enables the design, development, and deployment of ethical AI systems in our society¹.

As AI continues to evolve, it raises several ethical concerns. Short-term issues include fairness, transparency, accountability, privacy and data protection. Looking ahead, there are also worries about AI-induced unemployment and the equitable distribution of AI-generated economic benefits². 

Despite these valid concerns, many people are not well-informed about the current regulations and evaluations governing AI, leading to myths such as the fear of AI robots rebelling against humans³.

In this blog, we will first explore the purpose and potential of AI, including its content generation capabilities. Following that, we will discuss how AI is regulated and how its performance is rigorously evaluated and monitored throughout its development process.

The Purpose and Potential of AI

AI is designed to enhance human efficiency by automating complex tasks, analysing vast amounts of data, and making decisions faster than humans can. It also expands and enriches our world with insights and innovations that were previously unimaginable, serving as a new avenue for self-expression. Despite the world-dominating AI scenarios popularised by science fiction and film, such outcomes remain far-fetched so long as AI is supervised and regulated. Maintaining that meticulous oversight, however, remains crucial.

Who Checks on AI? Self-Monitoring and Regulations

A common myth is that AI operates entirely autonomously, without any form of self-monitoring. In fact, many AI systems can detect when real-world data begins to diverge from the data they were trained on. Just as car designs have evolved over the past century, the patterns an algorithm learned from can change over time, and drift-detection mechanisms allow the system to identify such shifts and adapt. This self-monitoring helps AI remain relevant and accurate in our rapidly changing world.
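To make this concrete, here is a deliberately simple, hypothetical sketch of drift detection (real monitoring systems use more sophisticated statistical tests): it flags drift when incoming data drifts too far from the data the model was built on.

```python
import statistics

def detect_drift(reference, live, threshold=2.0):
    """Flag drift when the live data's mean moves more than
    `threshold` reference standard deviations away from the
    reference mean. A toy stand-in for production drift monitors."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    shift = abs(statistics.mean(live) - ref_mean)
    return shift > threshold * ref_std

# Data seen at training time vs. data arriving later
reference = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.1]
stable = [1.0, 1.1, 0.9, 1.2, 1.0]    # world unchanged
shifted = [3.0, 3.2, 2.9, 3.1, 3.0]   # world has changed

print(detect_drift(reference, stable))   # False: no drift
print(detect_drift(reference, shifted))  # True: drift detected
```

When drift is detected, a real system might alert its developers or trigger retraining on fresher data.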

However, the role of responsible and well-informed regulators remains crucial, and nations have been taking initiatives to address the technology. On 9 December 2023, the EU Parliament and Council provisionally agreed on the AI Act, pending formal adoption into EU law. The Act aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI⁴.

Open communication is important. It is imperative that dialogue is maintained amongst all stakeholders to empower them with responsibility. This encompasses everyone from the recruiters who gather training data from participants to the software engineers who meticulously review lines of code.

Evaluating AI Performance

Contrary to popular belief, AI does not operate on autopilot once it is programmed. It must pass through rigorous, multi-stage testing to ensure it functions as intended. This process starts with testing the software used for curating training data and developing models, often by writing the code in different ways to verify that it yields consistent results. Such practices ensure the foundational integrity of AI systems, preparing them for further examination.
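The "write the code in different ways" idea can be illustrated with a tiny differential test. This is a hypothetical example, not Cambridge Mind Technologies' actual pipeline: two independent implementations of the same calculation are run on many random inputs, and any disagreement signals a bug.

```python
import random

def mean_loop(values):
    """Compute the mean with an explicit loop."""
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

def mean_builtin(values):
    """Compute the mean using Python's built-in sum()."""
    return sum(values) / len(values)

# Differential test: the two implementations must always agree
for _ in range(100):
    data = [random.uniform(-10, 10) for _ in range(random.randint(1, 50))]
    assert abs(mean_loop(data) - mean_builtin(data)) < 1e-9

print("implementations agree")
```

The same principle scales up: if two independently written versions of a data-curation or model-training step produce different outputs, at least one of them is wrong.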

In addition, the taxonomies of labels defining correct or incorrect behaviour undergo thorough testing to confirm they accurately represent the data used to train models. This step is crucial for maintaining the quality and relevance of AI decisions. Furthermore, the experts who assign these labels are regularly assessed and qualified, ensuring that their judgments are reliable and consistent.
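One standard way to check that expert labellers are consistent with one another is Cohen's kappa, which measures agreement between two annotators after correcting for agreement expected by chance. Below is a small sketch with hypothetical labels; the names and data are illustrative only.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators,
    corrected for the agreement expected by chance alone."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n)
        for c in categories
    )
    return (observed - expected) / (1 - expected)

# Two experts labelling the same six (hypothetical) transcripts
expert_1 = ["calm", "anxious", "calm", "anxious", "calm", "calm"]
expert_2 = ["calm", "anxious", "calm", "calm", "calm", "calm"]

print(round(cohens_kappa(expert_1, expert_2), 2))  # → 0.57
```

A kappa near 1 indicates strong agreement; values much lower suggest the labelling guidelines, or the labellers' training, need revisiting.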

Finally, the performance of AI models and algorithms is compared against human capabilities, focusing on their precision, consistency, and speed in making predictions. The effects of an AI model in its intended application are also assessed. In mental health support settings, for example, established metrics exist for evaluating how someone is feeling before and after therapy. These same metrics will be applied to evaluate Cambridge Mind Technologies' services, helping to ensure accurate context interpretation and the generation of suitable responses.
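A before-and-after evaluation can be as simple as comparing questionnaire scores across participants. The sketch below uses made-up numbers on a hypothetical symptom scale where lower is better; it is an illustration of the idea, not an actual evaluation of any service.

```python
def mean_reduction(pre, post):
    """Average reduction in a symptom score across participants,
    measured before and after a support session (lower score = better)."""
    return sum(a - b for a, b in zip(pre, post)) / len(pre)

# Hypothetical questionnaire scores before and after a session
pre_scores = [18, 21, 15, 20]
post_scores = [12, 16, 14, 13]

print(mean_reduction(pre_scores, post_scores))  # → 4.75
```

In practice such comparisons use validated clinical instruments and proper statistical tests, but the underlying logic is the same: measure, intervene, measure again, and compare.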

Conversational AI

The primary service offered by Cambridge Mind Technologies is conversational AI, which is a type of AI that can simulate human conversation⁵. To prevent inappropriate responses, the uncertainty and risk tied to predictions are meticulously considered within mental health settings. Cambridge Mind Technologies is committed to minimising inappropriate responses by closely controlling what the AI says. This is achieved through training with data from genuine therapy and psychological mentoring sessions.

What are your thoughts on this? Please feel free to email hello@cambridgemindtechnologies.com with any opinions you have on this topic; we'd love to hear from you!

References

  1. Responsible AI | AI Ethics & Governance. (n.d.). Accenture. Retrieved March 1, 2024, from https://www.accenture.com/gb-en/services/applied-intelligence/ai-ethics-governance  

  2. Stahl, B.C. Embedding responsibility in intelligent systems: from AI ethics to responsible AI ecosystems. Sci Rep 13, 7586 (2023). https://doi.org/10.1038/s41598-023-34622-w

  3. 10 most common myths about AI. (2022, February 10). Spiceworks. Retrieved March 1, 2024, from https://www.spiceworks.com/tech/artificial-intelligence/articles/common-myths-about-ai/

  4. Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI | News | European Parliament. (2023, December 9). European Parliament. Retrieved March 1, 2024, from https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai

  5. What is conversational AI: examples and benefits. (n.d.). Google Cloud. Retrieved March 8, 2024, from https://cloud.google.com/conversational-ai

Author: Julie Wang, Cambridge Mind Technologies Volunteer

Julie is a second-year undergraduate at the University of Cambridge studying Psychological and Behavioural Sciences. She volunteered as an assistant and blog writer in 2024, reading papers about AI and mental health, engaging in outreach activities and writing blogs. She is curious about the ways in which AI can enhance the mental health services provided to humans.
