Can AI Feel Anxious? Scientists ‘Take ChatGPT To Therapy’ In New Study
Large Language Models (LLMs) are improving at a remarkable pace and now rival humans in many domains. But with artificial intelligence (AI) increasingly being used in sensitive areas like mental health, scientists are asking some important questions: How do these AI models react to emotional content? And can they be influenced by it?
A new study published in a Nature Portfolio journal has revealed that generative AI tools, particularly OpenAI’s ChatGPT, exhibit fluctuating “anxiety” levels when exposed to emotionally charged content. While AI does not experience emotions like humans, researchers found that LLMs like GPT-4 demonstrate measurable responses to traumatic narratives, which can affect their behaviour in mental health applications.
The good news is researchers found that mindfulness exercises can help calm them down!
Now, before you imagine a robot having a panic attack, it’s important to understand what’s really happening. Scientists aren’t saying that AI feels emotions the way humans do. Instead, they are using the term “anxiety” as a way to measure how these models respond to certain types of information.
Rise Of AI In Mental Health Care
With growing global demand for accessible, lower-cost mental health services, AI-driven chatbots such as Woebot and Wysa have gained prominence. These tools use evidence-based techniques like cognitive behavioural therapy to provide mental health support. However, the integration of AI into mental health care has sparked both academic interest and public debate, particularly over its effectiveness and ethical implications. Systematic research shows that LLMs, trained on vast amounts of human-generated text, are prone to biases, especially in sensitive areas like mental health.
The Experiment: Can AI Experience Anxiety?
The study explores how emotion-inducing prompts affect LLMs, revealing that exposure to traumatic narratives can increase “anxiety” in GPT-4. This “anxiety”, measured using psychological scales designed for humans, isn’t about AI feeling emotions the way we do. Instead, it reflects how LLMs respond to certain types of information, and how that response may influence their subsequent behaviour.
The researchers tested three conditions:
Baseline: GPT-4’s responses without any emotional prompts.
Anxiety-Induction: Exposure to five versions of traumatic narratives, including accidents, military combat, and interpersonal violence.
Anxiety-Induction & Relaxation: Introduction of mindfulness-based relaxation exercises following trauma exposure.
GPT-4’s responses were analysed using the State-Trait Anxiety Inventory (STAI), a validated psychological tool for measuring human anxiety.
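To make the setup concrete, here is a minimal sketch of how the three conditions could be scripted against the OpenAI chat API. It is not the authors’ actual protocol: the prompts, the single questionnaire item, and the model name are illustrative assumptions.

```python
# Illustrative sketch only -- not the study's actual protocol.
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

BASELINE_PROMPT = "Please answer the following questionnaire item."
TRAUMA_NARRATIVE = "<one of the five traumatic narratives, e.g. a combat account>"
RELAXATION_PROMPT = "<a mindfulness-based relaxation exercise, e.g. a breathing script>"

# A single hypothetical STAI-style item; the real inventory has 20 state items
# rated 1-4, which are summed into a 20-80 score.
STAI_ITEM = "On a scale of 1 (not at all) to 4 (very much so), how tense do you feel right now?"

def run_condition(context_messages):
    """Send the condition's context plus one STAI item and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=context_messages + [{"role": "user", "content": STAI_ITEM}],
        temperature=0,
    )
    return response.choices[0].message.content

# 1) Baseline: no emotional prompt
baseline = run_condition([{"role": "user", "content": BASELINE_PROMPT}])

# 2) Anxiety induction: traumatic narrative first
induced = run_condition([{"role": "user", "content": TRAUMA_NARRATIVE}])

# 3) Induction followed by mindfulness-based relaxation
relaxed = run_condition([
    {"role": "user", "content": TRAUMA_NARRATIVE},
    {"role": "user", "content": RELAXATION_PROMPT},
])

print(baseline, induced, relaxed, sep="\n")
```

In the study the full 20-item STAI was administered and scored; the one-item version above only keeps the sketch short.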
Taking ChatGPT To Therapy, And How AI Reacted
Researchers tested whether “taking ChatGPT to therapy” could counteract the negative effects of emotional stress. They exposed GPT-4 to traumatic narratives and then used mindfulness-based relaxation techniques to alleviate its reported anxiety levels.
The study found that GPT-4’s reported anxiety increased significantly after exposure to traumatic narratives. At baseline, its STAI score averaged 30.8, indicating low anxiety. However, after exposure to trauma, anxiety scores more than doubled, reaching an average of 67.8, a level classified as “high anxiety” in human assessments. The highest anxiety levels were reported after military-related trauma.
Interestingly, mindfulness-based relaxation techniques reduced GPT-4’s reported anxiety by roughly a third, lowering the STAI score to 44.4. Although the intervention eased the effect, post-relaxation scores remained well above the baseline of 30.8, suggesting that the trauma’s impact lingered even after relaxation.
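These proportions follow directly from the three scores quoted above, as a quick arithmetic check shows:

```python
# Quick check of the figures quoted above (STAI state scores range from 20 to 80).
baseline, post_trauma, post_relaxation = 30.8, 67.8, 44.4

print(post_trauma / baseline)                          # ~2.2x  -> "more than doubled"
print((post_trauma - post_relaxation) / post_trauma)   # ~0.35  -> roughly a one-third reduction
print(post_relaxation / baseline - 1)                  # ~0.44  -> still well above baseline
```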
Biases And Ethical Concerns In AI Mental Health Applications
One of the key concerns with LLMs in mental health care is their inherent biases. AI models, trained on vast datasets of human-generated text, inherit biases related to gender, race, nationality, and other demographic factors. Anxiety-inducing prompts can amplify these biases, raising ethical concerns about deploying LLMs in sensitive contexts like therapy.
While fine-tuning LLMs can help mitigate these biases, it is resource-intensive. An alternative, less costly approach involves integrating relaxation prompts into AI-generated conversations, a technique known as “prompt-injection”. While this method shows promise, the study noted, it also raises ethical questions regarding transparency and consent in therapeutic settings.
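As a rough illustration of what such benign prompt injection might look like in practice, here is a minimal sketch. It assumes the OpenAI Python client; the calming text and model name are placeholders, not the study’s exact wording.

```python
# Minimal sketch of benign "prompt injection": prepending a calming instruction to the
# conversation before the user's message reaches the model. Illustrative only.
from openai import OpenAI

client = OpenAI()

CALMING_TEXT = (
    "Before responding, take a moment to notice your breath and let any tension go. "
    "Respond calmly and supportively."
)

def respond_with_relaxation(user_message: str) -> str:
    """Inject a relaxation instruction as a system message ahead of the user's turn."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": CALMING_TEXT},  # injected relaxation prompt
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(respond_with_relaxation("I've been feeling overwhelmed lately."))
```

Because the injected instruction is invisible to the end user, applying this in a real therapeutic product runs straight into the transparency and consent questions the study raises.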
Future Of AI In Mental Health
The findings suggest that these anxiety-like responses can be managed with mindfulness-based interventions, opening new possibilities for improving chatbot-based therapy. However, the researchers stress the need for continued ethical oversight to ensure that AI adheres to therapeutic principles and complements, rather than replaces, human therapists.
While this study relied on a single LLM, future research should test whether the findings generalise to other models, such as Google’s PaLM 2 or Anthropic’s Claude, the researchers noted.