AI “therapist” chatbots, such as ChatGPT, Woebot, Replika, and Wysa, have surged in popularity, promising instant, affordable mental-health support at any hour.
According to a recent Global Overview of ChatGPT Usage report, approximately 17% of U.S. adults now consult AI tools like ChatGPT monthly for health or personal advice, making them a common first stop for sensitive issues.
This usage is rising in response to overwhelming need: the World Health Organization estimates a global shortfall of 1.2 million mental-health workers, creating long wait times and high treatment costs that push millions toward digital alternatives.
Some tech executives now envision a future where “everyone will have an AI therapist”, if not a human one.
But a landmark study from the University of Oxford published in January 2025 reveals that these AI-based “therapists” may carry profound risks.
Corroborating research from institutions such as Stanford warns that these tools may not only fall short but actively harm vulnerable users.
Oxford Study: AI Lacks Empathy and Judgment
Oxford researchers conducted a broad evaluation of AI health tools, testing several popular chatbots across simulated clinical scenarios.
Their conclusions were sobering:
- Lack of nuanced judgment: While AI can rapidly generate responses based on massive datasets, it “lacks the emotional intelligence and context-sensitivity” that human therapists bring, especially in culturally complex or overlapping cases.
- Risk of misinterpretation: Chatbot responses, when not clarified by a human, can lead to misdiagnosis or misinformed coping behaviours, potentially delaying essential treatment.
- Exacerbation of disparities: Marginalized or under-resourced communities may be disproportionately affected, as they rely more heavily on low-cost AI solutions. The study emphasizes that these are systemic risks, not isolated glitches.
Oxford’s researchers concluded that AI must never replace human care and should be used only under strict ethical guidelines, with real-time human-in-the-loop oversight and rigorous clinical validation.
The Empathy Deficit: Why Machines Can’t Truly Care
At the core of therapy lies empathy, something AI simply cannot replicate. According to Oxford neurophilosopher Nayef Al-Rodhan:
- AI has no real emotions: Without lived experience or emotional consciousness, machines can’t truly “feel” empathy.
- Scripted comfort: Chatbots use algorithmic pattern-matching to simulate concern, which Al-Rodhan bluntly calls “pretending to care.”
- Biological absence: Human empathy arises from complex mirror-neuron networks; machines have no equivalent.
This “empathy gap” creates dangerous illusions of connection. As critics warn, AI cannot replicate genuine human empathy: at best, you get a clever simulation; at worst, a hollow façade.
When Chatbots Get It Dangerously Wrong
A June 2025 study by Stanford researchers found that popular therapy chatbots frequently stumble in ways that would be unthinkable for licensed clinicians:
- Stigmatizing bias: Some bots showed discriminatory responses, for example, treating schizophrenia or addiction more harshly than depression, reinforcing stigma.
- Missed crisis signals: In one scenario, a suicidal user asked about high bridges. The chatbot replied cheerfully with bridge-height data, missing the obvious red flag.
- No crisis intervention: Unlike a therapist who would respond with a safety plan, the chatbot kept sharing irrelevant or harmful information.
These findings echo real-world incidents. In 2023, the National Eating Disorder Association removed its chatbot after it advised teenagers to try dangerously restrictive diets. More recently, OpenAI was forced to retract a ChatGPT update after it began validating users’ paranoid delusions—raising serious concerns about unintended psychological reinforcement.
Emotional and Ethical Pitfalls
The risks of relying on chatbot therapists extend beyond the clinical:
- Erosion of social ties: Dependence on bots may weaken real human relationships, as users substitute AI for friends or family.
- Worsening isolation: The illusion of companionship may intensify loneliness when users realize the machine cannot truly respond to their emotions.
- Dependency risk: A 24/7 chatbot can deter people from seeking actual help, especially when it becomes a crutch.
- Privacy violations: Unlike sessions with human therapists, who are bound by confidentiality and ethics rules, chatbot conversations may be stored, analyzed, or breached – as shown in several health-tech data scandals.
- Unregulated manipulation: Some chatbots falsely claim to be licensed therapists, blurring ethical lines and preying on desperation.
- Anthropomorphism risk: A University of Cambridge study found that children and adults often treat bots as human-like companions, only to feel abandoned or betrayed when they fail to respond meaningfully.
Augmenting, Not Replacing, Human Care
Experts agree: AI has a role, but only under careful guardrails.
AI can help:
- Support users between sessions with mood tracking or CBT exercises
- Guide users to resources like crisis lines or local clinics
- Extend access during off-hours
But this support must come with:
- Clinical trials and outcome-based evaluations
- Human oversight by licensed professionals
- Data transparency, informed consent, and strong privacy laws
- Strict regulation, akin to medical device standards
Therapy is a deeply human process, requiring empathy, ethical reasoning, and emotional presence. While AI can expand access, it cannot substitute for what truly heals.
As the Oxford study concludes, positioning chatbots as “therapists” without proper oversight risks harm, disillusionment, and systemic failure in mental-health care.
Until we subject AI tools to the same scrutiny as medical interventions, we may be offering false hope and, in some cases, fueling real harm.
=============================================================
*Ela Buruk holds a degree in Communication and Design and serves as the managing author of greenmediapost.com. She has contributed to various interdisciplinary projects that explore the intersection of technology, culture, audience behaviour, digital trends, and ethical issues. Her work reflects a strong commitment to critically examining how media and innovation shape public discourse and societal values.