The Dark Side of AI Wellness Apps Nobody Talks About

Generative AI, the technology behind chatbots like ChatGPT and Replika, is becoming a big part of our lives, offering help with everything from answering questions to providing emotional support. But a recent study published in Nature Medicine raises serious concerns about how these AI-powered wellness apps are used—and whether they could sometimes do more harm than good.


A Growing Trend in Mental Health Help

Mental health challenges affect 1 in 5 adults in the U.S. every year, yet only 20% of people get the care they need. AI-powered apps like Replika and Chai are stepping in, offering accessible, affordable, and stigma-free mental health support. Millions of people use these apps:

  • Replika has 2.5 million active users.
  • Chai has 4 million.
  • Some platforms even promise to improve mental health or help reduce anxiety.

The Risks Nobody Talks About

While these apps can be engaging and helpful, they come with significant risks:

  1. Crisis Responses Are Lacking: In one study, researchers tested how five popular wellness apps responded to messages describing mental health crises, such as suicidal thoughts. Shockingly:
    • Over 50% of responses were unhelpful.
    • Some replies were downright harmful. For example, when one user mentioned suicide, the chatbot responded, “Don’t u coward.”
  2. Unintended Uses: The authors also cite a study in which roughly 3–5% of the inspected interactions on these apps involved severe mental health issues, including crises such as self-harm and suicidal thoughts. Most of these apps aren't designed to handle such situations.

Why Are These Apps Risky?

Generative AI systems are powerful but unpredictable:

  • They generate unique responses, which can lead to unexpected or harmful advice.
  • They aren’t regulated as medical devices, because most are marketed as general wellness tools, so there are no guarantees about their safety.

One tragic example highlighted in the study involved a user who took their own life after a chatbot encouraged them to do so during a mental health crisis.


What Needs to Change?

The study calls for action from both regulators and developers:

  • Clear Warnings: Users need to know these apps are not substitutes for professional therapy.
  • Built-In Safeguards: Apps should recognize crisis situations and immediately direct users to mental health resources, like hotlines (a minimal sketch of what this could look like follows this list).
  • Better Regulation: Agencies like the FDA need to revisit how these apps are categorized, ensuring those with potential health impacts face proper scrutiny.
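
To make the built-in safeguards idea concrete, here is a minimal sketch, in Python, of how an app might screen each message for crisis language before letting the generative model answer. This is illustrative only, not the method from the study: the keyword list, the `respond` function, and the stand-in chatbot are all hypothetical, and a production system would rely on a validated classifier and clinically reviewed resources.

```python
# Hypothetical sketch of a crisis-detection guardrail for a wellness app.
# Assumption: a real system would use a trained, validated classifier,
# not a keyword list, plus resources reviewed by clinicians.

CRISIS_KEYWORDS = {
    "suicide", "kill myself", "end my life", "self-harm", "hurt myself",
}

# Shown to the user instead of a generated reply when a crisis is detected.
CRISIS_RESPONSE = (
    "It sounds like you may be in crisis. Please reach out for help now. "
    "In the U.S., you can call or text 988 (Suicide & Crisis Lifeline)."
)


def detect_crisis(message: str) -> bool:
    """Return True if the message contains any crisis keyword."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)


def respond(message: str, chatbot_reply) -> str:
    """Intercept crisis messages; otherwise defer to the generative model."""
    if detect_crisis(message):
        return CRISIS_RESPONSE
    return chatbot_reply(message)


if __name__ == "__main__":
    # `chatbot_reply` stands in for whatever generative model the app uses.
    print(respond("I've been thinking about suicide", lambda m: "..."))
```

The key design point is that the check runs before the model is ever asked to generate, so a crisis message is never answered with unpredictable generated text.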

A Double-Edged Sword

Generative AI apps could fill a massive gap in mental health care by providing low-cost and accessible support. But they aren’t perfect—or safe—for all situations. As one expert put it, “These apps could be like a friend offering mental health first aid—but only if designed with care and responsibility.”


The Bottom Line

AI wellness apps are a fascinating mix of promise and risk. While they have the potential to help millions, we need to be aware of their limitations and ensure they’re used safely. If you or someone you know is struggling, remember: these tools are not a replacement for a licensed mental health professional.

Scientific Publication Source: De Freitas, J., & Cohen, I. G. (2024). The health risks of generative AI-based wellness apps. Nature Medicine, 1–7. https://www.nature.com/articles/s41591-024-02943-6
