What AI in Mental Health Must Get Right

Everywhere I look, across industries, AI is being sold as a gleaming solution - a digital savior of sorts, with an answer for every problem. In mental health, that often looks like softly lit avatars with kind eyes and calm expressions, sometimes in culturally concordant versions, offering instant and seemingly judgment-free access.

Many of us have seen the marketing decks with promises of transformation, increased access, better outcomes (fingers crossed), and lower costs. But those of us in the field know the real work happens in the margins - and that most solutions won’t live up to the hype.

Just today, a promising study was published out of Dartmouth on their “Therabot” (the full article is behind a paywall in NEJM, but I’ve included a link to a feature article for those who aren’t subscribers). I appreciated seeing it published, and frankly, I wasn’t surprised by the findings. Participants reported reductions in depression and anxiety (compared to a control group), and they rated their alliance with the bot as strong. That’s certainly nothing to dismiss.

But I’d urge you not to overlook an important detail: Dartmouth researchers have reportedly been working on this since 2019. While I’m not close to the work, that timeline suggests to me they likely did their diligence - iterating, testing, refining. This wasn’t built overnight.

In contrast, we’re seeing a flood of “overnight wonders” - AI bots with sweeping claims and sleek UIs, built more for scale than for safety or effectiveness, often by teams with no clinical background. So yes, I want us to be excited - but also intentionally quite cautious.

We’re in a pivotal moment. Clinicians and innovators must rise to meet it - holding the tension between the urgency of the need right now, given how many are suffering, and the responsibility to ensure these tools are safe, effective, and equitable.

Let’s start with access, since it’s usually the first argument made in favor of scaling AI mental health tools quickly (often without putting them to the test).

The truth is, traditional therapy is a luxury for many: $150–$250 per hour, months-long waitlists, and entire regions without adequate care. We’ve built a system that - intentionally or not - says healing is for those who can afford it, navigate it, and show up exactly when and how the system demands.

And beyond that, the math simply doesn’t work. Demand far exceeds supply. There aren’t enough licensed providers to meet the need, and forecasts paint an even grimmer future.

So, we have gotten creative: peer support, digital tools, coaching, scaling clinical supervision - and now, large language models and agentic AI.

The conversations we are having today are exciting. We’re imagining an infrastructure that doesn’t care about your zip code, insurance status, or the time of day crisis strikes. One that speaks multiple languages (not just linguistically, but culturally, too). One that flexes across life stages and situations.

That’s the promising vision these tools are reaching toward. And I want that future too! But we need to be very clear in emphasizing that access isn’t the same as impact.

In medicine, we don’t assume that just because someone can walk into a pharmacy, they’ll get better. Access to medication doesn’t mean it’s the right dose, the right diagnosis, or that it’s being used safely. Nor does it mean it's administered at the right time or that the patient will be adherent. The same is true here. Simply offering “access” to an AI tool doesn’t ensure it will help - or that it won’t cause harm.

We need to be painfully aware that we all have blind spots. We’re human - and we’re wired for novelty. Right now, AI is as shiny as it gets. But we can’t skip or shortcut the hard part: the hows.

Too many of my conversations with developers and technologists these days skim over the complexity: mechanisms of action, clinical frameworks, acceptability, privacy and security, exceptions to the rule, potential adverse events, and more.

We also need to be clear and transparent with the public: not all AI is created equal. And the old adage - garbage in, garbage out - applies here more than ever. If what we feed these systems is flawed, biased, or incomplete, the consequences will scale with the technology.

So we need to ask - and we need to keep asking - important questions like:

  • How is this being built?

  • What data informs its creation? Who gathered it? Who reviewed it?

  • How are biases monitored - and are they actively being addressed?

  • Are users screened or triaged?

  • How is the scope of the solution defined, and is it appropriate?

  • What about privacy, security, and informed consent?

  • Has this tool been tested - on whom, in what settings, under what conditions?

  • If it hasn’t been tested, is that made clear to users?

  • What ethical guardrails are in place - and are they enforced?

  • What escalation procedures exist? Where are the gaps?

  • How is impact measured?

  • Who is responsible for oversight - and are they qualified? What conflicts of interest do they hold, and can those conflicts be addressed?

  • Where does liability lie? Are you transparent about it - and are policies aligned with the level of risk your tool may pose?

  • Are adverse events defined, tracked, and reported?
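
To make a couple of these questions concrete - particularly the ones about escalation procedures and adverse events - here is a minimal, hypothetical sketch of what “defined, tracked, and escalated” could look like in code. The severity categories, thresholds, and names are illustrative assumptions on my part, not a clinical standard or any particular product’s implementation.

```python
# Hypothetical sketch only: categories, names, and escalation rules are
# illustrative assumptions, not a clinical standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Severity(Enum):
    MILD = 1      # e.g., the bot misunderstood the user
    MODERATE = 2  # e.g., advice outside the tool's defined scope
    SEVERE = 3    # e.g., possible self-harm disclosure


@dataclass
class AdverseEvent:
    """One tracked event, so nothing concerning disappears silently."""
    session_id: str
    severity: Severity
    description: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    escalated_to_human: bool = False


def record_event(event: AdverseEvent, registry: list[AdverseEvent]) -> AdverseEvent:
    """Log every event; route severe ones to a human rather than the bot alone."""
    if event.severity is Severity.SEVERE:
        event.escalated_to_human = True
    registry.append(event)
    return event


# Usage sketch
registry: list[AdverseEvent] = []
record_event(
    AdverseEvent(
        session_id="demo-001",
        severity=Severity.SEVERE,
        description="User disclosed thoughts of self-harm during a check-in.",
    ),
    registry,
)
print(len(registry), registry[0].escalated_to_human)  # 1 True
```

Even a skeleton like this forces the questions above into the open: what counts as severe, who gets notified, and whether anything is being tracked at all.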

We can’t afford to confuse speed with progress. Access is a critical ingredient - but it’s not sufficient. If you’ve interacted with me before, you’ve heard me say it: engagement doesn’t automatically equate to outcomes. And outcomes don’t always translate into impact - the changes that really matter in someone’s life.

That’s the end goal for me: What impact does a particular AI tool have? For whom, under what conditions, and in what context?

And while some may not want to hear it, the truth is many of these AI bots are still experimental. They’re not standard of care. In medical terms, they’re closer to compassionate use. That doesn’t make them unimportant (in fact, dismissing them as such would likely be dangerous), but it does mean they need to be handled with seriousness, humility, and care. In mental health, speed without care is not innovation; it’s simply risk.

This is why I advocate for stronger partnerships. Now more than ever, clinicians and developers need to be working side-by-side - from the ground floor, not as an afterthought.

The future of mental health will be built by people who understand both what’s possible and what’s at stake. People who are excited about transformation and feel the urgency of the mental health crisis, but who also know that real change can’t gloss over or shortcut the important hows.

Shivm Dubey, M.D.

Psychiatrist | Co-Founder of World's First Mental Health Education Curriculum | Founder of Mighty Champions of Mental Health | Co-Founder of YesMindy | Helping People Build Resilience & Well-Being

Thank you for voicing this so clearly and compassionately. As someone deeply invested in both clinical care and scalable mental health education, I resonate strongly with your reflections. The urgency is undeniable—but urgency without wisdom can do more harm than good. Mental health isn't a product to be rushed; it's a deeply human experience that demands intentionality, safety, and lived insight. While AI holds great promise, we must hold firm to clinical rigor, ethical grounding, and community trust. I applaud the work you’re doing and the thoughtfulness behind it. Looking forward to reading the Therabot study and continuing this critical dialogue on how we build *with* people, not just *for* them.

Tony Udotong

Founder @ ZenQuill | UATX '29 | Le Wagon Alumni Batch #1920

Amazing insights🔥
