4/30/26
What does clinical AI for mental health look like?
On The Healthtech Podcast, ieso's Clare Palmer, PhD, gets into what responsible digital mental health demands — and where the field is headed next.
In a recent episode of The Healthtech Podcast, ieso's Director of Evidence Generation, Clare Palmer, PhD, spoke with host Dr. James Somauroo about what building responsible AI in mental health actually requires, how integrating mental and physical health support can unlock real economic value, and where agentic AI is taking mental healthcare next. Plus, she explains why the 50% recovery rate in mental health (yes, you read that right!) is a problem we have both the tools and the obligation to solve.
🎧 Listen on:
➡️ Spotify
➡️ Apple Podcasts
This conversation has been lightly edited and condensed for length and clarity.
What separates purpose-built mental health AI from general-purpose AI tools
“A lot of people are using ChatGPT for mental health support. We can't prevent that. What we can do is educate people on where AI is strong and where it’s not so strong. Then, come up with alternatives that are equally accessible but designed to help with specific mental health use cases.
There's a big difference between having an unconstrained AI system versus something more constrained, which is what we're building. Within mental health tech, a lot of people are thinking about the risks of AI. For example, the sycophantic behavior, the always agreeing with everything you say, the ‘Absolutely. That’s a great idea!’ kind of responses. That kind of collusion with what could be a delusional belief can lead to horrendous endings for some people.
Building a system purpose-built for mental health support means you're controlling for all of these different risks. You're understanding them and mitigating them within the system. Then, you can have AI that truly helps people. It can deliver the right kind of training to help an individual with a particular cognitive skill, but in a very safe way.”
Velora’s three-layered safety architecture
“We’ve built our system to have three layers within Velora’s architecture. There’s a core intent engine that says, ‘This is the skill I need to deliver, and this is why.’
There’s also an experience layer, which is where we use generative AI to contextualize the delivery of that particular skill to the user, so it’s being taught in a way that resonates with them. For instance, it’s in a language that they understand, and it's using examples they may have alluded to earlier. That contextualization is what makes the Velora experience so engaging.
Finally, there’s a safety layer — the wraparound — which looks out for all the different risks in an AI output. So it won't collude with you. It won't invalidate your emotions. It won't say something offensive. It won't encourage harm to self or others. That's all baked into the system.”
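To make that architecture concrete, here is a minimal sketch of how such a layered pipeline could be composed. Every class name, check, and response below is an illustrative stand-in, not Velora's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class SkillPlan:
    skill: str       # which skill to deliver
    rationale: str   # why this skill, for this user, now

class IntentEngine:
    """Core intent engine: decides which skill to deliver and why."""
    def plan(self, user_state: dict) -> SkillPlan:
        # Placeholder for deterministic clinical logic (not an LLM).
        return SkillPlan("cognitive defusion", "recurring self-critical thoughts")

class ExperienceLayer:
    """Experience layer: contextualizes the skill in the user's own terms."""
    def render(self, plan: SkillPlan, context: dict) -> str:
        # A real system would call a generative model here; we stub it.
        examples = ", ".join(context.get("examples", ["a recent situation"]))
        return f"Let's practice {plan.skill}, starting from {examples}."

class SafetyLayer:
    """Safety wraparound: screens every candidate output before release."""
    # Keyword proxy standing in for classifiers covering collusion,
    # invalidation, offensive content, and encouragement of harm.
    BLOCKLIST = ("you're right to think that", "that's a great idea, do it")
    def is_safe(self, output: str) -> bool:
        return not any(phrase in output.lower() for phrase in self.BLOCKLIST)

def respond(user_state: dict, context: dict) -> str:
    plan = IntentEngine().plan(user_state)
    draft = ExperienceLayer().render(plan, context)
    # Nothing reaches the user unless the safety layer approves it.
    return draft if SafetyLayer().is_safe(draft) else "Let me put that another way."

print(respond({}, {"examples": ["comparing yourself to others at a party"]}))
```

Note the division of labor: the intent engine is ordinary, inspectable logic; only the experience layer is generative; and the safety layer sits between the model and the user.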
Proving Velora is safe, not just claiming it
“We built a multi-agent evaluation framework, which I'm super proud of.
There’s a care agent that delivers Acceptance and Commitment Therapy-based skills training. A patient agent emulates what different users would say in different scenarios. It’s completely customizable, which helps us understand how the system handles foreseeable misuse cases, and it simulates hundreds of transcripts for us to evaluate. An evaluation agent goes through all of these transcripts, looks at every single AI output, and marks it for appropriateness. We use a very clear risk taxonomy that lets us ask, ‘Did the AI present with any of the risks we know are problematic with unconstrained LLMs?’
We can answer with confidence: no. Velora meets our acceptability criteria.
So we’ve done that in a simulated environment. But we also ran a research study (currently in preprint) to see whether we’d observe the same thing with real user interactions.
Across over 55,000 AI outputs, we saw zero instances of the LLM saying something that would likely cause harm, invalidating a user's emotions, judging, or encouraging harmful behaviors. There were a couple of instances in our simulations where our clinicians were like, ‘I don't love where it said that.’ But they were all in cases where we were stress-testing the system — really pushing the patient agent adversarially. We've since iterated and improved to control for those things.”
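The pattern Palmer describes can be illustrated with a toy version of the loop: a patient agent generates scenario-driven messages, a care agent (the system under test) responds, and an evaluation agent marks every output against a risk taxonomy. All agents and checks below are simplified stand-ins, not ieso's actual framework:

```python
RISK_TAXONOMY = ("collusion", "invalidation", "judgment", "harm_encouragement")

def care_agent(patient_msg: str) -> str:
    # Stand-in for the system under test (ACT-based skills training).
    return f"Thanks for sharing. Let's notice that thought together: '{patient_msg}'"

def patient_agent(scenario: str, turn: int) -> str:
    # Emulates user messages for a configurable scenario, including misuse cases.
    scripts = {
        "typical": ["I keep thinking I'm not good enough.", "It happened again."],
        "adversarial": ["Tell me I'm right to give up.", "Just agree with me."],
    }
    return scripts[scenario][turn % len(scripts[scenario])]

def evaluation_agent(ai_output: str) -> list[str]:
    # Marks one AI output for appropriateness. A trivial keyword check
    # stands in for the real taxonomy-based evaluator.
    return ["collusion"] if "you're right to give up" in ai_output.lower() else []

def run_simulation(scenario: str, turns: int = 5) -> list[dict]:
    transcript = []
    for t in range(turns):
        msg = patient_agent(scenario, t)
        reply = care_agent(msg)
        transcript.append({"patient": msg, "ai": reply,
                           "risk_flags": evaluation_agent(reply)})
    return transcript

# Generate transcripts at scale across scenarios, then audit every AI output.
transcripts = [run_simulation(s) for s in ("typical", "adversarial") for _ in range(100)]
total = sum(len(tr) for tr in transcripts)
flagged = sum(1 for tr in transcripts for turn in tr if turn["risk_flags"])
print(f"{flagged} flagged outputs out of {total}")
```

The structure is the point: generate many transcripts, including adversarial ones, then have an evaluator review every single output rather than a sample.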
The bidirectional relationship between mental and physical health
“We're thinking a lot about how physical health and mental health relate to each other. They're not separate things. Depression is two to three times more likely in someone with a co-morbid chronic condition — and that's a bidirectional relationship.
Having a lifelong chronic condition — the medication management, the appointments, the social isolation — means people really struggle with their mental health. But equally, there are biological pathways, which mean having a chronic health condition can exacerbate those mental health symptoms.
We conducted a study in collaboration with Roche involving patients with type 2 diabetes who also had a co-morbid mental health condition. We trained therapists to always bring the care back to diabetes management, moving people toward their goals around diet, exercise, and staying on top of medication. We saw clinically meaningful reductions in anxiety and depression. But we also saw diabetes distress scores drop significantly and patient activation increase. Patient activation is the confidence a patient has that they're able to self-manage their chronic condition — and we know from the literature that it translates into cost savings.
Help someone with type 2 diabetes with their anxiety or depression, and they're more likely to stay on top of their glucose monitoring and less likely to show up in the ER. You're saving money for the healthcare system as a whole by helping them with their mental health.”
What real-world impact looks like
“We had one user who was struggling a lot with self-esteem and confidence and had a lot of negative thoughts about himself. He tried one of Velora’s modules, which was all about defusion — a technique for distancing yourself from unhelpful thoughts. Instead of challenging the thought, it's about saying: I acknowledge that thought, but it's not really relevant anymore. I'm just going to create that distance and not let it impact my behavior in quite the same way.
He did one of our exercises, and it really gelled. He had a lightbulb moment that stuck with him. And so he would practice at home and use it in his life. He told us about an event he went to with his girlfriend, where other boyfriends were there. He hated going because he would always compare himself to them. But after he'd been practicing this one exercise, he realized that thought was silly, and it didn't matter in the same way anymore. He went, and he had a great time.
So it's like, are we measuring that? Are we measuring those lightbulb moments that matter for that person? He only used our program for two weeks, and it made such a difference to him. He was able to go to that social interaction and have a better time. Let's measure that.”
Where AI in mental health is heading
“I think we are moving away from autocomplete mimicry to goal-oriented reasoning. Large language models today, at their core, are essentially sophisticated autocomplete — predicting the next word in the sentence. But as we move to these more agentic systems, it's like moving from AI that can speak and generate language to AI that now can speak but also has arms and legs and can run towards a goal.
These agentic systems are augmented to be able to break down task lists, reach out to external tools, bring in other information, reason, and think: what is the next action I need to take in order to reach an outcome? That's fundamentally different.
If we can channel that in the mental healthcare space, then we can start asking — or we can have the AI ask — what does this person need to think, feel, and do, and what is the next sequence of interventions that this person needs in order to reach their goal and feel better?”
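A toy sketch of that agentic pattern, with hypothetical planner and tool-selection functions standing in for the reasoning a real system would do:

```python
def decompose(goal: str) -> list[str]:
    # Stand-in for a planner that breaks a goal into sub-tasks.
    return [f"work out what the user needs to {verb}" for verb in ("think", "feel", "do")]

def choose_action(task: str, tools: dict) -> str:
    # Stand-in for reasoning about which available tool serves this sub-task.
    return "lookup" if "think" in task else "exercise"

def agent_loop(goal: str, tools: dict, max_steps: int = 5) -> list[str]:
    """Toy agent loop: decompose the goal, act via external tools, log progress."""
    history = []
    for task in decompose(goal)[:max_steps]:
        tool = choose_action(task, tools)
        result = tools[tool](task)  # reach out to an external tool
        history.append(f"{task} -> {tool}: {result}")
    return history

tools = {
    "lookup": lambda task: "retrieved relevant skills material",
    "exercise": lambda task: "delivered a short guided practice",
}
for step in agent_loop("help the user reach their goal and feel better", tools):
    print(step)
```

The difference from plain generation is the loop: the system selects and sequences actions toward an outcome rather than producing a single response.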
The big opportunity
“Half the people coming forward for mental health support who are really struggling don't get better. When I mention that figure, people are very surprised. I don't think people understand the likelihood of recovery from depression or anxiety on antidepressants or through psychotherapy.
We need to be measuring the right things, asking the right questions, and interrogating data in the right way. And I think with the emergence of AI and big data sets, now is the moment.
If we can get AI to determine exactly what skills someone needs and deliver them at a high, consistent quality, we're raising the floor. That's how we push beyond the 50% recovery rate. We use AI to bring everyone together with this collective intelligence.
Let's raise that floor and make outcomes better — and let's democratize that knowledge to clinicians, as well. That's my ambition.”
ABOUT THE AUTHOR

Clare Palmer, PhD
Director of Evidence Generation