12/3/25

What Makes AI-Enabled Mental Healthcare Work? 5 Factors People Value Most

In our latest peer-reviewed study, we set out to answer a big question:

What makes AI-enabled mental healthcare acceptable and genuinely supportive?

Generative AI has transformed content delivery in recent years, boosting engagement with digital mental health tools. But acceptability of mental health AI hinges on more than technology alone. It’s about creating programs that feel accessible, helpful, and trustworthy.

In this study, we found participants wanted structure, guidance, practical tools that work, and the reassurance of human support in their mental health AI experience. A free-flowing, empathetic conversation wasn’t enough. If we want real outcomes, quality matters. That means delivering evidence-based techniques proven to help and providing a supportive environment for meaningful change.

About the study

Our study evaluated an early version of Velora, ieso’s mental health AI program.

We analyzed:

  • Self-reported quantitative data from 203 study participants, and
  • In-depth qualitative interviews: 21 conducted before participants used the program and 16 conducted afterward

Our goal was to understand the “felt experience” of using the program to manage feelings of anxiety, stress, and worry.

From this, we identified five themes on what made AI-enabled mental healthcare feel supportive, not sterile; personal, not automated; and flexible, not formulaic.

The five themes below describe the user needs that shaped the acceptability of a conversational agent-led digital mental health intervention (DMHI), drawn from the pre- and post-intervention interviews.

1. Support that feels accessible – because it is

Participants wanted support that was easy to access both practically and emotionally. They appreciated that the program was a faster, easier, more affordable way to get support compared to what they had experienced elsewhere. For some, it felt like a first step toward getting help:

“I see it like, if I lined up dominos, this is like domino number one.”

Pre-intervention interviewee

Several anticipated it would be easier to open up in a digital program than face-to-face with a therapist, with less fear of judgment or embarrassment. Removing the pressure to respond immediately gave them the time and space to reflect.

Participants also valued the ability to revisit content and activities whenever they liked:

“Text-based certainly for me. I'm able to be much more thoughtful, much more introspective, much more precise in my answers when they're written rather than spoken. I think it gives me time to think deeper about things.”

Pre-intervention interviewee

2. Content that helps and results that motivate

People weren’t looking for an unguided, free-flowing conversation with AI about their problems. They wanted:

  • Helpful, relevant content
  • Practical tools and techniques
  • To feel that the program was actually working in their day-to-day lives

As participants’ mood and anxiety symptoms improved, motivation grew. Feeling better became a powerful incentive to keep going. This builds on previous research linking perceived usefulness to greater engagement.

“I was motivated to continue because each module had something which helped me. I felt that it was helping me reduce my anxiety right from the beginning. So, you wanted to do the next module.”

Post-intervention interviewee

The numbers bear this out: 84% of participants found the program rewarding, underscoring that when digital mental healthcare is designed around progress, it can deliver meaningful benefits.

3. A personalized experience

Participants expected interactions with the AI to feel personal and relevant. Feeling heard and understood was crucial. In pre-intervention interviews, interacting with an AI conversational agent was viewed as acceptable so long as it was:

  • Well-trained
  • Fair and unbiased
  • Able to understand them and offer thoughtful responses

“As long as it's been well-trained, it will be interesting to see what it comes back with. That it’s not just a sort of trite responses you get when you're trying to get customer service out.”

Pre-intervention interviewee

When the program misinterpreted participants' input, it led to frustration.* Nevertheless, most participants were willing to overlook some of the AI’s limitations because the program helped them: 81% reported that it enhanced their care. Optimizing personalization and responsiveness to meet current expectations remains a key design priority, with clinical quality always at the forefront.

*Note: This study took place during the rapid rise of ChatGPT, a moment when prior perceptions of AI interactions – shaped by clunky customer service chatbots – were suddenly being rewritten.

The technology used in this study was an advanced tree-based dialogue system: more capable than older rule-based chatbots, but less fluid than the latest generative AI systems. Frustrations with the agent’s responses and the desire for greater personalization highlighted a gap between the program and newer AI capabilities. This reflects prior research indicating that generative AI agents are more engaging and foster a deeper understanding of a user’s needs – further evidenced by the millions of users seeking support from publicly accessible chatbots like ChatGPT.

Tree-based dialogue systems, however, offer a crucial advantage: they can be safer and more controllable, which is especially important in mental health settings. The challenge is to combine this safety and clinical rigor with the responsiveness and nuance people now expect from AI (something we’ve now achieved).
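
To make that trade-off concrete, here is a minimal, purely illustrative sketch of a tree-based dialogue agent (the node structure, intents, and wording below are our own hypothetical example, not Velora’s implementation). Because every prompt and transition is authored in advance, the agent can never produce an unvetted response:

```python
# Illustrative only: a minimal tree-based dialogue structure. Every prompt
# and branch is pre-authored, which is what makes this style of agent
# auditable and safe by construction.
from dataclasses import dataclass, field


@dataclass
class DialogueNode:
    """One clinician-authored turn in the conversation tree."""
    prompt: str
    # Maps a recognized user intent (e.g. "yes", "no") to the next node.
    branches: dict[str, "DialogueNode"] = field(default_factory=dict)
    # Shown when the user's reply matches no branch, instead of improvising.
    fallback: str = "Sorry, I didn't quite catch that. Could you rephrase?"


def next_turn(node: DialogueNode, user_intent: str) -> tuple[str, DialogueNode]:
    """Advance the conversation; unrecognized input never leaves the tree."""
    child = node.branches.get(user_intent)
    if child is None:
        return node.fallback, node  # repeat the current step, safely
    return child.prompt, child


# A tiny example path: a worry check-in that branches on the user's answer.
reframe = DialogueNode(prompt="Let's look at that worry and try reframing it.")
goals = DialogueNode(prompt="Great. Shall we review your goals for this week?")
root = DialogueNode(prompt="Has a worry been on your mind today?",
                    branches={"yes": reframe, "no": goals})

reply, state = next_turn(root, "yes")     # -> the reframing prompt
reply, state = next_turn(state, "maybe")  # unrecognized -> safe fallback
```

The same property that makes this design easy to audit (every reachable utterance is pre-written) is also what makes it feel less fluid than a generative model, which is exactly the gap participants noticed.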

4. Guided but in control

One of the most interesting findings was how people navigated the balance between structure and autonomy. Participants wanted:

  • Clear direction and guidance on what to do next
  • Evidence-based techniques and exercises
  • The flexibility to move through the program on their own terms

“It's something that’s available, the app is always there for you to use. And at the same time there's time frames on it. So, you have to do things by certain dates. There's a bit of a routine, but also a bit of flexibility.”

Pre-intervention interviewee

Participants didn’t want a free-flowing conversation that simply reflected their thoughts and feelings back to them. They wanted to be nudged toward specific practices that would help. The program provided structure and direction while still allowing users to maintain agency over their journey.

5. The human support factor

Finally, the research confirmed that adding a layer of human support is crucial for building trust and creating accountability. Participants valued:

  • Knowing the content was written by clinicians
  • Having a human available if needed to help them stay on track (in addition to reminders)

“It helped with accountability knowing that someone was going to call because then if I hadn't done it, that wouldn't be good.”

Post-intervention interviewee

This hybrid approach was effective, too: 89% of participants were satisfied with the program.

Importantly, including human support did not significantly impact scalability. In our broader study (from which the interviewed subsample was recruited), delivering the digital program required an average of 1.6 hours of clinician time per participant – up to 8x fewer hours than global estimates for traditional talking therapy (at the 8x figure, that implies roughly 13 clinician hours per person) – yet mental health outcomes were comparable.

Our findings show that a hybrid care model (AI + human support) can be both effective and well-received, meeting user expectations. As generative AI is deployed more widely in mental health, human support forms an essential risk-mitigation layer that helps ensure user safety.
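
As a purely illustrative sketch of what such a risk-mitigation layer can look like in code (the function names, keyword list, and handover message below are hypothetical, not a description of Velora’s actual safeguards), consider a routing step that screens each message before the AI’s reply is delivered:

```python
# Illustrative only: a human-in-the-loop safety layer. Each user message is
# screened, and flagged conversations are handed to a human clinician
# instead of receiving an automated reply.
RISK_TERMS = {"hopeless", "hurt myself", "can't go on"}  # toy keyword list


def flag_risk(message: str) -> bool:
    """Toy screen; a real service would use a validated risk classifier."""
    text = message.lower()
    return any(term in text for term in RISK_TERMS)


def route_reply(user_message: str, ai_reply: str) -> str:
    """Deliver the AI's reply only when no risk flag is raised."""
    if flag_risk(user_message):
        return escalate_to_clinician(user_message)
    return ai_reply


def escalate_to_clinician(user_message: str) -> str:
    # Stand-in for paging an on-call clinician and logging the handover.
    return "Thank you for telling me. A member of our care team will reach out shortly."
```

In practice, the screening step would be a validated classifier and the escalation would notify a real clinician; the point is only that the human layer sits in the reply path, not beside it.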

Important caveats

Like all research, this study has limitations. We interviewed only a small sample, composed primarily of individuals who actively engaged with the program. As a result, the data may reflect certain biases and may not capture a broad range of perspectives.

To develop more inclusive and effective mental health tools, we need to understand experiences across different demographic groups and address potential health inequities. Future research will build on these insights in a more representative population.

What this means for the future of AI-enabled mental health support

For digital mental health solutions to succeed, they must feel like real care:

  • Accessible support
  • Helpful and beneficial
  • Personalized and responsive
  • Flexible, yet guided
  • Connected to human expertise and support

Our takeaway: as AI evolves, user experience and clinical rigor have to lead the way. Technology is the tool. Feeling supported, understood, and given truly helpful solutions is ultimately what helps people feel better. That is the key to building digital mental health interventions people trust and value.

👉 Read the full peer-reviewed paper here.

ABOUT THE AUTHOR

Clare Palmer, PhD

Director of Evidence Generation
