(09) A History of Virtual Patients. Plus: The First AI-delivered Therapy Ban.
This week's top stories:
- How virtual patients could improve mental healthcare. Simulating a disorder through role-play has been done for decades. Besides direct contact with patients, it is probably the most widespread training method for therapists. But AI applications could solve some of the method's classic limitations in the near future.
- Illinois (US) becomes the first state to ban AI-delivered therapy. A full summary of the bill and my take on why full prohibition is as bad as non-regulation.
Odd Lots – Relevant News & Research:
- Why users are ghosting your mental health app. (Blog)
- Are multimodal LLMs dyslexic? (Paper)
- Why AI should not fully replace human therapy. (Paper)
I. A Short History of Virtual Patients.
And how they could help improve therapist training and care quality.
The timeline from starting your degree to getting your “Approbation” (the full license to practice as a psychotherapist in Germany) is at best 10 years: three years for a bachelor's, two for a master's, and five of practical training. It is important to regulate therapy and make sure people are educated to a high standard, both practically and theoretically.
In principle, all this should be a recipe for delivering quality care, in terms of both availability and effectiveness.
But in a country with a massive shortage of therapists and no current plans to fund the education of future ones, we cannot afford the luxury of the current system.
That’s where artificial patients come in. In my opinion, they can improve therapists' preparedness for edge cases, difficult-to-treat disorders, and the emotional stress of the job.
And they might make training cheaper, of course. So here is my assessment of where virtual patients stand today and where they could go.
A Short History of Virtual Patients
The concept of virtual patients has evolved over the decades, beginning with physical role-play. In the 1960s, Dr. Howard Barrows at the University of Southern California introduced “standardized patients”. Trained actors portrayed specific medical conditions for student assessment. This approach allowed for consistency and control, but was costly and logistically demanding. Pretty basic, no-tech solution. Tried and tested.
The first technological milestone came in 2007 when Kenny et al. from the USC Institute for Creative Technologies published a paper on virtual patients designed specifically for therapist training. Unlike role-play or diagnostic tools, this effort marked a step toward replicating therapeutic dynamics, though still primarily focused on interview practice and initial assessments.
USC ICT introduced the characters Justin and Justina, used to train clinicians in interviewing patients with conditions like PTSD or conduct disorder. These systems offered realism through graphics, voice, and behavioral cues, aiming to replicate patient interactions in immersive virtual settings. But their conversational quality was very limited, which only allowed for basic diagnostic practice. The technology never really took off.
Enter the AI Era
In 2024, researchers published a paper on “Patient-Ψ” (Patient Psi), an AI-based virtual patient built on cognitive behavioral therapy principles. Rather than simulating general behavior, this model was designed to help students build and test cognitive models associated with psychological disorders. It represents a more structured and theory-driven approach to simulating patient thought patterns, but remains limited to diagnostic use.
Today, most virtual patient systems are designed to support specific, limited tasks. And a sophisticated ChatGPT prompt can produce a quality of conversation that decades of prior research could not. Students use AI tools to practice diagnostic interviews, while others train specific therapist skills such as rapport-building. However, the vast majority are focused on short-term, one-off simulations.
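To make that concrete, here is a minimal sketch of what such a prompt-based virtual patient can look like, using the OpenAI Python SDK. The persona, the model choice, and the wrapper function are my own illustrative assumptions, not a description of any of the systems mentioned above.

```python
# Minimal sketch of a prompt-based virtual patient (illustrative only).
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Hypothetical persona; a real training case would be written by clinicians.
PERSONA = (
    "You are role-playing 'Jonas', a 34-year-old patient with moderate depressive "
    "symptoms: low energy, guilt, social withdrawal. Answer as the patient would: "
    "hesitant, brief, occasionally deflecting. Never break character, never give advice."
)

def patient_reply(history: list[dict]) -> str:
    """Return the virtual patient's next utterance, given the chat history so far."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[{"role": "system", "content": PERSONA}] + history,
    )
    return response.choices[0].message.content

# Usage: the trainee plays the therapist.
history = [{"role": "user", "content": "Hi Jonas, how has your week been?"}]
print(patient_reply(history))
```

A single prompt like this already yields a plausible one-off interview partner, which is roughly where most current tools stop.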
In my opinion, this is a product issue, not a technology one.
The core limitations of off-the-shelf LLM-based solutions are still the same: they are narrow in scope, static in behavior, and cannot simulate the unfolding dynamics of real treatment across multiple sessions.
Where to go next?
Generative AI changes the game by allowing us to simulate not just fixed traits, but also dynamic states. For the first time, it is possible to model psychological volatility (shifts in mood, belief, and behavior over time) in ways that feel lifelike and consistent with real disorders. This opens the door to simulating comorbidities, layered symptoms, and evolving narratives, giving students a much richer and more realistic understanding of what it means to live with a mental illness.
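As a rough illustration of what that could mean technically, here is a sketch of a patient whose internal state persists and drifts between sessions and is re-rendered into the role-play prompt each time. The state variables, the update rule, and the prompt wording are all simplifying assumptions of mine, not an existing system.

```python
# Sketch: a virtual patient whose internal state evolves across sessions (assumed design).
from dataclasses import dataclass, field

@dataclass
class PatientState:
    mood: float = -0.6          # -1 (severely low) .. +1 (euthymic)
    core_belief: str = "I am a burden to everyone."
    sessions_completed: int = 0
    life_events: list[str] = field(default_factory=list)

    def advance_session(self, alliance_quality: float, event: str | None = None) -> None:
        """Drift the state between sessions; a crude stand-in for real symptom dynamics."""
        self.sessions_completed += 1
        # Mood improves slightly with a good therapeutic alliance, worsens otherwise.
        self.mood = max(-1.0, min(1.0, self.mood + 0.1 * alliance_quality - 0.05))
        if event:
            self.life_events.append(event)

    def to_system_prompt(self) -> str:
        """Render the current state into the role-play instruction passed to the LLM."""
        return (
            f"You are role-playing a patient in session {self.sessions_completed + 1}. "
            f"Current mood (-1 to 1): {self.mood:.2f}. "
            f"Core belief you keep returning to: '{self.core_belief}'. "
            f"Recent life events: {self.life_events or 'none'}. "
            "Stay consistent with previous sessions and never break character."
        )

state = PatientState()
state.advance_session(alliance_quality=0.8, event="argument with partner")
print(state.to_system_prompt())
```

The naive update rule is beside the point; what matters is separating a persistent, clinician-authored patient model from the per-turn prompting, which is what would let a simulation span a whole course of treatment rather than a single interview.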
Ultimately, virtual patients are a question of data availability. To train LLMs to recreate the features described above, you would ideally use full therapy transcripts, from first to last session. The hard nuts to crack are gathering such data in compliance with regulations and, above all, securing patients' privacy and consent.
Last, some doomsday scenarios. Building “dysfunctional” AI systems (even for educational purposes) creates the potential for misuse. A model that mimics a depressed or paranoid patient could just as easily be repurposed to generate fake personas for harassment, disinformation, or emotional manipulation. The same qualities that make an artificial patient (with hallucinations, for example) convincing in training also make it dangerous outside controlled environments.
Who will build this? Happy to connect and share a more detailed vision.
II. Illinois Outlaws AI-delivered Therapy.
The move demonstrates both a lack of understanding and a fear of the technology.
Illinois has passed House Bill 1806, known as the Wellness and Oversight for Psychological Resources Act. Sponsored by Rep. Bob Morgan (D-Deerfield) and Sen. Ram Villivalam, it passed the Illinois Senate and House, both with bipartisan support. It is now headed to Governor J.B. Pritzker for signature.
What is it about?
Once signed, the law will ban AI from providing therapeutic services, including diagnosing, treating, engaging in therapeutic communication, or detecting emotions or mental states. AI can still be used by licensed professionals for administrative tasks or patient-record transcription, provided informed consent is obtained. Breaching the ban carries civil penalties of up to $10,000 per violation.
The main part of this bill is about protecting the term psychotherapy. That part is perfectly reasonable. Similar regulation exists in Germany, prohibiting anything but human-delivered psychotherapy from being represented as therapy. But this law goes further, banning licensed therapists from using AI as a support tool for delivering care.
In the past, there have been cases of therapy fraud, where therapists used LLMs to chat with users for them. Oracle Capital has a short report on Teladoc (owner of BetterHelp) for this reason. Banning fraud makes sense.
What doesn't make sense to me is banning therapeutic applications that are implemented with the patient's informed consent. Hybrid therapy approaches will be made much more difficult by this bill. This sucks, because such hybrid approaches could make mental healthcare more effective and scalable.
(I can already hear you rebelling: Healthcare is not simply a business! It is about the humans, not only about efficiency! Yes, but we need to use healthcare resources economically, and in this spirit we should explore all potential new opportunities for doing so.)
The ban applies only once the governor signs and the statute goes into effect, sometime in the coming weeks.
So No More Robo Therapy Then?
Illinois’ move makes it the first US state to ban AI-delivered therapy. The swift, unanimous vote shows deep concern about unregulated AI impacting mental health services.
Legislators fear that chatbots might mislead vulnerable clients, breach confidentiality, or undermine professional standards.
While I understand the caution, I view this ban as a classic regulatory overreaction that may hinder innovation. AI-based tools hold real promise for expanding access to care and for supporting therapists. The challenge lies in designing smart regulation, not blanket prohibition.
The biggest issue lies in effective oversight. Who has the knowledge to test LLM-based solutions for safety and efficacy? Self-regulation by companies seems guaranteed to fail. And the government likely doesn't have the resources. Maybe the APA and other professional associations could step in?
Regardless, if we ban AI therapy outright, we lose potential benefits and leave those who cannot access or afford therapy with ChatGPT & co., which are already widely used today for emotional advice and other therapy-like conversations. A recent study showed how inadequate generalist LLMs are at delivering mental health support. Neither a full ban nor the current wild-west situation in the rest of the world is a sustainable approach to mental health and AI.
To be clear, I don't think AI should replace human therapists. But I see a future where human-led therapy combines the traditional setting with between-session AI support to improve outcomes.
III. Odd Lots
Relevant news and research to help you keep on top of the industry.
I. Ghosting your mental health app? Reasons for bad retention.
In this LinkedIn article, Scott Wallace offers his take on why mental health apps see poor retention rates compared with chatbots used for mental health struggles.
He argues that many mental health apps are built in ways that make users feel little understood. Where LLMs simply resonate, those apps start challenging beliefs and quantifying issues through journaling, mood tracking, etc. He goes on to propose a list of remedying tactics.
II. Are multimodal LLMs dyslexic?
This Freethink article centers on a simple premise: LLMs can outperform humans on complex visuospatial tasks, and yet the average first-grader is likely better at reading clocks.
It is loosely based on a recent study that examined how multimodal LLMs deal with clocks and calendars.
III. Should AI replace human therapy?
This paper investigates whether large language models (LLMs), like GPT-4o, can safely replace human therapists. The authors conclude they cannot and should not.
Their argument is based on three pillars: Empirical Failures, Clinical Incompatibility and Foundational Limitation.
Alright, that's it for the week!
Best
Friederich
Got this forwarded? Get the weekly newsletter for professionals in mental health tech. Sign up for free 👇
