The ceiling was people
The company runs a network of care centers for children on the autism spectrum. Its clinical methodology was the differentiator: evidence-based, structured around child autonomy, with clear discharge criteria rather than indefinite treatment. But that methodology existed in the heads of a few senior coordinators. Each center followed different protocols, and opening new ones meant replicating expertise that could not be replicated. The business had a growth ceiling, and the ceiling was people.
Encoding expertise into product
Before writing a spec, I spent weeks with clinical teams: observing sessions, shadowing coordinators, mapping where clinical reasoning ended and manual process began. The clinical model had never been formalized into rules a product could enforce. Defining what the protocol actually was, in explicit, structured terms, became the foundation of the entire discovery process.
Underneath, I found a clinical cycle where every step depended on the quality of the previous one, and most steps were broken. Documentation took 2 hours of manual writing after every session. Progress data arrived late, sparse, or inconsistent. Coordinators were making decisions from fragments.
Every downstream layer needed clean data, so we fixed the source first
I led the product team responsible for the clinical platform, from assessment through session delivery. Each release needed to deliver enough clinical value to earn adoption while laying the foundation for the next layer.
The most urgent problem was session output. Therapists documented each session by hand, producing unstructured text that no system could reliably analyze. We built LIAM first, using Google Cloud's Vertex AI to capture session audio and generate structured clinical documentation in 3 minutes. Google Cloud later published it as a reference case. That solved the output side of the data problem and gave every downstream layer a clean foundation to build on.
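The case study doesn't publish LIAM's internals, but the shape of the step is a transcription-plus-structuring call. Here is a minimal sketch, assuming the Vertex AI Gemini SDK; the project, bucket path, model name, and schema fields are all illustrative, not the production pipeline:

```python
# Hypothetical LIAM-style documentation step: send a session recording to a
# Gemini model on Vertex AI and ask for structured clinical documentation.
import vertexai
from vertexai.generative_models import GenerativeModel, GenerationConfig, Part

vertexai.init(project="my-clinic-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

# Session audio staged in Cloud Storage (path is illustrative).
audio = Part.from_uri("gs://session-recordings/session-0421.mp3",
                      mime_type="audio/mpeg")

prompt = (
    "From this therapy session recording, produce clinical documentation as "
    "JSON with fields: objectives_worked, child_responses, trial_counts, "
    "notable_behaviors, coordinator_flags."
)

# Ask for JSON so every downstream layer gets structured data, not free text.
response = model.generate_content(
    [audio, prompt],
    generation_config=GenerationConfig(response_mime_type="application/json"),
)
print(response.text)  # structured record, ready for analysis and alerts
```

The design point is the output format: forcing JSON at the source is what turns two hours of unstructured write-up into data the rest of the system can reason over.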
[Screenshot: coordinator dashboard with alert and active-case panels and a "LIAM suggests" view. AI-powered clinical alerts with prioritized cases and actionable suggestions.]
With structured session data flowing, we built the other half: an agentic AI wizard that recommends protocol selections, objective structures, and plan adjustments across disciplines. The coordinator reviews and approves. Clinicians pushed back on the rigidity of structured inputs, but without them, no AI layer downstream could function.
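A minimal sketch of that review-and-approve boundary, with all names illustrative: the model's output is an inert proposal object, and only an explicit coordinator action can move it into a child's plan.

```python
# Hypothetical propose/approve boundary: the model proposes, the clinician decides.
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class PlanAdjustment:
    """One AI-generated suggestion; inert until a coordinator acts on it."""
    case_id: str
    change: str          # e.g. "raise mastery criterion to 90% over 3 sessions"
    rationale: str       # model-generated justification, shown for review
    status: Status = Status.PROPOSED
    reviewed_by: str | None = None

    def approve(self, coordinator_id: str) -> None:
        self.status = Status.APPROVED
        self.reviewed_by = coordinator_id

    def reject(self, coordinator_id: str) -> None:
        self.status = Status.REJECTED
        self.reviewed_by = coordinator_id


def apply_to_plan(adjustment: PlanAdjustment) -> None:
    # Guardrail: nothing the model "decided" on its own ever reaches the plan.
    if adjustment.status is not Status.APPROVED:
        raise PermissionError("Clinical changes require coordinator approval.")
    ...  # persist the change to the child's intervention plan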
An AI assessment pipeline then compressed what used to take 3 months into 3 hours. With both sides of the data problem solved, plan adjustments that used to wait weeks happened within days.
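The source gives the outcome but not the pipeline's internals. As a sketch of the shape such a pipeline might take, with hypothetical stage functions: each stage consumes and returns structured data, so the end-to-end run is a chain of automated steps instead of weeks of manual hand-offs.

```python
# Illustrative assessment pipeline: compose structured stages end to end.
from typing import Callable

Stage = Callable[[dict], dict]

def run_assessment(intake: dict, stages: list[Stage]) -> dict:
    """Run each stage on the previous stage's structured output."""
    record = intake
    for stage in stages:
        record = stage(record)
    return record

# Hypothetical stages: score standardized instruments, draft findings, then
# queue the report for coordinator sign-off (the human decision point).
# report = run_assessment(intake_form,
#                         [score_instruments, draft_findings, queue_for_review])
```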
The whole cycle, finally connected
A child who used to take 3 months to reach a therapeutic goal now reached it in 20 days. Not because any single feature was faster, but because the whole cycle was finally connected.
Goal completion: 3mo → 20d (time to complete a therapeutic objective)
Assessment time: 3mo → 3h (end-to-end assessment pipeline)
Quality adherence: 35% → 90% (across all therapists)
Weekly therapy hours: 5,500+ flowing through the system
Therapist base: +1,000% (from 20 to 220, each running 25h/week)
Case capacity: 3x per clinical coordinator
The expertise that once lived in a few coordinators' heads now lives in product. Every AI layer in the system follows the same principle: the model proposes, the clinician decides. In a domain where a wrong recommendation affects a child's development, that boundary is not a limitation. It is the product.
Adoption required reshaping how coordinators tracked progress and how clinical performance was measured. It started slower than leadership expected, but it was earned, not imposed.