Healthcare AI’s Missing Role: The Technical Clinician

I was reviewing an AI-generated clinical summary of an older adult with dementia. The summary described her as "agitated and non-compliant with care." Technically accurate, but clinically thin. Was she in pain? Disoriented? Reacting to a change in environment? None of that was in the note the model was fed. In dementia care, the most important information is often the part that rarely gets written down. Someone without clinical training would have read the summary and moved on. I could see it was thin only because I'd spent years caring for patients like her.
I have a slightly unusual background. I’ve spent over a decade as a nurse in aging and dementia care and research, and trained in nursing and biomedical informatics at Columbia University. I now work in healthcare AI as a Clinical Solutions Architect, making sure what we build is helpful and does no harm. From that vantage point, I see AI developing faster than ever, but lagging in its translation to practice. Fewer than 2% of clinical AI models make it past prototyping. I see the barriers as workflow integration, trust, and usability.
Engineering teams crave "clean data". In healthcare AI, the gold standard is something like CMS claims data (structured and episodic). But in healthcare, what gets labeled "dirty" often carries exactly the context that drives the clinical decision. The dementia summary note looked "clean" to an engineer, but it had actually been stripped of everything useful in a way that someone without clinical training couldn’t see — because seeing what was missing required knowing what should have been there.
I see a different version of that gap every week at Anterior, where I help build AI that accelerates the work of health plan clinicians. To qualify for home health care coverage, Medicare's homebound criterion requires that leaving the home take "considerable and taxing effort." I watched three experienced utilization management nurses read the same chart and reach three different conclusions about whether the patient met that bar. None were wrong. The phrase sounds subjective, but to veteran nurses it is specific, read against years of cases, documentation cues, and an unwritten sense of what the standard looks like in practice. The policy contained none of that. My job was to extract it and put it in a form our AI platform and our engineers could understand.
I think of people who do this kind of work as bilingual, fluent in both clinical and technical. They can evaluate, design, build, and challenge AI because they know what the data actually means and where the ambiguity lives. The role barely exists: over 75% of health professions students get no formal AI education. At Anterior, about 40% of the team is clinical, embedded in product, engineering, and leadership. That structure is deliberate — it's the only way I've seen AI for healthcare actually ship safely.
A technical clinician is someone who knows why the AI was wrong and can explain it. Most of us came here the same way: days at the bedside and weekends spent tinkering with AI. That mix is where our instincts were built.
These systems need technical clinicians helping build them.