Healthcare AI is transforming how medicine is practiced, and most of the industry has settled into two lanes: tools that make clinicians more productive, and tools that make consumers better informed. Both lanes are real, both are growing. But there’s a third lane that neither touches: the patient inside an active care pathway, at the moment they need clinical guidance and no licensed resource is available. Building there requires a different set of assumptions, a different regulatory posture, and a fundamentally different relationship to accountability. That’s the lane we’re building.
[Diagram: the three lanes: Health & Wellness AI · Clinical Support · Clinical AI]
Lane One: Consumer-Facing Health and Wellness AI
OpenAI, Anthropic, Amazon, and others are investing heavily in consumer-facing health assistants, symptom checkers, and wellness tools. These systems serve the general public: helping individuals interpret health information, track behaviors, and arrive at clinical encounters better prepared. They’re designed to stay wide of the regulatory line, and that constraint is appropriate. Research consistently documents that unregulated AI systems drift toward clinical territory, sometimes with a confident tone that patients find persuasive and clinicians find alarming. The consumer-facing lane is useful, and potentially dangerous in proportion to how far it strays from its stated scope.
Lane Two: Provider-Facing Clinical Support
Abridge, Nuance, Rad AI, OpenEvidence, UpToDate: tools like these serve licensed clinicians. They assist with documentation, imaging interpretation, workflow optimization, and clinical decision support. They’re often highly effective and increasingly adopted. The clinician remains present, directs care, and retains responsibility. The AI functions as a productivity multiplier within existing professional oversight. The FDA’s recently updated Clinical Decision Support Software guidance draws the regulatory line here with precision: software that supports a healthcare professional who retains the ability to independently review recommendations and exercise their own judgment falls outside the device definition.¹ The clinician’s license, judgment, and accountability stand behind every output. The regulatory bar is lower, the market has responded accordingly, and this lane is crowded, well-funded, and producing real value.
Lane Three: Prescribed Clinical AI
This is the lane we’re building.
Where lanes one and two are defined by who the AI is serving (a consumer in one case, a clinician in the other), lane three is defined by what the AI is being asked to do: engage directly with a patient inside an active care pathway, interpret what they report, and influence clinical decisions in the absence of a supervising clinician. That’s a materially different role, and it carries materially different accountability. The same FDA guidance is explicit on this point: software that supports or provides recommendations to patients or caregivers meets the definition of a medical device.¹ The clinician’s license no longer stands behind the output. The AI itself carries the clinical accountability, and to be trustworthy enough to deploy at scale, it needs regulatory authorization.
Post-operative recovery is the sharpest illustration of what unsupervised care actually looks like. A typical total knee replacement involves approximately 400 minutes of clinical engagement across the full episode of care. Roughly 150 of those minutes, about 40% of the total, are the time care teams spend managing issues that arise during post-operative recovery,² most of that time reactive, most of it driven by complications that earlier visibility could have prevented or mitigated. Patients, meanwhile, spend weeks or months recovering at home with little clinical supervision, navigating a recovery in which daily activity, home environment, and individual variation conspire to introduce factors that, if unaddressed, escalate into serious complications. Half of patients forget their discharge guidance within two weeks. Forty percent of readmissions are tied to missed early warning signs. One in five patients experiences an unplanned provider contact or complication within 30 days of discharge. The incremental annual direct healthcare cost from post-operative complications runs to approximately $60 billion. These are the predictable outcomes of an interval that has never had adequate clinical supervision.
The supervision problem lives in the interval after discharge, where the care pathway continues but the clinical presence does not.
The structural forces compounding this aren’t subtle. The AAMC projects a shortage of up to 86,000 physicians by 2036. One in five practicing physicians is already 65 or older. Healthcare demand is hitting a shrinking workforce under sustained pressure, and that collision has no natural resolution without tools that can carry clinically supervised load at scale. Prescribed clinical AI (authorized, validated, and operating under physician prescription) is uniquely positioned to serve as a digital proxy for the care team in the intervals human labor cannot reach. Without it, the structural problems in healthcare don’t get smaller; they compound.
The trust, reimbursement, and liability arguments for lane three all flow from the same source. Physicians need to know the tool performs the way a trained clinician would. Payers need an evidence record anchored in regulatory authorization before they’ll build reimbursement frameworks around a new category of care. Health systems need an accountability structure with defined scope and a cleared indication; without one, patient-facing clinical AI sits in a legal grey zone that no risk committee will approve for deployment at scale. An FDA-cleared indication, an evidence record, and a defined accountability structure resolve clinical liability in ways that disclaimers cannot. (The liability question runs deeper than most in healthcare AI have yet grappled with, and I’ll address it directly in a future dispatch.)
The third lane of Healthcare AI is hard precisely because the work it’s attempting is consequential. The rigor isn’t the obstacle to solving the supervision problem. The rigor is what makes solving it possible, at the scale the problem demands, with the trust that physicians, health systems, and patients will need to actually rely on it. That’s what we’re building toward.
¹ U.S. Food and Drug Administration. Clinical Decision Support Software: Guidance for Industry and Food and Drug Administration Staff. Issued January 29, 2026 (superseding January 6, 2026 guidance). Section IV(3), p. 13.
² Halawi MJ et al. Quantifying Surgeon Work in Total Hip and Knee Arthroplasty: Where Do We Stand Today? J Arthroplasty. 2020;35(5):1170–1173. Mean postoperative/global-period work: 149.6 min overall; 140 min THA; 162 min TKA.