Foundation models are not coming to healthcare. They are already here. The question is no longer whether these large-scale AI systems will reshape how we generate evidence and make clinical decisions -- it is whether your organization is positioned to use them responsibly or whether you will be playing catch-up for the next decade.
I have spent the past year watching healthcare organizations grapple with this shift, and what I see most often is a dangerous combination of hype and hesitation. Leaders are excited about the potential but paralyzed by the complexity. They read about GPT-4 passing medical licensing exams and assume the rest will take care of itself. It will not.
What Foundation Models Actually Change
Let me cut through the noise. Foundation models -- large neural networks pre-trained on massive datasets and then fine-tuned for specific tasks -- change three fundamental things about healthcare AI.
First, they collapse the labeled-data bottleneck. Traditional machine learning in healthcare required you to collect thousands or tens of thousands of labeled examples for every specific task. Want to identify diabetic retinopathy? Train a model on diabetic retinopathy images. Want to identify glaucoma? Start over with a new dataset. Foundation models trained on diverse medical data can be adapted to new tasks with far fewer examples. This is not a marginal improvement -- it is an order-of-magnitude reduction in the barrier to entry for AI applications.
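To make that concrete, here is a minimal sketch of few-shot adaptation as a linear probe on top of a frozen encoder. The embed function below is a random-projection stand-in for a real pre-trained medical encoder, and the data is synthetic; everything here is illustrative rather than a reference implementation.

```python
# Minimal sketch of few-shot adaptation: a small linear probe trained on top
# of a frozen foundation-model encoder. The encoder is a random-projection
# stand-in here; in practice it would be a pre-trained medical imaging or
# clinical text model. Data is synthetic, so the printed score is meaningless;
# the workflow is the point.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def embed(inputs: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen foundation-model encoder (hypothetical)."""
    projection = rng.standard_normal((inputs.shape[1], 256))
    return inputs @ projection

# A few hundred labeled examples, not tens of thousands.
inputs = rng.standard_normal((300, 1024))      # toy image/text features
labels = rng.integers(0, 2, size=300)          # e.g. retinopathy present / absent

features = embed(inputs)
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0
)

# Only this small head is trained for the new task; the encoder stays frozen.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {probe.score(X_test, y_test):.2f}")
```

The point is not the specific classifier. The point is that only a small head gets trained for the new task, which is why a few hundred labeled examples can be enough.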
Second, they enable multimodal reasoning. The most exciting foundation models are not just processing text or images in isolation. They are learning to integrate clinical notes, lab values, imaging, genomic data, and patient history into unified representations. This mirrors how experienced clinicians actually think -- synthesizing across modalities rather than looking at each data type in a vacuum.
Third, they democratize sophisticated capabilities. Tasks that previously required specialized NLP pipelines -- extracting medications from clinical notes, identifying adverse events, coding diagnoses -- can now be accomplished with well-crafted prompts to general-purpose models. This lowers the technical barrier for healthcare organizations that do not have deep machine learning expertise but desperately need these capabilities.
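As a rough illustration of what that looks like in practice, here is a sketch of prompt-based medication extraction. The prompt wording, the output schema, and the call_llm placeholder are assumptions made for the sake of the example; they do not refer to any particular vendor's API.

```python
# Sketch of prompt-based extraction from a clinical note. call_llm is a
# placeholder for whatever general-purpose model endpoint an organization
# uses; the prompt and output schema are illustrative assumptions.
import json

EXTRACTION_PROMPT = """Extract all medications from the clinical note below.
Return a JSON list of objects with keys: name, dose, route, frequency.
Return [] if no medications are mentioned.

Note:
{note}
"""

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (hosted or local)."""
    # Canned response so the sketch runs end to end without an API key.
    return '[{"name": "metformin", "dose": "500 mg", "route": "oral", "frequency": "BID"}]'

def extract_medications(note: str) -> list[dict]:
    raw = call_llm(EXTRACTION_PROMPT.format(note=note))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Malformed output goes to human review rather than into the record.
        return []

note = "Patient continues metformin 500 mg PO BID for type 2 diabetes."
print(extract_medications(note))
```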
Where the Real Value Is Emerging
I am skeptical of most "AI in healthcare" announcements because the gap between demo and deployment is vast. But there are three areas where foundation models are already delivering measurable value.
Clinical Documentation and Coding
The administrative burden of clinical documentation is crushing. Physicians spend more time on paperwork than on patient care. Foundation models fine-tuned for clinical text can draft notes, suggest diagnosis codes, extract structured data from unstructured narratives, and identify documentation gaps. This is not sexy work, but it is high-impact work. Organizations deploying these systems are seeing 20-30% reductions in documentation time and meaningful improvements in coding accuracy.
The key is that these are augmentation tools, not replacement tools. The physician still reviews and signs off. But the cognitive load shifts from generation to verification, which is a fundamentally different and less exhausting task.
Drug Discovery and Target Identification
Pharmaceutical companies are using foundation models trained on molecular structures, protein sequences, and scientific literature to accelerate early-stage drug discovery. Models like AlphaFold have already transformed how we predict protein structures. Newer systems are identifying potential drug targets, predicting compound toxicity, and even suggesting novel molecular structures that would take human chemists years to conceive.
The economics here are compelling. Drug discovery is a process where 90% of candidates fail, and each failure costs millions. If foundation models can improve the hit rate by even a few percentage points, the ROI is enormous.
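The arithmetic is worth sketching, with the caveat that every number below is an illustrative assumption, not an industry benchmark.

```python
# Back-of-the-envelope illustration of why small hit-rate gains matter.
# Every figure here is an illustrative assumption, not an industry benchmark.
cost_per_candidate = 5_000_000  # assumed spend per candidate that enters development

def expected_cost_per_success(hit_rate: float) -> float:
    # On average, 1 / hit_rate candidates must be funded to get one success.
    return cost_per_candidate / hit_rate

baseline = expected_cost_per_success(0.10)  # 10% of candidates succeed
improved = expected_cost_per_success(0.13)  # 13% with model-assisted triage

print(f"baseline cost per success: ${baseline:,.0f}")
print(f"improved cost per success: ${improved:,.0f}")
print(f"savings per success:       ${baseline - improved:,.0f}")
```

In this toy example, a three-point improvement in hit rate cuts the expected cost per success by roughly a quarter, which is the kind of leverage that justifies serious investment.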
Evidence Synthesis and Literature Review
This one is personal to me because it touches directly on my work in real-world evidence. The scientific literature is growing faster than any human can read. Foundation models can now synthesize findings across thousands of papers, identify gaps in the evidence base, surface relevant studies that traditional search would miss, and even generate preliminary systematic review outputs.
I am not suggesting we let AI write our meta-analyses unsupervised. But using foundation models to do the initial heavy lifting -- screening abstracts, extracting study characteristics, identifying heterogeneity in findings -- frees researchers to focus on the interpretive and analytical work that actually requires human judgment.
The Risks Are Real
I would be doing you a disservice if I only talked about the upside. Foundation models in healthcare carry substantial risks that are not being taken seriously enough.
Hallucination is not a minor technical issue. These models can generate plausible-sounding but completely fabricated information. In a clinical context, that could mean suggesting a drug interaction that does not exist or missing one that does. Any deployment must include robust validation pipelines and human oversight for high-stakes outputs.
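One concrete guardrail is to treat high-stakes model claims as unverified until they match a curated reference source. The sketch below shows that triage step; the tiny interaction list and the claim format are illustrative assumptions, not a clinical knowledge base.

```python
# Sketch of one guardrail: never surface a model-asserted drug interaction
# unless it appears in a curated reference source. The tiny reference set and
# the claim format are illustrative assumptions.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}),
    frozenset({"simvastatin", "clarithromycin"}),
}

def triage_interaction_claims(claims: list[tuple[str, str]]) -> dict[str, list]:
    """Split model output into supported claims and claims needing human review."""
    supported, needs_review = [], []
    for drug_a, drug_b in claims:
        pair = frozenset({drug_a.lower(), drug_b.lower()})
        (supported if pair in KNOWN_INTERACTIONS else needs_review).append((drug_a, drug_b))
    return {"supported": supported, "needs_review": needs_review}

model_claims = [("warfarin", "aspirin"), ("metformin", "lisinopril")]
print(triage_interaction_claims(model_claims))
```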
Bias amplification is structural. Foundation models learn from the data they are trained on. If that data reflects historical healthcare disparities -- and it does -- the models will perpetuate and potentially amplify those disparities. Organizations deploying these systems need rigorous fairness auditing across demographic groups.
Regulatory clarity is still evolving. The FDA's framework for AI/ML-based software as a medical device is improving, but foundation models challenge existing categories. Is a general-purpose clinical language model a medical device? What about when it is fine-tuned for a specific diagnostic task? Organizations need to engage with regulatory strategy early, not after they have built and deployed a system.
What I Would Do
If I were advising a healthcare organization on foundation model strategy today, here is what I would recommend:
Start with high-value, lower-risk applications. Clinical documentation, literature synthesis, and internal analytics are safer starting points than direct patient care decisions. Build organizational muscle and validation infrastructure before moving to higher-stakes domains.
Invest in evaluation capabilities. You cannot safely deploy what you cannot rigorously evaluate. Build or acquire the ability to benchmark model performance against clinical ground truth, test for bias across patient subgroups, and monitor for drift over time.
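To give a sense of what that looks like at its simplest, here is a sketch of per-subgroup performance measurement plus a crude drift signal. The column names, metric choices, and toy data are assumptions for illustration; a real harness would be larger, but the shape is the same.

```python
# Sketch of an evaluation harness: per-subgroup sensitivity/specificity
# against clinical ground truth, plus a crude drift signal over time.
import pandas as pd

def subgroup_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Expects columns: subgroup, y_true (0/1), y_pred (0/1)."""
    rows = []
    for group, g in df.groupby("subgroup"):
        tp = ((g.y_true == 1) & (g.y_pred == 1)).sum()
        tn = ((g.y_true == 0) & (g.y_pred == 0)).sum()
        fn = ((g.y_true == 1) & (g.y_pred == 0)).sum()
        fp = ((g.y_true == 0) & (g.y_pred == 1)).sum()
        rows.append({
            "subgroup": group,
            "n": len(g),
            "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
            "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        })
    return pd.DataFrame(rows)

def positive_rate_drift(df: pd.DataFrame) -> pd.Series:
    """Positive-prediction rate by calendar month; large swings prompt investigation."""
    return df.groupby(df["timestamp"].dt.to_period("M"))["y_pred"].mean()

# Toy data standing in for a real validation set.
df = pd.DataFrame({
    "subgroup": ["A", "A", "B", "B", "B", "A"],
    "y_true":   [1, 0, 1, 1, 0, 0],
    "y_pred":   [1, 0, 0, 1, 0, 1],
    "timestamp": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-03",
                                 "2024-02-15", "2024-03-01", "2024-03-10"]),
})
print(subgroup_metrics(df))
print(positive_rate_drift(df))
```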
Do not build from scratch unless you have to. For most healthcare organizations, the right strategy is to leverage pre-trained foundation models from established providers and fine-tune for specific use cases. The resources required to train competitive foundation models from scratch are beyond what most organizations can justify.
Get your data house in order. Foundation models are only as good as the data they are fine-tuned on. If your clinical data is fragmented, poorly curated, or inconsistently coded, you will not be able to adapt these models effectively. Data infrastructure investment is a prerequisite for AI success.
Plan for the regulatory conversation. Even if your initial applications do not require FDA clearance, your trajectory likely leads there. Engage with regulatory affairs early. Document your development and validation processes in ways that will support future submissions.
The Bottom Line
Foundation models represent the most significant technical shift in healthcare AI since deep learning emerged a decade ago. They lower barriers, enable new capabilities, and will reshape competitive dynamics across pharma, health systems, and payers.
But the organizations that capture value from this shift will not be the ones that move fastest. They will be the ones that move most deliberately -- building the evaluation infrastructure, the governance frameworks, and the clinical integration pathways that separate hype from real impact.
The foundation is laid. The question is what you build on it.