Introducing AI in Health Systems: The Workflow‑First Approach CEOs Want in 2026
- mark70309
- Apr 7
- 7 min read

The AI pilot launched in January with executive fanfare. By March, clinicians had found seventeen workarounds to avoid using it. By June, it was quietly deactivated—$840,000 spent, zero sustained workflow change, and one more reason for your frontline staff to distrust the next 'innovation.'
Sound familiar?
Across health systems in 2026, COOs and CIOs are watching this pattern repeat: not because the AI is bad, but because no one designed the workflow first. And there's real urgency to get this one right.
For COOs and CIOs, that pressure takes a very specific shape during AI introduction and EHR or digital workflow changes. Your CEOs are signaling a top‑box priority that sounds like this:
Focus on a limited number of high‑yield, workflow‑specific initiatives.
Prove value fast, ideally within the fiscal year and within a clearly defined workflow.
Show visible wins on access, equity, and clinician burden—not just dashboards and decks.
The question isn’t, “What’s our AI strategy?” anymore. It’s, “Where can we apply AI narrowly, inside a real clinical or operational workflow, to deliver measurable benefit without destabilizing everything around it?”
One powerful answer is hiding in plain sight—in a story out of Johns Hopkins.
What Johns Hopkins Did Differently with AI and Retinal Screening
Diabetic retinopathy is a leading cause of vision loss in people with diabetes. In a 2024 Johns Hopkins study of AI‑driven retinal screening for youth with diabetes, researchers found that introducing autonomous AI exams into routine care dramatically increased screening completion, with some cohorts approaching universal completion among those offered the in‑clinic exam.
A related Johns Hopkins report in 2023 highlighted a pediatric program where diabetic retinopathy screening rates jumped from roughly half of eligible patients to well over 90 percent after implementing an AI‑enabled workflow in primary care settings. These gains were especially pronounced among racial and ethnic minority youth and patients insured by Medicaid, who had previously been least likely to receive timely eye exams.
In other words, Johns Hopkins didn’t just add an AI tool. They moved the point of care. Instead of sending patients somewhere else for an exam, they brought the exam to where patients already are.
Since launching autonomous AI retinal screening in 2020, Johns Hopkins has seen patterns like:
Screening completion rates climbing dramatically for young patients with diabetes.
The largest gains among youth from racial and ethnic minority groups and those insured by Medicaid, the groups previously least likely to receive timely screening.
A sustainable operational model—once the workflow was in place, it didn’t depend on daily heroics or “special project” energy to keep working.
Imagine being five years into those kinds of integrated AI results in your own system, where a single, well-chosen workflow quietly delivers better access, stronger equity, and less friction for clinicians every clinic day. Even better if you can show ROI along the way.
Autonomous AI for pediatric diabetic eye exams pays off when used at scale. In the first year, it can cost more per patient at low volumes, but it becomes clearly cost-saving as volume grows.
In an npj Digital Medicine study, AI screening ranged from about $242 more per patient at low volumes to about $140 in savings per patient at higher volumes. The break-even point was around 241 pediatric patients per year.
Larger systems that can route more children through an AI-enabled workflow benefit the most. They both save money per exam and screen more youth than traditional eye care–only pathways.
The authors also noted equity and productivity upside: bringing exams into routine pediatric or endocrine visits can reduce missed screenings tied to separate eye appointments. And because the model only counts first‑year costs, long‑term savings could be greater, even if not every assumption fits every setting.
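The break-even pattern above behaves like a simple fixed-plus-variable cost model: an annual fixed cost (camera, licensing, training) amortized over patient volume, offset by a per-patient saving. The sketch below is purely illustrative; the `fixed_cost` and `per_patient_saving` values are assumptions chosen to reproduce the study's reported ~241-patient break-even point, not the actual inputs of the npj Digital Medicine model, which is considerably more detailed.

```python
# Toy fixed-plus-variable cost model for AI-enabled screening.
# fixed_cost and per_patient_saving are hypothetical values chosen so the
# break-even point lands near the ~241 patients/year reported in the study.

def net_saving_per_patient(annual_volume: int,
                           fixed_cost: float = 33_740.0,
                           per_patient_saving: float = 140.0) -> float:
    """Per-patient saving minus the fixed cost amortized over annual volume."""
    return per_patient_saving - fixed_cost / annual_volume

def break_even_volume(fixed_cost: float = 33_740.0,
                      per_patient_saving: float = 140.0) -> float:
    """Annual volume at which amortized fixed cost equals per-patient saving."""
    return fixed_cost / per_patient_saving

print(round(break_even_volume()))           # ~241 patients/year
print(round(net_saving_per_patient(100)))   # net cost at low volume
print(round(net_saving_per_patient(1000)))  # net saving at scale
```

The takeaway for planning is structural, not numeric: below the break-even volume the program is a per-patient cost, above it a per-patient saving, which is why larger systems that can route more children through the workflow benefit most.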
Inside the Exam Room: How the AI Retinal Workflow Actually Works
The most important part of this story isn’t the AI model. It’s the workflow.
Here, in simple terms, is what changed in the primary care setting:
The hardware lives in primary care. A non-mydriatic retinal camera sits in the clinic, often in a small room near the exam rooms. No dilation drops, no trip to a separate specialist office.
Medical assistants or nurses run the exam. During a routine diabetes visit, the MA or nurse takes a set of retinal images using the camera. They do not need to be eye care specialists. Training focuses on positioning, capturing acceptable images, and basic troubleshooting.
AI reads the images in real time. The autonomous AI algorithm analyzes the images within minutes and classifies the results as “no refer,” “referable,” or “non-diagnostic—repeat or refer.” The clinician does not manually interpret the images.
Results flow directly into the record and next steps. The result is written back into the electronic record. If a referral is needed, the workflow can trigger an order, a message to scheduling, or a care management task—depending on how the health system designs it.
Specialists see the right patients. Ophthalmology clinics receive a more filtered stream of patients who truly need follow-up, rather than a mix of routine negative screens and late-stage disease.
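The routing logic in the steps above can be sketched as a small decision function. The three result categories come from the workflow described here; the action names (`document-in-ehr`, `ophthalmology-referral`, `repeat-or-refer`) are hypothetical labels for illustration, and any real implementation would map them to the health system's own EHR orders and tasks.

```python
# Illustrative routing sketch for the in-clinic AI screening workflow.
# Result categories mirror the article; action names are hypothetical.

from dataclasses import dataclass

@dataclass
class ScreeningResult:
    patient_id: str
    ai_output: str  # "no refer" | "referable" | "non-diagnostic"

def next_step(result: ScreeningResult) -> str:
    """Map the autonomous AI output to the follow-up action the workflow triggers."""
    if result.ai_output == "no refer":
        return "document-in-ehr"         # negative screen: record result, no referral
    if result.ai_output == "referable":
        return "ophthalmology-referral"  # flagged: referral order plus scheduling task
    return "repeat-or-refer"             # non-diagnostic images: retake or escalate

print(next_step(ScreeningResult("p1", "referable")))  # ophthalmology-referral
```

Notice that every branch ends in a concrete task owned by someone in the clinic; that is the "clearly defined roles, decision rights, and data flows" point in operational terms.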
From a COO or CIO lens, the key is this: the AI didn’t live in a lab or in a standalone app. It was inserted into a standard primary care visit, with clearly defined roles, decision rights, and data flows.
Why This Fits the 2026 CEO “Top Box”
When you look at the Johns Hopkins retinal AI program through the lens of your CEO’s current priorities, a pattern emerges.
Narrow scope, big upside. This wasn’t “AI everywhere.” It was AI in one very specific workflow—diabetic retinopathy screening—where gaps were visible, impact was measurable, and the outcome (preventing vision loss) is easy to understand.
Fast, trackable ROI. Success shows up in metrics you can track within a fiscal year: screening completion rates, time-to-diagnosis, specialist utilization patterns, and eventually reductions in late-stage complications and associated costs.
Equity gains built in, not bolted on. Because the exam takes place where patients already receive diabetes care, those who are least likely to make it to a separate eye clinic—such as young, minority, and Medicaid patients—benefit the most. Equity is not a side effect; it is central to the value story.
Clinician time is protected, not consumed. Primary care clinicians aren’t adding another interpretive task. MAs and nurses run the images; the AI reads them; the system handles much of the routing. For specialists, capacity is used for higher-value visits rather than routine negative screens.
This is the blueprint: a single, deeply integrated workflow with visible benefits for patients, clinicians, and the P&L.
Five Moves COOs and CIOs Can Make Now
You do not need a Johns Hopkins research bench to follow this pattern. You do need discipline in where and how you deploy AI into clinical workflows.
Here are five practical moves you can make in your organization:
1. Pick one “narrow, high-yield” workflow for AI
Instead of starting with, “What AI can we buy?” start with, “Which workflow meets all four of these tests?”
High volume and high clinical importance (e.g., screenings, chronic disease follow-up, high-risk medication monitoring).
Clear, guideline-based criteria for what constitutes success or failure.
Documented gaps today (missed screens, delayed follow-ups, no-shows, inequities by race or payer).
A straightforward, high-impact next step when the patient is flagged (referral, counseling, treatment change).
If a workflow fails one of these tests, it’s probably not your first AI pilot.
2. Design from the visit backward, not from the algorithm forward
Before you evaluate a single vendor, sit with a small group and map the visit in painful detail:
Who sees the patient first?
At what point could an exam, image, or questionnaire be added without derailing flow?
Who will actually start the AI process—MA, nurse, front desk, clinician?
How does the result display, and who is responsible for acting on it in real time?
If you can’t draw a simple swim lane diagram that anyone can follow, you’re not ready to implement the tech. Johns Hopkins’ success came from getting this part right.
3. Make “less administrivia per clinician” a core success metric
Many AI projects die because they technically “work,” but they add clicks, messages, or confusion.
For every AI-enabled initiative, define up front:
What specific tasks or steps will be removed from physicians and advanced practice clinicians?
How many minutes per visit are you aiming to save—or at least not add?
How will you capture the front-line perception of burden at 30, 60, and 90 days?
If you can’t articulate how clinician time will be protected or improved, the project is not aligned with your CEO’s 2026 priorities.
4. Build a one-page equity and ROI scorecard for each initiative
Think in terms of a simple, recurring view rather than a one-time business case slide deck. For each AI workflow, track:
Reach and completion by race, ethnicity, language, payer, and geography.
Downstream utilization (e.g., appropriate specialty referrals, reduced ED visits, avoided complications).
Operational impact (e.g., no-show rates, turnaround times, message volume).
Financial indicators (where possible) over 12–18 months.
This doesn’t need to be perfect on day one, but it needs to exist. If you cannot tell an equity and ROI story in one page, your CEO will struggle to rank this initiative in the “top box” next cycle.
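One way to make the "one page" discipline concrete is to treat the scorecard as a small, fixed data structure. The sketch below is a hypothetical shape, not a prescribed template: the four groupings mirror the tracking areas above, while the specific metric names and values are placeholder assumptions.

```python
# Illustrative one-page equity/ROI scorecard for a single AI workflow.
# Groupings follow the four tracking areas above; metrics are placeholders.

scorecard = {
    "initiative": "Autonomous AI diabetic retinopathy screening",
    "reach_and_completion": {
        "overall_completion": 0.91,
        # stratify by race, ethnicity, language, payer, geography; payer shown
        "by_payer": {"Medicaid": 0.89, "Commercial": 0.93},
    },
    "downstream_utilization": {
        "appropriate_specialty_referrals": 42,  # count this quarter
    },
    "operational_impact": {
        "no_show_rate": 0.08,
        "result_turnaround_minutes": 12,
    },
    "financial_12_18mo": {
        "net_saving_per_patient_usd": 106,  # placeholder estimate
    },
}

def fits_on_one_page(card: dict) -> bool:
    """Crude guardrail: keep the scorecard a single-level, one-page view."""
    return len(card) <= 6 and all(isinstance(v, (str, dict)) for v in card.values())
```

A structure like this keeps the recurring review honest: if a new metric does not fit one of the four groupings, it probably belongs in a deeper report, not the scorecard.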
5. Plan a 90-day “stability check” as part of go-live—not as an afterthought
AI implementations often get a flurry of attention at go-live and then quietly drift.
Bake in a 60–90 day stability review:
Are visit flows holding up, or are workarounds emerging?
Are front-line staff using the AI as intended, or selectively ignoring it?
Are we seeing early signs of the equity and ROI pattern we expected—or something different?
Do we need to adjust decision rights, escalation paths, or training?
Treat this as non-negotiable, just as you would with a major EHR or revenue cycle change.
If you’re considering AI in a clinical workflow this year—whether for screenings, managing chronic diseases, or another high-impact area—the Johns Hopkins retinal story offers a helpful pattern: start small, design the workflow first, protect clinician time, and track equity and ROI on a single page. If this is a live discussion in your system and you’d like to compare notes on deploying AI in operations, feel free to message me here on LinkedIn.
Further reading
Recent summaries of healthcare CEO priorities for 2026 (e.g., reports from ACHE, WittKieffer, and others) highlight a common pattern: focus on a smaller set of initiatives that protect margins, support workforce sustainability, and deploy AI where it clearly improves quality, safety, and efficiency. https://wittkieffer.com/insights/healthcare-ceos-agenda-top-priorities-for-2026
A 2024 study in Nature Communications on autonomous AI eye exams for youth with diabetes showed very high completion rates when exams were offered in routine care. https://doi.org/10.1038/s41467-023-44676-z
A 2023 Johns Hopkins Medicine report describing pediatric diabetic retinopathy screening rates rising from ~50% to >90% after implementing AI-enabled retinal screening in primary care, with especially strong gains for minority and Medicaid youth. https://www.hopkinsmedicine.org/news/articles/2023/12/with-ai-tool-johns-hopkins-clinician-boosts-diabetic-retinopathy-screening-to-95-among-pediatric-patients
Cost-effectiveness of AI for pediatric diabetic eye exams from a health system perspective. Journal: npj Digital Medicine, 2025, volume 8, article 3. Authors: Ahmed M, Dai T, Channa R, Abramoff MD, Lehmann HP, Wolf RM, et al. https://www.nature.com/articles/s41746-024-01382-4