AI Adoption in U.S. Clinics Doubled in 12 Months

A Report on ‘Adoption of AI in U.S. Clinics’: What Is Really Happening, and Where It Is All Heading

Executive Summary: The Six Numbers That Tell You Everything

Let’s start with the facts. If you only remember six things from this entire report, make them these six. 

  • 66% of U.S. doctors used AI in their practice in 2024, up from just 38% a year earlier. That is nearly double in 12 months. (AMA, 2024) 
  • 71% of U.S. hospitals had AI built into their patient records system in 2024, up from 66% the year before. (AHA/ONC, 2025) 
  • 100% of the 43 largest U.S. health systems surveyed had adopted AI note-writing tools. Every single one of them. (JAMIA, May 2025) 
  • 1,356 AI medical tools had been approved by the FDA by September 2025. But here is the catch: 97% of them were approved without any real patient outcome testing. (JAMA Network Open, Nov 2025) 
  • 81% vs. 50% is the AI adoption gap between big urban hospitals and small rural hospitals. And it is getting worse, not better. (AHA, Nov 2025) 
  • December 30, 2024 to January 12, 2025 was the specific two-week window when healthcare AI adoption flipped from a slow crawl to a full sprint, according to U.S. Census Bureau data. (PMC/Census, Jul 2025) 

Why Did All of This Happen So Fast? 

Honestly? Because doctors were exhausted, and AI showed up at exactly the right moment. 

Think about what the average American doctor’s day looked like in 2025. They spent 2 to 3 hours doing paperwork for every single hour they spent actually with a patient. More than half of all U.S. doctors reported being burned out. Not just tired. Genuinely questioning whether they could keep going. 

A big part of that exhaustion was the documentation. Every patient visit creates a pile of work: clinical notes, billing codes, insurance forms, referral letters. Doctors were seeing their last patient of the day and then sitting down to type notes for the next two hours. The medical community even has a name for it. They call it pajama time, which is the work you do at home, late at night, in your pajamas, when you should really be resting. 

So when AI note-writing tools arrived and said they would listen to the appointment and write the notes automatically, doctors did not need much convincing. Word spread fast, results were real, and adoption took off almost overnight. The data confirms this perfectly. U.S. Census Bureau research published in July 2025 tracked AI adoption across all healthcare businesses and found something striking. Growth was nearly flat for most of 2023 and 2024. Then it suddenly jumped to almost six times its previous rate within a single two-week window: December 30, 2024 to January 12, 2025. That is not a gradual trend. That is a switch being flipped.

What Is AI Actually Doing Inside Clinics Today? 

Healthcare AI is not one single technology. It is dozens of different tools doing completely different jobs. To really understand what is going on, you need to look at each one separately, what it does, what the evidence says, and what the limits are. 

3.1 AI Note-Writing: The Tool Doctors Love Most 

This is the big one, and it is the place where AI has made the fastest and most dramatic difference. 

Here is how it works. An AI note-writing tool, often called an ambient scribe, runs quietly in the background during a doctor’s appointment. It listens to the conversation between the doctor and the patient. After the appointment, it produces a full draft clinical note. The doctor reads it over, makes any needed corrections, and approves it. That review takes about 30 to 60 seconds. Compare that to the 10 to 15 minutes of typing that the same note would have required before. 
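
To make the workflow concrete, here is a minimal sketch of it in Python. The function names and the stubbed transcription and drafting steps are illustrative assumptions, not any vendor’s actual API; the part that matters is that every draft passes through a human review gate before it is approved.

```python
# A minimal sketch of the ambient-scribe workflow described above. The
# transcription and drafting steps are stubbed placeholders (real products
# use vendor speech-to-text and language models); the point of the sketch
# is the mandatory human review step before anything enters the record.

from dataclasses import dataclass

@dataclass
class DraftNote:
    text: str
    approved: bool = False

def transcribe_visit(audio_path: str) -> str:
    # Placeholder: a real system would call a speech-to-text service here.
    return "Doctor: How is the cough? Patient: Better since the antibiotics."

def draft_clinical_note(transcript: str) -> DraftNote:
    # Placeholder: a real system would prompt a language model here.
    return DraftNote(text="Subjective: cough improving on antibiotics.")

def clinician_review(note: DraftNote, corrections: str | None = None) -> DraftNote:
    # The 30-to-60-second human step: nothing is filed until a clinician signs off.
    if corrections is not None:
        note.text = corrections
    note.approved = True
    return note

def process_visit(audio_path: str) -> DraftNote:
    transcript = transcribe_visit(audio_path)
    draft = draft_clinical_note(transcript)
    return clinician_review(draft)  # never file an unreviewed draft

print(process_visit("visit_recording.wav").approved)  # True
```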

A May 2025 survey of 43 of the biggest U.S. health systems, published in JAMIA, found that AI note-writing was the only tool where every single health system, all 43 of them, had adopted it to at least some degree. More than half said it was working really well for them. That kind of unanimous adoption is almost unheard of for any new medical technology. 

 

What does the actual science say about this? 

Two very important studies were published in late 2025. These were not surveys or opinion polls. They were randomised controlled trials, which is the same gold standard method used to test new medicines. One group gets the treatment, another group does not, and you compare the results. 
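
For readers who want that comparison made concrete, the toy sketch below runs the basic two-group test such a trial relies on. The numbers are invented for illustration, not data from either study.

```python
# Toy illustration of the core RCT comparison: documentation minutes per day
# for a scribe group vs. a control group. All numbers are invented.

from statistics import mean
from scipy import stats

scribe_group  = [62, 55, 70, 58, 66, 60, 52, 64]   # minutes/day with the AI scribe
control_group = [95, 88, 102, 90, 98, 85, 93, 99]  # minutes/day, usual workflow

saved = mean(control_group) - mean(scribe_group)
t_stat, p_value = stats.ttest_ind(scribe_group, control_group)

print(f"Average time saved: {saved:.1f} minutes/day (p = {p_value:.4f})")
# Random assignment is what licenses the causal reading: the groups differ
# only by chance and by the scribe, not by who chose to use it.
```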

The first trial was published in NEJM AI in November 2025 by a team at UCLA Health. They randomly assigned 238 doctors across 14 different specialties to one of three groups: use Microsoft DAX, use Nabla, or keep doing things the normal way. After two months, here is what they found. 

  • Doctors using the Nabla scribe spent about 10% less time writing notes compared to the control group. 
  • Both AI tools produced meaningful improvements in doctor burnout scores and mental workload, roughly a 7% improvement. 
  • The AI did occasionally make mistakes, leaving out information, using wrong pronouns, or introducing small inaccuracies. One mild patient safety issue was reported during the study. 
  • The more a doctor actually used the scribe during appointments, the bigger their benefit was. Doctors who used it infrequently saw very little improvement. 

The second trial was published in NEJM AI in December 2025 by researchers at the University of Wisconsin. They found similar results: a meaningful reduction in burnout scores and about 30 fewer minutes of paperwork per doctor per day. The university was so confident in the findings that they immediately rolled the tool out to 800 doctors and nurses across Wisconsin and Illinois.

Important: here is the catch you really need to know about. AI scribes can and do make mistakes. Doctors must carefully read every single note the AI produces before it goes into the patient’s record. This technology works best as a helper, not as a replacement for human judgment. Any clinic that simply turns on the AI and trusts it without review is taking a genuine patient safety risk.

3.2 AI Reading Scans and X-Rays 

This is the area where AI has been developing the longest, and it has by far the most government approvals. AI tools designed to look at medical images like X-rays, CT scans, and MRIs can flag potential problems, help radiologists work faster, and catch things that might otherwise be missed. 

The same JAMIA survey of 43 health systems found that 90% had deployed some form of AI for medical imaging. But only 19% said it was actually working really well. That gap between widespread deployment and genuine success is something we see across many AI tools right now, and it is worth paying attention to. 

Here is what the evidence actually shows AI imaging tools can do when they work well: 

  • In stroke patients, AI-assisted triage has cut the time between arriving at hospital and starting treatment by up to 30 minutes. In stroke care, every single minute matters because time directly determines how much brain damage occurs. This benefit is genuinely life-saving. 
  • In breast cancer screening, AI-assisted reading of mammograms has reduced missed cancers by nearly 9% and brought down the number of unnecessary follow-up appointments. 
  • Radiologists working alongside AI detect problems 26% faster and spot nearly 30% more cases overall, according to a 2025 analysis. 

But here is something critical to understand about those 1,356 FDA approvals. A systematic review published in JAMA Network Open in November 2025 found that 97% of approved radiology AI tools were cleared through a process that does not require any clinical testing on real patients. The FDA verified that the tool was technically safe to use. It did not require any evidence that the tool actually improves patient outcomes. Only about 5% of approved AI devices were ever tested in a real clinical trial. FDA approval tells you the tool will not harm patients. It does not tell you it will help them. That is a very important distinction. 

3.3 AI That Spots Dangerously Sick Patients Early 

Some of the most powerful AI in hospitals does not perform any task you can see. It sits quietly in the background, continuously watching patient data, and raises an alert when it detects that someone is about to get much sicker. The most important use case for this today is sepsis. 

Sepsis is a life-threatening reaction to infection. It kills more than 270,000 Americans every year. It is also notoriously hard to catch early because the initial warning signs, a slightly elevated heart rate or a mild fever, could point to dozens of other, far less serious conditions. By the time sepsis becomes obvious, it is often very advanced and much harder to treat. 

In September 2025, Cleveland Clinic announced the expanded rollout of an AI sepsis detection tool across its hospitals, following a pilot that delivered extraordinary results. The system produced 10 times fewer false alarms compared to the previous approach. It identified 46% more sepsis cases. And it gave advance warnings before antibiotics were needed in seven times as many cases. 

Those numbers are worth sitting with for a moment. One of the biggest problems with older AI alert systems was something called alert fatigue. When a system fires off constant alarms, including many false ones, nurses and doctors gradually start ignoring all of them, including the real ones. Cutting false alarms by a factor of 10 means that when this AI flags a patient, people actually take it seriously. 
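
The report does not describe how Cleveland Clinic’s system works internally, but one standard tactic for cutting false alarms, sketched below as a generic illustration, is to require the risk score to stay elevated across consecutive readings before paging anyone, rather than alerting on every transient spike.

```python
# Generic sketch of one anti-alert-fatigue tactic: only fire when risk stays
# elevated across several consecutive readings. Threshold and window are
# illustrative assumptions, not any vendor's actual parameters.

from collections import deque

class SepsisAlerter:
    def __init__(self, threshold: float = 0.7, consecutive: int = 3):
        self.threshold = threshold
        self.window = deque(maxlen=consecutive)

    def update(self, risk_score: float) -> bool:
        """Feed in each new risk score; returns True when an alert should fire."""
        self.window.append(risk_score)
        return (len(self.window) == self.window.maxlen
                and all(s >= self.threshold for s in self.window))

alerter = SepsisAlerter()
for score in [0.4, 0.75, 0.5, 0.72, 0.78, 0.81]:  # hypothetical hourly scores
    if alerter.update(score):
        print(f"ALERT: sustained elevated sepsis risk ({score:.2f})")
```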

3.4 AI Handling Billing and Scheduling 

This is less dramatic than catching sepsis, but it is actually where the most money is moving and where adoption is growing the fastest. According to the 2025 AHA and ONC hospital survey, in just one year from 2023 to 2024, the share of hospitals using AI for billing jumped from 36% to 61%. Scheduling AI went from 51% to 67%. Those are the two fastest-growing AI applications in all of U.S. healthcare right now.

Who Is Getting AI, and Who Is Being Left Out? 

This is probably the most important section in this entire report. Because the data tells a very clear story, and it is not a comfortable one. 

If you live near a big urban hospital, you are probably already benefiting from healthcare AI in ways you may not even know about. If you live in a rural area and depend on a small local clinic, you almost certainly are not. And the gap between those two experiences is growing wider every year. 

4.1 The Numbers by Hospital Type 

The American Hospital Association published a detailed breakdown in November 2025 showing exactly which hospitals were using predictive AI. Here is what the data shows. 

  • 81% of urban hospitals use predictive AI (AHA, Nov 2025) 
  • 56% of rural hospitals use predictive AI (AHA, Nov 2025) 
  • 86% of hospitals that belong to a large health system (IntuitionLabs, Oct 2025) 
  • 31 to 37% of independent hospitals with no system affiliation (IntuitionLabs, Oct 2025) 
  • 80% of standard non-critical-access hospitals (AHA, Nov 2025) 
  • 50% of Critical Access Hospitals, the small rural facilities that are often the only option for miles around (AHA, Nov 2025) 

4.2 Why the Gap Exists 

A July 2025 ScienceDirect review and an August 2025 arXiv study focused on rural healthcare both pointed to the same set of root causes. 

  • Bad internet. Many rural areas still do not have reliable broadband. Most AI tools live in the cloud and simply do not work properly without a fast, stable connection. 
  • Old software. Rural clinics often run older, cheaper electronic records systems that cannot connect to newer AI tools at all. 
  • No IT staff. A three-person rural clinic does not have a technology director to evaluate AI tools, negotiate contracts, train staff, or fix things when they break. 
  • Thin budgets. Clinics that serve a high share of Medicaid patients operate on very slim financial margins. There is simply no money left over for new technology investments. 
  • Language barriers. Most AI tools only work well in English. In communities where many patients speak Spanish, Vietnamese, Somali, or other languages, this is a serious practical problem that goes far beyond inconvenience.

The ScienceDirect review put a number on the scale of this problem: 29% of rural adults are effectively shut out of AI-enhanced healthcare by the digital divide alone. And when AI tools do exist but were not trained on data from diverse patient populations, they can be 17% less accurate for minority patients. That is not a side issue. That has a direct impact on patient care. 

4.3 The Shadow AI Problem 

There is a newer concern that has only started emerging clearly in 2025 and 2026, and it goes by the name shadow AI. 

Shadow AI is what happens when hospital staff start using AI tools on their own, without telling the hospital and without any official approval. A doctor might copy patient notes into ChatGPT to get a quick summary. A nurse might use a personal AI app on their phone to help draft a patient response. It sounds harmless on the surface, but it creates real problems. 

These tools have not been checked for compliance with HIPAA privacy rules. They have not been reviewed for clinical accuracy. And if something goes wrong as a result of their use, nobody is quite sure who is legally responsible. It is a sign of how genuine the pressure on healthcare workers is, and how quickly technology is outrunning the rules designed to govern it.

Why Are More Clinics Not Using AI Yet?

The JAMIA survey of 43 major health systems, published in May 2025, asked leaders directly: what is the biggest thing stopping you from using AI more? The answers might surprise you. 

  • 77% said the biggest problem is that AI tools just are not good enough yet (JAMIA, 2025) 
  • 47% said cost was a significant barrier to adoption (JAMIA, 2025) 
  • 40% said regulatory confusion was holding them back (JAMIA, 2025) 
  • 17% said reluctance from doctors and nurses was the main issue (JAMIA, 2025) 

5.1 The Tools Just Are Not Good Enough Yet (77% said this) 

This is the most common barrier, and it makes total sense when you look at the underlying evidence. Many AI tools perform well in controlled test environments but then struggle in the real world, on different patient populations, on different hospital software systems, or when something in the clinical environment changes slightly. 

There is also something called model drift. An AI model that was accurate when it was first deployed can gradually become less accurate over time as patient populations shift and care patterns change. The problem is that most hospitals do not yet have systems in place to continuously monitor whether their AI tools are still performing the way they were promised to. The tool could be getting worse, and nobody would notice. 
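
A drift monitor does not have to be sophisticated. The minimal sketch below shows the missing piece: compare a rolling window of the model’s recent hit rate against the accuracy it was validated at, and flag when it slips. The baseline, tolerance, and window size are illustrative assumptions.

```python
# Minimal sketch of post-deployment drift monitoring: track rolling accuracy
# against confirmed outcomes and warn when it degrades past a tolerance.

from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy: float = 0.90,
                 tolerance: float = 0.05, window_size: int = 500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.results = deque(maxlen=window_size)

    def record(self, predicted: bool, actual: bool) -> None:
        self.results.append(predicted == actual)

    def check(self) -> str:
        if len(self.results) < self.results.maxlen:
            return "collecting data"
        accuracy = sum(self.results) / len(self.results)
        if accuracy < self.baseline - self.tolerance:
            return f"DRIFT WARNING: rolling accuracy {accuracy:.2%} vs baseline {self.baseline:.2%}"
        return f"OK: rolling accuracy {accuracy:.2%}"

monitor = DriftMonitor()
# In production, record() would be called as confirmed outcomes (e.g.,
# discharge diagnoses) arrive, with check() reviewed on a schedule.
```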

5.2 It Costs Too Much (47% said this) 

A good AI scribe subscription can cost tens of thousands of dollars per year for a single clinic. For a 400-doctor hospital system, that cost gets divided across enough people to feel manageable. For a 3-doctor rural practice, it could consume the entire technology budget. 

When the U.S. government asked clinicians for direct input on healthcare AI in early 2026 through a formal Request for Information, one of the most consistent responses was that insurance companies do not yet reimburse for AI-assisted care. That means clinics absorb the full financial cost with no offset from payers. Until reimbursement changes, the financial math does not work for many smaller providers. 

5.3 Nobody Knows What the Rules Are (40% said this) 

The regulatory landscape for healthcare AI right now is genuinely confusing, and that confusion is a real barrier to action. There are federal rules from the FDA. Privacy rules from HHS. Data standards from ONC. And on top of all of that, more than 250 AI-related healthcare bills were introduced across 34 or more states in 2025 alone. 

For a clinic administrator trying to make responsible, legally sound decisions, figuring out exactly what is required and what is prohibited is extremely difficult. And the single biggest unresolved legal question hanging over everything is this: if an AI tool gives wrong advice and a patient is harmed as a result, who actually gets sued? The doctor? The hospital? The AI company that built the tool? The law does not have a clear answer yet.

What the Regulations Actually Say Right Now 

If the rules around healthcare AI feel confusing to you, you are in very good company. Multiple federal agencies, more than 34 state legislatures, and international bodies are all simultaneously trying to regulate the same technology, and they do not always agree with each other. 

6.1 What the FDA Did in 2025 and 2026 

  • By September 2025, the FDA had approved a total of 1,356 AI-enabled medical tools. Radiology tools made up 77% of that total. 
  • In 2025, the FDA introduced new labelling rules requiring all AI medical tools to clearly state that they use AI, describe what data they rely on, and disclose any known risks or potential sources of bias. This was the first time AI tools faced mandatory bias disclosure requirements. 
  • In August 2025, the FDA finalized rules around how AI tools are allowed to update themselves after they have been approved. This matters because AI tools need to keep learning over time, but that learning process now needs to happen within a structured regulatory framework. 
  • In January 2026, the FDA reduced its oversight of low-risk AI tools such as fitness apps and wellness wearables, so that regulatory energy could be focused on the higher-stakes clinical tools. 
  • Also in early 2026, updated Clinical Decision Support guidance now requires that AI tools be designed in a way that allows clinicians to actually evaluate and question AI recommendations, rather than simply accepting whatever the AI says automatically. This was a direct attempt to address the well-documented risk of automation bias. 

Source: Bipartisan Policy Center: FDA Oversight of Health AI Tools (Dec 2025) 

6.2 What Individual States Are Doing 

According to a January 2026 healthcare policy report, more than 250 AI-related healthcare bills were introduced across 34 or more states in 2025. Every state is approaching this differently, which creates an increasingly messy patchwork of rules for any organisation operating across state lines. 

  • Colorado passed the most comprehensive state AI law. It requires disclosure whenever AI is used in any major healthcare decision, annual bias audits, and three years of record-keeping. Enforcement begins on June 30, 2026. 
  • Utah, since May 2025, requires upfront disclosure of AI use in regulated sectors including healthcare, with fines of $2,500 per violation. 
  • Texas requires plain-language disclosure whenever AI influences what is classified as a high-risk healthcare decision. 
  • The 2026 Medicare fee schedule introduced improved reimbursement for AI-enhanced services, creating a direct financial incentive for clinics to adopt qualifying AI tools.

The Risks You Really Should Know About 

AI in healthcare has real, proven benefits. We have just covered them. But it also has real, documented risks that are already happening right now, not at some point in the future. 

7.1 AI Can Be Biased Against Certain Patients 

AI systems learn from historical data. And historical data carries the fingerprints of historical inequalities. A May 2025 Royal Society review and a separate PMC ethics analysis both confirmed what this looks like in practice. 

  • AI tools for detecting skin diseases perform significantly worse on patients with darker skin tones, because the training data was drawn mostly from patients with lighter skin. 
  • A July 2025 review found that algorithmic bias leads to 17% lower diagnostic accuracy for minority patients in the tools where this problem has been directly measured. 
  • AI models trained predominantly on data from middle-aged Western patients tend to perform less effectively for elderly patients, children, and patients from underrepresented communities. 

This is not a theoretical future risk. It is happening to real patients right now. And it matters deeply because AI is supposed to help close gaps in healthcare quality, not widen them. 

This issue is now legally enforceable. HHS-OCR’s Section 1557 rule, which began enforcement in 2025, explicitly prohibits discrimination through AI clinical decision tools. It requires healthcare providers to actively identify and address any bias present in the AI tools they use. 
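
What does actively identifying bias look like in practice? At minimum, measuring performance per patient subgroup instead of reporting a single overall number. The sketch below, with hypothetical field names and made-up records, shows how an overall accuracy figure can hide a large gap between groups.

```python
# Minimal sketch of a subgroup bias audit: compute accuracy per group.
# Field names and records are hypothetical, invented for illustration.

from collections import defaultdict

def subgroup_accuracy(records: list[dict]) -> dict[str, float]:
    """Each record: {'group': str, 'predicted': bool, 'actual': bool}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += (r["predicted"] == r["actual"])
    return {g: hits[g] / totals[g] for g in totals}

# An overall accuracy of 80% here hides a 20-point gap between groups:
records = (
    [{"group": "A", "predicted": True,  "actual": True}]  * 90 +
    [{"group": "A", "predicted": True,  "actual": False}] * 10 +
    [{"group": "B", "predicted": True,  "actual": True}]  * 70 +
    [{"group": "B", "predicted": False, "actual": True}]  * 30
)
print(subgroup_accuracy(records))  # {'A': 0.9, 'B': 0.7}
```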

7.2 Your Data Is Being Used in Ways You May Not Know 

When an AI scribe records your appointment, that recording and the notes it generates are processed on a technology company’s servers. When hospitals use AI tools trained on patient records, your medical data may be part of what trained that model. 

HIPAA requires that all of this happens with proper safeguards and legal agreements in place. But in practice, patients are frequently unaware that any of it is happening, and the consent processes are often inadequate. 

A June 2025 PMC analysis of FDA AI approvals found that by mid-2025, only about 5% of approved AI medical devices had ever filed any adverse-event data at all. That means there is almost no systematic monitoring of how these tools behave once they are in real-world use. A tool can be deployed across thousands of hospitals and generate outcomes for millions of patients, and almost nobody is officially tracking whether anything is going wrong.

7.3 Nobody Knows Who Is Responsible When AI Gets It Wrong 

This is one of the most consequential unresolved questions in all of healthcare right now. Current medical malpractice law is built around the assumption that a human doctor made the clinical decision. When AI was involved in that decision, the question of who bears legal responsibility has no clear answer. 

It could be the doctor who trusted the AI recommendation. It could be the hospital that deployed the tool. It could be the AI company that built it. Healthcare systems gave direct feedback to the U.S. government on this exact issue in early 2026, flagging it specifically as a barrier to adoption. They are genuinely reluctant to invest in AI tools when they do not know what legal liability they might be taking on. 

7.4 AI Cannot Always Explain Itself 

Many AI tools produce an output without being able to explain the reasoning behind it. An AI might tell a nurse that a specific patient has a 78% chance of developing sepsis in the next six hours. But it cannot tell them why it reached that conclusion or which data points drove that prediction. The nurse just has to decide whether to trust the number or not. 

This is what researchers call the black box problem, and it is a genuine patient safety concern. New FDA guidance from 2025 and the updated Clinical Decision Support guidance from early 2026 now require that AI tools be designed so that clinicians can independently evaluate the AI’s recommendation rather than simply accepting it. This is a direct attempt to address what researchers have documented as automation bias, the human tendency to trust what a computer says even when we should be questioning it. 
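
Genuinely deep models resist explanation, which is the heart of the problem. But for a simple linear risk score, the sketch below shows what an evaluable design looks like: the prediction ships with the per-input contributions that drove it, so a clinician can see what pushed the number up. The weights and vitals are made up; this is not any vendor’s actual sepsis algorithm.

```python
# Toy logistic risk score that explains itself: each prediction is returned
# together with the contribution of each input. All weights are invented.

import math

WEIGHTS   = {"heart_rate": 0.03, "temperature": 0.9, "lactate": 0.8, "resp_rate": 0.05}
BASELINES = {"heart_rate": 75, "temperature": 37.0, "lactate": 1.0, "resp_rate": 16}
INTERCEPT = -2.0

def risk_with_explanation(vitals: dict) -> tuple[float, dict]:
    contributions = {k: WEIGHTS[k] * (vitals[k] - BASELINES[k]) for k in WEIGHTS}
    logit = INTERCEPT + sum(contributions.values())
    risk = 1 / (1 + math.exp(-logit))
    return risk, contributions

risk, why = risk_with_explanation(
    {"heart_rate": 118, "temperature": 38.6, "lactate": 3.1, "resp_rate": 24}
)
print(f"Sepsis risk: {risk:.0%}")
for feature, contrib in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contrib:+.2f}")  # lactate and temperature dominate
```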

What Doctors Think About All This

Doctor attitudes toward AI have shifted dramatically in just two years. The change has been from cautious scepticism to genuine enthusiasm, but with important reservations that have not gone away. That combination matters. 

The American Medical Association’s 2024 survey found that 68% of doctors now recognise real advantages in using AI for patient care. Two other findings stand out. 57% say that reducing administrative burden is AI’s single biggest opportunity in medicine. And among doctors already using an AI scribe, two-thirds report saving between one and four hours a day on documentation. For a doctor who used to write notes at midnight, getting those hours back is genuinely life-changing. 

But doctors are raising consistent, specific concerns that have not been resolved, and they deserve to be taken seriously. 

  • Accuracy. AI-written notes can contain real errors, and doctors carry the weight of having to review every single one. That review responsibility adds its own kind of pressure. 
  • Liability. If an AI tool contributes to a harmful mistake, who is legally responsible? Nobody knows yet. That uncertainty makes doctors uncomfortable. 
  • Non-English speakers. AI scribes work poorly in languages other than English. For clinics serving immigrant communities, this is not a minor limitation. It is a fundamental gap. 
  • Children and elderly patients. Direct feedback submitted to the U.S. government in early 2026 flagged that AI tools perform less effectively for pediatric and geriatric patients, because those groups are not well represented in the training data. 
  • Long-term skill erosion. Some physicians are genuinely worried that depending heavily on AI tools could gradually dull the clinical instincts and judgment they built through years of hands-on practice. 

One more finding is worth highlighting. Research showed that the doctors who got the greatest benefit from AI scribes were the ones who used them the most consistently. The benefit does not appear automatically. It comes from proper training, real commitment to the workflow change, and using the tool regularly. Giving a doctor access to AI is not the same as successfully implementing it.

Where Is All of This Heading? Predictions for 2026 and 2027 

Everything you have read up to this point has been based entirely on facts. Documented, sourced, verifiable data from 2025 and early 2026. 

This section is different. This is where we look ahead and make predictions about what comes next in 2026 and 2027. 

But here is the important thing to understand about these predictions. Every single one of them is grounded in a trend that is already visible in the data we have just reviewed. We are not speculating or guessing. We are following existing patterns to where they logically lead. 

Think of it like watching a ball that someone has just thrown. You cannot know exactly where it will land. But based on where it is right now, how fast it is moving, and what direction it is going, you can make a very solid prediction. That is exactly what we are doing here. 

9.1 AI Will Be Built Into the Software Doctors Already Use 

Right now, adopting an AI tool usually requires a deliberate choice: find the tool, evaluate it, negotiate a contract, train your staff, and manage the implementation. That process takes time, money, and organizational capacity that many smaller clinics simply do not have. 

But that is about to change fundamentally. The largest hospital networks are now deploying AI tools embedded directly into their clinical workflows, automating documentation, simplifying billing, and reshaping how providers communicate with patients, all from within the systems they already use every day. Even platforms built for smaller independent practices have gone fully AI-native, with artificial intelligence no longer bolted on as a feature but architected into the foundation from day one, now reaching more than 160,000 provider endpoints. OmnIMD is part of this same wave, purpose-built to bring that same AI-native thinking to the practices that need it most.

What this means in practice is that AI is going to stop being something you adopt and start being something that shows up in the software you already use. Think about the way spell-check appeared in word processors. Nobody decided to adopt spell-check. It was just there one day, and eventually everyone used it. AI in healthcare is heading in exactly the same direction. 

Prediction: By the end of 2026 

More than 80% of U.S. hospitals will have at least one AI tool actively running, not because they went looking for it, but because it arrived inside their existing software updates. The conversation will shift from asking whether a clinic uses AI to asking which parts of AI it is using well. 

9.2 The Rural Gap Will Become the Biggest Healthcare Equity Crisis 

The gap we see right now, 81% adoption at urban hospitals versus 50% at rural ones, is already serious. But here is what makes it a crisis going forward. 

AI is about to make care quality measurably better at large, well-resourced hospitals. AI scribes will keep experienced doctors in practice longer by reducing their burnout. AI diagnostic tools will catch more cancers and strokes earlier. AI prediction models will flag deteriorating patients before they crash. All of that improvement is coming, but mainly to hospitals that already have the infrastructure to implement it. 

Meanwhile, a small critical-access hospital in rural Wyoming, serving a community with 30% Medicaid patients and unreliable internet, is being left further and further behind. Not because anyone planned it that way. Because the technology is being built for and sold to the customers who can pay for it. 

The 2025 ScienceDirect review was direct about this: 29% of rural adults are already locked out of AI-enhanced healthcare. Without targeted action, including rural broadband investment, affordable AI licensing models for small clinics, and tools designed to work on lower-bandwidth connections, that number is going to get worse before it gets better. 

Prediction: By the end of 2027 

Without targeted federal or state investment to close the gap, the AI adoption divide between urban and rural healthcare will widen to more than 40 percentage points. That gap will begin showing up in measurable patient outcome differences, including earlier cancer detection and lower sepsis mortality rates, skewed heavily toward urban areas. Policymakers who do not act now will face very difficult questions about those outcome disparities in 2028. 

9.3 A Legal Reckoning Is Coming 

Right now, when AI makes a mistake that harms a patient, nobody knows with certainty who is legally responsible. The evidence we reviewed confirms this ambiguity is real, it was flagged explicitly in government feedback in early 2026, and it is already making hospital legal teams nervous. 

But legal ambiguity does not stay ambiguous forever. All it takes is one high-profile case. A patient is harmed by an AI-assisted diagnosis. A doctor gets sued. A court is asked to decide whether the doctor, the hospital, or the AI company bears the legal responsibility. Whatever that court decides becomes the precedent. Courts, legislators, and medical boards are going to be forced into this conversation, most likely within the next 12 to 18 months as AI use continues to expand and adverse events continue to accumulate. 

The direction of that ruling will shape everything that comes after. If liability lands on physicians, many doctors will stop using AI tools entirely to protect themselves. If it lands on hospitals, expect risk-averse hospital boards to pull back sharply from AI adoption. If it lands on AI vendors, expect legal indemnification clauses to drive up costs dramatically for everyone. 

Prediction: By the end of 2027 

At least one U.S. state, and possibly a federal court, will issue a significant ruling on AI liability in a healthcare context. That ruling will immediately change how AI vendor contracts are written across the entire industry. It is likely to be the single most consequential event shaping healthcare AI adoption over the following three years. 

9.4 The FDA Will Start Requiring Real Clinical Proof 

This fact bears repeating one more time because it is so important: 97% of the AI medical tools the FDA has approved were cleared without any clinical outcome testing. The FDA confirmed the tools were technically safe. Nobody required evidence that they actually help patients. 

That situation is not sustainable. The November 2025 JAMA Network Open systematic review called it out plainly. Researchers, patient advocates, and members of Congress have all been flagging it. The FDA itself acknowledged the problem when it introduced new labelling requirements in 2025. 

The logical next step, requiring at least some real-world clinical evidence before approving high-stakes AI tools, is almost certainly coming. The FDA’s 2025 real-world evidence pilot programme, called Technology-Enabled Meaningful Patient Outcomes, is a direct experiment in how to collect that evidence at scale. That programme is a trial run for a future where clinical proof is required, not optional. 

Prediction: By the end of 2026 

The FDA will introduce tiered approval requirements. High-risk AI tools, meaning those involved in diagnosis or treatment decisions, will require at least some clinical outcome evidence before approval. Lower-risk tools will remain on the current fast-track pathway. This will slow new high-risk approvals in the short term, but will significantly increase trust in the ones that do make it through. 

9.5 AI Scribes Will Become as Normal as a Stethoscope 

The two randomised controlled trials published in NEJM AI in late 2025, at UCLA and the University of Wisconsin, did something very specific and very important. They gave hospital leaders the kind of high-quality, unambiguous evidence they needed to justify large-scale rollouts with confidence. 

The University of Wisconsin’s response tells you everything. They published their trial results, and then immediately deployed the tool to 800 clinicians across two states. That is the speed at which things move when the evidence is genuinely compelling. 

Several major AI scribe platforms are now competing for hospital contracts, including Microsoft’s Nuance DAX, Nabla, and Suki. Real competition is pushing prices down and features up. Within two years, the question will not be whether to use AI scribes. It will be which one to use and how to train staff to use it well. 

Prediction: By the end of 2026 

AI ambient scribes will be actively used by more than 75% of large U.S. health systems and will start appearing widely in smaller practices through EHR software bundles. The central implementation challenge will shift from deciding whether to adopt the technology to ensuring that every AI-generated note is properly reviewed by a clinician. Patient safety guidelines specifically addressing AI note review are expected from the Joint Commission and medical licensing boards. 

9.6 State Regulations Will Get Messy Before They Get Cleaner 

More than 250 healthcare AI bills were introduced across 34 or more states in 2025 alone. Colorado’s comprehensive AI Act takes effect on June 30, 2026. Utah is already imposing fines for disclosure violations. Texas has its own approach. 

Meanwhile, the federal picture is pulling in the opposite direction. The Trump administration issued an executive order in early 2026 aimed at loosening AI oversight, warning that excessive state-level regulation could slow down growth and innovation. That order is expected to face significant legal challenges. The result is a collision course between states moving toward stricter rules and a federal government pushing for lighter ones. 

For any healthcare organisation that operates in multiple states, this is a compliance nightmare in the making. A hospital system operating in Colorado, Utah, Texas, and California in 2027 will be navigating four different sets of AI rules simultaneously, with no unified federal standard to simplify the picture. 

Prediction: By the end of 2027 

The growing patchwork of conflicting state AI rules will create enough compliance chaos that major hospital associations will formally lobby Congress for a unified federal standard. A federal healthcare AI framework, likely built around disclosure requirements, bias testing obligations, and vendor accountability, will be introduced in Congress, though it will probably not be fully passed within this window. In the meantime, organisations operating across multiple states will need dedicated AI compliance staff for the first time.

Wrapping It All Up 

Here is the honest summary of where American healthcare AI stands as of early 2026. 

The good news is real and it is supported by solid evidence. AI note-writing is genuinely helping doctors claw back hours of their lives from the documentation trap, and two randomised controlled trials now prove it works. AI is reading scans faster and helping catch more cancers and strokes earlier. AI sepsis detection at leading hospitals is saving lives by cutting through the noise of constant false alarms. And the pace of adoption is still accelerating. 

But the uncomfortable truths are also real and supported by the same evidence. The gap between who gets AI and who does not is growing, and it maps almost perfectly onto the existing inequalities in American healthcare. The patients who most need better care are the least likely to benefit from AI improvements. 97% of FDA-approved AI tools were cleared without any proof they actually help patients. Nobody knows who is legally responsible when AI makes a harmful mistake. And most patients have no idea their appointments are being recorded and processed. 

The next 12 to 18 months are going to be genuinely formative for this technology and for American healthcare. The decisions being made right now, by regulators, hospital boards, state legislators, and AI companies, will determine whether AI becomes a tool that makes healthcare better for everyone, or a technology that reinforces a two-tier system where cutting-edge care is available only to people lucky enough to live near a well-funded urban hospital. 

That is not a technology question. It is a values question. And the window to get it right is open right now. 

Every Source 

Every fact in this report traces back to one of the sources below. All of them were published in 2025 or early 2026. 

Adoption Data 
Source: AHA/ONC: Hospital Trends in Predictive AI 2023 to 2024 (2025) 
Source: AMA: 2 in 3 Physicians Using Health AI (2024 Survey) 
Source: PMC/JAMA: Census Bureau BTOS Analysis of Healthcare AI Adoption (Jul 2025) 
Source: Becker’s Hospital Review: Half of U.S. Hospitals to Adopt Generative AI by End of 2025 (Dec 2025) 
Source: JAMIA: Poon et al., Survey of 43 Health Systems (May 2025) 
Source: AHA: 4 Actions to Close the Predictive AI Gap (Nov 2025) 
Source: IntuitionLabs: AI Adoption in U.S. Hospitals 2025 

AI Scribes 

Source: PubMed / NEJM AI: Lukac et al., AI Scribes Randomized Trial (Nov 2025) 
Source: UCLA Health: AI Scribes Study Press Release (Nov 2025) 

Source: UW-Madison / NEJM AI: Ambient Scribe Reduces Burnout (Dec 2025) 
Source: JMIR AI: Real-World Evidence on AI Scribes: Rapid Review (Oct 2025) 

AI Radiology and Imaging 

Source: JAMA Network Open: FDA AI Approvals in Radiology: Systematic Review (Nov 2025) 
Source: The Imaging Wire: FDA AI Device Authorizations Update (Dec 2025) 

Source: BCC Research: How AI Is Changing Medical Imaging (2025) 
Source: IntuitionLabs: AI in Radiology: 2025 Trends and FDA Approvals 

Sepsis and Predictive AI 

Source: Cleveland Clinic: Bayesian Health AI Sepsis Detection Rollout (Sep 2025) 

Equity and Rural Healthcare 

Source: ScienceDirect: AI as Catalyst for Health Equity in Primary Care (Jul 2025) 
Source: arXiv: AI in Rural Healthcare Delivery: Bridging Gaps (Aug 2025) 

Ethics, Bias, and Privacy 

Source: PMC: The Illusion of Safety: FDA AI Healthcare Approvals (Jun 2025) 
Source: Royal Society Open Science: Ethical and Legal Considerations in Healthcare AI (May 2025) 
Source: PMC: Ethical Challenges in AI Clinical Practice (2025) 
Source: HHS RFI Comments: AI in Clinical Care (Feb 2026) 

Regulation and Policy 

Source: Bipartisan Policy Center: FDA Oversight of Health AI Tools (Dec 2025) 
Source: blueBriX: The 2026 AI Reset: Healthcare Policy (Jan 2026) 
Source: Telehealth.org: FDA Clarifies AI Software Oversight (Jan 2026) 
Source: Faegre Drinker: FDA 2026 Clinical Decision Support Guidance 
Source: Jimerson Firm: Healthcare AI Regulation 2025: New Compliance Requirements (Feb 2026)