What If We Are Misallocating the Most Powerful Technology of Our Generation?
Healthcare AI investment is a declaration of priorities. Follow the money.
Look at where the capital is going. The biggest AI investments in healthcare right now cluster around three functions: automated coding, risk adjustment optimization, and prior authorization. All three sit on the revenue cycle. All three are sold on the same promise: accuracy, speed, defensibility.
That promise is real. The technology works. But "it works" is not the same as "it helps."
Risk adjustment has a valid purpose. Sicker, more complex patients should bring more resources with them into the system. Accurate coding matters. Fraud should be caught. None of that is controversial.
What should be controversial is how little we question the pattern.
The coding layer
AI-powered coding tools are getting remarkably good at reading clinical documentation and generating billable codes. The pitch is straightforward: fewer human coders, faster turnaround, fewer missed codes. Health systems love it because it closes revenue gaps they didn't know they had.
But think about what "closing revenue gaps" actually means. In most cases, it means the documentation supported a higher-acuity code that a human coder missed or was too conservative to assign. The AI catches it. The system bills for it. Revenue goes up.
The patient's care didn't change. Their diagnosis didn't change. Their outcome didn't change. The only thing that changed is the check.
The risk adjustment layer
Risk adjustment is where this gets more complex. Medicare Advantage plans receive higher capitation payments for sicker patients. That's by design. The problem is that "making sure patients are coded accurately" and "making sure we capture every defensible diagnosis" start to blur together when there's a direct financial incentive on one side.
Neuro-symbolic AI, evidence trails, audit-ready documentation. These are real technical achievements. But the customer for these tools is not the patient. It's the plan or the system trying to defend its revenue against CMS enforcement and RADV audits. The innovation is pointed inward, toward the balance sheet, not outward toward the person receiving care.
When UPMC Enterprises and Microsoft back a company in this space, that's a market signal. It tells you where sophisticated buyers see ROI. And ROI here is measured in retained and recovered revenue, not in patient outcomes.
The prior authorization layer
This is where it gets harder to look away.
Prior authorization was designed as a utilization check. In practice, it has become a friction machine. Patients wait. Physicians burn hours on paperwork. Care gets delayed or abandoned. The human cost is well documented and staggering.
Now AI is being deployed on both sides. Payers use it to process and deny faster. Providers use it to appeal and resubmit faster. We have built an adversarial AI arms race around whether a patient gets the care their doctor ordered.
Step back and look at that clearly. We are spending engineering talent, capital, and compute on two opposing AI systems arguing over a fax about someone's MRI. And we are calling it innovation.
The question nobody wants to ask
Healthcare AI could be pointed at access. At catching a patient falling through the cracks before they end up in the ED. At surfacing social determinants that change a care plan. At giving patients real agency over their own health data instead of locking it behind portals they can barely navigate.
Some of that work is happening. But it is not where capital is flowing at scale. Large-scale capital follows revenue optimization because that's where the ROI model is clearest, the sales cycle is shortest, and the buyer has budget.
That's not a technology problem. It's a priorities problem.
Being honest about what we're building
I'm not arguing that revenue cycle AI is illegitimate. Health systems need to get paid. Accurate coding matters. Fraud detection matters.
But we should stop pretending this is primarily about the patient. When a company's value proposition is "defensible accuracy" against a federal auditor, the customer is the institution, not the person receiving care. When prior auth AI speeds up denials, the beneficiary is the payer's margin, not the patient waiting for treatment.
The language matters. "Revenue integrity" sounds noble. "Billing optimization" sounds like what it is. We should use the honest version.
Healthcare has a 30-year habit of wrapping financial engineering in clinical language. AI is making that habit more sophisticated, not less.
The builders and investors reading this have a choice. Not between profit and purpose. Between building AI that makes the current system more efficient at extracting value, and building AI that changes what the system is capable of.
Those are not the same project. And right now, almost all of the money is on the first one.
Paul J. Swider is CEO of RealActivity and an analyst for the Acceleration Economy. He has spent 30+ years building technology for health systems, including work at GE, IDX Systems, and Microsoft.