
The Federal Mandate to "Shift Left": WISeR

Jan 21, 2026

If you've been waiting for the definitive signal that the era of "Pay & Chase" is ending, it arrived this month.

For years, we have discussed the "Shift Left" in Payment Integrity, moving from chasing down overpayments months later to preventing them seconds before they happen. It's been a strategic goal for many, a buzzword for others.

But as of January 2026, with the official launch of the Wasteful and Inappropriate Service Reduction (WISeR) model, CMS has turned that strategic goal into the new industry standard.

[Figure: WISeR model overview, CMS pre-payment review (source: CMS)]

This isn't just another pilot program. It is a fundamental re-architecture of how traditional Medicare approaches claims integrity. For the first time, CMS is explicitly deputizing technology companies to use AI to review claims for specific, "low-value" clinical services before payment is issued.

The message is clear: The government is no longer willing to be the "dumb payer" that funds waste and hopes to recover it later. They are building a defensive perimeter.

For private payers, this is the wake-up call. But it also reveals a dangerous trap that most of the industry is walking blindly into: The Probability Trap.

The "Black Box" Problem

The WISeR model targets clinical gray areas: procedures like knee arthroscopies, spinal injections, and nerve stimulation. These aren't simple administrative errors. They are complex medical decisions that require nuance.

  • Does the patient have severe enough osteoarthritis to justify the surgery?
  • Did they actually fail six weeks of conservative therapy before the injection?

Here is the problem: The current generation of "Generative AI" (pure Large Language Models) cannot reliably solve this.

If you feed a 500-page medical record into a standard LLM and ask, "Is this surgery medically necessary?", the model will give you an answer based on statistical probability. It predicts the next most likely word.

It guesses.

In Payment Integrity, a guess is a liability. You cannot deny a $50,000 claim, and risk a provider appeal or a regulatory fine, on the strength of a system that is "85% confident."

A prediction is not a decision.

This is where the industry is stumbling. We are trying to solve logic problems with probability engines.

The Solution: Neuro-Symbolic AI (The Nēdl Labs Approach)

At Nēdl Labs, we saw this coming. We knew that for AI to work in the high-stakes world of pre-payment review, it couldn't just be creative; it had to be accountable.

This is why we built our platform on Neuro-Symbolic Compound AI.

This isn't just a buzzword. It is a fundamentally different architecture that combines the two necessary halves of human-like intelligence:

  1. The Neural Brain (Perception): This is the Deep Learning component. It is incredible at reading the "messy" reality of healthcare: faxed PDF charts, handwritten notes, and unstructured radiology reports. It doesn't make decisions; it extracts facts.
  2. The Symbolic Brain (Reasoning): This is the deterministic logic layer. It takes those facts and applies the hard, binary rules of medical policy (LCDs, NCDs) and payer contracts. It doesn't guess; it calculates.

By decoupling perception from reasoning, we solve the "Black Box" problem. We don't just tell you that a claim is denied; we show you the exact logic path, the specific policy clause, and the clinical evidence that triggered it.
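To make the decoupling concrete, here is a minimal sketch in Python. All names here (`neural_extract`, `symbolic_decide`, the "Policy clause 4.2" citation) are illustrative, not the Nēdl Labs API: the point is the shape of the split, where the perception layer only emits facts with their evidence locations, and the reasoning layer only applies deterministic rules that cite their policy source.

```python
def neural_extract(chart_text: str) -> dict:
    """Perception layer (stubbed here; a real system would use document AI)."""
    facts = {}
    for line_no, line in enumerate(chart_text.splitlines(), start=1):
        if "conservative therapy" in line.lower():
            # Record where the fact came from, so the decision can cite it.
            facts["conservative_therapy_documented"] = {
                "value": True,
                "evidence": f"chart line {line_no}",
            }
    return facts

def symbolic_decide(facts: dict) -> dict:
    """Reasoning layer: deterministic rules that cite their policy clause."""
    fact = facts.get("conservative_therapy_documented",
                     {"value": False, "evidence": None})
    if fact["value"]:
        return {"decision": "APPROVE",
                "rule": "Policy clause 4.2 (illustrative): therapy documented",
                "evidence": fact["evidence"]}
    return {"decision": "DENY",
            "rule": "Policy clause 4.2 (illustrative): therapy not documented",
            "evidence": None}

result = symbolic_decide(neural_extract("Visit 3: completed conservative therapy course"))
```

Note that the output is not a bare verdict: it carries the rule applied and the evidence location, which is exactly what makes the decision auditable.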

The result is Glass Box AI: explainable reasoning you can audit, not a black box you have to trust.

WISeR in Practice: Why Neuro-Symbolic Wins

Let's look at two specific examples from the CMS WISeR target list to see why this architecture is the only viable path forward.

Example 1: Arthroscopic Lavage for Osteoarthritis

The Rule: CMS generally does not cover arthroscopic lavage for osteoarthritis of the knee; it is considered a low-value procedure unless specific criteria are met.

  • The GenAI Failure: A standard LLM reads the chart and sees "osteoarthritis" and "pain." It hallucinates approval because it has seen millions of training documents where surgery followed pain. It misses the specific exclusionary criteria found in a different section of the chart.
  • The Nēdl Neuro-Symbolic Way:
      • Neural Step: Our neural engine ingests the radiology report and extracts the Kellgren-Lawrence grade (a measure of severity). It finds "Grade 4" in the unstructured text.
      • Symbolic Step: The symbolic engine runs the policy rule: IF Diagnosis = Osteoarthritis AND Severity >= Grade 4 THEN Procedure = Not Covered (Low Value).
      • Result: The claim is flagged with 100% precision, citing the specific policy line. Zero hallucination.
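The symbolic step above is, at heart, a few lines of deterministic code. This sketch (function name and reason string are illustrative; the KL grade threshold comes from the example) shows why there is nothing to hallucinate: the rule either fires or it doesn't.

```python
def check_arthroscopic_lavage(diagnosis: str, kl_grade: int) -> tuple[str, str]:
    """Deterministic policy check: returns (decision, auditable reason)."""
    # Rule from the example: osteoarthritis at KL grade >= 4 is low-value.
    if diagnosis == "osteoarthritis" and kl_grade >= 4:
        return "FLAG", f"Low-value: lavage for osteoarthritis at KL grade {kl_grade}"
    return "PASS", "Exclusion criteria not met"

decision, reason = check_arthroscopic_lavage("osteoarthritis", 4)
# decision is "FLAG"; the reason string cites the extracted severity grade
```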

Example 2: Epidural Steroid Injections (ESI)

The Rule: Coverage often requires documentation of "failure of at least 4 weeks of non-surgical conservative therapy."

  • The GenAI Failure: The LLM reads a note that says, "Patient tried physical therapy last month." It approves the claim. But "last month" might only mean two weeks of actual therapy. The LLM fails the math.
  • The Nēdl Neuro-Symbolic Way:
      • Neural Step: The model extracts the dates of physical therapy sessions from the billing history or clinical notes (e.g., "Started PT: Jan 4," "Ended PT: Jan 20").
      • Symbolic Step: The engine performs a temporal calculation: (Jan 20 - Jan 4) = 16 days. It compares this to the rule: 16 days < 28 days (4 weeks).
      • Result: Denial for insufficient conservative therapy duration. The math is auditable and irrefutable.
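The temporal check is equally simple once the dates are extracted. A minimal sketch, assuming the neural layer has already produced the start and end dates; the 28-day constant comes directly from the "4 weeks of conservative therapy" rule above:

```python
from datetime import date

MIN_CONSERVATIVE_DAYS = 28  # 4 weeks, per the coverage rule in the example

def check_conservative_therapy(start: date, end: date) -> tuple[bool, str]:
    """Return (sufficient?, auditable explanation) for therapy duration."""
    days = (end - start).days
    if days < MIN_CONSERVATIVE_DAYS:
        return False, f"{days} days of therapy < {MIN_CONSERVATIVE_DAYS}-day minimum"
    return True, f"{days} days of therapy >= {MIN_CONSERVATIVE_DAYS}-day minimum"

ok, explanation = check_conservative_therapy(date(2026, 1, 4), date(2026, 1, 20))
# ok is False: (Jan 20 - Jan 4) = 16 days, under the 28-day minimum
```

Where an LLM "fails the math," date arithmetic cannot: the explanation string is the calculation itself, ready for an appeal file.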

The "Balloon Effect" & The Private Market

Why does this matter to commercial payers right now? Because fraud and waste operate like a balloon. When you squeeze it in one place, it expands in another.

As Medicare tightens its defenses with WISeR, bad actors and wasteful patterns will not disappear. They will migrate to the path of least resistance: You.

If CMS is using AI to block these low-value claims pre-pay, and you are catching them post-pay (or not at all), you are effectively subsidizing the waste that Medicare just rejected.

From Generative to Grounded

2025 was the year of the "AI Hangover," when we realized that ChatGPT couldn't run a health plan.

2026 is the year of Grounded AI.

The WISeR model proves that pre-payment clinical review is scalable, but only if the technology is trusted. At Nēdl Labs, we aren't building AI that writes poetry. We are building AI that reads contracts, understands policy, and protects the healthcare dollar with mathematical certainty.

The federal government has shifted left. The question for private payer leadership is no longer if you will join them, but what architecture you will trust to get you there.

Don't settle for a probability. Demand a proof.


About the author

Ashish Jaiman

Founder nēdl Labs | Building Intelligent Healthcare for Affordability & Trust | X-Microsoft, Product & Engineering Leadership | Generative & Responsible AI | Startup Founder Advisor | Published Author