
Glass Box AI: Why Reasoning Is the Only Answer to Provider AI Intensity

Feb 3, 2026

I am at the Medical Cost Containment Series (HPRI), speaking with payer leaders about the concept of "Glass Box" payment integrity.


The conversations on the floor are intense, and the anxiety is palpable. We are all watching providers roll out ambient scribes and documentation copilots at breakneck speed. Everyone agrees that reducing administrative burden is critical, but we also know the incentives remain stubbornly fixed on optimizing the claim.

The uncomfortable truth we are confronting here in Dallas is that provider LLMs are materially increasing "coding intensity" while care intensity remains flat. We are entering an era in which the system rewards better documentation far more than it rewards better care, and Language Models are exceptional at documentation.

The solution isn't a black box. The solution is Nēdl Labs' Neuro-Symbolic AI platform, where reasoning is the product.

The New Risk Signal: "Discordance"

For payers, the defining risk signal of this new era is no longer simple error or crude fraud; it is discordance. This occurs when the coded severity of a patient profile rises significantly faster than the clinical treatment they actually receive.

Consider a concrete, increasingly common inpatient scenario: A GenAI tool scans a patient's EHR observations and identifies a single SpO₂ reading of 92% on room air. Based on this data point, a secondary diagnosis of acute respiratory failure is recommended. Technically, the data point exists. However, the treatment record indicates the patient received no oxygen, no bronchodilators, and no specific respiratory intervention.

The resulting jump in DRG weight represents a financial lift without clinical reality. It is no longer about whether a code is possible to justify; it is about whether the coded story is concordant with the care delivered.
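To make the pattern concrete, here is a minimal sketch of a discordance check: flag any coded diagnosis whose expected supporting interventions never appear in the treatment record. The treatment map, function names, and claim fields are hypothetical simplifications for illustration, not Nēdl Labs' actual logic.

```python
# Illustrative discordance check: flag coded diagnoses whose expected
# treatments are absent from the record. The treatment map below is a
# toy simplification, not a clinical reference.

# Expected supporting interventions per high-severity diagnosis (toy map)
EXPECTED_TREATMENTS = {
    "acute_respiratory_failure": {"supplemental_oxygen", "bronchodilator", "bipap"},
    "severe_sepsis": {"iv_antibiotics", "iv_fluids", "lactate_recheck"},
}

def discordant_diagnoses(coded_diagnoses, treatments_given):
    """Return diagnoses with no supporting intervention in the record."""
    treatments = set(treatments_given)
    flags = []
    for dx in coded_diagnoses:
        expected = EXPECTED_TREATMENTS.get(dx)
        if expected and not (expected & treatments):
            flags.append(dx)
    return flags

# The scenario above: respiratory failure coded, no respiratory intervention.
claim = {
    "diagnoses": ["pneumonia", "acute_respiratory_failure"],
    "treatments": ["oral_antibiotics", "iv_fluids"],
}
print(discordant_diagnoses(claim["diagnoses"], claim["treatments"]))
# → ['acute_respiratory_failure']
```

A real system would draw the diagnosis-to-treatment expectations from clinical criteria and payer policy rather than a hard-coded dictionary, but the core question is the same: does anything in the treatment record support the coded severity?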

The Three Patterns Redefining Complexity

We are seeing this discordance manifest in three distinct patterns that are blinding traditional rules-based engines:

  • Inpatient Complexity (CC/MCC Capture): GenAI is incredibly efficient at surfacing secondary diagnoses to elevate the Case Mix Index. We are seeing spikes in complication capture without any corresponding change in the actual treatment provided.
  • DRG Migration: This is a subtle game of "re-positioning." Diagnoses are strategically shifted from secondary to principal positions (or vice versa) to displace lower-weight DRGs with higher-weight ones. It sits in the gray zone of "documentable" versus "treated", exactly where GenAI thrives.
  • Outpatient E/M Creep: AI documentation tools are expanding problem lists and auto-summarizing "data reviewed" to capture every second of total time. This drives a measurable shift toward higher-intensity codes (like 99214s/99215s) even for routine encounters.
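The E/M creep pattern, at least, is straightforward to monitor. A toy metric: the share of visits billed at the high-intensity codes, tracked period over period. The data and thresholds here are illustrative assumptions; a real analysis would risk-adjust for patient mix before calling a shift "creep".

```python
# Toy monitoring metric for outpatient E/M creep: the share of encounters
# billed at high-intensity codes (99214/99215) per period. The sample
# distributions below are invented for illustration.
from collections import Counter

HIGH_INTENSITY = {"99214", "99215"}

def high_intensity_share(em_codes):
    """Fraction of E/M encounters billed at 99214 or 99215."""
    counts = Counter(em_codes)
    total = sum(counts.values())
    high = sum(counts[c] for c in HIGH_INTENSITY)
    return high / total if total else 0.0

before = ["99213"] * 70 + ["99214"] * 25 + ["99215"] * 5   # pre-copilot quarter
after  = ["99213"] * 45 + ["99214"] * 40 + ["99215"] * 15  # post-copilot quarter

print(round(high_intensity_share(before), 2))  # → 0.3
print(round(high_intensity_share(after), 2))   # → 0.55
```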

Why "Reasoning is the Product"

The natural reflex for payers is to crank up black-box denials. But in the age of GenAI, opacity is a losing strategy. If we respond with opaque automation, we force providers to reverse-engineer our logic, turning aligned behavior into adversarial gaming.

The winning strategy is Glass Box AI: defenses that scale trust through transparency.

This requires the fundamental architectural shift we are building at Nēdl Labs with neuro-symbolic AI.

Unlike purely statistical models that just flag outliers, our platform combines two distinct powers:

  1. Neural systems (like LLMs) to read and extract evidence from messy, unstructured medical records at scale.
  2. Symbolic reasoning engines to execute explicit criteria, policy logic, and clinical rules.

This allows us to implement true Concordance Scoring. We don't just predict a denial; we programmatically score the gap between the coded severity and the treatment intensity. We verify if the medication, labs, and imaging actually support the high-severity diagnosis.

Crucially, because "reasoning is the product," the output isn't just a score; it's an auditable, evidence-based rationale.

We generate a "chain of thought" that links the specific evidence in the record directly to the criteria in your policy.
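The flow described above can be sketched in miniature: structured findings (as a neural extractor might emit them) are checked against an explicit symbolic criterion, and the output carries a human-readable, evidence-linked rationale rather than a bare score. The rule, thresholds, and field names are all assumptions for illustration.

```python
# Simplified concordance-scoring sketch: extracted evidence is evaluated by
# an explicit rule, and every conclusion cites its source in the record.
# The criterion and treatment list are illustrative, not clinical guidance.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    value: float
    source: str  # pointer back to the location in the record

def score_resp_failure(findings, interventions):
    """Score a coded acute respiratory failure against treatment evidence."""
    rationale = []
    for f in findings:
        if f.name == "SpO2":
            rationale.append(f"SpO2 {f.value}% documented ({f.source}).")
    resp_tx = {"supplemental_oxygen", "bronchodilator", "bipap"} & set(interventions)
    if resp_tx:
        rationale.append(f"Supporting respiratory treatment found: {sorted(resp_tx)}.")
        score = 1.0
    else:
        rationale.append("No respiratory intervention in the treatment record; "
                         "coded severity exceeds treatment intensity.")
        score = 0.0
    return score, rationale

score, why = score_resp_failure(
    [Finding("SpO2", 92.0, "nursing flowsheet 03:14")],
    ["oral_antibiotics"],
)
print(score)  # → 0.0
for line in why:
    print("-", line)
```

The design point is that the rationale is assembled from the same evidence the rule evaluated, so the "chain of thought" is auditable by construction rather than generated after the fact.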

Moving from Adversarial to Collaborative

This "show your work" posture is the only way to survive the GenAI arms race. It moves Payment Integrity from a posture of adversarial optimization to one of collaborative adjudication.

By leveraging Nēdl Labs' neuro-symbolic reasoning, we can provide specific, actionable feedback. We stop sending generic denials that breed resentment and start sending smart, transparent requests that point to the missing clinical link.

Coding intensity is no longer a back-office topic; it is a strategic issue. If we answer provider GenAI with greater opacity, the system breaks.

But if we apply Glass Box reasoning, we ensure that when the claim arrives, the severity recorded on paper matches the patient in the bed.

Go Deeper: The Defensive AI Blueprint

For those at HPRI (and those following along from home), we have detailed the full architectural blueprint and the operational playbook for this shift in our latest white paper. It covers the specific "Defensive AI" strategies required to counter discordance without breaking provider trust.

The Trust Imperative: Defensive AI Strategy for Healthcare Payers


About the author

Ashish Jaiman

Founder nēdl Labs | Building Intelligent Healthcare for Affordability & Trust | X-Microsoft, Product & Engineering Leadership | Generative & Responsible AI | Startup Founder Advisor | Published Author