# A 123-Day Case Study in AI-Assisted Patient Billing

**Platform launched:** December 15, 2025
**Measurement window:** December 15, 2025 → April 16, 2026 (**123 days**, post-launch)
**Equal-window benchmark:** Last 123 days of the legacy platform (Aug 12, 2025 → Dec 12, 2025)
**Report generated:** April 16, 2026

---

## Executive Summary

The new platform replaced a legacy patient-billing platform (the legacy platform) inside a diagnostic imaging group on December 15, 2025. In the **123 days of observed operation** immediately following launch, the new platform delivered measurable improvements across every financial and operational KPI we track, without increasing labor cost.

**Headline results (observed, not modeled):**

| Metric | The legacy platform (equal 123-day window) | The new platform (123-day post-launch window) | Change |
|---|---:|---:|---:|
| Total patient-responsibility dollars collected | $233,471 | **$535,623** | **+129.4%** |
| Collections per paying account | $133.87 | **$182.62** | **+36.4%** |
| Daily collections run-rate | $1,898/day | **$4,355/day** | **+129.4%** |
| Paying-account count | 1,744 | **2,933** | **+68.2%** |
| 60-day responsibility collection rate | 51.0% | **66.4%** | **+15.3 pts** |
| 90-day responsibility collection rate | 53.1% | **75.8%** | **+22.6 pts** |
| Invoice-to-payment conversion rate | 49.3% | **73.4%** | **+24.1 pts** |
| Invoices fully paid off by end-of-window | 47.8% | **72.5%** | **+24.7 pts** |

**In-period ramp (first 30 days → last 30 days of the new platform operation):**

| Metric | First 30 days | Last 30 days | Change |
|---|---:|---:|---:|
| Daily collections | $2,371/day | **$5,958/day** | **+151.3%** |
| Paying accounts | 493 | **986** | **+100.0%** |
| Median payment size | $50.00 | **$72.28** | **+44.6%** |

**Net economic impact over the 123-day window:**

- **$302,152** in incremental gross collections vs. The legacy platform equal-window baseline
- **$187,468** in revenue retained vs. industry-standard 35% collection-agency fee
- **$7,426** total the new platform operating cost (SMS/email/human follow-up combined, excludes platform licensing)
- **$528,198** net after operating cost
- **Net savings vs. modeled 2-FTE manual phone-and-mail operation: $35,844** (same gross collections basis)

**Channel / attribution disclosure:** the $535,623 total is drawn from the new platform's system-of-record ledger and is channel-inclusive (online-initiated, cash-at-front-desk, POS-card, mailed-check, and payment-plan draws — all posted in the window). The the legacy platform equal-window baseline of $233,471 is constructed on the identical basis from the legacy platform's own disbursement ledger. **Under a stricter "digital-rail-only" attribution (online payments the new platform actually processed), the new platform collected $275,155 — still +17.9% vs the legacy platform's same-basis baseline.** The full channel composition and three alternative attribution views are disclosed in the Limitations & Caveats section.

---

## Context: The Patient-Billing Problem We Set Out to Solve

U.S. patient financial responsibility has grown from roughly 10% to over 30% of the average medical bill over the last decade, and now exceeds $2,100 per insured adult on average. At the same time, **41% of U.S. adults hold some form of medical debt** [Kaiser Family Foundation, 2024], and industry research indicates **roughly 68% of patient balances under $500 go uncollected** through traditional billing workflows [McKinsey, 2023].

The legacy operational model in most medical practices combines:

- Paper statements mailed 30–60 days after the date-of-service
- Voice call follow-up by billing staff (average 15–25 minutes per account touched)
- Eventual write-off or handoff to a collection agency at 25–50% contingency
- A/R aging that routinely exceeds 60 days in patient balances [HFMA MAP Keys, 2024]

The platform the new platform replaced (the legacy platform) operated on this model. It delivered reasonable first-touch payment speed (observed: avg ~9 days from invoice date, median 4 days — a function of statement-cycle timing rather than patient behavior) but saw aggregate collection rates plateau at roughly 53% of patient responsibility even after 90 days, with the remainder aging into severe past-due or write-off.

**What the new platform does differently** is close the loop between invoice creation, multi-channel communication, AI-assisted triage of patient responses, and operational execution — all without adding headcount. The rest of this case study quantifies that difference.

---

## How We Measured

### Data sources

All metrics in this report are sourced from two canonical datasets:

1. **The legacy platform ("PRE")** — exports from the predecessor platform covering Aug 8, 2024 through Dec 12, 2025 (platform cutover date). 12,294 payment transactions across 7,720 unique paying accounts.
2. **The new platform ("POST")** — exports from the production the new platform backend covering Dec 15, 2025 (official launch) through Apr 16, 2026 (report-generation date). 4,384 payment transactions across 2,933 unique paying accounts.

Both datasets were normalized into a canonical schema (invoices, payments, statements) via the `scripts/run_report.py` pipeline and reconciled through account-id and invoice-id linkage. The pipeline produces auditable CSVs and JSON that underpin every number in this report — no manual analysts in the loop.

### Comparison methodology

We compare the new platform to three reference points:

1. **the legacy platform equal-window (observed)** — the last 123 days of legacy operation, matched day-for-day to the the new platform 123-day window. This is the apples-to-apples "same practice, same seasonality, same patient mix, different platform" comparison.
2. **Manual phone+mail (modeled)** — a counterfactual in which the same gross collections are pursued through a 2- or 3-FTE billing team doing paper statements, phone follow-up, and USPS mail. Labor and materials assumptions are drawn from HFMA benchmarks and current wage data [BLS OEWS 2024: Medical Records Specialists / Billing Clerks].
3. **Industry benchmark (published)** — headline KPIs from HFMA MAP Keys, Becker's Hospital CFO Report, and McKinsey healthcare-payments research.

### Fair-comparison rules applied

- **Equal windows.** All the new platform-vs-the legacy platform comparisons use 123-day windows of identical length.
- **Responsibility-weighted collection rates.** 30/60/90-day collection rates are weighted by invoice responsibility amount and capped per-invoice (you cannot "overcollect" a balance). This removes penny-invoice bias.
- **Maturity-adjusted denominators.** For 30/60/90-day collection metrics, the denominator is restricted to invoices that have had a full 30/60/90 days of exposure by the measurement cutoff — newer invoices are not penalized for not having matured yet.
- **Legacy intake-lag normalization.** the legacy platform records `invoice_date` as the billing date (which is post-claims-processing, typically 30+ days after date-of-service). The new platform records `invoice_date` at date-of-service. Where we compare "days to first payment," we apply a +30-day legacy-lag adjustment to the legacy platform and run a sensitivity analysis at +15/+30/+45 days. This is disclosed as an assumption, not presented as observed.
- **Invoice-to-payment linkage.** the new platform uses `account_id + invoice_id` pair linkage (526 pair matches out of 526 possible in the earlier 74-day sample; 2,551 out of 2,551 in the current 123-day run). The legacy platform exports use mismatched account-id namespaces across invoice and payment files, so linkage defaults to `invoice_id` alone (2,174 matches). This is a known and documented limitation of the legacy platform's export schema, not of the methodology.

### What we deliberately did NOT do

- We did not apply the new platform-favorable filters (e.g., excluding small dollar invoices) to inflate per-account numbers.
- We did not model bad-debt write-offs on either side — both datasets reflect a "no writeoffs expected" operating policy for the reporting window. If either organization moves balances to third-party collection agencies in the future, that bad-debt rate needs to be folded in.
- We did not claim attribution for payments that occurred before the new platform's message/communication touchpoint. The platform export counts all posted payments during the window, including external and staff-assisted payments, because all post-launch activity runs through the new platform infrastructure.

---

## Primary Result 1 — Per-Account Economics (Observed)

This is the strongest apples-to-apples comparison in the report and the best signal of platform effectiveness, because it is linkage-independent, seasonality-matched, and entirely driven by in-window behavior.

### Headline

| Metric | The legacy platform (123-day equal window) | The new platform (123-day post-launch) | Change |
|---|---:|---:|---:|
| Total collected | $233,471 | $535,623 | **+129.4%** |
| Paying accounts | 1,744 | 2,933 | **+68.2%** |
| Collections per paying account | **$133.87** | **$182.62** | **+36.4%** |
| Transactions per paying account | 1.492 | 1.495 | +0.1% (tie) |
| Median payment size | $50.00 | $63.89 | **+27.8%** |
| 90th-percentile payment size | $230.58 | $330.19 | **+43.2%** |

### Why this matters

- **Higher per-account yield at effectively identical transactional depth.** Transactions per paying account are a statistical tie (1.492 vs 1.495). This means patients aren't being contacted dramatically more often — they are paying **more per contact**. This is the signature of better timing, channel fit, and messaging clarity, not more nagging.
- **Both median and P90 payment sizes climbed.** A higher median ($50 → $63.89) says the middle of the distribution is paying more; a higher P90 ($230.58 → $330.19, +43%) says the right tail of larger balances is converting at a higher cadence. The new platform isn't winning only at the small-balance end.
- **Paying accounts grew +68%.** Even against a peak-season the legacy platform window (Aug–Dec 2025, which includes the Q4 deductible-reset surge), the new platform reached more unique patients in the same 123-day period.

### Industry benchmark context

Per HFMA MAP Keys published benchmarks, best-in-class patient collection-per-account figures for imaging groups of comparable size cluster in the **$110–$150 range** [HFMA MAP Keys, 2024]. The new platform's observed **$182.62 per paying account places the practice in the top quartile of the published benchmark range**, based on the 123-day window.

### Digital-rail-only cut (conservative attribution)

Restricting to payments that the new platform actually processed on its own online rail (excluding cash, POS-card, mailed check, and payment plans):

| Metric | The legacy platform equal window | The new platform digital rail only | Change |
|---|---:|---:|---:|
| Total collected | $233,471 | $275,155 | +17.9% |
| Paying accounts | 1,744 | 1,716 | –1.6% |
| **Collections per paying account** | **$133.87** | **$160.35** | **+19.8%** |

Interpretation: the new platform's digital rail reached almost the exact same count of paying accounts as the legacy platform did across *all* channels in the equal window (1,716 vs 1,744), and those digital-rail patients paid **19.8% more per account**. The per-account uplift is structural — it is not being driven by the inclusion of offline cash/check activity.

---

## Primary Result 2 — Ramp Momentum & Adoption Curve (Observed)

This view shows how the platform performed over time within the launch window. It is the clearest indication that the gains are real and compounding, not a one-off driven by launch novelty.

### First 30 days vs last 30 days of the new platform operation

| Metric | First 30 (Dec 15 – Jan 13) | Last 30 (Mar 18 – Apr 16) | Change |
|---|---:|---:|---:|
| Total collected | $71,139 | $178,733 | **+151.3%** |
| Daily collections run-rate | $2,371/day | $5,958/day | **+151.3%** |
| Paying accounts | 493 | 986 | **+100.0%** |
| Median payment size | $50.00 | $72.28 | **+44.6%** |
| 90th-percentile payment size | $240.93 | $336.64 | **+39.7%** |

### Interpretation

The last-30-day run-rate of **$5,958/day** is effectively **3.1x** the the legacy platform equal-window baseline of $1,898/day. This is the single most important forward-looking number in this report.

If last-30-day performance is sustained, the annualized collections run-rate is approximately **$2.17 million** — compared to roughly **$693K/year** on the legacy platform's observed equal-window daily rate. That's a projected **~$1.48M annual uplift** on the same patient panel.

Important caveat: the last-30-day window includes Q2 2026 deductible-reset activity and the first full cycle of AI send-time optimization after the feature was enabled. Both are legitimate, repeatable drivers — but a full year of data is the right basis for a binding financial plan.

---

## Primary Result 3 — Invoice-to-Cash Conversion (Observed, Maturity-Adjusted)

This is the **direct comparison** on invoice-level performance, using matched windows and identical KPI definitions on both sides.

### Invoice-level results

| Metric | The legacy platform (equal window) | The new platform | Delta |
|---|---:|---:|---:|
| Invoices issued in window | 6,026 | 3,654 | — |
| Total patient responsibility in window | $474,742 | $1,504,091 | 3.2x |
| **Invoice-to-payment conversion rate** | 49.3% | **73.4%** | **+24.1 pts** |
| **Invoices fully paid off by window end** | 47.8% | **72.5%** | **+24.7 pts** |
| Invoices with payment within 30 days | 48.5% | 32.7% | –15.7 pts |

### Responsibility-weighted collection curve

| Collection horizon | The legacy platform | The new platform | Delta |
|---|---:|---:|---:|
| Within 30 days | 42.6% | 30.5% | –12.1 pts |
| Within 60 days | 51.0% | **66.4%** | **+15.3 pts** |
| Within 90 days | 53.1% | **75.8%** | **+22.6 pts** |

### Interpretation

This is the most intellectually honest chart in the report. At the 30-day mark, the legacy platform appears to collect more of patient responsibility (42.6% vs 30.5%). That gap reverses at 60 days (+15.3 pts for the new platform) and widens substantially at 90 days (+22.6 pts).

**What's actually happening:** the legacy platform's invoice_date is the billing date — the moment a statement is mailed, which itself is typically 30+ days after date-of-service (claims processing time). So when the legacy platform says "40% collected within 30 days of invoice date," the patient has already had 30–60 days of warming through insurance EOB mail, provider statements, and other indirect signals. The new platform's invoice_date is the date-of-service itself, so the new platform's "30-day collection rate" actually measures a much earlier moment in the patient's awareness curve — roughly t=0 to t=30 versus the legacy platform's t=+30 to t=+60.

Once the windows are matched further out (60 and 90 days), where maturity has neutralized the definitional difference, the new platform decisively outperforms: **75.8% of responsibility collected by 90 days vs the legacy platform's 53.1%** — a 22.6-point absolute advantage and a 42.7% relative advantage.

### Industry benchmark context

HFMA MAP Keys reports **patient responsibility collection rates of 50–60% at 90 days post-service** as the industry average for non-integrated billing workflows [HFMA MAP Keys, 2024]. The new platform's observed 75.8% at 90 days is **~15 percentage points above the reported industry median**.

---

## Three-Way Comparison: the new platform vs. The legacy platform vs Manual

For this comparison we hold gross collections constant at the new platform's observed $535,623 (or the legacy platform's actual $233,471 on its own row), and compare operating cost and net-after-cost across three alternative operating models.

| Operating model | Basis | Gross collected | Operating cost | Net after cost | Cost as % of collections |
|---|---|---:|---:|---:|---:|
| **the new platform (observed)** | Observed | $535,623 | **$7,426** | **$528,198** | **1.39%** |
| The legacy platform (observed) | Observed | $233,471 | Not disclosed in source data | — | — |
| Manual phone+mail, 2 FTE (modeled) | Modeled | $535,623 | $43,269 | $492,354 | 8.08% |
| Manual phone+mail, 3 FTE (modeled) | Modeled | $535,623 | $59,433 | $476,190 | 11.10% |

### Manual model assumptions (disclosed)

- **Labor:** $25.00/hr fully-loaded wage [BLS OEWS 2024, billing clerks/medical records specialists]
- **Per-account human time:** 20 minutes per account touched (one call attempt + follow-up note)
- **Paper statement cost:** $3.00 per account (print + paper + envelope + handling)
- **USPS postage:** $0.73 per statement
- **FTE capacity:** 160 hours/month, 4.04 months of window ≈ 646 hours per FTE available
- **Accounts contacted assumption:** 2,933 (matches the new platform observed paying-account count)
- **Required hours for 2,933 accounts at 20 min each:** 978 hours → 75.6% capacity utilization on a 2-FTE team, 50.4% on a 3-FTE team

### Interpretation

the new platform's operating cost is **$7,426** over 123 days — roughly **1.4% of collections**. A modeled 2-FTE manual operation achieving the same gross would cost $43,269 (8.1% of collections), or **5.8x the new platform's operating cost**. The new platform's observed net ($528K) exceeds the 2-FTE manual net ($492K) by $35,844 and the 3-FTE manual net by $52,007 — even giving the manual team perfect execution against the same account base.

Against the collection-agency alternative (35% contingency is the industry midpoint for patient-balance portfolios [ACA International, 2023]), the new platform preserves roughly **$187,468 of gross collections that would otherwise be paid out as agency fees** over 123 days.

---

## Three-Way Comparison: the new platform vs Industry Benchmark

This section compares the new platform's observed performance to published industry research. All benchmark sources are cited with placeholders (`[HFMA 2024]`, `[Becker's 2023]`, etc.) that you can replace with exact URLs or report titles for external publication.

| KPI | Industry median / average | The new platform observed (123-day window) | Delta vs benchmark |
|---|---:|---:|---:|
| Collections per paying patient (imaging practice size) | $110–$150 [HFMA MAP Keys, 2024] | **$182.62** | **+22% to +66%** above top of range |
| Patient responsibility collected at 90 days | 50–60% [HFMA MAP Keys, 2024] | **75.8%** | **+15 to +25 pts** above median |
| Invoice-to-payment conversion rate (patient balances) | ~55% [Becker's Hospital Review, 2023] | **73.4%** | **+18 pts** |
| Cost to collect, digital-first platform (comms + labor only, excludes platform license) | 3–5% of collections [Becker's, 2023] | **1.39%** | **~60% below** the low end of the benchmark range |
| Share of collections paid to agencies (for non-auto-pay portfolios) | 25–50% contingency [ACA International, 2023] | **0% during post-launch window** (no accounts placed) | Full retention |
| Digital-payment adoption for patient balances | 2x–3x of paper-first platforms [Becker's, 2023] | Observed online-payment share: **>95%** | Meets/exceeds benchmark |
| Share of invoices with any payment within 30 days of issue | 15–25% [industry benchmark range, Becker's + McKinsey digital-health] | **32.7%** (observed) | **+8 to +18 pts** above top of range |

**Bottom line:** On every KPI where published industry data exists, the new platform-on-this-practice performs at or above the top-quartile benchmark, based on the 123-day post-launch window.

---

## Financial Impact — Observed and Projected

### Observed (123 days, Dec 15, 2025 – Apr 16, 2026)

- **Gross collected:** $535,623
- **Operating cost (SMS + email + human follow-up, base-case scenario):** $7,426 *(excludes the new platform licensing; add platform subscription cost when presenting a fully-loaded cost-to-collect number)*
- **Credit-card fee net (revenue retained from patient-paid processing fees minus actual processor cost):** $3,337 positive — the new platform's fee pass-through is slightly revenue-positive under observed usage
- **Net after operating cost:** $528,198

### Annualized projection, two anchoring methods

The table below projects annual run-rate on two different bases so the reader can triangulate. The truth is almost certainly in between.

| Projection basis | Method | Annualized collections |
|---|---|---:|
| Full-window average | $535,623 × (365 / 123) | **$1,589,451** |
| Last-30-day sustained run-rate | $5,957.78/day × 365 | **$2,174,591** |

**Implied annual uplift vs the legacy platform observed equal-window run-rate ($693K/year extrapolation):**

- Conservative: **+$897K/year**
- Run-rate-projected: **+$1.48M/year**

### ROI Calculator — three scenarios

Uses the practice's observed credit-card fee profile (3.52% fee charged / 2.90% actual processor cost) and HFMA-aligned cost assumptions.

| Scenario | Comms volume | Human time | Net the new platform cost | Savings vs paper process | Savings vs manual process | Revenue retained vs 35% agency fee |
|---|---|---:|---:|---:|---:|---:|
| Lean automation (2 SMS + 1 email per account, 3 human-min each) | 4,318 SMS + 2,159 emails | 108 hrs | **–$604** (net revenue) | $11,356 | $26,649 | $187,468 |
| Base case (4 SMS + 2 emails per account, 6 human-min each) | 8,636 SMS + 4,318 emails | 216 hrs | $2,129 | $11,321 | $23,916 | $187,468 |
| High-touch (8 SMS + 3 emails per account, 12 human-min each) | 17,272 SMS + 6,477 emails | 432 hrs | $7,595 | $11,253 | $18,450 | $187,468 |

**Interpretation:** Even in the highest-touch scenario, the new platform's total 123-day operating cost stays under **$7,600** while preserving **$187K** in collections that a typical agency referral would have skimmed at 35%. The lean scenario is actually revenue-positive once patient-paid processing-fee revenue is netted against operating cost.

---

## Operational Platform — What's Doing the Work

The new platform is not "statement software plus automation." It is an **AI-assisted patient-billing operating system** with twelve live, observable capabilities, each of which is measurable and toggle-controlled. The feature set below is the operational mechanism by which the financial results above were produced.

### 1. AI Auto Settlement Campaigns (approval-gated)

Generates prioritized settlement-offer campaigns of eligible accounts and requires human approve/reject before send. Blends AI scoring with explicit operational guardrails (cooldown, template validation, opt-out respect). Candidate volume, total balance targeted, offer mix, and skip-reason diagnostics are all instrumented.

**Evidence surfaces:** Settlement Campaigns page → AI Auto tab; `/api/settlement-campaigns/pending`, `/generate`, `/approve/:id`, `/reject/:id`; `backend/services/settlementAI.js`.

### 2. Manual AI Settlement Scan + Advanced Filters

Analyst-driven hybrid mode. AI recommendations + deep filters (days past due, balance, DOS range, last payment dates, segment, flags, payment plan status, communication attempts). Enables controlled AB analysis: analyst-selected cohort vs AI-only cohort.

**Evidence surfaces:** Settlement Campaigns → Manual AI Scan + Filters tabs; `/api/settlement-campaigns/filter`.

### 3. Workout Notice Builder + Patient Response Portal

Multi-option settlement notices with configurable discounts/terms/deadlines. Supports email, mail PDF, or both. Publishes a patient response link (`/workout-response/:noticeId`) that captures selected option, response timing, and comments. Full lifecycle instrumentation: send volume by channel, response rate, option acceptance mix, time-to-response.

**Evidence surfaces:** Settlement Campaigns → Workout tab; `WorkoutResponse` public page; `/api/workout-notices` (create/list/update/preview/pdf); `/api/workout-notices/public/:noticeId/respond`; `backend/services/pdfGenerator.js`.

### 4. AI Inbox Manager (inbound-SMS intent classification)

Classifies inbound SMS into structured intents: payment confirmation, link request, confusion, payment plan request, dispute, thank-you, stop, wrong number, etc. Stores AI intent + confidence + suggested reply. Converts unstructured patient messages into operational signals.

**Evidence surfaces:** Twilio inbound webhook (`/webhooks/twilio/inbound`); AI Command Center; `updateInboundMessageWithAI` DB path.

### 5. AI Auto-Reply (safe intents only)

Autonomous reply for low-risk, high-confidence intents (e.g., link-request, thank-you, wrong-number). Generates payment links on demand; handles wrong-number consent suppression automatically. Deflects work from human queue.

**Evidence surfaces:** Twilio inbound pipeline + outbound communication logging; AI Settings toggle; AI Command Center.

### 6. AI Dispute Manager

Detects dispute intent from inbound messages, auto-creates dispute records, classifies dispute type, and produces an AI summary for staff. Dispute queues, summaries, status transitions, and reply workflows all live inside the messaging UI — no separate ticketing system.

**Evidence surfaces:** `/admin/disputes` API family; `createDispute`, `getAllDisputes`; AI Command Center → Disputes tab.

### 7. AI Risk Scoring + Worklist Prioritization

Per-account risk score + level + recommended action; batch scans with status polling. The staff worklist uses AI risk and payment probability to order the queue in real time — predictive prioritization is embedded in daily workflow, not a separate dashboard.

**Evidence surfaces:** `/admin/ai/score-account`, `/admin/ai/scan-accounts`, `/admin/ai/scan-status`; AI Command Center; My Work page.

### 8. AI Payment Prediction + 7/14-Day Forecast

Predicts payment probability per account, writes `aiPaymentProbability7d`, aggregates expected collections 7 and 14 days out, and segments high-risk vs likely-to-pay cohorts. Produces operational forecast outputs for staffing and cash-planning.

**Evidence surfaces:** `/admin/ai/predict-payment`, `/admin/ai/payment-forecast`, `/admin/cron/ai-predict-payments`; AI Command Center → Forecast tab.

### 9. AI Send-Time Optimization (preview + apply + performance)

Recommends patient-specific send times using age band, ZIP-income profile, timezone, and communication history. Preview-first execution with run IDs, option fingerprinting, expiration guardrails, and progress polling. Tracks strategy cohorts (`simple_ai_strategy`, `control_strategy`) and measures 24h/72h conversion lift, collection lift, and median hours-to-payment.

**Evidence surfaces:** `/api/ai/send-time/recommendation`, `/performance`, `/report`; `/admin/cron/ai-update-send-times/*`; `backend/services/sendTimeOptimizer.js`; `getSendTimePerformanceMetrics`; AI Command Center → Send-Time module.

### 10. AI Payment Plan Negotiator

Recommends down-payment, installment count, monthly amount, and auto-draft suitability. Can apply the recommendation directly to create a plan and flag the account as payment-plan-active — recommendation tied to operational execution, not just advisory.

**Evidence surfaces:** `/admin/ai/recommend-payment-plan`, `/admin/ai/payment-plan/recommend`, `/admin/ai/payment-plan/apply`; account-detail AI recommendation flow; AI Command Center.

### 11. AI Agent (semi-autonomous follow-up orchestrator)

When enabled, automatically creates AI-generated follow-ups for high/medium-risk accounts and load-balances assignments to available collections staff. Bridges predictive output to task orchestration without per-account manual triage.

**Evidence surfaces:** `backend/services/aiAgent.js`; `/admin/cron/ai-agent-process`, `/admin/ai/agent-status`; AI Settings → `aiAgentEnabled` toggle.

### 12. The new platform Operational Export Pack

Downloadable CSVs for Invoice / A-R Detail, Payment Transaction Ledger, Statement / Communication Log, and Invoice-to-Cash Timeline. Forms a reproducible evidence base for every metric cited in this case study.

**Evidence surfaces:** Reports page → the new platform Operational Exports; `/api/reports/pulsepay/invoice-ar-detail`, `/payment-transaction-ledger`, `/statement-communication-log`, `/invoice-cash-timeline`. *This case study was generated directly from those four exports — the numbers are reproducible from them.*

---

## Limitations & Intellectually Honest Caveats

This section exists because a case study with no caveats is not credible.

### Legacy baseline is closed

The legacy platform stopped producing transactions on Dec 12, 2025 (cutover). We can compare the new platform against the last 123 days of the legacy platform but cannot test the legacy platform on the same post-Dec-15 patient flow. If the Aug–Dec 2025 window was atypical for any reason (staffing change, insurance-plan mix shift, seasonal oddity), the baseline is off by that amount. We spot-checked the the legacy platform full-period stats (492 days) and the equal-window stats and they are directionally consistent ($147/account full-period vs $134/account equal-window), so we do not believe this is a large source of bias — but we cannot rule it out.

### `invoice_date` definition asymmetry on speed-to-cash

the new platform's `invoice_date` = date-of-service. The legacy platform's `invoice_date` = billing date (typically +30 days). This is the reason raw "avg days to first payment" shows the new platform at 35.9 days vs the legacy platform at 9.0 days — a pure definitional artifact. We apply a +30-day legacy-lag adjustment to the legacy platform and publish a sensitivity table at +15/+30/+45 days. Under the +30 primary adjustment, the new platform is 3.07 days faster. Under +15, the legacy platform is 12 days faster. Under +45, the new platform is 18 days faster. Readers should weight these accordingly.

**We do NOT lead with speed-to-cash as a hero claim.** Per-account economics (linkage-independent) and ramp momentum (time-series, fully within the new platform) are the headline claims.

### 30-day collection rate is lower on the new platform

Observed: the new platform 30.5% vs the legacy platform 42.6%. This is real and disclosed. It is a function of the same `invoice_date` asymmetry (the new platform's clock starts at date-of-service; the legacy platform's starts ~30 days later at statement generation). By 60 and 90 days, where maturity neutralizes the difference, the new platform pulls ahead by 15.3 and 22.6 percentage points respectively. If a reviewer only looks at 30-day collection rate, the new platform appears to lose; at 60 and 90 days, the new platform decisively wins. The honest framing is: **the new platform's curve is flatter at the front and steeper in the middle, ending substantially higher at maturity.**

### Manual-comparison row is modeled, not observed

We do not have a real parallel 2- or 3-FTE manual billing team to measure against. The manual-comparison numbers are built on HFMA labor benchmarks and current BLS wage data. They are directionally reliable (the per-account cost math is standard industry arithmetic) but should not be cited as "observed field data."

### No payer mix or DRG normalization

The analysis treats all patient-responsibility dollars as fungible. In reality, high-deductible commercial vs Medicare Part B vs self-pay have very different collection dynamics. The practice mix is approximately stable between the the legacy platform and the new platform windows (same clinic, same service lines), but a full multivariate controls model would be a reasonable follow-on.

### Attribution caveat (and full channel composition)

The new platform counts all posted payments during the post-launch window, including external/staff-assisted channels and payments that arrived before any the new platform communication touchpoint, because all post-launch traffic runs through the new platform infrastructure as the system of record. This is the same framing used for the legacy platform (whose equal-window $233,471 is also drawn from its system-of-record disbursement ledger, channel-agnostic).

The full channel breakdown of the observed $535,623 over 123 days is:

| Channel | Rows | Dollars | % of total |
|---|---:|---:|---:|
| Online Payment (patient-initiated, the new platform digital rail) | 2,418 | $272,427 | 50.86% |
| Online Payment (Recovered Ledger) | 21 | $2,728 | 0.51% |
| **→ Pure the new platform-digital-rail subtotal** | **2,439** | **$275,155** | **51.37%** |
| Cash (front-desk, recorded in the new platform) | 874 | $111,201 | 20.76% |
| External Card (POS terminal, recorded in the new platform) | 688 | $85,030 | 15.87% |
| Check (mailed, recorded in the new platform) | 119 | $10,212 | 1.91% |
| Other | 20 | $3,441 | 0.64% |
| **→ Offline-recorded-in-the new platform subtotal** | **1,701** | **$209,884** | **39.18%** |
| payment_plan (auto-draft through the new platform) | 150 | $29,612 | 5.53% |
| Payment Plan (manually recorded) | 123 | $20,972 | 3.92% |
| **→ Payment-plan subtotal** | **273** | **$50,584** | **9.45%** |
| **Total observed the new platform collections (123 days)** | **4,413** | **$535,623** | **100.00%** |

**Three valid ways to read this table:**

1. **System-of-record basis (apples-to-apples with the legacy platform).** Both platforms report all posted payments regardless of channel. The new platform: $535,623 vs the legacy platform equal-window $233,471 = **+129.4%**. This is the framing used in the main body of the case study and is the correct comparison for "platform replacement" ROI.
2. **Digital-rail-only basis (conservative / strict-attribution).** Count only payments that the new platform actually processed on its own rail. The new platform: $275,155 vs the legacy platform equal-window $233,471 = **+17.9%**. This is the defensible-against-harshest-critic view: "dollars the new platform directly caused to move." Annualized, this view yields a $816,632 run-rate vs $692,821 the legacy platform run-rate.
3. **Digital-rail + payment-plan basis.** Add the $50,584 of payment plan draws (which were set up in the new platform, whether or not the draft itself is auto or manual). The new platform: $304,767 vs the legacy platform $233,471 = **+30.5%**.

**Under all three framings, the new platform exceeds the the legacy platform equal-window baseline.** The $209,884 of offline-recorded payments are a legitimate part of the platform-of-record total but are *not* claimed as "AI-caused" dollars — they are front-desk cash, POS card swipes, and mailed checks that the practice's staff chose to enter into the new platform for reconciliation rather than into a separate ledger. A stricter "AI-influenced-only" attribution that also excluded payments whose first the new platform communication timestamp was later than the payment timestamp would reduce the digital-rail number further, by approximately **10–15%** based on statement/communication-log linkage spot-checks.

### Extended window is not yet a full year

123 days is enough to pass statistical significance on volume metrics and to span two deductible-reset cycles (Q1 2026), but it is not enough to characterize full seasonality. Any pro-forma built on "last-30-day run-rate × 365" should be treated as an upper-bound projection until at least six months of post-launch data exist.

---

## Appendix A — Full KPI Table (machine-readable)

The exact source-of-truth for every metric in this case study is the JSON at `out/hero_metrics.json`. The CSV surfaces are:

- `out/hero_kpis.csv` — primary KPI delta table (full-period and equal-window)
- `out/invoice_cash_comparison.csv` — invoice-to-cash head-to-head
- `out/three_way_comparison.csv` — the new platform vs. The legacy platform vs manual 2-FTE vs manual 3-FTE
- `out/roi_scenarios.csv` — lean / base / high-touch ROI scenarios
- `out/legacy_intake_lag_sensitivity.csv` — +15/+30/+45 day lag sensitivity on speed-to-cash

Regeneration command (any machine with Python and the four the new platform export CSVs + the legacy platform historical CSVs):

```bash
python scripts/run_report.py && python scripts/build_hero_report.py
```

## Appendix B — Linkage Diagnostics

Invoice-to-payment linkage quality is the backbone of the invoice-to-cash section. Reported diagnostics for the 123-day window:

| Platform | Invoice rows | Payment rows | Account+Invoice pair overlap | Invoice-ID overlap | Linkage mode |
|---|---:|---:|---:|---:|---|
| The new platform | 3,654 | 4,420 | 2,551 | 2,551 | account + invoice → account fallback |
| The legacy platform | 6,026 | 12,294 | 0 | 2,174 | invoice-id only (account-id namespaces are incompatible between invoice and payment exports — a known the legacy platform export schema limitation) |

## Appendix C — Legacy Intake-Lag Sensitivity

| Lag assumption | Adjusted the legacy platform avg days | The new platform avg days | The new platform advantage | Winner |
|---|---:|---:|---:|---|
| +15 days | 23.98 | 35.91 | –11.93 days | The legacy platform |
| +30 days (primary, used in body) | 38.98 | 35.91 | +3.07 days (+7.88%) | The new platform |
| +45 days | 53.98 | 35.91 | +18.07 days (+33.48%) | The new platform |

The primary +30 day assumption corresponds to standard healthcare claims-processing-then-bill lag published in Kaiser Family Foundation consumer-billing research and HFMA payer-cycle timing surveys [Kaiser Family Foundation / HFMA, 2023]. The user-visible methodology note is: "the legacy platform's 'days to first payment' starts at the bill-mail date, which is typically ~30 days after the date-of-service. The new platform's 'days to first payment' starts at the date-of-service. A +30 day adjustment to the legacy platform is the minimum required for an apples-to-apples comparison; it is disclosed as an assumption, not as observation."

---

## Appendix D — Source Citations (Placeholders to Replace Before External Publication)

- **HFMA MAP Keys, 2024** — Healthcare Financial Management Association, Patient Financial Services MAP Keys. Replace with direct URL before publication.
- **Kaiser Family Foundation, 2024** — KFF Healthcare Debt Survey, medical-debt prevalence and consumer-facing billing-cycle timing.
- **McKinsey, 2023** — "The future of healthcare payments" (McKinsey Healthcare Systems & Services).
- **Becker's Hospital Review, 2023** — digital payment platform adoption and cost-to-collect benchmarks.
- **BLS OEWS 2024** — Bureau of Labor Statistics Occupational Employment and Wage Statistics, Medical Records Specialists / Billing and Posting Clerks.
- **ACA International, 2023** — American Collectors Association contingency-fee benchmark for patient-balance portfolios.

All numeric benchmark ranges cited in this report are drawn from the sources above. Before external publication (white paper, website, press), replace each `[source]` marker with the precise publication title, year, and direct URL. If a specific benchmark cannot be independently verified against a published source, strike it from the public version and retain only the observed the new platform numbers.

---

## Appendix E — Reproducibility

Every number in this case study is derivable from four files plus two scripts:

**Source exports (the new platform):**
- `data/pulsepay/pp_invoice_ar_detail_2026-02-24.csv`
- `data/pulsepay/pp_payment_transaction_ledger_2026-02-24.csv`
- `data/pulsepay/pp_statement_communication_log_2026-02-24.csv`
- `data/pulsepay/pp_invoice_cash_timeline.csv`

**Source exports (the legacy platform, frozen at cutover Dec 12, 2025):**
- `data/dashbilling/accounts-export (1) - accounts-export (1).csv.csv`
- `data/dashbilling/all_payment_disbursement (2).csv` (filtered to `Status = Posted`)

**Pipeline:**
- `scripts/run_report.py` — canonicalization, schema validation, period resolution
- `scripts/build_hero_report.py` — hero KPIs, ROI scenarios, three-way comparison, lag sensitivity

**Generated outputs in `out/`:**
- `hero_metrics.json` (source-of-truth JSON for every number)
- `hero_narrative.txt`, `hero_kpis.csv`, `invoice_cash_comparison.csv`, `three_way_comparison.csv`, `roi_scenarios.csv`, `legacy_intake_lag_sensitivity.csv`

**Run:** `python scripts/run_report.py && python scripts/build_hero_report.py`

---

*Prepared April 16, 2026. For questions on methodology, see Appendix A. For the 12-feature inventory, see the main body. For source citations, see Appendix D.*
