The Documentation Burden
Ask any orthopedic surgeon what they like least about their job. The answer is almost never surgery itself. It is usually:
"Charting."
Assessment notes, SOAP documentation, follow-up reports, quality submissions. These consume hours each week, yet every entry must be precise. You cannot afford mistakes, but you also cannot afford the time it takes to write everything from scratch.
This is the problem Doctor AI was built to solve.
Phase 1: AI That Reads
Earlier in 2026, we shipped Doctor AI Phase 1. Physicians connect their preferred AI tool (Claude Code, Gemini CLI, Codex CLI, or any MCP-compatible client) via an API Token, and the AI can:
- Read patient VAS trends, adherence rates, and PROM scores
- Generate weekly summaries and trend analyses
- Answer questions like "Which patients have declining adherence?" or "Is this patient recovering on track?"
Phase 1 was read-only. AI could observe, but could not modify anything. PII (national ID, phone, email) was automatically stripped before reaching the AI.
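The PII-stripping step can be sketched roughly as follows. This is an illustrative sketch only, not iRehab's actual implementation; the field names (`nationalId`, `phone`, `email`, `patientRef`) are assumptions:

```python
# Minimal sketch of stripping PII before patient data reaches the AI.
# Field names are illustrative assumptions, not iRehab's real schema.
PII_FIELDS = {"nationalId", "phone", "email"}

def strip_pii(record: dict) -> dict:
    """Return a copy of the record with PII fields removed, recursively."""
    cleaned = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            continue  # drop PII outright rather than masking it
        if isinstance(value, dict):
            value = strip_pii(value)
        elif isinstance(value, list):
            value = [strip_pii(v) if isinstance(v, dict) else v for v in value]
        cleaned[key] = value
    return cleaned

patient = {
    "patientRef": "P-1042",          # opaque reference, not a name
    "nationalId": "A123456789",
    "phone": "0912-345-678",
    "vasTrend": [6, 5, 4, 3],
    "contact": {"email": "x@y.com", "preferredTime": "morning"},
}
safe = strip_pii(patient)
# safe == {"patientRef": "P-1042", "vasTrend": [6, 5, 4, 3],
#          "contact": {"preferredTime": "morning"}}
```

Dropping the fields entirely (rather than masking them) means no downstream code can accidentally forward a partially redacted value.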
This worked well. But physicians started asking: "The AI already has all the context — can it write the SOAP notes for me?"
Phase 2: AI That Writes (Drafts Only)
Phase 2 opens write access. AI can now draft clinical assessment notes (Assessments) for the physician.
But here is the critical design decision:
AI writes are saved as drafts. They never auto-publish.
We call this principle draft-only enforcement.
Why Not Let AI Publish Directly?
Because clinical decision accountability belongs to the physician.
AI can look at a VAS trend and suggest "pain improving, recommend phase advancement." But it does not know that the patient walked into the clinic today looking uncomfortable. It does not know the patient fell yesterday. It does not know the patient has psychological resistance to a specific exercise.
These contextual signals are not yet fully available to AI. The correct workflow is: AI provides a draft, the physician reviews it in 30 seconds, edits as needed, and confirms. Not: AI writes directly into the official record.
Two Paths: App or CLI?
Most orthopedic surgeons interact with AI through a phone app — ChatGPT, Gemini, Claude. Not a terminal. So we designed two paths, both leading to the same goal: the physician never has to type.
Path A: App Users (Most Physicians)
This is the common workflow:
- Your assistant or physical therapist completes Step 1 (VAS, wound status) and Step 2 (ROM, clinical tests) in the Doctor PWA
- They tap the "AI-Assisted SOAP" button — the system generates a prompt containing all of today's measurements, but no patient name or identifying information
- The prompt is copied to the clipboard, and one tap opens ChatGPT / Gemini / Claude
- The assistant dictates the clinical picture to the AI: "Six weeks post-op TKA, walking with minimal pain, much better than last visit, knee swelling resolved"
- AI returns structured SOAP notes — copy back into the Doctor PWA — the system auto-parses them into S/O/A/P fields
- The assistant saves it as a draft
- After the physician examines the patient in person, they open the PWA, see "Pending Confirmation" with a purple badge, review it in 30 seconds, and confirm
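The auto-parse step from the workflow above can be sketched as a simple section splitter. This is an illustrative sketch under the assumption that the AI labels sections with "S:", "O:", "A:", "P:" (or the full words); iRehab's actual parser is likely more robust:

```python
import re

# Sketch: split an AI-generated SOAP note into S/O/A/P fields.
SECTION_RE = re.compile(
    r"^\s*(S|O|A|P|Subjective|Objective|Assessment|Plan)\s*:\s*",
    re.IGNORECASE,
)
CANON = {"subjective": "S", "objective": "O", "assessment": "A", "plan": "P"}

def parse_soap(text: str) -> dict:
    fields = {"S": "", "O": "", "A": "", "P": ""}
    current = None
    for line in text.splitlines():
        m = SECTION_RE.match(line)
        if m:
            label = m.group(1).lower()
            current = CANON.get(label, label.upper())
            line = line[m.end():]
        if current:
            fields[current] = (fields[current] + " " + line).strip()
    return fields

note = """S: Minimal pain while walking, much improved.
O: ROM 120/0, no effusion, incision healed.
A: 6 weeks post-op TKA, recovering on track.
P: Advance to phase 3 exercises, follow up in 2 weeks."""
parsed = parse_soap(note)
```

Accumulating continuation lines under the last seen header lets multi-line sections survive the round trip through a chat window.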
iRehab itself never sends patient-identifiable information to AI. The prompt contains only formatting instructions and clinical numbers (VAS 3/10, ROM 120/0, effusion: none). What the physician or assistant says to the external AI tool is their own clinical activity.
Path B: CLI Users (Power Users)
A smaller number of physicians use Claude Code, Gemini CLI, or similar command-line tools. This path is more capable:
- AI reads patient data directly through the MCP Server (authorized via API Token + PII automatically stripped)
- The physician says: "Mr. Wang, 6 weeks post-op, ROM 120/0, ready to advance"
- AI fills in the fields, asks about anything missing, and saves as draft
- The physician confirms in the PWA
Path B gives AI full patient context — VAS trends, PROM scores, previous assessments — so the generated SOAP notes are higher quality. But it requires setting up an API Token (3 minutes).
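A Path B draft write might take a shape like the following. This is a hypothetical payload for the draft_assessment tool; the argument names and structure are assumptions, not the tool's documented schema:

```python
# Hypothetical shape of a draft_assessment write via the MCP Server.
# Argument names are illustrative; the real tool schema may differ.
draft_request = {
    "tool": "draft_assessment",
    "arguments": {
        "patientRef": "P-1042",
        "soap": {
            "S": "6 weeks post-op, walking with minimal pain.",
            "O": "ROM 120/0, effusion: none, VAS 3/10.",
            "A": "Recovery on track for phase advancement.",
            "P": "Advance rehabilitation phase pending physician confirmation.",
        },
        "status": "draft",  # the only status an AI Token may write
    },
}
```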
Three-Role Architecture: Assistant + AI + Doctor
In a typical orthopedic outpatient session, a surgeon sees 30 to 50 patients in a few hours. Average time per patient: 3 to 5 minutes. The physician's core value is clinical judgment, not data entry.
We split the work into three roles:
| Role | Responsibilities | Does NOT do |
|---|---|---|
| Assistant / PT | Measure ROM, record VAS, document wound status, tap AI-Assisted SOAP, dictate clinical picture to AI, save draft | No clinical decisions, no progression calls |
| AI | Convert dictation into structured SOAP, parse into S/O/A/P fields | No auto-publishing, no replacing the physician exam |
| Physician | Examine the patient in person (palpation, interview), review AI-drafted SOAP, confirm or edit | No typing, no remembering which field goes where |
Physician keystrokes: zero. All they need to do is look, edit if necessary, and confirm.
Technical Architecture
| Layer | Description |
|---|---|
| MCP Server v2.0.0 | 2 write tools (draft_assessment, draft_prescription) + 6 read tools (trends, alerts, PROM, etc.) |
| Default-deny API | Allowlist-based — only explicitly listed endpoints are accessible via AI Token |
| Draft-only enforcement | At the API layer: AI Tokens can only write records with status=draft. Setting status=published is rejected |
| Scope management | Physicians explicitly authorize write permissions in the Doctor PWA Token Scope UI |
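Combining the default-deny allowlist with draft-only enforcement, the API-layer guard can be sketched like this. It is a minimal sketch under assumed endpoint names, not iRehab's actual code:

```python
class DraftOnlyViolation(Exception):
    """Raised when an AI Token attempts a disallowed write."""

# Hypothetical allowlist of endpoints reachable with an AI Token (default-deny).
AI_TOKEN_ALLOWLIST = {"assessments.draft", "prescriptions.draft", "trends.read"}

def enforce_ai_write(token_kind: str, endpoint: str, payload: dict) -> dict:
    """Reject any AI-Token request outside the allowlist or not a draft."""
    if token_kind == "ai":
        if endpoint not in AI_TOKEN_ALLOWLIST:
            raise DraftOnlyViolation(f"endpoint not allowlisted: {endpoint}")
        if payload.get("status", "draft") != "draft":
            raise DraftOnlyViolation("AI Tokens may only write status=draft")
        payload["status"] = "draft"  # force the status; never trust client input
    return payload
```

The key property is that the check lives at the API layer, so no prompt, client bug, or model behavior can produce a published record.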
Safety Boundaries
We define explicit boundaries for what Doctor AI can and cannot do:
AI Can
- Read patient rehabilitation data (VAS, PROM, exercise logs, assessment history)
- Draft assessments from brief physician instructions (SOAP notes + ROM + VAS + effusion + progression)
- Draft exercise prescriptions (select exercises from library, set sets/reps)
- Ask follow-up questions about missing fields (this is the AI tool's natural conversational ability, not a server feature)
- Generate trend analyses and weekly reports
AI Cannot
- Publish any record directly (physician confirmation required)
- Execute phase advancement directly (deferred until physician confirms the draft)
- Draft surgical records or billing entries (currently limited to assessments and prescriptions)
- Access PII (national ID, phone, email are stripped automatically)
- Access patients outside the physician's authorized scope
- Diagnose independently or auto-prescribe
What If AI Gets It Wrong?
Nothing happens — because it is a draft. If the physician spots any issue before confirming, they delete or edit it. Drafts do not enter the patient's official record, do not affect PROM scheduling, and do not trigger any clinical workflows.
The cost of a bad draft is zero. The cost of a bad auto-published note could be significant. This asymmetry is exactly why draft-only enforcement exists.
BYO-LLM: No Lock-In, No Hosting
Another deliberate design choice: iRehab does not embed a specific AI chat interface.
We provide a standard MCP Server and API Token interface. Physicians choose their own AI tool. Claude Code, Gemini CLI, Codex CLI, local models — all work.
The reasoning:
- AI models turn over every 6 months — binding to a specific vendor is short-sighted
- Data sovereignty — the physician's choice of AI provider determines whose servers process the data. Enterprise tiers typically do not retain data
- Cost — different AI providers have different pricing. Physicians should have the choice
iRehab's role is to provide a secure data access layer, not to become an AI vendor.
Just Talk: Voice Input + AI Form Filling
A common question: "I do not want to type. Can I just speak?"
Yes, and no extra setup is required.
iPhone, Mac, and Android all have built-in dictation — tap the microphone key on any keyboard. The physician taps the mic in their AI tool's input field, speaks, sends, and the AI maps natural language to structured form fields.
No model "training" is needed. The MCP Server defines a schema for each field (ROM flexion/extension, VAS 0-10, effusion grade, etc.). The LLM reads this schema and knows how to map speech to fields. "Flexion one-twenty, extension zero" becomes kneeFlexion: 120, kneeExtension: 0 automatically.
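A field schema of the kind described might look like the following JSON Schema fragment. The property names and ranges are illustrative assumptions; the real MCP schema may differ:

```python
# Illustrative field schema the MCP Server could expose so the LLM knows
# how to map dictated speech to structured fields (names are assumptions).
ASSESSMENT_SCHEMA = {
    "type": "object",
    "properties": {
        "kneeFlexion": {
            "type": "integer", "minimum": 0, "maximum": 160,
            "description": "Knee flexion in degrees",
        },
        "kneeExtension": {
            "type": "integer", "minimum": -10, "maximum": 30,
            "description": "Knee extension in degrees",
        },
        "vas": {
            "type": "integer", "minimum": 0, "maximum": 10,
            "description": "Pain on a 0-10 visual analog scale",
        },
        "effusionGrade": {
            "type": "string",
            "enum": ["none", "trace", "mild", "moderate", "severe"],
        },
    },
    "required": ["kneeFlexion", "kneeExtension", "vas"],
}
```

Given this schema, a phrase like "flexion one-twenty, extension zero" gives the LLM everything it needs to emit kneeFlexion: 120, kneeExtension: 0 without any model-specific training.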
Define Your Shortcuts with CLAUDE.md
If you use Claude Code, you can write your preferred shorthand in the project's CLAUDE.md file:
```markdown
# My shorthand
- "advance" or "ready to progress" = progressionDecision: advance
- "step back" = progressionDecision: regress
- "swollen" = effusionGrade — ask me for severity
- If I don't mention a field, always ask — never guess
```
The AI follows these rules every session — effectively natural language macros. Other AI tools have similar system prompt configuration options.
Current Limitations
- System dictation handles everyday language well, but English medical abbreviations (ROM, VAS, TKA) may occasionally be misrecognized
- No real-time streaming — you speak, send, then AI processes (not live transcription into fields)
- Voice input quality depends on your device and environment, not on iRehab
Trust But Verify
The name "draft-only enforcement" borrows from an old principle: trust but verify.
We trust AI's capability — it genuinely produces reasonable clinical assessment drafts based on patient data. But we also verify — every draft must pass through a human physician's eyes and judgment before it becomes real.
This is not distrust of AI. It is a commitment to keeping a human in the loop for clinical decisions.
As AI reliability improves over time, draft-only is a starting point that can be gradually relaxed. But at launch, we err on the side of caution. The history of medical technology teaches us that conservative rollouts with clear safety boundaries earn more trust than aggressive ones that occasionally fail.
The Physician Never Touches the Keyboard
Back to the original problem: the documentation burden.
With AI SOAP Assist, the clinic visit workflow changes from:
Physician examines patient → Physician types notes → Physician saves record (5-10 minutes each)
To:
Assistant measures → Assistant + AI generate SOAP → Physician examines patient → Physician confirms (30 seconds to 1 minute each)
The physician does not need to remember where each field lives, does not need to write SOAP notes from scratch, does not need to touch the keyboard. The 60% of time previously spent on data entry can go to what actually requires a physician — palpation, patient interview, spending an extra minute talking to the patient.
Path A (ChatGPT on your phone): No setup needed. Open Doctor PWA and start using it. Path B (CLI / MCP): Go to Doctor PWA → Profile → API Token to generate a Token and connect your preferred AI tool. Setup takes 3 minutes.
Full setup guide: denovortho.com/irehab/ai-setup
