In a recent morning conference at a regional hospital, a senior attending physician asked us: "Isn't design thinking outdated by now?"
A short primer first, because the term isn't universally familiar. Design thinking is a five-step loop popularized in the 1990s by IDEO and adjacent design consultancies: empathize (understand the user), define (frame the problem), ideate (generate solutions), prototype (build something rough), test (put it in front of users). The premise is simple — figure out what problem you're solving before you build, then have real users touch a rough version, then revisit your problem definition. It sounds like common sense, but in practice most teams never finish a single full loop.
Over the past decade, an impression formed that the methodology had grown outdated. The usual subtexts:
- It's a 2010s thing — the world has moved on to lean startup and AI direct-generation
- It became a workshop industry: lots of Post-it notes, no follow-through
- Big companies turned it into ritual: a one-week sprint and then no further iteration
- If LLMs can synthesize solutions on demand, why bother to empathize or do user research at all?
Our reading is different. For the past decade, what made design thinking look outdated was speed, not logic. AI now compresses the parts that used to drag the loop down — prototype build and feedback synthesis — from months to days. The full loop can run end-to-end for the first time.
This piece walks through a recent internal pivot — from a single-specialty bedside tool to a cross-specialty schema composer that any specialist can self-serve. The whole arc happens to map cleanly onto one full design thinking loop. We use it as a concrete example of why AI is not design thinking's replacement, but the tool it has been waiting twenty years for.
Why people thought it was outdated
Design thinking's five-stage core — empathize, define, ideate, prototype, test — grew out of IDEO's consulting work in the 1990s, but it was Tim Brown's 2008 Harvard Business Review manifesto that turned it into a mainstream methodology.
Its real bottleneck was never logical. It was time.
Running one full iteration — empathize through test, then back to redefine — typically took a traditional product team three to six months:
- Empathize: schedule 5–15 user interviews, transcribe, cluster patterns
- Define / ideate: pull every stakeholder into multi-day workshops, generate piles of Post-its
- Prototype: have designers mock up screens, have engineers build a playable version
- Test: run another round of user sessions, gather feedback
Each step was slow. Slow enough that by the time a team finished one round and saw real user reaction, there was no time or budget left to act on it.
Three things happened over the past decade as a result:
- Empathize and define got ritualized: one workshop and call it done
- Prototype and test got outsourced to design agencies: pretty deliverables, far from production
- The whole methodology got squeezed into "two-day sprints" — design thinking in name, but missing the move that made it meaningful: actually pivoting
By the early 2020s the consensus had shifted: design thinking was "the old way." The new way was lean startup, growth hacking, data-driven product.
The "outdated" verdict wasn't just industry small talk. Rebecca Ackermann's 2023 retrospective in MIT Technology Review consolidated the case: ideas without execution, innovation theater, empathy without community expertise. NYU professor Natasha Iskander had named it more sharply five years earlier in her 2018 HBR piece: "fundamentally conservative." Both critiques target design thinking's failure on social and policy problems — school-lunch programs, anti-poverty initiatives, government services — domains that need long-term political and institutional work, not a five-stage loop.
We are not arguing with that body of critique. It lands. Our piece addresses a narrower question: why design thinking stopped running inside product workflows.
What AI actually changed
When AI entered the picture, the loudest narrative was: "We don't need user research anymore — let LLMs synthesize the solution directly." That narrative is in direct conflict with design thinking, which is why people ask whether the latter is outdated.
What we see inside product teams is closer to the opposite.
LLMs do not accelerate the empathize stage. They accelerate prototype construction and feedback processing:
| Stage | Before AI | With AI |
|---|---|---|
| Spec drafting | 1–2 weeks (multi-round stakeholder review) | 1–2 days (multi-AI review in parallel) |
| UI mock | 1 week (designer + tools) | Half day (LLM + Figma plugins) |
| Playable prototype | 2–4 weeks (frontend + backend engineering) | 3–5 days (mocked API + AI-generated UI logic) |
| Feedback synthesis | 1 week (manual clustering) | Half day (LLM theme extraction + clustering) |
| Compliance / legal drafts | 2–4 weeks (lawyer + internal review) | 3–5 days (AI drafts + lawyer reviews) |
Add it up: a single design thinking loop that used to take 12–16 weeks (or longer) now runs in 2–3 weeks (or less).
A side note: IDEO themselves agree the path forward leads here. Since 2024, they offer an online course titled Bring AI to the Design Thinking Process, teaching AI acceleration across five phases that map almost one-for-one onto the table above. We'll come back to that signal later.
The bottleneck has moved. What used to be slow — "build something a user can actually touch" — is now fast. What's slow now is "decide what to build."
In other words: the relative importance of the empathize / define stages has not gone down. It has gone up.
The real shift
LLMs are amplifiers. Frame the problem wrong, and they will produce things nobody wants — at unprecedented efficiency. Frame it right, and they multiply the right direction tenfold.
Which means design thinking's most classical move — "ask why first" — is not just still relevant in the AI age. It is the guardrail.
In the past, the empathize stage was slow, so teams were tempted to skip it. Today the prototype stage is fast, so skipping empathize is more dangerous, not less — because you'll race down the wrong path very efficiently before noticing.
Our internal rule is: no spec sign-off, no work begins. The faster AI gets, the more critical the sign-off step becomes. We run three AI reviewers (Codex, Gemini, ChatGPT) in parallel against every spec — they catch contract violations, IDOR vulnerabilities, PHI leakage, and cross-version compatibility issues that traditional design thinking workshops never cover. In the AI age, those concerns must sit alongside "what does the user want" as part of empathize / define, not after it.
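The orchestration behind that gate is simple enough to sketch. This is a minimal, hypothetical version — the real reviewer prompts and API calls are internal, so `review_spec` is stubbed here; only the fan-out-and-reconcile shape is the point:

```python
from concurrent.futures import ThreadPoolExecutor

# The checklist mirrors the concerns named in the article.
REVIEW_CHECKLIST = [
    "API contract violations",
    "IDOR (insecure direct object reference)",
    "PHI leakage",
    "cross-version compatibility",
]

def review_spec(reviewer: str, spec: str) -> dict:
    """One reviewer's pass over the spec.

    Stub: a real implementation would prompt the named model with the
    spec plus REVIEW_CHECKLIST and parse its findings.
    """
    return {"reviewer": reviewer, "findings": []}

def gate_spec(spec: str, reviewers=("Codex", "Gemini", "ChatGPT")) -> dict:
    """Fan out to all reviewers in parallel, then reconcile.

    Sign-off is blocked if any reviewer reports any finding; a human
    architect would triage the merged list before unblocking.
    """
    with ThreadPoolExecutor() as pool:
        reports = list(pool.map(lambda r: review_spec(r, spec), reviewers))
    findings = [f for rep in reports for f in rep["findings"]]
    return {"signed_off": not findings, "findings": findings, "reports": reports}
```

The design choice worth noting is the hard gate: reviewers run concurrently for speed, but reconciliation is a single serial step, so "no sign-off, no work" stays enforceable in code.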
"Spec first" is not anti-design-thinking. It's design thinking adapted for the AI age.
Case: from SOAP.S overflow to a cross-specialty composer
The arc didn't begin with market research or a focus group. It began with a conversation with a senior physician. They described what their team had been seeing: their patient copilot tool let patients enter chief complaints before the visit, but the LLM-generated text routinely overflowed the Subjective section of SOAP. Time the physician used to spend on clinical reasoning was now spent cleaning up what the AI wrote.
The first instinct was: just add a compression layer — shrink patient input into something the physician can actually use.
The team didn't do that. Sitting with the problem, it became clear the issue wasn't a missing compression layer; the entire value flow needed to be rearranged. So the team walked it through design thinking's five stages. From the conversation to the pivot decision: roughly two weeks. The same work pre-AI: six to nine months.
Empathize
The team includes a founder who is also a practicing orthopedic surgeon. He started from the most classical design thinking question:
"In every clinic visit, how do I personally translate a patient's loose complaints into the SOAP.S content?"
The answer isn't "write everything down." It's "compress into four fields plus the reasoning behind each." Which side, severity, traumatic or degenerative, duration. Once those four are filled, the differential-diagnosis hypothesis tree assembles itself.
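Those four fields can be made concrete as a small typed record. This is an illustrative sketch, not the product's actual data model — the field names and value ranges are assumptions built from the four concepts the article names (side, severity, traumatic vs. degenerative, duration):

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class OrthoSubjective:
    """Hypothetical structured form of an orthopedic SOAP.S entry:
    the physician's four-field interpretation, not a transcript."""
    side: Literal["left", "right", "bilateral"]
    severity_vas: int  # visual analogue scale, 0-10 (assumed scale)
    onset: Literal["traumatic", "degenerative"]
    duration_days: int

    def __post_init__(self):
        # Literal hints aren't enforced at runtime, so validate ranges here.
        if not 0 <= self.severity_vas <= 10:
            raise ValueError("VAS severity must be between 0 and 10")
        if self.duration_days < 0:
            raise ValueError("duration must be non-negative")
```

The point of the structure is exactly the article's claim: once these four slots are filled, the differential-diagnosis reasoning has what it needs, and everything else in the patient's narrative is context, not chart content.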
In other words: the SOAP.S a physician writes is never a transcript of the patient's narrative. It's the physician's structured interpretation of that narrative. SOAP.S overflow isn't caused by patients writing too much — it's caused by tools designed to dump AI output directly into the chart, instead of compressing it for the physician to review first.
(We have written a deeper version of the "AI drafts, physician confirms" design rule in iRehab Doctor AI: Draft-Only Enforcement — AI translation and physician confirmation are two actions that cannot be merged.)
Define
We restated the problem:
"In the two minutes before the patient enters the room, compress the previous fortnight of patient inputs — VAS trends, wound photos, exercise logs, PROM scores — into a specialty-relevant summary the physician reads in five seconds."
The keywords are "compress" and "five-second readable." If we kept the conventional "transcript-into-EMR" logic, no level of AI accuracy would stop the SOAP.S overflow.
This define step was deliberately scoped tight: orthopedic only, this slice only, nothing else.
Ideate
Every prior approach assumed the action happened inside the consult room — physician sits down, patient in front of them, EMR open, typing while listening.
Our key ideate move was to relocate the entire translation step out of the room. The patient fills inputs outside the visit (at home, on a phone, over the previous two weeks). AI compresses two minutes before the visit. The physician reads it in five seconds before walking in.
This isn't a tool upgrade. It's a workflow relocation. (The patient-side input flow that feeds Brief is the 30-second daily check-in we've written about — 30 seconds per day, but two weeks of compounding is the raw material for the pre-visit summary.)
Prototype
A few days of spec writing, multi-AI review, and prototype build later: iRehab Brief Wave 1 shipped, orthopedic-only.
This used to take four to eight weeks, often longer. AI compressed the spec drafting, UI mocking, backend API scaffolding, data model review, and compliance document drafting.
Test
When Brief entered its in-hospital pilot, two unanticipated things happened.
First, physical rollout was blocked at one site. Not for code reasons. Hospital policy required PR-department approval for any QR-coded poster; the surgical chief framed it as "yet another doctor pushing a Facebook page." What code review covers and what institutional governance requires turned out to be two different worlds.
Second, the Facebook reaction came back hotter than we expected. Direct messages from physicians in psychiatry, neurology, urogynecology, and obstetrics, all asking the same question: "Can you build one for my specialty?"
The two streams of feedback converged on a single message: the problem we originally defined is a slice of a larger problem.
Redefine
Every specialty needs a translator, but the four fields each one needs to translate into are completely different:
- Psychiatry needs sleep, mood swings, medication adherence
- Urogynecology needs voiding diary, Kegel completion
- Neurology needs symptom variability, side effects, episode frequency
- Obstetrics needs pregnancy-related complaints, fetal movement, trimester-specific alerts
Same architecture. Four entirely different schemas.
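"Same architecture, different schemas" is the whole pivot, and it fits in a few lines. A minimal sketch, assuming a registry keyed by specialty — the field names are taken from the article's examples, but the real Schema Composer's data model is not public:

```python
# Illustrative per-specialty schemas; one architecture, swappable fields.
SPECIALTY_SCHEMAS = {
    "orthopedics":   ["side", "severity_vas", "onset", "duration_days"],
    "psychiatry":    ["sleep_hours", "mood_swings", "medication_adherence"],
    "urogynecology": ["voiding_diary", "kegel_completion"],
    "neurology":     ["symptom_variability", "side_effects", "episode_frequency"],
    "obstetrics":    ["pregnancy_complaints", "fetal_movement", "trimester_alerts"],
}

def compose_brief(specialty: str, patient_inputs: dict) -> dict:
    """Project raw patient inputs onto the specialty's fields.

    Anything outside the schema is dropped -- that projection is the
    'compression' step; missing fields surface as None for the
    physician to notice rather than being silently omitted.
    """
    fields = SPECIALTY_SCHEMAS[specialty]
    return {field: patient_inputs.get(field) for field in fields}
```

Under this framing, letting a specialist "define their own translation rules in a few minutes" is just letting them author their own entry in the registry, which is what makes the composer self-serve rather than a per-specialty engineering project.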
If we kept pushing the orthopedic Brief, we would probably have rolled it out smoothly to a handful of orthopedic users. We would also have missed the real insight — the value of this tool is not "save orthopedists time." It is "let any physician define their own translation rules in a few minutes."
We paused the orthopedic Brief rollout. We opened a new spec: a cross-specialty Schema Composer that lets any specialist define their own four fields, sign attestation, and ship their own specialty pack.
That redefinition decision is the actual output of design thinking's fifth stage: not "the prototype works" but "the original problem frame was too small."
Two AI multipliers
Looking back at those two weeks, AI accelerated the work in two specific places:
First multiplier: spec → prototype time. Writing a sign-off-grade spec, building a playable prototype, and running internal review used to take four to eight weeks. Our team now compresses that to 7–10 days. LLMs draft the spec; three AI reviewers run in parallel; a human architect reconciles. The prototype starts from mocked APIs and AI-generated UI stubs; engineers fill in business logic.
Second multiplier: feedback → redefine time. Going from "pilot live" to "we should pivot" used to require three to six months of usage data, multiple interview rounds, and synthesis work. This time the loop ran: one week of Facebook reactions, three days of physician DMs, one day of internal discussion. Five days from external signal to a new spec capturing "the original frame was a sub-problem."
Together, the two multipliers compress one full design thinking loop — think, spec, prototype, feedback, redefine — from 6–9 months to about 2 weeks. This is not design thinking being replaced. This is design thinking running end-to-end for the first time.
What design thinking really is
Design thinking is not workshops. Not Post-its. Not empathy maps.
Its core is two willingnesses:
- The willingness to admit your first answer is wrong.
- The willingness to test fast and find the right one.
For the past decade, both have been hard to honor. The first was blocked by sunk cost ("we spent three months on this spec, how can we admit it's wrong?"). The second was blocked by time ("running another loop takes six months").
AI has changed the second. Fast iteration is finally cheap. Which is what makes the first willingness meaningful for the first time: admitting you were wrong now means you can immediately try something else.
This is why our team treats spec-first, ask-why, and "what door did this open?" as internal discipline. AI is just the tool. The discipline is the methodology that sits on top of it — and that methodology has a name. It's called design thinking.
We're not alone
A single team's case can read as an anecdote. The methodology's originators have been sending a more direct signal.
Since 2024, IDEO has offered an online course titled, plainly, Bring AI to the Design Thinking Process. The curriculum teaches AI acceleration across five phases:
- Inspiration / Research: expanding creative input with AI
- Synthesis: "speed up research and synthesis, get insights faster"
- Brainstorming: ideating faster, exploring more options
- Prototyping: bringing concepts to life quickly
- Responsible use: understanding AI's limits and ethics
Map those onto the AI-acceleration table earlier in this piece, and the five phases line up almost one-for-one.
The original popularizers of design thinking are not treating AI as a rival. They are personally onboarding their community into the next-generation version. Note the fifth phase — responsible use — echoing our own argument that as AI gets faster, "ask why first" becomes the guardrail.
Design thinking isn't outdated. Its creators are wiring it to AI by hand.
Closing
Back to the senior attending physician's question: "Isn't design thinking outdated?"
Our answer: it isn't. For the past decade it looked outdated because the iteration loop was too long. AI is not its rival — AI is the tool it has been waiting twenty years for.
The two weeks during which iRehab Brief turned from a single-specialty pilot into a cross-specialty composer was the first time our team saw design thinking actually run end-to-end. We don't think it'll be the last.
Further reading
- Brown, T. (2008). Design Thinking. Harvard Business Review. — The original manifesto.
- IDEO U. Bring AI to the Design Thinking Process. — IDEO's 2024+ on-demand course teaching AI acceleration across the five phases.
- Ackermann, R. (2023, Feb). Design thinking was supposed to fix the world. Where did it go wrong? MIT Technology Review. — The defining retrospective of the past decade's "outdated" narrative.
- Iskander, N. (2018, Sep). Design Thinking Is Fundamentally Conservative and Preserves the Status Quo. Harvard Business Review. — The earliest sharp academic critique.
- Mayer, J. et al. (2025). The impact of design thinking and its underlying theoretical mechanisms: A review of the literature. Creativity and Innovation Management. — The 2025 academic synthesis.
- U.S. FDA (Jan 2025). Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations (Draft Guidance). — Regulatory anchor for clinical AI software lifecycle discipline.
From the De Novo blog
- iRehab Doctor AI: Draft-Only Enforcement — The sister piece on Brief's design rule: AI translates, physician confirms, the two actions cannot be merged.
- 30-second daily check-in — The patient-side input flow that feeds Brief: 30 seconds per day, two weeks of compounding becomes the raw material for the pre-visit summary.
- Recovery Loop — The wider clinical methodology iRehab features ship inside.
