Preventable harm in healthcare remains staggeringly high compared with other safety-critical industries. Fields such as nuclear power and aerospace have become ultra-safe in large part through systematic risk management, and many of the same tools have been tried in healthcare, yet preventable error persists at very high rates. My doctoral research set out to understand why the risk-assessment tools that helped make aviation and nuclear power so safe haven’t delivered similar gains in healthcare, and what would need to change to finally move the needle.
This work was organized around four research questions.
Rather than proposing yet another tool, I examined the fit between healthcare as a socio-technical system, the risks that actually arise in care, and the design characteristics and resource demands of the tools we keep importing from other industries, such as root cause analysis (RCA), failure mode and effects analysis (FMEA), and risk matrices.
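To make the “resource demands” point concrete, here is a minimal, illustrative sketch of the arithmetic an FMEA typically rests on: each failure mode is rated for severity, occurrence, and detectability, and the product (the Risk Priority Number) is used to rank where to act first. The process and failure modes below are hypothetical and are not drawn from the study data.

```python
# Illustrative only: toy FMEA scoring for a hypothetical medication-administration process.
# RPN = severity x occurrence x detection, each rated 1-10 (10 = worst / hardest to detect).

from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (almost always caught) .. 10 (almost never caught)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes; in a real FMEA these come from a multidisciplinary team.
modes = [
    FailureMode("Wrong dose calculated", severity=9, occurrence=4, detection=5),
    FailureMode("Dose given to wrong patient", severity=10, occurrence=2, detection=3),
    FailureMode("Allergy check skipped", severity=8, occurrence=3, detection=6),
]

# Rank failure modes so the team can target the highest-priority risks first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")
```

Even this toy version hints at the real cost: assembling the team, agreeing on ratings, and revisiting them as practice changes is where the resource demand actually lies.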
Using a Design Research Methodology (DRM) “comprehensive study of the existing solution,” I combined evidence from the safety literature with direct observation of procedures in practice.
This structure let me connect what tools demand, what organizations can realistically support, and where risk truly comes from in day-to-day care.
Across observed procedures and corroborating literature, human factors—especially skill-based slips and routine violations—drive a large share of preventable risk. Tools that assume mostly technical failure modes will miss much of what actually harms patients.
Healthcare frequently uses RCA, FMEA, and risk matrices, but even when a tool could be appropriate, its effect is hobbled by poor safety culture, missing inputs (such as under-reported incidents), and weak follow-through on the actions it recommends.
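As a hedged illustration of why missing inputs matter (the thresholds and figures below are invented for this sketch, not taken from the research), consider how under-reporting feeds a standard 5×5 risk matrix: the likelihood band is derived from incident counts, so a hazard that goes largely unreported is scored as less likely than it really is and slides down the priority list.

```python
# Illustrative only: how under-reporting can distort a simple 5x5 risk matrix.
# Likelihood band is derived from incidents per year; thresholds are assumed for this sketch.

def likelihood_band(incidents_per_year: float) -> int:
    """Map an incident rate to a 1-5 likelihood band (assumed thresholds)."""
    thresholds = [1, 5, 20, 100]  # band boundaries, incidents per year
    return 1 + sum(incidents_per_year >= t for t in thresholds)

def risk_score(likelihood: int, severity: int) -> int:
    """Classic matrix scoring: likelihood x severity, each on a 1-5 scale."""
    return likelihood * severity

severity = 4                      # serious harm
true_rate = 30                    # what actually happens on the wards
reported_rate = true_rate * 0.2   # only ~20% of incidents reach the reporting system

print("Score with true rate:    ", risk_score(likelihood_band(true_rate), severity))
print("Score with reported rate:", risk_score(likelihood_band(reported_rate), severity))
# The same hazard lands in a lower priority band purely because incidents go unreported.
```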
The biggest hurdle to safer healthcare isn’t a lack of tools; it’s weak safety culture and thin organizational support. Error and near-miss reporting is sparse, follow-through is inconsistent (many recommended improvements are never implemented), and risk functions are frequently subordinated to operations or quality. In the US, resources and adoption are somewhat stronger than in the UK, but both systems fall well short of what robust risk work requires.
Other industries achieved dramatic safety improvements by aligning tools + data + organizational will. In healthcare, severe cultural and resource gaps mean even good tools underdeliver. Until we close those gaps, adding more tools won’t fix the problem.
Consider the scale: estimates suggest that more than 200,000 patients die from preventable medical error in the US every year, which would make it the third or fourth leading cause of death nationally. Getting the socio-technical foundations of healthcare safety right (culture, reporting, learning, and implementation discipline) isn’t a “nice to have”; it should be a critical priority for our nation.
Closing those gaps (stronger reporting, consistent follow-through, and properly resourced risk functions) enables retrospective analyses such as RCA to produce actionable insights, and it makes prospective analyses like FMEA realistic and useful, especially when procedures are less complex or when they are complemented by methods that handle complex human-system interactions.
Healthcare doesn’t need yet another imported tool as much as it needs better alignment between (1) the risks that actually occur, (2) the tools we choose, and (3) the culture and resources required to use those tools well. Do that, and the kinds of safety gains seen in aviation and nuclear power become possible in medicine, too. That’s the path my research lays out—and the work I’m continuing: designing practical, human-centered ways to make risk assessment work in the places where it matters most.