Type: Research
Category: Healthcare
Start Date: October 2018
End Date: August 2022

Risk Assessment Tools for Patient Safety

Overview

Preventable harm in healthcare remains staggeringly high compared with other safety-critical industries. Fields such as nuclear power and aerospace became ultra-safe largely through systematic risk management, and many of the same risk management tools have been tried to some extent in healthcare, yet preventable error remains stubbornly common. My doctoral research set out to understand why the risk-assessment tools that helped make aviation and nuclear power so safe haven’t delivered similar gains in healthcare, and what would need to change to finally move the needle.

What this work explores

This work explored four questions:

  1. What is the nature of patient-safety risk in healthcare?
  2. Which risk-assessment tools are commonly used?
  3. How is patient safety currently assessed in practice (US & UK)?
  4. What are the requirements for risk assessment to actually work in healthcare?

Rather than proposing yet another tool, I examined the fit between healthcare as a socio-technical system, the risks that actually arise in care, and the design characteristics and resource demands of the tools we keep importing from other industries, such as root cause analysis (RCA), failure mode and effects analysis (FMEA), and risk matrices.
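
For readers who haven’t met these tools, the risk matrix is the simplest to picture: it scores an event’s severity against its likelihood and maps the product to a qualitative band. The sketch below is a minimal, hypothetical illustration; the scale labels and thresholds are my assumptions, not values from the thesis.

```python
# Minimal 5x5 risk matrix sketch. Labels and thresholds are illustrative
# conventions chosen for this example, not values from the thesis.

SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}

def risk_band(severity: str, likelihood: str) -> str:
    """Map a (severity, likelihood) pair to a qualitative risk band."""
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 15:
        return "high"    # escalate: mitigate before proceeding
    if score >= 6:
        return "medium"  # plan mitigation and monitor
    return "low"         # manage through routine controls

print(risk_band("major", "likely"))  # -> "high" (4 * 4 = 16)
```

Even a tool this simple is only as good as its inputs: the likelihood estimates depend on exactly the incident reporting that, as the findings below show, healthcare organizations often lack.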

Methods

Using a Design Research Methodology (DRM) “comprehensive study of the existing solution,” I combined:

  • Literature reviews on patient-safety risk and common tools
  • A US/UK survey and interviews with risk managers about real-world practice
  • An observational study of semi-surgical anesthesia procedures to see how risks are actually created at the sharp end of care
  • Synthesis to derive the minimum requirements for “fit-for-purpose” risk assessment in healthcare settings

This structure let me connect what tools demand, what organizations can realistically support, and where risk truly comes from in day-to-day care.

What I found

1) Risk is largely human-factor driven

Across observed procedures and corroborating literature, human factors—especially skill-based slips and routine violations—drive a large share of preventable risk. Tools that assume mostly technical failure modes will miss much of what actually harms patients.

2) The tools cannot succeed in current environments

Healthcare frequently uses RCA, FMEA, and risk matrices, but even where a tool could be appropriate, its effect is hobbled by poor safety culture, missing inputs (such as under-reported incidents), and weak follow-through on its findings.

3) Culture and resources are the binding constraints

The biggest hurdle to safer healthcare isn’t a lack of tools; it’s safety culture and organizational support. Error and near-miss reporting is weak, follow-through is inconsistent (many recommended improvements are never implemented), and risk functions are frequently subordinated to operations or quality. In the US, resources and adoption are somewhat stronger than in the UK, but both systems fall well short of what robust risk work requires.

4) The consequence: unrealized safety gains

Other industries achieved dramatic safety improvements by aligning tools + data + organizational will. In healthcare, severe cultural and resource gaps mean even good tools underdeliver. Until we close those gaps, adding more tools won’t fix the problem.

Why this matters

Consider the scale: estimates suggest that over 200,000 patients die from preventable medical error every year in the USA, which would make it the third or fourth leading cause of death nationally. Getting the sociotechnical foundations of healthcare safety right (culture, reporting, learning, and implementation discipline) isn’t a “nice to have”; it should be a critical national priority.

What needs to change (and what will work)

  • Elevate safety culture as a leadership priority. Without it, everything else—data quality, staff engagement, rigor of analysis, and follow-through—breaks.
  • Invest in human-factors-oriented approaches that capture real-world work (work-as-done), not just technical failure trees.
  • Build reporting and learning systems that actually surface near-misses and incidents (and protect staff who report).
  • Fund the work: time, training, analytic capability, and implementation support, so recommendations don’t die on the vine.
  • Match tools to context: choose methods whose design characteristics fit the complexity and contributors of the particular care setting.

These shifts let retrospective analyses (such as RCA) produce actionable insights, and they make prospective analyses (such as FMEA) realistic and useful, especially when procedures are less complex or when complemented by methods that handle complex human-system interactions.

Concrete outcomes of the thesis

  • A framework to classify risk-assessment tools by their design characteristics and organizational demands, so teams can select tools that actually fit their context (see the sketch after this list).
  • New empirical insights on how risk assessments are conducted in the US and UK, including what’s most commonly used and where processes break down.
  • Observational evidence from semi-surgical anesthesia showing where risks originate (heavily human-factor driven) and where barriers succeed or fail.
  • A requirements set for “fit-for-purpose” risk assessment in healthcare—clarifying the minimum cultural, informational, and procedural conditions for tools to deliver real safety impact.
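
To make the first outcome concrete, here is a purely illustrative sketch of what such a classification could look like in data form. The fields, profiles, and values below are my assumptions for illustration; the thesis defines the actual framework and its dimensions.

```python
# Illustrative only: fields and values are assumptions, not the
# framework from the thesis.
from dataclasses import dataclass

@dataclass
class ToolProfile:
    name: str
    timing: str                    # "retrospective" or "prospective"
    captures_human_factors: bool   # models slips and routine violations?
    organizational_demands: str    # what the tool needs to deliver value

CATALOG = [
    ToolProfile("RCA", "retrospective", False,
                "reported incidents, trained facilitators, follow-through"),
    ToolProfile("FMEA", "prospective", False,
                "accurate process maps, multidisciplinary team time"),
    ToolProfile("Risk matrix", "prospective", False,
                "credible likelihood estimates, i.e. good incident data"),
]

def fit_for_context(timing: str, needs_human_factors: bool) -> list[ToolProfile]:
    """Return the tools whose profile matches a care setting's needs."""
    return [t for t in CATALOG
            if t.timing == timing
            and (not needs_human_factors or t.captures_human_factors)]

# A setting dominated by human-factor risk finds no match among the
# common imports, echoing the mismatch documented above.
print(fit_for_context("prospective", needs_human_factors=True))  # -> []
```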

The bottom line

Healthcare doesn’t need yet another imported tool as much as it needs better alignment between (1) the risks that actually occur, (2) the tools we choose, and (3) the culture and resources required to use those tools well. Do that, and the kinds of safety gains seen in aviation and nuclear power become possible in medicine, too. That’s the path my research lays out—and the work I’m continuing: designing practical, human-centered ways to make risk assessment work in the places where it matters most.
