June 2, 2021
In mathematical terms, risk is the number you get when you multiply the value assigned to the likelihood of a use error resulting in harm (A) by the value assigned to the severity of that potential harm (B). The product (A x B) is often referred to as the risk priority number (RPN). For example, say you are rating likelihood and severity on 5-point scales, as is common practice in risk management. The likelihood of a use error resulting in harm could range from 1 = Rare to 5 = Almost Certain. The severity of that harm could range from 1 = Negligible to 5 = Catastrophic. If a use error were considered Likely to Occur (likelihood = 4) and capable of causing Moderate harm (severity = 3), the RPN would be (4 x 3) = 12. Depending on your organization’s risk management procedure, this could (and probably would) indicate the need to implement risk control measures (i.e., mitigations).
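The calculation above can be sketched in a few lines of Python. This is a minimal illustration, not any organization's actual procedure; the scale labels and the idea of a 5-point scale come from the example in the text, and the input validation is an assumption.

```python
# Illustrative sketch of the RPN calculation described above.
# The 5-point scale labels mirror the example in the text.

LIKELIHOOD = {1: "Rare", 2: "Unlikely", 3: "Possible", 4: "Likely", 5: "Almost Certain"}
SEVERITY = {1: "Negligible", 2: "Minor", 3: "Moderate", 4: "Major", 5: "Catastrophic"}

def rpn(likelihood: int, severity: int) -> int:
    """Risk priority number: the product of the two 5-point ratings."""
    if likelihood not in LIKELIHOOD or severity not in SEVERITY:
        raise ValueError("ratings must be integers from 1 to 5")
    return likelihood * severity

# The example from the text: Likely (4) x Moderate (3)
print(rpn(4, 3))  # -> 12
```

Note that the intermediate scale labels (e.g., "Minor", "Major") are assumptions for illustration; only the endpoints are given in the text.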
RPN and risk assessment
So, is relying on a risk’s RPN a good way to assess whether the risk is significant enough to attend to? In effect, the US Food and Drug Administration (FDA) has said, “No, it is not,” at least not if the likelihood of a use error is being used to determine whether a risk is acceptable, especially in the initial phase of a use-related risk analysis. I and many others working in human factors and specializing in medical product development concur. The reason is that it is quite difficult and largely impractical to rate the likelihood of a use error. Imagine being asked to rate the likelihood of the following use errors:
- Did not exhale fully before inhaling through mouthpiece
- Did not sterilize drug port before needle insertion
- Selected wrong drug from infusion pump’s menu
- Spilled contents of a surgical kit while opening it
- Injected into scar tissue
You sense the problem, right? There’s unlikely to be a solid data source on which to base likelihood estimates. Nor would you likely have the time or funding to conduct the experiments needed to generate the data, even if a definitive experiment were possible. Therefore, the usual, lousy approach has been to guesstimate: to draw on the wisdom of development team members and perhaps clinicians, plus any applicable post-market surveillance data (tied to a similar product), to pick a rating. In my experience, team members can all look at the same use error and rate its likelihood across a disturbingly wide range (e.g., a few 2’s, 3’s, and 4’s). There might be value in this “wisdom of crowds,” and rating likelihood in this manner might be practical. However, I prefer the more conservative approach encouraged by the FDA’s guidance to industry.
Focusing on severity rather than likelihood
Today, we know that the FDA wants us to focus on the severity ratings associated with use-related risks and disregard likelihood when determining risk significance. Sure, the likelihood of a use error might determine the extent of your mitigation efforts. But the severity of any potential harm will dictate whether you need to demonstrate that a risk is effectively mitigated, regardless of the extent of your risk control measures. This protects against the case in which a use error that can cause severe harm gets discounted by a low likelihood rating (e.g., a remote chance of occurrence). Instead, all use errors that can cause relatively serious harms are included in use-related risk management.
I and others endorse this approach based on the belief that clinicians can accurately estimate the severity of harm arising from a given use error. For example, while it is difficult to estimate the likelihood that a respiratory therapy patient fails to exhale fully before inhaling a drug through a mouthpiece, the severity of harm arising from a drug underdose is more readily assessed.
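The severity-first screening described above can be sketched as a simple filter: every use error whose potential harm meets a severity threshold is flagged for mitigation, and its likelihood rating is deliberately ignored at this step. The threshold value and the example risk records below are illustrative assumptions, not data from any actual risk analysis.

```python
# Sketch of severity-first screening: flag every use error whose
# worst-case severity meets a threshold, regardless of likelihood.
# Threshold and example records are illustrative assumptions.

SEVERITY_THRESHOLD = 3  # "Moderate" on the 5-point scale; an assumed cutoff

use_errors = [
    {"error": "Did not exhale fully before inhaling", "severity": 3, "likelihood": 4},
    {"error": "Selected wrong drug from infusion pump's menu", "severity": 5, "likelihood": 1},
    {"error": "Spilled contents of a surgical kit", "severity": 2, "likelihood": 3},
]

def needs_mitigation(risk: dict) -> bool:
    # Likelihood is intentionally not consulted here, so a severe harm
    # cannot be discounted by a "remote" likelihood rating.
    return risk["severity"] >= SEVERITY_THRESHOLD

for risk in use_errors:
    if needs_mitigation(risk):
        print(f"Mitigate: {risk['error']} (severity {risk['severity']})")
```

With these example values, the wrong-drug selection is flagged even though its likelihood rating is only 1, which is exactly the behavior the severity-first approach is meant to guarantee.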
Feasibly estimating likelihood of use errors
So, is the message not to bother estimating the likelihood of use errors? Is it acceptable to leave the likelihood rating column of the traditional Use-Failure Modes and Effects Analysis (U-FMEA) table empty? Well, no.
A good approach is to proceed to mitigate the risks of all use errors with moderate-to-high severity of harm ratings. And, as I mentioned, you still might look at the likelihood associated with a risk to determine the specific nature of the mitigations. Near the end of the product development process, you will probably conduct a human factors validation test to assess the effectiveness of these mitigations. During such a test, some participants are likely to commit use errors while performing certain tasks, a few of which might be related to risks with moderate-to-high severity ratings. Does this mean that the product has failed the validation test? Not necessarily, because success or failure is determined by a residual risk analysis – typically a risk-benefit analysis that considers a number of factors, including both the severity and likelihood of the risks. But always remember: your likelihood ratings are, at best, guesstimates. Therefore, do not rely on them alone to justify any residual risk.
Look for a future blog in which I will talk about the challenges of estimating severity and why it requires a systematic approach. After all, a needlestick injury could result in harms ranging from pain, to minor bleeding, to infection, to amputation, to sepsis, and ultimately to death. Where do you draw the line, so to speak, in a severity of harm rating exercise?
Michael Wiklund, CHFP, P.E., is General Manager of Human Factors Research & Design at Emergo by UL.