March 3, 2025
By Suruchi Batra
Understanding Evaluation Activities in Usability Testing
Usability test protocols include a large volume of information, ranging from participant demographics to testing environments to evaluation activities. Evaluation activities include use scenarios, in which participants perform hands-on activities, and knowledge tasks, in which participants answer knowledge-based questions about tasks that cannot otherwise be evaluated hands-on. While content might vary between protocols for formative testing versus human factors (HF) validation testing, all protocols should include evaluation activities.
But what do evaluation activities consist of? And how do you develop them? This blog will describe steps to take before developing evaluation activities, as well as how to implement them into your protocol.
Steps to Develop Effective Evaluation Activities
Before developing evaluation activities
Before developing evaluation activities, first consider the usability study’s goal(s). Are you early enough in the design process that the labelling or the product itself can still be modified after testing? If so, this is a great opportunity for user research and/or formative studies. Are you gearing up for the final stages of market submission? Then you are ready for the HF validation test required by regulators. Each type of usability test enables researchers to shape questions and evaluation activities in a way that produces the most valuable results. At a minimum, developing evaluation activities requires understanding the intended users, the expected use environment(s) and the expected actions required to perform a task.
If your study will be task- and/or risk-driven, which is the case with late-stage formatives and validation tests, it is generally best practice to start with a task analysis that will ultimately produce a use-related risk analysis (URRA). This documentation ensures that the usability test will effectively evaluate the mitigations identified in the URRA and the intuitiveness of the system/device design.
In contrast, earlier stage testing and research may not be task- and/or risk-driven and instead is more exploratory. In these cases, evaluation activities can be more flexible and tend towards collecting participants’ subjective feedback.
In summary, designing evaluation activities begins with understanding study goals and should be structured around those goals. While formal HF documentation is useful – and necessary for later-stage research – early-stage research does not always require such thorough documentation.
Key Considerations for Use Scenario Components
Use scenarios are evaluation activities during which participants perform hands-on tasks to evaluate the system/device. Specifically, these use scenario sections in the protocol could include:
- Test setup: Guidelines that detail how the test environment and supplies should be set up prior to starting the evaluation activity. Consider whether the system or device needs to be set up in a specific way to facilitate the evaluation activity. For example, should the system be unplugged because the participant must demonstrate full setup procedures, including plugging in the system? Should the device be stored in the refrigerator because the participant must interact with a cold device for effective device evaluation? Be sure to consider how to best simulate the intended use environment while also preparing the system/device and test environment for all tasks the user should perform during the use scenario.
- Prompts: Detailed instructions/information that the test administrator/moderator verbalizes to the participant to prompt (i.e., introduce) the evaluation activity. Consider whether different user groups perform different expected actions while completing the same activity. If so, develop specific prompts for each user group. For example, a prompt asking a nurse to simulate an injection into a patient should differ from a prompt asking a patient to simulate a self-injection. Furthermore, avoid including too much information in your prompts, because leading prompts can bias or diminish test results. Prompts should provide only enough information to tell participants what their task is, not how to complete it. Also consider how users will encounter the task in a real scenario and mimic that presentation as much as possible during the usability study. For example, you could ask a participant to take any steps they might take when they reach home after receiving an injection device from the pharmacy, but you should not instruct the participant to store the device when they reach home.
- Expected actions: A list of tasks the user must perform to complete the scenario. If you are conducting a later-stage study, these expected actions should be pulled directly from the URRA, ensuring 1:1 alignment. All critical tasks should be accounted for in the protocol; certain noncritical tasks that users must perform in order to complete critical tasks should also be included. In contrast, if you are conducting an early-stage study, these tasks can include any actions the user should perform to complete the scenario goals (which might or might not be associated with risk). Early-stage studies can also include the collection of specific subjective feedback, in which case the corresponding “expected action” could be an expected response. For example, “participant rates grip ergonomics on a 5-point scale.”
Knowledge Tasks and Subjective Feedback
What if you cannot realistically evaluate all tasks or expected actions as hands-on activities? This is where knowledge tasks come into play. Written regulatory guidance acknowledges that some critical tasks cannot be evaluated through simulated use because they “involve users’ understanding of information, which is difficult to ascertain by observing user behavior” (Applying Human Factors and Usability Engineering to Medical Devices, FDA Final Guidance, Feb 2016). Common examples of such critical tasks include device inspection, storage conditions and appropriate use environment.
These knowledge tasks should be presented to participants upon the conclusion of the use scenarios and could include:
- Test setup guidelines describing what materials the test administrator and/or participant should have available at the commencement of the knowledge task(s), such as a paper IFU or digital resource.
- Knowledge task introduction presenting the knowledge task portion of the session to the participant.
- Expected response and/or success criteria indicating the intended response to each question.
In contrast, what if you want to collect subjective feedback from participants that does not necessarily require hands-on interaction (e.g., impressions of certain graphics in the IFU)? This is typical for early-stage research and should be included in the protocol. However, be careful to focus and limit these questions to the most important goals of your study. It is normal to want to collect as much information as possible to maximize the value of each session, but too many questions can ultimately dilute the collected feedback (e.g., due to participant fatigue or limited time to ask follow-up questions).
How to Improve Evaluation Activity Design for Better Results
Effective evaluation activity design starts with clear study goals. Early-stage research may not require rigid documentation but should focus on structured data collection. Later-stage usability tests must align with human factors engineering guidelines and regulatory expectations.
By developing evaluation activities with these key components in mind, test administrators can conduct usability testing with clear instructions, strong test protocols, and well-defined success criteria.
Contact our team to learn more about evaluation activity development. Or, sign up for a complimentary account with OPUS, our team’s software platform that provides HFE training, templates and exemplars.
Suruchi is a Senior Human Factors Specialist at Emergo by UL.