
Noteworthy Strategies: Optimizing Data Collection During Complex System Usability Tests

Learn strategies for high-quality data capture in complex systems usability tests. Discover tips on structuring data collection tools, setting up test environments and synthesizing data.


July 8, 2024

By Ashley Mitchell and Tess Forton

Capturing high-quality data during usability test sessions is essential to facilitate analysis of participant interactions. The more complex a device or system, the more challenging this can be, especially for the personnel documenting the detailed data, whom we refer to as “analysts.” For example, an analyst often uses a laptop to capture real-time data, which can limit mobility, or might have low visibility of a participant’s interactions under an in-vitro diagnostic (IVD) system hood. Complex devices like robotic surgical systems might involve participant teams and require multiple analysts to capture the data adequately and efficiently.

There are three key opportunities to facilitate high-quality data capture: before test sessions, by carefully structuring your data collection tool; during test sessions, by strategically setting up the test environment; and after test sessions, by thoroughly synthesizing captured data. Here, we present considerations for each of these three stages.

Preparing for and collecting data

A thoughtfully designed data collection tool is foundational to capturing robust and accurate data during complex system usability tests. Microsoft Excel or a similar spreadsheet-style tool can help organize content and enable analysts to locate and navigate different tasks efficiently. Consider the following for your data collection tool (a layout sketch follows the list):

  • Leverage timestamps and shorthand participant labels (e.g., S for surgeon and A for assisting staff), particularly for scenarios with many steps or multiple participants.
  • When keeping track of a large series of tasks, consider conserving space by consolidating sub-steps (when practical).
  • Consider grouping tasks under sub-headers and/or color-coding task groups, and use short task titles so the analyst can read and locate tasks quickly.
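As a minimal sketch of these ideas, the hypothetical Python script below generates a spreadsheet-style template with grouped tasks, short task titles, and columns for timestamps and shorthand participant labels. The task names, groups, and file name are illustrative placeholders, not prescribed content.

```python
import csv

# Illustrative task groups with short titles; a real study would use
# tasks from its own use scenarios and risk analysis.
TASK_GROUPS = {
    "Setup": ["Power on", "Load sample", "Confirm ID"],
    "Run": ["Start assay", "Monitor status"],
    "Teardown": ["Unload sample", "Power off"],
}

# Shorthand participant labels (e.g., S = surgeon, A = assisting staff).
PARTICIPANT_LABELS = "S/A"

with open("data_collection_sheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Task", "Timestamp", "Participant", "Observation"])
    for group, tasks in TASK_GROUPS.items():
        # Sub-header row marking the task group; once imported into
        # Excel, this row can be color-coded for quick navigation.
        writer.writerow([f"== {group} =="])
        for task in tasks:
            writer.writerow([task, "", PARTICIPANT_LABELS, ""])
```

Opening the resulting CSV in Excel and color-coding the sub-header rows yields a compact sheet the analyst can scan quickly during a session.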

For tests requiring multiple analysts, consider the most logical split of data capture. Refer to Conducting Usability Tests with Multiple Analysts for more detailed considerations for these approaches.

Other important considerations for effective data capture include how you set up your test room and materials:

  • For a complex system, users often must move among various components or between multiple workstations (e.g., a pipetting station and an IVD instrument station), so analysts need similar mobility. Consider using Mayo stands and rolling chairs, or set up multiple data capture stations so analysts can easily change viewing angles.
  • Augment the test team’s “eyes and ears” with tools such as lapel microphones and wireless speakers to increase the audibility of participant comments. Alternatively, set up live video feeds of angles with low visibility (e.g., a bird’s-eye view), or route device screens that are difficult to view from afar to a monitor near the analyst.
  • Visibility of some interactions may be challenging due to the system’s positioning requirements or the risk of disrupting users’ natural interactions. Familiarize yourself with how inputs to one portion of the system affect outputs on others (for example, what appears on the system screen when a user presses a button) and use these cues to verify actions that cannot be viewed directly.
  • Despite best efforts, some interactions might be overlooked or impossible to view live. Consider which interactions or interface elements might be prone to this, and position recording equipment accordingly to ensure these interactions are captured for later review.

Also, strategically managing time, particularly downtime, is important. The test team can use inherent downtime (e.g., sample processing time for IVD systems) to its advantage, such as by conferring on findings observed so far to improve alignment and efficiency during the debriefing interview. For more insight on debriefing participant teams, refer to Usability Testing of Medical Devices Used by Teams.

Synthesizing collected data

After the test session is completed, allow time to debrief and cross-check data from multiple sources (e.g., audio-visual recordings, the moderator, other analysts) to clean, organize, and fill in any gaps. Complex devices and systems generate a large amount of data, so it is particularly important to comb through it carefully, attribute each finding to the correct participant, and avoid double-counting. Ideally, perform these steps right after the test session while observations are still fresh.
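As a rough illustration of this cross-checking step, the hypothetical Python sketch below merges observation logs from two analysts, drops exact duplicates so shared findings are not double-counted, and regroups findings by participant. The records and field layout are invented for the example.

```python
from collections import defaultdict

# Hypothetical records in the form (participant ID, task, finding);
# real entries would come from each analyst's data collection sheet.
analyst_a = [
    ("P01", "Load sample", "Hesitated locating the sample port"),
    ("P01", "Start assay", "Pressed back button unintentionally"),
]
analyst_b = [
    ("P01", "Load sample", "Hesitated locating the sample port"),
    ("P02", "Start assay", "Asked moderator for help"),
]

# Merge both logs; the set union keeps one copy of findings that both
# analysts recorded, so the same event is not counted twice.
merged = sorted(set(analyst_a) | set(analyst_b))

# Regroup by participant to confirm each finding is attributed to the
# correct person before analysis and reporting.
by_participant = defaultdict(list)
for participant, task, finding in merged:
    by_participant[participant].append((task, finding))

for participant, findings in sorted(by_participant.items()):
    print(participant)
    for task, finding in findings:
        print(f"  {task}: {finding}")
```

Note that exact-match deduplication only catches identically worded entries; findings the analysts phrased differently still need to be reconciled by hand during the debrief.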

Collecting high-quality data during complex system usability tests requires thoughtful preparation. Ahead of the test, tailor your data collection tool to the complexity of the system, anticipate which interactions and components will be most challenging to observe, and plan workarounds accordingly. Likewise, structure the test environment to facilitate data collection. Lastly, factor later data analysis and reporting needs into your data collection approach, and polish captured data early and often.

Emergo has extensive experience conducting usability testing for an array of medical products. Contact our team to learn more about optimizing your usability test data collection strategy. Or, check out our Usability Test Data Collection Sheet template on OPUS, our team’s software platform that provides HFE training, tools, and templates.

Ashley Mitchell is a Human Factors Specialist and Tess Forton is a Managing User Researcher at Emergo by UL.
