This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.
Poor usability is a primary cause of unintended consequences related to the use of electronic health record (EHR) systems, which negatively impacts patient safety. Due to the cost and time needed to carry out iterative evaluations, many EHR components, such as clinical decision support systems (CDSSs), have not undergone rigorous usability testing prior to their deployment in clinical practice. Usability testing in the predeployment phase is crucial to eliminating usability issues and preventing costly fixes that will be needed if these issues are found after the system’s implementation.
This study presents an example application of a systematic evaluation method that uses clinician experts with human-computer interaction (HCI) expertise to evaluate the usability of an electronic clinical decision support (CDS) intervention prior to its deployment in a randomized controlled trial.
We invited 6 HCI experts to participate in a heuristic evaluation of our CDS intervention. Each expert was asked to independently explore the intervention at least twice. After completing the assigned tasks using patient scenarios, each expert completed a heuristic evaluation checklist, rating the overall severity of the heuristic violations they identified and providing comments.
The 6 HCI experts included professionals from the fields of nursing (n=4), pharmaceutical sciences (n=1), and systems engineering (n=1). The mean overall severity scores of the identified heuristic violations ranged from 0.66 (SD 1.03; flexibility and efficiency of use) to 2.00 (SD 1.09; user control and freedom and error prevention).
Our heuristic evaluation process is simple and systematic and can be used at multiple stages of system development to reduce the time and cost needed to establish the usability of a system before its widespread implementation. Furthermore, heuristic evaluations can help organizations develop transparent reporting protocols for usability, as required by Title IV of the 21st Century Cures Act. Heuristic evaluation of EHRs and CDSSs by clinicians with HCI expertise has the potential to reduce the frequency of testing while increasing its quality, which may reduce clinicians’ cognitive workload and errors and enhance the adoption of EHRs and CDSSs.
Despite the great potential of electronic health records (EHRs), clinicians are often confronted with unintended consequences related to the use of these systems, which can negatively impact patient safety.
Usability measures the quality of a user’s experience when interacting with a system.
Usability evaluation methods are generally classified as expert- or user-based. Expert-based evaluations (eg, heuristic evaluations, cognitive walkthroughs, field observations) focus on ensuring that a system’s functionality is optimized and that evidence-based interface standards and norms are met.
Clinical decision support systems (CDSSs) are specific components of EHRs that are frequently added and updated to reflect new evidence. CDSSs are defined as systems that provide clinicians with clinical knowledge and patient information that is “intelligently filtered and presented at appropriate times to improve patient care.”
A major challenge to establishing the usability of the CDSSs interfaced with EHRs has been the cost and time needed to carry out rigorous, iterative evaluations.
We believe that combining user- and expert-based evaluations has the potential to improve the efficiency and effectiveness of a system. In a user-based evaluation, the average cost per general user (ie, nonclinician) is US $171, and at least 20 users are needed to identify 95% of the usability problems in a system.
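To make the arithmetic behind these figures concrete, the sketch below (our illustration, not part of the cited studies) uses the well-known problem-discovery model found(n) = 1 − (1 − λ)^n, where λ is the share of problems a single evaluator uncovers. The per-user rate implied by the cited figure (20 users finding 95% of problems) and the often-quoted expert rate of λ ≈ 0.31 are assumptions for illustration only.

```python
def proportion_found(n: int, lam: float) -> float:
    """Expected share of usability problems uncovered by n evaluators,
    assuming each independently finds a fraction `lam` of all problems."""
    return 1 - (1 - lam) ** n

# Per-user discovery rate implied by the figure cited above
# (20 users uncover ~95% of problems): solve 1 - (1 - lam)^20 = 0.95.
implied_user_rate = 1 - 0.05 ** (1 / 20)
print(f"Implied per-user discovery rate: {implied_user_rate:.2f}")  # ~0.14

# Cost of the user-based approach at the cited US $171 per user.
print(f"User-based testing: 20 x $171 = ${20 * 171}")               # $3,420

# Experts typically have a higher discovery rate; with Nielsen's
# often-quoted lam ~ 0.31, returns diminish after 3-5 evaluators.
for n in (3, 5, 6):
    print(f"{n} experts find ~{proportion_found(n, 0.31):.0%} of problems")
```

Under these assumed rates, a handful of expert evaluators approaches the coverage of a much larger (and costlier) user panel, which is the economic rationale for the expert-based approach described next.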
In this study, we offer an example application of our heuristic evaluation process, which provides a low-cost, time-effective, and expert-based method that includes potential users (ie, clinician usability experts) to evaluate the usability of CDSSs prior to their deployment in clinical practice.
A heuristic evaluation is a usability-inspection method commonly used in the field of human-computer interaction (HCI).
The example application of our approach involved the systematic evaluation of an electronic intervention containing clinical decision support (CDS) that was being prepared for deployment and testing by nurses in a national randomized controlled trial (RCT). Prior to nationwide deployment, we conducted a heuristic evaluation with HCI experts to identify any violations of usability principles in the CDS intervention.
We chose the heuristic evaluation process based on Nielsen’s 10 heuristics.
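For readers unfamiliar with the instrument, the sketch below shows one way such a checklist could be represented in code. The structure and field names are our illustration, not the study’s actual checklist; only the heuristic names and the 0 to 4 severity scale come from the source.

```python
from dataclasses import dataclass, field

# Nielsen's 10 usability heuristics, which anchor the checklist.
NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

# Severity scale used in the study: 0 = best, 4 = worst.
SEVERITY_LABELS = {
    0: "no usability problem",
    1: "cosmetic problem only",
    2: "minor usability problem",
    3: "major usability problem",
    4: "usability catastrophe",
}

@dataclass
class ChecklistEntry:
    """One expert's overall rating of a single heuristic (illustrative)."""
    heuristic: str
    severity: int                                  # 0-4, per SEVERITY_LABELS
    comments: list[str] = field(default_factory=list)
```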
The main components of the CDS intervention evaluated in this paper were nursing diagnoses (NANDA-I), nursing outcomes (NOC), and nursing interventions (NIC).
Subsequently, our team was funded by the National Institutes of Health to conduct a national, remotely administered RCT of the previously developed intervention. A desktop prototype in the 3 display formats (graph, table, and text) was used for this heuristic evaluation.
Three types of display formats (reproduced with permission from the HANDS Research Team). NANDA-I: NANDA International nursing diagnosis; NIC: nursing intervention classification; NOC: nursing outcome classification; POC: plan of care.
Clinical decision support suggestions (reproduced with permission from the HANDS Research Team). NANDA-I: NANDA International nursing diagnosis; NIC: nursing intervention classification; NOC: nursing outcome classification; POC: plan of care.
We used purposive sampling to invite 6 HCI experts, both nursing and nonnursing, to participate in this study from August 3, 2020, to September 11, 2020. The sample size was decided in accordance with current recommendations, which state that including more than 3 to 5 evaluators in a heuristic evaluation is unlikely to yield additional useful information.
Our heuristic evaluation was conducted virtually during the COVID-19 pandemic. Before the evaluation, each expert was given a standardized orientation, via a Microsoft PowerPoint video and transcript, explaining how the CDS intervention works. The experts were also presented with 2 use cases describing patient scenarios (see below).
After completing their given tasks using the use cases, each expert was asked to complete a heuristic evaluation checklist, assigning an overall severity score to each heuristic and commenting on any violations.
Use cases describing patient scenarios. POC: plan of care.
The University of Florida Institutional Review Board reviewed and approved the addition of an expert evaluation of the intervention software, which involved no subjects from the clinical trial (IRB201902611).
Data analysis focused on the experts’ comments and overall severity scores collected via the heuristic evaluation checklist.
Descriptive statistics were used to analyze the overall severity of the identified usability factors collected using the checklist. The mean and standard deviation of the overall severity score were calculated for each heuristic principle.
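As a concrete sketch of this analysis step, the snippet below computes a mean and standard deviation per heuristic and ranks heuristics by mean severity. The per-expert ratings are illustrative placeholders, not the study’s raw data.

```python
from statistics import mean, stdev

# Illustrative per-expert overall severity ratings (0-4), one value per
# expert. These are placeholders, not the study's actual raw data.
ratings = {
    "User control and freedom":          [0, 2, 2, 2, 3, 3],
    "Flexibility and efficiency of use": [0, 0, 0, 0, 2, 2],
}

# Mean and sample standard deviation per heuristic principle.
summary = {h: (mean(s), stdev(s)) for h, s in ratings.items()}

# Rank heuristics by mean severity to surface those most in need of refinement.
for heuristic, (m, sd) in sorted(summary.items(), key=lambda kv: -kv[1][0]):
    print(f"{heuristic}: mean {m:.2f} (SD {sd:.2f})")
```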
The 6 HCI experts who participated in the heuristic evaluation were professionals in the fields of nursing (n=4), pharmaceutical sciences (n=1), and systems engineering (n=1). The mean overall severity scores of the identified heuristic violations ranged from 0.66 (SD 1.03; flexibility and efficiency of use) to 2.00 (SD 1.09; user control and freedom and error prevention).
The heuristic principles identified as the most in need of refinement were user control and freedom, error prevention, recognition rather than recall, and help and documentation.
Four highest mean severity scores by heuristic. Severity scores range from 0 to 4: no usability problem (0), cosmetic problem only (1), minor usability problem (2), major usability problem (3), and usability catastrophe (4).
Mean severity scores and sample comments from the heuristic evaluations.
Usability heuristic | Severity score^a, mean (SD) | Sample comments
Visibility of system status | 1.66 (1.21) | Unclear whether the care plan icons (circle, square, triangle) are clickable
Match between system and the real world | 1.00 (1.09) | The pain scale in the CDS intervention, in which a score of 1 indicates “severe” pain, is the opposite of common pain scales used in clinical practice (in which 1 indicates “mild” pain)
User control and freedom | 2.00 (1.09) | Limited “Undo” functionality
Consistency and standards | 1.16 (1.16) | Unclear which formatting standards are being referenced
Help users recognize, diagnose, and recover from errors | 1.66 (1.63) | The error message is not informative because it does not indicate where the error occurred
Error prevention | 2.00 (1.09) | A warning message is needed when clicking the minus button
Recognition rather than recall | 1.83 (1.16) | Unclear what was undone
Flexibility and efficiency of use | 0.66 (1.03) | Suggested helping users find content on the site (hyperlinks, alphabetical index)
Help and documentation | 1.83 (0.98) | Needs a HELP function explaining how the CDS intervention works
Aesthetic and minimalist design (by display format) | |
Graph format | 1.33 (1.50) | Not visually appealing because of the similar blue and gray shades
Table format | 1.66 (1.50) | The font is too small and difficult to read
Text format | 1.16 (0.75) | No labels on the icons; suggested using text section headers instead of icons

^a Severity score from 0=best to 4=worst: no usability problem (0), cosmetic problem only (1), minor usability problem (2), major usability problem (3), and usability catastrophe (4).
In response to
The next heuristics identified as requiring the most improvement were
Finally, for the heuristic
Action Required menu (reproduced with permission from the HANDS Research Team). NIC: nursing intervention classification; NOC: nursing outcome classification; POC: plan of care.
Warning message (reproduced with permission from the HANDS Research Team). NANDA-I: NANDA International nursing diagnosis; NIC: nursing intervention classification; NOC: nursing outcome classification; POC: plan of care.
Pain scale scores (reproduced with permission from the HANDS Research Team). NANDA-I: NANDA International nursing diagnosis; NIC: nursing intervention classification; NOC: nursing outcome classification; POC: plan of care.
With the proliferation of EHRs and CDSSs in today’s health care, rigorous and multistage usability evaluations are essential to the development of effective electronic systems; however, these evaluations are considered challenging due to the cost and time required to conduct them.
This study took approximately 2 months (from August to September 2020) to locate and enlist the experts, distribute study materials, and compile the results. It is important to emphasize that this study was conducted during the global COVID-19 pandemic, which potentially affected the recruitment period as well as data collection. Thus, our process can likely be performed in a shorter period of time than the 2 months we experienced.
Through expert-based usability testing, we discovered major and minor usability problems in the user interface of an electronic CDS intervention prior to its deployment to end users. Despite their benefits, heuristic evaluations are rarely reported for usability testing, especially late-stage (ie, predeployment, after prototyping) usability testing. Although user-based usability testing is effective in identifying major usability issues that affect user performance, a focus on user testing alone may lead to missed usability violations that users without HCI expertise may not recognize.
A heuristic evaluation with experts can identify minor usability problems that are often not detected in user testing but can be costly to fix if detected after a system’s deployment.
Since expert-based usability testing focuses on “making things work” in a natural and logical order, the experts in this study recommended reversing the direction of our intervention’s pain scale so that it ranges from 0 (no pain) to 4 (severe pain); this pain scale now matches those used in real-world clinical practice and is intuitive to use. It is important to note that this usability problem was detected only by the nursing HCI experts, who have backgrounds in clinical nursing practice; this underscores the advantage of a panel of experts with skills and experience in the relevant clinical domains (eg, nursing, medicine), as well as in usability and HCI, when evaluating clinical technologies.
The limitations of this study were related to the experts’ independent evaluations. To complete the evaluation, each expert used his or her own device (eg, desktop or laptop computer, tablet) with differing screen sizes, which could have influenced their evaluations of the CDS intervention. Nonetheless, to obtain a consistent view of the intervention’s general scope, we asked the experts to use Google Chrome’s Incognito (ie, private) mode to access the intervention and to carefully explore the user interface’s screen layout and interaction structure at least twice.
Another potential limitation of our study is that we did not collect the demographic information of our study participants. We invited them to participate in our expert-based evaluation as HCI experts either with or without domain expertise.
Our heuristic evaluation process is simple, systematic, and theory-based and can help ensure a system’s optimal functionality. Beyond confirming that evidence-based interface standards and norms are met, our process can be used at multiple stages of system development before implementation (ie, in the predeployment phase after prototyping) to reduce the time and cost of the iterative evaluations needed to improve a system’s usability after widespread implementation. A heuristic evaluation that includes HCI experts with domain expertise (ie, clinician HCI experts) has the potential to reduce the frequency of testing while increasing its quality, which may reduce clinicians’ cognitive workload and EHR-related errors. This small investment in early refinement can yield sizable benefits, further enhancing EHR and CDSS adoption and acceptance by clinicians in real-world practice.
CDS: clinical decision support
CDSS: clinical decision support system
EHR: electronic health record
HCI: human-computer interaction
POC: plan of care
RCT: randomized controlled trial
This study was supported by the National Institute of Nursing Research, National Institutes of Health, under award number R01 NR018416-01. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute of Nursing Research, National Institutes of Health. We are grateful to Rishabh Garg for providing technical help.
None declared.