Published on 15.02.16 in Vol 3, No 1 (2016): Jan-Jun

How Does Learnability of Primary Care Resident Physicians Increase After Seven Months of Using an Electronic Health Record? A Longitudinal Study


Original Paper

1Department of Internal Medicine, Division of Cardiology, University of Nebraska Medical Center, Omaha, NE, United States

2Department of Family and Community Medicine, University of Missouri, Columbia, MO, United States

3Department of Health Management and Informatics, University of Missouri, Columbia, MO, United States

4Informatics Institute, University of Missouri, Columbia, MO, United States

*all authors contributed equally

Corresponding Author:

Min Soon Kim, PhD

Department of Health Management and Informatics

University of Missouri

CE728 Clinical Support & Education, DC006.00

5 Hospital Drive

Columbia, MO,

United States

Phone: 1 573 884 0115

Fax: 1 573 882 6158

Email: kimms@health.missouri.edu


Background: Electronic health records (EHRs) with poor usability present steep learning curves for new resident physicians, who are already overwhelmed in learning a new specialty. This may lead to error-prone use of EHRs in medical practice by new resident physicians.

Objective: The study goal was to determine learnability gaps between expert and novice primary care resident physician groups by comparing performance measures when using EHRs.

Methods: We compared performance measures after two rounds of learnability tests (November 12, 2013 to December 19, 2013; February 12, 2014 to April 22, 2014). In Rounds 1 and 2, 10 novice and 6 expert physicians, and 8 novice and 4 expert physicians participated, respectively. Laboratory-based learnability tests using video analyses were conducted to analyze learnability gaps between novice and expert physicians. Physicians completed 19 tasks, using a think-aloud strategy, based on an artificial but typical patient visit note. We used quantitative performance measures (percent task success, time-on-task, mouse activities), a system usability scale (SUS), and qualitative narrative feedback during the participant debriefing session.

Results: There was a 6-percentage-point increase in novice physicians’ task success rate (Round 1: 92%, 95% CI 87-99; Round 2: 98%, 95% CI 95-100) and a 7-percentage-point increase in expert physicians’ task success rate (Round 1: 90%, 95% CI 83-97; Round 2: 97%, 95% CI 93-100); a 10% decrease in novice physicians’ time-on-task (Round 1: 44s, 95% CI 32-62; Round 2: 40s, 95% CI 27-59) and a 21% decrease in expert physicians’ time-on-task (Round 1: 39s, 95% CI 29-51; Round 2: 31s, 95% CI 22-42); a 20% decrease in novice physicians’ mouse clicks (Round 1: 8 clicks, 95% CI 6-13; Round 2: 7 clicks, 95% CI 4-12) and a 39% decrease in expert physicians’ mouse clicks (Round 1: 8 clicks, 95% CI 5-11; Round 2: 5 clicks, 95% CI 1-10); and a 14% decrease in novice physicians’ mouse movements (Round 1: 9247 pixels, 95% CI 6404-13,353; Round 2: 7991 pixels, 95% CI 5350-11,936) and a 14% decrease in expert physicians’ mouse movements (Round 1: 7325 pixels, 95% CI 5237-10,247; Round 2: 6329 pixels, 95% CI 4299-9317). The SUS measure of overall usability demonstrated only minimal change in the novice group (Round 1: 69, high marginal; Round 2: 68, high marginal) and no change in the expert group (74, acceptable, for both rounds).

Conclusions: This study found differences in novice and expert physicians’ performance, demonstrating that physicians’ proficiency increased with EHR experience. Our study may serve as a guideline to improve current EHR training programs. Future directions include identifying usability issues faced by physicians when using EHRs, through a more granular task analysis to recognize subtle usability issues that would otherwise be overlooked.

JMIR Human Factors 2016;3(1):e9

doi:10.2196/humanfactors.4601

Keywords



Physicians’ Electronic Health Records (EHR) Use

Health information technology’s (HIT) functionality in clinical practice is expanding, and physicians are increasingly adopting EHRs as a result of the financial incentives offered by the Centers for Medicare & Medicaid Services (CMS) [1]. Meaningful Use (MU) is one measure of successful adoption of EHRs as a component of the Health Information Technology for Economic and Clinical Health (HITECH) Act proposed by the Office of the National Coordinator for Health Information Technology (ONC) and CMS. EHRs are “records of patient health information generated by visits in any health care delivery setting” [2]. EHRs center on the overall health of a patient beyond the clinical data gathered from a single provider, and offer a more comprehensive view of a patient’s care. EHRs are designed for sharing data with other health care providers such as laboratories and specialists; therefore, EHRs contain information from every clinician involved in a patient’s care [3]. In a 2013 data brief, the National Center for Health Statistics (NCHS) reported that 78% of office-based physicians had adopted EHRs in their practice [4].

Presently, EHRs require a large investment of effort for users to become proficient in their use. Resident physicians were selected for this study because those who are not adequately trained in using EHRs may experience a steep learning curve when beginning their residency program [5]. In an effort to maximize physician proficiency with EHRs, hospitals and clinics provide comprehensive EHR training for their resident physicians. However, it is challenging to find sufficient time to train physicians to use new EHR systems [6-9]. Using information technology to manage the process of patient care and to communicate with patients is an essential redesign of clinical practice [10]. Advantages of adopting an EHR reported by EHR users include increased adherence to guidelines in preventive care, decreased paperwork for providers, and improvement in the overall quality and efficiency of patient care [11-13]. Nonetheless, there are possible drawbacks to EHRs: financial burden, mismatch of human and machine workflow models, and productivity loss potentially caused by EHR usability issues [11,12,14-22].

Usability is described as the degree to which software can be employed by users to effectively perform a particular task in a specific content area [23]. EHRs with poor usability may have a negative effect on clinicians’ EHR learning experience. This could lead to increased cognitive load, medical errors, and a decline in the quality of patient care [24-29]. Learnability is defined as the extent to which a system permits users to understand how to use it [30]. Learnability concerns the amount of time and effort needed for a user to develop proficiency with a system over time and after repeated use [31]. While definitions of usability and learnability vary in the literature [32-34], definitions of learnability are strongly correlated with usability and proficiency [33,35,36]. Allowing physicians to efficiently accomplish clinical tasks within the EHR may ease the time constraints experienced by physicians during patient visits.

According to an EHR user satisfaction survey completed in 2012 by 3088 family physicians, approximately 62% of survey respondents were not satisfied with many of the best-known EHR systems, and EHR vendor support and training were the areas with lowest satisfaction ratings [37]. Multiple studies on successful EHR implementations have stressed the usefulness of training in the implementation process [7,9,38-47]. A survey by Aaronson et al [44] concerning EHR use in 219 family practice residency programs indicated that resident physicians’ EHR training may have an impact not only on perceived ease of use of EHR systems, but also on the use of EHR systems in their practices after residency.

Prior EHR Usability Evaluation Studies

Previous studies have shown the importance of usability evaluation in the EHR adoption and implementation process. Current best practices promote the use of cognitive approaches to examine human-computer interactions in EHR systems [2,48-50]. Khajouei and Jaspers performed a systematic review examining the impact of the design aspects of medication systems in computerized physician order entry (CPOE) systems (usually integrated in EHRs) on usability. They found that proper CPOE system design is fundamental to promoting physicians' adoption and diminishing medication errors [51]. Multiple studies have used heuristic evaluation as a method to identify usability issues in health information technology. Chan et al evaluated the usability of a CPOE order set system using heuristic evaluation and discovered 92 unique heuristic violations across 10 heuristic principles [52]. Harrington and Porch investigated an EHR’s usability and identified 14 usability heuristics that were violated 346 times in intensive care unit clinical documentation [53]. Li et al evaluated clinical decision support with simulated usability testing using a think-aloud protocol, and found that 90% of negative comments from users concerned navigation and workflow issues [54]. In a study at an urban medical center in New York, Kushniruk et al examined the relationship between usability testing and training for a commercial EHR system. About 1 month after in-class training, laboratory-based usability testing containing 22 sets of scenario-based tasks was conducted. Usability issues were identified as physicians completed their tasks, leading to numerous areas of potential improvement for system learnability and usability.

Objective

EHRs with poor usability present steep learning curves for new resident physicians, who are already overwhelmed in learning a new specialty. This may lead to error-prone use of EHRs in medical practice by new resident physicians. Identifying and addressing early barriers in the learning environment can help improve the overall capacity of new physicians and save costs for organizations. The objective of this study was to determine the difference in learnability by comparing changes in performance measures between expert and novice primary care resident physicians across 2 rounds of learnability tests, conducted approximately 3 and 7 months after the novice physicians’ initial EHR training (Round 1: November 12, 2013 to December 19, 2013; Round 2: February 12, 2014 to April 22, 2014). We analyzed learnability by addressing 2 specific research questions: (1) Do performance measures of expert and novice physicians improve after 3 and 7 months of EHR experience? and (2) Does the learnability gap between novice and expert physician groups change after 7 months of EHR experience?


Study Design

To determine learnability gaps between expert and novice physicians when using EHRs, data were collected through learnability testing using Morae video analysis software (TechSmith). Twelve family medicine and 4 internal medicine resident physicians performed 19 artificial, scenario-based tasks in a laboratory setting. Four quantitative performance measures, the System Usability Scale (SUS), a validated survey instrument [55], and a qualitative debriefing session with participants were employed. This study was approved by the University of Missouri Health Sciences Institutional Review Board.

Organizational Setting

This study took place at the University of Missouri Health System (UMHS), a 536-bed, tertiary-care academic hospital located in Columbia, Missouri. In 2012, UMHS had approximately 553,300 clinic visits and employed more than 70 primary care physicians. The Department of Family and Community Medicine (FCM) runs 6 clinics, while the Department of Internal Medicine (IM) oversees 2 primary care clinics [56]. The Healthcare Information and Management Systems Society (HIMSS), a non-profit organization that scores how effectively hospitals employ electronic medical record (EMR) applications, assigned UMHS a rating of Stage 7 on the EMR Adoption Model [57]. In other words, UMHS has adopted electronic patient charts, examines clinical data through data warehousing, and shares health information electronically with authorized health care bodies [58]. The CPOE within the EHR permits physicians to safely and electronically access and place lab and medication orders for patients, and to transfer orders directly to the departments responsible for implementing the requests. UMHS’ EHR database comprises all data from the university’s hospitals and clinics. University of Missouri Health Care has been using a mature EHR system from the same vendor since 2003. New users of the EHR receive 4 to 8 hours of training and also have drop-in access (or can book an appointment) to an EHR Help Room to receive help or further training. Supplemental online training materials such as documents, videos, and self-paced tutorials are also available. When new features are added to the EHR, illustrated instructions and explanations become available.

Participants

There is presently no evidence-based approach to measuring a user’s EHR experience; therefore, novice and expert physicians were distinguished based on clinical training level and number of years using the EHR. This decision was based on a discussion with an experienced physician champion (JLB). This study also examined whether, after 1 year of EHR use, resident physicians had gained sufficient skills to be considered experts [59]. Thus, 10 first-year resident physicians were grouped as novice users and 6 second- and third-year resident physicians were grouped as expert users. Both FCM and IM run 3-year residency programs. A convenience sampling method was used when choosing participants [60]. UMHS FCM and IM physicians were selected for the sample because, as primary care residents, they have equivalent clinical roles and duties. Based on a review of the literature, a sample of 15 to 20 participants was judged suitable for exploratory usability studies to identify major problems to correct in a product development cycle [61-63]. However, we observed data saturation in terms of usability issues at 5 participants. Participation was voluntary and subjects were compensated US $20 for their involvement in the project.

In Round 1, 10 novice physicians and 6 expert physicians participated in the study. Of the 10 novice physicians, 7 were from family medicine and 3 from internal medicine; 6 (60%) were male, 8 (80%) identified their race as white, 1 (10%) identified as Asian, and 1 (10%) identified as both Asian and white. The ages of the novice physicians ranged from 27 to 31 years, with a mean of 28 years. In Round 1, 4 (40%) novice physicians had no experience with an EHR other than the one at UMHS, 2 (20%) had less than 3 months of experience, 1 (10%) had 7 months to 1 year of experience, and 3 (30%) had over 2 years of experience with an EHR other than the one at UMHS. Of the 6 expert physicians, 5 were from family medicine and 1 from internal medicine; 5 (83%) were female and all (100%) identified their race as white. Two expert physicians did not provide their date of birth or EHR experience and were excluded from the calculations of age range, mean age, and EHR experience. The ages of the expert physicians ranged from 30 to 33 years, with a mean of 31 years. Among the expert physicians, 1 (17%) had no experience with an EHR other than the one at UMHS, 1 (17%) had 7 months to 1 year of experience, and 2 (33%) had over 2 years of experience with an EHR other than the one at UMHS.

A total of 8 novice physicians and 4 expert physicians who participated in Round 1 also participated in Round 2 of the study; 2 novice and 2 expert physicians who participated in Round 1 declined to participate in Round 2. Conducting 2 rounds of data collection was a major strength of this study because it allowed us to obtain a valid measure of learnability. Of the 8 novice physicians in Round 2, 5 were from family medicine and 3 from internal medicine; 5 (63%) were male, 6 (75%) identified their race as white, 1 (13%) identified as Asian, and 1 (13%) identified as both Asian and white. The ages of the novice physicians ranged from 27 to 30 years, with a mean of 28 years. In Round 2, 3 (38%) novice physicians had no experience with an EHR other than the one at UMHS, 2 (25%) had less than 3 months of experience, 1 (13%) had 7 months to 1 year of experience, and 2 (25%) had over 2 years of experience with an EHR other than the one at UMHS. Four family medicine expert physicians participated; all 4 (100%) were female and all (100%) identified their race as white. The ages of the expert physicians ranged from 30 to 33 years, with a mean of 31 years. Of the expert physicians, 1 (25%) had no experience with an EHR other than the one at UMHS, 1 (25%) had 7 months to 1 year of experience, and 2 (50%) had over 2 years of experience with an EHR other than the one at UMHS. Because of the small sample size, we did not attempt to control for age or gender.

Scenario and Tasks

Two sets of artificial but realistic scenario-based tasks were used in the study. The tasks were created based on discussion with an experienced physician champion (JLB) and 2 chief resident physicians from the participating departments (FCM, IM). When completing Round 1 of the learnability test, resident physicians were given a scenario for a “scheduled follow-up visit after a hospitalization for pneumonia.” When completing Round 2 of the learnability test, resident physicians were given a scenario for a “scheduled follow-up visit after a hospitalization for heart failure.” While different, these 2 scenarios were equivalent in difficulty, workflow, and functionalities used. These scenarios were employed to assess physicians’ use of the EHR with realistic inpatient and outpatient information. We included 19 tasks that are generally completed by both novice and expert primary care physicians. These tasks also met the 2014 EHR certification criteria in 45 CFR 170.314 for meaningful use (MU) Stage 2 [31]. The alphanumeric code located beside each task corresponds to the EHR certification criteria that satisfy meaningful use Stage 2 objectives. In order to measure learnability more effectively, we confirmed that the tasks were also practiced during the EHR training required of resident physicians at the commencement of their residency. The tasks had clear objectives that physicians were able to follow without needless clinical cognitive load or ambiguity, which would have deviated from the study aim. The tasks were as follows:

1. Start a new note (§170.314[e][2]).

2. Include visit information (§170.314[e][2]).

3. Include chief complaint (§170.314[e][2]).

4. Include history of present illness (§170.314[e][2]).

5. Review current medications contained in the note (§170.314[a][6]).

6. Review problem list contained in the note (§170.314[a][5]).

7. Document new medication allergy (§170.314[a][7]).

8. Include review of systems (§170.314[e][2]).

9. Include family history (§170.314[a][13]).

10. Include physical exam (§170.314[a][4] and §170.314[e][2]).

11. Include last comprehensive metabolic panel (CMP) (§170.314[b][5]).

12. Save the note.

13. Include diagnosis (§170.314[a][5]).

14. Place order for chest X-ray (§170.314[a][1] and §170.314[e][2]).

15. Place order for basic metabolic panel (BMP) (§170.314[a][1] and §170.314[e][2]).

16. Change a medication (§170.314[a][1] and §170.314[a][6]).

17. Add a medication to your favorites list (§170.314[a][1]).

18. Renew one of the existing medications (§170.314[a][1] and §170.314[a][6]).

19. Sign the note.

Performance Measures

Learnability was evaluated using 4 quantitative performance measures. Percent task success was the percentage of subtasks that participants successfully completed without error. Time-on-task measured how long, in seconds, it took each participant to complete each task; timing began when a participant clicked the “start task” button and ended when the “end task” button was clicked. Mouse clicks counted the number of times the participant clicked the mouse while completing a given task. Mouse movement measured, in pixels, the length of the navigation path taken by the mouse to complete a given task.

For percent task success rate, a higher value usually signifies better performance, representing participants’ skill with the system. For time-on-task, mouse clicks, and mouse movements, a higher value usually indicates poorer performance [62,64,65]. As such, higher values may indicate that the participant encountered complications while using the system.
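As an illustration of how such measures can be derived, the sketch below computes all 4 measures from a hypothetical per-task interaction log. The log structure (TaskLog) and its field names are our own assumptions for illustration only; in this study, Morae computed these values internally from its recordings.

```python
# A minimal, hypothetical sketch (not Morae's implementation) of how the four
# performance measures could be derived from a per-task interaction log.
from dataclasses import dataclass
from math import hypot
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) screen coordinates in pixels

@dataclass
class TaskLog:                       # hypothetical log structure for one task
    start_time: float                # seconds, when the "start task" button was clicked
    end_time: float                  # seconds, when the "end task" button was clicked
    clicks: List[Point]              # position of each mouse click during the task
    mouse_path: List[Point]          # sampled pointer positions during the task
    subtasks_completed: int          # subtasks finished without error
    subtasks_total: int              # subtasks defined for the task

def performance_measures(log: TaskLog) -> dict:
    """Compute percent task success, time-on-task, mouse clicks, and mouse movement."""
    time_on_task = log.end_time - log.start_time
    mouse_clicks = len(log.clicks)
    # Mouse movement: total length of the pointer's navigation path, in pixels.
    mouse_movement = sum(
        hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(log.mouse_path, log.mouse_path[1:])
    )
    percent_task_success = 100.0 * log.subtasks_completed / log.subtasks_total
    return {
        "percent_task_success": percent_task_success,
        "time_on_task_s": time_on_task,
        "mouse_clicks": mouse_clicks,
        "mouse_movement_px": mouse_movement,
    }
```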

System Usability Scale

After testing, participants were asked to complete the System Usability Scale (SUS) to supplement the performance measures. The SUS is a 10-item survey measured on a Likert scale that provides fairly robust measures of subjective usability and is a widely used, validated instrument in HIT evaluation [31,55,66]. The SUS produces a single score (ranging from 0 to 100, with 100 being a perfect score [55]) that represents a composite measure of the overall usability of the system under examination. A score of 0 to 50 is considered not acceptable, 50 to 62 is low marginal, 63 to 70 is high marginal, and 70 to 100 is acceptable.
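For readers unfamiliar with SUS scoring, the sketch below applies the standard SUS scoring rules together with the acceptability bands described above. The function names are ours, and the overlapping band boundaries reported in the text are resolved arbitrarily in the interpretation step.

```python
def sus_score(responses):
    """Score one System Usability Scale questionnaire (ten 1-5 Likert responses).

    Standard SUS scoring: odd-numbered (positively worded) items contribute
    (response - 1), even-numbered (negatively worded) items contribute
    (5 - response); the total is multiplied by 2.5 to yield a 0-100 score.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

def interpret(score):
    """Map a SUS score to the acceptability bands used in this study
    (boundaries as reported in the text; overlaps resolved arbitrarily)."""
    if score <= 50:
        return "not acceptable"
    if score < 63:
        return "low marginal"
    if score <= 70:
        return "high marginal"
    return "acceptable"

# Example: a participant answering 4 to every item scores 50 (not acceptable).
print(sus_score([4] * 10), interpret(sus_score([4] * 10)))
```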

Data Collection

Two rounds of data collection were scheduled to measure learnability by comparing whether participants’ performance measures (task success, time-on-task, mouse clicks, and mouse movements) improved and whether participants experienced fewer usability issues with longer exposure to the system. Learnability pertains to the amount of time and effort needed for a user to develop proficiency with a system over time and after repeated use [31]. The 2 groups (novice and expert physicians) were essential for our comparison, because experts’ measures were used to examine novices’ improvement toward becoming an expert. Round 1 learnability data were collected between November 12, 2013 and December 19, 2013, and Round 2 data were collected between February 12, 2014 and April 22, 2014. Round 1 data collection began 3 months after novice (Year 1) resident physicians completed their initial mandatory EHR training at UMHS. Resident physicians were invited to complete Round 2 approximately 3 months after the date they completed Round 1. Learnability testing took approximately 20 minutes and was conducted on a 15-inch laptop running the Windows 7 operating system. To preserve consistency and reduce undesirable interruptions, the participant and the facilitator were the only 2 individuals in the conference room. At the beginning of the session, participants were advised that their participation in the study was voluntary and that they had the right to end the session at any time. Before the test began, participants were provided with a binder that contained instructions on how to complete the tasks. Tasks were displayed at the top of the screen as the test progressed. A think-aloud strategy was used throughout the session, and audio, video, on-screen activity, and inputs from the keyboard and mouse were recorded using Morae Recorder [67,68]. We prompted participants to talk aloud and describe what they were doing while completing the tasks. Participants completed the tasks without the assistance of the facilitator, who would have intervened only in the event of technical difficulties; none occurred. After participants completed the tasks, they completed the SUS and a demographic survey. The test session concluded with a debriefing session during which participants were asked to comment on the specific tasks they found difficult. Interesting observations noted by the facilitator were discussed as well.

Data Analysis

We confirmed that there were no EHR interface changes between data collection in Rounds 1 and 2 that might have influenced the study and tasks. The recorded sessions were examined using Morae Manager, a video analysis software program that was used to calculate performance measures, with markers identifying difficulties and errors the participants encountered. Video analysis took approximately 1.5 hours for each 20-minute recorded session. The first step in the analysis was to review the recorded sessions and label any tasks that were unmarked during data collection. The second step was to divide each of the 19 tasks into smaller subtasks to determine the task success rate and identify subtle usability challenges that we might otherwise have failed to notice. Geometric means were calculated for the performance measures with 95% confidence intervals [69]. Performance measures have a strong tendency to be positively skewed, so geometric means were used because they provide the most accurate measure for sample sizes of less than 25 [70]. The learnability comparison was a between-groups comparison of 2 within-group comparisons: we measured the change within the novice and expert physician groups 3 and 7 months after EHR training, and then compared that change between the 2 groups. Time-on-task, mouse clicks, and mouse movements were measured while users interacted with the EHR system, and these performance measures were calculated automatically by the Morae Manager usability analysis software. Percent task success was calculated by creating subtasks out of each task and then identifying each subtask the physician completed successfully. For example, for Task 8 (Include review of systems) the subtasks created to calculate the task success rate were the following: (1) go to review of systems, (2) add “no chills,” (3) add “no fever,” (4) add “fatigue,” (5) add “decreased activity,” (6) add “dry mouth,” (7) add “no dyspnea,” and (8) add “no edema.”
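As a sketch of the summary statistics (our illustration, not the authors' exact computation), the function below computes a geometric mean with a 95% confidence interval by forming a t-based interval on the log scale and back-transforming. The time-on-task values in the usage example are made up.

```python
import numpy as np
from scipy import stats

def geometric_mean_ci(values, confidence=0.95):
    """Return (geometric mean, CI lower, CI upper) for a sample of positive values."""
    logs = np.log(np.asarray(values, dtype=float))   # work on the log scale
    n = logs.size
    mean_log = logs.mean()
    sem_log = logs.std(ddof=1) / np.sqrt(n)          # standard error of the log values
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    lower = mean_log - t_crit * sem_log
    upper = mean_log + t_crit * sem_log
    # Back-transform to the original scale (eg, seconds for time-on-task)
    return np.exp(mean_log), np.exp(lower), np.exp(upper)

# Hypothetical usage with made-up time-on-task values (seconds) for one task:
gm, lo, hi = geometric_mean_ci([28, 41, 35, 52, 90, 33, 47, 61])
print(f"geometric mean {gm:.0f}s, 95% CI {lo:.0f}-{hi:.0f}")
```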


Percent Task Success Rate

Geometric mean values of percent task success rates were compared between the 2 physician groups across 2 rounds (Table 2) [69]. There was a 6-percentage-point increase in the novice physician group’s percent task success rate between Round 1 (92%, 95% CI 87%-99%) and Round 2 (98%, 95% CI 95%-100%). Similarly, expert physicians had a 7-percentage-point increase in percent task success rate between Round 1 (90%, 95% CI 83%-97%) and Round 2 (97%, 95% CI 93%-100%). When mean task success rates were compared between the physician groups, the novice physician group had a higher task success rate than the expert physician group did for both rounds.

Table 2. Geometric mean values of performance measures for novice and expert physicians across two rounds.
Performance Measure           Round 1 Novice   Round 2 Novice   Round 1 Expert   Round 2 Expert
Task Success (%)              92               98               90               97
Time-on-Task (s)              44               40               39               31
Mouse Clicks (n)              8                7                8                5
Mouse Movements (pixels)      9247             7992             7325             6329

In Round 1, the novice physician group achieved a higher success rate than expert physicians for 7 tasks (2, 8, 11, 13, and 15-17), the same success rate for 7 tasks (1, 3-6, 9, and 19), and a lower success rate for 5 tasks (7, 10, 12, 14, and 18). In Round 2, the novice physician group achieved a higher success rate for 3 tasks (8, 9, and 14), the same success rate for 15 tasks (1-7, 10-13, and 16-19), and a lower success rate for Task 15.

Both the novice (6%) and expert (2%) physician groups had very low task success rates for Task 17 (Add a medication to your favorites list) in Round 1. However, in Round 2 all physicians in both groups successfully completed Task 17 (100%).

Time-on-Task

Geometric mean values of time-on-task (TOT) were compared between the 2 physician groups across the 2 rounds (Table 2). There was a 10% decrease in novice physicians’ time-on-task between Round 1 (44s, 95% CI 32-62) and Round 2 (40s, 95% CI 27-59). There was a 21% decrease in the expert physician group’s time-on-task between Round 1 (39s, 95% CI 29-51) and Round 2 (31s, 95% CI 22-42). When time-on-task was compared between the physician groups, the overall novice physician group spent more time compared to the expert physician group for both rounds.

In Round 1, the novice physician group spent less time than expert physicians completing 4 out of 19 tasks (5, 11, 12, and 13), the same amount of time completing Task 17, and more time completing 14 tasks (1-4, 6-10, 14-16, 18, and 19). In Round 2, the novice physician group spent less time completing 4 out of 19 tasks (2, 6, 11, and 12), the same time completing Task 18, and more time completing 14 tasks (1, 3-5, 7-10, 13-17, and 19).

In Round 1, both physician groups had the longest time spent on Task 7 (Document new medication allergy). However, in Round 2, time on Task 7 decreased by 52% for the expert physician group (87s to 50s) and 29% for the novice physician group (133s to 95s).

Mouse Clicks

Geometric mean values of mouse clicks were compared between the 2 physician groups across the 2 rounds (Table 2). There was a 20% decrease in the novice physician group’s mouse clicks between Round 1 (8 clicks, 95% CI 6-13) and Round 2 (7 clicks, 95% CI 4-12). Similarly, there was a 39% decrease in the expert physician group’s mouse clicks between Round 1 (8 clicks, 95% CI 5-11) and Round 2 (5 clicks, 95% CI 1-10). When mouse clicks were compared between the physician groups, the novice physician group completed tasks with slightly more mouse clicks than expert physicians did in both rounds.

In Round 1, the novice physician group had fewer mouse clicks than the expert physician group for 7 tasks (4, 6, 8, 11, 13, 17, and 19), more mouse clicks for 9 tasks (1, 5, 7, 9, 10, 12, and 14-16), and a comparable number of clicks for 3 tasks (2, 3, and 18). In Round 2, novice physicians used fewer mouse clicks when completing 6 tasks (8, 10, 11, 13, 18, and 19), the same number of clicks when completing 5 tasks (4-6, 12, and 15), and more clicks when completing 8 tasks (1-3, 7, 9, 14, 16, and 17).

In Round 1, both novice and expert physicians had the highest number of mouse clicks of all tasks when completing Task 17 (Add a medication to your favorites list). However, in Round 2, the task with the highest number of mouse clicks by expert physicians changed from Task 17 to Task 15 (Place order for basic metabolic panel [BMP]), and novice physicians had the highest number of mouse clicks when completing Task 14 (Place order for chest X-ray) in Round 2, compared to Task 17 in Round 1.

Mouse Movements

Geometric mean values of mouse movements (the length of the navigation path to complete a given task) were compared between the 2 physician groups across the 2 rounds. There was a 14% decrease in novice physicians’ mouse movements between Round 1 (9247 pixels, 95% CI 6404-13,353) and Round 2 (7992 pixels, 95% CI 5350-11,936). There was also a 14% decrease in expert physicians’ mouse movements between Round 1 (7325 pixels, 95% CI 5237-10,247) and Round 2 (6329 pixels, 95% CI 4299-9317). When mouse movements were compared between the physician groups, the novice physician group showed slightly longer mouse movements than expert physicians did across the 19 tasks in both rounds.

In Round 1, the novice physicians showed longer mouse movements for 15 of 19 tasks (1-4, 6-12, 14-16, and 18), and shorter mouse movements for 4 tasks (5, 13, 17, and 19). In Round 2, novice physicians used shorter mouse movements in completing 8 out of 19 tasks (2, 4, 6, 11-13, 18, and 19) and used longer movements completing 11 tasks (1, 3, 5, 7-10, and 14-17).

In Round 1, novice physicians had the longest mouse movements of all tasks when completing Task 17 (Add a medication to your favorites list), and expert physicians had the longest mouse movements when completing Task 13 (Include diagnosis). In Round 2, the task with the longest mouse movements by novice physicians was Task 14 (Place order for chest X-ray), compared to Task 17 in Round 1, and expert physicians had the longest mouse movements when completing Task 15 (Place order for basic metabolic panel [BMP]).

System Usability Scale

In Round 1, 5 of the 6 expert physicians and all 10 novice physicians completed the SUS. In Round 2, all 4 expert physicians and all 8 novice physicians completed the SUS. The SUS results showed that novice physicians rated the system’s usability at a mean of 69 (high marginal) in Round 1 compared with 68 (high marginal) in Round 2. Experts rated the system’s usability at a mean of 74 (acceptable) in both rounds. One novice physician and 2 expert physicians had a score of 50 (not acceptable) or below. These results may indicate that expert users who have achieved a certain level of proficiency may be more confident using the EHR than novice users. The debriefing sessions confirmed the overall learnability test experience but did not reveal specific learnability issues. After analyzing the recordings, however, it was clear that physicians encountered some difficulties when completing the tasks.

Usability Themes

Because of space limitations, a second manuscript with a full review of the usability themes is in preparation. Subtask analysis was instrumental in identifying multiple usability concerns. We identified 31 common and 4 unique usability issues between the 2 physician groups across the 2 rounds. Themes were created by analyzing and combining usability issues into overarching themes [71]. Five themes emerged during analysis: 6 usability issues related to inconsistencies, 9 user interface issues, 6 structured data issues, 7 ambiguous terminology issues, and 6 workaround issues. An example of an inconsistency issue was the illogical ordering of lists in Task 17 (Add a medication to your favorites list), such that the medication list could not be sorted alphabetically when imported into a patient’s visit note. This may frustrate physicians when they cannot discern how to sort the medication list. An example of a user interface issue was the long note template list physicians had to navigate when they completed Task 1 (Start a new note). A lengthy list of templates had to be chosen from when creating a note, and the templates were not specialty specific, so searching through the template list and choosing the desired template was time consuming and caused extra cognitive load. An example of a structured data issue was a lack of distinction between columns in Task 9 (Include family history). In this task, the blue or white columns (indicating negative vs positive findings) for family members were unlabeled, such that physicians were unsure how to mark a family history item “positive.” An example of an ambiguous terminology issue was multiple fields having the same functionality. When completing Tasks 14 and 15, there was no clear difference between the drop-down menu labeled “Requested Start Date,” the drop-down menu labeled “Requested Time Frame,” and the radio button labeled “Future Order.” This could cause future lab tests not to be ordered properly, such that lab tests may not be completed at the right time and patients may have to get the test redone, adding cost for the patient. An example of a workaround was unawareness of functions. When completing Task 13 (Include diagnosis), physicians were not able to move “hypertension” from the problem list to the current diagnosis list, so they re-added “hypertension” as a new problem, which took additional time.


Principal Findings

Our findings show mixed changes in performance measures across individual tasks; overall, however, both groups improved across the 2 rounds, and expert physicians were more proficient than novice physicians on all 4 performance measures.

Relation to Prior Studies

In our study, differences were found between expert and novice physicians’ performance measures across Round 1 and Round 2. A study by Kjeldskov, Skov, and Stage [72] identifying usability problems encountered by novice and expert nurses examined whether usability issues disappeared over time. In that study, 7 nurses completed 14 and 30 hours of training prior to the first evaluation, which included 7 tasks and subtasks centered on the core purpose of the system. The same nurses completed the same 7 tasks after 15 months of daily use of the system. All expert subjects solved all 7 tasks either completely or partially, while only 2 novice subjects solved all tasks (P=.01). No statistically significant difference between novice and expert nurses was found when considering only completely solved tasks (P=.08). Our study did not report P values because of the small sample size; however, we observed overall improvement in performance measures for both the novice and expert physician groups across the 2 rounds. The contradictory results from our study and the study by Kjeldskov, Skov, and Stage suggest that further research is necessary to draw more definite conclusions about task success between novice and expert physicians.

Alternatively, a study by Lewis et al compared the performance of novice health sciences students with a predictive model of skilled human performance when performing EHR tasks using a touchscreen. Novice participants were adults with no prior experience using an EHR touchscreen interface; the skilled-performance predictions were generated using CogTool. CogTool is an open-source user-interface prototyping tool that uses a human performance model to automatically evaluate how efficiently a skilled user can complete a task. Participants completed 31 tasks commonly performed by nurses and patient registration clerks in an Anti-Retroviral Therapy clinic. The mean novice performance time for all tasks was significantly slower than the predictions for skilled use (P<.00) [73]. Although novice EHR users completed touchscreen tasks more slowly than a skilled user, they were able to execute some tasks at a skilled level within the first hour of system use. Our study also found that novice physicians completed tasks more slowly than expert physicians, although they decreased their time-on-task by 10% in Round 2. However, our study differs from that of Lewis et al in that we used human expert physicians instead of a predictive model, which gives a more realistic comparison between novice and expert users. The common findings between this study and those of Lewis et al suggest that physicians become more efficient, in terms of task completion time, as EHR experience increases, because physicians become familiar with the system.

Physicians’ perceptions of the usability of a system may be related to learnability; that is, physicians may find the system more user-friendly (usability) if the amount of time and effort needed to develop proficiency with the system is shorter (learnability). In our study, the SUS, which measures overall usability, showed only a slight change in novice physicians’ ratings of the system’s usability (Round 1: 69 [high marginal]; Round 2: 68 [high marginal]) and no change in expert physicians’ ratings (74 [acceptable] in both rounds). In a study by Haarbrandt et al, primary care providers gave a SUS rating of 70.7 (marginally acceptable) when asked about their perception of a health information exchange system, which was similar to the physicians’ scores in our study. Expert and novice participants found the graphical user interface easy to use; however, they only rated the system as acceptable [74]. Kim et al [62] measured usability gaps between novice and expert emergency department (ED) nurses and found that novice ED nurses were not satisfied with their system (43 [unacceptable] to 55 [low marginal]), whereas expert nurses were satisfied (75 to 81 [good to excellent]), which differed from our study’s result. The varying SUS scores from these studies suggest that physicians with more experience using an EHR are more likely to give the system higher SUS scores. Contrary to the assumption that the SUS produces reliable scores, there is mixed evidence that SUS scores clearly associate with performance measures. For example, Kim et al showed very low correlations between performance measures and SUS scores, indicating that care needs to be taken when interpreting usability data and that comprehensive rather than single measures are necessary.

Study Limitations

This study had several methodological limitations. First, it involved a small sample of physicians; therefore, the sample size may not have been sufficient to obtain statistical significance when reporting quantitative results of learnability. However, the sample size was sufficient for identifying usability issues experienced by participants when interacting with the EHR system. This study was conducted at a health care institution where only 1 EHR system was used and may not be representative of all primary care practices. As such, the study’s findings may have limited generalizability to other ambulatory clinic settings, owing to different types of EHR applications and physician practice settings. However, the EHR platform employed in this study is one of the top commercial products with significant market share. Based on data from the Office of the National Coordinator for Health Information Technology, Cerner was reported as the primary EHR vendor by 20% of hospitals participating in the CMS EHR incentive programs, making it the second most implemented EHR in hospitals [75]. Second, a limited number of clinical tasks were used in the learnability test, and these may not have encompassed other tasks completed by physicians in other clinical scenarios. However, the tasks included realistic inpatient and outpatient tasks that resident physicians would usually complete in a clinical scenario. Third, this study was conducted in a laboratory setting, which did not take into account common distractions physicians may experience during a clinical encounter. Nonetheless, laboratory-based learnability tests allow for flexibility in questioning and give room for more in-depth probing. Direct observation in laboratory learnability testing also allows for interaction between participant and facilitator. Although this study had some methodological limitations, we believe it to be a well-controlled study that used a rigorous evaluation method with validated performance measures that are widely accepted in HIT evaluation. In addition, the clear instructions allowed physician participants to complete the required tasks without excessive cognitive load.

Conclusion

Overall, this study identified varying degrees of learnability gaps between expert and novice physician groups that may impede the use of EHRs. Our results suggest that longer experience with an EHR may not be equivalent to being an expert or proficient in its use. The physicians’ interactions with the EHR can be communicated to EHR vendors, to assist in improving the user interface for effective use by physicians. This study may also assist in the design of EHR education and training programs by highlighting the areas (ie, tasks and related features and functionalities) of difficulty that resident physicians face. Resident physicians in primary care are offered extensive EHR training by their institutions. However, it is a great challenge for busy physicians to find time for training. Furthermore, it is an arduous task attempting to meet the needs of users and provide hands-on, on-site support [7], and evidence-based guidelines for training resident physicians effectively on how to use EHRs for patient care are scarce [76]. Thus, our study may also serve as a guideline to potentially improve EHR training programs, which may increase physicians’ performance, by improving competency when using the system.

Conflicts of Interest

None declared.

  1. Centers for Medicare & Medicaid Services. Medicare and Medicaid EHR Incentive Program Basics   URL: https://www.cms.gov/regulations-and-guidance/legislation/ehrincentiveprograms/basics.html [accessed 2015-10-14] [WebCite Cache]
  2. Mollon B, Chong J, Holbrook AM, Sung M, Thabane L, Foster G. Features predicting the success of computerized decision support for prescribing: A systematic review of randomized controlled trials. BMC Med Inform Decis Mak 2009;9:11 [FREE Full text] [CrossRef] [Medline]
  3. Garrett P, Seidman J. Electronic Health & Medical Records 2011. EMR vs EHR - What is the Difference?   URL: http://www.healthit.gov/buzz-blog/electronic-health-and-medical-records/emr-vs-ehr-difference/ [accessed 2015-10-14] [WebCite Cache]
  4. Hsiao C, Hing E. Use and characteristics of electronic health record systems among office-based physician practices: United States, 2001-2012. NCHS Data Brief 2012 Dec(111):1-8 [FREE Full text] [Medline]
  5. Yoon-Flannery K, Zandieh SO, Kuperman GJ, Langsam DJ, Hyman D, Kaushal R. A qualitative analysis of an electronic health record (EHR) implementation in an academic ambulatory setting. Inform Prim Care 2008;16(4):277-284. [Medline]
  6. Carr DM. A team approach to EHR implementation and maintenance. Nurs Manage. Suppl 5 2004;35:24-16. [Medline]
  7. Terry AL, Thorpe CF, Giles G, Brown JB, Harris SB, Reid GJ, et al. Implementing electronic health records: Key factors in primary care. Can Fam Physician 2008 May;54(5):730-736 [FREE Full text] [Medline]
  8. Lorenzi N, Kouroubali A, Detmer D, Bloomrosen M. How to successfully select and implement electronic health records (EHR) in small ambulatory practice settings. BMC Med Inform Decis Mak 2009;9:15 [FREE Full text] [CrossRef] [Medline]
  9. Whittaker AA, Aufdenkamp M, Tinley S. Barriers and facilitators to electronic documentation in a rural hospital. J Nurs Scholarsh 2009;41(3):293-300. [CrossRef] [Medline]
  10. Morrison I, Smith R. Hamster health care. BMJ 2000;321(7276):1541-1542 [FREE Full text] [Medline]
  11. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, et al. Systematic review: Impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 2006 May 16;144(10):742-752. [Medline]
  12. Miller RH, West C, Brown TM, Sim I, Ganchoff C. The value of electronic health records in solo or small group practices. Health Aff (Millwood) 2005;24(5):1127-1137 [FREE Full text] [CrossRef] [Medline]
  13. Shekelle PG, Morton SC, Keeler EB. Costs and benefits of health information technology. Evid Rep Technol Assess (Full Rep) 2006 Apr(132):1-71. [Medline]
  14. Menachemi N, Collum TH. Benefits and drawbacks of electronic health record systems. Risk Manag Healthc Policy 2011;4:47-55 [FREE Full text] [CrossRef] [Medline]
  15. Goldzweig C, Towfigh A, Maglione M, Shekelle P. Costs and benefits of health information technology: New trends from the literature. Health Aff (Millwood) 2009;28(2):w282-w293 [FREE Full text] [CrossRef] [Medline]
  16. Grabenbauer L, Fraser R, McClay J, Woelfl N, Thompson CB, Cambell J, et al. Adoption of electronic health records: A qualitative study of academic and private physicians and health administrators. Appl Clin Inform 2011;2(2):165-176 [FREE Full text] [CrossRef] [Medline]
  17. Reynolds CJ, Wyatt JC. Open source, open standards, and health care information systems. J Med Internet Res 2011;13(1):e24 [FREE Full text] [CrossRef] [Medline]
  18. Zahabi M, Kaber DB, Swangnetr M. Usability and safety in electronic medical records interface design: A review of recent literature and guideline formulation. Hum Factors 2015 Aug;57(5):805-834. [CrossRef] [Medline]
  19. Sheikh A, Sood HS, Bates DW. Leveraging health information technology to achieve the “triple aim” of healthcare reform. J Am Med Inform Assoc 2015 Jul;22(4):849-856. [CrossRef] [Medline]
  20. Ratwani R, Fairbanks R, Hettinger A, Benda N. Electronic health record usability: Analysis of the user-centered design processes of eleven electronic health record vendors. J Am Med Inform Assoc 2015 Nov;22(6):1179-1182. [CrossRef] [Medline]
  21. Hübner U. What are complex eHealth innovations and how do you measure them? Position paper. Methods Inf Med 2015;54(4):319-327. [CrossRef] [Medline]
  22. Berlin J. Better bridges, better systems. Tex Med 2015 Sep;111(9):39-43. [Medline]
  23. ISO/TC 159/SC 4 - Ergonomics of human-system interaction. Geneva, Switzerland: International Organization for Standardization; 1998. ISO 9241-11: Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs): Part 11: Guidance on Usability   URL: http://www.iso.org/iso/catalogue_detail.htm?csnumber=16883 [accessed 2015-12-22] [WebCite Cache]
  24. Clarke MA, Steege LM, Moore JL, Koopman RJ, Belden JL, Kim MS. Determining primary care physician information needs to inform ambulatory visit note display. Appl Clin Inform 2014;5(1):169-190 [FREE Full text] [CrossRef] [Medline]
  25. Love JS, Wright A, Simon SR, Jenter CA, Soran CS, Volk LA, et al. Are physicians' perceptions of healthcare quality and practice satisfaction affected by errors associated with electronic health record use? J Am Med Inform Assoc 2012;19(4):610-614 [FREE Full text] [CrossRef] [Medline]
  26. McLane S, Turley JP. One size does not fit all: EHR clinical summary design requirements for nurses. 2012 Presented at: NI 2012: 11th International Congress on Nursing Informatics; June 23-27, 2012; Montreal, QC p. 283.
  27. Viitanen J, Hyppönen H, Lääveri T, Vänskä J, Reponen J, Winblad I. National questionnaire study on clinical ICT systems proofs: Physicians suffer from poor usability. Int J Med Inform 2011 Oct;80(10):708-725. [CrossRef] [Medline]
  28. Clarke MA, Steege LM, Moore JL, Belden JL, Koopman RJ, Kim MS. Addressing human computer interaction issues of electronic health record in clinical encounters. In: Design, User Experience, and Usability. Health, Learning, Playing, Cultural, and Cross-Cultural User Experience.: Springer Berlin Heidelberg; 2013 Jan 01 Presented at: Second International Conference on Design, User Experience, and Usability; July 21-26, 2013; Las Vegas, NV p. 381-390. [CrossRef]
  29. Clarke MA, Steege LM, Moore JL, Koopman RJ, Belden JL, Kim MS. Determining primary care physician information needs to inform ambulatory visit note display. Appl Clin Inform 2014;5(1):169-190 [FREE Full text] [CrossRef] [Medline]
  30. ISO/IEC JTC 1/SC 7 - Software and systems engineering. Geneva, Switzerland: International Organization for Standardization; 2011. ISO/IEC 25010:2011 Systems and software engineering -- Systems and software Quality Requirements and Evaluation (SQuaRE) -- System and software quality models   URL: http://www.iso.org/iso/catalogue_detail.htm?csnumber=35733 [accessed 2015-12-22] [WebCite Cache]
  31. Tullis T, Albert W. Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics. Amsterdam: Elsevier/Morgan Kaufmann; 2008.
  32. ISO/IEC 25010:2011 Systems and software engineering -- Systems and software Quality Requirements and Evaluation (SQuaRE) -- System and software quality models. Geneva: International Organization for Standardization; 2011:34.
  33. Elliott G, Jones E, Barker P. A grounded theory approach to modelling learnability of hypermedia authoring tools. Interacting Comput 2002 Oct;14(5):547-574. [CrossRef]
  34. Nielsen J. Usability engineering. San Francisco, Calif: Morgan Kaufmann Publishers; 1993.
  35. Whiteside J, Jones S, Levy P, Wixon D. User performance with command, menu, and iconic interfaces. 1985 Presented at: SIGCHI Conference on Human Factors in Computing Systems; April 1, 1985; San Francisco, CA p. 185-191. [CrossRef]
  36. Lin H, Choong Y, Salvendy G. A proposed index of usability: A method for comparing the relative usability of different software systems. Behav Inf Technol 1997;16:267-277 [FREE Full text]
  37. Edsall R, Adler K. The 2012 EHR User Satisfaction Survey: Responses from 3,088 family physicians. Fam Pract Manag 2012;19(6):23-30 [FREE Full text] [Medline]
  38. Anderson L, Stafford C. The “big bang” implementation: Not for the faint of heart. Comput Nurs 2002;20(1):14-20; quiz 20. [Medline]
  39. Ash J, Bates D. Factors and forces affecting EHR system adoption: Report of a 2004 ACMI discussion. J Am Med Inform Assoc 2005;12(1):8-12 [FREE Full text] [CrossRef] [Medline]
  40. Brokel JM, Harrison MI. Redesigning care processes using an electronic health record: A system's experience. Jt Comm J Qual Patient Saf 2009 Feb;35(2):82-92. [Medline]
  41. McAlearney AS, Song PH, Robbins J, Hirsch A, Jorina M, Kowalczyk N, et al. Moving from good to great in ambulatory electronic health record implementation. J Healthc Qual 2010;32(5):41-50. [CrossRef] [Medline]
  42. TEKsystems. Hanover, MD; 2013 Jun 10. EHR Implementation Survey: Proactive Consideration and Planning Lead to Successful EHR Implementation   URL: http://www.teksystems.com/resources/thought-leadership/it-industry-trends/ehr-implementation-survey [accessed 2015-12-21] [WebCite Cache]
  43. Yan H, Gardner R, Baier R. Beyond the focus group: Understanding physicians' barriers to electronic medical records. Jt Comm J Qual Patient Saf 2012 Apr;38(4):184-191. [Medline]
  44. Aaronson JW, Murphy-Cullen CL, Chop WM, Frey RD. Electronic medical records: The family practice resident perspective. Fam Med 2001 Feb;33(2):128-132. [Medline]
  45. Keenan CR, Nguyen HH, Srinivasan M. Electronic medical records and their impact on resident and medical student education. Acad Psychiatry 2006;30(6):522-527. [CrossRef] [Medline]
  46. Terry AL, Giles G, Brown JB, Thind A, Stewart M. Adoption of electronic medical records in family practice: The providers' perspective. Fam Med 2009;41(7):508-512 [FREE Full text] [Medline]
  47. Hammoud MM, Margo K, Christner JG, Fisher J, Fischer SH, Pangaro LN. Opportunities and challenges in integrating electronic health records into undergraduate medical education: A national survey of clerkship directors. Teach Learn Med 2012;24(3):219-224. [CrossRef] [Medline]
  48. Saleem JJ, Patterson ES, Militello L, Anders S, Falciglia M, Wissman JA, et al. Impact of clinical reminder redesign on learnability, efficiency, usability, and workload for ambulatory clinic nurses. J Am Med Inform Assoc 2007;14(5):632-640 [FREE Full text] [CrossRef] [Medline]
  49. Patterson ES, Doebbeling BN, Fung CH, Militello L, Anders S, Asch SM. Identifying barriers to the effective use of clinical reminders: Bootstrapping multiple methods. J Biomed Inform 2005 Jun;38(3):189-199 [FREE Full text] [CrossRef] [Medline]
  50. Yen P, Bakken S. Review of health information technology usability study methodologies. J Am Med Inform Assoc 2012;19(3):413-422 [FREE Full text] [CrossRef] [Medline]
  51. Khajouei R, Jaspers MW. The impact of CPOE medication systems' design aspects on usability, workflow and medication orders: A systematic review. Methods Inf Med 2010;49(1):3-19. [CrossRef] [Medline]
  52. Chan J, Shojania KG, Easty AC, Etchells EE. Usability evaluation of order sets in a computerised provider order entry system. BMJ Qual Saf 2011 Nov;20(11):932-940. [CrossRef] [Medline]
  53. Harrington L, Porch L, Acosta K, Wilkens K. Realizing electronic medical record benefits: An easy-to-do usability study. J Nurs Adm 2011;41(7-8):331-335. [CrossRef] [Medline]
  54. Li AC, Kannry JL, Kushniruk A, Chrimes D, McGinn TG, Edonyabo D, et al. Integrating usability testing and think-aloud protocol analysis with “near-live” clinical simulations in evaluating clinical decision support. Int J Med Inform 2012 Nov;81(11):761-772. [CrossRef] [Medline]
  55. Brooke J. SUS-A quick and dirty usability scale. In: Jordan PW, Thomas B, McClelland IL, Weerdmeester B, editors. Usability Evaluation in Industry. Boca Raton, FL: CRC Press; 1996:189-194.
  56. MU 2011 Annual Report. MU Healthcare 2011   URL: http://bluetoad.com/publication/?i=106794&pre=1 [accessed 2012-04-15] [WebCite Cache]
  57. University Of Missouri Health Care Achieves Highest Level of Electronic Medical Record Adoption. Columbia, MO: University of Missouri Health System; 2013. University of Missouri Health Care News Releases   URL: http://www.muhealth.org/news/2012/university-of-missouri-health-care-achieves-highest-level-of-ele/ [WebCite Cache]
  58. HIMSS Analytics. U.S. EMR Adoption Model Trends   URL: https://app.himssanalytics.org/docs/HA_EMRAM_Overview_ENG.pdf [accessed 2015-10-14] [WebCite Cache]
  59. Clarke MA, Belden JL, Kim MS. Determining differences in user performance between expert and novice primary care doctors when using an electronic health record (EHR). J Eval Clin Pract 2014 Dec;20(6):1153-1161. [CrossRef] [Medline]
  60. Battaglia M. Convenience sampling. In: Lavrakas P, editor. Encyclopedia of Survey Research Methods. Thousand Oaks, CA: Sage Publications, Inc; 2008.
  61. Barnum C. The ‘magic number 5’: Is it enough for web testing? Inf Des J 2003 Jan 01;11(3):160-170. [CrossRef]
  62. Kim MS, Shapiro JS, Genes N, Aguilar MV, Mohrer D, Baumlin K, et al. A pilot study on usability analysis of emergency department information system by nurses. Appl Clin Inform 2012;3(1):135-153 [FREE Full text] [CrossRef] [Medline]
  63. Lowry S, Quinn M, Ramaiah M, Schumacher R, Patterson E, North R, et al. National Institute of Standards and Technology (NIST). 2012. Technical Evaluation, Testing, and Validation of the Usability of Electronic Health Records   URL: http://www.nist.gov/customcf/get_pdf.cfm?pub_id=909701 [accessed 2015-12-22] [WebCite Cache]
  64. Khajouei R, Peek N, Wierenga PC, Kersten MJ, Jaspers MW. Effect of predefined order sets and usability problems on efficiency of computerized medication ordering. Int J Med Inform 2010 Oct;79(10):690-698. [CrossRef] [Medline]
  65. Koopman RJ, Kochendorfer KM, Moore JL, Mehr DR, Wakefield DS, Yadamsuren B, et al. A diabetes dashboard and physician efficiency and accuracy in accessing data needed for high-quality diabetes care. Ann Fam Med 2011;9(5):398-405 [FREE Full text] [CrossRef] [Medline]
  66. Lewis J, Sauro J. The Factor Structure of the System Usability Scale. In: Human Centered Design: First International Conference.: Springer Berlin Heidelberg; 2009 Presented at: HCI International 2009; July 19-24, 2009; San Diego, CA p. 94-103. [CrossRef]
  67. Van Someren MW, Barnard Y, Sandberg JAC. The Think Aloud Method: A Practical Guide to Modelling Cognitive Processes. London: Academic Press; 1994.
  68. Press A, McCullagh L, Khan S, Schachter A, Pardo S, McGinn T. Usability testing of a complex clinical decision support tool in the emergency department: Lessons learned. JMIR Human Factors 2015 Sep 10;2(2):e14 [FREE Full text] [CrossRef]
  69. Cordes R. The effects of running fewer subjects on time‐on‐task measures. Int J Hum-Comput Interact 1993 Oct;5(4):393-403 [FREE Full text] [CrossRef]
  70. Sauro J, Lewis J. Average Task Times in Usability Tests: What to Report? 2010 Presented at: Conference in Human Factors in Computing Systems (CHI 2010); April 10-15, 2010; Atlanta, GA p. 2347-2350.
  71. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006 Jan;3(2):77-101. [CrossRef]
  72. Kjeldskov J, Skov MB, Stage J. A longitudinal study of usability in health care: Does time heal? Int J Med Inform 2010 Jun;79(6):e135-e143. [CrossRef] [Medline]
  73. Lewis ZL, Douglas GP, Monaco V, Crowley RS. Touchscreen task efficiency and learnability in an electronic medical record at the point-of-care. Stud Health Technol Inform 2010;160(Pt 1):101-105. [Medline]
  74. Haarbrandt B, Schwartze J, Gusew N, Seidel C, Haux R. Primary care providers' acceptance of health information exchange utilizing IHE XDS. Stud Health Technol Inform 2013;190:106-108. [Medline]
  75. Electronic Health Record Vendors Reported by Hospitals Participating in the CMS EHR Incentive Programs. Health IT Quick-Stat #29. 2015.   URL: http://dashboard.healthit.gov/quickstats/pages/FIG-Vendors-of-EHRs-to-Participating-Hospitals.php [accessed 2015-10-01] [WebCite Cache]
  76. Peled JU, Sagher O, Morrow JB, Dobbie AE. Do electronic health records help or hinder medical education? PLoS Med 2009 May 5;6(5):e1000069 [FREE Full text] [CrossRef] [Medline]


CMS: Centers for Medicare & Medicaid Services
CPOE: computerized physician order entry systems
ED: emergency department
EHRs: electronic health records
FCM: Department of Family and Community Medicine
HIMSS: Healthcare Information and Management Systems Society
HIT: health information technology
HITECH: Health Information Technology for Economic and Clinical Health (act)
IM: Department of Internal Medicine
MU: meaningful use
NCHS: National Center for Health Statistics
ONC: Office of the National Coordinator for Health Information Technology
SUS: system usability scale
TOT: time-on-task
UMHS: University of Missouri Health System


Edited by G Eysenbach; submitted 01.05.15; peer-reviewed by T van Mierlo, J Avis, G Stiglic; comments to author 19.08.15; revised version received 14.10.15; accepted 05.11.15; published 15.02.16

Copyright

©Martina A Clarke, Jeffery L Belden, Min Soon Kim. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 15.02.2016.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on http://humanfactors.jmir.org, as well as this copyright and license information must be included.