
Journal Description

JMIR Human Factors (JHF) is a PubMed-indexed, peer-reviewed sister journal of JMIR, a leading open access eHealth journal (Impact Factor 2017: 4.671).
JMIR Human Factors is a multidisciplinary journal with contributions from medical researchers, engineers, and social scientists.

JMIR Human Factors focuses on understanding how human behavior and thinking influence and shape the design of health care interventions and technologies, and how that design can be evaluated and improved to make health care interventions and technologies usable, safe, and effective. JHF aspires to lead health care toward a culture of testing and safety by promoting and publishing reports that rigorously evaluate the usability and human factors aspects of health care, and by encouraging the development of, and debate on, new methods in this emerging field.

All articles are professionally copyedited and typeset, ready for indexing in PubMed/PubMed Central. Possible contributions include usability studies and heuristic evaluations; studies concerning ergonomics and error prevention; design studies for medical devices and health care systems/workflows; work on enhancing teamwork through human factors-based teamwork training; measurement of nontechnical staff skills such as leadership, communication, situational awareness, and teamwork; and health care policies and procedures to reduce errors and increase safety. Reviews, viewpoint papers, and tutorials are as welcome as original research.

Editorial Board members are currently being recruited; please contact us if you are interested.


Recent Articles:

  • Source: Image created by the Authors; Copyright: The Authors; License: Creative Commons Attribution (CC-BY).

    High-Fidelity Prototyping for Mobile Electronic Data Collection Forms Through Design and User Evaluation


    Background: Mobile data collection systems are often difficult to use for nontechnical or novice users. This can be attributed to the fact that developers of such tools do not adequately involve end users in the design and development of product features and functions, which often creates interaction challenges. Objective: The main objective of this study was to assess guidelines for form design using high-fidelity prototypes developed based on end-user preferences. We also sought to investigate the association between the results from the System Usability Scale (SUS) and those from the Study Tailored Evaluation Questionnaire (STEQ) after the evaluation. In addition, we sought to recommend some practical guidelines for implementing the group testing approach during mobile form design, particularly in low-resource settings. Methods: We developed a Web-based high-fidelity prototype using Axure RP 8. A total of 30 research assistants (RAs) evaluated this prototype in March 2018 by completing the given tasks during 1 common session. An STEQ comprising 13 affirmative statements and the commonly used and validated SUS were administered to evaluate the usability and user experience after interaction with the prototype. The STEQ evaluation was summarized using frequencies in an Excel sheet, while the SUS scores were calculated based on whether each statement was positive (user selection minus 1) or negative (5 minus user selection). The score contributions were then summed and multiplied by 2.5 to give each participant's overall form usability score. Results: Of the RAs, 80% (24/30) appreciated the form progress indication, found the form navigation easy, and were satisfied with the error messages. The results gave an average SUS score of 70.4 (SD 11.7), above the recommended average of 68, meaning that the usability of the prototype was above average. The scores from the STEQ, on the other hand, indicated a 70% (21/30) level of agreement with the affirmative evaluation statements. The results from the 2 instruments indicated a fair level of user satisfaction and a strong positive association, as shown by a Pearson correlation of .623 (P<.01). Conclusions: A high-fidelity prototype was used to give the users experience with a product they would likely use in their work. Group testing was done because of the scarcity of resources, such as the costs and time involved, especially in low-income countries. If embraced, this approach could help assess the needs of diverse user groups. With proper preparation and the right infrastructure at an affordable cost, usability testing could lead to the development of highly usable forms. The study thus makes recommendations on practical guidelines for implementing the group testing approach during mobile form design, particularly in low-resource settings.
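    The SUS arithmetic described above can be sketched in a few lines. This is a generic illustration of standard SUS scoring, not the study's data; the responses below are hypothetical.

```python
# Standard SUS scoring, as described in the abstract: odd-numbered items are
# positively worded (contribution = response - 1), even-numbered items are
# negatively worded (contribution = 5 - response); the summed contributions
# are multiplied by 2.5 to give a 0-100 score.

def sus_score(responses):
    """responses: the 10 SUS Likert answers (1-5), in item order."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:            # items 1, 3, 5, 7, 9: positive wording
            total += r - 1
        else:                     # items 2, 4, 6, 8, 10: negative wording
            total += 5 - r
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best case -> 100.0
print(sus_score([3] * 10))                        # all neutral -> 50.0
```

    A score above 68 is conventionally read as above-average usability, which is how the study interprets its mean of 70.4.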

  • Emergency physician and addiction counselor speaking with a patient about opioid use. Source: Image created by the Authors; Copyright: The Authors; License: Creative Commons Attribution (CC-BY).

    Computerized Clinical Decision Support System for Emergency Department–Initiated Buprenorphine for Opioid Use Disorder: User-Centered Design


    Background: Emergency departments (EDs) frequently care for individuals with opioid use disorder (OUD). Buprenorphine (BUP) is an effective treatment option for patients with OUD that can safely be initiated in the ED. At present, BUP is rarely initiated as a part of routine ED care. Clinical decision support (CDS) could accelerate adoption of ED-initiated BUP into routine emergency care. Objective: This study aimed to design and formatively evaluate a user-centered decision support tool for ED initiation of BUP for patients with OUD. Methods: User-centered design with iterative prototype development was used. Initial observations and interviews identified workflows and information needs. The design team and key stakeholders reviewed prototype designs to ensure accuracy. A total of 5 prototypes were evaluated and iteratively refined based on input from 26 attending and resident physicians. Results: Early feedback identified concerns with the initial CDS design: an alert with several screens. The timing of the alert led to quick dismissal without using the tool. User feedback on subsequent iterations informed the development of a flexible tool to support clinicians with varied levels of experience with the intervention by providing both one-click options for direct activation of care pathways and user-activated support for critical decision points. The final design resolved challenging navigation issues through targeted placement, color, and design of the decision support modules and care pathways. In final testing, users expressed that the tool could be easily learned without training and was reasonable for use during routine emergency care. Conclusions: A user-centered design process helped designers to better understand users’ needs for a Web-based clinical decision tool to support ED initiation of BUP for OUD. 
The process identified varying needs across user experience and familiarity with the protocol, leading to a flexible design supporting both direct care pathways and user-initiated decision support.

  • EMR System Use in Sub-Saharan Africa. Source: Image created by the Author; Copyright: Michael Kavuma; License: Creative Commons Attribution + Noncommercial (CC-BY-NC).

    The Usability of Electronic Medical Record Systems Implemented in Sub-Saharan Africa: A Literature Review of the Evidence


    Background: Electronic medical record (EMR) systems hold the exciting promise of accurate, real-time access to patient health care data and great potential to improve the quality of patient care through decision support to clinicians. This review evaluated the usability of EMR systems implemented in sub-Saharan Africa based on a usability evaluation criterion developed by the Healthcare Information and Management Systems Society (HIMSS). Objective: This review aimed to evaluate EMR system implementations in sub-Saharan Africa against a well-defined evaluation methodology and assess their usability based on a defined set of metrics. In addition, the review aimed to identify the extent to which usability has been an enabling or hindering factor in the implementation of EMR systems in sub-Saharan Africa. Methods: Five key metrics for evaluating EMR system usability were developed based on the methodology proposed by HIMSS: efficiency, effectiveness, ease of learning, cognitive load, and user satisfaction. A 5-point rating system was developed for the review, and the EMR systems in the 19 reviewed publications were scored against it: 5 points per metric for a system identified as excellent, 4 points for good, 3 points for fair, 2 points for poor, and 1 point for bad. Each of the 5 key metrics carried a maximum weighted score of 20. Percentage scores for each metric were then computed from the weighted scores, from which the final overall usability score was derived. Results: Among possible contributors to the usability of the implemented EMR systems, ease of learning obtained the highest percentage score at 71% (SD 1.09), followed by cognitive load at 68% (SD 1.62). Effectiveness followed closely at 67% (SD 1.47), and efficiency came fourth at 64% (SD 1.04). User satisfaction came last at 63% (SD 1.70). The overall usability score across all systems was 66%. Conclusions: The usability of EMR systems implemented in sub-Saharan Africa has been good, with ease of learning possibly the biggest positive contributor to this rating. Cognitive load and effectiveness have also possibly influenced usability positively, whereas efficiency and user satisfaction have perhaps contributed least.
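    The review's scoring scheme reduces to simple averaging. The sketch below is a hedged reconstruction: each system is rated 1 (bad) to 5 (excellent) per metric, each metric's mean rating is expressed as a percentage of the 5-point maximum, and the overall score averages the metric percentages. The exact weighting the authors used is not fully specified, and the ratings below are illustrative, not the review's data.

```python
# Hypothetical reconstruction of the HIMSS-based rating scheme described in
# the abstract; the ratings are made up for illustration.

METRICS = ["efficiency", "effectiveness", "ease_of_learning",
           "cognitive_load", "user_satisfaction"]

def metric_percentages(ratings):
    """ratings: dict mapping metric -> list of 1-5 scores across systems."""
    return {m: 100 * sum(s) / (5 * len(s)) for m, s in ratings.items()}

def overall_usability(ratings):
    """Average of the per-metric percentage scores."""
    pct = metric_percentages(ratings)
    return sum(pct.values()) / len(pct)

example = {m: [4, 3, 3, 4] for m in METRICS}  # four hypothetical systems
print(metric_percentages(example)["efficiency"])  # 70.0
print(overall_usability(example))                 # 70.0
```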

  • Source: Flickr; Copyright: US Department of Agriculture (Bob Nichols); License: Creative Commons Attribution (CC-BY).

    Improving Provider Adoption With Adaptive Clinical Decision Support Surveillance: An Observational Study


    Background: Successful clinical decision support (CDS) tools can help use evidence-based medicine to effectively improve patient outcomes. However, the impact of these tools has been limited by low provider adoption due to overtriggering, leading to alert fatigue. We developed a tracking mechanism for monitoring trigger (percent of total visits for which the tool triggers) and adoption (percent of completed tools) rates of a complex CDS tool based on the Wells criteria for pulmonary embolism (PE). Objective: We aimed to monitor and evaluate the adoption and trigger rates of the tool and assess whether ongoing tool modifications would improve adoption rates. Methods: As part of a larger clinical trial, a CDS tool was developed using the Wells criteria to calculate pretest probability for PE at 2 tertiary centers’ emergency departments (EDs). The tool had multiple triggers: any order for D-dimer, computed tomography (CT) of the chest with intravenous contrast, CT pulmonary angiography (CTPA), ventilation-perfusion scan, or lower extremity Doppler ultrasound. A tracking dashboard was developed using Tableau to monitor real-time trigger and adoption rates. Based on initial low provider adoption rates of the tool, we conducted small focus groups with key ED providers to elicit barriers to tool use. We identified overtriggering of the tool for non-PE-related evaluations and inability to order CT testing for intermediate-risk patients. Thus, the tool was modified to allow CT testing for the intermediate-risk group and not to trigger for CT chest with intravenous contrast orders. A dialogue box, “Are you considering PE for this patient?” was added before the tool triggered to account for CTPAs ordered for aortic dissection evaluation. Results: In the ED of tertiary center 1, 95,295 patients visited during the academic year. 
The tool triggered for an average of 509 patients per month (average trigger rate 2036/30,234, 6.73%) before the modifications, reducing to 423 patients per month (average trigger rate 1629/31,361, 5.22%). In the ED of tertiary center 2, 88,956 patients visited during the academic year, with the tool triggering for about 473 patients per month (average trigger rate 1892/29,706, 6.37%) before the modifications and for about 400 per month (average trigger rate 1534/30,006, 5.12%) afterward. The modifications resulted in a significant 4.5- and 3-fold increase in provider adoption rates in tertiary centers 1 and 2, respectively. The modifications increased the average monthly adoption rate from 23.20/360 (6.5%) tools to 81.60/280.20 (29.3%) tools and 46.60/318.80 (14.7%) tools to 111.20/263.40 (42.6%) tools in centers 1 and 2, respectively. Conclusions: Close postimplementation monitoring of CDS tools may help improve provider adoption. Adaptive modifications based on user feedback may increase targeted CDS with lower trigger rates, reducing alert fatigue and increasing provider adoption. Iterative improvements and a postimplementation monitoring dashboard can significantly improve adoption rates.
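    The reported rates are plain ratios: trigger rate = triggering visits / total visits, and adoption rate = completed tools / triggered tools. Because the abstract reports averages of monthly rates, pooled ratios computed from the totals can differ slightly from the quoted percentages; the check below uses center 1's figures.

```python
# Recomputing center 1's rates from the counts quoted in the abstract.
# Small discrepancies from the reported percentages are expected where the
# authors averaged monthly rates rather than pooling the counts.

def pct(numerator, denominator):
    """Percentage, rounded to 2 decimal places."""
    return round(100 * numerator / denominator, 2)

print(pct(2036, 30234))     # 6.73  -> matches the reported 6.73% trigger rate
print(pct(23.20, 360))      # 6.44  -> reported as a 6.5% adoption rate
print(pct(81.60, 280.20))   # 29.12 -> reported as 29.3% after modifications
```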

  • Source: Wikimedia Commons; Copyright: Kgbo; License: Creative Commons Attribution + ShareAlike (CC-BY-SA).

    Nurses’ Perceptions of a Care Plan Information Technology Solution With Hundreds of Clinical Practice Guidelines in Adult Intensive Care Units: Survey Study


    Background: The integration of clinical practice guidelines (CPGs) into nursing care plan and documentation systems aims to translate evidence into practice, improve the safety and quality of care, and standardize care processes. Objective: This study aimed to evaluate nurses' perceptions of the usability of a nursing care plan solution that includes 234 CPGs. Methods: A total of 100 nurses from 4 adult intensive care units (ICUs) responded to a survey measuring their perceptions of system usability. The survey included 37 rated items and 3 open-ended questions. Results: Nurses' perceptions were favorable, with more than 60.0% (60/100) in agreement on 12 features of the system, and negative to moderate, with 20.0% (20/100) to 59.0% (59/100) in agreement on 19 features. The majority of the nurses (80.0%-90.0%; 80/100 to 90/100) agreed on 4 safety features missing from the system. More than half of the nurses believed they would benefit from refresher classes on system use. Overall satisfaction with the system was just above average (54/100, 54.0%). Common positive themes from the narrative data were related to the system serving as a reminder for complete documentation and individualizing patient care. Common negative aspects were related to duplicate charting, difficulty locating CPGs, missing unit-specific CPGs, irrelevance of information, and lack of perceived system value for patient outcomes. No relationship was found between years of system use or ICU experience and satisfaction with the system (P=.10 to P=.25). Conclusions: Care plan systems in ICUs should be easy to navigate; support efficient documentation; present relevant, unit-specific, and easy-to-find information; endorse interdisciplinary communication; and improve the safety and quality of care.

  • Source: Pixabay; Copyright: engin akyurt; License: Licensed by JMIR.

    A Qualitative Study of the Theory Behind the Chairs: Balancing Lean-Accelerated Patient Flow With the Need for Privacy and Confidentiality in an Emergency...


    Background: Many emergency departments (EDs) have used the Lean methodology to guide the restructuring of their practice environments and patient care processes. Despite research cautioning that the layout and design of treatment areas can increase patients' vulnerability to privacy breaches, evaluations of Lean interventions have ignored the potential impact of these changes on patients' informational and physical privacy. If professional regulatory organizations are going to require that nurses and physicians interact with their patients privately and confidentially, we need to examine the degree to which their practice environment supports them in doing so. Objective: This study explored how a Lean intervention affected the ability of emergency medicine physicians and nurses to optimize conditions of privacy and confidentiality for patients under their care. Methods: From July to December 2017, semistructured interviews were iteratively conducted with health care professionals practicing emergency medicine at a single teaching hospital in Ontario, Canada. The hospital has 1000 beds, and approximately 128,000 patients visit its 2 EDs annually. In response to poor wait times, the hospital's 2 EDs underwent a Lean redesign in 2013. As the interviews proceeded, information from their transcripts was first coded into topics and then organized into themes. Data collection continued to theoretical sufficiency. Results: Overall, 15 nurses and 5 physicians were interviewed. A major component of the Lean intervention was the construction of a three-zone front cell at both sites. Each zone was outfitted with a set of chairs in an open-concept configuration. Although professionals perceived value in having the chairs in theory, in practice the chairs served multiple, and often competing, uses by patients, family members, and visitors.
In an attempt to work around limitations they encountered and keep patients flowing, professionals often needed to move a patient out from a front chair and actively search for another location that better protected individuals’ informational and physical privacy. Conclusions: To our knowledge, this is the first qualitative study of the impact of a Lean intervention on patient privacy and confidentiality. The physical configuration of the front cell often intensified the clinical work of professionals because they needed to actively search for spaces better affording privacy and confidentiality for patient encounters. These searches likely increased clinical time and added to these patients’ length of stay. We advocate that the physical structure and configuration of the front cell should be re-examined under the lens of Lean’s principle of value-added activities. Future exploration of the perspectives of patients, family members, and visitors regarding the relative importance of privacy and confidentiality during emergency care is warranted.

  • Source: Freepik; Copyright: Freepik; License: Licensed by JMIR.

    Supporting Older Adults in Exercising With a Tablet: A Usability Study


    Background: For older adults, physical activity is vital for maintaining their health and ability to live independently. Home-based programs can help them achieve the recommended exercise frequency. An application for a tablet computer was developed to support older adults in following a personal training program. It featured goal setting, tailoring, progress tracking, and remote feedback. Objective: In line with the Medical Research Council Framework, which prescribes thorough testing before evaluating efficacy with a randomized controlled trial, the aim of this study was to assess the usability of a tablet-based app designed to support older adults in doing exercises at home. Methods: A total of 15 older adults, aged 69 to 99 years, participated in a usability study that utilized a mixed-methods approach. In a laboratory setting, novice users were asked to complete a series of tasks while verbalizing their ongoing thoughts. The tasks ranged from looking up information about exercises and executing them to tailoring a weekly exercise schedule. Performance errors and time-on-task were calculated as proxies of effective and efficient usage. Overall satisfaction was assessed with a posttest interview. All responses were analyzed independently by 2 researchers. Results: The participants spent 13 to 85 seconds on each task. Moreover, 79% (11/14) to 100% (14/14) of participants completed the basic tasks with either no help or after having received 1 hint. For expert tasks, they needed a few more hints. During the posttest interview, the participants made 3 times more positive remarks about the app than negative remarks. Conclusions: The app developed to support older adults in doing exercises at home is usable by the target audience. First-time users were able to perform basic tasks in an effective and efficient manner. In general, they were satisfied with the app.
Tasks that were associated with behavior execution and evaluation were performed with ease. Complex tasks such as tailoring a personal training schedule needed more effort. Learning effects, usefulness, and long-term satisfaction will be investigated through longitudinal follow-up studies.

  • Source: Pexels; Copyright: Brett Sayles; License: Licensed by JMIR.

    Designing Online Interventions in Consideration of Young People’s Concepts of Well-Being: Exploratory Qualitative Study


    Background: A key challenge in developing online well-being interventions for young people is to ensure that they are based on theory and reflect adolescent concepts of well-being. Objective: This exploratory qualitative study aimed to understand young people's concepts of well-being in Australia. Methods: Data were collected via workshops with 37 young people, aged 15 to 21 years inclusive, at five rural and metropolitan sites. Inductive, data-driven coding was then used to analyze transcripts and artifacts (ie, written or image data). Results: Young adults' conceptions of well-being were diverse, personally contextualized, and shaped by ongoing individual experiences related to physical and mental health, along with ecological accounts acknowledging the role of family, community, and social factors. Key emerging themes were (1) positive emotions and enjoyable activities, (2) physical wellness, (3) relationships and social connectedness, (4) autonomy and control, (5) goals and purpose, (6) being engaged and challenged, and (7) self-esteem and confidence. Participants had no difficulty describing actions that led to positive well-being; however, they only considered their own well-being at times of stress. Conclusions: In this study, young people appeared to think about their well-being mostly at times of stress. The challenge for online interventions is to encourage young people to monitor well-being before it becomes compromised. A more proactive focus that links the overall concept of well-being to the everyday, concrete actions and activities young people engage in, and that encourages the creation of routine good habits, may lead to better outcomes from online well-being interventions.

  • Source: Unsplash; Copyright: Rawpixel; License: Licensed by the authors.

    Advancing Cardiac Surgery Case Planning and Case Review Conferences Using Virtual Reality in Medical Libraries: Evaluation of the Usability of Two Virtual...


    Background: Care providers and surgeons prepare for cardiac surgery using case conferences to review, discuss, and run through the surgical procedure. Surgeons visualize a patient’s anatomy to decide the right surgical approach using magnetic resonance imaging and echocardiograms in a presurgical case planning session. Previous studies have shown that surgical errors can be reduced through the effective use of immersive virtual reality (VR) to visualize patient anatomy. However, inconsistent user interfaces, delegation of view control, and insufficient depth information cause user disorientation and interaction difficulties in using VR apps for case planning. Objective: The objective of the study was to evaluate and compare the usability of 2 commercially available VR apps—Bosc (Pyrus Medical systems) and Medical Holodeck (Nooon Web & IT GmbH)—using the Vive VR headset (HTC Corporation) to evaluate ease of use, physician attitudes toward VR technology, and viability for presurgical case planning. The role of medical libraries in advancing case planning is also explored. Methods: After screening a convenience sample of surgeons, fellows, and residents, ethnographic interviews were conducted to understand physician attitudes and experience with VR. Gaps in current case planning methods were also examined. We ran a usability study, employing a concurrent think-aloud protocol. To evaluate user satisfaction, we used the system usability scale (SUS) and the National Aeronautics and Space Administration-Task Load Index (NASA-TLX). A poststudy questionnaire was used to evaluate the VR experience and explore the role of medical libraries in advancing presurgical case planning. Semistructured interview data were analyzed using content analysis with feedback categorization. Results: Participants were residents, fellows, and surgeons from the University of Washington with a mean age of 41.5 (SD 11.67) years. 
A total of 8 surgeons participated in the usability study, 3 of whom had prior exposure to VR. Users found Medical Holodeck easier to use than Bosc. The mean adjusted NASA-TLX score for Medical Holodeck was 62.71 (SD 18.25) versus 40.87 (SD 13.90) for Bosc. Neither app reached the mean SUS score of 68 required for an app to be considered usable, though Medical Holodeck (66.25 [SD 12.87]) scored a higher mean SUS than Bosc (37.19 [SD 22.41]). One user rated Bosc as usable, whereas 3 users rated Medical Holodeck as usable. Conclusions: Interviews highlighted the importance of precise anatomical conceptualization in presurgical case planning and teaching, identifying it as the top reason for modifying a surgical procedure. The importance of standardized user interaction features, such as labeling, is justified. The study also sheds light on the new roles medical librarians can play in curating VR content and promoting interdisciplinary collaboration.

  • Mobile app for reporting medication errors anonymously (montage). Source: The Authors / Placeit; Copyright: JMIR Publications; License: Creative Commons Attribution (CC-BY).

    Usability Testing of a Mobile App to Report Medication Errors Anonymously: Mixed-Methods Approach


    Background: Reporting of medication errors is one of the essential mechanisms for identifying risky health care systems and practices that lead to medication errors. Unreported medication errors are a real issue; one identified cause is a burdensome medication error reporting system. An anonymous and user-friendly mobile app for reporting medication errors could be an alternative method of reporting medication errors in busy health care settings. Objective: The objective of this paper is to report usability testing of the Medication Error Reporting App (MERA), a mobile app for reporting medication errors anonymously. Methods: Quantitative and qualitative methods were employed, involving 45 different testers (pharmacists, doctors, and nurses) from a large tertiary hospital in Malaysia. Quantitative data were retrieved using task performance and ratings of MERA, and qualitative data were retrieved through focus group discussions. Three sessions, with 15 testers each, were conducted from January to March 2018. Results: The majority of testers were pharmacists (23/45, 51%) and female (35/45, 78%), and the mean age was 36 (SD 9) years. A total of 135 complete reports were successfully submitted by the testers (three reports per tester), and 79.2% (107/135) of the reports were correct. There was significant improvement in mean System Usability Scale scores across the sessions of the development process (P<.001), and the mean time to report medication errors using the app did not differ significantly between sessions (P=.70), with an overall mean of 6.7 (SD 2.4) minutes. Testers found the app easy to use, but doctors and nurses were unfamiliar with some of the terms used, especially the medication process at which the error occurred and the type of error. Although testers agreed the app could be used for reporting in the future, they were apprehensive about the security, validation, and potential abuse of the feedback feature in the app. Conclusions: MERA can be used by various health care personnel to report medication errors easily, and it has the capacity to provide feedback on reporting. However, education on medication error reporting should be provided to doctors and nurses in Malaysia, and the security of the app needs to be established to boost reporting by this method.

  • Source: Image created by the Authors; Copyright: Astrid Torbjørnsen; License: Creative Commons Attribution (CC-BY).

    The Service User Technology Acceptability Questionnaire: Psychometric Evaluation of the Norwegian Version


    Background: When developing a mobile health app, users’ perception of the technology should preferably be evaluated. However, few standardized and validated questionnaires measuring acceptability are available. Objective: The aim of this study was to assess the validity of the Norwegian version of the Service User Technology Acceptability Questionnaire (SUTAQ). Methods: Persons with type 2 diabetes randomized to the intervention groups of the RENEWING HEALTH study used a diabetes diary app. At the one-year follow-up, participants in the intervention groups (n=75) completed the self-reported instrument SUTAQ to measure the acceptability of the equipment. We conducted confirmatory factor analysis for evaluating the fit of the original five-factor structure of the SUTAQ. Results: We confirmed only 2 of the original 5 factors of the SUTAQ, perceived benefit and care personnel concerns. Conclusions: The original five-factor structure of the SUTAQ was not confirmed in the Norwegian study, indicating that more research is needed to tailor the questionnaire to better reflect the Norwegian setting. However, a small sample size prevented us from drawing firm conclusions about the translated questionnaire.

  • Untitled. Source: Adobe Stock; Copyright: angellodeco; License: Licensed by the authors.

    An Optimization Program to Help Practices Assess Data Quality and Workflow With Their Electronic Medical Records: Observational Study


    Background: Electronic medical record (EMR) adoption among Canadian primary care physicians continues to grow. In Ontario, >80% of primary care providers now use EMRs. Adopting an EMR does not guarantee better practice management or patient care, however; EMR users must understand how to use the system effectively before they can realize its full benefit. OntarioMD developed an EMR Practice Enhancement Program (EPEP) to overcome the challenges clinicians and staff face in finding time to learn a new technology or workflow. EPEP deploys practice consultants to work with clinicians on-site to harness their EMR toward practice management and patient care goals. Objective: This paper aims to illustrate the application of the EPEP approach to addressing practice-level factors that impede or enhance the effective use of EMRs to support patient outcomes and population health. The secondary objective is to draw attention to the potential impact of this practice-level work on population health (system level), as priority population health indicators are addressed by quality improvement work at the practice level. Methods: EPEP's team of practice consultants works with clinicians to identify gaps in their knowledge of EMR functionality, analyze workflow, review EMR data quality, and develop action plans with achievable tasks. Consultants establish baselines for data quality in key clinical indicators and for EMR proficiency using OntarioMD-developed maturity assessment tools; data quality and maturity were then reassessed postengagement and compared with these baselines. Three examples illustrating the EPEP approach and results are presented to illuminate strengths, limitations, and implications for further analysis. In each example, a different consultant was responsible for engaging with the practice to conduct the EPEP method. No standard timeframe exists for an EPEP engagement, as requirements differ from practice to practice, and EPEP tailors its approach and timeframe according to the needs of the practice.
Results: After presenting the findings of the initial data quality review, workflow analysis, and gap analysis to each practice, consultants worked with the practices to develop action plans and begin implementing recommendations. Each practice had different objectives in engaging the EPEP; here, we compared improvements across measures that were common priorities among all 3 practices: screening (colorectal, cervical, and breast), diabetes diagnosis, and documentation of smoking status. Consultants collected postengagement data at intervals (approximately 6, 12, and 18 months) to assess the sustainability of the changes. The postengagement assessment showed data quality improvements across several measures, and new confidence in their data enabled practices to implement more advanced functions (such as toolbars) and targeted initiatives for subpopulations of patients. Conclusions: Applying on-site support to analyze gaps in EMR knowledge and use, identify efficiencies to improve workflow, and correct data quality issues can dramatically improve a practice’s EMR proficiency, allowing practices to derive greater benefit from their EMR and, consequently, improve patient care.


Latest Submissions Open for Peer-Review:

View All Open Peer Review Articles
  • Understanding Health Behavior Technology Engagement: Pathway to Measuring Digital Behavior Change Interventions (DBCI)

    Date Submitted: Mar 19, 2019

    Open Peer Review Period: Mar 21, 2019 - Apr 4, 2019

    Researchers and practitioners of Digital Behavior Change Interventions (DBCI) use varying and often incongruent definitions of the term “engagement,” leading to a lack of precision in DBCI measurement and evaluation. The objective of this paper is to propose discrete definitions for various types of user engagement and to explain why precision in measuring these engagement types is integral to ensuring intervention effectiveness. Additionally, this paper presents a framework and practical steps for how engagement can be measured in practice and used to inform DBCI design and evaluation. The key purpose of a DBCI is to influence change in a target health behavior of a user, which may ultimately improve a health outcome. Using available literature and practice-based knowledge of DBCI, the framework conceptualizes two primary categories of engagement that must be measured in DBCI: health behavior engagement, referred to as “Big E,” and DBCI engagement, referred to as “Little e.” DBCI engagement is further bifurcated into two subclasses: 1) user interactions with features of the intervention designed to encourage frequent use (e.g., simple login, games, social interactions) and make the user experience appealing; and 2) user interactions with behavior change intervention components (i.e., behavior change techniques) that influence determinants of health behavior and, subsequently, the health behavior itself. Achievement of Big E in an intervention delivered via digital means is contingent upon Little e: if users do not interact with DBCI features and enjoy the user experience, exposure to behavior change intervention components will be limited and less likely to influence the behavioral determinants that lead to Big E.
Big E is also dependent upon the quality and relevance of the behavior change intervention components within the solution. Therefore, user interactions and behavior change intervention components together create Little e, which in turn is designed to improve Big E. The proposed framework includes a model to support measurement of DBCI that describes these categories of engagement and details how features of Little e produce Big E. The framework can be applied to DBCI supporting various health behaviors and outcomes, and it can be used to identify gaps in intervention efficacy and effectiveness.