Original Paper
Abstract
Background: The availability of patient outcomes–based feedback is limited in episodic care environments such as the emergency department. Emergency medicine (EM) clinicians set care trajectories for a majority of hospitalized patients and provide definitive care to an even larger number of those discharged into the community. EM clinicians are often unaware of the short- and long-term health outcomes of patients and how their actions may have contributed. Despite large volumes of patients and data, outcomes-driven learning that targets individual clinician experiences is meager. Integrated electronic health record (EHR) systems provide opportunity, but they do not have readily available functionality intended for outcomes-based learning.
Objective: This study sought to unlock insights from routinely collected EHR data through the development of an individualizable patient outcomes feedback platform for EM clinicians. Here, we describe the iterative development of this platform, Linking Outcomes Of Patients (LOOP), under a human-centered design framework, including structured feedback obtained from its use.
Methods: This multimodal study, consisting of human-centered design studios, surveys (24 physicians), interviews (11 physicians), and a LOOP application usability evaluation (12 EM physicians for ≥30 minutes each), was performed between August 2019 and February 2021. The study spanned 3 phases: (1) conceptual development under a human-centered design framework, (2) LOOP technical platform development, and (3) usability evaluation comparing pre- and post-LOOP feedback gathering practices in the EHR.
Results: An initial human-centered design studio and EM clinician surveys revealed common themes of disconnect between EM clinicians and their patients after the encounter. Fundamental postencounter outcomes of death (15/24, 63% of respondents identified as useful), escalation of care (20/24, 83%), and return to ED (16/24, 67%) were determined to be high yield for demonstrating proof of concept in our LOOP application. The studio output guided the design and development of LOOP, which integrated physicians throughout the design and content iteration. A final LOOP prototype enabled usability evaluation and iterative refinement prior to launch. Usability evaluation compared with status quo (ie, pre-LOOP) feedback gathering practices demonstrated a shift across all outcomes from “not easy” to “very easy” to obtain and from “not confident” to “very confident” in estimating outcomes after using LOOP. On a scale from 0 (unlikely) to 10 (most likely), users were very likely (9.5) to recommend LOOP to a colleague.
Conclusions: This study demonstrates the potential for human-centered design of a patient outcomes–driven feedback platform for individual EM providers. We have outlined a framework for working alongside clinicians with a multidisciplinary team to develop and test a tool that augments their clinical experience and enables closed-loop learning.
doi:10.2196/30130
Introduction
Proficiency in the practice of medicine is achieved over years of rigorous training and is maintained through a lifelong commitment to practice-based learning and improvement [
]. This is among the core competencies described by the Accreditation Council for Graduate Medical Education (ACGME) for all physician trainees and has been integrated into the Maintenance of Certification program by the American Board of Medical Specialties (ABMS) [ , ]. To engage in practice-based learning, clinicians must continuously assess the effectiveness of their own clinical practice [ ] and actively work to make improvements at the individual and system levels [ ]. Optimal experiential learning requires robust feedback mechanisms, so that the learner understands the real-world consequences of the actions taken and is provided the opportunity to correct course in response to suboptimal outcomes [ ]. This type of closed-loop learning is a core component of deliberate practice and has been central to medical education since the time of William Osler [ ]. However, the availability of outcomes-based feedback is highly variable across practice settings and medical specialties [ , ], and clinicians are often unaware of how their actions affect the short- and long-term health of patients [ ].

Emergency medicine (EM) clinicians play a pivotal role in the health care system, yet they practice in an environment that makes outcomes-driven learning particularly challenging. Emergency departments (EDs) are a point of entry for acutely injured and critically ill patients and are a primary source of health care for vulnerable populations [
]. In 2016, ED encounters exceeded 145 million in the United States alone [ ]. While making high-stakes decisions under excessive cognitive load and time pressure [ - ], EM clinicians set care trajectories for the majority of hospitalized patients and provide definitive care to an even larger number who are discharged into the community [ ]. Additionally, because of the episodic nature of emergency care, longitudinal doctor-patient relationships do not exist in the ED. Currently, there is no mechanism for systematically delivering information about post-ED encounter patient outcomes to emergency clinicians [ , ]. Emergency clinicians recognize the value of practice-based and outcomes-informed experiential learning, and they are interested in more robust postencounter feedback systems. Such systems have shown potential to decrease adverse ED events, improve team function, and further clinician professional development [ , , , ]. Currently, postencounter telephone calls to patients of interest and case conferences (eg, morbidity and mortality) are the most common methods used to elicit postencounter patient outcome feedback in EM [ ]. Over the past decade, the widespread adoption of electronic health records (EHRs) has generated continuously growing pools of clinical data, including data related to post-ED encounter patient outcomes, with the potential to inform clinician practice and facilitate practice-based learning [ , ]. To date, this potential has not been realized.

In this study, we sought to unlock insights from routinely collected EHR data through the development of an individualizable patient outcomes feedback platform. Here, we describe the iterative development of this platform, Linking Outcomes of Patients (LOOP), under a human-centered design (HCD) framework [
], executed through a unique collaboration between the Johns Hopkins Schools of Medicine and Engineering and the Maryland Institute College of Art (MICA). We also report the functionality and usability of LOOP as assessed by direct measurement of end user clinicians’ knowledge, skills, and attitudes as they interacted with and used LOOP.

Methods
Research Team Structure and Study Population
This mixed methods study was performed between August 2019 and February 2021 via a collaborative effort between the Center for Social Design at MICA and the Center for Data Science in Emergency Medicine (CDEM) at the Johns Hopkins University School of Medicine in Baltimore, Maryland. Our core study team comprised EM physicians, design researchers, human factors engineers, software engineers, and data analysts. This project was conducted in 3 phases: (1) conceptual development under an HCD framework, (2) technical platform development, and (3) usability evaluation. Study sites included a large quaternary academic medical center ED and a community hospital ED; all study participants were EM clinicians who practiced at one of these sites.
Ethics Approval
This study was approved by our institutional review board (IRB00185078) after expedited review.
Phase 1: Conceptual Development Under an HCD Framework
In the fall of 2019, we conducted an intensive 16-week HCD studio [
] focused on addressing the delivery of feedback to EM clinicians related to post-ED encounter patient outcomes. MICA design faculty (BS and CM) led the studio in partnership with CDEM researchers. As shown in , our HCD studio consisted of 6 stages: frame and plan, research, synthesize, ideate, prototype, and iterate and implement.

First, our multidisciplinary research team conducted cocreation sessions to fine-tune the scope and objectives of the project. We then engaged in design research with end users via a mixed methods approach that included observations, semistructured interviews, and surveys of EM clinicians. Observations focused on EM clinician interactions with existing technologies, and semistructured interviews focused on current patient outcome follow-up and practice-based learning behaviors (
). Surveys were used to assess current patient follow-up practices, identify important patient outcomes for post-ED encounter follow-up, and define ideal timeframes for outcome reporting ( ). Thematic analysis of research output was used to synthesize “personas,” “design principles,” and “opportunity areas” that would guide future HCD studio activities.

Question category | Examples |
Rapport building | What does a perfect day at the emergency department look like for you? |
Stress | What things currently complicate your decision making in the emergency department? (individual, institutional, environmental) Optional follow up: Could you relate that back to feedback? |
Feedback | How do you find out what happens to your patients after they leave the emergency department? Optional Follow up: Why do you think you don’t receive the kind of information/feedback you would like? |
Design | What format would you prefer to receive feedback in on what happens to respiratory infection patients and why? (Prompt: ask about how they prefer seeing info, ie, text, charts, images, video, voice messages) |
Diagnostic/decision making | Walk me through how you develop a set of possible diagnoses and how you differentiate between them to come to a final diagnosis. |
Next, end users were reincorporated into the HCD studio through an in-person multistakeholder ideation session that also included engineers and data analysts. Previously defined design principles and opportunity areas were employed as guides, and potential solutions were generated using numerous ideation techniques (eg, brainstorming, brain dumping, sketching, storyboarding). The ideation session output was subsequently used by designers to develop a series of solution prototypes. Prototypes progressed through several levels of fidelity as designers collaborated with clinical and technical research team members to ensure that solutions were compatible with existing health information technology. Finally, designers and end users convened to develop an idealized version of the feedback tool, which incorporated features generated during the ideation and prototyping stages. User personas were used as a touch point to build out dimensionality for the ideal feedback tool. Attendees of these sessions also created an experience map, which described the envisioned end user’s journey with the tool through 4 phases: learning about the tool, using the tool, owning the tool, and implementing the tool into practice.
Phase 2: Technical Platform Development
Patient Outcome Selection
An initial set of patient outcomes was selected for inclusion in the initial version of our tool based on end user preferences (discovered through design research) and feasibility of data collection and standardization. Outcomes selected were in-hospital mortality, escalation of level of care (eg, floor to intensive care unit) within 24 hours of ED departure, and return to the ED within 72 hours of discharge. These outcomes are commonly used quality measures [
, ], applicable to the entire ED population, and reliably recorded in the EHR.

Data Capture, Normalization, and Delivery
CDEM data analysts and engineers developed a data processing pipeline to facilitate the population of post-ED encounter patient outcomes within our feedback platform. In brief, raw EHR data were first populated within a Health Insurance Portability and Accountability Act–compliant research computing environment, where a normalization code was developed to identify and label post-ED encounter patient-level events and to attribute patient encounters to individual EM clinicians by using native EHR data fields and timestamps. The normalization code was validated via chart review by an EM clinician and data analyst. Following validation, the normalization code was applied to daily extracts of EHR data within a reporting server, and aggregated views of normalized data were pushed to a presentation server that would power our patient feedback platform.
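To make the outcome-labeling step concrete, the sketch below shows one way the three selected outcomes could be derived from a daily EHR extract. This is a minimal illustration only, not our production normalization code: the one-row-per-encounter layout and every column name (eg, ed_departure_time, icu_transfer_time, attending_id) are hypothetical stand-ins for native EHR fields.

```r
# Illustrative R sketch of outcome labeling from a daily EHR extract.
# All column names are hypothetical; real EHR field names will differ.
library(dplyr)

label_outcomes <- function(encounters) {
  encounters %>%
    # Order each patient's ED visits so the next arrival can be found
    arrange(patient_id, ed_arrival_time) %>%
    group_by(patient_id) %>%
    mutate(next_ed_arrival = lead(ed_arrival_time)) %>%
    ungroup() %>%
    mutate(
      # Return to the ED within 72 hours of ED discharge
      return_72h = disposition == "discharge" &
        !is.na(next_ed_arrival) &
        as.numeric(difftime(next_ed_arrival, ed_departure_time,
                            units = "hours")) <= 72,
      # Escalation of level of care within 24 hours of ED departure
      escalation_24h = disposition == "admit" &
        !is.na(icu_transfer_time) &
        as.numeric(difftime(icu_transfer_time, ed_departure_time,
                            units = "hours")) <= 24,
      # In-hospital mortality during the index hospitalization
      died_in_hospital = !is.na(inpatient_death_time)
    ) %>%
    # Attribute each encounter to the treating EM clinician for feedback
    select(encounter_id, attending_id, return_72h, escalation_24h,
           died_in_hospital)
}
```

A pipeline of this shape can be rerun against each daily extract, with the aggregated results pushed to the presentation server as described above.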
Digital Feedback Platform Design
Working alongside EM clinicians (JH, CK, and SP) and software engineers (AGS and MT) in a co-design process, our lead design researcher (CM) transitioned the final analog prototype into a fully operational digital platform. Early digital prototypes were built using dummy data and design software that facilitated realistic end-user interaction and rapid iterative improvement (Agile methodology) [
, ]. User needs, including minimum information requirements, optimal outcome definitions, and data labels and data filtering capacities, were further defined within this environment. A final digital mock-up was then used as a template to develop LOOP within the clinician-facing analytic software used by our institution to ensure that our final product adhered to the principles and requirements generated through co-design and would function within our local information technology infrastructure. Throughout this process, our design researcher also worked closely with the data analysts and software engineers who led the data processing pipeline development to ensure interoperability of the entire LOOP system.

Phase 3: Usability Evaluation
Usability evaluation was performed by 3 members of the research team: a frontline EM physician (CK), a design researcher (CM), and a health systems engineer with a clinical background (ATS). All participants were practicing clinicians at an ED study site; members of our study team were excluded from participation in the usability evaluation. Participants were selected using a purposive stratified sample of EM clinicians with representation from multiple end user groups based on trainee status (eg, year of residency), clinical experience (eg, trainee, advanced practice provider, attending), and gender. After verbal consent was obtained, usability evaluation was performed virtually using an audio-video platform, and the sessions were recorded.
Pre-LOOP Survey
Participants first completed a brief anonymous electronic survey. The survey assessed demographics, current method(s) used for patient outcomes review, baseline knowledge of patient outcomes, and attitudes about their current method(s) of review and outcomes (
). To assess knowledge as opposed to memory, participants were advised prior to the usability evaluation to bring any materials they use to track their patient outcomes and encouraged to refer to these aids to demonstrate their knowledge about their patient outcomes during the survey, which required participants to estimate the frequency of postencounter patient outcomes over several time periods. We assessed their attitudes about patient outcomes by asking about the confidence in estimates, ease of finding this information, and usefulness of knowing this information. Furthermore, we asked about their willingness to use their current method to find these outcomes, their trust in the data obtained, and whether the information collected is representative of the overall trends for all their patients.Task Analysis
We then provided participants access to LOOP and performed task analysis while they used the tool. The first set of tasks was to determine the number of patients they had seen over the past 30 days who experienced each outcome of interest. Additionally, we asked users to navigate within our EHR to the chart of a patient who returned within 72 hours and was admitted. We also asked them to identify a patient who had died during their hospitalization and to email the patient’s information to a member of our team (to simulate informing a colleague about a patient outcome). Lastly, we asked users to locate the list of all patients they had seen in the past 30 days and to identify a patient who was dispositioned to hospital observation. While completing these tasks, participants were asked to use the “think-aloud” method, verbalizing their thought process. Research team observers assessed usability performance metrics, such as task completion time and methods of navigation, and identified struggle points.
Post-LOOP Survey and Interview
After using LOOP, participants were asked to complete a second survey to assess their knowledge, skills, and attitudes related to using LOOP. This survey asked the same questions as the initial survey and assessed their experience using LOOP. To assess usability of the tool, we combined and adapted the System Usability Scale [
] and the Standardized User Experience Percentile Rank Questionnaire [ ], both validated instruments composed of statements, and asked users to indicate their level of agreement with each: strongly disagree, disagree, agree, or strongly agree ( ); one plausible scoring approach is sketched after the interview questions below. In the final step of the usability evaluation, we performed semistructured interviews to debrief with each participant about their experience with LOOP related to perceived benefits, usefulness, and intention to use ( ). At the end of each usability evaluation, we asked for feedback about the usability evaluation experience, and observers had the opportunity to ask follow-up questions about observations from the task analyses.

Questions from a usability evaluation semistructured interview.
- Overall, how would you describe your experience with the Linking Outcomes Of Patients (LOOP)?
- I know you were asked in the surveys about ease—do you find the interface easy to use? Why?
- Did you find any aspects difficult? What did you expect to happen?
- How do you feel about the level of information being shown in the outcomes? Do you find the information easy or difficult to digest? Why?
- Was there any information you found surprising?
- Do you find LOOP is more or less effective than your current method of reviewing patients? Why? About how long would you estimate you spend on your current method?
- Are there other benefits you see to using this tool?
- When do you see yourself using LOOP? What feature or addition would bring you back to using LOOP?
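Returning to the survey instrument referenced above, the sketch below shows one plausible way to tabulate 4-point agreement items of the kind used in our adapted usability survey. The long-format reshaping, the 1 to 4 numeric mapping, and all variable names are our own illustrative assumptions, not the study’s actual scoring scheme.

```r
# Sketch: tabulating 4-point agreement items; names and mapping are
# illustrative assumptions, not the study's actual scoring scheme.
library(dplyr)
library(tidyr)

agreement_levels <- c("strongly disagree", "disagree", "agree", "strongly agree")

score_items <- function(responses) {
  # responses: one row per participant, one column per survey statement
  responses %>%
    pivot_longer(everything(), names_to = "item", values_to = "answer") %>%
    mutate(score = as.integer(factor(answer, levels = agreement_levels))) %>%
    group_by(item) %>%
    summarise(
      median_score = median(score, na.rm = TRUE),
      pct_agree    = 100 * mean(score >= 3, na.rm = TRUE)  # agree or strongly agree
    )
}

# Example with two hypothetical participants and two statements
score_items(tibble(
  easy_to_use = c("agree", "strongly agree"),
  trust_data  = c("disagree", "agree")
))
```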
Data Analysis
After each usability evaluation session, the research team debriefed, uploaded their data to a secure team folder, and collectively summarized the performance metrics, key findings, and issues to address by comparing notes to reach consensus. Issues raised by users during testing, as well as during their semistructured interviews, were classified as either front-end (design) or back-end (data infrastructure) challenges. Additionally, issues were categorized based on benefit to the user (significant or minimal benefit) and effort to address (easy, intermediate, or difficult). For statistical analysis of the preusability and postusability evaluation survey results as well as task times, descriptive statistics were calculated and data were visualized using R 4.0.3 (R Core Team).
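A minimal sketch of this descriptive step appears below; the task_minutes vector is placeholder data standing in for the recorded per-participant task times, and the base-R calls mirror the kind of median (range) summaries reported in the Results.

```r
# Descriptive summary sketch in R; task_minutes is placeholder data,
# not the study's recorded task times.
task_minutes <- c(0.8, 1.0, 1.1, 1.3, 1.9, 2.4, 3.7)

cat(sprintf("median %.2f (range %.1f-%.1f) minutes\n",
            median(task_minutes), min(task_minutes), max(task_minutes)))

# Simple base-R visualization of the distribution
boxplot(task_minutes, horizontal = TRUE,
        xlab = "Time to complete task (minutes)")
```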
Results
Phase 1: Conceptual Development Under an HCD Framework
Frame and Plan
Through cocreation sessions, the research team came to consensus on a well-defined focus: creating a closed feedback loop of postencounter patient-based outcomes for EM providers. The team also determined the steps for accomplishing the remaining stages of the project, which are further described below.
Research and Synthesis
The design researchers completed 18 person-hours of workflow observations across both ED sites and conducted semistructured interviews of 11 EM clinicians. Attending EM physicians, resident physicians, and advanced practice providers participated in observations and interviews. Surveys were completed by 24 attending EM physicians. Several important themes emerged from observations and interviews, as detailed in
. Many EM clinicians were observed using self-devised work-around solutions to track post-ED encounter patient outcomes, including manual creation of patient lists (electronic or handwritten) to facilitate later EHR review and exchange of contact information with patients; others reported similar approaches during interviews. We found that EM clinicians desire information about post-ED encounter patient outcomes and see this type of feedback as important to practice-based learning. They also reported that this information is most often unavailable and, when available, is predominantly negative (ie, associated with adverse patient outcomes). Several clinicians reported that these strategies are only effective when time permits, which is a continuous challenge in the ED environment.

Theme | Examples of observations | Examples of interview quotations |
Emergency Medicine clinicians value postencounter outcomes–driven feedback. | Physician writes down patient’s phone number and reports they do this when they want feedback about that patient’s outcome. Physician gives their personal telephone number to older patients to enable closed-loop feedback. | …I wish there was a way they could contact me and say, ‘I’m not improving, I’m going to see my primary care provider.’ …Post encounter feedback is crucial. (About patient outcome follow-up) …It’s sort of like a vitamin that I have to take every day for my health. It’s something that will make me a better doctor in the long run. |
Existing systems for delivery of information about patient outcomes are severely limited. | Physician pulls out the list of patients they keep track of from their pocket. Says they are only able to track a couple of patients in each shift. | …If patients are not put on a list in (the electronic health record) they disappear. …The feedback (we receive) is not representative of what is actually happening. |
Currently available outcomes-driven feedback is predominantly negative. | Physician reports that if something bad happens, clinicians find out from leadership. Physician seems tense when discussing. | …Feedback is limited to lawsuits and bad outcomes. …I get feedback if someone complains or dies. |
Emergency Medicine clinicians use workaround solutions to obtain outcome information for cases perceived as interesting or high risk. | Physician leaves patient notes unsigned so that patients’ charts will remain in their electronic health record workflow, forcing additional case review. | …I call patients if I’m concerned about them. …I keep a list of patients on my electronic health record profile when I want to see what happened to them. |
These themes were further supported by survey results. The most frequent mode reported for learning about patient outcomes was manual EHR chart review (20 of 24 surveyed EM clinicians), followed by email (7/24), phone (5/24), and face-to-face communications (4/24) between colleagues and patients. A small proportion of EM clinicians (2/24) reported learning about outcomes “haphazardly” and during morbidity and mortality conferences, further reinforcing the idea that most feedback available to EM clinicians is negative. Three-quarters of those surveyed (18/24) preferred to receive both positive and negative feedback, while 6 preferred to receive feedback related to negative outcomes only. Most EM clinicians wanted to know if patients required escalation of care level (eg, transfer from floor to intensive care unit) shortly after admission (20/24), died during their hospitalization (15/24), had a discrepancy between diagnosis assigned at time of admission (ED diagnosis) and diagnosis assigned at hospital discharge (inpatient diagnosis) (14/24) or returned for repeat ED evaluation within a brief time window after ED discharge (16/24). Fewer surveyed clinicians (<30%) wished to be notified when patients filled ED prescriptions, visited an urgent care clinic, followed up with primary care physicians, or had medication treatment regimens changed in the inpatient setting after ED departure.
User personas, guiding design principles, and opportunity areas were also defined using information gathered during design research activities. Six user personas that spanned age groups, learning styles, and affinities for technology were generated and used to drive iterative tool design and development and to perform internal testing of prototypes (
). Design principles around which all future design and development activities would revolve included (1) recognition and demonstration of value for the clinician’s practice, (2) capturing curiosity and encouraging action through knowledge building, (3) prioritization of clear and simple information delivery, and (4) maintenance of flexibility to respond to end user clinicians’ needs and preferences as they arose throughout the co-design process. Finally, the 4 primary opportunity areas identified for meaningful impact were (1) provision of balanced positive and negative feedback, (2) provision of feedback in a format that allows for improvement of decision-making without overwhelming clinicians, (3) provision of both population-level and patient-level outcome data, and (4) creation of a platform that is customizable at the individual clinician level.

Ideation, Prototyping, and Iteration
During our multistakeholder ideation session, designers collaborated with engineers, data specialists, and end user EM clinicians to translate the themes, opportunity areas, and design principles above into a set of target design features that would guide analog prototyping of our tool. Design features that emerged from this session are shown in
. Each participating EM clinician was then assigned a clinician user persona, and a 2D sketch representation (first prototype) of a feedback platform was generated for that persona (see for a representative example). Although all design features were represented across the user-generated prototypes, only comprehensive patient lists, data tagging, and EHR interoperability were observed in all prototypes. Over several additional weeks, designers analyzed user prototypes and, through iteration, developed a final analog prototype that included as many design features as possible.

Design features | Purpose |
Comprehensive lists | Allows users to find information about all the patients they have cared for |
Electronic health record interoperability | Allows users to transition between platform and patient’s clinical chart |
Data tagging | Gives users the power to drill down through the use of data labels/tags |
Pin a patient | Allows users to prioritize patients of interest for future review |
Glow moments | Allows other users to appreciate the work of fellow clinicians |
Task timer | Allows a user to customize their experience based on time availability |
Home/hospital toggle | Limits visibility of protected health information outside of hospital setting |
Patient timeline | Allows users to see a patient’s journey after their care |
Notification/reminder | Allows users to set an alarm to review outcomes |
Phase 2: Technical Platform Development
As shown in
, analog prototypes were transitioned to a digital feedback platform in a stepwise fashion. The first digital versions of LOOP were developed using design software that was not connected to real-time data flows. The earliest version ( A) focused on incorporation of design features defined during phase 1 ideation, while later versions focused on establishing dimensionality that would facilitate attribution of patients to individual clinicians and allow for data sorting and filtering by time and outcome ( A). Finally, the digital platform was translated into the analytic software used by our health care system ( A) and optimized to accept real-time feeds of normalized EHR data. End user EM clinicians, engineers, and data analysts were included at every stage of digital development. Engineers and data analysts ensured the platform was technically feasible, while end users ensured it was functional and maintained consistency with the themes and design principles generated during our HCD studio. Most design features that were incorporated into early end user prototypes ( ) were included in the final version of LOOP, including comprehensive lists, EHR interoperability, data tagging, and home/hospital toggle ( B). Other features, including pin a patient, task timer, and notification reminder, were not included as explicit features of the final platform, but the tasks associated with them could be performed within the platform through other mechanisms. Still others, including glow moments and patient timeline, were not included in the final platform owing to technical limitations; we will seek to incorporate these in future versions.

Phase 3: Usability Evaluation
For usability evaluation, our study population included 12 EM providers, of whom 6 (50%) were women. There were 3 (25%) attending physicians, 7 (58%) resident physicians, and 2 (17%) advanced practice providers. The median age was 33.5 (IQR 28-38.3) years. Usability evaluation sessions ranged from 30 to 45 minutes in duration.
Pre-LOOP Survey
When asked about their current method for identifying outcomes for their patients, the median time spent per week following up on patient outcomes was 1.5 (range 0.5-3.5) hours. Of the 12 EM providers, 9 (75%) described their current method as manually adding each patient to custom-made lists within the EHR; 2 (17%) stated that they make handwritten lists of their patients, and 1 (8%) had no method for tracking patient outcomes. Participant attitudes about their current method for determining patient outcomes indicated there was room for improvement. When asked their level of agreement with the statement “I am likely/willing to review my patients using my current method,” 11 (92%) users either “strongly disagreed” or “disagreed”; 8 (67%) users disagreed with the statement “I trust the data I am able to find on my patients using my current method.” Most users (9/12, 75%) “agreed” or “strongly agreed” that the data gathered using their current method were representative of the overall trends for all their patients.
shows participant attitudes about the usefulness of access to outcomes. Most users reported that being able to access all 3 outcomes was “very useful” at the individual patient level and for all their patients.

Task Analysis
The task of using LOOP to find all 3 outcomes for their patients from the last 30 days was completed in a median of 1.09 (range 0.7-4.3) minutes. The median time to complete all 3 special function tasks (eg, navigating to the electronic record from LOOP, emailing patient information to a colleague, navigating filter features of LOOP) was 6.9 (range 1.7-12.2) minutes. Notable observations during this task analysis included (1) users spending time interpreting a graph instead of noticing the summarizing number elsewhere on the screen and (2) data display errors.
Post-LOOP Survey and Interview
Post-LOOP survey completion time was a median of 1.8 (range 1.1-2.7) minutes. Participants’ knowledge of the number of patients who experienced each outcome differed from observed outcomes in LOOP (Figure S1 in
). Participants underestimated the number of patients who died in hospital but overestimated the number who required an escalation of care. Of note, 1 participant was excluded owing to nonresponse on the initial survey (Figure S2 in ). The participants’ attitudes about their estimates of each patient outcome over the past 30 days improved after using LOOP (Figure S2 in ). Of note, 1 participant was removed from the analysis, only for the question “How easy is it for you to determine this outcome for all your patients?” for all 3 outcomes, owing to nonresponse on the post-LOOP survey. For all 3 outcomes, users shifted from feeling the outcomes were “not easy” to determine at the individual and cumulative patient levels to feeling they were “very easy” to determine after using LOOP. Additionally, users changed from feeling “not confident” or “somewhat confident” about their estimates to overall “very confident” after using LOOP. Participant attitudes about LOOP were overall favorable ( ). On a scale from 0 (unlikely) to 10 (most likely), users were likely to recommend (score=9.5) LOOP to a colleague.

The semistructured interviews debriefing users about their LOOP experience further informed our understanding of their perceptions of LOOP. Users identified several benefits of LOOP, such as access to data on patients they would not otherwise have had the time or foresight to follow up on. One user described LOOP as “much more systematic” than their current methods. Importantly, trainees identified the opportunity to use LOOP as an educational tool that facilitates discussion with their attendings about prior cases with surprising outcomes. A user commented that LOOP is a “fantastic learning tool to make you a better clinician.” The issues identified through constructive feedback were addressable. Many concerns centered on harmonizing the visual layout with the functionality on the main page as well as across the platform (eg, connecting summary numbers with the corresponding patient lists). Another issue was improving the defaults and layouts of filtered lists so that the interaction would be more intuitive. The ability to discuss concerns and possible improvements with users while they had the tool in front of them was highly informative.

Discussion
Principal Results
Leveraging previously underutilized EHR data, LOOP was designed and developed to enable systematic delivery of personalized patient outcomes feedback to EM clinicians. The platform allows EM clinicians to (1) quickly review post-ED encounter outcomes at the individual patient level, (2) see outcomes for all patients in their census, and (3) customize views of data based on user preference. Our usability evaluation showed the tool is easy to use and that information presented within LOOP is viewed as valuable and reliable by end users. The usability evaluation also revealed that information delivered within LOOP is not currently known by EM clinicians. Our team’s creation of a tool that is both useful and usable was enabled by a process centered on HCD principles and by a commitment to incorporation of end users at every stage of design and development. Although the version of LOOP reported here includes a selected set of outcomes only (in-hospital mortality, inpatient level of care escalations, and return ED visits), the tool was designed with flexibility to allow for ongoing rapid integration of additional outcomes based on user feedback and clinical need.
Comparison With Prior Work
The absence of patient outcomes feedback in EM is well documented, as is EM clinicians’ recognition of the importance of this information and their desire to receive it [
, , , ]. Detailed information related to patient outcomes is routinely collected and stored within the EHR data infrastructure; opportunity exists to enable practice-based learning and improvement through standardization and delivery of this information to clinicians [ , ]. However, most efforts to provide patient outcomes feedback in EM have relied on analog and nonsystematic approaches, such as manual creation of patient follow-up lists or telephone calls to patients by clinicians [ , - ]. These approaches are labor-intensive and time-consuming. They also have the potential to promote cognitive bias: when tasked with selecting patients for follow-up themselves, clinicians often focus on cases they already perceived as challenging or at the highest risk for adverse outcomes [ , ]. The automated and comprehensive method of data delivery used by LOOP ensures that clinicians receive robust, unbiased outcome data, including information that is unexpected and not discoverable using previously described methods. Exposure of these outcomes is important for both early career and seasoned clinicians because unexpected events uncover unconscious deficits and present opportunities to improve competence and increase the quality and safety of care delivered [ , - ].

Prior efforts to harness EHR data for EM practice improvement have focused on the development of dashboards to guide ED operations management or to enhance real-time display of patient data during the ED encounter [
- ]. To our knowledge, LOOP is the first tool that translates EHR data into post-ED encounter patient outcomes feedback for individual EM clinicians. The systematic delivery of these data by LOOP facilitates deliberate practice in EM, a process whereby expertise is developed through repeated action and skills improvement, driven by continual feedback and reassessment [ , , ]. Such practice-based learning is critical to the personal and professional development of clinicians and is mandated by both the ACGME and ABMS [ , ]. The generation and automation of personalized EHR data flows to fill outcome knowledge gaps is a significant step forward for experiential learning in EM. It also represents a step toward more meaningful use of the EHR and the development of a learning health care system, both of which are major goals for our nation’s health care system [ ].

Pragmatic Usability Evaluation
While the potential value of information delivered by LOOP is clear, its real-world utility is dependent on end-user acceptance and long-term adoption. User interface and information display greatly impact whether a tool is adopted by the user [
]. Activities associated with our HCD studio revealed high variation among potential users of LOOP (EM clinicians) with respect to clinical experience, experience and comfort with technology, current practice-based learning behaviors, and desired patient outcomes feedback, all of which were considered during our HCD process. We used a robust and pragmatic approach that allowed for evaluation of LOOP in a near real-world setting. Our assessment, grounded in the knowledge, skills, and attitudes framework [ ], included (1) direct observation and task analysis as users interacted with the tool to find information about real patient encounters, (2) surveys that included standard usability questions and assessed knowledge and attitudes to allow comparison with current practices, and (3) semistructured interviews that further explored these topics. We also performed this assessment among a diverse and representative group of end users. Our findings were almost exclusively positive.

Human-Centered Design (HCD)
Our commitment to the use of HCD methods at every stage of this project was critical to its success. The incorporation of user input into the development of information technology platforms is now considered essential in health care [
]. Previously reported clinician user involvement in similar projects is variable, with some groups limiting involvement to the ideation, implementation, or testing phases only [ , , ]. To our knowledge, the HCD methodology used here is among the most intentional and extensive reported to date. This approach was enabled by our multidisciplinary team structure. Longitudinal collaboration between clinician-scientists, engineers, and designers allowed for interpretation of end user contributions through multiple lenses and ensured that the final product of our work was a well-rounded representation of user-generated specifications. We believe this increased end user trust in the final product and will translate to higher rates of adoption in clinical practice. The results of our usability evaluation suggest this is true.

Limitations
This study has several limitations. First, the study was performed within a single health care system, which may limit its generalizability. However, our incorporation of end user EM clinicians throughout design, development, and evaluation activities from varied practice settings (urban academic and suburban community) and levels of training (attending EM physicians, resident physicians, advanced practice providers) minimizes this limitation. In addition, our HCD focus and technical approach to EHR data normalization and presentation are both reproducible and generalizable. Feedback platforms that integrate the needs and wants of clinical staff could be generated by other groups using a similar methodological approach. Second, our usability evaluation was performed in a relatively small sample of EM clinicians. This limitation was mitigated by including a diverse and representative user group and by using varied qualitative assessment techniques: surveys, direct observations, and semistructured interviews. Our sample size was consistent with those previously reported by others and sufficient to reach thematic saturation using these methods [
, ]. Finally, this study did not evaluate long-term adoption rates or the effectiveness of LOOP for clinical practice improvement. Both are important ongoing research objectives of our team. LOOP is currently in use across multiple EDs, and data collection to facilitate study of these questions is underway.

Conclusions
This study demonstrates the potential of HCD in EM and the power of EHR data to augment practice-based learning in episodic care environments. We have outlined a framework for working alongside end-user EM clinicians to develop and test a tool that augments their clinical experience and exposes previously unavailable information to create a closed-loop feedback-driven learning platform. Future objectives include incorporation of additional patient outcomes into LOOP, measurement of long-term adoption rates, and impacts of patient outcomes feedback provided by LOOP on clinical practice.
Acknowledgments
This work was supported by grant R18 HS026640-02 (JH and SL) from the Agency for Healthcare Research and Quality (AHRQ), US Department of Health and Human Services. The authors are solely responsible for this document’s contents, findings, and conclusions, which do not necessarily represent the views of AHRQ. Contributions to this project by members of the digital health company StoCastic (AdA and AD) were supported by grant R18 HS026640-02. We thank all participants in the Connected Emergency Care Patient Safety Learning Lab for their involvement. We also thank Maryland Institute College of Art (MICA)’s codirector of the Center for Social Design, Lee Davis, and MICA’s graduate students involved with the design studio: Kadija Hart, Vidisha Agarwalla, Harley French, Sasha Avrutina, Ayangbe Mannen, Eesha Patne, and Eunsoo Kim.
Authors' Contributions
All authors contributed to the editing and final approval of this manuscript. ATS, CM, CEK, BS, SL, and JH contributed to data collection. ATS, CM, CEK, BS, AGS, EK, MT, SL, and JH contributed to data analysis.
Conflicts of Interest
JH serves as StoCastic’s Chief Medical Officer and SL serves as StoCastic’s Chief Technical Officer; both own equity in StoCastic. This arrangement has been reviewed and approved by the Johns Hopkins University in accordance with its conflict of interest policies. AdA and AD, members of the digital health company StoCastic, also contributed to this project. The remaining authors have no conflicts of interest to declare.
Survey deployed as part of the human-centered design studio. The survey provided more specific data about which postencounter outcomes physicians wanted for admitted and discharged patients.
PDF File (Adobe PDF File), 68 KB
Usability evaluation surveys collecting demographic information, assessing current practice, and capturing pre– and post–Linking Outcomes Of Patients perceptions about patient outcomes. The post–Linking Outcomes Of Patients survey also specifically addressed participants’ experience with Linking Outcomes Of Patients.
DOCX File, 32 KB
Participant knowledge about patient outcomes such as (A) in-hospital mortality, (B) escalation of level of care, and (C) return to the emergency department. There were 12 participants; one was excluded owing to nonresponse.
PNG File, 68 KB
Participant attitudes about their estimates for patients with each outcome, pre– and post–Linking Outcomes Of Patients. Three questions were asked for each outcome in the presurvey and postsurvey: 1. How easy is it for you to determine this outcome for all your patients? 2. How easy is it for you to determine this outcome for an individual patient? 3. How confident are you in your estimate?
PNG File, 22 KB

References
- Staker LV. Practice-based learning for improvement: the pursuit of clinical excellence. Tex Med 2000 Oct;96(10):53-60. [Medline]
- Hayden SR, Dufel S, Shih R. Definitions and competencies for practice-based learning and improvement. Acad Emerg Med 2002 Nov;9(11):1242-1248 [FREE Full text] [CrossRef] [Medline]
- Burke AE, Benson B, Englander R, Carraccio C, Hicks PJ. Domain of competence: Practice-based learning and improvement. Acad Pediatr 2014;14(2 Suppl):S38-S54. [CrossRef] [Medline]
- Epstein RM. Mindful practice. JAMA 1999 Sep 01;282(9):833-839. [CrossRef] [Medline]
- Berwick DM. Developing and testing changes in delivery of care. Ann Intern Med 1998 Apr 15;128(8):651-656. [CrossRef] [Medline]
- Croskerry P. The feedback sanction. Acad Emerg Med 2000 Nov;7(11):1232-1238 [FREE Full text] [CrossRef] [Medline]
- Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med 2004 Oct;79(10 Suppl):S70-S81. [CrossRef] [Medline]
- Dine C, Miller J, Fuld A, Bellini L, Iwashyna T. Educating Physicians-in-Training About Resource Utilization and Their Own Outcomes of Care in the Inpatient Setting. J Grad Med Educ 2010 Jun;2(2):175-180 [FREE Full text] [CrossRef] [Medline]
- Maggard-Gibbons M. The use of report cards and outcome measurements to improve the safety of surgical care: the American College of Surgeons National Surgical Quality Improvement Program. BMJ Qual Saf 2014 Jul;23(7):589-599 [FREE Full text] [CrossRef] [Medline]
- Lavoie CF, Plint AC, Clifford TJ, Gaboury I. "I never hear what happens, even if they die": a survey of emergency physicians about outcome feedback. CJEM 2009 Nov;11(6):523-528. [CrossRef] [Medline]
- Juntti-Patinen L, Kuitunen T, Pere P, Neuvonen PJ. Drug-related visits to a district hospital emergency room. Basic Clin Pharmacol Toxicol 2006 Feb;98(2):212-217 [FREE Full text] [CrossRef] [Medline]
- National hospital ambulatory medical care survey: emergency department summary tables. Center for Disease Control and Prevention (CDC). URL: https://www.cdc.gov/nchs/data/nhamcs/web_tables/2016_ed_web_tables.pdf [accessed 2021-01-01]
- Kovacs G, Croskerry P. Clinical decision making: an emergency medicine perspective. Acad Emerg Med 1999 Sep;6(9):947-952 [FREE Full text] [CrossRef] [Medline]
- Zavala AM, Day GE, Plummer D, Bamford-Wade A. Decision-making under pressure: medical errors in uncertain and dynamic environments. Aust. Health Review 2018;42(4):395. [CrossRef]
- Augustine J. The trip down admission lane. ACEP Now. URL: https://www.acepnow.com/wp-content/uploads/2019/12/ACEP_December_2019.pdf [accessed 2022-03-14]
- Niska R, Bhuiya F, Xu J. National Hospital Ambulatory Medical Care Survey: 2007 emergency department summary. Natl Health Stat Report 2010 Aug 06(26):1-31 [FREE Full text] [Medline]
- Lavoie CF, Schachter H, Stewart AT, McGowan J. Does outcome feedback make you a better emergency physician? A systematic review and research framework proposal. CJEM 2009 Nov;11(6):545-552. [CrossRef] [Medline]
- Chern C, How C, Wang L, Lee C, Graff L. Decreasing clinically significant adverse events using feedback to emergency physicians of telephone follow-up outcomes. Ann Emerg Med 2005 Jan;45(1):15-23. [CrossRef] [Medline]
- Kruse CS, Goswamy R, Raval Y, Marawi S. Challenges and Opportunities of Big Data in Health Care: A Systematic Review. JMIR Med Inform 2016 Nov 21;4(4):e38 [FREE Full text] [CrossRef] [Medline]
- Janke AT, Overbeek DL, Kocher KE, Levy PD. Exploring the Potential of Predictive Analytics and Big Data in Emergency Care. Ann Emerg Med 2016 Feb;67(2):227-236. [CrossRef] [Medline]
- Melles M, Albayrak A, Goossens R. Innovating health care: key characteristics of human-centered design. Int J Qual Health Care 2021 Jan 12;33(Supplement_1):37-44 [FREE Full text] [CrossRef] [Medline]
- Graff L, Stevens C, Spaite D, Foody J. Measuring and improving quality in emergency medicine. Acad Emerg Med 2002 Nov;9(11):1091-1107 [FREE Full text] [CrossRef] [Medline]
- Claeys KC, Lagnf AM, Patel TB, Jacob MG, Davis SL, Rybak MJ. Acute Bacterial Skin and Skin Structure Infections Treated with Intravenous Antibiotics in the Emergency Department or Observational Unit: Experience at the Detroit Medical Center. Infect Dis Ther 2015 Jun;4(2):173-186 [FREE Full text] [CrossRef] [Medline]
- Collier K. Introducing agile analytics. In: Agile Analytics: A Value-driven Approach to Business Intelligence and Data Warehousing. Boston, MA: Addison-Wesley Professional; Jul 27, 2011:3-23.
- What is agile software development? Agile Alliance. 2019. URL: https://www.agilealliance.org/agile101/ [accessed 2020-11-29]
- Sadosty AT, Stead LG, Boie ET, Goyal DG, Weaver AL, Decker WW. Evaluation of the Educational Utility of Patient Follow-up. Acad Emergency Med 2004 May;11(5):715-719. [CrossRef]
- Bentley B, DeBehnke D, Ma O. Survey of follow-up systems in emergency medicine residencies: analysis and proposal. Acad Emerg Med 1994;1(2):1116-1120 [FREE Full text] [Medline]
- Gupta R, Siemens I, Campbell S. The use of outcome feedback by emergency medicine physicians: Results of a physician survey. World J Emerg Med 2019;10(1):14-18 [FREE Full text] [CrossRef] [Medline]
- Wogan JM. Emergency department follow-up of hospitalized patients. Am J Emerg Med 2000 Jul;18(4):510-511. [CrossRef] [Medline]
- Bowen JL, Ilgen JS, Irby DM, Ten Cate O, O'Brien BC. "You Have to Know the End of the Story": Motivations to Follow Up After Transitions of Clinical Responsibility. Acad Med 2017 Nov;92(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 56th Annual Research in Medical Education Sessions):S48-S54. [CrossRef] [Medline]
- Bowen JL, O'Brien BC, Ilgen JS, Irby DM, Ten Cate O. Chart stalking, list making, and physicians' efforts to track patients' outcomes after transitioning responsibility. Med Educ 2018 Apr;52(4):404-413. [CrossRef] [Medline]
- McDaniel RR, Jordan ME, Fleeman BF. Surprise, Surprise, Surprise! A complexity science view of the unexpected. Health Care Manage Rev 2003;28(3):266-278. [CrossRef] [Medline]
- Vincent C. Understanding and Responding to Adverse Events. N Engl J Med 2003 Mar 13;348(11):1051-1056. [CrossRef]
- Adams L. Learning a new skill is easier said than done. Gordon Training International. URL: https://www.gordontraining.com/free-workplace-articles/learning-a-new-skill-is-easier-said-than-done/ [accessed 2021-04-29]
- Mazor I, Heart T, Even A. Simulating the impact of an online digital dashboard in emergency departments on patients length of stay. Journal of Decision Systems 2016 Jun 16;25(sup1):343-353. [CrossRef]
- Franklin A, Gantela S, Shifarraw S, Johnson TR, Robinson DJ, King BR, et al. Dashboard visualizations: Supporting real-time throughput decision-making. J Biomed Inform 2017 Jul;71:211-221 [FREE Full text] [CrossRef] [Medline]
- Yoo J, Jung KY, Kim T, Lee T, Hwang SY, Yoon H, et al. A Real-Time Autonomous Dashboard for the Emergency Department: 5-Year Case Study. JMIR Mhealth Uhealth 2018 Nov 22;6(11):e10666 [FREE Full text] [CrossRef] [Medline]
- Ericsson K. An expert-performance perspective of research on medical expertise: the study of clinical performance. Med Educ 2007 Dec;41(12):1124-1130. [CrossRef] [Medline]
- Duvivier RJ, van Dalen J, Muijtjens AM, Moulaert VR, van der Vleuten CP, Scherpbier AJ. The role of deliberate practice in the acquisition of clinical skills. BMC Med Educ 2011 Dec 06;11:101 [FREE Full text] [CrossRef] [Medline]
- Perlin JB, Baker DB, Brailer DJ, Fridsma DB, Frisse ME, Halamka JD, et al. Information Technology Interoperability and Use for Better Care and Evidence: A Vital Direction for Health and Health Care. NAM Perspectives 2016 Sep 19;6(9):1-16. [CrossRef]
- Wright M, Dunbar S, Macpherson B, Moretti E, Del Fiol G, Bolte J, et al. Toward Designing Information Display to Support Critical Care. Appl Clin Inform 2017 Dec 18;07(04):912-929. [CrossRef]
- Kraiger K, Ford J, Salas E. Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology 1993 Nov;78(2):311-328. [CrossRef]
- Ghazisaeidi M, Safdari R, Torabi M, Mirzaee M, Farzi J, Goodini A. Development of Performance Dashboards in Healthcare Sector: Key Practical Issues. Acta Inform Med 2015 Oct;23(5):317-321 [FREE Full text] [CrossRef] [Medline]
- Swartz JL, Cimino JJ, Fred MR, Green RA, Vawdrey DK. Designing a clinical dashboard to fill information gaps in the emergency department. AMIA Annu Symp Proc 2014;2014:1098-1104 [FREE Full text] [Medline]
- Ryan GW, Bernard HR. Techniques to Identify Themes. Field Methods 2016 Jul 24;15(1):85-109. [CrossRef]
- Guest G, Bunce A, Johnson L. How Many Interviews Are Enough? Field Methods 2016 Jul 21;18(1):59-82. [CrossRef]
Abbreviations
ABMS: American Board of Medical Specialties
ACGME: Accreditation Council for Graduate Medical Education
CDEM: Center for Data Science in Emergency Medicine
ED: emergency department
EHR: electronic health record
EM: emergency medicine
HCD: human-centered design
LOOP: Linking Outcomes Of Patients
MICA: Maryland Institute College of Art
Edited by A Kushniruk; submitted 03.05.21; peer-reviewed by T Johnson, F Ghezelbash; comments to author 28.06.21; revised version received 11.07.21; accepted 11.11.21; published 23.03.22
Copyright©Alexandra T Strauss, Cameron Morgan, Christopher El Khuri, Becky Slogeris, Aria G Smith, Eili Klein, Matt Toerper, Anthony DeAngelo, Arnaud Debraine, Susan Peterson, Ayse P Gurses, Scott Levin, Jeremiah Hinson. Originally published in JMIR Human Factors (https://humanfactors.jmir.org), 23.03.2022.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.