Published in Vol 9, No 1 (2022): Jan-Mar

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/28639.
Human Factors and Technological Characteristics Influencing the Interaction of Medical Professionals With Artificial Intelligence–Enabled Clinical Decision Support Systems: Literature Review

Review

Department of Information Systems, University of Siegen, Siegen, Germany

Corresponding Author:

Michael Knop, MSc

Department of Information Systems

University of Siegen

Kohlbettstrasse 15

Siegen, 57072

Germany

Phone: 49 15755910502

Email: michael.knop@uni-siegen.de


Background: The digitization and automation of diagnostics and treatments promise to alter the quality of health care and improve patient outcomes, while the undersupply of medical personnel, the workload of medical professionals, and the complexity of medical cases continue to increase. Clinical decision support systems (CDSSs) have been proven to help medical professionals in their everyday work through their ability to process vast amounts of patient information. However, their comprehensive adoption is partially hindered by specific technological and personal characteristics. With the rise of artificial intelligence (AI), CDSSs have become adaptive technologies with human-like capabilities that are able to learn and change their characteristics over time. However, research has not yet reflected on the characteristics and factors essential for effective collaboration between human actors and AI-enabled CDSSs.

Objective: Our study aims to summarize the factors influencing effective collaboration between medical professionals and AI-enabled CDSSs. These factors are essential for medical professionals, management, and technology designers to reflect on the adoption, implementation, and development of an AI-enabled CDSS.

Methods: We conducted a literature review spanning 3 different meta-databases, screening over 1000 articles and retaining 101 articles for full-text assessment. Of these 101 articles, 7 (6.9%) met our inclusion criteria and were analyzed for our synthesis.

Results: In accordance with our research objective, we identified the technological characteristics and human factors that appear to have an essential effect on the collaboration between medical professionals and AI-enabled CDSSs, namely, training data quality, performance, explainability, adaptability, medical expertise, technological expertise, personality, cognitive biases, and trust. Comparing our results with those from research on non-AI CDSSs, some characteristics and factors retain their importance, whereas others gain or lose relevance owing to the uniqueness of human-AI interactions. However, only 1 (14%) of the 7 studies mentioned theoretical foundations, and only 1 (14%) addressed patient outcomes related to AI-enabled CDSSs.

Conclusions: Our study provides a comprehensive overview of the relevant characteristics and factors that influence the interaction and collaboration between medical professionals and AI-enabled CDSSs. Rather limited theoretical foundations currently hinder the possibility of creating adequate concepts and models to explain and predict the interrelations between these characteristics and factors. For an appropriate evaluation of the human-AI collaboration, patient outcomes and the role of patients in the decision-making process should be considered.

JMIR Hum Factors 2022;9(1):e28639

doi:10.2196/28639


Introduction

Background

From a global perspective, many health care systems face comprehensive challenges that affect how care is delivered to society. In this regard, several factors increasingly strain care structures, processes, and the actors involved. For instance, demographic changes and the overall aging of society raise age-related health issues and demands [1,2] and introduce further case complexity; for example, in the form of comorbidity [3]. Simultaneously, a shortage of personnel and medical expertise can be discerned in many—often remote and rural—regions, caused by the low attractiveness of jobs in care due to inappropriate compensation and high workload [4], the attractiveness of urban areas and structures [5], the absence of young graduates willing to establish new or continue existing practices [6], or the trend toward centralized care facilities, inter alia [7]. As a result, larger catchment areas develop for providers who have to cope with deficient and inequitably distributed first-hand access to care [8]. Further, on a societal level, detrimental access to care can marginalize lower socioeconomic groups, as a study from the United States suggests [9], impeding the maintenance of comprehensive and inclusive care. Considering the increasing complexity of medical care on the one hand and the decreasing time and personnel resources on the other hand, the need to actively support clinicians at the point of care is growing.

Clinical Decision Support Systems

Representing a promising and widely adopted technology for rendering processes and decisions more efficient, so-called clinical decision support systems (CDSSs) are software applications capable of catalyzing and informing the decision-making processes of medical professionals [10]. Although applications exist that target the decisional processes of patients, often called decision aids [11] or patient decision support interventions [12], the clinical use of CDSSs by professionals remains the primary domain of decision support. Here, the evaluation of performance, adoption, effectiveness, and impact on patient outcomes is advancing but still lacks comprehensive approaches [10], including an analysis of the relationships among technological characteristics, continual use, and effects on diagnosis and treatment. Nevertheless, the potential of CDSSs to support diagnostic processes has led to their use in further contexts of medicine; for example, primary care [13], and in several different disciplines, from emergency medicine [14] and dermatology [15] to radiology [16]. Aside from diagnostic purposes, CDSSs are used to detect potentially inappropriate prescriptions of medication [17] or to simulate different treatment strategies and their impact on patient outcomes [18]. To date, however, CDSSs have faced partial nonadoption for numerous reasons; for example, workflow disturbances or trust deficits, and their adoption is linked to many different factors concerning technology and human-technology interaction [19,20]. In particular, the subjective perception of and attitude toward a CDSS remain crucial predictors of adoption [21], because a CDSS goes beyond the primarily objective description of medical information (eg, in electronic health records) and interprets this information to support clinical interventions [19]. Meanwhile, comparing CDSSs across different contexts is difficult because of the already-mentioned variation in user groups (patients, physicians, nurses, etc), medical domains (clinical care, primary care, etc), medical disciplines (dermatology, radiology, etc), and purposes (diagnosis, prescription, treatment, etc).

Owing to technological innovations, health care technologies, including CDSSs, are increasingly enabled by artificial intelligence (AI) [22]. First evaluations of AI-enabled CDSSs promise increased performance and accuracy compared with conventional CDSSs [23]. In addition, clinicians and experts in the field generally expect a simplification of organizational processes, such as patient flows, with the advent of AI [24]. Defined as a technology’s capability to work in a way that a human perceives as intelligent [25], AI is used for various purposes with regard to CDSSs, such as risk prediction for medical complications [26] and adverse drug effects [27]. However, a rigorous and consistent definition of AI is challenging. Therefore, we followed Helm et al [28] and Schuetz and Venkatesh [29] in their emphasis on the adaptive characteristics of AI, meaning that AI-enabled CDSSs are learning entities that change over time while considering their environmental conditions. Consequently, these systems are not deterministic and may provide different outputs for the same input at different times [30]. Compared with medical professionals, AI-enabled systems can outperform human ratings or predictions; for example, concerning the classification of dermal lesions and proliferation [31]. Regarding the adoption of AI-enabled systems in general, ongoing research reports several concerns indicated by clinicians. Although the fear of being replaced appears to depend on the level of knowledge clinicians possess about the concept of AI [32], studies report that clinicians fear being biased by the recommendations of AI, resulting in overconfidence and harmful consequences for patients [33]. In addition, clinicians are concerned that AI might increase the threat of data breaches and the associated risks for patients’ privacy, as well as legal consequences resulting from treatment errors [34]. Nevertheless, current research suggests an ambivalent perception of AI. Despite the aforementioned concerns and potential hindrances to adoption, clinicians assume that AI-enabled systems might save time and improve the continuous monitoring of patients [35]. Furthermore, research has highlighted that only a few clinicians comprehend the variety of applications of AI and its conceptual nature [34,35]. Differences in the perception of AI; for example, regarding the fear of being replaced [36], emphasize the subjectivity of clinicians’ attitudes toward AI.
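This adaptive, nondeterministic property can be made concrete with a minimal sketch (hypothetical data and model, not drawn from the reviewed studies): an incrementally trained classifier can return a different risk estimate for the very same case after it has continued to learn from newly observed data.

```python
# Minimal sketch (hypothetical data and model) of the adaptive property: an
# incrementally trained classifier can return different outputs for the same
# input at different points in time.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial training on a first batch of hypothetical patient feature vectors.
X_t0 = rng.normal(size=(200, 5))
y_t0 = (X_t0[:, 0] > 0).astype(int)
model.partial_fit(X_t0, y_t0, classes=[0, 1])

case = X_t0[:1]                                   # one fixed patient case
p_t0 = model.predict_proba(case)[0, 1]

# Later, the deployed system keeps learning from newly observed cases whose
# distribution has drifted; the very same case may now be scored differently.
X_t1 = rng.normal(loc=0.5, size=(200, 5))
y_t1 = (X_t1[:, 1] > 0).astype(int)
model.partial_fit(X_t1, y_t1)

p_t1 = model.predict_proba(case)[0, 1]
print(f"risk estimate at t0: {p_t0:.3f}, at t1: {p_t1:.3f}")
```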

Considering the ambiguity of concerns regarding clinicians’ attitudes toward AI, the mentioned hindrances to CDSS adoption, and the similarity between the concerns associated with AI and CDSSs (eg, biased decision-making, legal consequences, or fear of being replaced), an AI-enabled CDSS might actually increase the relevance of perceptive and subjective factors for adoption and their interplay with technological characteristics. During the development and evaluation of AI-enabled CDSSs, it became apparent that the potential benefits for clinical performance and treatment quality are maximized by human-AI collaboration rather than by human-AI competition [31]. However, owing to the interactive and adaptive nature of AI-enabled CDSSs, traditional theories and models for explaining the use and adoption of these systems lose their power to explain and predict a successful collaboration between AI and human beings [29,37]. Specific factors regarding AI-enabled technology and human actors such as dermatologists, radiologists, and other medical professionals are emphasized as influencing the relationship between them, including the explainability or understandability of the system [38], its purpose [39], and the resulting trust a human actor places in the system [40]. Considering that factors related to the subjective attitude and perception of clinicians, such as trust, already affect the adoption of non-AI CDSSs [21,41,42], we argue that the advent of AI-enabled systems increases the importance of specific factors that are not exclusively bound to technological characteristics. Given the already investigated hindrances impeding the adoption of CDSSs by clinicians [43,44] and the lack of a sound theoretical basis or the reliance on traditional theoretical approaches within ongoing research [45], the need for a review of AI-specific factors influencing the collaboration between AI and human actors has increased.

Human-AI Interaction and Collaboration

To understand the dyadic relationship between humans and AI, it is necessary to understand key concepts and their interrelations. Although many researchers use the term interaction [46], literature defining what interaction means is scarce. Hornbæk et al [46] showed that there is no common definition and identified 7 concepts of interaction that highlight different perspectives. However, the human-computer interaction framework of Li and Zhang [47] shows that interaction can be generally understood as a process of using a technology for a task in a specific context. In turn, collaboration etymologically stems from the Latin term collaborare, which means to work together. As this origin reveals, collaboration can be understood as a joint effort in which a common goal is pursued. From our perspective, collaboration is thus a successful interaction with an adaptive AI-enabled system. Under the assumption that neither humans nor AI-enabled systems are error-free, a human-AI collaboration is effective when errors are prevented. In this context, a key driver of such effective collaboration is that medical professionals perceive the system as trustworthy (ie, a certain level of trust) for the tasks to be done and accept it. Trust is a complex psychological construct that is described as the will to make oneself vulnerable [48]. If a party considers another party to be trustworthy, the relationship is in turn determined by the perception of the other party’s attributes of ability (the legitimacy of a system’s recommendation for a specific decision), benevolence (the accordance of a human actor’s and the system’s intention and motivation to do good), and integrity (the accordance of a human actor’s and the system’s superordinate values) [40]. Nevertheless, it remains unclear how system design can influence the perception of trustworthiness and what human traits foster the propensity to trust.

Objectives

The objective of this study is to summarize the factors influencing effective collaboration between medical professionals and AI-enabled CDSSs. Capturing these factors is essential for medical professionals, management, and technology designers to reflect on the adoption, implementation, and development of AI-enabled CDSSs [48,49]. Further, we seek to explore which specific outcomes are used to evaluate successful collaboration between humans and AI-enabled CDSSs (performance, effectiveness, impact on patient outcomes, etc) and the theoretical foundations on which these evaluations are based. Finally, comparing the factors associated with AI-enabled CDSSs with those associated with CDSSs not enabled by AI appears to be important in evaluating the extent to which the current literature has already reflected the uniqueness of human-AI collaboration.


Methods

Overview

We conducted a narrative review to summarize the current literature regarding our specific objectives [50]. In the following, we report how we searched for relevant literature to meet our objective, how we selected it, and how we synthesized it, to counteract the subjectivity of our results [50]. We selected 3 different meta-databases to search for studies that met our research objective. We defined our search strategy in accordance with the relatively broad scope of our study [51]. To report our results, we followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines for reviews [52]. Our initial search identified 1161 studies, whose titles and abstracts we screened; of these, 100 (8.61%) satisfied our inclusion criteria. Through a backward search, we identified another study that was included in our full-text assessment, resulting in 101 articles assessed for eligibility. Finally, 6.9% (7/101) of the studies were included in our synthesis of results.
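As a quick plausibility check, the selection flow reported above can be reproduced with a few lines of arithmetic (a minimal sketch; all numbers are taken from the text, and the variable names are our own):

```python
# Minimal accounting sketch of the selection flow reported above.
# All numbers come from the text; the variable names are our own.
screened = 1161              # records identified and screened by title and abstract
included_screening = 100     # records satisfying the inclusion criteria
backward_search = 1          # additional article found through backward search
full_text_assessed = included_screening + backward_search
synthesized = 7              # articles included in the final synthesis

assert full_text_assessed == 101
print(f"screening inclusion rate: {included_screening / screened:.2%}")   # 8.61%
print(f"final inclusion rate: {synthesized / full_text_assessed:.1%}")    # 6.9%
```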

Databases

We included the databases PubMed, PsycInfo, and Business Source Complete for our literature review. PubMed indexes >5000 journals in the fields of medicine, health care, and related disciplines. We used PubMed, in particular, to gather information about the clinical effectiveness and implementation of AI-enabled CDSSs. PsycInfo contains >2000 journals from behavioral and social science research. We searched PsycInfo to examine the psychological dimensions of AI-enabled CDSSs and decisional processes. Finally, we scanned results from Business Source Complete, containing >1000 journals in the field of business sciences, to obtain insights regarding our objective from an economic and procedural perspective.

Study Selection

We combined 2 different sections of search terms (AND conditions). The first section represented the technologies associated with the objective of our research (AI OR artificial intelligence OR machine learning OR cognitive computing OR intelligent agent OR decision support OR recommendation agent). The second section reflected the interactional dimension of human-AI collaboration (trust* OR acceptance OR *agreement OR consent OR compliance OR congruency OR collaboration OR resistance). We included articles published in English over the last 10 years. To select relevant literature, 2 authors (MK and SW) independently screened titles and abstracts to exclude articles that did not involve AI-enabled technology (see the definition in the Clinical Decision Support Systems section) and those that were not related to health care or medicine. The inclusion and exclusion criteria were discussed in detail before the screening. In addition, to familiarize themselves with the procedure, both authors first screened an initial sample of 100 entries; a high level of agreement was achieved, and disagreements were resolved through discussion between the 2 authors (MK and SW). Among the remaining papers, only a few borderline cases required discussion until consensus was reached, and both authors (MK and SW) ultimately arrived at the same result. In the full-text screening, articles that did not involve AI or AI-enabled systems (n=32), did not consider the interaction between human actors and AI-enabled systems (n=15), did not distinguish between AI-enabled and non–AI-enabled CDSSs (n=6), did not involve a CDSS (n=38) or the perspective of medical professionals (n=1), or appeared to be gray literature or opinion (n=2) were excluded. Detailed documentation of the exclusion process for full-text screening is provided in Multimedia Appendix 1, where all excluded studies and the reasons for exclusion are presented. The selection of relevant literature is represented through a PRISMA flowchart (Figure 1). If articles were eligible, we summarized and reported the specific factors influencing effective collaboration.
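For illustration, the following sketch assembles the two term sections described above into a single Boolean query string. This is an assumption for illustration only; actual databases (eg, PubMed) require their own field tags and wildcard syntax.

```python
# Illustrative sketch of how the two search-term sections described above can be
# combined into one Boolean query (AND between sections, OR within a section).
technology_terms = [
    "AI", "artificial intelligence", "machine learning", "cognitive computing",
    "intelligent agent", "decision support", "recommendation agent",
]
interaction_terms = [
    "trust*", "acceptance", "*agreement", "consent", "compliance",
    "congruency", "collaboration", "resistance",
]

def or_block(terms):
    """Join one section of terms into a parenthesized OR condition."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]   # quote multiword phrases
    return "(" + " OR ".join(quoted) + ")"

query = " AND ".join(or_block(section) for section in (technology_terms, interaction_terms))
print(query)
```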

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flowchart. AI: artificial intelligence; CDSS: clinical decision support system.

Results

Overview

On the basis of our study selection, 7 studies were included in our final synthesis. From our perspective, this result stems from the fact that many studies of AI-enabled CDSSs (1) compare solely the diagnostic accuracies of human raters and those of AI-enabled systems and (2) focus on technological characteristics and the development of these systems, but do not discuss their effects on the interaction or collaboration between technology and human actors. Therefore, most (5/7, 71%) of the included studies reflected on the relevance of specific characteristics or factors by contemplating the objective from a meta-perspective (Table 1).

Table 1. Summary of the characteristics of the studies included in our review.

Study | Type of study | Context | Focal point of interest
Cabitza et al [53] | Narrative review | Clinical care; health care (general); clinicians; no specific purpose | Trust
Felmingham et al [54] | Narrative review | Clinical care; dermatology; physicians; diagnostics | Mortality or morbidity
Gomolin et al [55] | Narrative review | Clinical care; dermatology; physicians; diagnostics | Explainability
Reyes et al [56] | Narrative review | Clinical care; radiology; physicians; diagnostics | Trust; explainability
Jeng and Tzeng [57] | Quantitative study | Clinical care; health care (general); physicians; diagnostics | Intention
Tschandl et al [31] | Quantitative study | Clinical care; dermatology; physicians; diagnostics | Performance
Asan et al [30] | Narrative review | Clinical care; health care (general); clinicians; no specific purpose | Trust

Factors Influencing Collaboration

Technological Characteristics

The included studies address different dimensions or steps in the development, implementation, and adoption of AI-enabled CDSSs. The technological characteristics of these systems, that is, the abilities and attributes of a technology that are defined by its design [58], are described as meaningful determinants of the way the interaction between the system and the human actor is shaped. For instance, Cabitza et al [53] concluded that a "truthful, reliable, and representative" system needs high-quality data on which it is trained. Similarly, Asan et al [30] argued that the development of a "healthy trust relationship" with algorithmic decision-making relies on the thoughtful design of system characteristics. In general, the resulting performance of the system and its ability to explain or justify its conclusions appear to be strong predictors of a positive relationship [30,31,54-56]. Reyes et al [56] defined the explainability of an AI-enabled system as the ability to ensure that a human actor understands "the link between the features used by the machine learning system and the prediction." In the current literature, explainability and transparency of a system are often used interchangeably [54] or in the sense that transparency appears to be a superordinate category of explainability [30]. Closely linked to a system's ability to explain its internal processes is the resulting effect on human actors with respect to the subjective interpretability of the given information [55].
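To illustrate the kind of feature-prediction link Reyes et al [56] refer to, the following minimal sketch computes permutation importances for a classifier trained on synthetic data. This is a hypothetical, simplified example; real explainability methods for clinical systems (eg, saliency maps in imaging) are considerably more involved.

```python
# Minimal sketch (synthetic data, simplified method) of making the link between
# input features and a model's prediction visible via permutation importance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]  # stand-ins for clinical variables

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Report which features drive the prediction, most influential first.
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")
```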

Furthermore, Tschandl et al [31] argued that the output of an AI-enabled CDSS, in its dimensions of simplicity, granularity, and concreteness, might affect the final decision of clinicians; the better an AI-enabled system's output is adapted to the situational context of its use, the more precise the overall diagnostic performance of AI and humans becomes (eg, clinicians facing multiclass diagnostic problems are supported by AI-based multiclass probabilities). In addition, one study mentioned the importance of usability and user satisfaction for effective human-AI collaboration [53] but did not provide a definition in the context of AI-enabled CDSSs.
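The following sketch illustrates this idea of output adaptation with hypothetical class names and scores: for a multiclass diagnostic problem, class-wise probabilities align with a clinician's differential diagnosis, whereas a binary malignancy score collapses the same information into a single number.

```python
# Minimal sketch of output adaptation for a multiclass diagnostic problem.
# Class names and raw scores are hypothetical.
import numpy as np

classes = ["nevus", "melanoma", "basal cell carcinoma", "benign keratosis"]
logits = np.array([2.1, 0.7, -0.3, 1.2])    # raw model scores for one lesion

probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax over the four classes

# A binary framing collapses the information into a single malignancy score ...
malignant = {"melanoma", "basal cell carcinoma"}
p_malignant = sum(p for c, p in zip(classes, probs) if c in malignant)
print(f"P(malignant) = {p_malignant:.2f}")

# ... whereas class-wise probabilities match the clinician's differential diagnosis.
for c, p in sorted(zip(classes, probs), key=lambda cp: -cp[1]):
    print(f"{c}: {p:.2f}")
```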

Human Factors

In addition, the social (eg, trust), psychological (eg, personality traits), and cognitive characteristics (eg, cognitive biases) of a human actor affecting their interaction with technology, that is, human factors [59], appear to be meaningful prerequisites for the relationship between systems and actors as well. Asan et al [30], Tschandl et al [31], Felmingham et al [54], and Jeng and Tzeng [57] argued that the clinical experience of medical professionals is a highly important factor in determining the interaction and performance of human-AI collaboration. In general, these studies show that less experienced physicians benefit the most from AI-enabled CDSSs and attain a higher overall diagnostic accuracy, whereas the diagnostic accuracy of experienced physicians differs little or not at all. In addition, Asan et al [30] and Felmingham et al [54] argued that the technological experience and even the personality of medical professionals are important factors in medical professionals' decision-making processes, although no study has yet investigated their effect on the collaboration between AI-enabled CDSSs and human actors. Furthermore, Asan et al [30] and Felmingham et al [54] mentioned that the relationship between the system and the human actor can be disrupted by several cognitive biases affecting collaboration at different times, that is, confirmation bias, the anchoring effect, overconfidence, availability bias, the framing effect, premature closure, and automation bias. Already known from medical decision-making in general, cognitive biases alter the rational processes of medical professionals, resulting in erroneous diagnostics and treatments [60]. Because of biased thinking in decisional processes and the variety of biases occurring at different times in these processes, decisional processes involving AI-enabled CDSSs are also prone to be affected by these biases [54].

Among the included studies, a human actor's trust in an AI-enabled CDSS appeared to be another important factor that directly influences the quality of collaboration and the adoption of technology. For instance, Cabitza et al [53] argued that a lack of trust might result from different technological characteristics and their situational fit but always negatively impacts the overall performance of the human-AI team. Reyes et al [56] hypothesized that the comprehensible explainability of a system ensures a high level of trust, including a system's ability to explicate its learning process and the essential or most effective determinants of its prediction, as well as an adequate and situational visualization of its internal processes. Felmingham et al [54] argued that trust is created through an interactional process between AI and humans. Accordingly, Asan et al [30] also highlighted the interdependency of human factors and system features as constituting factors of trust. However, Asan et al [30] argued that maximizing trust should not be the ultimate goal, as AI also has its limitations, and blind trust could lead to undesirable consequences. Instead, system designers should establish mechanisms that encourage reciprocal skepticism, create healthy trust relationships, and maximize the accuracy of clinical decisions. From this perspective, trust is highly dependent on the personality of the human actor, the system design, and the cognitive biases that might emerge in the collaboration. The reported technological characteristics and human factors influencing effective AI-human collaboration are summarized in Table 2.

Table 2. Technological characteristics and human factors influencing and shaping the relationship and collaboration between AI-enabled clinical decision support systems (CDSSs) and human actors.

Parameter | Definition | Study

Technological characteristics
Training data quality | Information used for training an AI-enabled CDSS to create a truthful, reliable, and representative system | [53]
Performance | The accuracy and reliability of an AI-enabled CDSS | [30,55]
Explainability or transparency | An AI-enabled CDSS's ability to ensure that a human actor understands the processes that lead to the prediction and the prediction itself | [30,31,54-56]
Adapted output or adaptability | The degree to which an AI-enabled CDSS fits into a specific context or environment according to the subdimensions of simplicity, granularity, and concreteness | [31]

Human factors
Medical expertise | The degree of medical experience of a human actor within the context of collaboration with an AI-enabled CDSS | [30,31,54,57]
Technological expertise | The degree of technological experience of a human actor with regard to an AI-enabled CDSS | [30,54]
Personality | A medical professional's attributes and characteristics that influence the interaction with an AI-enabled CDSS | [54]
Cognitive biases | The cognitive processes that alter rational decision-making and perceptions of an AI-enabled CDSS | [30,54]
Trust | The subjective impression of a medical professional that an AI-enabled CDSS is truthful and reliable | [30,53,54]

Evaluation of Medical Outcomes

Of the 7 included studies, only 1 (14%) mentioned the interrelation between an effective human-AI collaboration and primary clinical outcomes. Reviewing an AI-enabled CDSS for skin cancer diagnostics, Felmingham et al [54] mentioned the possible impacts of these systems on a patient's morbidity and mortality associated with skin cancer in general. Other studies described secondary outcomes, such as a system's mathematical accuracy [55] or behavioral intentions to use a CDSS [57]. No study investigated the impact of technological characteristics or human factors on primary clinical outcomes.

Theoretical Foundation of Research

Of the 7 included studies, only 1 (14%) explicitly mentioned the theoretical foundations on which its implications for practice are based. Jeng and Tzeng [57] derived hypotheses for their empirical investigation from the unified theory of acceptance and use of technology, a technology acceptance theory widely adopted to explain the intention to use technology and the subsequent use behavior [61]. An important predictor in this theoretical model is social influence (ie, "...the degree to which an individual perceives that important others believe he or she should use the new system" [61]). However, based on their results, Jeng and Tzeng [57] discarded their theoretical assumption that social influence affects clinicians' intentions to use a CDSS. Felmingham et al [54] discussed the role of cognitive biases in decisional processes involving AI-enabled CDSSs. Nevertheless, Felmingham et al [54] did not explicitly mention the origin of cognitive biases in the prospect theory of Kahneman and Tversky [62].


Discussion

Principal Findings

Our results show that only a few (7/101, 6.9%) studies have broached the issue of individual factors influencing effective collaboration between a human actor and an AI-enabled CDSS. Although unique considerations with regard to these systems appear; for example, the important role of trust [30,53], scarce empirical evidence exists for the relational structure of essential factors or characteristics. In addition, many studies did not describe the involved system and its characteristics extensively enough to enable an accurate differentiation between AI-enabled and non–AI-enabled systems [42]. Therefore, we argue that a more thorough description of the involved system and its characteristics is highly relevant for future research, as it lays the foundation for comparing different systems and their effectiveness. Nevertheless, in the process of reviewing the literature, we were able to differentiate between factors primarily associated with technological structures and functions (technological characteristics) and those primarily associated with human actors' psychological or perceptual attributes (human factors). Both technological characteristics and human factors influence the nature of the interaction between human actors and AI-enabled CDSSs. Interestingly, some technological characteristics and human factors appear to be antecedents of interaction; for example, the personality of medical professionals [54], whereas others appear to be effects of an interaction [53]. Therefore, as suggested by Felmingham et al [54], it can be assumed that human factors and technological characteristics are mutually dependent and together shape the interaction between human actors and AI-enabled CDSSs. As described in the Background section, the shape of an interaction between human actors and AI and their resulting interactional relationship can be considered a condition for successful collaboration. However, the foundation for evaluating an AI-enabled CDSS differs, in accordance with current research addressing non-AI CDSSs [20]. The studies in our results considered the accuracy or mathematical performance of systems, adoption by medical professionals, sustainability and congruency of interaction, and the effects on patient outcomes to be relevant for evaluation. Although the effectiveness of a collaboration between human actors and AI currently depends on the context and objective of a system [53], the paradigm of medicine clearly dictates that the final evaluation of a CDSS rests on its ability to improve primary and secondary patient outcomes [22]. As AI-enabled systems are characterized by their adaptive nature [29], processes of individual interaction and collaboration are likely to be iterative and reciprocal and will change and be refined over time. Figure 2 summarizes this process based on our results and can be considered a proposed descriptive framework for human-AI collaboration.

Figure 2. Steps and elements of reciprocal processes of human–artificial intelligence collaboration.
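To make the reciprocal structure of this framework explicit, the following schematic sketch renders the loop of suggestion, review, and adaptation as code. It is our own illustration with stub classes; none of the names or thresholds come from the reviewed studies.

```python
# Schematic sketch of the reciprocal collaboration loop; all class and method
# names are our own stubs, not an implementation from the reviewed studies.
from dataclasses import dataclass

@dataclass
class Suggestion:
    diagnosis: str
    confidence: float

@dataclass
class Decision:
    diagnosis: str
    final: bool

class StubModel:
    """Placeholder for an adaptive AI-enabled CDSS."""
    def suggest(self, case):
        return Suggestion(diagnosis="melanoma", confidence=0.72)
    def learn(self, case, decision):
        pass  # an adaptive system would refine its parameters here over time

class StubClinician:
    """Placeholder for a medical professional reviewing suggestions."""
    def review(self, case, suggestion):
        # Human factors (expertise, trust, biases) decide whether to accept.
        accept = suggestion.confidence >= 0.7   # purely illustrative trust threshold
        return Decision(diagnosis=suggestion.diagnosis if accept else "defer", final=accept)

def collaborate(case, model, clinician, max_rounds=3):
    """Iterate suggestion, review, and adaptation until a final decision is made."""
    for _ in range(max_rounds):
        suggestion = model.suggest(case)          # technological characteristics at work
        decision = clinician.review(case, suggestion)
        model.learn(case, decision)               # feedback loop: the system adapts
        if decision.final:
            return decision
    return Decision(diagnosis="clinician judgment alone", final=True)

print(collaborate({"lesion": "example case"}, StubModel(), StubClinician()))
```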

When comparing our results with research concerning medical professionals' interactions with non-AI CDSSs, a high correspondence can be noted. Khairat et al [20] mentioned workflow fit (adaptability), computer literacy (technological expertise), trust, a generally optimistic attitude of clinicians (personality), and clinical expertise (medical expertise) as important factors for effective adoption. In addition, Khairat et al [20] reported usability and perceived usefulness as determinants. Whereas perceived usefulness needs further concretization within the context of AI-enabled systems [53], usability might be of only minor relevance for AI-enabled CDSSs, as these systems are based on automated processes of use and integrate human-like ways of communicating (eg, natural language processing for voice control) [29]. In contrast, the explainability of a CDSS appears to be a technological characteristic that strongly influences the collaboration between humans and the system, whether it is enabled by AI or not [63]. However, the differentiation of explainability from related terms such as understandability, interpretability, and transparency has not yet been completed, and the impact of explainability on other relevant factors, including trust, has not yet been empirically verified [63]. In general, it is not clear how and whether technological characteristics and human factors influence other specific aspects of collaboration between human actors and AI-enabled CDSSs. For instance, studies suggest that high clinical expertise influences overall collaborative performance [54] but do not hypothesize possible explanations. Clinical expertise might be associated with a lack of trust in these systems, with overconfidence biases, or with the fact that these systems are sometimes less accurate than experienced physicians.

Furthermore, other studies involving non-AI CDSSs have emphasized the essential role of trust in the effective interaction between humans and the system. Trust is a multidimensional construct: a lack of trust might result from reservations regarding the mathematical accuracy or appropriateness of a system or regarding the purpose of a system in improving patient outcomes [64]. Whereas our literature review reveals the importance of the technological accuracy of AI-enabled CDSSs, research focusing on trust in human-like technology has shown that ability, benevolence, and integrity are essential prerequisites for sustainable adoption [37,40]. However, only 1 (14%) of the 7 included studies highlighting the role of trust in successful collaboration defined the actual meaning of trust [30], and none of the included studies paid attention to its prerequisites. Considering the inconsistent definition of trust in technology [48,65], future research might reveal important prerequisites for trust within the interaction between human actors and AI-enabled technologies. In addition, the relationship between trust in AI-enabled CDSSs and the improvement of clinical outcomes requires further investigation.

Findings from our literature review, as well as ongoing research concerning non-AI patient decision aids, suggest that a stronger theoretical foundation for the interaction between human actors and CDSSs is important [66]. Felmingham et al [54] already demonstrated that cognitive biases, originating from prospect theory, might decisively impact effective collaboration. For instance, the tendency to confirm assumptions already made rather than to falsify them, known as confirmation bias [67], might distort the relationship between medical professionals and AI-enabled CDSSs in the sense that professionals might not accept any opinion other than their own. In contrast, relying on automated information instead of vigilantly seeking and interpreting information, known as automation bias [68], might cause the unreflected acceptance of suggestions made by a CDSS. Therefore, to discuss suitable theoretical foundations, it might be helpful to further explicate and structure the aforementioned nontransparent relations of the different constructs, factors, and characteristics influencing decision-making and collaboration. In addition, problems originating from the application of traditional technology-centered theories (such as the technology acceptance model or the unified theory of acceptance and use of technology) to AI-enabled decision-making might lead to inappropriate results [29,69]. Theories concerning the trust-based adoption of human-like technology [40] promise to address these deficits by emphasizing the interactional components of technology adoption and use.

Limitations

Our study has some limitations. As some studies derived their conclusions about the collaboration between AI-enabled CDSSs and human actors from studies of CDSSs not enabled by AI, or assigned results from non-CDSSs to CDSSs, reasoning about the interrelations between different technological characteristics and human factors is preliminary and requires further investigation. Although our results fit well with current findings about the uniqueness and specific nature of human-AI interaction, only a few (7/101, 6.9%) studies, most of which were narrative reviews, were included, owing to our novel objective and the specific context. This may also be a result of our relatively narrow search, which could be extended by explicating the related constructs and prerequisites of trust. Explorative empirical studies based on suitable theoretical foundations might yield frameworks and models to structure future research on AI-enabled CDSSs, as our study primarily provides an orientation regarding the relevant individual characteristics and factors. The consideration of environmental influences (eg, organizational policies or culture [30] and patients' views [70]) on AI-supported decisional processes for medical care is vital for a comprehensive understanding but cannot be provided within the scope of our review.

Conclusions

We extracted the technological characteristics and human factors relevant for effective collaboration between medical professionals and AI-enabled CDSSs. Although most of the findings from previous research on non–AI-enabled CDSSs are in accordance with our results, the weighting of specific factors might change with AI-enabled systems. The adaptive and increasingly human-like nature of AI-enabled CDSSs emphasizes the time sensitivity and reciprocity of decisional processes that should ultimately lead to an improvement in care. Cognitive biases may occur at any time during these processes, varying the effectiveness of collaboration. Explainability remains an essential prerequisite for interaction, and the expertise and personalities of medical professionals have come into focus. In addition, trust between humans and the system emerges as a central aspect of decisional support, whereas the interrelations among these facets still need to be investigated. Concepts such as shared decision-making justify the integration of patients' demands and wishes, an important factor for medical care whose role in human-AI collaboration is still underrepresented. Currently, it is unclear how these concepts can be integrated into AI-enhanced decisional processes and to what extent medical decisions made with the help of a CDSS are influenced by patients' subjective meaning and understanding of diagnoses or treatments. In addition, as several studies have measured the effectiveness of collaboration by means of other parameters, primary and secondary patient outcomes should be considered in future research.

As described earlier, modern health care structures are under increasing pressure. The medical professionals involved face immense workloads per capita, and the supply of personnel is declining. Because these structures form the initial access points for most citizens in need of care and treatment, approaches that foster more efficient decision-making and treatment processes are becoming imperative for maintaining comprehensive care. Thus, an AI-enabled CDSS represents an important and future-oriented measure that enables actors in the health care domain to improve resource allocation, make timelier and less stressful decisions, and cope with shortages in personnel, facilities, and expertise. However, the potential applications of CDSSs and their pursued benefits call for investigations that shed light on how AI-enabled processes can be implemented within prevalent health care structures so that the associated risks and challenges, such as the oversimplification of individual patient data or the automated initiation of suboptimal or erroneous treatments, can be mitigated.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Documentation of full-text exclusion.

XLSX File (Microsoft Excel File), 25 KB

  1. Davis S, Bartlett H. Healthy ageing in rural Australia: issues and challenges. Australas J Ageing 2008;27(2):56-60. [CrossRef] [Medline]
  2. Demiris G, Hensel BK. Technologies for an aging society: a systematic review of "smart home" applications. Yearb Med Inform 2008:33-40. [Medline]
  3. Kazdin AE, Whitley MK. Comorbidity, case complexity, and effects of evidence-based treatment for children referred for disruptive behavior. J Consult Clin Psychol 2006;74(3):455-467. [CrossRef] [Medline]
  4. Thommasen HV, Lavanchy M, Connelly I, Berkowitz J, Grzybowski S. Mental health, job satisfaction, and intention to relocate. Opinions of physicians in rural British Columbia. Can Fam Physician 2001;47:737-744 [FREE Full text] [Medline]
  5. Yang J. Potential urban-to-rural physician migration: the limited role of financial incentives. Can J Rural Med 2003;8(2):101-106.
  6. Adarkwah CC, Schwaffertz A, Labenz J, Becker A, Hirsch O. Assessment of the occupational perspectives of general practitioners in a rural area. Results from the study HaMedSi (Hausärzte [GPs] for Medical education in Siegen-Wittgenstein). MMW Fortschr Med 2019;161(Suppl 6):9-14. [CrossRef] [Medline]
  7. Rechel B, Džakula A, Duran A, Fattore G, Edwards N, Grignon M, et al. Hospitals in rural or remote areas: an exploratory review of policies in 8 high-income countries. Health Policy 2016;120(7):758-769 [FREE Full text] [CrossRef] [Medline]
  8. Politzer RM, Yoon J, Shi L, Hughes RG, Regan J, Gaston MH. Inequality in America: the contribution of health centers in reducing and eliminating disparities in access to care. Med Care Res Rev 2001;58(2):234-248. [CrossRef] [Medline]
  9. Baah FO, Teitelman AM, Riegel B. Marginalization: conceptualizing patient vulnerabilities in the framework of social determinants of health-an integrative review. Nurs Inq 2019;26(1):e12268 [FREE Full text] [CrossRef] [Medline]
  10. Kruse CS, Ehrbar N. Effects of computerized decision support systems on practitioner performance and patient outcomes: systematic review. JMIR Med Inform 2020;8(8):e17283 [FREE Full text] [CrossRef] [Medline]
  11. Stacey D, Légaré F, Lewis K, Barry MJ, Bennett CL, Eden KB, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev 2017;4:CD001431 [FREE Full text] [CrossRef] [Medline]
  12. Elwyn G, Scholl I, Tietbohl C, Mann M, Edwards AG, Clay C, et al. "Many miles to go …": a systematic review of the implementation of patient decision support interventions into routine clinical practice. BMC Med Inform Decis Mak 2013;13 Suppl 2:S14 [FREE Full text] [CrossRef] [Medline]
  13. Nurek M, Kostopoulou O, Delaney BC, Esmail A. Reducing diagnostic errors in primary care. A systematic meta-review of computerized diagnostic decision support systems by the LINNEAUS collaboration on patient safety in primary care. Eur J Gen Pract 2015;21 Suppl(sup1):8-13 [FREE Full text] [CrossRef] [Medline]
  14. Bennett P, Hardiker NR. The use of computerized clinical decision support systems in emergency care: a substantive review of the literature. J Am Med Inform Assoc 2017;24(3):655-668 [FREE Full text] [CrossRef] [Medline]
  15. Anderson AM, Matsumoto M, Saul MI, Secrest AM, Ferris LK. Accuracy of skin cancer diagnosis by physician assistants compared with dermatologists in a large health care system. JAMA Dermatol 2018;154(5):569-573 [FREE Full text] [CrossRef] [Medline]
  16. Lambin P, Leijenaar RT, Deist TM, Peerlings J, de Jong EE, van Timmeren J, et al. Radiomics: the bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol 2017;14(12):749-762. [CrossRef] [Medline]
  17. Scott IA, Pillans PI, Barras M, Morris C. Using EMR-enabled computerized decision support systems to reduce prescribing of potentially inappropriate medications: a narrative review. Ther Adv Drug Saf 2018;9(9):559-573 [FREE Full text] [CrossRef] [Medline]
  18. Valdes G, Simone 2nd CB, Chen J, Lin A, Yom SS, Pattison AJ, et al. Clinical decision support of radiotherapy treatment planning: a data-driven machine learning strategy for patient-specific dosimetric decision making. Radiother Oncol 2017;125(3):392-397. [CrossRef] [Medline]
  19. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med 2020;3:17 [FREE Full text] [CrossRef] [Medline]
  20. Khairat S, Marc D, Crosby W, Al Sanousi A. Reasons for physicians not adopting clinical decision support systems: critical analysis. JMIR Med Inform 2018;6(2):e24 [FREE Full text] [CrossRef] [Medline]
  21. Laka M, Milazzo A, Merlin T. Factors that impact the adoption of clinical decision support systems (CDSS) for antibiotic management. Int J Environ Res Public Health 2021;18(4):1901 [FREE Full text] [CrossRef] [Medline]
  22. Morley J, Machado CC, Burr C, Cowls J, Joshi I, Taddeo M, et al. The ethics of AI in health care: a mapping review. Soc Sci Med 2020;260:113172. [CrossRef] [Medline]
  23. Hameed BM, Shah M, Naik N, Singh Khanuja H, Paul R, Somani BK. Application of artificial intelligence-based classifiers to predict the outcome measures and stone-free status following percutaneous nephrolithotomy for staghorn calculi: cross-validation of data and estimation of accuracy. J Endourol 2021;35(9):1307-1313. [CrossRef] [Medline]
  24. Dawoodbhoy FM, Delaney J, Cecula P, Yu J, Peacock I, Tan J, et al. AI in patient flow: applications of artificial intelligence to improve patient flow in NHS acute mental health inpatient units. Heliyon 2021;7(5):e06993 [FREE Full text] [CrossRef] [Medline]
  25. McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the Dartmouth summer research project on artificial intelligence. Stanford University. 1955.   URL: http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf [accessed 2020-12-08]
  26. Stone EL. Clinical decision support systems in the emergency department: opportunities to improve triage accuracy. J Emerg Nurs 2019;45(2):220-222. [CrossRef] [Medline]
  27. Patterson BW, Pulia MS, Ravi S, Hoonakker PL, Schoofs Hundt A, Wiegmann D, et al. Scope and influence of electronic health record-integrated clinical decision support in the emergency department: a systematic review. Ann Emerg Med 2019;74(2):285-296 [FREE Full text] [CrossRef] [Medline]
  28. Helm JM, Swiergosz AM, Haeberle HS, Karnuta JM, Schaffer JL, Krebs VE, et al. Machine learning and artificial intelligence: definitions, applications, and future directions. Curr Rev Musculoskelet Med 2020;13(1):69-76 [FREE Full text] [CrossRef] [Medline]
  29. Schuetz S, Venkatesh V. Research perspectives: the rise of human machines: how cognitive computing systems challenge assumptions of user-system interaction. J Assoc Inf Syst 2020;21(2):460-482. [CrossRef]
  30. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res 2020;22(6):e15154 [FREE Full text] [CrossRef] [Medline]
  31. Tschandl P, Rinner C, Apalla Z, Argenziano G, Codella N, Halpern A, et al. Human-computer collaboration for skin cancer recognition. Nat Med 2020;26(8):1229-1234. [CrossRef] [Medline]
  32. Abdullah R, Fakieh B. Health care employees' perceptions of the use of artificial intelligence applications: survey study. J Med Internet Res 2020;22(5):e17620 [FREE Full text] [CrossRef] [Medline]
  33. Liyanage H, Liaw ST, Jonnagaddala J, Schreiber R, Kuziemsky C, Terry AL, et al. Artificial intelligence in primary health care: perceptions, issues, and challenges. Yearb Med Inform 2019;28(1):41-46 [FREE Full text] [CrossRef] [Medline]
  34. Castagno S, Khalifa M. Perceptions of artificial intelligence among healthcare staff: a qualitative survey study. Front Artif Intell 2020;3:578983 [FREE Full text] [CrossRef] [Medline]
  35. Laï MC, Brian M, Mamzer MF. Perceptions of artificial intelligence in healthcare: findings from a qualitative survey study among actors in France. J Transl Med 2020;18(1):14 [FREE Full text] [CrossRef] [Medline]
  36. Oh S, Kim JH, Choi SW, Lee HJ, Hong J, Kwon SH. Physician confidence in artificial intelligence: an online mobile survey. J Med Internet Res 2019;21(3):e12422 [FREE Full text] [CrossRef] [Medline]
  37. Siau K, Wang W. Building trust in artificial intelligence, machine learning, and robotics. Cut Bus Technol J 2018;31(2):47-53.
  38. Holzinger A, Langs G, Denk H, Zatloukal K, Müller H. Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip Rev Data Min Knowl Discov 2019;9(4):e1312 [FREE Full text] [CrossRef] [Medline]
  39. Toreini E, Aitken M, Coopamootoo K, Elliott K, Gonzalez Zelaya C, van Moorsel A. The relationship between trust in AI and trustworthy machine learning technologies. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. New York, NY: Association for Computing Machinery; 2020 Presented at: FAT* '20; January 27-30, 2020; Barcelona, Spain p. 272-283. [CrossRef]
  40. Lankton NK, McKnight DH, Tripp J. Technology, humanness, and trust: rethinking trust in technology. J Assoc Inf Syst 2015;16(10):880-918. [CrossRef]
  41. Kortteisto T, Komulainen J, Mäkelä M, Kunnamo I, Kaila M. Clinical decision support must be useful, functional is not enough: a qualitative study of computer-based clinical decision support in primary care. BMC Health Serv Res 2012;12:349 [FREE Full text] [CrossRef] [Medline]
  42. Johnson MP, Zheng K, Padman R. Modeling the longitudinality of user acceptance of technology with an evidence-adaptive clinical decision support system. Decis Support Syst 2014;57:444-453. [CrossRef]
  43. Liberati EG, Ruggiero F, Galuppo L, Gorli M, González-Lorenzo M, Maraldi M, et al. What hinders the uptake of computerized decision support systems in hospitals? A qualitative study and framework for implementation. Implement Sci 2017;12(1):113 [FREE Full text] [CrossRef] [Medline]
  44. Moxey A, Robertson J, Newby D, Hains I, Williamson M, Pearson SA. Computerized clinical decision support for prescribing: provision does not guarantee uptake. J Am Med Inform Assoc 2010;17(1):25-33 [FREE Full text] [CrossRef] [Medline]
  45. Khong PC, Holroyd E, Wang W. A critical review of the theoretical frameworks and the conceptual factors in the adoption of clinical decision support systems. Comput Inform Nurs 2015;33(12):555-570. [CrossRef] [Medline]
  46. Hornbæk K, Mottelson A, Knibbe J, Vogel D. What do we mean by “Interaction”? An analysis of 35 years of CHI. ACM Trans Comput-Hum Interact 2019;26(4):1-30. [CrossRef]
  47. Li NL, Zhang P. The intellectual development of human-computer interaction research: a critical assessment of the MIS literature (1990-2002). J Assoc Inf Syst 2005;6(11):227-292. [CrossRef]
  48. Mayer RC, Davis JH, Schoorman FD. An integrative model of organizational trust. Acad Manag Rev 1995;20(3):709-734. [CrossRef]
  49. Mcknight DH, Carter M, Thatcher JB, Clay PF. Trust in a specific technology: an investigation of its components and measures. ACM Trans Manage Inf Syst 2011;2(2):1-25. [CrossRef]
  50. Rocco TS, Plakhotnik MS. Literature reviews, conceptual frameworks, and theoretical frameworks: terms, functions, and distinctions. Hum Resour Dev Rev 2009;8(1):120-130. [CrossRef]
  51. Aida A, Svensson T, Svensson AK, Chung UI, Yamauchi T. eHealth delivery of educational content using selected visual methods to improve health literacy on lifestyle-related diseases: literature review. JMIR Mhealth Uhealth 2020;8(12):e18316 [FREE Full text] [CrossRef] [Medline]
  52. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol 2009;62(10):1006-1012. [CrossRef] [Medline]
  53. Cabitza F, Campagner A, Balsano C. Bridging the "last mile" gap between AI implementation and operation: "data awareness" that matters. Ann Transl Med 2020;8(7):501 [FREE Full text] [CrossRef] [Medline]
  54. Felmingham CM, Adler NR, Ge Z, Morton RL, Janda M, Mar VJ. The importance of incorporating human factors in the design and implementation of artificial intelligence for skin cancer diagnosis in the real world. Am J Clin Dermatol 2021;22(2):233-242. [CrossRef] [Medline]
  55. Gomolin A, Netchiporouk E, Gniadecki R, Litvinov IV. Artificial intelligence applications in dermatology: where do we stand? Front Med (Lausanne) 2020;7:100 [FREE Full text] [CrossRef] [Medline]
  56. Reyes M, Meier R, Pereira S, Silva CA, Dahlweid FM, von Tengg-Kobligk H, et al. On the interpretability of artificial intelligence in radiology: challenges and opportunities. Radiol Artif Intell 2020;2(3):e190043 [FREE Full text] [CrossRef] [Medline]
  57. Jeng DJ, Tzeng GH. Social influence on the use of clinical decision support systems: revisiting the unified theory of acceptance and use of technology by the fuzzy DEMATEL technique. Comput Ind Eng 2012;62(3):819-828. [CrossRef]
  58. Cheng YM. Towards an understanding of the factors affecting m-learning acceptance: roles of technological characteristics and compatibility. Asia Pac Manag Rev 2015;20(3):109-119. [CrossRef]
  59. Harte R, Glynn L, Rodríguez-Molinero A, Baker PM, Scharf T, Quinlan LR, et al. A human-centered design methodology to enhance the usability, human factors, and user experience of connected health systems: a three-phase methodology. JMIR Hum Factors 2017;4(1):e8 [FREE Full text] [CrossRef] [Medline]
  60. Norman GR, Monteiro SD, Sherbino J, Ilgen JS, Schmidt HG, Mamede S. The causes of errors in clinical reasoning: cognitive biases, knowledge deficits, and dual process thinking. Acad Med 2017;92(1):23-30. [CrossRef] [Medline]
  61. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q 2003;27(3):425-478. [CrossRef]
  62. Kahneman D, Tversky A. Prospect theory: an analysis of decision under risk. Econometrica 1979;47(2):263-292. [CrossRef]
  63. Markus AF, Kors JA, Rijnbeek PR. The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies. J Biomed Inform 2021;113:103655 [FREE Full text] [CrossRef] [Medline]
  64. Sousa VE, Lopez KD, Febretti A, Stifter J, Yao Y, Johnson A, et al. Use of simulation to study nurses' acceptance and nonacceptance of clinical decision support suggestions. Comput Inform Nurs 2015;33(10):465-472 [FREE Full text] [CrossRef] [Medline]
  65. Mcknight DH, Chervany NL. The meanings of trust. Minneapolis, MN: Carlson School of Management, University of Minnesota; 1996.
  66. Elwyn G, Stiel M, Durand MA, Boivin J. The design of patient decision support interventions: addressing the theory-practice gap. J Eval Clin Pract 2011;17(4):565-574. [CrossRef] [Medline]
  67. Nickerson RS. Confirmation bias: a ubiquitous phenomenon in many guises. Rev Gen Psychol 1998;2(2):175-220. [CrossRef]
  68. Mosier KL, Skitka LJ, Heers S, Burdick M. Automation bias: decision making and performance in high-tech cockpits. Int J Aviat Psychol 1997;8(1):47-63. [CrossRef] [Medline]
  69. Jun S, Plint AC, Campbell SM, Curtis S, Sabir K, Newton AS. Point-of-care cognitive support technology in emergency departments: a scoping review of technology acceptance by clinicians. Acad Emerg Med 2018;25(5):494-507 [FREE Full text] [CrossRef] [Medline]
  70. Ye T, Xue J, He M, Gu J, Lin H, Xu B, et al. Psychosocial factors affecting artificial intelligence adoption in health care in China: cross-sectional study. J Med Internet Res 2019;21(10):e14316 [FREE Full text] [CrossRef] [Medline]


AI: artificial intelligence
CDSS: clinical decision support system
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses


Edited by A Kushniruk; submitted 09.03.21; peer-reviewed by O Asan, D Tao, S Lee; comments to author 13.04.21; revised version received 02.06.21; accepted 07.02.22; published 24.03.22

Copyright

©Michael Knop, Sebastian Weber, Marius Mueller, Bjoern Niehaves. Originally published in JMIR Human Factors (https://humanfactors.jmir.org), 24.03.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.