Published in Vol 11 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/47031.
Trust in and Acceptance of Artificial Intelligence Applications in Medicine: Mixed Methods Study


Original Paper

1Panaxea bv, Den Bosch, Netherlands

2Vrije Universiteit Amsterdam, Amsterdam, Netherlands

3Universitätsklinikum Würzburg, Würzburg, Germany

4Ortec bv, Zoetermeer, Netherlands

5Fraunhofer Institute for Production Technology, Aachen, Germany

Corresponding Author:

Anam Ahmed, MSc

Panaxea bv

Pettelaarpark 84

Den Bosch, 5216 PP

Netherlands

Phone: 31 639421854

Email: anam.ahmed@panaxea.eu


Background: Artificial intelligence (AI)–powered technologies are being increasingly used in almost all fields, including medicine. However, for medical AI applications to be implemented successfully, ensuring trust in and acceptance of such technologies is crucial for their timely and widespread adoption worldwide. Although AI applications in medicine provide advantages to the current health care system, there are also various associated challenges regarding, for instance, data privacy, accountability, and equity and fairness, which could hinder the implementation of medical AI applications.

Objective: The aim of this study was to identify factors related to trust in and acceptance of novel AI-powered medical technologies and to assess the relevance of those factors among relevant stakeholders.

Methods: This study used a mixed methods design. First, a rapid review of the existing literature was conducted, aiming to identify various factors related to trust in and acceptance of novel AI applications in medicine. Next, an electronic survey including the rapid review–derived factors was disseminated among key stakeholder groups. Participants (N=22) were asked to assess on a 5-point Likert scale (1=irrelevant to 5=relevant) to what extent they thought the various factors (N=19) were relevant to trust in and acceptance of novel AI applications in medicine.

Results: The rapid review (N=32 papers) yielded 110 factors related to trust and 77 factors related to acceptance toward AI technology in medicine. Closely related factors were assigned to 1 of the 19 overarching umbrella factors, which were further grouped into 4 categories: human-related (eg, the type of institution AI professionals originate from), technology-related (eg, the explainability and transparency of AI application processes and outcomes), ethical and legal (eg, data use transparency), and additional factors (eg, AI applications being environment friendly). The categorized 19 umbrella factors were presented as survey statements, which were evaluated by relevant stakeholders. Survey participants (N=22) represented researchers (n=18, 82%), technology providers (n=5, 23%), hospital staff (n=3, 14%), and policy makers (n=3, 14%). Of the 19 factors, 16 (84%) human-related, technology-related, ethical and legal, and additional factors were considered to be of high relevance to trust in and acceptance of novel AI applications in medicine. The patient's gender, age, and education level were found to be of low relevance (3/19, 16%).

Conclusions: The results of this study could help the implementers of medical AI applications to understand what drives trust and acceptance toward AI-powered technologies among key stakeholders in medicine. Consequently, this would allow the implementers to identify strategies that facilitate trust in and acceptance of medical AI applications among key stakeholders and potential users.

JMIR Hum Factors 2024;11:e47031

doi:10.2196/47031

Introduction

Artificial Intelligence

Artificial intelligence (AI) is commonly defined as a computer system that uses statistical models, diverse algorithms, and self-modifying systems to make predictions and decisions based on its own aggregated experience. It can therefore perform tasks that usually require human intelligence, sometimes even surpassing human performance [1,2]. AI has been increasingly integrated into the health care sector, where it helps with administrative workflows, diagnostic image analysis, robotic surgery, and clinical decision-making. Consequently, medical AI applications allow, among other things, earlier disease detection, patient-tailored treatments, and more efficient follow-ups, which should drive health care costs down upon implementation [3]. Although medical AI applications provide various advantages to the current health care system, such as increased efficiency and improved workflows [4], there are also various challenges associated with AI implementation. For instance, a large share of potential users has concerns over privacy issues [5,6]. Equity and fairness are other important concerns, since AI technology risks perpetuating biases present in the data sets it adopts [5,7,8]. Further, the implementation of AI in medical practice raises the question of accountability, since it is currently unclear whether technology developers, hospitals, or regulators should be responsible for mistakes or undesirable outcomes arising from the use of an AI application.

Trust and Acceptance

To ensure the successful implementation of medical AI applications, it is essential to build trust in and acceptance of AI technology among its users [9,10]. In this study, a model of key drivers of trust in and acceptance of AI systems was used [11]. According to this model, trust is influenced by 4 drivers: current safeguards, the job impact of AI, familiarity with AI, and AI uncertainty. Current safeguards indicate the belief that current regulations and laws are adequate for ensuring the safety of AI and the protection of people who use it. Job impact refers to the belief that more jobs will be generated than eliminated due to AI implementation. Familiarity with AI is the level of understanding of how AI technology works and how AI applications are used. These 3 drivers have a positive influence on trust, with current safeguards being its strongest driver. The fourth driver, AI uncertainty, impacts trust in a negative way: it implies the belief that the impact of AI on society is unpredictable and that the technology is still not fully explored. Overall, these drivers influence the extent to which people trust the AI system and believe it to be trustworthy. Trust, in turn, is a large contributor to the level of acceptance, which is the extent to which people accept or approve of AI and are willing to use it without resistance [11]. In the scientific literature, trust can be defined in different ways. In this study, we used literature-derived definitions of trust and acceptance in the context of AI implementation, namely:

  • Trust is the belief of an individual that an AI application will do what it promises [12,13].
  • Acceptance is the willingness of an individual to use the AI application in medicine [14].

Thus, it can be argued that acceptance of an AI application depends on the trust people have in this technology [11,15,16]. At the same time, people often accept and use technologies without necessarily trusting them [17]. Therefore, it is important to consider the 2 concepts both separately and together.

Overall, widespread trust in and acceptance of an AI application are crucial for the successful introduction and implementation of the technology. Failure to ensure trust in and acceptance of AI technology would pose the risk of "stifling innovation" and causing unnecessary "opportunity costs" [18]. A lack of trust in AI applications in medicine impedes their adoption in health care, and inadequate public assurance and insufficient attention to users' concerns further compound this problem. In addition, the anticipated benefits of AI-based innovations can coexist with significant acceptance barriers [15,18-21].

Investigating which factors contribute to trust in and acceptance of AI technology in medicine would help us understand how to make the implementation and regulatory approval of AI-powered advanced therapy manufacturing systems as efficient as possible. This can be achieved by collecting insights into stakeholders' perspectives with regard to trust and acceptance toward medical AI applications [2,22]. Factors contributing to trust and acceptance toward medical AI applications may be weighted differently by stakeholder groups with distinct roles in AI.

Study Objectives

Since AI applications are still relatively new, users and providers are hesitant to trust and accept this new technology without reservation. For the future implementation of AI applications in treatment centers, it is essential that stakeholders (eg, clinicians, researchers, hospital staff) accept and trust the innovative AI-based manufacturing platform. Therefore, the aim of this study was first to identify the factors related to trust in and acceptance of AI technology in medicine and second to assess the relevance of those factors among relevant stakeholders in medicine.


Methods

Study Setting

This study is part of the European Union's (EU) Horizon 2020 project AIDPATH (AI-driven Decentralized Production for Advanced Therapies in the Hospital; grant agreement number 101016909) [22,23]. AIDPATH aims to develop a state-of-the-art, AI-driven, automated chimeric antigen receptor T cell (CAR-T) manufacturing platform at the point of care in hospitals as a treatment for acute leukemia and lymphoma. In CAR-T therapy, the patient's own T cells are removed, genetically modified, and reinfused into the patient in order to find and eliminate tumor cells. Current production is characterized by laborious manual process steps, complex logistics, and a lack of process understanding. This results in long delivery times (up to 21 days) and high costs (approximately €320,000, or US $347,890, per treatment) [24,25]. For this reason, AIDPATH is developing a system to fully automate the manufacturing process, from the provision of patient cells to the injection, directly in the hospital. An important building block for effective and equitable manufacturing is AI. AI can provide essential process insights into the cells' characteristics and behavior, which offers a significant benefit for adaptive control of the whole process and the design of personalized process protocols. Furthermore, AI can assist cost-effective platform operation in a smart manufacturing hospital by improving manufacturing schedules and resource management [26]. In general, successful implementation of AIDPATH would serve as an example of an effective AI technology that automates the production and delivery of advanced therapy medicinal products (ATMPs). Furthermore, the AI-powered technology can form the basis for a deployable platform for further pilot trials in multiple hospitals and would create a model innovation system for smart manufacturing hospitals [2,22].

To meet the study objectives, a rapid literature review was conducted, followed by a survey.

Rapid Literature Review

A rapid literature review of peer- and non-peer-reviewed publications was conducted to identify factors related to trust in and acceptance of AI applications used in medicine. As an alternative to a systematic review, a rapid review allows for the accelerated synthesis of up-to-date evidence while efficiently summarizing the latest findings in recent health care research [27]. To be included, peer- and non-peer-reviewed literature had to be published in English between 2012 and 2022. Data on attitudes toward AI in relation to prognosis, diagnosis, treatment, and care were included. The search was performed in PubMed/MEDLINE with the following search syntax: ((trust) OR (acceptance) OR (attitude) OR (perspective) OR (perception)) AND ((AI) OR (artificial intelligence) OR (machine learning) OR (deep learning)) AND (((prognosis) OR (diagnosis) OR (treatment) OR (care)) OR ((medic*) OR (clinic*) OR (hospital) OR (smart hospital) OR (health care)) AND ((survey) OR (questionnaire) OR (interview)). Only survey-, questionnaire-, or interview-based research was specified in the search terms because of its direct relevance to our research objectives.
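For readers who wish to rerun or adapt this search, the query can also be executed programmatically via the NCBI E-utilities. The following is a minimal sketch, assuming Biopython is installed; the query string is the one reported above (with its parentheses balanced so it parses cleanly), the date window mirrors the 2012-2022 inclusion criterion, and the contact email is a hypothetical placeholder, not one used in this study.

```python
# Minimal sketch: run the reported PubMed query via NCBI E-utilities (Biopython).
from Bio import Entrez

Entrez.email = "researcher@example.org"  # hypothetical; NCBI requires a contact address

query = (
    "((trust) OR (acceptance) OR (attitude) OR (perspective) OR (perception)) "
    "AND ((AI) OR (artificial intelligence) OR (machine learning) OR (deep learning)) "
    "AND ((((prognosis) OR (diagnosis) OR (treatment) OR (care)) "
    "OR ((medic*) OR (clinic*) OR (hospital) OR (smart hospital) OR (health care))) "
    "AND ((survey) OR (questionnaire) OR (interview)))"
)

# esearch returns matching PubMed IDs; mindate/maxdate restrict the publication window
handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2012", maxdate="2022", retmax=500)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first 10 PubMed IDs for screening
```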

In the non-peer-reviewed literature search, similar search terms and the same publication time frame were used, and the first 10 pages of Google Search results were examined to identify other relevant papers and reports by (non)governmental and research organizations. This allowed the study findings to be applicable to a broad range of medical AI applications. Papers were screened, and data were extracted, by 2 authors (DS and AA). The selected literature was then analyzed independently by the same 2 authors to identify key trends and explanatory factors related to trust and acceptance toward medical AI applications. The factors were grouped into 4 categories: human-related, technology-related, legal and ethical, and additional factors. These factor groups formed the basis of the survey designed to investigate factor relevance.

Survey

The survey was reported in accordance with the CHERRIES (Checklist for Reporting Results of Internet E-Surveys) guidelines [28]. The survey, administered in English, assessed the relevance of the factors related to trust in and acceptance of novel AI applications in medicine. It started with an introduction to AI applications in medicine and AIDPATH, followed by 7 general questions on each participant's background, including gender, age, the country they worked in, years of experience, the stakeholder group they belonged to, their familiarity with AI applications in medicine, and their general view on AI. In the last question, the following distinction was made between the answer options: "I embrace AI" meant welcoming and using AI as a constituent part of one's work or life; "I approve of AI" implied that the participant agreed with the use of AI in their work or life but did not use it themselves; and "I accept AI" referred to acknowledging the use of AI in work or life without being ready to fully approve of it.

In the core section of the survey, the definitions of trust and acceptance were provided as a reference for participants. The core part consisted of 2 identical lists of 19 factors related to trust and acceptance toward AI applications in medicine. Each factor was categorized as human-related, technology-related, legal and ethical, or additional. The human-related factors linked to AI professionals assessed the relevance of the type of organization the AI professionals were affiliated with and the purpose to innovate with a specific AI application. With respect to health care professionals, the factors were related to their knowledge of AI applications and their attitude toward AI application usage in medicine. In relation to patients, the relevance of the following factors was assessed: general knowledge of AI applications in medicine, the attitude toward AI application usage in medicine, and the patient's age, gender, and level of education. Furthermore, participants were asked to evaluate the relevance of transparency between all parties involved in AI application use. The technology-related factors concerned the performance of AI applications in medicine, the possibility of their integration into existing clinical workflows, a clear balance of the risks and benefits of the AI applications, and the explainability and transparency of processes and outcomes. The legal and ethical factors were related to the adequacy of the regulations and governance of AI applications in medicine, data use transparency, and clear accountability and responsibility for an AI application. The additional factors concerned the environmental sustainability of AI applications and AI's impact on job availability. For each factor, participants could indicate its relevance to trust in and acceptance of AI applications from their stakeholder perspective on a Likert scale of 1-5, where 1 stood for "not relevant," 3 for "not irrelevant, nor relevant," and 5 for "relevant." Throughout the survey, "relevant" meant being highly significant for ensuring trust in or acceptance of AI applications, while "irrelevant" meant no significance. An N/A (not applicable) option was also available for each factor. Open questions at the end of both sections allowed participants to suggest other relevant factors related to trust in or acceptance of AI applications that were not mentioned in the survey. Furthermore, participants were invited to suggest any other factors, different from trust, deemed important for acceptance of AI applications in medicine.

Sampling

Using the convenience sampling method [29], AIDPATH Consortium members were requested to invite stakeholders in their network but outside the AIDPATH Consortium to fill in the survey on the SurveyMonkey platform. The survey was distributed by email to members of relevant stakeholder groups to capture their professional perspectives (eg, clinicians, scientists, and policy makers). Data were collected from April to May 2022 and analyzed using Microsoft Excel.

Data Collection and Analysis

Participants rated the relevance of each factor from 1 (irrelevant) to 5 (relevant), and each response was assigned the corresponding weight from 1 to 5. The mean score of each factor was then calculated by averaging the response values for that question. To visualize the survey responses and compare the mean scores for each factor included in the survey, a spider diagram was charted. This provided an overview of the factors' relevance and their relative importance in influencing both trust and acceptance toward AI applications in medicine. In addition, a scatter plot was created to obtain an overview of the interrelationship between the relevance to trust (x axis) in and acceptance (y axis) of AI applications in medicine. The plot allowed us to identify the degree of relevance of each factor in relation to both trust and acceptance. To classify the factors based on their relevance, score ranges were established: factors with mean scores from 1 to 3 were considered to be of low relevance, while factors with mean scores from 4 to 5 were deemed of high relevance. The open-question responses were considered when interpreting the numerical data.
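As a concrete illustration of this scoring step, the snippet below is a minimal sketch, assuming the survey export is a CSV file with one row per participant and one numeric column per factor, where N/A responses are left empty; the file name and column layout are hypothetical (the study itself used Microsoft Excel for the analysis).

```python
# Minimal sketch: compute mean Likert scores per factor and classify relevance.
import pandas as pd

# Hypothetical export: rows = participants, columns = the 19 factors, values = 1-5
responses = pd.read_csv("survey_trust_responses.csv")

# Empty (N/A) cells become NaN and are excluded from the mean by default
mean_scores = responses.mean(numeric_only=True).round(2)

# Score ranges as stated above: means of 4-5 -> high relevance, 1-3 -> low
relevance = mean_scores.apply(lambda m: "high" if m >= 4 else "low")

summary = pd.DataFrame({"mean_score": mean_scores, "relevance": relevance})
print(summary.sort_values("mean_score", ascending=False))
```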

Ethical Considerations

Under Dutch law, no ethical approval was required according to Article 1b of the Dutch Medical Research in Human Subjects Act [30]. However, all participants were informed about the study objectives, their verbal consent was obtained, and all responses were recorded and processed anonymously. Participants were informed of their right to withdraw from the study at any time without any consequences. They were not financially compensated.


Results

Rapid Literature Review

The literature search (Figure 1) yielded 301 hits in the PubMed database and 105 hits through gray literature search and snowballing. After screening titles and abstracts, 284 (70%) records were excluded. After full-text screening, 90 (73.8%) records were excluded primarily due to the absence of concepts of trust or acceptance and a lack of factors related to trust or acceptance in the main text or data-containing figures. As a result, 32 (26.2%) papers and reports [7,9-12,15,16,19,21,31-53] were included in the data analysis.

Overall, the rapid review identified a total of 110 factors related to trust and 77 factors related to acceptance toward medical AI technology. The full list of factors identified through the rapid review, with the corresponding studies, can be found in Multimedia Appendix 1. Tables 1-4 show all factors from the rapid review, each with the frequency of its appearance in the literature and the corresponding overarching umbrella factor. Some factors from a single study appear both in Tables 1 and 2 on trust and in Tables 3 and 4 on acceptance, or more than once within the same category (eg, in the health care professionals and patients subsections of the human-related factors section).

The most frequently reported human-related factors related to trust (Tables 1 and 2) in medical AI applications were knowledge and understanding of AI by health care professionals and knowledge and education of AI among patients. In terms of technology-related factors, the accuracy, transparency, reliability, safety, and explainability of medical AI applications and their functioning appeared most often in the literature. Regarding legal and ethical factors, the most frequently occurring factors included fairness and equity of medical AI technology and the privacy and security of personal data handled by the AI systems.

The most frequently presented human-related factors related to acceptance (Tables 3 and 4) of medical AI technology were the perceived usefulness and provision of better medical services by the AI technology. Regarding technology-related factors linked to acceptance, performance expectancy, design and output quality, and transparency were stated in the literature most often. A wide range of legal and ethical factors were mentioned in the literature, including adequate regulation of medical AI technology, protection and security of patients' data, and the allocation of accountability and responsibility for the (mal)functioning of an AI application.

There were additional factors related to both trust in and acceptance of medical AI technology (Tables 1-4). These included the replacement of doctors by machines that lack a human touch and moral support, labor market implications, and environmental sustainability. Three studies also highlighted that acceptance of a medical AI application is directly related to trust in the AI application. Overall, there were fewer factors related to acceptance than to trust, whereas most of the overarching umbrella factors were represented for both concepts. Therefore, an identical list of umbrella factors allocated within the 4 categories (human-related, technology-related, legal and ethical, and additional factors) was used in the survey for investigating the relevance of factors to both trust in and acceptance of AI applications in medicine.

Figure 1. PRISMA flow diagram of the rapid review literature screening. AI: artificial intelligence.
Table 1. Human-related factors related to trust (N=110) in medical AI^a applications (22/32, 68.8%, studies).

Factor category and factors from the rapid review | Umbrella factors used in the survey

AI professionals
AI company/provider (n=2, 9.1%) | Type of institution/organization of AI professionals (eg, university, technology company, commercial organization)
AI role (n=1, 4.5%); perceived helpfulness (n=1, 4.5%) | The purpose to innovate with a specific AI application in medicine (eg, financial vs societal)

Health care professionals
Knowledge and understanding of AI (n=6, 27.3%); education (n=3, 13.6%) | Knowledge of AI applications in medicine (eg, by means of training and education)
Expectation of AI (n=1, 4.5%); perceived actionability (ie, clear recommendation for action; n=1, 4.5%); user's social network (n=1, 4.5%); user's media consumption (n=1, 4.5%) | Attitude toward AI application usage in medicine (eg, agreeableness, openness, conscientiousness, engagement)

Patients informed about AI application usage in the hospital
Knowledge/education about AI (n=5, 22.7%); awareness of AI (n=2, 9.1%) | General knowledge of AI applications in medicine
Openness (to AI health care technologies and to judgments of potential benefits and harms; n=1, 4.5%); perceived benefit and lower concern (n=1, 4.5%); user's social network (n=1, 4.5%); user's media consumption (n=1, 4.5%) | Attitude toward AI application usage in medicine (eg, agreeableness, openness, conscientiousness)
Gender (n=2, 9.1%); age (n=1, 4.5%); type of user (n=1, 4.5%) | Age, gender, level of education

All parties
Clinicians and patients interaction during AI integration (n=1, 4.5%); human agency and oversight (n=1, 4.5%) | Transparency between all involved parties (AI professionals, health care professionals, patients)

^a AI: artificial intelligence.

Table 2. Other factors related to trust (N=110) in medical AI^a applications (22/32, 68.8%, studies).

Factor category and factors from the rapid review | Umbrella factors used in the survey

Technology-related factors
Accuracy (n=7, 31.8%); reliability (n=5, 22.7%); safety (n=4, 18.2%); design and output quality (n=2, 9.1%); performance expectancy (n=2, 9.1%); ability (n=1, 4.5%); perceived functionality (n=1, 4.5%); self-efficacy (n=1, 4.5%); tool itself (n=1, 4.5%) | Performance of AI applications in medicine (reproducibility of outcomes, accuracy)
Auditability (n=1, 4.5%); customizability (n=1, 4.5%); understandability (n=1, 4.5%); ease of integration into clinical workflows (n=1, 4.5%); convenience of use (n=1, 4.5%); usability (n=1, 4.5%); (over)alerting and excessive false-positive rate (n=1, 4.5%) | Possibility of integration of AI applications into existing clinical workflows
Risk and impact mitigation (n=1, 4.5%) | Clear balance of risks and benefits of the AI application
Transparency (n=6, 27.3%); explainability (n=5, 22.7%); evidence strength (n=2, 9.1%); benevolence (n=2, 9.1%); complexity (n=2, 9.1%); interpretability (n=2, 9.1%); integrity (n=1, 4.5%); predictability (n=1, 4.5%); trialability (n=1, 4.5%); trustworthiness (n=1, 4.5%) | Explainability and transparency of the processes and outcomes

Legal and ethical factors
Fairness and equity (n=8, 36.4%); adequate regulations, legislation, and governance (n=3, 13.6%); ethical/legal implications (n=1, 4.5%) | Adequacy of the regulations and governance of AI applications in medicine
Personal data privacy and security (n=8, 36.4%); data used to train AI/cognitive bias (n=2, 9.1%); data sensitivity (n=1, 4.5%); respect and preservation of human dignity (n=1, 4.5%) | Data use transparency
Accountability (n=3, 13.6%); power-control balance (n=1, 4.5%) | Clear accountability and responsibility of the AI application (machine vs human responsibility)

Additional factors
Environmental sustainability (n=1, 4.5%) | Environment-friendly AI application
Replacement of doctor/lack of human touch and moral support when evaluated by AI alone (n=1, 4.5%); labor market implications (n=1, 4.5%) | Impact on job availability (machines replacing humans)

^a AI: artificial intelligence.

Table 3. Human-related factors related to acceptance (N=77) of medical AI^a applications (14/32, 43.8%, studies).

Factor category and factors from the rapid review | Umbrella factors used in the survey

AI professionals
AI company/provider (n=1, 7.1%); brand impact (n=1, 7.1%) | Type of institution/organization of AI professionals (eg, university, technology company, commercial organization)
Perceived usefulness (n=3, 21.4%); better medical services/understanding of disease (n=3, 21.4%); improve the quality of people's lives (n=2, 14.3%); medical costs (n=2, 14.3%); AI role (eg, saving patients' time; n=1, 7.1%); miniaturization of hardware (n=1, 7.1%) | Purpose to innovate with a specific AI application in medicine (eg, financial vs societal)

Health care professionals
Knowledge and understanding of AI (n=1, 7.1%) | Knowledge of AI applications in medicine (eg, by means of training and education)
Behavioral intention to use (n=2, 14.3%); effort expectancy (n=2, 14.3%); perceived ease of use (n=2, 14.3%); perceived usefulness (n=2, 14.3%); intrinsic motivation (n=1, 7.1%); interest in AI (n=1, 7.1%); professional identity (n=1, 7.1%); concerns about benefit to patient care (n=1, 7.1%); general impression of AI (n=1, 7.1%) | Attitude toward AI application usage in medicine (eg, agreeableness, openness, conscientiousness, engagement)

Patients informed about AI application usage in the hospital
Knowledge/education about AI (n=1, 7.1%); awareness of AI (n=1, 7.1%) | General knowledge of AI applications in medicine
Behavioral intention to use (n=2, 14.3%); general impression (n=1, 7.1%); interest in topic (n=1, 7.1%) | Attitude toward AI application usage in medicine (eg, agreeableness, openness, conscientiousness)
Age (n=1, 7.1%) | Age

All parties
Expectations of others (n=2, 14.3%) | Transparency between all involved parties (AI professionals, health care professionals, patients)

^a AI: artificial intelligence.

Table 4. Other factors related to acceptance (N=77) of medical AI^a applications (14/32, 43.8%, studies).

Factor category and factors from the rapid review | Umbrella factors used in the survey

Technology-related factors
Performance expectancy (n=4, 28.6%); design and output quality (n=4, 28.6%); accuracy (n=2, 14.3%); efficiency (n=1, 7.1%) | Performance of AI applications in medicine (reproducibility of outcomes, accuracy)
Perceived ease of use (n=2, 14.3%); user-friendliness (n=2, 14.3%); actual system use (n=1, 7.1%); compatibility (n=1, 7.1%); facilitating conditions (n=1, 7.1%) | Possibility of integration of AI applications into existing clinical workflows
Perceived risk (n=1, 7.1%) | Clear balance of risks and benefits of the AI application
Transparency (n=3, 21.4%); explainability (n=2, 14.3%); evidence strength (n=1, 7.1%); trustworthiness (n=1, 7.1%) | Explainability and transparency of the processes and outcomes

Legal and ethical factors
Adequate regulations, legislation, and governance (n=2, 14.3%); ethical risks (n=1, 7.1%); political support (n=1, 7.1%) | Adequacy of the regulations and governance of AI applications in medicine
Data protection/security (n=2, 14.3%); patients' consent to the continuous collection and processing of data (n=1, 7.1%) | Data use transparency
Accountability and responsibility (n=2, 14.3%); tort liability (n=1, 7.1%) | Clear accountability and responsibility of the AI application (machine vs human responsibility)

Additional factors
Replacement of doctor/lack of human touch and moral support when evaluated by AI alone (n=1, 7.1%) | Impact on job availability (machines replacing humans)
Trust in AI applications (n=3, 21.4%) | Acceptance emerging from trust

^a AI: artificial intelligence.

Survey

Participants

A total of 22 respondents participated in the survey, of whom 18 (82%) completed the questions on trust and 15 (68%) completed the questions on acceptance. No reasons were provided for not completing the survey. Table 5 shows the characteristics of the survey participants, most of whom came from European countries (n=21, 95%), were aged 40 to 60 years or older, and had 0-10 or 21-30 years of professional experience.

Participants were mainly slightly (n=7, 32%) or moderately (n=8, 36%) familiar with AI-based devices used for clinical purposes (Figure 2). When thinking about AI, 9 (41%) participants indicated that the statement "I accept AI" best represented their view, followed by "I approve of AI" (n=6, 27%) and "I embrace AI" (n=5, 23%); see Figure 3.

Table 5. Characteristics of the participants (N=22).

Characteristic and type of participant | Participants, n (%)

Stakeholder group^a
Researchers | 18 (82)
Technology providers | 5 (23)
Hospital staff | 3 (14)
Policy makers | 3 (14)

Gender
Female | 8 (36)
Male | 13 (59)
Prefer not to say | 1 (5)

Age (years)
≤30 | 2 (9)
31-39 | 3 (14)
40-49 | 6 (27)
50-59 | 5 (23)
≥60 | 6 (27)

Country of work
Netherlands | 11 (50)
Germany | 3 (14)
Ireland | 2 (9)
Spain | 2 (9)
France | 1 (5)
Hungary | 1 (5)
India | 1 (5)
Italy | 1 (5)

Years of professional experience
0-10 | 6 (27)
11-20 | 4 (18)
21-30 | 7 (32)
31-40 | 5 (23)

^a Participants sometimes represented more than 1 stakeholder group.

Figure 2. Familiarity with AI-based devices used for clinical purposes (N=22). AI: artificial intelligence.
Figure 3. Statement best representing participants’ view when thinking about AI (N=22). AI: artificial intelligence.

Relevance of Factors for Trust in and Acceptance of AI

In Table 6, the mean scores per factor for its relevance to trust and acceptance are shown. Figure 4 demonstrates a spider diagram with the 19 summarized statements and the corresponding mean scores of relevance to trust in and acceptance of AI applications in medicine. The degrees of relevance of the factors related to trust and to acceptance closely followed each other for all but 1 (5.3%) of the 19 factors. Only the type of AI organization was slightly more relevant to trust than to acceptance toward AI applications in medicine. In Figure 5, a scatter plot displays the combined relevance of the factors related to trust (x axis) and acceptance (y axis) toward medical AI applications. Of the 19 factors included in the survey, 3 (16%) were found to have, on average, low relevance, while the other 16 (84%) had high relevance. There were no factors relevant to acceptance and irrelevant to trust (upper-left section in the plot) and vice versa (bottom-right section in the plot).

Table 6. Mean (SD) factor relevance to trust and acceptance (N=22).

Factor | Trust | Acceptance
Type of AI^a organization | 4.72 (0.75) | 4.27 (0.88)
Purpose to innovate with AI | 4.33 (0.84) | 4.47 (0.64)
Clinicians' knowledge about AI | 4.50 (0.51) | 4.73 (0.46)
Clinicians' attitude toward AI | 4.50 (0.51) | 4.47 (0.64)
Patients' knowledge of AI | 4.17 (0.62) | 4.20 (0.68)
Patients' attitude toward AI | 4.28 (0.57) | 4.47 (0.64)
Patients' age | 3.17 (1.04) | 3.47 (1.19)
Patients' gender | 2.61 (1.14) | 2.67 (1.05)
Patients' education level | 3.50 (0.99) | 3.53 (1.19)
Transparency between all parties | 4.61 (0.50) | 4.47 (0.64)
Performance of AI | 4.83 (0.38) | 4.67 (0.62)
Possibility of AI integration into existing workflows | 4.56 (0.62) | 4.53 (0.83)
Clear balance of AI risks and benefits | 4.67 (0.49) | 4.60 (0.63)
Explainability and transparency of AI processes | 4.78 (0.43) | 4.60 (0.63)
Adequacy of AI regulations | 4.72 (0.57) | 4.60 (0.83)
Data use transparency | 4.61 (0.50) | 4.67 (0.49)
Clear accountability and responsibility of AI | 4.61 (0.61) | 4.80 (0.41)
Environmental friendliness of AI | 3.83 (0.79) | 3.87 (0.92)
Impact on job availability | 3.78 (1.11) | 4.07 (0.88)

^a AI: artificial intelligence.

Figure 4. Mean scores of factors’ relevance to trust in and acceptance of AI applications in medicine (N=19). Score=1 means irrelevant; score=3 means not irrelevant, nor relevant; and score=5 means relevant. AI: artificial intelligence.
Figure 5. Overview of the relevance of factors related to trust in and acceptance of novel AI applications in medicine (1=not relevant, 5=relevant). AI: artificial intelligence.
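For readers who want to recreate a plot in the style of Figure 5, the following is an illustrative matplotlib sketch (not the authors' original plotting code) using a subset of the mean scores reported in Table 6:

```python
# Illustrative sketch: trust vs acceptance relevance scatter, Figure 5 style.
import matplotlib.pyplot as plt

# (mean relevance to trust, mean relevance to acceptance), from Table 6
factors = {
    "Performance of AI": (4.83, 4.67),
    "Explainability and transparency of AI processes": (4.78, 4.60),
    "Adequacy of AI regulations": (4.72, 4.60),
    "Patients' education level": (3.50, 3.53),
    "Patients' age": (3.17, 3.47),
    "Patients' gender": (2.61, 2.67),
}

fig, ax = plt.subplots(figsize=(6, 6))
for name, (trust, acceptance) in factors.items():
    ax.scatter(trust, acceptance)
    ax.annotate(name, (trust, acceptance), fontsize=8,
                xytext=(4, 4), textcoords="offset points")

# Guides at the scale midpoint separate the lower- and higher-relevance regions
ax.axvline(3, linestyle="--", linewidth=0.8, color="gray")
ax.axhline(3, linestyle="--", linewidth=0.8, color="gray")
ax.set_xlim(1, 5)
ax.set_ylim(1, 5)
ax.set_xlabel("Relevance to trust (mean score)")
ax.set_ylabel("Relevance to acceptance (mean score)")
plt.tight_layout()
plt.show()
```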
Factors of Low Relevance

With regard to patients informed about AI application usage in the hospital, participants deemed the patient’s gender, age, and educational level to be of low relevance to trust in and acceptance of novel AI applications in medicine.

Factors of High Relevance

The majority of factors were deemed highly relevant to trust in and acceptance of novel AI applications in medicine by participants. Regarding AI professionals, it was observed that the type of institution or organization where AI professionals originated from (eg, university, technology company, commercial organization) and the purpose to innovate with a specific AI application in medicine (eg, financial, societal, or clinical purpose) were considered relevant. Participants reported that the involvement of health care professionals having knowledge of the AI application (eg, by means of training and education) is highly relevant to trust and acceptance. The health care professionals’ attitude toward AI application usage in medicine, comprising their agreeableness, openness, conscientiousness, and engagement, was found to be equally important. Likewise, the patients’ general knowledge of and attitude toward AI application usage in medicine were found to be relevant. The transparency between all involved parties (AI professionals, health care professionals, and patients) was also deemed highly relevant. Technology-related factors were found to be highly relevant, too, in particular the performance of AI applications in medicine (eg, reproducibility and accuracy of outcomes), the possibility of integration of the AI applications into existing clinical workflows, having a clear balance of risks and benefits of the AI applications, and the explainability and transparency of the processes and outcomes. Legal and ethical factors were also considered of high relevance and concerned the adequacy of the regulations and governance of AI applications in medicine, data use transparency, and clear accountability and responsibility of the AI applications (machine vs human responsibility). Additional factors, such as AI applications being environment friendly and the impact of medical AI on job availability (eg, machines replacing human beings), were viewed as factors of high relevance.

Other Factors

Participants were able to share other factors that were not mentioned in the survey questions. Factors related to trust included solidarity and understanding the bias, as well as interdomain knowledge of AI in software development, data science, and medicine. Other factors related to acceptance were the extent to which alternatives to AI applications are available, the length of experience, transparency about limitations, reproducibility, risk evaluation, resources, and the fear of using an AI application (ie, fear of making the wrong decision or fear of losing control).


Discussion

Principal Findings

This study aimed to identify factors related to trust and acceptance toward medical AI applications by means of a rapid review and to assess their relevance by conducting a survey. Through the rapid review, 19 key factors related to trust in and acceptance of AI-powered medical technologies were identified and subsequently grouped into 4 categories. Our survey results highlight that of all examined factors, 84% (16/19) were considered highly relevant to trust in and acceptance of novel AI applications in medicine. Only the patient’s gender, age, and education level (3/19, 16%) were deemed to be of low relevance by participants.

Comparison With Prior Work

Previous studies have reported that trust in technology is mainly determined by human characteristics [54], technology-related factors [55], and environment-related factors [56], which is in line with the findings of our survey. According to Tran et al [57], who investigated patients' perceived benefits and risks of using digital and AI technology in health care, two important factors to consider are that the new technologies require an overhaul of the current health care system, as human care is partly replaced by machines, and that health care professionals must become sufficiently equipped with increasing knowledge of AI technology. This highlights the importance of several survey factors, including the possibility of AI integration into existing clinical workflows. Therefore, AI professionals setting features such as understandability, usability, and user-friendliness (factors that frequently appeared in the rapid review) as key goals in the development of novel AI applications would increase the chances of successful integration of AI technology into health care systems. Tran et al [57] also highlighted the increasing importance of data use transparency toward patients and the acute need for clear accountability and responsibility (machine vs human responsibility) concerning the new technology, which goes hand in hand with the findings from the rapid review and the survey [57]. Patient data handling must be organized in accordance with the existing data protection regulations in the respective countries, with additional precautionary measures due to the sensitive nature of such medical data [57].

Shin et al [58] demonstrated that the explainability of AI plays a big role in user trust and attitude toward AI. Explainability, along with transparency, was also found to be highly relevant in our study, especially in relation to AI application processes and outcomes. In addition, Vourgidis et al [59] recommended that AI systems be regularly checked for being up to date, since today's technology is continuously evolving. This again highlights the relevance of the education of health care professionals, since they are the primary users of medical AI technology and hence need to follow the developments in the field.

Yang et al [49] found that gender is not relevant to trust in AI technology in medicine. This agrees with our finding that a patient's gender has low relevance to trust in and acceptance of AI technology in medicine. Contrary to our findings, it has been reported that younger generations in general have more trust in and are more likely to accept AI systems compared to older generations [11]. In our survey, the majority of participants were aged 40-60 years and above, and they exhibited a solid awareness of and a positive attitude toward AI technology. Gillespie et al [11] also stated that highly educated people (university level) are more likely to trust and accept AI systems compared to those without a university degree. However, our survey showed that a patient's educational level has low relevance to trust in and acceptance of medical AI applications.

Strengths and Limitations

To the best of our knowledge, this is the first study to use a rapid review of the latest literature to identify factors related to trust in and acceptance of AI applications in medicine in order to create a survey to evaluate their relevance and the attitudes of health care stakeholders toward the implementation of medical AI applications.

However, the study has several limitations. Since a large number of papers and reports in the rapid review did not provide sufficient context for the factors related to trust or acceptance, there could have been an increased risk of personal bias during the interpretation and categorization of those factors. Furthermore, some studies did not clarify whether the reported factors were related to only trust or only acceptance, which could also lead to possible misinterpretation. To minimize the effect of such bias and misinterpretation, a third reviewer (author HJMV) was consulted in such cases. Another limitation is the relatively small number of papers included in the rapid review, given the breadth of the topic. However, this rapid review was intentionally focused on the most relevant and recent literature to provide an initial overview and highlight key themes in a time-efficient manner; we aimed to provide a starting point that formed the basis for the survey.

In addition, the number of participants included in the survey can be considered relatively low, which was caused by difficulties in recruiting participants and the time-constrained nature of the study. However, sufficient diversity in participant characteristics (ie, gender, age, country of work, and years of professional experience) was achieved, which could be considered more important for the validity of the study findings. Even though the survey benefited from a sample with wide diversity in participant characteristics, one of the limitations to consider is the underrepresentation of certain stakeholder groups, in particular technology providers, policy makers, and hospital staff members other than clinicians. If these groups had been better represented in the survey, different patterns in factor relevance might have been observed, potentially shedding light on additional concerns or challenges associated with AI applications in medicine. Moreover, when considering the relevance of factors assessed through the survey, which were predominantly rated by researchers, it is important to note that these factors might be readily attainable or already well established within this specific stakeholder group. As a result, these factors may not necessarily represent challenges or barriers for this particular group, as they are already well versed in the aspects related to trust and acceptance.

Recommendations for Future Research

The results of this study can be valuable for the various stakeholders involved in the implementation of novel AI applications, since building trust and acceptance remains a focal point throughout the different stages of the process, including the pilot, implementation, evaluation, and monitoring phases. In the survey, participants shared other factors related to trust in and acceptance of AI applications in medicine that were not included in the survey. However, due to a lack of context, it is not entirely clear what was meant by some of these factors; since these are open to interpretation, follow-up research is required to better understand them. In addition, further research is needed to gain insight into the reasons participants considered factors to be of low or high relevance. Regarding the stakeholder groups currently underrepresented in the survey, more research is required to gain insight into the perspectives of policy makers and regulators, technology providers, and hospital staff members. Next, once the implementation of a novel AI technology, such as the AIDPATH system, becomes clear from the trust and acceptance point of view, it would be beneficial to conduct a workshop with experts from the AI and biotechnology fields to identify the technical challenges of implementation. This is crucial since, according to the survey results, the technical robustness and clarity of AI applications are a prerequisite for the trust and acceptance exhibited toward this technology by stakeholders.

Recommendations for Implementation

By considering the factors that are most relevant in the AI technology adoption process, implementers can facilitate trust in and acceptance of medical AI applications among their users and other stakeholders. Furthermore, knowledge of the factors with high relevance to stakeholders can help implementers anticipate the concerns potential users might have regarding a new AI technology and act upon these concerns to implement the AI application efficiently and in a timely manner. There are several ways in which the results of the survey could be used by AI implementers, such as smart hospitals, to build trust and acceptance among various stakeholder groups. For instance, the highly relevant factor of knowledge and understanding of AI among health care professionals could be addressed by providing information about medical AI to clinicians in the form of conferences and educational workshops. These initiatives can ensure that health care professionals remain updated on significant changes in AI technology, facilitating its accurate utilization. Similarly, patients could be informed about medical AI technology through patient information initiatives in (smart) hospitals and within patient communities. The highly relevant technology-related factors could be used by technology developers and scientific researchers as guidance in the development of novel AI technology. For regulators and policy makers, it is crucial to know that users and other stakeholders consider data use transparency, fairness, and equity to be of utmost importance regarding novel medical AI technology. Indeed, data privacy is a crucial and ever-present topic in legislation and regulations, but it needs to be constantly reviewed by policy makers due to the newness of AI in health care and the speed of its development. The legal aspects of software containing AI are subject to the Medical Device Regulation (MDR) [60]. For the acceptance of AI, its implementation in MDR-compliant solutions is invaluable. The tasks of policy makers could involve the risk assessment of various data breaches related to AI in medicine, with continuous updating of regulations related to data security and privacy within the field of medical AI. Furthermore, both policy makers and AI professionals have to ensure the maintenance of fairness and equity in AI technology usage.

Conclusion

This study identified and assessed the relevance of factors for trust in and acceptance of AI applications in medicine. The survey demonstrated that the majority of the identified human-related, technology-related, legal and ethical, and additional factors for trust in and acceptance of novel AI applications in medicine were considered by stakeholders to be of high relevance. Taken together, these findings and the subsequent recommendations could be used by any implementer of medical AI, such as (smart) hospitals, AI technology organizations, biotechnology research institutes, and policy makers, to facilitate the smooth and timely adoption of novel AI applications in medicine.

Acknowledgments

Our gratitude goes out to the AIDPATH (artificial intelligence [AI]-driven Decentralized Production for Advanced Therapies in the Hospital) Consortium partners, the Fraunhofer Institute for Cell Therapy and Immunology IZI, University College London, the Foundation for Research and Technology (FORTH)-Hellas, SZTAKI, Aglaris Cell SL, Sartorius Cell Genix GmbH, the Fundació Clínic per a la Recerca Biomèdica, IRIS Technology Solutions, Red Alert Labs, and ORTEC b.v. We would also like to thank the advisory board members of AIDPATH for providing feedback on the survey. Lastly, we thank the survey respondents.

The paper was written within the framework of the European Union’s (EU) Horizon 2020 research and innovation program project AIDPATH (grant agreement number 101016909). All mentioned colleagues/companies are part of AIDPATH and therefore received funding from the EU within the scope of AIDPATH.

Data Availability

The data used and analyzed in this study are available from the corresponding author upon reasonable request.

Authors' Contributions

DS was responsible for methods design, search strategy design, data acquisition, data extraction, data analysis, interpretation of data, and design and writing of the manuscript; AA, methods design, search strategy design, data acquisition, data extraction, data analysis, interpretation of data, regularly reviewing the work, design and writing of the manuscript, and providing feedback on the manuscript; IWAB, methods design, data acquisition, and providing feedback on the manuscript; CS, methods design, regularly reviewing the work, and providing feedback on the manuscript; MH, providing feedback on the manuscript and funding acquisition; JJLJ, reviewing the work and providing feedback on the manuscript; SH, contributing to the Introduction chapter, providing feedback on the manuscript, project administration, and funding acquisition; and HJMV, concept and design of the overall study, quality assessment, interpretation of data, regularly reviewing the work, providing feedback on the manuscript, and manuscript final approval.

Conflicts of Interest

MH reports speaker honoraria from Novartis, Janssen, and Celgene/BMS and has participated in scientific advisory boards for Janssen and Celgene/BMS. MH is also listed as an inventor on patent applications and has been granted patents related to chimeric antigen receptor (CAR) technologies and CAR T cell therapy that have been filed by the Fred Hutchinson Cancer Research Center and the University of Würzburg and that have been, in part, licensed to industry. In addition, MH is a cofounder and equity owner of T-CURX GmbH, Würzburg, Germany. The remaining authors declare that they have no competing interests.

Multimedia Appendix 1

Identified factors and corresponding studies included in the rapid review.

DOC File, 32 KB

  1. Artificial intelligence in medicine regulation. European Medicines Agency. URL: https://www.ema.europa.eu/en/news/artificial-intelligence-medicine-regulation [accessed 2022-11-25]
  2. Lekadir K, Quaglio G, Tselioudis Garmendia A, Gallin C. Artificial intelligence in healthcare: applications, risks, and ethical and societal impacts. European Parliament. URL: https://www.europarl.europa.eu/stoa/en/document/EPRS_STU(2022)729512 [accessed 2023-12-04]
  3. Bohr A, Memarzadeh K. The rise of artificial intelligence in healthcare applications. Artif Intell Healthc. Jun 2020:26-60. [CrossRef]
  4. Callahan G. What are the advantages of artificial intelligence? rev. URL: https://www.rev.com/blog/speech-to-text-technology/what-are-the-advantages-of-artificial-intelligence [accessed 2023-12-04]
  5. Koski E, Murphy J. AI in Healthcare. Stud Health Technol Inform. Dec 15, 2021:284-299. [CrossRef]
  6. Murphy K, Di Ruggiero E, Upshur R, Willison DJ, Malhotra N, Cai JC, et al. Artificial intelligence for good health: a scoping review of the ethics literature. BMC Med Ethics. Feb 15, 2021;22(1):14. [FREE Full text] [CrossRef] [Medline]
  7. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. Jan 20, 2022;28(1):31-38. [CrossRef] [Medline]
  8. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. Oct 25, 2019;366(6464):447-453. [CrossRef] [Medline]
  9. Fujimori R, Liu K, Soeno S, Naraba H, Ogura K, Hara K, et al. Acceptance, barriers, and facilitators to implementing artificial intelligence-based decision support systems in emergency departments: quantitative and qualitative evaluation. JMIR Form Res. Jun 13, 2022;6(6):e36501. [FREE Full text] [CrossRef] [Medline]
  10. Schwartz JM, George M, Rossetti SC, Dykes PC, Minshall SR, Lucas E, et al. Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: qualitative descriptive study. JMIR Hum Factors. May 12, 2022;9(2):e33960. [FREE Full text] [CrossRef] [Medline]
  11. Gillespie N, Lockey S, Curtis C. Trust in artificial intelligence: a five country study. KPMG. 2021. URL: https://assets.kpmg.com/content/dam/kpmg/au/pdf/2021/trust-in-ai-multiple-countries.pdf [accessed 2023-12-04]
  12. Toreini E, Aitken M, Coopamootoo K, Elliott K, Gonzalez Zelaya C, van Moorsel A. The relationship between trust in AI and trustworthy machine learning technologies. Presented at: ACM FAT* 2020: 2020 Conference on Fairness, Accountability, and Transparency; January 27-30, 2020; Barcelona, Spain; 272-283. [CrossRef]
  13. Ferrario A, Loi M, Viganò E. In AI we trust incrementally: a multi-layer model of trust to analyze human-artificial intelligence interactions. Philos Technol. Oct 23, 2019;33(3):523-539. [CrossRef]
  14. Gursoy D, Chi O, Lu L, Nunkoo R. Consumers acceptance of artificially intelligent (AI) device use in service delivery. Int J Inf Manag. Dec 2019;49:157-169. [FREE Full text] [CrossRef]
  15. Cornelissen L, Egher C, van Beek V, Williamson L, Hommes D. The drivers of acceptance of artificial intelligence-powered care pathways among medical professionals: web-based survey study. JMIR Form Res. Jun 21, 2022;6(6):e33368. [FREE Full text] [CrossRef] [Medline]
  16. Joshi M, Mecklai K, Rozenblum R, Samal L. Implementation approaches and barriers for rule-based and machine learning-based sepsis risk prediction tools: a qualitative study. JAMIA Open. Jul 2022;5(2):ooac022. [CrossRef]
  17. Rainie L, Anderson J. The fate of online trust in the next decade: theme 3: trust will not grow, but technology usage will continue to rise as a 'new normal' sets in. Pew Research Center. Jun 1, 2023. URL: https://www.pewresearch.org/internet/2017/08/10/theme-3-trust-will-not-grow-but-technology-usage-will-continue-to-rise-as-a-new-normal-sets-in/ [accessed 2023-12-04]
  18. Kerasidou CX, Kerasidou A, Buscher M, Wilkinson S. Before and beyond trust: reliance in medical AI. J Med Ethics. Nov 23, 2022;48(11):852-856. [FREE Full text] [CrossRef] [Medline]
  19. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res. Jun 19, 2020;22(6):e15154. [FREE Full text] [CrossRef] [Medline]
  20. Eltawil FA, Atalla M, Boulos E, Amirabadi A, Tyrrell PN. Analyzing barriers and enablers for the acceptance of artificial intelligence innovations into radiology practice: a scoping review. Tomography. Jul 28, 2023;9(4):1443-1455. [FREE Full text] [CrossRef] [Medline]
  21. Matheny M, Israeli S, Ahmed M, Whicher D. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. Washington, DC: The National Academies Press; 2019.
  22. AIDPATH. Sciencrew. URL: https://www.sciencrew.com/c/6499?title=AIDPATH [accessed 2023-12-04]
  23. Artificial intelligence-driven, decentralized production for advanced therapies in the hospital. European Commission. URL: https://cordis.europa.eu/project/id/101016909 [accessed 2023-01-17]
  24. Ran T, Eichmüller SB, Schmidt P, Schlander M. Cost of decentralized CAR T-cell production in an academic nonprofit setting. Int J Cancer. Dec 15, 2020;147(12):3438-3445. [FREE Full text] [CrossRef] [Medline]
  25. Hoogendoorn K, Main H. What are ATMPs? Clarifying misconceptions. ATMP Sweden. Jun 12, 2020. URL: https://atmpsweden.se/wp-content/uploads/2020/06/What-are-ATMPs-170620.pdf [accessed 2023-12-04]
  26. Hort S, Herbst L, Bäckel N, Erkens F, Niessing B, Frye M, et al. Toward rapid, widely available autologous CAR-T cell therapy – artificial intelligence and automation enabling the smart manufacturing hospital. Front Med (Lausanne). Jun 6, 2022;9:913287. [FREE Full text] [CrossRef] [Medline]
  27. Arevalo-Rodriguez I, Tricco AC, Steingart KR, Nussbaumer-Streit B, Kaunelis D, Alonso-Coello P, et al. Challenges of rapid reviews for diagnostic test accuracy questions: a protocol for an international survey and expert consultation. Diagn Progn Res. Apr 4, 2019;3(1):7. [FREE Full text] [CrossRef] [Medline]
  28. Eysenbach G. Improving the quality of web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res. Sep 29, 2004;6(3):e34. [FREE Full text] [CrossRef] [Medline]
29. Polland RJ. Essentials of survey research and analysis. Psycholosphere. 1998. URL: https://www.psycholosphere.com/Essentials%20of%20Survey%20Research%20and%20Analysis%20by%20Ronald%20Polland.pdf [accessed 2023-12-04]
30. Act on medical-scientific research with humans. Central Committee on Research Involving Human Subjects. URL: https://wetten.overheid.nl/BWBR0009408/2022-07-01 [accessed 2023-09-11]
  31. Alaqra AS, Kane B, Fischer-Hübner S. Machine learning-based analysis of encrypted medical data in the cloud: qualitative study of expert stakeholders' perspectives. JMIR Hum Factors. Sep 16, 2021;8(3):e21810. [FREE Full text] [CrossRef] [Medline]
  32. Allen D. Trust and artificial intelligence in healthcare. MedicalExpo e-Magazine. URL: https://emag.medicalexpo.com/trust-and-artificial-intelligence-in-healthcare/ [accessed 2023-12-04]
  33. Antes AL, Burrous S, Sisk BA, Schuelke MJ, Keune JD, DuBois JM. Exploring perceptions of healthcare technologies enabled by artificial intelligence: an online, scenario-based survey. BMC Med Inform Decis Mak. Jul 20, 2021;21(1):221. [FREE Full text] [CrossRef] [Medline]
  34. Brady D. Building public trust to successfully deploy AI in EU healthcare. European Policy Centre. Jan 26, 2022. URL: https://www.epc.eu/content/PDF/2022/CHES_PB_v2-2.pdf [accessed 2023-12-04]
  35. Diprose WK, Buist N, Hua N, Thurier Q, Shand G, Robinson R. Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. J Am Med Inform Assoc. Apr 01, 2020;27(4):592-600. [FREE Full text] [CrossRef] [Medline]
  36. Gille F, Jobin A, Ienca M. What we talk about when we talk about trust: theory of trust for AI in healthcare. Intell-Based Med. Nov 2020;1-2:100001-100002. [FREE Full text] [CrossRef]
  37. Haggenmüller S, Krieghoff-Henning E, Jutzi T, Trapp N, Kiehl L, Utikal JS, et al. Digital natives' preferences on mobile artificial intelligence apps for skin cancer diagnostics: survey study. JMIR Mhealth Uhealth. Aug 27, 2021;9(8):e22909. [FREE Full text] [CrossRef] [Medline]
  38. Hui CY, McKinstry B, Fulton O, Buchner M, Pinnock H. Patients' and clinicians' perceived trust in internet-of-things systems to support asthma self-management: qualitative interview study. JMIR Mhealth Uhealth. Jul 16, 2021;9(7):e24127. [FREE Full text] [CrossRef] [Medline]
  39. Jauk S, Kramer D, Avian A, Berghold A, Leodolter W, Schulz S. Correction to: technology acceptance of a machine learning algorithm predicting delirium in a clinical setting: a mixed-methods study. J Med Syst. Mar 10, 2021;45(4):52. [FREE Full text] [CrossRef] [Medline]
  40. Jermutus E, Kneale D, Thomas J, Michie S. Influences on user trust in healthcare artificial intelligence: a systematic review. Wellcome Open Res. Feb 18, 2022;7:65. [CrossRef]
  41. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. Sep 02, 2019;1(9):389-399. [CrossRef]
  42. Kovalchuk S, Kopanitsa G, Derevitskii I, Matveev G, Savitskaya D. Three-stage intelligent support of clinical decision making for higher trust, validity, and explainability. J Biomed Inform. Mar 2022;127:104013. [FREE Full text] [CrossRef] [Medline]
  43. Shah P, Mishra D, Shanmugam M, Vighnesh M, Jayaraj H. Acceptability of artificial intelligence-based retina screening in general population. Indian J Ophthalmol. 2022;70(4):1140. [CrossRef]
  44. Tamori H, Yamashina H, Mukai M, Morii Y, Suzuki T, Ogasawara K. Acceptance of the use of artificial intelligence in medicine among Japan's doctors and the public: a questionnaire survey. JMIR Hum Factors. Mar 16, 2022;9(1):e24680. [CrossRef]
  45. Tucci V, Saary J, Doyle TE. Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review. J Med Artif Intell. Mar 2022;5:4-4. [CrossRef]
  46. van Bussel MJP, Odekerken-Schröder GJ, Ou C, Swart RR, Jacobs MJG. Analyzing the determinants to accept a virtual assistant and use cases among cancer patients: a mixed methods study. BMC Health Serv Res. Jul 09, 2022;22(1):890. [FREE Full text] [CrossRef] [Medline]
  47. Walter S, Gruss S, Frisch S, Liter J, Jerg-Bretzke L, Zujalovic B, et al. "What about automated pain recognition for routine clinical use?" A survey of physicians and nursing staff on expectations, requirements, and acceptance. Front Med (Lausanne). Dec 21, 2020;7:566278. [FREE Full text] [CrossRef] [Medline]
48. Williams J, Sim A, Lee B, Colon A, Lee R, Schick A, et al. Trustworthy AI (TAI) playbook. U.S. Department of Health & Human Services. Sep 2021. URL: https://www.hhs.gov/sites/default/files/hhs-trustworthy-ai-playbook.pdf [accessed 2023-12-04]
49. Yang K, Zeng Z, Peng H, Jiang Y. Attitudes of Chinese cancer patients toward the clinical use of artificial intelligence. Patient Prefer Adherence. Nov 2019;13:1867-1875. [CrossRef]
50. Yap A, Wilkinson B, Chen E. Patients' perceptions of artificial intelligence in diabetic eye screening. Asia Pac J Ophthalmol (Phila). May 1, 2022;11(3):287-293. [CrossRef]
51. Zhang Z, Citardi D, Wang D, Genc Y, Shan J, Fan X. Patients' perceptions of using artificial intelligence (AI)-based technology to comprehend radiology imaging data. Health Informatics J. Apr 29, 2021;27(2):14604582211011215. [FREE Full text] [CrossRef] [Medline]
  52. Zhang Z, Genc Y, Wang D, Ahsen ME, Fan X. Effect of AI explanations on human perceptions of patient-facing AI-powered healthcare systems. J Med Syst. May 04, 2021;45(6):64. [CrossRef] [Medline]
  53. Zheng B, Wu M, Zhu S, Zhou H, Hao X, Fei F, et al. Attitudes of medical workers in China toward artificial intelligence in ophthalmology: a comparative survey. BMC Health Serv Res. Oct 09, 2021;21(1):1067. [FREE Full text] [CrossRef] [Medline]
  54. Hengstler M, Enkel E, Duelli S. Applied artificial intelligence and trust—the case of autonomous vehicles and medical assistance devices. Technol Forecast Soc Change. Apr 2016;105:105-120. [FREE Full text] [CrossRef]
  55. Schaefer KE, Chen JYC, Szalma JL, Hancock PA. A meta-analysis of factors influencing the development of trust in automation: implications for understanding autonomy in future systems. Hum Factors. May 22, 2016;58(3):377-400. [CrossRef] [Medline]
56. Oleson K, Billings D, Kocsis V, Chen J, Hancock P. Antecedents of trust in human-robot collaborations. Presented at: 2011 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA); February 22-24, 2011; Miami Beach, FL. 175-178. [CrossRef]
  57. Tran V, Riveros C, Ravaud P. Patients' views of wearable devices and AI in healthcare: findings from the ComPaRe e-cohort. NPJ Digit Med. Jun 14, 2019;2(1):53. [FREE Full text] [CrossRef] [Medline]
  58. Shin D. The effects of explainability and causability on perception, trust, and acceptance: implications for explainable AI. Int J Hum-Comput Stud. Feb 2021;146:102551. [FREE Full text] [CrossRef]
59. Vourgidis I, Mafuma S, Wilson P, Carter J, Cosma G. Medical expert systems – a study of trust and acceptance by healthcare stakeholders. Berlin: Springer International Publishing; 2019:108-119.
60. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC. EUR-Lex. 2017. URL: https://eur-lex.europa.eu/eli/reg/2017/745/oj [accessed 2023-12-04]


Abbreviations

AI: artificial intelligence
AIDPATH: AI-powered Decentralized Production for Advanced Therapies in the Hospital
CAR-T: chimeric antigen receptor T cell
MDR: Medical Device Regulation


Edited by A Kushniruk, E Borycki; submitted 06.03.23; peer-reviewed by D Kohen, Y Zhuang, J Zeng; comments to author 17.06.23; revised version received 25.09.23; accepted 20.11.23; published 17.01.24.

Copyright

©Daria Shevtsova, Anam Ahmed, Iris W A Boot, Carmen Sanges, Michael Hudecek, John J L Jacobs, Simon Hort, Hubertus J M Vrijhoef. Originally published in JMIR Human Factors (https://humanfactors.jmir.org), 17.01.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.