This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.
The use of artificial intelligence (AI) in the medical industry promises many benefits, and AI has accordingly been introduced to medical practice, primarily in developed countries. In Japan, the government is preparing for the rollout of AI in the medical industry. This rollout depends on doctors and the public accepting the technology. Therefore, it is necessary to consider acceptance among both doctors and the public. However, little is known about the acceptance of AI in medicine in Japan.
This study aimed to obtain detailed data on the acceptance of AI in medicine by comparing the acceptance among Japanese doctors with that among the Japanese public.
We conducted an online survey and compared the responses of doctors with those of members of the public. AI in medicine was defined as the use of AI to determine diagnosis and treatment without requiring a doctor. A questionnaire was prepared with reference to the unified theory of acceptance and use of technology (UTAUT), a model of behavior toward new technologies. It comprised 20 items, each rated on a five-point scale. Using this questionnaire, we conducted an online survey in 2018 among 399 doctors and 600 members of the public. The sample-wide responses were analyzed, and then the responses of the doctors were compared with those of the public using
Regarding the sample-wide responses (N=999), 653 (65.4%) of the respondents believed that, in the future, AI in medicine would be necessary, whereas only 447 (44.7%) expressed an intention to use AI-driven medicine. Additionally, 730 (73.1%) believed that regulatory legislation was necessary, and 734 (73.5%) were concerned about where accountability lies. Regarding the comparison between doctors and the public, doctors (mean 3.43, SD 1.00) were more likely than members of the public (mean 3.23, SD 0.92) to express an intention to use AI-driven medicine (
Many of the respondents were optimistic about the role of AI in medicine. However, when asked whether they would like to use AI-driven medicine, they tended to give a negative response. This trend suggests that concerns about the lack of regulation and about accountability hindered acceptance. Additionally, the results revealed that doctors were more enthusiastic than members of the public regarding AI-driven medicine. For the successful implementation of AI in medicine, it would be necessary to inform the public and doctors about the relevant laws and to take measures to address their concerns.
The use of artificial intelligence (AI) in the medical industry promises many benefits. For example, it can yield new diagnostic and therapeutic methods, provide the groundwork for introducing cutting-edge medical technology, and reduce the workload of doctors and care workers [
As the shift toward AI in medicine continues apace, there is an urgent need to consider the ethical, legal, and social issues (ELSIs) of this trend [
A new application can only fulfill its potential if people use it. Exemplifying this principle are South Korea’s mobile electronic medical records (EMRs) [
Therefore, the purpose of this study was to obtain detailed data on the acceptance of AI in medicine by comparing the acceptance among Japanese doctors with that among the Japanese public.
In recent years, AI has been developed in the medical industry. For example, AI can detect pulmonary nodules, tuberculosis, and pneumonia in chest radiographs, detect and quantify pulmonary nodules in chest computed tomography (CT) [
The technology acceptance model (TAM) is used to investigate the acceptance of new technologies. TAM is a model that explains the process by which users accept and use information systems. There are many extended models, among which the unified theory of acceptance and use of technology (UTAUT), proposed by Venkatesh et al [
Oh et al [
In Japan, a number of studies have polled attitudes regarding the rollout of AI in health care. In one survey by the Ministry of Internal Affairs and Communications (MIC), 81.5% of the polled experts said that they would welcome the use of AI in analyzing biometrics, lifestyle, disease history, genetic data, and other factors to detect precisely symptoms of health conditions or the onset of disease [
In addition, most previous studies have focused on either doctors or the public (patients), even though clarifying the differences in the acceptance of AI in medicine between the two groups would allow approaches suited to each to be considered when promoting the introduction of AI.
Our research questions were to what extent AI in medicine has been accepted in Japan and whether acceptance differs between doctors and the public.
For the purposes of the study, AI in medicine was defined as the use of AI to determine diagnosis and treatment without requiring a doctor.
The survey questions were divided into two sections. The first section consisted of items on the respondents’ general attributes, such as sex and whether the respondent was a doctor.
The second section measured the respondents’ acceptance of AI in medicine with questions based on the UTAUT [
There were 20 such items (
The survey was conducted over three days, from November 13 to 15, 2018. The authors did not obtain Institutional Review Board approval for this study because we used Rakuten Insight to conduct the survey, and we did not obtain any personal information from the respondents. In Japan, researchers do not have to obtain Institutional Review Board approval when subjects can voluntarily decide to participate in a study, there is no intervention in the collection of data, and individuals cannot be identified from the collected data [
After the questionnaire survey, the reliability of the questionnaire was examined. First, Cronbach alpha was calculated to confirm the reliability of the entire questionnaire (Cronbach α=.88). Cronbach α values of .70 to .80 or more are regarded as satisfactory [
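The internal-consistency check described above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' analysis code, and the toy data below are invented for demonstration; only the formula (Cronbach's alpha over a respondents-by-items score matrix) reflects the method named in the text.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (20 in the survey)
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# invented toy data: 5 respondents x 3 items on a 5-point scale
demo = np.array([[4, 5, 4],
                 [3, 3, 2],
                 [5, 5, 5],
                 [2, 3, 2],
                 [4, 4, 3]])
print(round(cronbach_alpha(demo), 2))  # → 0.96
```

Values approaching 1 indicate that the items move together across respondents, which is the sense in which the survey's α=.88 is read as satisfactory.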
The public and doctor responses for the 20 items were compared using a
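This kind of per-item two-group comparison can be sketched as follows. The specific test shown, a Mann-Whitney U test with a normal approximation and no tie correction, is an assumption for illustration (a rank-based test is a natural fit for the medians reported in the tables), and the toy responses are invented.

```python
import math
import numpy as np

def midranks(a):
    """Ranks 1..n, with tied values sharing their average (mid) rank."""
    a = np.asarray(a, dtype=float)
    order = np.argsort(a, kind="stable")
    sa = a[order]
    ranks = np.empty(len(a), dtype=float)
    i = 0
    while i < len(a):
        j = i
        while j + 1 < len(a) and sa[j + 1] == sa[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # average 1-based rank of the tie block
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test (normal approximation, no tie correction)."""
    n1, n2 = len(x), len(y)
    ranks = midranks(np.concatenate([x, y]))
    u1 = ranks[:n1].sum() - n1 * (n1 + 1) / 2      # U statistic for group x
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # 2 * (1 - Phi(|z|))
    return u1, p

# invented toy 5-point responses for a single item
doctors = [5, 5, 4, 4, 3]
public = [3, 2, 2, 1, 1]
u, p = mann_whitney_u(doctors, public)  # u = 24.5
```

In practice a library routine with exact tie handling would be used; the sketch only shows the shape of the comparison run once per item.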
An online survey was conducted among the public and among doctors. The sample representing the public survey consisted of 600 individuals across six age cohorts, each of which included 50 men and 50 women. The cohorts were aged 15 to 24, 25 to 34, 35 to 44, 45 to 54, 55 to 64, and 65 years and older. The sample representing doctors consisted of 400 individuals aged 25 years or older. Of these, 350 were men and 50 were women. One doctor was excluded from the analysis because he answered that his highest educational attainment was “vocational school/junior college,” although doctors in Japan must complete a university education to obtain a license.
Regarding the respondents’ general attributes,
Sex and age distribution.
Age (years) | Doctors, male | Doctors, female | Doctors, total | Public, male | Public, female | Public, total
--- | --- | --- | --- | --- | --- | ---
15–24 | N/Aa | N/A | N/A | 50 | 50 | 100
25–34 | 7 | 13 | 20 | 50 | 50 | 100
35–44 | 42 | 19 | 61 | 50 | 50 | 100
45–54 | 123 | 11 | 134 | 50 | 50 | 100
55–64 | 141 | 5 | 147 | 50 | 50 | 100
65 or older | 36 | 2 | 38 | 50 | 50 | 100
Total | 349 | 50 | 399 | 300 | 300 | 600
aN/A: not applicable.
Highest educational attainment.
Highest educational attainment | Doctors | Public
--- | --- | ---
Junior high school | 0 | 8 |
High school | 0 | 177 |
Vocational school/junior college | (1)a | 127 |
University | 398 | 282 |
Other | 1 | 6 |
Total | 399 | 600 |
aOne doctor was excluded from the analysis because doctors in Japan must complete a university education to obtain a license.
Regarding the necessity of legislation and whether there is concern about accountability, only 360 (36.0%) and 315 (31.5%) of the respondents gave a strong affirmative response, respectively. However, when the strong and moderate responses were combined, as many as 730 (73.1%) and 734 (73.5%) gave an affirmative response, respectively.
Sample-wide results for factors of acceptance.
Items | Mean (SD)
--- | ---
Usefulness | 3.66 (0.91) |
Efficiency | 3.47 (0.89) |
Better medical services | 3.66 (0.97) |
Mastery | 3.02 (0.91) |
User-friendliness | 2.97 (0.91) |
Expectations of others | 3.44 (0.92) |
Expectations among patients | 3.48 (0.99) |
Brand impact | 3.05 (1.07) |
Knowledge of AI in other contexts | 2.82 (1.04) |
Knowledge of AI in medicine | 2.34 (1.01) |
Medical costs | 2.88 (0.99) |
Necessity of legislation | 3.99 (0.88) |
General impression | 3.64 (1.00) |
Interest in topic | 3.52 (0.81) |
Accuracy | 3.08 (1.10) |
Concern about data leakage | 3.30 (1.00) |
Concern about accountability | 3.92 (0.96) |
Intention to use | 3.31 (0.91) |
Relevance to life | 3.64 (0.92) |
Necessity in medicine | 3.70 (0.42) |
Comparison between doctors and the public regarding factors associated with acceptance.
Items | Doctors, median | Doctors, mean (SD) | Public, median | Public, mean (SD) | Cohen d | 95% CI | P value
--- | --- | --- | --- | --- | --- | --- | ---
Usefulness | 4 | 3.72 (0.93) | 4 | 3.62 (0.90) | 0.11 | –0.02 to 0.21 | .09
Efficiency | 4 | 3.44 (0.89) | 4 | 3.50 (0.86) | –0.07 | –0.17 to 0.05 | .30
Better medical services | 4 | 3.73 (0.85) | 4 | 3.61 (0.91) | 0.15 | 0.02 to 0.24 | .02
Mastery | 3 | 3.15 (0.95) | 3 | 2.93 (0.97) | 0.23 | 0.10 to 0.34 | <.001
User-friendliness | 3 | 2.97 (0.91) | 3 | 2.97 (0.90) | –0.001 | –0.12 to 0.11 | .98
Expectations of others | 4 | 3.56 (0.88) | 3 | 3.37 (0.92) | 0.21 | 0.08 to 0.30 | .001
Expectations among patients | 4 | 3.50 (0.89) | 4 | 3.46 (0.94) | 0.05 | –0.07 to 0.16 | .46
Brand impact | 3 | 3.04 (0.98) | 3 | 3.06 (0.99) | –0.02 | –0.14 to 0.11 | .82
Knowledge of AIa in other contexts | 3 | 2.88 (1.00) | 3 | 2.78 (1.11) | 0.10 | –0.03 to 0.24 | .13
Knowledge of AI in medicine | 3 | 2.66 (1.00) | 2 | 2.12 (1.01) | 0.54 | 0.41 to 0.67 | <.001
Medical costs | 3 | 3.01 (0.93) | 3 | 2.79 (1.04) | –0.23 | –0.35 to –0.10 | <.001
Necessity of legislation | 4 | 4.01 (1.01) | 4 | 3.99 (0.97) | 0.02 | –0.11 to 0.15 | .76
General impression | 4 | 3.69 (0.85) | 4 | 3.60 (0.89) | 0.10 | –0.02 to 0.20 | .13
Interest in topic | 4 | 3.61 (1.01) | 4 | 3.46 (0.99) | 0.15 | 0.02 to 0.28 | .02
Accuracy | 3 | 3.11 (0.84) | 3 | 3.06 (0.79) | 0.07 | –0.05 to 0.16 | .32
Concern about data leakage | 3 | 3.38 (1.08) | 3 | 3.24 (1.11) | 0.13 | –0.28 to –0.002 | .046
Concern about accountability | 4 | 3.91 (1.01) | 4 | 3.93 (0.99) | 0.015 | –0.11 to 0.14 | .81
Intention to use | 4 | 3.43 (1.00) | 3 | 3.23 (0.92) | 0.21 | 0.07 to 0.32 | .002
Relevance to life | 4 | 3.62 (0.90) | 4 | 3.66 (0.92) | –0.04 | –0.16 to 0.08 | .49
Necessity in medicine | 4 | 3.74 (0.84) | 4 | 3.67 (0.97) | 0.07 | –0.05 to 0.18 | .24
aAI: artificial intelligence.
The total mean score of the whole scale was 65.5 (SD 10.4) for doctors and 64.1 (SD 10.6) for the public, with a difference of 1.4 (95% CI 0.10-2.77). Cohen
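The effect sizes in the comparison table can be reproduced from the reported group summary statistics. The sketch below computes Cohen d with a pooled SD for the "intention to use" row; the confidence-interval formula is one common large-sample approximation and may differ slightly from the one used for the tables.

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d from group summary statistics, using the pooled SD."""
    pooled = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

def d_ci95(d, n1, n2):
    """Approximate 95% CI for d (large-sample normal approximation)."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - 1.96 * se, d + 1.96 * se

# "Intention to use": doctors mean 3.43 (SD 1.00, n=399); public mean 3.23 (SD 0.92, n=600)
d = cohens_d(3.43, 1.00, 399, 3.23, 0.92, 600)
print(round(d, 2))  # → 0.21, matching the tabulated effect size
```

By conventional benchmarks, d around 0.2 is a small effect, which is consistent with the interpretation in the Discussion that the doctor-public differences, while statistically significant, are modest.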
Regarding the sample-wide results, the respondents were generally receptive toward AI in medicine. In particular, respondents expressed confidence in AI’s usefulness and a belief in its future necessity in medicine. In the MIC survey [
Despite their tendency to see AI as useful and necessary in medicine, the respondents were less enthusiastic about the prospect of actually using AI-driven medicine, with only 44.7% (447/999) of the respondents giving a moderate or strong affirmative response for intention to use. According to a previous study, 41% of respondents in Germany were in favor of using AI alone to diagnose melanoma [
The majority of the sample expressed a moderate or strong concern regarding the issues of regulatory legislation and accountability. About half of the respondents expressed moderate or strong concern about data leakage. These three items describe ELSIs, which require solutions from a policy perspective. Given that so many of the respondents were concerned about these ELSIs, the ELSIs in question are likely major determinants of acceptance for both doctors and members of the public. In particular, the issue of accountability attracted concern from as many as three-fourths of the respondents, despite the MHLW’s attempts to reassure people that human doctors will always be responsible for the final diagnosis. The causes of such uncertainty are unclear from this study’s results; further research is necessary to identify the causes and derive ways to alleviate the concerns among doctors and the public.
Discussed below is the comparison between doctors and the public. The results revealed significant intergroup differences in eight items. One such difference was in “intention to use”; doctors were more enthusiastic than the public about using AI-driven medicine in the future. Ema et al [
Doctors’ comparative enthusiasm for using AI may be related to the fact that they were also more likely than members of the public to give positive responses to better medical services, mastery, expectations of others, and interest in the topic. That is, the doctors’ intention to use AI may have been motivated by their greater expectations (compared with those held by members of the public) about the potential of AI in medicine. Additionally, members of the public were more likely than doctors to indicate a lack of knowledge about AI in medicine. The fact that members of the public tended to be rather uninformed about AI in medicine may have contributed to their weak (compared with doctors) intention to use AI.
Meanwhile, the responses to the items on medical costs and concern about data leaks present a paradox. Specifically, members of the public were more likely than doctors to believe that AI would lead to lower medical costs, whereas doctors were more likely than members of the public to express concern about the risk of data leakage. The results for these two items seem to imply that the members of the public, not the doctors, are more inclined to use AI. However, it was the doctors who gave the more affirmative responses to the actual question on the intention to use AI-driven medicine. A possible explanation for this paradox could be that the items “usefulness” and “better medical services” impact “intention to use” more than they do “medical costs” and “concern about data leakage.”
Although doctors’ total mean scores were significantly higher than those of the public, the effect size was negligible. This is likely because even a slight difference in the total mean score was detected by
Regarding the limitations of the study, one limitation concerns the possibility of sampling bias in the online survey. Because participation in the online survey was limited to individuals who could use a personal computer, smartphone, or similar device, the sample may have been biased toward the digitally literate. Moreover, as the survey was titled “Survey on AI in medicine,” the sample may have been biased toward individuals who were interested in medicine and AI. Given that people are generally more likely to express a clear opinion for or against a proposition when they are knowledgeable about the topic in question [
Further research is necessary to explore the relations between items. This study ascertained population trends by analyzing sample-wide responses and then comparing the responses between doctors and the public. What this approach failed to clarify was which item most affects intention to use. Accordingly, future research should explore how the responses to one item correlate with those to another. In this study, we were not able to conduct an analysis using the UTAUT model. However, since AI in medicine is now starting to be used in Japan, we would like to analyze the acceptance of AI in medicine using the UTAUT model, assuming a specific system, in a future study.
Since this study surveyed a large sample of 399 doctors and 600 members of the public, its findings can be considered to have at least some validity. However, it should be noted that the questionnaire itself was not validated.
In this study, we did not investigate the health status of the members of the public or the length of the doctors’ professional experience. Future surveys should take these factors into account.
To the best of our knowledge, this is the first survey on the acceptance of AI in medicine in Japan. This study aimed to obtain detailed data on the acceptance of AI in medicine by comparing the acceptance among Japanese doctors with that among the Japanese public. An online survey was conducted, and the results were analyzed to determine sample-wide trends and trends specific to doctors and to the public.
Among the 999 respondents, around two-thirds believed that AI would be useful in (657/999, 65.8%) and necessary to (653/999, 65.4%) medicine. However, such beliefs did not directly translate into an intention to use AI-driven medicine; only 447 (44.7%) of the sample expressed such an intention. The results also showed that 730 (73.1%) believed that regulatory legislation was necessary, and 734 (73.5%) were concerned about accountability, suggesting that these factors are important in terms of acceptance among doctors and the public alike. The comparison of the two groups revealed that doctors were more likely than members of the public to express an intention to use AI-driven medicine (
In this study, we did not analyze the data with the UTAUT model; such an analysis, assuming a concrete system, should be conducted in future work.
Report on the Checklist for Reporting Results of Internet E-Surveys.
AI: artificial intelligence
CT: computed tomography
ELSI: ethical, legal, and social issue
EMR: electronic medical record
FDA: Food and Drug Administration
MHLW: Ministry of Health, Labour, and Welfare
MIC: Ministry of Internal Affairs and Communications
TAM: technology acceptance model
UTAUT: unified theory of acceptance and use of technology
HT, MM, and KO considered the conception and design of this research. HT and HY made an initial version of the questionnaire, and all authors revised the questionnaire. TS contributed to the acquisition of data. HT and YM analyzed the data. HT drafted the manuscript. All authors interpreted the results and revised the paper.
None declared.