Original Paper
Abstract
Background: Despite growing efforts to develop user-friendly artificial intelligence (AI) applications for clinical care, their adoption remains limited because of barriers at the individual, organizational, and system levels. There is limited research on the intention to use AI systems in mental health care.
Objective: This study aimed to address this gap by examining the predictors of psychology students’ and early practitioners’ intention to use 2 specific AI-enabled mental health tools based on the Unified Theory of Acceptance and Use of Technology.
Methods: This cross-sectional study included 206 psychology students and psychotherapists in training to examine the predictors of their intention to use 2 AI-enabled mental health care tools. The first tool provides feedback to the psychotherapist on their adherence to motivational interviewing techniques. The second tool uses patient voice samples to derive mood scores that therapists may use for treatment decisions. Participants were presented with graphic depictions of the tools’ functioning mechanisms before the variables of the extended Unified Theory of Acceptance and Use of Technology were measured. In total, 2 structural equation models (1 for each tool) were specified, which included direct and mediated paths for predicting tool use intentions.
Results: Perceived usefulness and social influence had a positive effect on the intention to use the feedback tool (P<.001) and the treatment recommendation tool (perceived usefulness, P=.01 and social influence, P<.001). However, trust was unrelated to use intentions for both the tools. Moreover, perceived ease of use was unrelated (feedback tool) and even negatively related (treatment recommendation tool) to use intentions when considering all predictors (P=.004). In addition, a positive relationship between cognitive technology readiness (P=.02) and the intention to use the feedback tool and a negative relationship between AI anxiety and the intention to use the feedback tool (P=.001) and the treatment recommendation tool (P<.001) were observed.
Conclusions: The results shed light on the general and tool-dependent drivers of AI technology adoption in mental health care. Future research may explore the technological and user group characteristics that influence the adoption of AI-enabled tools in mental health care.
doi:10.2196/46859
Introduction
Background
Despite growing efforts to create user-friendly artificial intelligence (AI) applications, their use in clinical care remains limited [ ]. Barriers to the adoption of AI-enabled clinical decision support systems (AI-CDSSs) can be found at the individual (eg, end user’s lack of trust in the system), organizational (eg, capacity to innovate), and system (eg, political decisions) levels [ - ]. Often, the adoption of AI-CDSSs fails because system and organizational requirements are not met, and accordingly, tools do not become available to potential end users [ ]. The lack of regulatory oversight and standardization of AI-CDSSs can create uncertainty in the field, potentially leading to liability issues at the organizational and system levels [ ]. Even when the system- and organization-level requirements for implementing a given technology are satisfied, successful deployment depends on practitioners’ willingness to use it. However, clinicians may be skeptical about using AI-CDSSs because of concerns regarding the accuracy and reliability of AI-generated decisions. Several frameworks and theories have been developed to systematically study the mechanisms influencing the implementation of technology in practice [ - ]. The 2 most relevant models for individual-level predictors are the Technology Acceptance Model (TAM) [ ] and the Unified Theory of Acceptance and Use of Technology (UTAUT) [ ]. The TAM aims to explain why a given technology is rejected or accepted by the end user. It proposes that system use is centrally driven by its perceived usefulness and perceived ease of use. Both beliefs are determinants of attitudes toward use, which, in turn, influence use behavior [ ]. The UTAUT combines the principles of 8 technology acceptance models, including the TAM. In addition to perceived usefulness (ie, performance expectancy) and perceived ease of use (ie, effort expectancy), it considers social processes (ie, social influence) and demographic variables (ie, age and gender) as predictors of use intention [ ]. Accordingly, we focused on the UTAUT as the most holistic use prediction model.

Several studies have already demonstrated the applicability of the UTAUT in investigating the implementation of AI-CDSSs [ - ]. However, only 1 study has examined the predictors of the intention to use AI-enabled tools in mental health care [ ]. The authors asked psychology students about their general knowledge of and attitudes toward AI systems. The results suggest that perceived social norms, perceived ease of use, perceived usefulness, and perceived knowledge are linked to students’ intention to use AI-enabled tools. However, prospective and current mental health practitioners may have varying levels of skepticism about implementing AI technology for different purposes in their (future) practice. For example, when presented with AI-generated feedback regarding diagnostic or treatment decisions, they may be reluctant to accept AI-based recommendations because of the far-reaching consequences of erroneous predictions or because they feel undermined in their role as therapists. At the same time, they may be open to incorporating AI-generated feedback regarding their interviewing techniques. Although research has begun to examine practitioners’ acceptance of AI-enabled tools in mental health care, there is a lack of specificity in assessing use intention, limiting the utility of these findings in informing practice. This study sought to address this gap by examining the intention to use 2 specific AI-enabled mental health tools: (1) a psychotherapy feedback tool (FB tool) that analyzes data from therapist-patient conversations and provides performance-specific feedback to the therapist [ - ] and (2) a treatment recommendation tool (TR tool) that uses voice recordings and mood scores to generate recommendations for psychotherapeutic support [ ].

The AI-Enabled FB Tool
Providing supervision and performance feedback during and after psychotherapy sessions enhances trainees’ and therapists’ skill acquisition and retention [ , ]. However, these processes are labor and cost intensive and are thus rarely used in training and clinical practice. Often, feedback is based on trainees’ self-reports and is only available long after the therapy session has concluded [ ]. AI technology may help to reduce this problem by providing continuous, immediate, and performance-specific feedback to psychotherapists and trainees. Over the past few years, several AI-enabled FB tools have been developed, and some are already used in practice [ ]. For example, the Therapy Insights Model uses real-time chat messages exchanged between therapists and patients to provide feedback on topics covered in the session and to generate recommendations regarding topics that should be addressed in the following session [ ]. Counselor Observer Ratings Expert for Motivational Interviewing uses audio recordings of motivational interviewing (MI) sessions to generate feedback on psychotherapists’ adherence to MI principles. The generated feedback focuses on 6 aspects of MI fidelity: empathy, MI spirit, reflection-to-question ratio, percent open questions, percent complex reflections, and percent MI adherence [ ]. The tool chosen for this study was developed based on the Counselor Observer Ratings Expert for Motivational Interviewing. Participants were presented with information on how speech data recorded during a psychotherapy session are processed and analyzed using machine learning models to generate feedback for psychotherapists regarding their adherence to MI principles and possibilities for improvement.
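To illustrate the kind of session-level summary such a tool produces, the following minimal R sketch computes 3 of the MI fidelity metrics named above from utterance codes. The coding scheme and data are hypothetical assumptions; a real FB tool would derive these labels from speech data with machine learning models.

```r
# Hypothetical utterance codes for one session; a real system would infer
# these labels from audio recordings rather than take them as given.
utterances <- c("open_question", "simple_reflection", "closed_question",
                "complex_reflection", "open_question", "complex_reflection")

n <- table(factor(utterances, levels = c("open_question", "closed_question",
                                         "simple_reflection", "complex_reflection")))
questions   <- n["open_question"] + n["closed_question"]
reflections <- n["simple_reflection"] + n["complex_reflection"]

# Three of the 6 MI fidelity aspects described in the text
fidelity <- c(
  reflection_to_question_ratio = as.numeric(reflections / questions),
  percent_open_questions       = as.numeric(100 * n["open_question"] / questions),
  percent_complex_reflections  = as.numeric(100 * n["complex_reflection"] / reflections)
)
round(fidelity, 1)
```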
The AI-Enabled TR Tool

Timely psychotherapeutic support may lower the risk of worsening depressive symptoms and suicidality [ ]. Multiple studies have demonstrated the effectiveness of AI-enabled emotion analysis in assessing patients’ depressive states and recommending timely intervention, thereby improving mental health care [ , ]. In particular, systems have been developed in recent years to monitor or evaluate the mood of individuals with mental disorders, such as major depressive or bipolar disorder, using speech data [ , ]. These tools usually require patients to record voice samples on their mobile phones, which are analyzed by an automated speech data classifier to assess their current mood [ ]. Mental health practitioners can then use this information to decide whether urgent intervention is needed [ ]. The TR tool chosen for this study was based on the system developed by Sonde Health [ ]. Specifically, participants were presented with information on how voice data recorded on a mobile device are processed and analyzed to generate a mood score that may be used for treatment-related decisions.
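The following R sketch illustrates the decision logic described above: acoustic features extracted from a voice sample are mapped to a mood score, which a practitioner could use to prioritize patients. The feature names, the scoring rule, and the cutoff are illustrative assumptions, not Sonde Health’s actual pipeline.

```r
# Hypothetical acoustic features extracted from one patient's voice sample
features <- data.frame(pitch_variability = 0.12, speech_rate = 3.1, pause_ratio = 0.38)

# Stand-in for a pretrained classifier: a fixed logistic scoring rule mapping
# features to a 0-1 mood score (lower = more depressed; assumed scale)
mood_score <- function(x) {
  plogis(-6 * x$pause_ratio + 0.5 * x$speech_rate + 2 * x$pitch_variability)
}

score <- mood_score(features)
flag_for_urgent_review <- score < 0.35  # illustrative triage cutoff
cat(sprintf("Mood score: %.2f; flag for urgent review: %s\n",
            score, flag_for_urgent_review))
```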
Research Model and Hypotheses

The first goal of this study was to test the applicability of a modified version of the UTAUT in the mental health context to understand the factors that influence the intention to use 2 specific AI-enabled mental health care tools [ , , , ]. In line with the UTAUT, we propose that tool-specific perceived usefulness (ie, the degree to which an individual believes that using a system will enhance their performance) and perceived ease of use (ie, the degree of ease associated with using the technology) predict the behavioral intention to use the tools in participants’ future work. The hypotheses for this research were preregistered through the Open Science Framework [ ]. We propose the following hypotheses:

- Hypothesis 1: There is a positive relationship between perceived usefulness and the intention to use the tools in psychotherapy.
- Hypothesis 2: There is a positive relationship between perceived ease of use and the intention to use the tools in psychotherapy.
Unlike experienced psychotherapists, psychology students and psychotherapists in training may be less likely to be influenced by established work habits or procedures, which could impede the adoption of new AI technologies [ ]. However, it has been suggested that students are more likely to be affected by their peers and by the values and standards of their potential future employers [ ]. As a result, we propose that the UTAUT variable “social influence” (ie, the perception that other significant people think the system should be used) should be considered a predictor of students’ intention to use the tools.

- Hypothesis 3: There is a positive relationship between social influence and the intention to use the tools in psychotherapy.
It has been suggested that trust may be a relevant predictor of the intention to use a technology if the associated risk is high [ ]. Because of the sensitive nature of the recommendations made by the 2 tools, we hypothesized that trust would be a predictor of students’ intention to use the tools.

- Hypothesis 4: There is a positive relationship between trust in the tools and the intention to use them in psychotherapy.
A lack of understanding of the underlying mechanisms of AI-enabled tools in mental health care has led to skepticism regarding their use [ , ]. In particular, the lack of transparency and explainability of AI-based clinical decision-making has impeded the adoption of such tools in mental health care [ - ]. Building on the framework for theorizing and evaluating Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability of Health and Care Technologies [ ], we proposed that knowledge regarding a technology is a predictor of its perceived value. Consequently, we suggested that students with the knowledge and skills to apply the tools and to understand how the recommendations are derived are more likely to perceive them as useful [ , ]. To test this, we extended the UTAUT by including cognitive technology readiness as an indicator of general AI knowledge and understanding of the tool as an indicator of specific AI knowledge as predictors of perceived usefulness, perceived ease of use, and trust. We preregistered 2 research questions to test these relationships (a model sketch follows the research questions):

- Research question 1: Is the positive relationship between cognitive technology readiness and the intention to use the tools mediated through (1) perceived usefulness, (2) perceived ease of use, and (3) trust in the tools?
- Research question 2: Is the positive relationship between understanding of the tools and the intention to use the tools mediated through (1) perceived usefulness, (2) perceived ease of use, and (3) trust in the tools?
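A minimal sketch of this hypothesized structure for one tool, written in the syntax of the lavaan R package used in the Data Analysis section, is shown below. All variable names are illustrative, the sketch assumes composite scores per construct, and the actual models in this study were latent-variable structural equation models with additional control variables.

```r
library(lavaan)

# Hypothesized paths for one tool (variable names are illustrative):
# PU/PE/SI/TRU predict use intention (hypotheses 1-4); cognitive readiness
# (CR) and tool understanding (TU) act through PU, PE, and TRU (RQ1 and RQ2).
model <- '
  IU  ~ b1*PU + b2*PE + b3*SI + b4*TRU + CR + TU
  PU  ~ a1*CR + a4*TU
  PE  ~ a2*CR + a5*TU
  TRU ~ a3*CR + a6*TU

  # Defined indirect effects, eg, CR -> PU -> IU and TU -> PE -> IU
  ind_CR_PU := a1*b1
  ind_TU_PE := a5*b2
'
# fit <- sem(model, data = tool_data)  # hypothetical data frame
# summary(fit, standardized = TRUE)
```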
Methods
Participants
Psychology students and psychotherapists in training were recruited through social media postings, email correspondence with the administrative offices of universities and psychotherapy training centers, and the research-focused panel company Prolific. Data were collected between October 2022 and January 2023; a total of 362 participants began the questionnaire. Of these, 208 provided answers on the behavioral intention to use the tools, corresponding to a dropout rate of 42.54%. In addition, 2 participants failed at least 2 of the 4 attention check items [ ], leaving a final sample size of 206.

The final sample consisted of 16% (33/206) men, 80.1% (165/206) women, and 3.9% (8/206) nonbinary individuals. The age of the participants ranged from 18 to 54 (mean 28.10, SD 7.03) years. Data were collected from Germany, the United States, the United Kingdom, and Canada. Most participants studied in Germany (111/206, 53.9%), followed by the United Kingdom (49/206, 23.8%), the United States (32/206, 15.5%), Canada (13/206, 6.3%), and other countries (1/206, 0.5%). Regarding the field of study, most participants stated that their studies focused on clinical psychology (118/206, 57.3%), followed by those studying psychology with no specific focus (50/206, 24.3%) and those who did not provide this information (38/206, 18.4%).
Procedure
The web-based survey was anonymous and self-administered. All participants provided informed consent before participating. In the web-based survey, we first assessed cognitive technology readiness. Next, participants were presented with slides that explained how recommendations for the AI-enabled FB tool and TR tool were generated (the material is available from the first author upon request). Before seeing the slides, participants read the following short introduction: “On the following page, you will be presented with a tool that is used to [FB tool: provide feedback to psychotherapists about what went well and what could be improved in their sessions; TR tool: generate a mood score to rate the severity of patients’ depression. The mood score may be used by psychotherapists to decide which patient to treat first if multiple patients seek treatment and there is limited capacity]. Please read the information carefully and try to understand what the tool does and how it may be used in psychotherapy practice/training. After the presentation, you will be asked a couple of questions about the tool.” After each tool presentation, the UTAUT predictor variables (ie, perceived usefulness, perceived ease of use, social influence, and trust), the understanding of the tool, and the intention to use the respective tool were assessed. Finally, we asked them about their demographic information.
Ethics Approval
The Institutional Review Board Committee of the University of Regensburg approved the study protocol (22-3096-101).
Measurement Instruments
Independent Variables
We assessed cognitive technology readiness with 5 items of the cognition factor of the medical AI readiness scale [ ]. This scale measures terminological knowledge about medical AI applications. In total, 2 items with factor loadings <0.40 [ ] that did not relate to a general understanding of AI (ie, “I can define the basic concepts of data science” and “I can define the basic concepts of statistics”) were removed. We retained 3 items related to AI understanding (ie, “I can explain how AI systems are trained,” “I can define the basic concepts and terminology of AI,” and “I can properly analyze the data obtained by AI in healthcare”; α=.77; ω=0.75).

Perceived usefulness, perceived ease of use, and social influence were measured using items adapted from the study by Venkatesh et al [ ]. Participants rated their agreement on a 5-point Likert scale ranging from 1=strongly disagree to 5=strongly agree. Perceived usefulness was assessed using 5 items (eg, “Using the AI tool would enable me to accomplish tasks more quickly”; αFB tool=.86; ωFB tool=0.91; αTR tool=.91; ωTR tool=0.93). Perceived ease of use was measured using 4 items (eg, “My interaction with the AI tool will be clear and understandable”; αFB tool=.84; ωFB tool=0.89; αTR tool=.89; ωTR tool=0.93). Social influence was measured with 5 items (eg, “In my future job as a psychotherapist, people who are important to me will think that I should use the AI tool”; αFB tool=.88; ωFB tool=0.94; αTR tool=.91; ωTR tool=0.95). Trust was measured with 3 items adapted from the study by Venkatesh et al [ ] (eg, “The AI tool will provide access to sincere and genuine feedback”; αFB tool=.83; ωFB tool=0.84; αTR tool=.89; ωTR tool=0.89). Finally, understanding of the AI-enabled tools was assessed with a single item (“Please rate your understanding of the AI-enabled feedback tool”), with answers ranging from 1=I don’t understand the tool at all to 6=I understand the tool extremely well.
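As a pointer to how such reliabilities are computed, the following R sketch estimates Cronbach α and McDonald ω for one scale with the psych package. The item responses are simulated stand-ins, as the study data were not yet public at the time of writing.

```r
library(psych)

# Simulated stand-in for the real item responses (206 participants, 5 items)
set.seed(1)
latent <- rnorm(206)
pu_items <- as.data.frame(sapply(1:5, function(i) {
  pmin(pmax(round(3 + latent + rnorm(206, sd = 0.8)), 1), 5)
}))
names(pu_items) <- paste0("pu", 1:5)

alpha_pu <- psych::alpha(pu_items)$total$raw_alpha          # Cronbach alpha
omega_pu <- psych::omega(pu_items, nfactors = 1)$omega.tot  # McDonald omega total
round(c(alpha = alpha_pu, omega = omega_pu), 2)
```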
The Behavioral Intention to Use the Tools as the Dependent Variable

The behavioral intention to use the tools was measured on a 5-point Likert scale, ranging from 1=strongly disagree to 5=strongly agree, with 3 items adapted from the study by Venkatesh et al [ ] (eg, “I intend to use the AI tool in my future job as a psychotherapist”; αFB tool=.95; ωFB tool=0.95; αTR tool=.96; ωTR tool=0.96).

Control Variables
Data privacy concerns and AI anxiety (ie, fears and insecurity regarding AI technology) have repeatedly been identified as negative predictors of the intention to use AI technology [ ]. In addition, it has been shown that male participants have more positive attitudes toward AI technologies than female participants [ ]. Finally, some evidence exists for an association of AI acceptance with age [ ] and country [ ]. Accordingly, data privacy and security concerns [ ] (eg, “I would be concerned that the AI tool would share my personal information with third parties”; αFB tool=.84; ωFB tool=0.85; αTR tool=.89; ωTR tool=0.91), AI anxiety [ ] (eg, “I feel apprehensive about using the AI tool”; αFB tool=.78; ωFB tool=0.81; αTR tool=.76; ωTR tool=0.79), gender (0=man and 1=woman and nonbinary), age, and study country (1=Germany and 0=English-speaking countries) were included as control variables. One item of the AI anxiety scale and 3 items of the data privacy scale with standardized factor loadings <0.40 were excluded [ ].
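A minimal lavaan sketch of this loading-based screening step, run on lavaan’s built-in HolzingerSwineford1939 data as a stand-in for the study’s items:

```r
library(lavaan)

# Single-factor model on stand-in data; flag items whose standardized
# loadings fall below the 0.40 cutoff used in the study
fit_one <- cfa('f =~ x1 + x2 + x3 + x4', data = HolzingerSwineford1939)
loads <- standardizedSolution(fit_one)
subset(loads, op == "=~" & est.std < 0.40)[, c("lhs", "rhs", "est.std")]
```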
Data Analysis

Data were analyzed using R software (version 4.2.2; R Foundation for Statistical Computing) [ ]. First, we calculated descriptive statistics, including mean values, SDs, and correlations between study variables for each tool. Second, a confirmatory factor analysis of perceived usefulness, perceived ease of use, social influence, trust, cognitive readiness, specific tool understanding, behavioral intention to use the tool, AI anxiety, and data privacy concerns was conducted using the lavaan package [ ]. We assumed at least reasonable fit for models with comparative fit index (CFI) and Tucker-Lewis index (TLI) values close to or exceeding 0.90 [ ]. Root mean square error of approximation (RMSEA) values <0.08 are considered acceptable [ ]. Finally, standardized root mean square residual (SRMR) values up to 0.08 are considered satisfactory [ ]. We compared the theoretical measurement model with 3 more parsimonious models (combining cognitive readiness and tool understanding; perceived usefulness and perceived ease of use; and AI anxiety and data privacy concerns) to assess whether the model variables were sufficiently distinct. Third, we conducted structural equation modeling (SEM) using the lavaan package [ ] to examine the relationships between the predictor variables and the intention to use the tools, addressing hypotheses 1 to 4 and research questions 1 and 2. We specified 2 models (1 for each tool) with direct effects and with the relationships of specific tool understanding and cognitive AI readiness with the intention to use the tool modeled as mediated paths. We followed the recommendations by Scharf et al [ ] to determine whether the regression coefficients should be regularized. Specifically, we applied regularization in case of multicollinearity and associated inflated SEs [ ]. The study data and R script will be made available on the web upon publication [ ].
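A runnable sketch of the measurement-model step, again using lavaan’s built-in HolzingerSwineford1939 data as a stand-in (the study’s own model had 9 factors and different items):

```r
library(lavaan)

# Confirmatory factor analysis: one latent factor per multi-item scale
cfa_model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
fit_cfa <- cfa(cfa_model, data = HolzingerSwineford1939)

# The fit indices reported in the Results section
fitMeasures(fit_cfa, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
```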
Preregistration Statement

The hypotheses were preregistered in the Open Science Framework [ ]. Analyses that were not preregistered are identified as exploratory.

Results
The table below presents the means, SDs, and correlations. We specified the theoretical model with perceived usefulness, perceived ease of use, social influence, trust, cognitive readiness, specific tool understanding, behavioral intention to use the tool, AI anxiety, and data privacy concerns loading on separate factors. The theoretical model fit the data adequately (FB tool: χ2(370)=808.9, P<.001; CFI=0.89; TLI=0.87; RMSEA=0.08; SRMR=0.08 and TR tool: χ2(370)=713.41, P<.001; CFI=0.93; TLI=0.92; RMSEA=0.07; SRMR=0.06).
The theoretical model fit the data better than the 3 more parsimonious models (cognitive readiness and specific tool understanding combined, FB tool: Δχ2(7)=50.37, P<.001 and TR tool: Δχ2(7)=72.68, P<.001; perceived usefulness and perceived ease of use combined, FB tool: Δχ2(8)=257.79, P<.001 and TR tool: Δχ2(1)=435.43, P<.001; and AI anxiety and data privacy concerns combined, FB tool: Δχ2(8)=240.91, P<.001 and TR tool: Δχ2(1)=133.6, P<.001). Thus, we concluded that the model variables were sufficiently distinct.
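One such nested comparison can be sketched by continuing the stand-in example above: merging two factors yields a more parsimonious nested model, and lavaan’s anova() returns the χ2 difference test.

```r
# More parsimonious model: two of the factors merged into one
cfa_merged <- '
  visual    =~ x1 + x2 + x3
  textspeed =~ x4 + x5 + x6 + x7 + x8 + x9
'
fit_merged <- cfa(cfa_merged, data = HolzingerSwineford1939)

# Chi-square difference test; a significant result favors the separate factors
anova(fit_merged, fit_cfa)
```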
To test hypotheses 1 to 4 and research questions 1 and 2, we specified 2 SEMs (1 for each tool) in which the behavioral intention to use the FB tool and the TR tool was predicted by the respective UTAUT variables (ie, perceived usefulness, perceived ease of use, social influence, and trust); tool understanding; cognitive readiness; and the control variables AI anxiety, data privacy concerns, age, gender (0=man and 1=woman and nonbinary), and study country (1=Germany and 0=English-speaking countries). In addition, we added mediated pathways from specific tool understanding and cognitive AI readiness to the intention to use the tools through perceived usefulness, perceived ease of use, and trust in the tool. No inflated SEs were observed, and we proceeded with the interpretation of the SEM without regularization. The results are presented in the tables below.
The relevant paths differ between the 2 models. Perceived usefulness and social influence showed the expected positive relationships with the intention to use both tools, supporting hypotheses 1 and 3. However, trust was unrelated to use intention in both models, and perceived ease of use was unrelated to the intention to use the FB tool and negatively related to the intention to use the TR tool. Accordingly, we found no support for hypotheses 2 and 4. AI anxiety was negatively related to use intentions in both models. Finally, the exploratory mediation analysis results suggest that the relationships of tool understanding and cognitive technology readiness with the intention to use the FB tool are not mediated through perceived usefulness, perceived ease of use, or trust. There was a negative mediation effect of the relationship between tool understanding and the intention to use the TR tool through perceived ease of use; that is, tool understanding was positively related to perceived ease of use, which, in turn, was negatively associated with use intention.
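The indirect effects reported in the tables below are defined parameters of the fitted models. The following runnable toy mediation with simulated composite scores mirrors how such effects are defined and extracted in lavaan; all data and effect sizes are artificial.

```r
library(lavaan)

# Simulated composites mirroring the TU -> PE -> IU mediation structure
set.seed(1)
n  <- 206
TU <- rnorm(n)             # tool understanding (simulated)
PE <- 0.5 * TU + rnorm(n)  # perceived ease of use
IU <- -0.3 * PE + rnorm(n) # intention to use the tool

toy <- data.frame(TU, PE, IU)
m <- '
  PE ~ a*TU
  IU ~ b*PE + TU
  ind_TU_PE := a*b   # indirect effect of TU on IU through PE
'
fit_sem <- sem(m, data = toy)

# Unstandardized estimates with 95% CIs for the defined indirect effect
est <- parameterEstimates(fit_sem, ci = TRUE, level = 0.95)
subset(est, op == ":=")[, c("label", "est", "se", "pvalue", "ci.lower", "ci.upper")]

# Standardized coefficients (the beta columns in the tables)
standardizedSolution(fit_sem)
```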
Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
1. PUb | —c | 0.35 | 0.74 | 0.77 | 0.15 | 0.73 | −0.05 | −0.20 | 0.08 | 0.05 | −0.12 | — |
2. PEd | 0.44 | — | 0.26 | 0.38 | 0.54 | 0.23 | −0.26 | −0.37 | 0.05 | −0.10 | 0.01 | — |
3. SIe | 0.59 | 0.40 | — | 0.71 | 0.19 | 0.78 | −0.02 | −0.25 | 0.13 | 0.12 | −0.25 | — |
4. TRf | 0.68 | 0.50 | 0.57 | — | 0.19 | 0.72 | −0.17 | −0.31 | 0.08 | 0.04 | −0.22 | — |
5. TUg | 0.10 | 0.43 | −0.01 | 0.14 | — | 0.15 | −0.21 | −0.19 | 0.21 | −0.13 | −0.08 | — |
6. IUh | 0.70 | 0.49 | 0.67 | 0.66 | 0.08 | — | −0.11 | −0.41 | 0.11 | 0.11 | −0.23 | — |
7. PCi | −0.07 | −0.17 | −0.06 | −0.24 | −0.10 | −0.18 | — | 0.33 | −0.11 | 0.19 | −0.03 | — |
8. ANXj | −0.08 | −0.31 | −0.11 | −0.21 | −0.22 | −0.32 | 0.31 | — | −0.12 | −0.11 | 0.16 | — |
9. CRk | 0.07 | 0.15 | 0.14 | 0.09 | 0.21 | 0.22 | −0.11 | −0.19 | — | 0.01 | −0.07 | — |
10. Age | 0.01 | −0.03 | 0.12 | −0.06 | −0.21 | 0.02 | 0.11 | −0.03 | 0.01 | — | −0.11 | — |
11. Genderl | −0.08 | −0.01 | −0.15 | −0.11 | 0.00 | −0.12 | −0.02 | 0.11 | −0.07 | −0.11 | — | — |
12. Countrym | −0.10 | −0.06 | −0.13 | 0.01 | −0.02 | −0.03 | −0.02 | −0.10 | −0.02 | −0.22 | 0.21 | — |
FB tooln, mean (SD) | 3.2 (0.9) | 3.7 (0.8) | 2.9 (0.9) | 3.4 (0.9) | 4.2 (1.0) | 2.9 (1.1) | 4.2 (1.6) | 2.7 (0.9) | 2.5 (1.0) | 28.1 (7.0) | 0.8 (0.4) | 0.5 (0.5) |
TR toolo, mean (SD) | 2.8 (1.1) | 3.9 (0.8) | 2.7 (1.0) | 3.0 (1.0) | 4.5 (1.1) | 2.4 (1.2) | 4.0 (1.7) | 2.9 (1.0) | 2.5 (1.0) | 28.1 (7.0) | 0.8 (0.4) | 0.5 (0.5) |
aThe lower triangle of the correlation table contains the correlations for the FB tool, and the upper triangle contains the correlations for the TR tool. All correlations ≥|0.14| are significant at P<.05.
bPU: perceived usefulness.
cNot applicable.
dPE: perceived ease of use.
eSI: social influence.
fTR: trust in the tool.
gTU: tool understanding.
hIU: intention to use the tool.
iPC: privacy concerns.
jANX: artificial intelligence anxiety.
kCR: cognitive technology readiness.
lCode: 0=man and 1=woman and nonbinary.
mCode: 1=Germany and 0=English-speaking country.
nFB tool: feedback tool.
oTR tool: treatment recommendation tool.
Feedback tool
Effect | B (SE) | β (95% CI) | P value
Direct effects (DVa=IUb) | ||||
PUc | 0.63 (0.11) | .51 (.30 to .72) | <.001 | |
PEd | 0.06 (0.06) | .03 (−.09 to .15) | .59 | |
SIe | 0.37 (0.07) | .32 (.19 to .46) | <.001 | |
TRf | 0.06 (0.12) | .04 (−.19 to .27) | .72 | |
CRg | 0.12 (0.05) | .12 (.02 to .22) | .02 | |
TUh | −0.07 (0.05) | −.07 (−.18 to .03) | .16 | |
PCi | −0.03 (0.05) | −.04 (−.13 to .06) | .42 | |
ANXj | −0.18 (0.06) | −.18 (−.29 to −.07) | .001 | |
Age | 0.00 (0.04) | −.01 (−.10 to .07) | .74 | |
Genderk | −0.08 (0.04) | −.03 (−.12 to .05) | .48 | |
Countryl | 0.04 (0.04) | .02 (−.07 to .11) | .66 | |
Direct effects (DVs=PU, PE, and TR) | ||||
TU→PU | 0.09 (0.08) | .12 (−.03 to .27) | .13 | |
CR→PU | 0.04 (0.08) | .04 (−.12 to .20) | .60 | |
TU→PE | 0.24 (0.06) | .45 (.32 to .57) | <.001 | |
CR→PE | 0.02 (0.07) | .04 (−.11 to .18) | .62 | |
TU→TR | 0.09 (0.08) | .13 (−.02 to .28) | .09 | |
CR→TR | 0.07 (0.08) | .10 (−.06 to .26) | .22 | |
Indirect effects | ||||
TU→PU→IU | 0.06 (0.04) | .06 (−.02 to .14) | .16 | |
TU→PE→IU | 0.01 (0.03) | .01 (−.04 to .07) | .59 | |
TU→TR→IU | 0.01 (0.02) | .01 (−.02 to .04) | .73 | |
CR→PU→IU | 0.02 (0.04) | .02 (−.06 to .10) | .60 | |
CR→PE→IU | 0.00 (0.00) | .00 (−.01 to .01) | .71 | |
CR→TR→IU | 0.00 (0.01) | .00 (−.02 to .03) | .73 |
aDV: dependent variable.
bIU: intention to use the tool.
cPU: perceived usefulness.
dPE: perceived ease of use.
eSI: social influence.
fTR: trust in the tool.
gCR: cognitive technology readiness.
hTU: tool understanding.
iPC: privacy concerns.
jANX: artificial intelligence anxiety.
kCode: 0=man and 1=woman and nonbinary.
lCode: 1=Germany and 0=English-speaking country.
Treatment recommendation tool
Effect | B (SE) | β (95% CI) | P value
Direct effects (DVa=IUb) | |||||
PUc | 0.31 (0.11) | .28 (.06 to .50) | .01 | ||
PEd | −0.29 (0.06) | −.18 (−.30 to −.06) | .004 | ||
SIe | 0.56 (0.08) | .50 (.34 to .65) | <.001 | ||
TRf | 0.23 (0.11) | .17 (−.04 to .37) | .12 | ||
CRg | −0.01 (0.04) | .00 (−.09 to .08) | .91 | ||
TUh | 0.02 (0.05) | .02 (−.07 to .12) | .65 | ||
PCi | −0.01 (0.05) | −.01 (−.10 to .08) | .81 | ||
ANXj | −0.25 (0.06) | −.21 (−.33 to −.10) | <.001 | ||
Age | 0.00 (0.04) | −.02 (−.10 to .06) | .64 | ||
Genderk | −0.04 (0.04) | −.01 (−.09 to .07) | .74 | ||
Countryl | −0.08 (0.04) | −.03 (−.11 to .04) | .40 | ||
Direct effects (DVs=PU, PE, and TR) | |||||
TU→PU | 0.15 (0.07) | .15 (.01 to .29) | .04 | ||
CR→PU | 0.04 (0.08) | .04 (−.12 to .19) | .64 | ||
TU→PE | 0.40 (0.05) | .57 (.47 to .68) | <.001 | ||
CR→PE | −0.06 (0.07) | −.07 (−.20 to .06) | .30 | ||
TU→TR | 0.15 (0.07) | .19 (.04 to .33) | .01 | ||
CR→TR | 0.06 (0.08) | .06 (−.10 to .22) | .44 | ||
Indirect effects | |||||
TU→PU→IU | 0.05 (0.03) | .04 (−.01 to .09) | .12 | ||
TU→PE→IU | −0.11 (0.04) | −.10 (−.18 to −.03) | .01 | ||
TU→TR→IU | 0.03 (0.02) | .03 (−.01 to .08) | .19 | ||
CR→PU→IU | 0.01 (0.02) | .01 (−.03 to .05) | .64 | ||
CR→PE→IU | 0.02 (0.01) | .01 (−.01 to .04) | .34 | ||
CR→TR→IU | 0.01 (0.01) | .01 (−.02 to .04) | .49 |
aDV: dependent variable.
bIU: intention to use the tool.
cPU: perceived usefulness.
dPE: perceived ease of use.
eSI: social influence.
fTR: trust in the tool.
gCR: cognitive technology readiness.
hTU: tool understanding.
iPC: privacy concerns.
jANX: artificial intelligence anxiety.
kCode: 0=man and 1=woman and nonbinary.
lCode: 1=Germany and 0=English-speaking country.
Discussion
Principal Findings
In recent years, there has been a rapid growth in the development of AI-enabled mental health care tools. To investigate the implementation challenges and potential user needs, in this study, we examined the intention to use 2 AI-enabled mental health care tools among psychology students and psychotherapists in training. The first tool provides feedback to the psychotherapist on their adherence to MI techniques by analyzing data collected during psychotherapy sessions. The second tool uses patient voice samples to derive mood scores that the therapists may use for treatment decisions. An extended UTAUT model was used to analyze the results, which showed that perceived usefulness and social influence had a positive effect on the intention to use both tools. However, trust was unrelated to the intention to use both tools, and perceived ease of use was unrelated (FB tool) and even negatively related (TR tool) to the intention to use when considering all predictors in 1 model.
The findings of this study are partly in line with previous research on AI-CDSSs in medicine [ , ]. Fan et al [ ] found positive associations of perceived usefulness and trust with use intentions among a sample of health care professionals, and Zhai et al [ ] reported positive relationships of perceived usefulness and social influence with the intention to use AI-assisted contouring technology among radiation oncologists. Furthermore, Tran et al [ ] identified social influence as the only significant predictor of the intention to use AI-CDSSs among undergraduate medical students. Gado et al [ ] found support for direct effects of perceived usefulness, AI knowledge, and perceived social norms on the intention to use AI as well as indirect effects of perceived ease of use on use intention via positive attitudes toward AI in a sample of psychology students. The consistent link between social influence and AI use intentions found in studies using student samples may be explained by students’ greater susceptibility to the influence of peers and prospective employers [ ]. As students have yet to develop a professional identity that shapes their work-related decisions, they may be more likely to align their decisions with the perceived expectations of influential others [ ].

One explanation for the null and negative relationships between perceived ease of use and use intentions for AI-generated recommendations in the mental health field may be the high stakes of accepting the tools’ advice. The assessment of symptom severity often involves complex interactions with the patient and reflections on psychotherapeutic elements, which may make participants skeptical of a device that is perceived as being easy to use. This interpretation might be supported by a study predicting intentions to learn about AI applications among medical staff [ ], which found that perceived ease of use was the strongest predictor of the intention to learn how to use AI-enabled tools in health care. Combined with the results of this study, it may be assumed that ease of use positively predicts interactions with AI-generated advice that align with the user’s level of competency and professionalism. That is, ease of use may positively predict learning intentions but not necessarily the intention to use high-stakes mental health tools among students and trainees who have not yet gained profound professional experience. Students’ primary task at university is to learn and acquire skills and knowledge. The ease with which an AI-enabled tool can be applied likely becomes more relevant when interaction with such tools is required or advantageous for professional performance. More research is needed to understand the conditions under which perceived ease of use is positively related to AI use intentions among medical and mental health practitioners and to explore the implications of the high stakes associated with AI-generated recommendations.

Trust in the tools was unrelated, whereas AI anxiety was negatively related, to the intention to use both the FB and TR tools. One explanation for this finding may be participants’ limited insight into the functioning mechanisms of the tools. A profound assessment of their trust in the tools requires more in-depth knowledge than assessing their AI anxiety.
Specifically, whether the AI tool “will provide data in [their] best interest,” “provides access to sincere and genuine feedback,” or “will perform its role of a supportive system very well” [ ] may be difficult to assess without having used the tool in practice and, thus, may be less relevant for students’ intention to use the tool. In contrast, AI anxiety represents intuitive, affective reactions, such as feeling apprehensive about the tool or being hesitant to use it for fear of making mistakes [ ]. As students and psychotherapists in training have limited to no experience interacting with AI-generated feedback, they may base their decision-making on intuitive, emotional reactions that are better represented by AI anxiety than by trust in the tools [ ].

By differentiating between specific tool understanding and more general cognitive technology readiness, this study moves beyond previous research that focused on the role of general AI knowledge in predicting general use intention [ ]. The mediation analyses revealed that none of the 3 UTAUT variables mediated the relationships of tool understanding and cognitive technology readiness with the intention to use the FB tool. However, there was a positive direct relationship between cognitive technology readiness and the intention to use the FB tool. This might indicate that general AI understanding may spur use intentions for low-stakes AI-generated advice but not the intention to use AI advice for deriving treatment decisions. In addition, in line with the direct effects, perceived ease of use emerged as a negative mediator between specific tool understanding and the intention to use the TR tool. The results of the exploratory mediation models highlight the relevance of distinguishing between different AI-enabled tools when assessing the relationship between different forms of AI knowledge and use intentions.

Limitations and Future Directions
This study has some limitations. First, we collected data at only 1 time point. Although cross-sectional designs are commonly chosen to investigate mechanisms predicted by the UTAUT [ , ], they prevent the assessment of an order of effects. The adoption of AI-generated advice should be studied longitudinally to increase the understanding of use-predicting mechanisms. Second, although studying technology acceptance with deterministic models, such as the UTAUT and TAM, has a long tradition, such studies have recently been criticized as overly simplistic, which lowers their explanatory power. In this vein, the focus on 2 specific AI-enabled mental health tools may be highlighted as a strength of this study, as it increases the ecological validity of the results. However, future research should seek to integrate organizational and system processes to provide a more profound understanding of the mechanisms that prevent and promote technology adoption. Other frameworks and theories, such as activity theory [ ], adaptive structuration theory [ ], and the Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability of Health and Care Technologies framework [ ], may serve as theoretical underpinnings of research investigating use in context instead of focusing on individual-centered variables alone [ ]. Finally, we focused on psychology students and psychotherapists in training as a potential user group and found discrepancies between our results and previous research findings [ , ]. Future research should compare adoption and adoption intentions among multiple (potential) user groups and tools to shed light on tool-dependent and user-dependent predicting mechanisms.

Conclusions
This study provides insights into the individual implementation challenges of AI-enabled FB and TR tools used in mental health care. The results highlight the relevance of specific UTAUT predictors as general drivers of AI technology adoption in mental health care (ie, perceived usefulness, social influence, and AI anxiety) and emphasize the need to distinguish between different AI technologies with reference to other influencing factors (ie, perceived ease of use, cognitive technology readiness, and tool understanding). Future research should explore the conditions under which perceived ease of use is positively related to AI use intentions among mental health practitioners.
Conflicts of Interest
None declared.
References
- Sendak MP, D’Arcy J, Kashyap S, Gao M, Nichols M, Corey K, et al. A path for translation of machine learning products into healthcare delivery. EMJ Innov. Jan 27, 2020:1-14. [CrossRef]
- Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A'Court C, et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res. Nov 01, 2017;19(11):e367. [FREE Full text] [CrossRef] [Medline]
- Yusof MM, Kuljis J, Papazafeiropoulou A, Stergioulas LK. An evaluation framework for Health Information Systems: human, organization and technology-fit factors (HOT-fit). Int J Med Inform. Jun 2008;77(6):386-398. [CrossRef] [Medline]
- Garvey KV, Thomas Craig KJ, Russell R, Novak LL, Moore D, Miller BM. Considering clinician competencies for the implementation of artificial intelligence-based tools in health care: findings from a scoping review. JMIR Med Inform. Nov 16, 2022;10(11):e37478. [FREE Full text] [CrossRef] [Medline]
- Shachak A, Kuziemsky C, Petersen C. Beyond TAM and UTAUT: future directions for HIT implementation research. J Biomed Inform. Dec 2019;100:103315. [FREE Full text] [CrossRef] [Medline]
- Hsiao JL, Chen RF. Critical factors influencing physicians' intention to use computerized clinical practice guidelines: an integrative model of activity theory and the technology acceptance model. BMC Med Inform Decis Mak. Jan 16, 2016;16(1):3. [FREE Full text] [CrossRef] [Medline]
- Kumar A, Mani V, Jain V, Gupta H, Venkatesh VG. Managing healthcare supply chain through artificial intelligence (AI): a study of critical success factors. Comput Ind Eng. Jan 2023;175:108815. [FREE Full text] [CrossRef] [Medline]
- Wiljer D, Salhia M, Dolatabadi E, Dhalla A, Gillan C, Al-Mouaswas D, et al. Accelerating the appropriate adoption of artificial intelligence in health care: protocol for a multistepped approach. JMIR Res Protoc. Oct 06, 2021;10(10):e30940. [FREE Full text] [CrossRef] [Medline]
- Camacho J, Zanoletti-Mannello M, Landis-Lewis Z, Kane-Gill SL, Boyce RD. A conceptual framework to study the implementation of clinical decision support systems (BEAR): literature review and concept mapping. J Med Internet Res. Aug 06, 2020;22(8):e18388. [FREE Full text] [CrossRef] [Medline]
- Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. Sep 1989;13(3):319-340. [CrossRef]
- Venkatesh V, Thong JY, Xu X. Unified theory of acceptance and use of technology: a synthesis and the road ahead. J Assoc Inf Syst. May 1, 2016;17(5):328-376. [FREE Full text] [CrossRef]
- Arfi WB, Nasr IB, Kondrateva G, Hikkerova L. The role of trust in intention to use the IoT in eHealth: application of the modified UTAUT in a consumer context. Technol Forecast Soc Change. Jun 2021;167:120688. [CrossRef]
- Fan W, Liu J, Zhu S, Pardalos PM. Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system (AIMDSS). Ann Oper Res. Mar 19, 2018;294(1-2):567-592. [CrossRef]
- Lin HC, Tu YF, Hwang GJ, Huang H. From precision education to precision medicine: factors affecting medical staff's intention to learn to use AI applications in hospitals. Educ Technol Soc. Jan 2021;24(1):123-137. [FREE Full text]
- Zhai H, Yang X, Xue J, Lavender C, Ye T, Li JB, et al. Radiation oncologists' perceptions of adopting an artificial intelligence-assisted contouring technology: model development and questionnaire study. J Med Internet Res. Sep 30, 2021;23(9):e27122. [FREE Full text] [CrossRef] [Medline]
- Tran AQ, Nguyen LH, Nguyen HS, Nguyen CT, Vu LG, Zhang M, et al. Determinants of intention to use artificial intelligence-based diagnosis support system among prospective physicians. Front Public Health. Nov 26, 2021;9:755644. [FREE Full text] [CrossRef] [Medline]
- Gado S, Kempen R, Lingelbach K, Bipp T. Artificial intelligence in psychology: how can we enable psychology students to accept and use artificial intelligence? Psychol Learn Teach. Aug 12, 2021;21(1):37-56. [CrossRef]
- Cummins R, Ewbank MP, Martin A, Tablan V, Catarino A, Blackwell AD. TIM: a tool for gaining insights into psychotherapy. In: Proceedings of the World Wide Web Conference. Presented at: WWW '19: The Web Conference; May 13-17, 2019, 2019; San Francisco, CA, USA. URL: https://doi.org/10.1145/3308558.3314128 [CrossRef]
- Hirsch T, Soma C, Merced K, Kuo P, Dembe A, Caperton DD, et al. "It's hard to argue with a computer:" investigating psychotherapists' attitudes towards automated evaluation. DIS (Des Interact Syst Conf). Jun 2018;2018:559-571. [FREE Full text] [CrossRef] [Medline]
- Tanana MJ, Soma CS, Srikumar V, Atkins DC, Imel ZE. Development and evaluation of ClientBot: patient-like conversational agent to train basic counseling skills. J Med Internet Res. Jul 15, 2019;21(7):e12529. [FREE Full text] [CrossRef] [Medline]
- Imel ZE, Pace BT, Soma CS, Tanana M, Hirsch T, Gibson J, et al. Design feasibility of an automated, machine-learning based feedback system for motivational interviewing. Psychotherapy (Chic). Jun 2019;56(2):318-328. [CrossRef] [Medline]
- Huang Z, Epps J, Joachim D, Chen M. Depression detection from short utterances via diverse smartphones in natural environmental conditions. In: Proceedings of the Interspeech 2018. Presented at: Interspeech 2018; Sep 2-6, 2018, 2018; Hyderabad, India. URL: https://doi.org/10.21437/Interspeech.2018-1743 [CrossRef]
- Rønnestad MH, Ladany N. The impact of psychotherapy training: introduction to the special section. Psychother Res. May 2006;16(3):261-267. [CrossRef]
- ieso online therapy homepage. ieso. URL: https://www.iesohealth.com/why-typed-therapy [accessed 2023-05-05]
- Calati R, Courtet P. Is psychotherapy effective for reducing suicide attempt and non-suicidal self-injury rates? Meta-analysis and meta-regression of literature data. J Psychiatr Res. Aug 2016;79:8-20. [CrossRef] [Medline]
- Jan A, Meng H, Gaus YF, Zhang F. Artificial intelligent system for automatic depression level analysis through visual and vocal expressions. IEEE Trans Cogn Dev Syst. Sep 2018;10(3):668-680. [CrossRef]
- Karam ZN, Provost EM, Singh S, Montgomery J, Archer C, Harrington G, et al. Ecologically valid long-term mood monitoring of individuals with bipolar disorder using speech. Proc IEEE Int Conf Acoust Speech Signal Process. May 2014;2014:4858-4862. [FREE Full text] [CrossRef] [Medline]
- Huang Z, Epps J, Joachim D, Sethu V. Natural language processing methods for acoustic and landmark event-based features in speech-based depression detection. IEEE J Sel Top Signal Process. Feb 2020;14(2):435-448. [CrossRef]
- Sokero TP, Melartin TK, Rytsälä HJ, Leskelä US, Lestelä-Mielonen PS, Isometsä ET. Prospective study of risk factors for attempted suicide among patients with DSM-IV major depressive disorder. Br J Psychiatry. Apr 02, 2005;186(4):314-318. [CrossRef] [Medline]
- Sonde health homepage. Sonde Health. URL: https://www.sondehealth.com [accessed 2023-05-05]
- Venkatesh V. Adoption and use of AI tools: a research agenda grounded in UTAUT. Ann Oper Res. Jan 19, 2021;308(1-2):641-652. [CrossRef]
- Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q. Sep 2003;27(3):425-478. [CrossRef]
- Student mental health AI tools. Open Science Framework. URL: https://osf.io/fqdzb [accessed 2023-06-26]
- Owusu MK, Owusu A, Fiorgbor ET, Atakora J. Career aspiration of students: the influence of peers, teachers and parents. J Educ Soc Behav Sci. Apr 29, 2021;34(2):67-79. [CrossRef]
- Aafjes-van Doorn KA, Kamsteeg C, Bate J, Aafjes M. A scoping review of machine learning in psychotherapy research. Psychother Res. Jan 29, 2021;31(1):92-116. [CrossRef] [Medline]
- Chekroud AM, Bondar J, Delgadillo J, Doherty G, Wasil A, Fokkema M, et al. The promise of machine learning in predicting treatment outcomes in psychiatry. World Psychiatry. Jun 18, 2021;20(2):154-170. [FREE Full text] [CrossRef] [Medline]
- Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. Oct 29, 2019;17(1):195. [FREE Full text] [CrossRef] [Medline]
- Seufert S, Guggemos J, Sailer M. Technology-related knowledge, skills, and attitudes of pre- and in-service teachers: the current situation and emerging trends. Comput Human Behav. Feb 2021;115:106552. [FREE Full text] [CrossRef] [Medline]
- Oppenheimer DM, Meyvis T, Davidenko N. Instructional manipulation checks: detecting satisficing to increase statistical power. J Exp Social Psychol. Jul 2009;45(4):867-872. [CrossRef]
- Karaca O, Çalışkan SA, Demir K. Medical artificial intelligence readiness scale for medical students (MAIRS-MS) - development, validity and reliability study. BMC Med Educ. Feb 18, 2021;21(1):112. [FREE Full text] [CrossRef] [Medline]
- Dimitrova M. Of discovery and dread: the importance of work challenges for international business travelers' thriving and global role turnover intentions. J Organ Behav. Feb 03, 2020;41(4):369-383. [CrossRef]
- Venkatesh V, Thong JY, Chan FK, Hu PJ, Brown SA. Extending the two-stage information systems continuance model: incorporating UTAUT predictors and the role of context. Inf Syst J. Nov 2011;21(6):527-555. [CrossRef]
- Chai CS, Wang X, Xu C. An extended theory of planned behavior for the modelling of Chinese secondary school students’ intention to learn artificial intelligence. Mathematics. Nov 23, 2020;8(11):2089. [CrossRef]
- Fietta V, Zecchinato F, Stasi BD, Polato M, Monaro M. Dissociation between users’ explicit and implicit attitudes toward artificial intelligence: an experimental study. IEEE Trans Human Mach Syst. Jun 2022;52(3):481-489. [CrossRef]
- Liang Y, Lee SA. Fear of autonomous robots and artificial intelligence: evidence from national representative data with probability sampling. Int J Soc Robot. Mar 8, 2017;9(3):379-384. [CrossRef]
- Sindermann C, Sha P, Zhou M, Wernicke J, Schmitt HS, Li M, et al. Assessing the attitude towards artificial intelligence: introduction of a short measure in German, Chinese, and English language. Künstl Intell. Sep 23, 2020;35(1):109-118. [CrossRef]
- Brady GM, Truxillo DM, Bauer TN, Jones MP. The development and validation of the Privacy and Data Security Concerns Scale (PDSCS). Int J Select Assess. Sep 29, 2020;29(1):100-113. [CrossRef]
- R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria. R Foundation for Statistical Computing; 2022.
- Rosseel Y. lavaan: an R package for structural equation modeling. J Stat Soft. 2012;48(2):1-36. [CrossRef]
- Hu LT, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling Multidiscipl J. Jan 1999;6(1):1-55. [CrossRef]
- Browne MW, Cudeck R. Alternative ways of assessing model fit. Sociol Methods Res. Jun 29, 2016;21(2):230-258. [CrossRef]
- Scharf F, Pförtner J, Nestler S. Can ridge and elastic net structural equation modeling be used to stabilize parameter estimates when latent factors are correlated? Struct Equ Model Multidiscipl J. Jun 15, 2021;28(6):928-940. [CrossRef]
- Hurst JL, Good LK. Generation Y and career choice: the impact of retail career perceptions, expectations and entitlement perceptions. Career Dev Int. 2009;14(6):570-593. [CrossRef]
- Luyckx K, Klimstra TA, Duriez B, Van Petegem S, Beyers W. Personal identity processes from adolescence through the late 20s: age trends, functionality, and depressive symptoms. Soc Dev. Jun 04, 2013;22(4):701-721. [CrossRef]
- Lin HC, Tu YF, Hwang GJ, Huang H. From precision education to precision medicine. Educ Technol Soc. Jan 2021;24(1):123-137. [FREE Full text]
- Kwak Y, Ahn JW, Seo YH. Influence of AI ethics awareness, attitude, anxiety, and self-efficacy on nursing students' behavioral intentions. BMC Nurs. Sep 30, 2022;21(1):267. [FREE Full text] [CrossRef] [Medline]
- Allen D, Karanasios S, Slavova M. Working with activity theory: context, technology, and information behavior. J Am Soc Inf Sci. Feb 18, 2011;62(4):776-788. [CrossRef]
- DeSanctis G, Poole MS. Capturing the complexity in advanced technology use: adaptive structuration theory. Organ Sci. May 1994;5(2):121-147. [CrossRef]
Abbreviations
AI: artificial intelligence |
AI-CDSS: artificial intelligence–enabled clinical decision support system |
CFI: comparative fit index |
FB tool: feedback tool |
MI: motivational interviewing |
RMSEA: root mean square error of approximation |
SEM: structural equation modeling |
SRMR: standardized root mean square residual |
TAM: Technology Acceptance Model |
TLI: Tucker-Lewis index |
TR tool: treatment recommendation tool |
UTAUT: Unified Theory of Acceptance and Use of Technology |
Edited by A Kushniruk; submitted 28.02.23; peer-reviewed by D Kohen, J Ferrer Costa; comments to author 01.05.23; revised version received 08.05.23; accepted 14.05.23; published 12.07.23.
Copyright©Anne-Kathrin Kleine, Eesha Kokje, Eva Lermer, Susanne Gaube. Originally published in JMIR Human Factors (https://humanfactors.jmir.org), 12.07.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.