This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.
Emerging technologies, such as artificial intelligence (AI), have the potential to enhance service responsiveness and quality, improve reach to underserved groups, and help address the lack of workforce capacity in health and mental health care. However, little research has been conducted on the acceptability of AI, particularly in mental health and crisis support, and how this may inform the development of responsible and responsive innovation in the area.
This study aims to explore the level of support for the use of technology and automation, such as AI, in Lifeline’s crisis support services in Australia; the likelihood of service use if technology and automation were implemented; the impact of demographic characteristics on the level of support and likelihood of service use; and reasons for not using Lifeline’s crisis support services if technology and automation were implemented in the future.
A mixed methods study involving a computer-assisted telephone interview and a web-based survey was undertaken from 2019 to 2020 to explore expectations and anticipated outcomes of Lifeline’s crisis support services in a nationally representative community sample (n=1300) and a Lifeline help-seeker sample (n=553). Participants were aged between 18 and 93 years. Quantitative descriptive analysis, binary logistic regression models, and qualitative thematic analysis were conducted to address the research objectives.
One-third of the community and help-seeker participants did not support the collection of information about service users through technology and automation (ie, via AI), and approximately half of the participants reported that they would be less likely to use the service if automation was introduced. Significant demographic differences were observed between the community and help-seeker samples. Of the demographics, only older age predicted being less likely to endorse technology and automation to tailor Lifeline’s crisis support service and use such services (odds ratio 1.48-1.66, 99% CI 1.03-2.38;
Although Lifeline plans to always have a real person providing crisis support, help-seekers readily fear that this will not be the case if new technology and automation, such as AI, are introduced. Consequently, incorporating innovative use of technology to improve help-seeker outcomes in such services will require careful messaging and assurance that the human connection will continue.
In 2016, the founder and executive chairman of the World Economic Forum, Klaus Schwab, wrote that “we stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another” [
One such innovation has been the development of artificial intelligence (AI). AI has been described as the ability of a computer or machine to mimic the capabilities of the human mind, such as learning from examples and experiences, recognizing objects, understanding and responding to language, making decisions, and solving problems [
Importantly, in the fields of health and mental health, AI has been argued to have the potential to enhance existing services by facilitating diagnostics and decision-making, expand the reach and personalization of services to underserved populations and high-risk groups, and ease the human resources crisis in mental health care and support [
AI has even been viewed by some academics as representing the future of mental health research methodology because of its superior ability to recognize the complexity of disorders, heterogeneity of clients, and varied mental health contexts compared with traditional statistical approaches that tend to rely on
Although the current use of ML techniques for diagnosis in real-world mental health settings is limited because of the lack of clinical validation and readiness of ML applications [
In Australia, the national 24-hour crisis support service for the general community, Lifeline, featured heavily in the Australian Department of Health’s Aus $10.4 million (US $7.2 million) national mental health communications campaign to encourage Australians to reach out for mental health support during COVID-19 [
Despite the rapid advancement of technological innovations in health care, research on consumers’ acceptance of new technologies has been scarce. To the best of our knowledge, there has been no research on consumer perspectives of AI as applied to the fields of mental health and crisis support, representing significant knowledge and practice gaps in this area. A recent systematic review of 117 articles published from 2005 to 2016 on data mining for AI in health care analytics revealed that one-third of the reviewed research did not use expert opinions in any form [
In addition, the few studies conducted specifically on consumer perspectives of AI have focused solely on its use in medical health contexts. Nascent research has shown that trust and understanding of AI are important factors in the acceptance of AI in medical applications [
These concerns have been largely corroborated by reports from surveys of nationally representative and consumer samples. For instance, in a 2020 survey of 2575 Australians, perceptions of the adequacy of current regulations and laws to make AI use safe, the uncertain impact of AI on society and jobs, and reported familiarity and understanding of AI were found to strongly influence AI acceptance more broadly [
To date, research has focused on medical care applications, and the extent to which findings can be translated into AI applications in mental health and crisis support contexts remains unclear. With global investment in AI technology rising from 1.7 billion in 2010 to 14.9 billion in 2014 [
A possible avenue for AI-integrated technology to promote increased service capacity and quality is to support the crisis counselor workforce (often volunteer-based) to feel better equipped to support help-seekers, train and support each other, and prevent staff burnout and attrition at an organizational level. Research shows that crisis counselors spend a considerable amount of time taking manual notes and cross-referencing these notes while actively trying to support help-seekers, which adds to their cognitive load [
This study aimed to address the significant gaps in understanding consumer perspectives of AI in mental health support for crisis support services by exploring, in the context of Lifeline, Australia’s largest crisis support helpline: (1) the level of support for the use of technology and automation, (2) the likelihood of service use if technology and automation were implemented, (3) the impact of demographic characteristics on the level of support and likelihood of service use if technology and automation were implemented, and (4) reasons for not using the services if technology and automation were implemented. These perspectives were explored for the Australian general community and specifically for Lifeline service users (help-seekers). It should be noted that AI can involve the automation of processes, such as self-driving vehicles, but automation does not necessarily include AI. The focus of this research is on AI-integrated technology and automation.
A mixed methods approach using the triangulation design (validating quantitative data model [
The community sample comprised a nationally representative sample of 1300 community-dwelling adults across Australia [
A computer-assisted telephone interview (CATI) was administered at the Social Research Centre at the Australian National University by trained interviewers. Data collection took place over 5 weeks, from October 28 to November 30, 2019. Contact details were purchased from the commercial sample provider SamplePages and included 16,245 mobile and 11,375 landline telephone numbers across Australia. The landline sample was stratified based on the state and capital city or rest of the state divisions. Geographic-based strata were not put in place for mobile devices, as no a priori geographic information was available. Random digit dialing (RDD) was used to obtain participants from all states or territories of Australia.
The interviews included 910 participants from the mobile RDD sample and 390 from the landline RDD sample. For people contacted on a landline number, any household member aged ≥18 years was eligible to participate. For people contacted on a mobile number, the survey was conducted with the phone user. Mobile phones were sent a pre-approach SMS text message with an opt-out option before contact by telephone. Interviews were conducted in English only. The average interview length was 14.8 minutes. There were no incentives for participation.
The help-seeker sample comprised 553 Lifeline help-seekers aged 18 to 77 (mean 39.60, SD 13.92) years, and 313 (74.2%; valid percent) were women.
A self-report survey was made available to Australian residents (aged ≥18 years) who had previously contacted Lifeline. Data collection took place over 6 months, from December 16, 2019, to June 16, 2020, via the web-based survey platform Qualtrics (Copyright 2021 Qualtrics) [
Recruitment was conducted through Lifeline Australia’s official social media pages (Facebook, Twitter, and LinkedIn) and website; a survey link shared at the end of Lifeline’s web-based chat and text message contacts; and snowballing across Lifeline Australia’s Lived Experience Advisory Group (LEAG) members and mental health organizations, such as Beyond Blue and SANE Australia. On clicking the survey link, participants were presented with an information sheet detailing the study aims, participant involvement, confidentiality and anonymity, data storage procedures, and investigator and ethics contact information. Informed consent was obtained from all participants through the web. Respondents were able to review and change their answers via a back button if desired.
The survey received 1278 total responses through Qualtrics, of which 725 (56.73%) were excluded because they were <60% complete or the respondent had not previously contacted Lifeline. Where complete-case analysis was required, analyses were compared with these cases excluded and with them included using multiple imputation. The median completion time was 11.7 minutes. No incentives were provided for participation.
The questionnaire measures aimed to determine participants’ awareness, expectations, and outcomes of using Lifeline’s crisis support services. Demographic questions were asked about age, gender, sexual orientation, country of birth, main language spoken at home, indigenous status, and household composition. These characteristics were chosen because they represent groups of interest to Lifeline that may be at an elevated risk of suicidality and they can be used to assess regional variation. No standardized measures for assessing community or help-seeker expectations of AI as applied to crisis support services have been identified in the literature [
Participants were asked, “When people contact Lifeline there is always a real person on the other end. However, there is the potential in the future for technology and automation to be used to help Lifeline counsellors to provide better services. Using a scale of 1 to 5, where 1 is not at all and 5 is very much, when people contact Lifeline, to what extent would you support Lifeline collecting information about individual users through technology and automation in order to tailor the services provided to the needs of each individual?” The additional prompts of “for example, to identify types of needs callers have and how they are feeling” and “automation refers to things like using artificial intelligence to monitor callers and measure their level of distress” were provided to sample 1 (community). In sample 2 (help-seekers) the following detail was provided: “for example, automation can refer to things like using artificial intelligence to monitor callers and measure their levels of distress.”
Participants were asked, “If Lifeline were to use this type of technology and automation, do you think you would be less likely to use Lifeline, more likely to use Lifeline or would it not make a difference to you?” In sample 1 (community), the response options were 1 (less likely to use Lifeline), 2 (more likely to use Lifeline), and 3 (would not make a difference to you). In sample 2 (help-seekers), the response options were 1 (much less likely to use Lifeline), 2 (somewhat less likely to use Lifeline), 3 (neither more nor less likely to use Lifeline), 4 (somewhat more likely to use Lifeline), and 5 (much more likely to use Lifeline). For comparison between the samples, sample 2 scores were rescaled to range from 1 to 3, consistent with sample 1.
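To illustrate this rescaling concretely (a minimal sketch only, not the authors’ procedure; the column name and the exact collapsing rule are assumptions), the 5-point help-seeker responses could be mapped onto the 3-point community coding as follows:

```python
import pandas as pd

# Hypothetical help-seeker responses on the 5-point likelihood item
# (1 = much less likely ... 5 = much more likely); the column name is assumed.
helpseekers = pd.DataFrame({"likelihood_5pt": [1, 2, 3, 4, 5, 2, 3]})

# Assumed collapsing rule matching the community coding:
# 1 = less likely, 2 = more likely, 3 = would not make a difference.
recode = {
    1: 1, 2: 1,  # much/somewhat less likely -> less likely (1)
    3: 3,        # neither more nor less     -> no difference (3)
    4: 2, 5: 2,  # somewhat/much more likely -> more likely (2)
}
helpseekers["likelihood_3pt"] = helpseekers["likelihood_5pt"].map(recode)
print(helpseekers["likelihood_3pt"].value_counts())
```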
Participants from both samples who indicated that they would be less likely to use Lifeline were asked to elaborate on their response via the following open-ended question: “Why would you be less likely to use Lifeline as a result of Lifeline using this technology and automation?”
Quantitative data were analyzed using the statistical package SPSS (version 25.0; IBM Corporation) [
In the data set, 1.8%-9.7% of data were missing at the variable level. Model estimates for each of the regression models were compared when missing data were excluded from the analysis using listwise deletion (the default treatment of missing data for SPSS logistic regression; N=1554-1573) and when missing data were included (N=1853) using SPSS’s multiple imputation of missing values to obtain pooled estimates across 40 imputations (m=40; refer to
Significance was set at
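As an illustration of this analysis strategy (a sketch only; the study used SPSS, whereas Python with statsmodels is used here as a stand-in, and the variable names, codings, and simulated data are all assumptions), a binary logistic regression could be fitted on complete cases to obtain odds ratios with 99% CIs, with pooled estimates then obtained from multiply imputed data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.imputation import mice

# Hypothetical analysis data set; all column names and codings are assumptions.
# would_not_support: 1 = "would not support", 0 = "would support" / "neither".
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "would_not_support": rng.integers(0, 2, n).astype(float),
    "age_group": rng.choice(["18-34", "35-54", "55+"], n),
    "sample_type": rng.choice(["community", "help-seeker"], n),
    "lives_alone": rng.integers(0, 2, n).astype(float),
})
df.loc[rng.choice(n, 30, replace=False), "lives_alone"] = np.nan  # item-level missingness

# Complete-case (listwise deletion) logistic regression, mirroring SPSS's default,
# reporting odds ratios with 99% CIs as in the tables.
formula = ("would_not_support ~ C(age_group, Treatment(reference='18-34'))"
           " + C(sample_type) + lives_alone")
fit = smf.logit(formula, data=df.dropna()).fit(disp=0)
ci = fit.conf_int(alpha=0.01)  # 99% CI on the log-odds scale
odds_ratios = pd.DataFrame({
    "OR": np.exp(fit.params),
    "99% CI low": np.exp(ci[0]),
    "99% CI high": np.exp(ci[1]),
})
print(odds_ratios.round(2))

# Pooled estimates across multiply imputed data sets (the study used m=40 in SPSS);
# statsmodels' MICE is used here purely as an illustrative stand-in.
dummies = pd.get_dummies(df, columns=["age_group", "sample_type"], drop_first=True)
dummies.columns = [c.replace("-", "_").replace("+", "plus") for c in dummies.columns]
imp = mice.MICEData(dummies.astype(float))
mi_fit = mice.MICE("would_not_support ~ age_group_35_54 + age_group_55plus"
                   " + sample_type_help_seeker + lives_alone",
                   sm.Logit, imp).fit(n_burnin=10, n_imputations=40)
print(mi_fit.summary())
```

The pooled coefficients from the imputed data would be exponentiated in the same way to report odds ratios with 99% CIs.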
Open-ended responses to the reasons question were analyzed in NVivo (version 12.0; QSR International [
This study was approved by the Human Research Ethics Committee of the University of Canberra (project ID: 2133).
Descriptive information for the community and help-seeker samples is provided in
Descriptive statistics for community and help-seeker samples.
| Characteristic | Community sample (n=1300) | Help-seeker sample (n=553) | χ2 or t (df) | P value | η2a or Cramer V/φb |
| Age (years), mean (SD) | 53.43 (18.49) | 39.60 (13.92) | 17.23 (1279.34) | | |
| Gender, n (%) | | | 158.79 (2) | | |
|   Male | 606 (46.72) | 77 (18.2) | | | |
|   Female | 687 (52.96) | 313 (74.2) | | | |
|   Other | 4 (0.30) | 32 (7.6) | | | |
| Sexual orientation, n (%) | | | 41.25 (1) | | |
|   Heterosexual | 1167 (89.76) | 302 (76.8) | | | |
|   Other | 133 (10.23) | 91 (23.2) | | | |
| Country of birth, n (%) | | | 35.00 (2) | | |
|   Australia | 961 (73.92) | 346 (83.8) | | | |
|   Another English-speaking country | 159 (12.23) | 38 (9.2) | | | |
|   Non–English-speaking country | 180 (13.84) | 29 (7.0) | | | |
| Main language spoken at home, n (%) | | | 7.49 (1) | | |
|   English | 1105 (85.06) | 373 (90.5) | | | |
|   Other | 194 (14.93) | 39 (9.5) | | | |
| Indigenous status, n (%) | | | 4.05 (1) | .04 | 0.05 |
|   Yes | 31 (2.40) | 18 (4.5) | | | |
|   No | 1259 (97.59) | 382 (95.5) | | | |
| Household composition, n (%) | | | 8.07 (1) | | − |
|   Lives alone | 248 (19.07) | 107 (25.7) | | | |
|   Not alone | 1052 (80.92) | 309 (74.3) | | | |
aη2=eta-squared measure of effect size.
bΦ=phi.
Levels of community (n=1268) and help-seeker (n=426) participants’ support for the use of technology and automation to tailor Lifeline’s crisis support service.
Given the demographic differences between the samples, a direct binary logistic regression was performed on participants’ level of support for the collection of user information to tailor Lifeline’s services, with sample type and 7 sociodemographic predictors included (age, gender, sexual orientation, country of birth, main language spoken at home, indigenous status, and whether living alone). A test of the full model with all 8 predictors against a constant-only model was statistically significant (N=1592,
Logistic regression for support for the collection of user information to tailor Lifeline’s services (N=1592).
| “Would not support”a | Odds ratio (99% CI) |
| Sample type (community) | 1.16 (0.82-1.65) |
| Age (years)b | |
|   ≥55 | 1.55 (1.07-2.24)c |
|   35-54 | 1.52 (1.06-2.19)d |
| Gender (male) | 1.11 (0.84-1.47) |
| Sexual orientation (heterosexual) | 0.82 (0.54-1.26) |
| Country of birthe | |
|   Australia | 1.13 (0.67-1.88) |
|   Another English-speaking country | 1.34 (0.72-2.49) |
| Main language spoken at home (other than English) | 1.12 (0.68-1.85) |
| Indigenous status (Aboriginal or Torres Strait Islander) | 0.96 (0.42-2.19) |
| Living situation (lives alone) | 1.22 (0.87-1.72) |
a“Would support” combined with “Would neither support nor not support” is the reference group for comparison with “Would not support.”
b18 to 34 years is the reference group for age. Age groupings broadly reflect young adults (18-34 years), middle-aged adults (35-54 years), and older adults (≥55 years).
c
d
eNon–English-speaking country is the reference group for country of birth.
Likelihood of community (n=1247) and help-seeker (n=426) participants using Lifeline if technology and automation were used.
To test the sample effect while controlling for demographic differences, a direct binary logistic regression was performed. A test of the full model with all 8 predictors against a constant-only model was statistically significant (N=1572,
Logistic regression for participants’ self-reported likelihood of service use if technology and automation were implemented at Lifeline (N=1572).
| “Less likely”a | Odds ratio (99% CI) |
| Sample type (community) | 1.23 (0.87-1.75) |
| Age (years)b | |
|   ≥55 | 1.66 (1.15-2.38)c |
|   35-54 | 1.48 (1.03-2.12)d |
| Gender (male) | 1.27 (0.96-1.67) |
| Sexual orientation (heterosexual) | 0.99 (0.65-1.50) |
| Country of birthe | |
|   Australia | 1.36 (0.81-2.27) |
|   Another English-speaking country | 1.50 (0.80-2.80) |
| Main language spoken at home (other than English) | 0.78 (0.47-1.28) |
| Indigenous status (Aboriginal or Torres Strait Islander) | 2.14 (0.89-5.16) |
| Living situation (lives alone) | 1.25 (0.89-1.75) |
a“More likely” combined with “Would not make a difference” is the reference group for comparison with “Less likely.”
b18 to 34 years is the reference group for age. Age groupings broadly reflect young adults (18-34 years), middle-aged adults (35-54 years), and older adults (≥55 years).
c
d
eNon–English-speaking country is the reference group for country of birth.
A total of 837 community and help-seeker participants indicated that they would be less likely to use Lifeline if technology and automation were used, and 94.9% (795/837) of them provided a qualitative response as to why (
Reasons for community (n=595) and help-seeker (n=200) participants not using the Lifeline crisis support service if technology and automation were used—open-ended.
Respondents overwhelmingly wanted to speak to a real person rather than a
You are talking to a robot, if I was suicidal, I would rather talk to somebody than [a] computer because the computer may not understand how you feel, but a person you’re talking to might have an idea of how to cope.
Another respondent said the following:
Because automation does not really understand people. Automation is just a script and people don’t like talking to machines.
Many emphasized the need for person-to-person contact and viewed this as a strength of the current Lifeline crisis support service:
Because I think one of the attractions of Lifeline is having a person at the other end, and I’d be concerned that AI [Artificial Intelligence] couldn’t pick up what I’m saying.
Another said the following:
I think Lifeline stands out because it’s always got a person there when so many other customer service interfaces are using technology—the reason I/they go to Lifeline is because of the person.
A total of 7 subthemes were identified as specific reasons for respondents wanting to speak to a real person. In the community sample, this included the lack of emotional connection (20/433, 4.6% of main theme), where respondents discussed how they would feel “less important” and “less connected” if technology and automation were used and how they would be left with “a perception that you might be wondering if you are more of a statistic than a person in need of help” (Community No 406, Male, 33 years of age). In the help-seeker sample, this included expectations that the experience would be impersonal (46/126, 36.5% of main theme), that human expertise is greater than what technology and automation could provide (30/126, 23.8%), that the use of technology and automation would be frustrating (9/126, 7.1%), that help-seekers require emotional connection (9/126, 7.1%), that help-seekers would feel devalued if technology and automation were used (6/126, 4.7%), and that only real people can provide comfort (4/126, 3.1%).
In relation to the expectation that the crisis support service would be impersonal, a respondent stated the following:
The distress and need is immediate. There is so much cold automation out there—sometimes the cause of our issues—the thought of more is depressing and sad. All we want is a human being. Some of us are minutes away from suicide. Don’t waste a second on bullshit automation. We need human beings.
Other respondents questioned how their interactions would differ from interacting with programmed devices:
Why would I want to talk to a computer instead of a person? I could use Siri or buy a Google home device and talk to it. What is the point of Lifeline if it becomes another computer to talk to?
Many emphasized the limits of technology and that it could never replace human expertise. For example, a help-seeker stated the following:
Technology will never improve the human condition more than other humans can.
Another said the following:
AI [Artificial Intelligence] cannot sense a person’s level of distress and convey empathy the way a human can. When I hear someone say something that sounds automatic and stereotyped (reflections of strengths are a good example of this) I switch off and don’t feel able to engage with the person because I don’t feel they are listening. An AI service would do that to me—except all the time. There’s no one really listening and hearing me so there would be absolutely no point in calling. I’d feel worse after talking to an AI.
Some respondents raised concerns about whether technologies such as AI could understand the nuances and complexities of help-seekers’ crises, particularly when this was already a difficult task for humans. For example, a respondent wrote the following:
I think there are things robots can do, but in my experience, understanding people is too complex even for humans.
Help-seekers also noted the following perspectives:
AI would be based on a more generic format and would not consider the nuances of each particular concern and how the concerns affect people differently on any given day.
Others indicated the following:
There is nothing more frustrating than being panicked or stressed and having to repeat yourself over and over again to a machine.
...when I’m depressed and/or suicidal, the last thing I need to deal with is automated phone “services,” when all I need to do is talk with a human.
Help-seekers emphasized that automation would only add to existing feelings of stress, particularly for older generations who may not be so familiar with the use of technology, and would result in many hanging up because “nobody wants to feel like they are a number instead of a person and that’s even more important when they are distressed” (Help-seeker No 443, Female, 49 years of age).
Finally, a few respondents specifically brought up the notion of comfort, with the perception that automation would take away the realness of contact. One help-seeker wrote the following:
Lifeline stands out as a service through which each caller speaks and connects directly to another person. There is intrinsic and immediate comfort in this, and the contact makes a huge difference to my confidence in the service.
The next most common theme, although much less endorsed, was concerns with privacy and data sharing issues. Respondents were wary of scams, lack of security, and the potential of their data being used against them in the future. For example, a respondent said:
[I] do not like automation because so many scams rely on voice automation; it would make me stressed.
Another raised the issue of bias, limiting their trust in technology, stating the following:
Not something that I trust. Also don’t think it’s great considering these services are used by marginalised people. If the information, no matter how confidential, were leaked, it could be really bad for people who have already been dealt bad hands. The way those technologies are being developed and automated doesn’t seem to be going in a good direction. [I] believe that the development of these technologies can include biases, despite people believing that AI and tech is unbiased (built in bias).
Some were also concerned about feeling monitored, the level of control they would have over the information being shared and used, and the protection of their anonymity and confidentiality. A respondent said the following:
...some people experience paranoia in general and would be less likely to reach out if they felt they were being monitored in any way.
Another respondent stated the following:
Without specific details of the types of information to be collected and what would be done with that information, I am erring on the side of caution on this one. As a caller I like to be in control of the information I give out—I personally am quite an open-book anyway, so I generally don’t have a problem with sharing, but I think people need to feel trusted, and perhaps won’t feel trusted if this is implemented. I would also be concerned that the collection of this information would preference some callers over others somehow.
A respondent highlighted that this would be a particularly important consideration for vulnerable people:
...[they] are often abused. Using automation to detect distress could potentially cause an alert, which could put that person at risk of harm, abuse, and further trauma from services (eg, police, ambulance) when all they want is someone to listen. This type of “advancement” would be dangerous.
Concerns were also expressed regarding technology being something that was untested and may not work, which would exacerbate help-seekers’ levels of stress and anxiety. There were doubts about whether the machines could make judgments and accurately interpret mixed messages. A respondent said that it “can’t give instant answers” and involves “lots of hypotheticals” where “lots of things can change and a machine doesn’t know” (Community No 162, Male, 62 years of age). Another respondent stated the following:
I don’t believe automation can actually listen to a human being and understand the inflection and tones in the person’s voice. People who want to kill themselves don’t want to talk to automation. We hate when we speak to automation in other services (eg, banking); not good in this situation.
Community respondents provided 2 additional main themes (computer literacy issues or dislike of technology and the belief that the process would take too long) that were not evident in the help-seeking sample.
Community respondents spoke about how “frustrating” automation could be, particularly for older generations, as well as having a “hatred” or “aversion” to technology and robots. For example, a respondent stated the following:
Automation could be frustrating particularly if you’re not tech-savvy.
A less common response was the belief that the automation process would take too long. Community respondents highlighted that “people using the service would want someone immediately on the line” and that help-seekers “could have hung up or would be feeling even more distressed by it than they already were” (No 116, Female, 31 years of age). A respondent indicated the following:
...it’s hard enough dealing with your emotions and figure out which number to press to get someone to be able to talk to you.
Reference was made to how this would be especially problematic for help-seekers who experience suicidality and require immediate support.
The aim of this mixed methods study was to understand consumer perspectives of AI in mental health support for crisis support services. By surveying both general community members and Lifeline help-seekers, we found a high level of resistance to and considerable misunderstanding of potential AI technologies in crisis line services.
Community and help-seeker participants were broadly consistent in their level of support and likelihood of service use if technology and automation were implemented in Lifeline’s crisis support services in Australia. One-third of the participants did not support the collection of information about individual users through technology and automation to tailor Lifeline’s services to individual needs, whereas approximately one-fifth of the participants were supportive. Approximately half of the participants reported that they would be less likely to use the Lifeline crisis support service if it implemented technology and automation. These findings reveal that support for the use of technology and automation is not strong, that most participants would not be more likely to use the service if technology and automation were implemented, that these views hold across demographic groups, and that the reasons for not using the services relate to a preference for human contact and distrust of automation.
After controlling for demographic differences across the samples, older people (≥35 years) were found to have at least 48% greater odds of reporting that they would be less likely to support the collection of user information to tailor Lifeline’s crisis support services or to use these services if technology and automation were implemented compared with younger people. This finding may be attributed to young people, particularly young men, having higher levels of awareness, use, and acceptance of AI [
Importantly, we found that community and help-seeker participants strongly held assumptions that the use of
Specifically, “want to speak to a real person” and “privacy and data sharing issues” were the most commonly reported main themes and concerns among both community members and help-seekers. For help-seekers, wanting to speak to a real person was attributed to participants believing that the human element is essential because human expertise is greater than what technology and automation could provide, that the use of technology and automation would be frustrating, that help-seekers require emotional connection and would feel devalued if technology and automation were used, and that only real people can provide comfort. Regarding confidentiality issues, community members were wary of scams, lack of security, and the potential of their data being used against them in the future, which are concerns related to the risks of technology use in general. Help-seekers were more concerned about feeling monitored, the level of control they would have over the information being shared and used, and the protection of their anonymity and confidentiality, particularly for vulnerable people such as those who have experienced abuse.
These findings show the need for clear communication and education about the potential use and benefits of AI in crisis support services, particularly to assuage fears regarding the replacement of counselors and removal of human-centered care, as well as transparency around confidentiality and how individuals’ data are collected, used, and stored so that trust is not eroded [
Overall, community and help-seeker participants’ levels of support for technology and automation largely align with previous research conducted in medical health contexts. The results are consistent with the
Notably, despite the resistance of about half of the participants to using the service if automation was implemented, the other half said that their decision would be unaffected. Of these, approximately one-tenth reported that they would be more likely to use the service, highlighting the scope for endeavors that aim to promote the acceptability of AI in crisis support services. However, given the paucity of existing research in this area, more quantitative and qualitative studies are needed to better understand why consumers would and would not support the use of AI in their mental health and crisis support services. Research needs to identify the barriers and facilitators to the acceptance of AI and inform the development of AI awareness and promotion education initiatives to modify fear-based or inaccurate assumptions about the role, application, and impact of AI on personal user experiences in mental health support. Our research shows that preconceived notions, such as fears of talking to a
The strengths of this study include the large nationally representative community sample and large help-seeker sample used to address the study aims and the use of multivariate analyses, which enabled the examination of the extent to which demographic factors impacted consumer perspectives of AI in relation to Lifeline’s crisis support service. This study had several limitations. First, with the lack of standardized measures for assessing community and help-seeker expectations of AI as applied to crisis support services, support for technology and automation and likelihood of service use were assessed using 2 single-item measures developed in consultation with Lifeline and their LEAG. Although research has shown that single-item measures can perform well relative to their full scales across psychological, health, and marketing research [
Second, the depth of the qualitative thematic analysis was restricted to the format of the questions and the inductive approach used, limiting interpretative power beyond the surface descriptions provided by community members and help-seekers. Respondents may have endorsed additional themes if they had been probed specifically about their views and had the opportunity to elaborate. The lack of in-person and group discussions may have also reduced the richness of the qualitative data obtained, although this was mitigated by obtaining data from such large samples. Future research should incorporate in-depth focus groups to explore consumers’ reluctance to approve technologically enhanced crisis support services.
Third, the study focused only on why participants would be less likely to use Lifeline’s services if technology and automation were used and not on why they would be more likely to do so, which could include faster response times, higher quality interactions, fewer missed calls, and greater capacity to support the community. The explicit form and role of technology and automation in Lifeline’s services also could not be fully anticipated by participants when they were asked about their reasons for not using Lifeline’s services, which may have led many to assume that technology and automation would be relatively extreme and intrusive. We found it difficult to simply and clearly contextualize the relevant questions in a survey format. Explaining the potential uses of AI and debunking myths about automation are difficult without unduly influencing participants’ responses, particularly given the complex nature of AI and ML innovations. Nevertheless, future studies would benefit from providing additional framing and specificity around concepts of technology and automation (ie, that human counselors are not being replaced by robots or machines) and incorporating positive reasons for use, which would enable investigation into both the barriers and facilitators of AI-integrated service use in mental health and crisis support contexts.
Fourth, there were significant demographic differences between the 2 samples and different data collection methods were used. For example, men were underrepresented in the help-seeker sample. Although the sample differences were statistically controlled for, other confounding factors may have impacted the results. Finally, this research cannot ascertain causality regarding the link between beliefs and actual help-seeking behavior, and, as such, the integration of technology and automation in services may not result in actual crisis support service use refusal.
To our knowledge, this is the first mixed methods study to explore consumer perspectives of AI in mental health, specifically regarding its application in crisis support services. As such, this study addresses a significant knowledge and practice gap in relation to consumers’ acceptance of new technologies in response to the rapid advancement of technology use in health and mental health care and support. Although some level of consumer support exists for the collection of user information to tailor services via technology, the majority were reluctant to use AI-integrated crisis support services. Greater reluctance was evident among older people. Addressing community and help-seeker concerns about AI in mental health support contexts, including emphasizing how technology will augment rather than replace human connection and decision-making, with the goal of positively and ethically supporting service users’ experiences, is of high priority given that these groups are the ultimate consumers of AI. Those most affected, namely, service users and their service providers, need to be fully involved in the development and implementation of innovative technologies to ensure they are appropriately designed and effectively adopted to improve mental health and crisis support services in the near future and beyond. However, the value of the human connection factor should not be lost.
Logistic regression on multiple imputed data (m=40) for support for the collection of user information to tailor Lifeline’s services (N=1853).
Logistic regression on multiple imputed data (m=40) for participants’ self-reported likelihood of service use if technology and automation were implemented at Lifeline (N=1853).
artificial intelligence
computer-assisted telephone interview
Lived Experience Advisory Group
machine learning
random digit dialing
This work was supported by the National Health and Medical Research Council (NHMRC; grant 1153481). PJB was supported by an NHMRC fellowship (1158707).
JSM performed the comparative quantitative analysis of the computer-assisted telephone interview (CATI) and help-seeker survey data, performed the qualitative analysis of the help-seeker data, performed initial interpretation of results, and drafted the manuscript. MO performed the initial descriptive quantitative analysis of the CATI and help-seeker survey data and qualitative analysis of CATI data. KM oversaw the study design and data collection for the CATI. PJB, SB, KK, NT, and BK contributed to the study design, interpretation of results, and critical revisions to the manuscript. DJR conceived and supervised all aspects of the study, supported the comparative quantitative analysis of the CATI and help-seeker survey data, and contributed to the interpretation of the results and critical revisions to the manuscript. All authors reviewed the results and approved the final version of the manuscript.
None declared.