Published in Vol 13 (2026)

This is a member publication of Imperial College London (Jisc)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/80979.
AI-Enhanced Conversational Agents for Personalized Asthma Support in People With Asthma: Factors for Engagement, Value, and Efficacy in a Cross-Sectional Survey Study


1Dyson School of Design Engineering, Faculty of Engineering, Imperial College London, South Kensington Campus, 25 Exhibition Road, London, United Kingdom

2Institute for Technology and Humanity, University of Cambridge, Cambridge, United Kingdom

3Department of Computing, Faculty of Engineering, Imperial College London, London, United Kingdom

Corresponding Author:

Laura Moradbakhti, Dr Phil


Background: Asthma-related deaths in the United Kingdom are the highest in Europe, and only 30% of patients access basic care. There is a need for alternative approaches to reaching people with asthma to provide health education, self-management support, and better bridges to care.

Objective: Automated conversational agents (specifically, mobile chatbots) present opportunities for providing alternative, individually tailored access to health education, self-management support, and risk self-assessment. But would patients engage with a chatbot, and what factors influence engagement? This study aimed to examine patients’ interest in using a chatbot for asthma and to identify the factors that influence engagement.

Methods: We present results from a patient survey (N=1257) developed by a team of asthma clinicians, patients, and technology developers, conducted to identify optimal factors for efficacy, value, and engagement with an asthma chatbot.

Results: Results indicate that most adults with asthma (53%) are interested in using a chatbot. The patients most likely to do so are those who believe their asthma is more serious and are less confident in their self-management. Results also indicate enthusiasm for 24/7 access, personalization, and for WhatsApp (Meta) as the preferred access method (compared to app, voice assistant, SMS text messaging, or website).

Conclusions: Obstacles to uptake include security and privacy concerns and skepticism of technological capabilities. We present detailed findings and consolidate these into 7 recommendations for developers to optimize the efficacy of chatbot-based health support.

JMIR Hum Factors 2026;13:e80979

doi:10.2196/80979


Background

Eight million people (12% of the population) in the United Kingdom are estimated to have a diagnosis of asthma [1]. Despite effective treatments and extensive efforts, the UK death rate from asthma is the highest in Europe. Moreover, 65% of people with asthma in the United Kingdom do not receive the professional care they are entitled to (eg, a yearly review, feedback on how to use their inhaler correctly, or an asthma action plan) [2]. The latest Asthma + Lung UK survey [3] reported that only 30% of patients with asthma received basic care in 2021, the lowest proportion since 2015. Moreover, according to the same report, almost half (44%) of patients admitted to the hospital did not receive follow-up care, with younger adults aged 18-29 years receiving the lowest level of basic asthma care [4].

One of the reasons for this is believed to be poor self-assessment of risk by those with asthma. For instance, according to Asthma + Lung UK, the illness is often not taken seriously enough, as reports show that 1 in 6 people in the United Kingdom do not know, or are not sure, if asthma can be fatal [5]. Moreover, health education and self-management behaviors play a critical role.

In the United Kingdom, asthma-associated care costs are at least £1.1 billion per year [6], and the bulk of the burden centers on differences in asthma control rather than severity. People with severe uncontrolled asthma are prone to more symptoms, night-time awakenings, rescue medication use, and worse self-reported outcomes [7] and are estimated to represent 3 times the cost to the health care system compared with patients with severe but controlled disease [7]. Studies have shown that informed self-management that is accompanied by professional reviews improves asthma control, reduces exacerbations, and improves overall quality of life [8].

These statistics demonstrate an urgent need for alternative approaches to reaching people with asthma to (1) improve risk self-assessment, (2) improve health literacy and self-management, and (3) encourage them to seek traditional care as needed. One way to support people with asthma could be through a specialized conversational agent (CA). A CA can be any type of dialogue system that has natural language processing capabilities and can respond to input using human language. Moreover, the input and output are not restricted to text but could also happen via voice. Although CAs can be embodied avatars or physical robots, for this study, we refer only to CAs that are nonembodied chatbots used via apps such as WhatsApp (Meta).

New generative chatbot technologies, such as ChatGPT (OpenAI), are rapidly becoming pervasive, and people are likely to continue to consult them for health care advice despite safety risks [9-11]. Conversational interfaces based on large language models can be intuitive, engaging, and provide a level of personalization in their responses based on user input. However, these technologies rely on probabilistic models trained on information from across the internet to generate answers that have never been vetted. These systems are known to fabricate information and present it confidently, sometimes adding artificial citations to sources. Such “hallucinations” could be dangerous or even life-threatening when the system is responding to medical questions. As such, misinformation remains a significant problem for publicly available generative artificial intelligence (AI) services, rendering them unsafe for health applications. The development of a trustworthy conversational system must involve health experts, including patients and clinicians, and must exclusively provide health information that has been verified as safe and accurate for the specific context.

There is existing evidence that mobile technology can be used effectively for health interventions to help people learn about their illness, complete self-assessments, track symptoms, and be directed to appropriate care options. Several bespoke apps for asthma [12-15] and a smaller number of studies on CAs designed for children and adolescents have been reported in the literature [16,17]. However, research is missing that might guide an effective mobile CA for adults with asthma. While the technology itself holds promise, successful outcomes are entirely contingent on human acceptance and use, and research is lacking on what adult patients’ concerns, expectations, preferences, and motivations would make such a conversation-based health intervention valuable to them. Furthermore, how these needs might vary for the groups that most stand to benefit from alternative interventions (eg, patients with poor asthma control, with lower health literacy, or who do not access traditional health services) is not clear.

There are many open questions related to the potential for meaningful patient engagement with a conversation-based asthma intervention. For example, what would patients be looking for from conversational health support? What would be necessary for them to value and trust a technology enough to use it? How does existing technology use play a part? While CAs have a lot of potential to help people with asthma, they are a relatively new type of technology in the public sphere and, in addition to common concerns about privacy, people may mistrust a CA’s ability to help. Also, there is not enough understanding of what motivates or hinders patients’ use of CAs in general, or how socioeconomic variables may relate to lack of trust in, or engagement with, CAs.

The study aimed to provide answers to these questions and contribute to a better understanding of the features required to make a successful CA for asthma based on the expectations, needs, and motivations of adult patients with asthma. Additionally, attention is given to different groups of patients and how developers might most effectively meet the needs of those who stand to benefit the most from such digital services.

Related Work

This section describes previous work in the 3 intersecting areas relevant to this project: mobile technologies for health broadly and asthma specifically; conversational mobile technologies for health broadly and asthma specifically; and the use of WhatsApp as a platform for conversational mobile digital health.

Mobile Technologies for Asthma

Mobile technologies focusing on health care support have been proven to be cost-effective across health domains [2]. Specifically, there is evidence that mobile technology can be used effectively to help patients learn about their illness, complete self-assessments, track symptoms, and be directed to appropriate care options. According to a systematic review, 87% of studies reported improved adherence and 53% reported improved health outcomes for users [14]. Moreover, mobile technologies are a cost-effective way to reach patients in a personalized and time-efficient manner, for example, by reminding patients to engage in self-management tasks (eg, medication adherence, electronically accessible action plans, and so on) [2]. With respect to asthma specifically, bespoke apps are the most common approach to mobile interventions. For example, several bespoke apps for young people [13,14] have demonstrated positive outcomes for improving self-management. For adults, the ASTHMAXcel mobile app was designed to improve asthma knowledge and outcomes [18]. The app was tested in multiple studies, which showed an increase in asthma knowledge [18,19] as well as a decrease in adverse clinical outcomes, such as emergency department visits, hospitalizations, and prednisone use, among patients [20].

CAs for Health

Previous studies have shown that CAs can help patients with a range of health care tasks, such as delivering clinical and health risk information, providing elements of basic care, and supporting behavior change [21] through personalized dialogue. The use of CAs for health has been extensively reviewed [22-24].

With respect to asthma specifically, 3 studies explored the use of CAs for young people. Rhee et al [25] described an automated mobile phone–based asthma self-management aid for adolescents that is able to interpret conversational English SMS text messaging describing asthma symptoms. The system was tested over 2 weeks by 15 adolescent-parent dyads, and response rates for daily messages were 81%‐97% for the adolescents. Following the intervention, participants’ awareness of both symptoms and triggers, their sense of control, and treatment adherence were higher, with benefits to the partnership between parents and adolescents.

Kadariya et al [16] tested kBot, a chatbot capable of interacting via both text and voice in the form of an Android app, designed to support pediatric patients with asthma (aged 8‐15 years). Overall, kBot received good technology acceptance and system usability scores from clinicians (n=8) and researchers (n=8), but no patient testing was reported.

A study by Kowatsch et al [26] tested MAX, a CA that was designed to increase knowledge of asthma and behavioral skills, such as inhalation technique, among 10- to 15-year-olds with asthma. MAX incorporated different modes of communicating with the CA; for example, care professionals could communicate with it via email, patients via a mobile app, and family members via SMS text messaging. The results showed not only high acceptability of the CA by all stakeholders but also improved cognitive and behavioral skills. These studies with children and adolescents suggest promise for conversation-based digital health care. However, they are limited to tests with small groups of pediatric patients. Since late-onset asthma often remains undiagnosed [27] and these patients regularly experience more severe forms of asthma compared with people with early-onset asthma [27-29], it is important to assess the needs of adults and understand how they differ from those of pediatric patients, particularly with regard to health literacy and the self-management of symptoms.

Messaging platforms are widely used across age groups in the United Kingdom and elsewhere. According to Statista [30], 80% of people aged 16‐64 years in the United Kingdom use WhatsApp, and 59.5% use Facebook Messenger (Meta). As such, the potential for conversation-based support for adults could be equally promising, but a better understanding of adult users and what might drive them to engage with conversational tools in real-world settings is needed, including how this may vary between different groups with respect to asthma severity, control, and other factors.

WhatsApp and Digital Health

Although CAs are often delivered as part of custom apps, they can also leverage existing and familiar chat-based tools such as SMS text messaging, WhatsApp, or web-based messaging systems. Indeed, we began our project with the intention of creating a bespoke app, but early engagement with patients and patient advocates led us to change this plan and shift to the widespread platform, WhatsApp. As mentioned in the “CAs for Health” section, most of the UK adult population already uses WhatsApp, which is probably why it was favored by our patient participants. Using a familiar and widespread platform also meant that we could remove the additional friction and technical barriers associated with bandwidth, storage capacity, usability, learnability, and concerns about trustworthiness that can come with downloading and using a new app for the first time.

Previous research has demonstrated positive results for the use of WhatsApp in health care settings. For example, an online questionnaire assessing patient satisfaction with WhatsApp for sharing health information, collected from 14 WhatsApp asthma and allergy groups, concluded that WhatsApp is an effective social media program for improving patient knowledge and behavior [31]. Furthermore, a study by Asthma UK offered young adults access to an asthma nurse who would assist them in managing their asthma via WhatsApp [32]. The service was well received, with 67.1% of participants reporting improved confidence in asthma management post usage.

Similarly, a study conducted across 5 Latin American countries confirmed an interest in WhatsApp as a tool to receive information about asthma. Irrespective of age (mean age of 43 years), 61.5% of patients in the study reported the highest interest in WhatsApp (as compared to SMS text messaging, Facebook, Twitter [X Corp], and email) as a tool for communication [33]. It is noteworthy that patients reported SMS text messaging as the most used service (69.9%) while still indicating that WhatsApp would be the most preferred tool for asthma communication. WhatsApp has several advantages over SMS text messaging, which may help explain its penetration. It is often more cost-effective than SMS text messaging because SMS text messaging is charged differently by mobile network providers than web-based messaging platforms such as WhatsApp; unlike SMS text messaging, the geographic reach of WhatsApp is global (ie, not limited to the mobile service region); and it provides better support for group communication, audio, video, and the option to retain an account even when one’s phone number changes.

Despite the above evidence for the potential benefits of using WhatsApp, it is not as widely used as websites or custom apps. One reason (which applies to all messaging platforms) is that text- and voice-based conversations do not provide as many affordances (ie, visual displays and menu interfaces) as other media. Another reason, more specific to WhatsApp, involves privacy concerns. For example, to receive research ethics approval for a follow-up study, the research team carried out thorough discussions with both the ethics committee and the university’s data protection office over the use of WhatsApp. Some of the concerns raised and how we recommend addressing them are listed below:

  • Sensitive data: a health app could include sensitive data. Given these concerns, we recommend not asking users for any sensitive information and asking only for the minimum data required to provide functionality.
  • Data use by Meta: because WhatsApp is owned by Meta (formerly Facebook), there is no way of knowing how they will use data from chatbot conversations. To address these concerns, we recommend including an explanatory section specific to WhatsApp in the participant information sheet. It is also important to note that, as of September 2023, WhatsApp uses end-to-end encryption, meaning the conversation is readable only by the 2 communicating parties, in this case, the patient and our servers. This means that not even WhatsApp can read the messages. As such, the onus is on the entity hosting the chatbot servers to manage data securely and ethically rather than on Meta.
  • Local storage vulnerabilities: local backups of conversations stored on a user’s phone could be compromised if the phone is lost. To address these concerns, we recommend instructing users on how to turn off the backup if they wish.
  • Management of identifiable data: to further address privacy concerns, we propose that it is not necessary to request any identifiable information and that only the WhatsApp phone number be used to identify a user within the system. Conversely, the user can be invited to give a nickname rather than their real name. For data analysis, data can be anonymized further (eg, phone numbers and nicknames can be removed).
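The pseudonymization recommended in the last point can be sketched in a few lines. This is an illustrative sketch only, not the project's implementation: the salted-hash approach, variable names, and record layout are our own assumptions about how a phone number could be replaced with a stable pseudonymous ID.

```python
# Hypothetical sketch of the pseudonymization step described above:
# replace the WhatsApp phone number with a keyed hash so records can
# still be linked per user without storing the number itself.
import hashlib
import hmac

# In practice, the key would be stored separately and never exported
# with the dataset; this literal value is purely illustrative.
SECRET_KEY = b"keep-this-out-of-the-dataset"

def pseudonymize(phone_number: str) -> str:
    """Return a stable pseudonymous ID for a phone number."""
    digest = hmac.new(SECRET_KEY, phone_number.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {
    "user": pseudonymize("+447700900123"),  # no raw number is retained
    "nickname": None,                       # users may supply a nickname instead
    "message": "How do I use my inhaler?",
}
```

Because the same number always maps to the same ID, per-user analysis remains possible after the raw number is discarded, while someone holding only the dataset cannot recover the numbers without the key.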

Summary

The studies described in the “Introduction” section, including the asthma chatbot studies with children and adolescents, the survey studies exploring the potential use of WhatsApp in health care, and significant evidence for the efficacy of mobile interventions more broadly within health care, all provide promising evidence for a new conversation-based digital health care approach to complement primary and secondary care. These are summarized in Table 1 to clarify visually how existing work is distributed with respect to the technology used and whether it refers to health broadly or asthma specifically.

Table 1. Overview of categories under which the related work studies fall.
  • Mobile technology (broadly; eg, bespoke apps, and so on): for health (broadly), Ramsey et al [14] and Whittamore [2]; for asthma (specifically), MAX (Kowatsch et al [26]) and ASTHMAXcel (Hsia et al [18,20])
  • WhatsApp (Meta; specific chat technology): for health (broadly), Saeed Tayeb [31]; for asthma (specifically), Calderón et al [33] and Cumella et al [32]
  • Conversational agents (automated chatbots that can be used through chat technologies): for health (broadly), Singh et al [21]; for asthma (specifically), kBot (Kadariya et al [16]) and Rhee et al [25]

What remains to be explored is a nuanced understanding of what adult users themselves would use and value, what might drive them to engage with conversational health tools in the real world, and how this might differ for diverse groups (eg, based on differences in disease severity, symptom control, confidence with technology, or trust in the health care system). For this study, we specifically sought this improved understanding for the population of adults with asthma within the United Kingdom, and we did so with a view to improving asthma risk self-assessment, asthma health literacy, and access to care. As such, our study aimed to answer the following research questions: (1) To what extent do different adults with asthma in the UK trust the health care system, and does this level of trust correlate with interest in using a chatbot for asthma? (2) Are those people who are more interested in using an asthma care chatbot more or less likely to already be accessing basic health care services (ie, general practitioner [GP])? (3) In what ways would adults with asthma prefer to access an asthma care chatbot? (4) What features, characteristics, and style preferences for a chatbot would maximize the likelihood of adults with asthma engaging meaningfully with it? (5) To what extent does a person’s confidence using technology affect their willingness to use a chatbot for asthma? (6) Is there a relationship between an individual’s interest in using an asthma care chatbot and their self-assessment of the seriousness of their asthma?


Overview

The survey study described herein is part of a larger overarching project funded by the UK Engineering and Physical Sciences Research Council (EPSRC) and conducted in partnership with Asthma + Lung UK, an asthma charity organization. A description of the overarching project is available in the protocol paper by Calvo et al [34].

Recruitment

Participants were recruited through YouGov [35]. To fulfill our recruitment criteria, we recruited only participants with self-reported asthma from the YouGov database. Although no medical proof of their asthma was requested, 1117 participants stated that they had received a diagnosis for their asthma. Participants were paid based on YouGov rates.

Survey

A survey comprising 38 closed-ended and 12 open-ended questions was conducted with adults living with asthma to gain a better understanding of how design and demographic features might influence interest in, and preferences for, a chatbot to support asthma management. A full list of the survey questions is provided in Multimedia Appendix 1.

Analysis

All statistical analyses of the quantitative survey results were run in SPSS (version 28; IBM Corp) [36]. First, descriptive statistics were analyzed to gain a better understanding of the sample and participants’ general preferences for an asthma chatbot. Further analyses were then conducted with a focus on users’ trust in the health care system and on differences between participants who were and were not interested in the chatbot. For all analyses, the significance level was set at α=.05.

Qualitative analysis of the open-ended free-text responses was conducted via inductive thematic analysis, following the steps outlined by Braun and Clarke [37] and using NVivo (version 11 for Mac; QSR International) [38]. The aim was twofold: (1) to identify more detailed insights underlying the quantitative answers to the closed-ended questions and (2) to highlight patterns of responses, or themes, salient across participants’ free-text contributions. Text responses were coded iteratively, and themes were generated from repeated patterns of ideas shared across participants. The resulting themes highlight some of the more common motivations and rationales behind the quantitative results.

Ethical Considerations

The survey questions were approved by the Imperial College London ethics committee (reference no #21|C7403).


Quantitative Results Overview

A total of 1257 people with self-reported asthma completed the survey (43% male, 57% female, and 4% left the gender question blank). In total, 1117 (88.9%) participants reported that they had received an asthma diagnosis. Of these, 47.8% reported that their asthma was diagnosed before the age of 16 years, 51.7% reported that it was diagnosed at age 16 years or later, and 0.5% preferred not to provide this information. Overall, 98.9% of participants had received their diagnosis more than 6 months before the study, 0.3% had been diagnosed within the previous 6 months, and 0.8% preferred not to say when their diagnosis happened.

Most participants were older than 40 years; specifically, 44% were older than 55 years, 30% between 41 and 55 years, 19% were between 31 and 40 years, and 7% were between 18 and 30 years. Most participants had either completed a university degree (48%) or trade/technical/vocational training (19%), while 33% indicated that secondary school was their highest education level. For 0.4% of participants, the highest education level was completion of primary school. Fifty-six percent of participants spent most of their lives in a medium to small town, 23% in a rural area, and 21% in a large city.

Sixty-seven (5.3%) participants from our sample identified as being part of a minority ethnic group, while 1178 (93.7%) participants did not, and 12 (1%) participants preferred not to disclose. The number of participants who identified as part of a minority ethnic group was too small to allow for comparisons between participants who do and do not identify as such; however, some correlations with minority group identification and other variables are reported.

A summary of all key results per section is provided in Table 2. Table 3 summarizes the results that distinguish between the preferences of participants interested in an asthma chatbot and the preferences of participants who are not interested in an asthma chatbot for their asthma management.

Table 2. Summary of key results from each section.
Trust in health care system
  • The majority of participants trust the health care system (83%).
  • People who are interested in the asthma chatbot have more trust in the health care system than those who are not interested.
  • Around 79.5% of participants seek support from a GP.
  • Participants with an interest in the asthma chatbot seek more support from a GP than those who were not interested.
  • The more education participants have, the more likely they are to trust the health care system.
  • Participants who are more confident in their ability to use technology have a higher trust in the health care system.
  • Older participants trust the health care system more than younger participants.
  • Participants who are already confident in dealing with their own asthma trust the health care system more.
Communication preferences
  • Around 59.7% of participants were interested in the voice feature.
  • Participants with a general interest in the asthma chatbot were also more interested in the voice feature of the asthma chatbot.
  • Around 85.5% of participants would prefer to talk to a human about their asthma.
  • Participants who were not interested in the asthma chatbot have a stronger preference for in-person conversations about their asthma.
Conversational style
  • Most respondents preferred a “reassuring” and “friendly” conversational style and an asthma chatbot that felt “like talking to a nurse.”
  • However, a significant number of participants selected other preferences, for example, a “direct” approach or an experience that was more “like talking to a doctor.”
  • Overall, it can be concluded that different styles suit different people.
Technology experience
  • The large majority of participants were confident in their use of technology (95%).
  • Participants who were less confident in their use of technology were less interested in trying a chatbot for asthma.
  • Participants who were confident in their use of technology, as well as participants who were interested in using an asthma chatbot, used WhatsApp (Meta) and other messenger apps significantly more often compared with participants who were less confident in their use of technology and those who were not interested in an asthma chatbot.
  • The majority of participants had used a virtual assistant in the past (89.3%).
  • Participants who had used a virtual assistant in the past were more likely to be interested in using a chatbot for their asthma.
Confidence in self-management
  • Most participants felt either very confident (51.5%) or somewhat confident (45.6%) about managing their asthma; only 2.9% did not feel confident.
  • Participants who were not interested in the asthma chatbot reported being more confident in managing their asthma.
  • Participants rated the seriousness of their asthma: 6.2% rated it as very serious, 31.1% as somewhat serious, and 62.7% as not serious.
  • Participants who were not interested in the asthma chatbot rated their asthma as less serious.

GP: general practitioner.

Table 3. Summary of results that distinguish participants interested vs not interested in an asthma chatbot.
Interested in an asthma chatbot:
  • More trust in the health care system
  • More likely to ask for support from a general practitioner (GP)
  • More likely to also be interested in a feature that detects asthma from the sound of the voice
  • Tend to rate their asthma as more serious
Not interested in an asthma chatbot:
  • Stronger preference for in-person conversations about their asthma
  • More confident in managing their asthma

Descriptive Results

YouGov shares some data they collect from all registered users. Existing YouGov data on WhatsApp use was available for 1011 participants and showed that 85% used WhatsApp. According to our own questions, 74% of participants used smartphone messaging apps daily, and 60% specifically used WhatsApp daily.

In contrast, 92.5% of participants indicated that they had never used a mobile app to help with their asthma (eg, to track their symptoms). Instead, participants were more likely to turn to existing familiar technologies (ie, internet search) or to trusted people (eg, nurses, doctors, or family members) for asthma information. The graph in Multimedia Appendix 2 shows which platform and/or source of information participants currently use to obtain information about asthma (multiple answer options; N=1257).

Participants were asked about which platform they would prefer to use to chat with an asthma chatbot, and they were allowed to select more than one. The majority indicated a preference for WhatsApp (refer to Multimedia Appendix 3), closely followed by a website.

Overall, the majority of participants (53%) indicated that they would indeed be interested in using an asthma chatbot via a messaging service such as WhatsApp (17% very interested and 36% somewhat interested).

An even higher number of participants (59%; 22% very interested and 37% somewhat interested) indicated interest in an asthma chatbot that could detect asthma severity from the sound of their voice, a technological feature currently in development by our research team. For this question, we asked participants: “Imagine AsthmaBot could detect your asthma severity by listening to the sound of your voice. You would speak into your phone to provide a short voice recording, and AsthmaBot would analyze this recording for signals (that only a computer can detect) to determine your asthma severity. How interested would you be in trying this voice feature?”

Trust in the Health Care System

To analyze participants’ trust toward the health care system, Spearman rank-order correlations were conducted. There was a significant negative correlation between participants’ education level and their trust in the health care system (coded as 1: participant trusts the system; and 2: participant does not trust the system; rs(1255)=−0.07 and P=.014). That is, the higher a participant’s educational attainment, the more likely they were to trust the health care system.

Participants’ confidence in technology significantly positively correlated with their trust in the health care system (rs(1255)=0.190 and P<.001). In other words, participants who were more confident in their ability to use technology also had higher trust in the health care system.

Participants’ trust in the health care system just fell short of a significant negative correlation with their identification as part of a minority ethnic group (rs(1255)=−0.054 and P=.056). Therefore, we can only report a tendency for participants who do not identify as part of a minority ethnic group to trust the health care system more than those who do. This would be an important question to explore in a future study with a larger sample and would support previous research on experiences of discrimination and distrust in the health care system [39].

Age correlated significantly and negatively with trust in the health care system (rs(1255)=−0.07 and P=.009), indicating that older participants were more likely to trust the health care system than younger ones.

Participants’ confidence in dealing with their asthma significantly correlated with participants’ trust in the health care system, rs(1255)=0.101 and P<.001, showing that participants who were confident in managing their asthma trusted the health care system more. This may suggest that positive experiences with the health care system led to better management and, therefore, confidence in the ability to manage their asthma. However, there are many possible explanations for such a result: for example, people who trust the system less may not access health care and therefore may not get what they need to manage their asthma well; people who have had negative experiences with the system may not have gotten what they need to manage their asthma well despite accessing it; or people with asthma that is more difficult to treat and manage may more clearly see the limitations of the health care system.
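The rank-order analyses above all follow one template; as a minimal, illustrative sketch (Python with SciPy, using made-up data rather than the study data, and mirroring the paper’s coding of trust as 1=trusts the system, 2=does not):

```python
from scipy.stats import spearmanr

# Hypothetical data mirroring the survey coding: trust in the health
# care system (1 = trusts the system, 2 = does not) against an ordinal
# confidence rating. These values are illustrative, not the study data.
trust = [1, 1, 2, 1, 2, 1, 1, 2, 1, 1]
confidence = [4, 5, 2, 4, 1, 3, 5, 2, 4, 5]

rho, p = spearmanr(trust, confidence)
# Because "trusts the system" is the *lower* code, a negative rho here
# means that higher confidence goes with trusting the system.
print(f"rs = {rho:.3f}, P = {p:.4f}")
```

Note that the sign of rs must always be read against the coding direction, which is why the 1/2 coding is spelled out when interpreting these correlations.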

Comparing Participants Interested vs Not Interested in an Asthma Chatbot

To allow a straightforward interpretation of the data, some of the variable answers were grouped. The example below (for the variable “How interested would you be in using AsthmaBot?”) shows how the Likert-scale answers were grouped into a binary format (Textbox 1).

Textbox 1. Categorical variable split between participants interested in the asthma chatbot and not interested in the asthma chatbot.

Interested

  • Very interested
  • Somewhat interested

Not interested

  • Neutral
  • Not particularly interested
  • Not at all interested

There was no significant correlation between participants’ interest in using an asthma chatbot and either their age, education level, or identification as a minority. This suggests that neither age, education level, nor identification with a minority ethnic group is likely to be a barrier to engagement.

To account for multiple comparisons, we applied a Bonferroni correction across all tests that analyzed factors influencing participants’ interest in the asthma chatbot. This family comprises the 8 main chi-square comparisons and 1 Mann-Whitney U test, giving an adjusted significance threshold of P<.0056. Tests answering other research questions were not included in this correction, and the Spearman correlations were treated as exploratory analyses and likewise excluded.
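The adjusted threshold described above is simply the familywise alpha divided by the number of tests in the family; as a short sketch (Python, illustrative):

```python
# Bonferroni correction for the family of tests on chatbot interest:
# 8 chi-square comparisons plus 1 Mann-Whitney U test = 9 tests.
alpha = 0.05
n_tests = 8 + 1
threshold = alpha / n_tests
print(round(threshold, 4))  # 0.0056
```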

Health Care Usage Predicts Interest

We conducted a chi-square test for association to measure the relationship between participants’ trust in the health care system and their interest in an asthma chatbot. The relationship is significant (χ²₁=9.75; P=.002; φ=.088). Overall, the majority of participants trusted the health care system (83%), while 17% did not. Out of the participants who were not interested in an asthma chatbot, 20.1% indicated they did not trust the health care system, while 79.9% did. Of the participants who were interested in using an asthma chatbot, a higher percentage trusted the health care system (86.5%), and a lower percentage did not (13.5%). This indicates that people who were interested in an asthma chatbot were more likely to trust the health care system than those who were not.
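Each chi-square test of association in this section follows the same pattern: a 2×2 contingency table of interest (interested vs not) against a binary factor, with the phi coefficient as the effect size. A sketch in Python with SciPy; the counts below are hypothetical, chosen only to illustrate the computation, not taken from the study data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = interested / not interested in the
# chatbot; columns = trusts the health care system / does not.
table = np.array([
    [600, 94],   # interested
    [450, 113],  # not interested
])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
phi = np.sqrt(chi2 / table.sum())  # effect size for a 2x2 table
print(f"chi2({dof}) = {chi2:.2f}, P = {p:.3f}, phi = {phi:.3f}")
```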

To check for a difference between participants’ interest in an asthma chatbot and their current support from a GP, we conducted another chi-square test for association. The relationship is significant (χ²₁=12.72; P<.001; φ=.101). Overall, 20.5% of participants do not seek support from a GP, while 79.5% do. Of the participants who were interested in the chatbot, 16.7% did not already receive support from a GP, while 83.3% did. For participants who were not interested in the chatbot, 24.8% did not get support from a GP, while 75.2% did. This suggests that overall, participants with an interest in the asthma chatbot were also more likely to be seeking support from a GP. This aligns with our results showing that people who see their asthma as more serious are more likely to be interested in a chatbot. Moreover, participants’ perception of the seriousness of their asthma positively and significantly correlated with the Asthma Control Questionnaire items: difficulty sleeping due to asthma (rs(1255)=0.313, P<.001); asthma symptoms during the day (rs(1255)=0.279, P<.001); asthma affecting usual activities (rs(1255)=0.361, P<.001); and asthma caused an accident and emergency (A&E) visit in the last 2 years (rs(1255)=0.275, P<.001). Taken together, these findings suggest that at least some participants who were not interested in a chatbot lack interest because they did not feel their asthma was serious enough to motivate use. Qualitative results provide additional evidence for this conclusion (refer to section Reasons for Lack of Interest and Reservations).

Communication Preferences: Participants Would Value Asthma Voice Recognition but Still Favor Human Support

A feature that detects asthma severity through the sound of one’s voice could be more appealing (eg, novelty and perceived accuracy) or less appealing (eg, privacy concerns) than the standard text-based chatbot. Therefore, we analyzed the relationship between interest in such a feature and general interest in an asthma chatbot with a chi-square test for association. Again, both variables were coded as interested vs not interested. The relationship is significant (χ²₁=347.10; P<.001; φ=.525). Overall, 59.7% of participants were interested in the voice feature, while 40.3% were not. Of the participants interested in an asthma chatbot, 84.1% were interested in the voice feature, while 15.9% were not. Of participants who were not interested in an asthma chatbot, only 32.4% were interested in the voice feature, and 67.6% were not. These differences show that most participants with a general interest in an asthma chatbot were also interested in a voice feature to detect severity, and that a subgroup of participants not interested in an asthma chatbot was nonetheless interested in voice-based severity detection.

A chi-square test for association was conducted to test the relationship between participants’ preference for talking to a human about their asthma and their interest in an asthma chatbot. The relationship is significant (χ²₁=17.05; P<.001; φ=.116). Overall, 85.5% of participants reported that they would prefer to talk to a human about their asthma. Of the participants interested in an asthma chatbot, 81.7% would prefer to talk to a human, while 18.3% would not. Of the participants who were not interested in an asthma chatbot, an even higher percentage preferred talking to a human (89.9%), while only 10.1% did not. The findings demonstrate that participants who were not interested in using a chatbot for asthma, unsurprisingly, have a stronger preference for in-person conversations about their asthma. It is also worth noting that while the large majority would prefer to speak to a human about their asthma, a similar majority had sought asthma information via the internet. This suggests that although human interaction is preferred, when it is not available or easily accessible, most people turn to nonhuman options for support.

Conversational Style: Most Prefer a Chatbot That Is Friendly, Reassuring, and Like a Nurse

Research suggests that defining personality dimensions and a clear conversational style to guide the dialogue of a chatbot provides a more consistent and effective user experience [40]. Chatbot conversational style can vary across different personality dimensions; for example, it might be more formal or informal, authoritative or submissive, or convey warmth through word choice. To determine the kind of conversational style that most people with asthma would feel comfortable interacting with in a chatbot, the survey provided a list of personality descriptors and asked participants to select those they believed to be most appropriate for an asthma chatbot. Among participants interested in an asthma chatbot, the most frequently selected personality descriptors were “reassuring” (n=343), “like talking to a nurse” (n=335), “friendly” (n=316), “caring” (n=261), “good listener” (n=198), “like talking to a doctor” (n=196), “direct” (n=164), and “informal” (n=147). It is notable, however, that a substantial number of participants selected other preferences, for example, favoring a “direct” approach or an experience more “like talking to a doctor,” suggesting that different styles suit different people.

Technology Experience Predicts Interest

To assess the relationship between participants’ confidence in using technology and their interest in an asthma chatbot, we conducted a chi-square test for association. The relationship is significant (χ²₁=13.73; P<.001; φ=.105). Overall, the large majority of participants were confident in their use of technology (95%). Of the participants who were not interested in an asthma chatbot, 92.6% were confident in using technology, while 7.4% were not. Of the interested participants, a higher percentage were confident in technology (97.1%), while only 2.9% were not. In other words, and perhaps unsurprisingly, people who reported less confidence with technology use in general were also less interested in trying a chatbot for asthma.

Moreover, 2 Mann-Whitney U tests showed that participants who reported more confidence in their ability to use technology used WhatsApp and other messenger apps significantly more frequently in comparison to people who felt less confident with technology use (U=23652.50; Z=−5.636; P<.001) and that participants who were interested in using an asthma chatbot also used both WhatsApp and other messenger apps significantly more frequently in comparison to participants who were not interested in using a chatbot (U=175969.50; Z=−3.688; P<.001).
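The Mann-Whitney U comparisons of messaging-app use can be sketched in the same way (Python with SciPy; the ordinal frequency codes and group data below are hypothetical, not the study data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical ordinal codes for messaging-app use frequency
# (1 = never ... 5 = several times a day), split by chatbot interest.
interested = [5, 5, 4, 5, 3, 4, 5, 4, 5, 5]
not_interested = [3, 2, 4, 1, 3, 2, 4, 3, 2, 3]

u, p = mannwhitneyu(interested, not_interested, alternative="two-sided")
print(f"U = {u}, P = {p:.4f}")
```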

We also conducted a chi-square test for association to measure the relationship between participants’ previous use of a virtual assistant and participants’ interest in an asthma chatbot. The relationship is significant (χ²₁=10.11; P=.001; φ=.090). Overall, the majority of participants reported having used a virtual assistant in the past (89.3%). Of the participants who were not interested in an asthma chatbot, 86.3% had used a virtual assistant before, while 13.7% had not. Of participants who were interested, a higher percentage had used a virtual assistant previously (91.9%), while only 8.1% had not. As such, people who had used a virtual assistant in the past were more likely to be interested in using a chatbot for their asthma.

Lower Confidence in Disease Self-Management Predicts Interest

A chi-square test for association was conducted to assess the relationship between participants’ confidence in the self-management of their asthma and their interest in an asthma chatbot. The relationship is significant (χ²₂=26.84; P<.001; φ=.146). The majority of participants felt either very confident (51.5%) or somewhat confident (45.6%) about managing their asthma, with only 2.9% of participants reporting that they did not feel confident. Of those interested in the asthma chatbot, 44.7% were very confident, 52.3% were somewhat confident, and 3% were not confident at all. In comparison, out of the participants who were not interested in an asthma chatbot, a higher percentage indicated being very confident in managing their asthma (59.1%), 38% were somewhat confident, and only 2.9% were not confident. Therefore, participants who were less confident about their own ability to manage their asthma were more likely to be interested in an asthma chatbot. For an overview, refer to Figure 1. This provides support for the utility of a chatbot in improving management skills and confidence among those who feel they lack them.

Figure 1. Bar chart showing participants’ confidence in dealing with their asthma, split by interest in the asthma chatbot.

To assess the relationship between participants’ own perception of the severity of their asthma and their interest in using an asthma chatbot, a chi-square test for association was conducted. The relationship is significant (χ²₂=16.68; P<.001; φ=.115). Participants rated the seriousness of their asthma, with 6.2% rating it as very serious, 31.1% as somewhat serious, and 62.7% as not serious. Of the participants who were interested in the asthma chatbot, 6.9% rated their asthma as very serious, 35.6% as somewhat serious, and 57.4% as not serious. Participants who were not interested rated their asthma as less serious overall, with 5.4% indicating that their asthma is very serious, 26% stating that their asthma is somewhat serious, and 68.6% rating their asthma as not serious. For an overview, refer to Figure 2.

Figure 2. Bar chart showing participants’ self-rated asthma severity, split by interest in the asthma chatbot.

Qualitative Results Overview

All of the 1257 survey respondents provided at least 1 free-text response to an open question. While quantitative analysis of closed survey questions reveals characteristics correlated with those most interested in an asthma chatbot, open-text results provide explicit insights into the reasons some participants are interested while others are not. In alignment with this “interest vs no interest” focus applied to the quantitative analysis, the qualitative analysis of open-question text was also primarily organized around these 2 categories, including reasons given for these and associated characteristics.

Open-text responses also provided additional information on specific technologies and information sources in relation to survey questions about where participants seek asthma information and what technologies they already use to support self-management.

Reasons for Lack of Interest and Reservations

Data were organized into a set of 4 recurring themes based on the types of reasons and reservations expressed in relation to lower interest in an asthma chatbot. These are provided in Table 4, along with example quotations from respondent comments.

Table 4. Summary of reasons for not being interested in an asthma chatbot.
Theme and example quotations from the open comments
Theme 1: technology skepticism
  • “I’m unsure it would get details correct and misdiagnose.” [P21]
  • “They’re not real, just programs. So they make mistakes and are not to be taken seriously.” [P351]
  • “I dislike AIa.” [P9]
  • “I would rather talk to my doctor or asthma nurse.” [P109]
Theme 2: personalization skepticism
  • “I’m not sure I would use it; these things tend to give general advice and if you need advice specifically for you then I’m not sure it could do that for everyone.” [P231]
  • “I worry that it would tell me basic things that I already know or give generic advice. If it was very tailored to me that might help.” [P163]
  • “My concern is these virtual assistants are programmed to answer specific questions whilst I may want to ask something that they are not equipped to answer!” [P455]
Theme 3: low asthma severity
  • “I don’t feel that my asthma is severe enough to need anything more than my annual check at the moment.” [P367]
  • “If my asthma was worse than it is, I would find it more useful.” [P855]
Theme 4: security and privacy concerns
  • “The system gets hacked, and my data is used nefariously.” [P35]
  • “I’d be uncomfortable with the idea of my medical details being stolen either by a hack or ‘man in the middle’ style attack.” [P633]

aAI: artificial intelligence.

The first theme, “technology skepticism,” captures expressions of doubt in the accuracy and trustworthiness of chatbot technology. For example, a number of participants expressed skepticism about the capacity of current technology to provide information that is accurate and safe (referring to the possibility of misdiagnoses and mistakes). Some participants expressed a general dislike for technological solutions (“I dislike AI” [P9]) or referenced the technology’s nonhuman nature as an inherent problem (“They’re not real…and are not to be taken seriously” [P351]), or relatedly, the simple preference for human solutions (“I would rather talk to my doctor or asthma nurse” [P109]).

The second recurring theme, “personalization skepticism,” refers to responses expressing doubt in a technology’s ability to respond in a personally relevant and specific way. For example, participant 163 stated, “I worry that it would tell me basic things that I already know or give generic advice. If it was very tailored to me that might help.”

The third recurring reason for lack of interest in a chatbot was perceived low asthma severity, resulting in a perceived lack of need for technological support (eg, “I don’t feel that my asthma is severe enough to need anything more than my annual check at the moment” [P367]).

Finally, a fourth reason for hesitancy in using an asthma chatbot was concern about system security and data privacy, including misuse by a malicious third party (eg, “The system gets hacked, and my data is used nefariously” [P35]). For more examples, refer to Table 4.

In addition to expressions of doubt, concern, or reservation with respect to using an asthma chatbot, many participants expressed optimism and interest in the potential for a chatbot to provide benefits. The recurring themes around these responses are summarized in Table 5. They include the capacity for a chatbot to provide education and self-management support, for example, by providing users with health information, warning of future asthma attacks, and helping manage symptoms; and in doing these things, providing an alternative to traditional health services (eg, “Support for my symptoms without risking a trip to the doctors” [P125] and “I have mild asthma and am concerned it may get worse, an asthma chatbot may help me control it because the NHS [National Health Service] website is next to useless” [P480]).

Table 5. Summary of reasons for interest in an asthma chatbot.
Theme and example quotations from the open comments
Theme 1: health education and self-management support
  • “Help control my asthma.” [P120]
  • “Support for my symptoms without risking a trip to the doctors.” [P125]
  • “To try and predict potential future asthma attacks in case I’m more at risk.” [P240]
  • “It sounds easy and convenient and help manage my asthma worries easily.” [P259]
  • “Learn new things about controlling asthma.” [P337]
  • “It seems like a fun way to learn about the condition.” [P453]
  • “I have mild asthma and am concerned it may get worse, an asthma chatbot may help me control it because the NHSa website is next to useless.” [P480]
Theme 2: ease of access
  • “It already knows the range of symptoms and can come up with something concrete very quickly, instead of me wasting someone’s actual time.” [P371]
  • “I need help now but cannot see my GPb for seven weeks.” [P428]
  • “It is easier than getting appointment with nurse.” [P449]
  • “I would use it rather than visit A&Ec out of hours.” [P485]
  • “An easy to access service.” [P53]
  • “If it was free.” [P107]
  • “Free to use and could be helpful before going to see a doctor.” [P152]
  • “It seems like a good tool. It being free would be best.” [P226]
Theme 3: trusted endorsement
  • “That it is run by the NHS in conjunction with asthma UK not a commercial computer program.” [P356]
  • “Endorsement by my GP practice.” [P370]

aNHS: National Health Service.

bGP: general practitioner.

cA&E: accident and emergency.

Another set of responses captured various ways a chatbot provides ease of access to support, as compared to traditional health services (“I need help now but cannot see my GP for seven weeks” [P428]), with an emphasis by some participants on the importance of such a service being available for free (“If it was free” [P107] and “It being free would be best” [P226]).

Finally, some participants highlighted the importance of trusted endorsement by a trustworthy organization, such as their GP or the NHS, rather than a commercial company, as being important to their interest in using the technology (eg, “That it is run by the NHS in conjunction with Asthma UK, not a commercial computer program” [P356]).

Sources of Asthma Support: Information Sources and Apps

Finally, participants used open-text responses to provide the names of information sources and asthma technologies used. Specifically, 74 participants listed specific technologies they already use, or have used in the past, to get support for their asthma. Those best represented in the text responses included the NHS app (listed by 17 participants), Asthma UK (listed by 9), and Peak Flow (listed by 6).


Key Findings and Recommendations

This study is the first to contribute detailed insights into the motivational factors, features, and characteristics needed to ensure an asthma chatbot is of value to adults with asthma and to maximize the likelihood of meaningful engagement. These findings are based on the closed and free-text responses of adults with asthma sharing their expectations, needs, and preferences, which were then correlated with differences in background, technology confidence, asthma history, and asthma self-management confidence. The goal is to contribute toward meaningful engagement with an easy-access asthma support technology that helps address the care gap. Key findings that can most readily guide the design and development of future asthma chatbots are summarized herein.

Recommendation 1: Prioritize Ease of Access and Consider Existing Technologies Rather Than a Custom App

Overall, our results suggest that a significant percentage of UK adults with asthma (53% of our survey participants) would be interested in using a chatbot to support their asthma, and that an even higher proportion (59%) would be interested in using a voice-based risk detection feature. Moreover, our findings provide a strong rationale for delivering the chatbot via WhatsApp instead of through a custom asthma app, since the large majority of our participants had already installed WhatsApp on their phone (85% of 1011 participants) and reported it to be their most common method of sending messages. Finally, WhatsApp was the number one platform of preference for interacting with an asthma chatbot, with a website rated the second most popular channel. Although apps have distinct advantages that will sometimes warrant the extra friction, few participants had used an app for asthma, and their 2 top preferences for access leveraged familiar and easily accessible technology.

This aligns with qualitative findings in which participants highlighted ease of access as a major motivator for using an asthma chatbot. Ease of access had 3 main facets: (1) the chatbot is available 24/7, including when appointments are not; (2) it can (and should) be made available for free to ensure access; and (3) it is easier to reach than health care professionals. Our assessment of ease of access aligns with 2 of the 4 key factors of the Unified Theory of Acceptance and Use of Technology (UTAUT) model [41], namely effort expectancy (the level of convenience and usability when using an information system) and facilitating conditions (the belief that the technical infrastructure to support the use of the system exists), both of which have been shown to positively influence behavioral intention and use behavior of information systems.

Recommendation 2: Center the Needs of People With More Serious Asthma Who Feel They Need Additional Support

Our findings indicate that an asthma chatbot is of particular interest to those who believe their asthma to be more serious. As such, a conversational agent (CA) could be most beneficial for those patients who need additional support, either because their asthma is indeed more serious or because they experience more anxiety around it and less confidence in self-management. This is supported by the further results showing that participants who feel less confident in managing their asthma are also more likely to be interested in a chatbot than those who are already confident. Qualitative data confirms this and suggests that at least some people may also feel they do not always get adequate or timely access to traditional care (refer to Table 5). For example, participant open responses included “could be helpful before going to see a doctor” and “I need help now but cannot see my GP for seven weeks.” Thus, collectively, our results suggest there is a strong opportunity to integrate CAs as a supplementary, easily accessible, 24/7 service for patients who are in need of additional support.

Relatedly, participants who would be interested in using an asthma chatbot were also more likely to seek support from a GP in comparison to those who were not. This is consistent with the finding that participants interested in a chatbot have a greater need for support, as they rate their asthma as more serious.

In summary, our results show that an asthma chatbot is likely to be of greatest benefit to (1) patients who believe their asthma to be serious, (2) patients who feel less confident about managing their asthma, and (3) patients who are looking for immediate support while they wait for an appointment with a health care professional.

Recommendation 3: Include Conversational Content for Asthma Education, Risk Assessment, and Customized Self-Management Advice

Across both quantitative and qualitative responses, results demonstrate that participants would value easy access to educational information about asthma, support in assessing and preventing asthma attacks, and support for self-management—particularly tailored support that goes beyond the generic advice broadly available. The clinicians involved in this study add that there is great potential for a chatbot to have conversations about topics that are important to positive health outcomes but that there often is not enough time to cover during medical appointments. These topics include full explanations of what asthma is, what the medicine is for, how and why it works, proper technique, and conversations to support identifying and avoiding triggers. Previous studies have demonstrated that patient self-management not only allows patients to better manage long-term health conditions but also decreases visits to both GP offices and emergency departments [42,43]. A CA can improve health literacy, and this would not only fill in gaps but also provide repeated access to information over time, thus augmenting clinical practice.

Recommendation 4: Craft a Conversational Design That Is Reassuring and Like Talking to a Nurse

Our survey results with respect to people’s preferences for the conversational style (personality) of a chatbot revealed a set of majority preferences, with additional groups of alternate preferences. Most respondents preferred an asthma chatbot that was “reassuring,” “friendly,” and “like talking to a nurse.” It is notable, however, that a substantial alternative group favored a “direct” approach or an experience that was more “like talking to a doctor.” Designers and developers could approach this diversity in a number of ways, including by (1) opting for the majority preference, (2) targeting a specific audience, or (3) allowing users to select personalities from a set of options. From a Self-Determination Theory perspective [44], providing options for personalization would support users’ need for autonomy. Fulfillment of all 3 basic psychological needs during technology interaction (autonomy, competence, and relatedness) predicts positive user experience and well-being [45-47], as well as higher intention to use a system [48,49].

Recommendation 5: Address Security and Privacy Concerns

The group of participants who reported more interest in an asthma chatbot also showed more overall trust in the health care system compared with participants who were not interested. This finding suggests that participants who are not interested in the chatbot might be less inclined overall to seek external support for their asthma. Future studies could investigate whether this is simply due to less serious asthma and effective self-management, or a more general lack of trust in external support (eg, negative experience, language barriers). The qualitative analysis provided some evidence for both explanations, indicating a combination of reasons is likely. Specifically, on the one hand, we identified a group of participants who were not interested in the chatbot and believed their asthma to be less severe (refer to theme 3 in Table 4). On the other hand, we also identified a group who were skeptical of AI solutions and their accuracy in general.

Qualitative results revealed a number of themes that provide richer explanations for the quantitative findings. Specifically, there were at least 4 categories of reasons given for lack of interest in an asthma chatbot: (1) technology skepticism (doubt in the technology’s ability to be accurate or useful), (2) personalization skepticism (doubt in the technology’s ability to provide anything other than very basic and generic advice, or to provide specific advice tailored to specific needs), (3) low asthma severity (perception that their asthma is not severe enough to warrant the use of technological support), and (4) security and privacy concerns (concerns over data misuse or hacking).

A successful chatbot would have to address these concerns to reach ambivalent users. These patient concerns demonstrate the importance of providing clear information and transparency around data security and the technology owner. Our results align with previous studies demonstrating that technology skepticism [50,51], as well as privacy concerns [52-54], can hinder the successful integration of new technologies in health care settings. Moreover, as the results show that previous exposure to a virtual assistant had a positive impact on patients’ interest in the CA, these issues may improve over time as exposure increases [55-58].

Addressing security and privacy concerns would, at minimum, require very transparent disclosure of how data would be used and kept private (ie, not shared beyond the system), communicated in easy-to-understand ways. Granular consent options could be provided if providers wanted to use data for other purposes, such as research, and these would need to be opt-in. Sharing data with health care providers or systems (eg, NHS) could be made voluntary and fully under user control. Granular controls could also allow users to share only certain types of information and to withdraw consent [59]. To mitigate risks around data security, newer methods such as federated learning (decentralized model training without the distribution of raw patient data) or blockchain-based federated learning (an additional security layer for data sharing) present alternatives to traditional machine learning approaches and combine technological advances with adherence to legislative requirements (eg, HIPAA [Health Insurance Portability and Accountability Act] or GDPR [General Data Protection Regulation]) [60-62].

Recommendation 6: Address Technology Skepticism

While security and privacy concerns relate to threats to personal autonomy, technology skepticism was evident in participant responses expressing disbelief in the technology’s capabilities, accuracy, and fitness for purpose. Some participants expressed an aversion to using technology in favor of human interaction, or a general aversion to “AI.” Others doubted that a chatbot system could provide advice tailored closely enough to the individual to be useful. Addressing these doubts, both technologically (making the doubts unfounded) and through messaging (communicating the genuine benefits and limitations), is likely to increase the audience for an asthma chatbot.

However, trust in these systems is likely to be a moving target. Given the rapid growth of generative AI chatbots such as ChatGPT (OpenAI), future studies could explore whether increased exposure to these types of chatbots affects trust in related technologies, whether positively or negatively. Although an asthma chatbot should only provide advice vetted by clinicians, generic generative AI chatbots such as ChatGPT and Claude (Anthropic) generate advice that is not safety-checked by a human and may therefore respond erroneously with so-called “hallucinations” or fabrications, which, in a health context, could be highly dangerous. As such, educating patients about the limitations of, and distinctions among, various conversational technologies may become critical going forward. It is crucial to establish AI literacy among patients and health care providers to ensure the effective, ethical, and safe integration of emerging technologies [63,64]. In other words, communicating the clear distinction between (1) general-purpose systems, (2) unproven apps that claim to be for health, and (3) specialized health systems with reliable third-party credentials will be part of the effort. However, there are no easy answers here, as trust must be deserved. To tackle these challenges, a multistakeholder approach should be implemented [53,65]. This will require a difficult, ongoing, and system-wide effort from academics, health care professionals, and technologists as they seek to balance the potential of new technologies against the clear risks of harm and the challenges of regulation in the face of relentless technological change.

Recommendation 7: Consider a Trusted Endorsement

A final point on trust came from participants who highlighted the importance of the chatbot being endorsed by trusted authorities, such as their GP or an asthma charity, rather than a commercial enterprise. For health care systems, trust has repeatedly been named as one of the key factors driving technology adoption, even in comparison to the typical Unified Theory of Acceptance and Use of Technology (UTAUT) predictors: effort expectancy, performance expectancy, social influence, and ease of use [66-68]. This may be attributable to the fact that health care is a particularly sensitive and private topic. Consequently, it is imperative to engage trusted authorities in the development of digital health care solutions. Transparency about the involvement of trusted authorities will give users the confidence needed to engage with novel CAs and other digital health care solutions.

By understanding people’s doubts and the reservations that form obstacles to technology adoption, developers can target these concerns through technological improvements, interaction design, and education.

Limitations and Future Work

A limitation to the generalizability of the findings is that the study was conducted within the United Kingdom, and with the UK health care system in mind. Results with respect to access to primary care, trust in the health care system, or access to technologies, for example, are likely to vary across other countries. Another important consideration is that the survey was administered through an online survey service. Confidence in technology use was very high within our sample, but results could differ if, for example, a paper-based survey were conducted via community centers.

It is also relevant to highlight that our data rely on participants’ self-reports of having asthma (rather than medical proof). To mitigate the risk of inaccurate self-reporting, we asked participants to specify when they were diagnosed by a doctor and to answer questions on symptoms and asthma control. We believe the added detail in questions, combined with the high number of participants who took part in the study, mitigates the potential effects of inaccurate self-reporting.

Another limitation involves our small sample of participants who identify with a minority ethnic group. We did not recruit enough participants from ethnic minority groups through the YouGov database to explore research questions specific to these groups’ needs. Our results are therefore not generalizable across ethnic groups, and additional research is needed to address the specific needs of ethnic minorities. We believe this is an important area for future studies, since health disparities have been demonstrated for minority ethnic communities, and easily accessible technologies, such as CAs, might contribute to addressing some of them. We would recommend using alternative recruitment methods to reach participants from minority ethnic groups.

Relatedly, the majority of participants in our sample get support from a GP for their asthma, and trust in the health care system was high. Follow-up studies could focus specifically on patient groups who do not fall within either of these categories to gain a better understanding of how these groups (who are not accessing traditional care) might be better supported and how a chatbot might play a role.

Finally, participants in this study did not have the chance to test an actual asthma chatbot. Instead, they were asked to make hypothetical judgments about their interests and preferences, so their responses were speculative and based on their previous experiences. Ideally, a longitudinal study should be conducted to test the long-term effects of, and adherence to, a fully built CA for asthma support. In a follow-up study, we will test the feasibility of a prototype designed in alignment with the findings of this study, providing insights into real-world patterns of use, perceived benefit, and final evaluations.

Conclusion

Overall, this study provides evidence of demographically broad interest in an asthma chatbot among adults with asthma in the United Kingdom. It is the first study to explore the potential of a CA to support adults with asthma, as previous research in this area focused on pediatric asthma. We identified key factors that correlate with greater interest (ie, easy access, a fast alternative to a GP or nurse, lower confidence in asthma self-management, and perception of asthma as serious) and with lower interest (lack of trust in AI, security concerns, preference for in-person contact, and higher confidence in self-management). These findings offer important insights for the design and development of a CA for asthma management and demonstrate the demand and opportunity in the United Kingdom for providing alternative support for asthma management. The results indicate that CAs can help improve risk self-assessment and asthma control and can provide easy access to basic care outside of appointments with a health care professional.

Acknowledgments

We are grateful to Asthma + Lung UK and our patient advisory board participants for their constant support.

Funding

This project was funded by the UK Engineering and Physical Sciences Research Council (grant no EP/W002477/1). This research was supported by the National Institute for Health and Care Research (NIHR) Imperial Biomedical Research Centre (BRC).

Conflicts of Interest

None declared.

Multimedia Appendix 1

Full list of survey questions used in the study.

PDF File, 111 KB

Multimedia Appendix 2

Overview of channels participants currently use to receive information about asthma.

PNG File, 43 KB

Multimedia Appendix 3

Overview of participants’ preferred platform to use the asthma chatbot.

PNG File, 50 KB

  1. What is the prevalence of asthma? NICE. 2022. URL: https://www.nice.org.uk/cks-uk-only [Accessed 2023-06-22]
  2. Whittamore A. Technology can revolutionise supported self-management of asthma. Primary Care Respiratory Society UK; 2017. URL: https://www.pcrs-uk.org/sites/default/files/SupportedSelfCare_SupportGroupsPerspective.pdf [Accessed 2026-03-03]
  3. Levelling up lung health: how improving lung health will achieve levelling up by delivering a healthier, fairer society. Asthma + Lung UK; 2022. URL: https://www.asthmaandlung.org.uk/sites/default/files/Levelling%20up%20lung%20health_what%20needs%20to%20be%20done.pdf
  4. On the edge: how inequality affects people with asthma. Asthma UK; 2018. URL: https://www.asthmaandlung.org.uk/sites/default/files/2023-03/auk-health-inequalities-final.pdf
  5. Lung conditions kill more people in the UK than anywhere in Western Europe. Asthma + Lung UK. URL: https://www.asthmaandlung.org.uk/media/press-releases/lung-conditions-kill-more-people-uk-anywhere-western-europe [Accessed 2023-01-19]
  6. Mukherjee M, Stoddart A, Gupta RP, et al. The epidemiology, healthcare and societal burden and costs of asthma in the UK and its member nations: analyses of standalone and linked national databases. BMC Med. Aug 29, 2016;14(1):113. [CrossRef] [Medline]
  7. Chen S, Golam S, Myers J, Bly C, Smolen H, Xu X. Systematic literature review of the clinical, humanistic, and economic burden associated with asthma uncontrolled by GINA Steps 4 or 5 treatment. Curr Med Res Opin. Dec 2018;34(12):2075-2088. [CrossRef] [Medline]
  8. Pinnock H. Supported self-management for asthma. Breathe (Sheff). Jun 2015;11(2):98-109. [CrossRef] [Medline]
  9. Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. 2023;6:1169595. [CrossRef] [Medline]
  10. Li J, Dada A, Kleesiek J, Egger J. ChatGPT in healthcare: a taxonomy and systematic review. medRxiv. Preprint posted online in March 2023. [CrossRef]
  11. Vaishya R, Misra A, Vaish A. ChatGPT: Is this version good for healthcare and research? Diabetes & Metabolic Syndrome: Clinical Research & Reviews. Apr 2023;17(4):102744. [CrossRef]
  12. Jaimini U, Thirunarayan K, Kalra M, Venkataraman R, Kadariya D, Sheth A. “How is my child’s asthma?” Digital phenotype and actionable insights for pediatric asthma. JMIR Pediatr Parent. 2018;1(2):e11988. [CrossRef] [Medline]
  13. Kagen S, Garland A. Asthma and allergy mobile apps in 2018. Curr Allergy Asthma Rep. Feb 2, 2019;19(1):6. [CrossRef] [Medline]
  14. Ramsey RR, Caromody JK, Voorhees SE, et al. A systematic evaluation of asthma management apps examining behavior change techniques. J Allergy Clin Immunol Pract. 2019;7(8):2583-2591. [CrossRef] [Medline]
  15. Venkataramanan R, Thirunarayan K, Jaimini U, et al. Determination of personalized asthma triggers from multimodal sensing and a mobile app: observational study. JMIR Pediatr Parent. Jun 27, 2019;2(1):e14300. [CrossRef] [Medline]
  16. Kadariya D, Venkataramanan R, Yip HY, Kalra M, Thirunarayanan K, Sheth A. KBot: knowledge-enabled personalized chatbot for asthma self-management. IEEE; 2019. Presented at: 2019 IEEE International Conference on Smart Computing (SMARTCOMP):138-143. [CrossRef]
  17. Suehs CM, Vachier I, Galeazzi D, et al. Standard patient training versus Vik-Asthme chatbot-guided training: “AsthmaTrain” - a protocol for a randomised controlled trial for patients with asthma. BMJ Open. Feb 21, 2023;13(2):e067039. [CrossRef] [Medline]
  18. Hsia B, Mowrey W, Keskin T, et al. Developing and pilot testing ASTHMAXcel, a mobile app for adults with asthma. J Asthma. Jun 2021;58(6):834-847. [CrossRef] [Medline]
  19. Chan A, Kodali S, Lee GY, et al. Evaluating the effect and user satisfaction of an adapted and translated mobile health application ASTHMAXcel© among adults with asthma in Pune, India. J Asthma. Aug 2023;60(8):1513-1523. [CrossRef] [Medline]
  20. Hsia BC, Wu S, Mowrey WB, Jariwala SP. Evaluating the ASTHMAXcel mobile application regarding asthma knowledge and clinical outcomes. Respir Care. Aug 2020;65(8):1112-1119. [CrossRef] [Medline]
  21. Singh B, Olds T, Brinsley J, et al. Systematic review and meta-analysis of the effectiveness of chatbots on lifestyle behaviours. NPJ Digit Med. Jun 23, 2023;6(1):118. [CrossRef] [Medline]
  22. Laranjo L, Dunn AG, Tong HL, et al. Conversational agents in healthcare: a systematic review. J Am Med Inform Assoc. Sep 1, 2018;25(9):1248-1258. [CrossRef] [Medline]
  23. Parmar P, Ryu J, Pandya S, Sedoc J, Agarwal S. Health-focused conversational agents in person-centered care: a review of apps. NPJ Digit Med. Feb 17, 2022;5(1):21. [CrossRef] [Medline]
  24. Sundareswaran V, Sarkar A. Chatbots RESET: a framework for governing responsible use of conversational AI in healthcare. World Economic Forum. 2020. URL: https://www3.weforum.org/docs/WEF_Governance_of_Chatbots_in_Healthcare_2020.pdf [Accessed 2023-01-19]
  25. Rhee H, Allen J, Mammen J, Swift M. Mobile phone-based asthma self-management aid for adolescents (mASMAA): a feasibility study. Patient Prefer Adherence. 2014;8:63-72. [CrossRef] [Medline]
  26. Kowatsch T, Schachner T, Harperink S, et al. Conversational agents as mediating social actors in chronic disease management involving health care professionals, patients, and family members: multisite single-arm feasibility study. J Med Internet Res. Feb 17, 2021;23(2):e25060. [CrossRef] [Medline]
  27. Li J, Ye L, She J, Song Y. Clinical differences between early- and late-onset asthma: a population-based cross-sectional study. Can Respir J. 2021;2021:8886520. [CrossRef] [Medline]
  28. de Nijs SB, Venekamp LN, Bel EH. Adult-onset asthma: is it really different? Eur Respir Rev. Mar 1, 2013;22(127):44-52. [CrossRef] [Medline]
  29. Wu TJ, Chen BY, Lee YL, Hsiue TR, Wu CF, Guo YL. Different severity and severity predictors in early-onset and late-onset asthma: a Taiwanese population-based study. Respiration. 2015;90(5):384-392. [CrossRef] [Medline]
  30. Alexander K. Most used messenger by brand in the UK as of September 2025. Statista. 2023. URL: https://www.statista.com/forecasts/997945/most-used-messenger-by-brand-in-the-uk [Accessed 2026-03-03]
  31. Saeed Tayeb MM. Whatsapp education for asthma and allergy patients: pros and cons. GJO. 2019;18(5). URL: https://juniperpublishers.com/gjo/volume18-issue5-gjo.php [Accessed 2026-03-04] [CrossRef]
  32. Cumella A, King J, Sinton H, Walker S. Engaging younger people in their asthma management – findings from a pilot telemedicine asthma nurse service. 2020. Presented at: ERS International Congress 2020 abstracts:4805. [CrossRef]
  33. Calderón J, Cherrez A, Ramón GD, et al. Information and communication technology use in asthmatic patients: a cross-sectional study in Latin America. ERJ Open Res. Jul 2017;3(3):00005-02017. [CrossRef] [Medline]
  34. Calvo RA, Peters D, Moradbakhti L, et al. Assessing the feasibility of a text-based conversational agent for asthma support: protocol for a mixed methods observational study. JMIR Res Protoc. Feb 2, 2023;12:e42965. [CrossRef] [Medline]
  35. YouGov PLC. URL: https://yougov.com/ [Accessed 2026-03-10]
  36. IBM SPSS Statistics for Windows, version 25.0. IBM Corp. 2017. URL: https://www.ibm.com/products/spss [Accessed 2026-03-10]
  37. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. Jan 2006;3(2):77-101. [CrossRef]
  38. Ask more from your data with the depth and power of NVivo. Lumivero. 2020. URL: https://lumivero.com/products/nvivo/
  39. Paul E, Fancourt D, Razai M. Racial discrimination, low trust in the health system and COVID-19 vaccine uptake: a longitudinal observational study of 633 UK adults from ethnic minority groups. J R Soc Med. Nov 2022;115(11):439-447. [CrossRef] [Medline]
  40. Callejas Z, López-Cózar R, Ábalos N, Griol D. Affective conversational agents: the role of personality and emotion in spoken interactions. In: Conversational Agents and Natural Language Interaction: Techniques and Effective Practices. IGI Global; 2011:203-222. [CrossRef]
  41. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. Manag Inf Syst Q. Sep 1, 2003;27(3):425-478. [CrossRef]
  42. Barker I, Steventon A, Deeny S. Patient activation is associated with fewer visits to both general practice and emergency departments: a cross-sectional study of patients with long-term conditions. Clin Med. 2017;17(Suppl 3):s15. [CrossRef]
  43. Deeny S, Thorlby R, Steventon A. Briefing: reducing emergency admissions: unlocking the potential of people to better manage their long-term conditions. The Health Foundation; 2018. URL: https://www.health.org.uk/sites/default/files/Reducing-Emergency-Admissions-long-term-conditions-briefing.pdf [Accessed 2026-03-03]
  44. Deci EL, Ryan RM. The “what” and “why” of goal pursuits: human needs and the self-determination of behavior. Psychol Inq. Oct 2000;11(4):227-268. [CrossRef]
  45. Peters D, Calvo RA, Ryan RM. Designing for motivation, engagement and wellbeing in digital experience. Front Psychol. 2018;9:797. [CrossRef] [Medline]
  46. Peters D. Wellbeing supportive design – research-based guidelines for supporting psychological wellbeing in user experience. Int J Hum Comput Interact. Aug 27, 2023;39(14):2965-2977. [CrossRef]
  47. Ryan RM, Deci EL. Self-determination theory: basic psychological needs in motivation, development, and wellness. Guilford Publications; 2017. URL: https://www.google.com/books/edition/Self_Determination_Theory/th5rDwAAQBAJ [Accessed 2026-03-04] [CrossRef]
  48. Moradbakhti L, Leichtmann B, Mara M. Development and validation of a basic psychological needs scale for technology use. Psychol Test Adapt Dev. Jan 1, 2024;5(1):26-45. [CrossRef]
  49. Moradbakhti L, Schreibelmayr S, Mara M. Do men have no need for “feminist” artificial intelligence? Agentic and gendered voice assistants in the light of basic psychological needs. Front Psychol. 2022;13:855091. [CrossRef] [Medline]
  50. Offermann J, Rohowsky A, Ziefle M. Emotions of scepticism, trust, and security within the acceptance of telemedical applications. Int J Med Inform. Sep 2023;177:105116. [CrossRef] [Medline]
  51. Sobaih AEE, Chaibi A, Brini R, Abdelghani Ibrahim TM. Unlocking patient resistance to AI in healthcare: a psychological exploration. Eur J Investig Health Psychol Educ. Jan 8, 2025;15(1):6. [CrossRef] [Medline]
  52. Alhammad N, Alajlani M, Abd-Alrazaq A, Epiphaniou G, Arvanitis T. Patients’ perspectives on the data confidentiality, privacy, and security of mHealth apps: systematic review. J Med Internet Res. May 31, 2024;26:e50715. [CrossRef] [Medline]
  53. Gondi DS, Bandaru VKR, Sathish K, Kanthi Kumar K, Ramakrishnaiah N, Bhutani M. Balancing innovation with patient privacy amid ethical challenges of AI in health care. In: International Conference on Innovative Computing and Communication. Springer Nature; 2025:555-576.
  54. Richardson JP, Smith C, Curtis S, et al. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digit Med. Sep 21, 2021;4(1):140. [CrossRef] [Medline]
  55. Bartneck C, Suzuki T, Kanda T, Nomura T. The influence of people’s culture and prior experiences with Aibo on their attitude towards robots. AI & Soc. Nov 3, 2006;21(1-2):217-230. [CrossRef]
  56. Glikson E, Woolley AW. Human trust in artificial intelligence: review of empirical research. Acad Manag Ann. Jul 2020;14(2):627-660. [CrossRef]
  57. Rossi A, Holthaus P, Dautenhahn K, Koay KL, Walters ML. Getting to know pepper: effects of people’s awareness of a robot’s capabilities on their trust in the robot. 2018. Presented at: Proceedings of the 6th international conference on human-agent interaction:246-252. [CrossRef]
  58. Waytz A, Heafner J, Epley N. The mind in the machine: anthropomorphism increases trust in an autonomous vehicle. J Exp Soc Psychol. May 2014;52:113-117. [CrossRef]
  59. Narkhede MR, Wankhede NI, Kamble AM. Enhancing patient autonomy in data ownership: privacy models and consent frameworks for healthcare. JDH. 2025:1-23. [CrossRef]
  60. Abbas SR, Abbas Z, Zahir A, Lee SW. Federated learning in smart healthcare: a comprehensive review on privacy, security, and predictive analytics with IoT integration. Healthcare (Basel). Dec 22, 2024;12(24):2587. [CrossRef] [Medline]
  61. Ullah Tariq U, Sabrina F, Mamunur Rashid M, et al. Blockchain-based secured data sharing in healthcare: a systematic literature review. IEEE Access. 2025;13:45415-45435. [CrossRef]
  62. Xu G, Zhou Z, Dong J, Zhang L, Song X. A blockchain-based federated learning scheme for data sharing in industrial internet of things. IEEE Internet Things J. 2023;10(24):21467-21478. [CrossRef]
  63. Ang CS. Developing AI literacy in healthcare education: bridging the gap in competency assessment. Discov Educ. 2025;4(1):372. [CrossRef]
  64. Liu Z, Zou W, Lin C. Exploring the influence of privacy concerns, AI literacy, and perceived health stigma on AI chatbot use in healthcare: an uncertainty reduction approach. Patient Educ Couns. Nov 2025;140:109271. [CrossRef] [Medline]
  65. Williamson SM, Prybutok V. Balancing privacy and progress: a review of privacy challenges, systemic oversight, and patient perceptions in AI-driven healthcare. Appl Sci (Basel). 2024;14(2):675. [CrossRef]
  66. AlQudah AA, Al-Emran M, Shaalan K. Technology acceptance in healthcare: a systematic review. Appl Sci (Basel). 2021;11(22):10537. [CrossRef]
  67. Holden RJ, Karsh BT. The technology acceptance model: its past and its future in health care. J Biomed Inform. Feb 2010;43(1):159-172. [CrossRef] [Medline]
  68. Tung FC, Chang SC, Chou CM. An extension of trust and TAM model with IDT in the adoption of the electronic logistics information system in HIS in the medical industry. Int J Med Inform. May 2008;77(5):324-335. [CrossRef] [Medline]


AI: artificial intelligence
CA: conversational agent
EPSRC: Engineering and Physical Sciences Research Council
GDPR: General Data Protection Regulation
GP: general practitioner
HIPAA: Health Insurance Portability and Accountability Act
NHS: National Health Service
UTAUT: Unified Theory of Acceptance and Use of Technology


Edited by Adeola Bamgboje-Ayodele, Andre Kushniruk; submitted 26.Jul.2025; peer-reviewed by Seung Won Lee, Shamnad Mohamed Shaffi; final revised version received 22.Dec.2025; accepted 22.Jan.2026; published 11.Mar.2026.

Copyright

© Laura Moradbakhti, Dorian Peters, Jennifer K Quint, Björn Schuller, Darren Cook, Rafael A Calvo. Originally published in JMIR Human Factors (https://humanfactors.jmir.org), 11.Mar.2026.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.