Published in Vol 12 (2025)

This is a member publication of Lancaster University (Jisc)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/79514.
Exploring Perspectives of Health Care Professionals on AI in Palliative Care: Qualitative Interview Study

Original Paper

1Northern Care Alliance NHS Foundation Trust, Greater Manchester, United Kingdom

2Palliative Care Unit, Dept of Cardiovascular and Metabolic Medicine, University of Liverpool, Liverpool, United Kingdom

3Liverpool John Moores University, Liverpool, United Kingdom

4Marie Curie North West, Liverpool, United Kingdom

5Lancaster Medical School, Lancaster University, Lancaster, United Kingdom

Corresponding Author:

Amara Callistus Nwosu, MBChB, PhD

Lancaster Medical School

Lancaster University

Bailrigg

Lancaster, LA1 4YW

United Kingdom

Phone: 44 1524 594547

Email: a.nwosu@lancaster.ac.uk


Background: The use of artificial intelligence (AI) methods in palliative care research is increasing. Most AI palliative care research involves the use of routinely collected data from electronic health records; however, there are few data on the views of palliative care health care professionals on the role of AI in practice. Determining the opinions of palliative care health care professionals on the potential uses of AI in palliative care will help policymakers and practitioners inform the meaningful use of AI in palliative care practice.

Objective: This study aimed to explore the views of palliative care health care professionals on the use of AI for the analysis of patient data in palliative care.

Methods: This was a phenomenological study using qualitative semistructured interviews with palliative care health care professionals with a minimum of 1 year of clinical experience in a hospice in the North West of England. Data were analyzed using inductive thematic analysis.

Results: We interviewed 6 palliative care professionals, including physicians, nurses, and occupational therapists. AI was viewed positively, although most participants had not used it in practice. None of the participants had received training in AI, and all stated that education in AI would be beneficial. Participants described the potential benefits of AI in palliative care, including the identification of people requiring palliative care interventions and the evaluation of patient experiences. Participants highlighted security and ethical concerns regarding AI related to data governance, efficacy, patient confidentiality, and consent.

Conclusions: This study highlights the importance of staff perceptions of AI in palliative care. Our findings support the role of AI in enhancing care, addressing educational needs, and tackling trust, ethics, and governance issues. This study lays the groundwork for guidelines on AI implementation, urging further research on the methodological, ethical, and practical aspects of AI in palliative care.

JMIR Hum Factors 2025;12:e79514

doi:10.2196/79514

Keywords



Background

Artificial intelligence (AI) is the science and engineering of creating “intelligent” machines (computers) through algorithms that enable a machine to think and act like a human [1]. AI involves different methodologies (eg, machine learning, neural networks, deep learning, and natural language processing) that enable a machine to be trained to act autonomously, with or without human instruction [2]. AI has already demonstrated a significant impact in practice by facilitating the interpretation and analysis of large amounts of data contained within electronic health care datasets [3-5]. In health care, AI can facilitate the diagnosis of diseases, support clinical care delivery, and help individuals maintain their independence [6]. AI is increasingly used in palliative care (a discipline that provides holistic, person-centered support for people with life-limiting illness [7]). For example, AI-driven data analysis of electronic health records has been used to identify palliative care needs [8], support clinical documentation [9], identify quality indicators for end-of-life care [10], support symptom assessments [11], and estimate prognosis [12]. Despite the increased focus on palliative care AI, most current clinical AI tools are designed for nonpalliative care populations [8]. The lack of palliative care AI tools is a consequence of limited evidence in populations with serious illness, who often experience complex physical and psychosocial needs [13]. Palliative patients often experience significant morbidity, complex and multifaceted symptoms, and a high risk of mortality, whereas their families face grief and bereavement that extend beyond the patient’s death [13]. These unique dimensions of care introduce ethical, practical, and emotional considerations for AI, which are not as prominent in other medical and surgical specialties [14].
For example, the use of AI in prognostication may be more important in palliative care, where sensitive communication about life expectancy directly shapes care planning and emotional preparedness [12,15]. Similarly, the handling of data after death (ie, digital legacy and the rights of caregivers and families) raises questions that are less relevant in acute or curative clinical contexts [16]. Despite the growing body of research on AI in medicine, there is currently limited evidence that addresses these specific complexities within palliative care, which highlights the need for targeted research in this field [17]. Consequently, understanding the views of palliative care practitioners on AI is important to better understand the opportunities, challenges, and implementation issues associated with its use [18]. Therefore, a dedicated study of palliative health care professionals’ perspectives on AI is warranted to explore the specific issues affecting this cohort [17,19].

Aim

This study aimed to explore palliative care professionals’ views on the use of AI for the analysis of patient data in palliative care.


Overview

This study involved inductive thematic analysis, a method of analyzing qualitative data in which themes emerge directly from the data without predefined codes or expectations. Inductive thematic analysis was chosen as it provided the researcher with the flexibility to explore participants’ perspectives on AI in palliative care [20]. The lead researcher (OA) was a final-year medical student (male) who was supervised by SM and ACN. OA received training and support to conduct the interviews. The study adhered to the COREQ (Consolidated Criteria for Reporting Qualitative Research) checklist [21].

Study Setting

The study was conducted in a hospice in the North West of England, a specialized health care facility providing care for individuals in the advanced stages of a terminal illness or approaching the end of their lives. The hospice provides various services, including a 15-bed inpatient unit, day services, outpatient clinics, community outreach, patient and family support, and bereavement services.

Sampling and Recruitment

Recruitment was conducted between April and May 2022. The study was introduced at a weekly hospice education meeting, giving potential participants the opportunity to ask questions and express their interest in participating. Study advertisements were placed around the hospice, and an email was sent to all staff outlining the inclusion and exclusion criteria for the study. The study material included information about the researcher (OA). Inclusion criteria were palliative care health care professionals working in the hospice with a minimum of 1 year of clinical experience at the time of data collection (April to May 2022).

Data Collection

Over a period of 2 months, semistructured interviews were carried out with palliative care health care professionals. Participants received written information before the interview and provided written consent. Interviews were conducted face-to-face in a meeting room at the hospice. All the interviews were audio recorded using a digital voice recorder. The interviews lasted between 35 and 60 minutes, and the participants were informed that they could pause or discontinue the interview at any time. Each interview began with the researcher providing the participants with background information about the study and a definition of AI. The researcher conducted interviews using an interview guide, which was used to encourage structure across interviews but also facilitated flexibility by allowing participants to talk freely about their experiences (Multimedia Appendix 1). Open questions were used, and the interview schedule was adapted throughout the course of data collection to reflect the emergent themes and concepts (Textbox 1). Field notes and reflections were written throughout the interview process to help make sense of the data during the analysis phase.

Textbox 1. Examples of the interview questions.
  • What is your experience with artificial intelligence (AI) currently used within palliative care, if any?
  • What do you see as the desired goal or outcome of using AI within palliative care?
  • Have you ever been educated on the use of AI within palliative care?
  • What are your views on introducing education programs for palliative care health care professionals on the use of AI within palliative care?
  • AI in palliative care, like any new health care technology, may raise several safety concerns. Do you have any such concerns?
  • AI in palliative care, like any new health care technology, may raise several security concerns. Do you have any such concerns?
  • What are some important metrics that should be used when comparing AI tools to traditional tools in palliative care?
  • Can you identify a task at work which you currently perform repeatedly, with little or no variations each time?
  • What level of trust would you need to allow AI tools to perform this task for you?
  • What are some ways this level of trust can be established?
  • Is there anything else you’d like to tell me about this topic? If not, do you have any questions?

Interviews were transcribed verbatim by OA. Data were exported to Microsoft Word, and manual thematic analysis coding was used to systematically label the qualitative data extracts to identify patterns and themes. OA used the 6-step thematic analysis proposed by Braun and Clarke [22], which involves (1) familiarization with the data, (2) generation of initial codes, (3) development of initial themes, (4) reviewing themes, (5) defining and naming themes, and (6) writing up the analysis. We used inductive analysis, with line-by-line coding of participants’ interviews, to generate codes and organize the data into themes (Multimedia Appendix 2). Coding was carried out by OA. During the data collection phase, OA had regular supervisory meetings (with ACN and SM) to review data analysis, discuss initial findings, and evaluate data throughout the analysis process.

Six individuals contacted OA to consent to participate. None of the participants withdrew from the study. Interviews were analyzed iteratively. By the fourth interview, all major themes (eg, “general openness toward AI” and “importance of human contact, empathy, and sympathy”) were identified. Additional interviews reinforced, rather than expanded upon, these categories. For example, the theme of “confidentiality concerns” emerged in the first interview and was reiterated consistently in interviews 2 to 6 without new subcategories. This redundancy indicated that thematic saturation had been reached with the sample. We determined that further interviews were unlikely to provide new codes, categories, or insights relevant to the research question. Therefore, a consensus was reached to stop data collection on completion of 6 interviews (ie, to stop recruitment of further participants) as the theoretical categories became saturated [23,24].

Ethics Statement

The University of Liverpool ethics committee gave ethics approval for this work (reference number 8523). This study was approved by the hospice research governance group. All participants provided written informed consent to participate in this study. All participants were informed of their right to privacy and confidentiality. All participants were informed that their data would be anonymized and that no identifying information would be included in any works related to this study. No compensation was given to participants for participating in the study.


Overview

We interviewed 6 palliative health care professionals, including 1 (17%) nurse, 2 (33%) occupational therapists, and 3 (50%) physicians. A total of 4 (67%) participants were female, and 2 (33%) were male. All interviews were conducted face-to-face and in person (Table 1).

Three themes were developed from the data: (1) opportunities for practice and the need for education, (2) enhancing human care and connection, and (3) trust and ethical considerations (Figure 1).

Table 1. Characteristics of the participants (N=6).
Characteristic: Participants, n (%)

Profession
  Physician: 3 (50)
  Occupational therapist: 2 (33)
  Nurse: 1 (17)

Sex
  Female: 4 (67)
  Male: 2 (33)

Interview method
  In person: 6 (100)
  Online: 0 (0)

Figure 1. Thematic map showing the 3 main themes. AI: artificial intelligence.

Opportunities for Practice and the Need for Education

Participants highlighted their openness to consider using AI in palliative care practice. Specifically, participants described their views on how this technology could improve care delivery and efficiency for their patients. Participants spoke of how algorithm-driven health care is increasingly being used in other clinical specialties and how, similarly, AI can be used to improve palliative care:

I mean, I think obviously the future is, is going that way, isn’t it? You know, artificial intelligence is being developed at all areas.
[Participant 3]

Participants described the lack of AI use in palliative care compared to other medical and surgical specialties:

I suppose it’s kind of an alien concept, isn’t it? I think in terms of technology that we use day-to-day at the moment, we’re kind of behind the curve, I would say in, in the NHS [National Health Service], and certainly in this environment. So, I think it’s difficult to kind of visualise what that would be like in kind of day-to-day practice.
[Participant 6]

Participants spoke about how AI can potentially improve palliative care. The interviewees provided several examples of possible benefits, such as machine learning–based analyses of electronic health records to identify people needing palliative care, to improve data capture and analysis, and to inform personalized care recommendations:

I guess, you know, looking through the hospital records in the hospital. When I see a new patient, I’m manually looking back through the notes to try to find if they’ve had previous encounters with the palliative care team, how many times they’ve been in hospital, looking for all the medications to work out what medications they’re on and if any have been stopped.
[Participant 3]
A good thing would be if something could analyse the entire database every day and say, “look, these are the patients who are flagging up, particularly symptomatic,” or “are using lots of PRN [as required] medicines” or, you know, [if] certain keywords trigger [an] urgent review?
[Participant 4]

Participants highlighted that palliative care staff have a limited understanding of the different types of AI applications that may be used in clinical care. Participants expanded on this by describing how better staff education may improve their confidence in using AI in clinical practice:

I think for healthcare professionals, like for me, who’ve maybe not come across so much, or don’t understand it so much, it’s more about knowing about it, and understanding how we can help use it a bit better.
[Participant 1]
I think that’s only a positive thing. And, I think, to introduce education to help people understand even the basics of what AI is is really important, because I think until we know more about what it is and understand more about what it is, we can’t think of the ways in which we can improve care for the patient.
[Participant 1]

Enhancing Human Care and Connection

Participants emphasized the importance of human contact, empathy, and sympathy in palliative care, expressing concerns about AI potentially replacing human care. This was illustrated by the comments of 1 participant:

I think in palliative care, you know, our strength is based on human connections.... I don’t mean to be like negative towards, kind of, AI technology and stuff, but I think, actually, you know, the majority of my day, when I’m speaking to patients, it’s about human connection.
[Participant 6]

Participants identified how AI systems could be potentially designed and modeled to dynamically learn from health care professionals and service users, with the objective to develop algorithms that are meaningful in helping them provide clinical care:

I think it should be modelled as if, you know, the algorithm behind it is trying to learn, and it’s asking you for confirmation. You know, so it’s saying, “this is what I’ve found,” you know, and you then have to decide whether this is useful, or “can you confirm this is correct?”, or something like that. It should be this kind of back and forth.
[Participant 4]

Participants described the potential that palliative care health care professionals can routinely use data, informed by AI, to support care and their clinical decision-making (eg, analysis of electronic health record data to identify patients with palliative care needs, automated transcription of consultations, and personalized treatment recommendations). Consequently, the interviewees discussed scenarios in which AI was used to improve and augment the clinical care provided by palliative care professionals. Participants said that AI could be used as a tool to support clinical judgment and decision-making while maintaining a patient-centered approach:

I guess if you use the program alongside, like, have a step-by-step approach, so you could use the program or the intelligence alongside, you know, a human in kind of partnership. And then gradually, you just have to learn that trust and become more used to it.
[Participant 2]

Trust and Ethical Considerations

Participants highlighted the importance of health care professionals being confident that AI-driven clinical tools are trustworthy and reliable. The interviewees framed their opinions on the trustworthiness of AI in the context of their clinical responsibility of providing care for people with serious illness (who are often considered vulnerable). Several quotes from the participants illustrated the important role of research in generating evidence to inform meaningful AI use in clinical care:

It’d be track record and experience, wouldn’t it? So, you probably would want to have enough evidence to show that it made good decisions.
[Participant 5]

Participants stated that confidentiality was important, with several statements highlighting their concerns regarding data privacy and the security implications of using AI in clinical practice. They emphasized the importance of ensuring that patients’ data remain secure and protected when using AI to inform clinical decision-making in palliative care:

I suppose, privacy and data is the main concern as quite often it is in health care, isn’t it? And you know, making sure that data is secure and, you know, things like hacking aren’t an issue and people can’t access patients’ and relatives’ private data easily.
[Participant 1]
I think with any new technology to have to make sure it’s secure... And, you know, privacy concerns and breaches of confidentiality- So I think all those things are really important.
[Participant 6]

Participants discussed the risk of bias associated with AI analysis, which could create (and widen existing) inequalities in palliative care. In the interviews, the participants discussed the importance of developing strategies to reduce this risk of bias in AI algorithms and to explore public opinion about the role of AI in clinical practice. Participants described their belief that AI should be used to improve holistic care for patients and those important to them. Further comments from the interviews highlighted the opinion that the use of AI in palliative care should be inclusive, unbiased, and ethical:

If our algorithms have been driven by developers in Silicon Valley in California and most of them are white, male and young, and the modelling has been tested on a particular set of individuals, which don’t have certain characteristics, which mean that certain people are not represented, you then might get a device or an algorithm which isn’t tailored for the needs of certain people.
[Participant 4]

General Comments of Participants

Overall, AI was viewed positively by participants, although most stated that they had not used it in practice. None of the participants had received training in AI, and all stated that they would have liked to have received education on this topic. Participants described their opinions on how data science can improve clinical care; potential ideas included the use of AI to identify people from electronic health records who need palliative care and the use of data analytics to evaluate quality of care. Participants highlighted data privacy and ethical issues related to AI use in palliative care, including important related issues such as governance, confidentiality, and consent.


Concerning the use of AI in palliative care, it is important to consider the opportunities it offers, the educational needs of staff, the importance of human connections, and practical issues related to trust and ethics.

Importance and Uniqueness of This Paper

This study provides an overview of the views of specialist palliative care health care professionals regarding AI in palliative care, which adds knowledge to the limited evidence base. Our qualitative approach facilitated an in-depth exploration of participants’ experiences, which captured their nuances and complexities. The evidence derived from this study improves understanding of palliative care staff’s views on AI, which will shape its clinical use. Evidence demonstrates that improved staff involvement can improve the effectiveness and success of health care system interventions [25,26].

Relation to Previous Work in This Area

In this study, the staff positively described potential opportunities in which AI could be used to support palliative care, with themes consistent with previous work conducted with generalist staff [27]. In our study, staff identified several hypothetical possibilities where AI could be positively used to improve their practice (eg, predictive modeling, text screening, symptom assessment, and communication), which are consistent with current developments of AI in palliative care [28].

Our findings align with previous research, which recommends that health care professionals receive formal education and training in AI [17,29]. Specifically, previous research advises palliative care education programs to include holistic overviews of AI technologies, ethical considerations, and case studies that highlight the real-world applications and challenges of AI in palliative care [30].

Our findings support previous work, which describes the importance of focusing on human connections in palliative care while ensuring that AI tools are meaningfully used to improve the experience of patients, caregivers, and staff [27]. Consistent with previous research, we highlight the potential problems and bias that may occur from using AI in palliative care, due to limited evidence of patient-centered outcome measures in palliative care populations [14]. In our analysis, participants described the ethical challenges of using AI in palliative care (reporting themes of promoting transparency and accountability in AI systems, regular ethical review and continuous impact assessments, ensuring patient autonomy and informed consent, and safeguarding data privacy and security) [30]. The ethical themes reported in our study are similar to those in previous research [30,31], illustrating the need to incorporate ethical principles (such as the 4 principles) into decisions involving AI use in palliative care practice [32]. These principles raise questions such as autonomy (Do service users have a choice of how their data are collected, analyzed, and stored?), beneficence (Is AI used in the best interests of individuals, or is the benefit mostly for groups, populations, or other stakeholders?), nonmaleficence (How can we ensure people are not harmed by AI in palliative care?), and justice (How can AI tools be used fairly and equitably, in vulnerable people, from different backgrounds, cultures, and geographies?) [33]. Furthermore, our results support the importance of integrating broader ethical theories into practice (eg, consequentialism, deontology, rights-based ethics, and virtue ethics) to provide clinicians and policymakers with a framework to make decisions on how to use emerging technologies in clinical practice [34].
These frameworks can be used to address uncertainty to help stakeholders’ (eg, health care professionals, managers, and policymakers) decision-making when considering how to responsibly use AI tools in palliative care [13]. Consistent with previous work, our findings reinforce the view that there is a risk that current AI applications lack engagement with the ethical complexities of real-world use in palliative care, which highlights questions about the adequacy of clinical practice safeguards [15,30].

Limitations

This was a small study, focused on one hospice in the North West of England, so the findings may not be generalizable to other palliative care settings (eg, home, community, hospital, and nursing homes). Although thematic saturation was achieved with 6 participants, this may have been influenced by the similarities of the participants. We acknowledge that a larger sample incorporating staff from a wider range of roles and professional backgrounds may have yielded more in-depth and diverse data. For example, this study lacks representation of some professional roles (eg, social work, spiritual care, pharmacy, and fundraising), which means there is a lack of data about how AI may impact wider specialist roles in the palliative care multidisciplinary team. Furthermore, our study did not include the perspectives of patients, caregivers, and other relevant stakeholders.

Importance to Policy, Practice, and Research

Decision-makers should consider the perspectives of palliative care staff when developing and implementing AI tools for palliative care. When considering applications of palliative care AI, decision-makers should consider how these innovations will improve care, address human needs, and fulfill ethical and governance requirements. From an educational perspective, it is important that palliative care professionals are trained to safely and effectively use new technologies (eg, AI) in clinical practice. Activities to achieve this objective may include the development of training curricula for undergraduate and postgraduate students, including content on the opportunities and challenges of AI in health care, ethical considerations of its use, and governance issues [30,35].

Future research on palliative care AI should establish standardized reporting for studies, seek external validation, and consider ethical issues, which are needed to ensure that the clinical application of AI tools is safe, meaningful, and effective [15]. Future research should explore views (on palliative care AI) from different perspectives, including multidisciplinary teams, managers, patients, caregivers, and other relevant stakeholders. Researchers should involve interdisciplinary partnerships and collaboration to facilitate work across essential interconnected themes, such as design, computing, data analysis, ethics, and translational medicine [17,36,37].

Conclusions

This study shows the importance of considering the views of palliative care professionals regarding the potential role of AI in clinical practice. Our findings demonstrate the importance of considering opportunities to meaningfully use AI to improve human-focused care, support staff education, and address practical issues related to trust, ethics, and governance. This study provides a foundation for developing guidelines for AI implementation in palliative care practice. Future research should examine methodological, ethical, and practical issues to ensure that AI best supports palliative care for people with serious illnesses.

Acknowledgments

The authors would like to thank the participants for their valuable input and participation in this study. They would also like to thank the hospice management and research governance group for supporting study recruitment. SS and ACN were funded by Marie Curie, the UK's leading palliative care charity. This study did not receive any funding.

Data Availability

All data generated or analyzed during this study are included in this published article (and its supplementary information files).

Authors' Contributions

Conceptualization: OA (lead), SM (supporting), ACN (supporting)

Formal analysis: OA (lead), SM (supporting), ACN (supporting), SS (supporting)

Methodology: OA (lead), ACN (supporting), SS (supporting), SM (supporting)

Project administration: OA (lead), ACN (equal), SM (supporting), SS (supporting)

Supervision: ACN (lead), SM (equal), SS (equal)

Writing—original draft: OA (lead), ACN (supporting)

Writing—review and editing: OA (lead), ACN (equal), SM (supporting), SS (supporting)

Conflicts of Interest

None declared.

Multimedia Appendix 1

Interview guide.

DOCX File , 40 KB

Multimedia Appendix 2

Generation of study themes.

DOCX File , 28 KB

  1. McCarthy J. What is artificial intelligence? Stanford University. Nov 12, 2007. URL: https://www-formal.stanford.edu/jmc/whatisai.pdf [accessed 2025-11-22]
  2. Chen M, Decary M. Artificial intelligence in healthcare: an essential guide for health leaders. Healthc Manage Forum. Jan 24, 2020;33(1):10-18. [CrossRef] [Medline]
  3. Shubhendu S S, Vijay J. Applicability of artificial intelligence in different fields of life. Int J Sci Eng Res. Sep 2013;1(1):28-35. [FREE Full text] [CrossRef]
  4. Quinn TP, Senadeera M, Jacobs S, Coghlan S, Le V. Trust and medical AI: the challenges we face and the expertise needed to overcome them. J Am Med Inform Assoc. Mar 18, 2021;28(4):890-894. [FREE Full text] [CrossRef] [Medline]
  5. Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J. Jul 2021;8(2):e188-e194. [FREE Full text] [CrossRef] [Medline]
  6. Sixsmith A. AgeTech: technology-based solutions for aging societies. In: Rootman I, Edwards P, Levasseur M, Grunberg F, editors. Promoting the health of older adults: the Canadian experience. Toronto, ON. Canadian Scholars; 2021.
  7. Palliative care. World Health Organization. URL: https://www.who.int/health-topics/palliative-care [accessed 2025-11-22]
  8. Durieux BN, Tarbi EC, Lindvall C. Opportunities for computational tools in palliative care: supporting patient needs and lowering burden. Palliat Med. Sep 2022;36(8):1168-1170. [CrossRef] [Medline]
  9. Lindvall C, Lilley EJ, Zupanc SN, Chien I, Udelsman BV, Walling A, et al. Natural language processing to assess end-of-life quality indicators in cancer patients receiving palliative surgery. J Palliat Med. Feb 2019;22(2):183-187. [CrossRef] [Medline]
  10. Lee RY, Brumback LC, Lober WB, Sibley J, Nielsen EL, Treece PD, et al. Identifying goals of care conversations in the electronic health record using natural language processing and machine learning. J Pain Symptom Manage. Jan 2021;61(1):136-42.e2. [FREE Full text] [CrossRef] [Medline]
  11. Heintzelman NH, Taylor RJ, Simonsen L, Lustig R, Anderko D, Haythornthwaite JA, et al. Longitudinal analysis of pain in patients with metastatic prostate cancer using natural language processing of medical record text. J Am Med Inform Assoc. 2013;20(5):898-905. [FREE Full text] [CrossRef] [Medline]
  12. Avati A, Jung K, Harman S, Downing L, Ng A, Shah NH. Improving palliative care with deep learning. BMC Med Inform Decis Mak. Dec 12, 2018;18(Suppl 4):122. [CrossRef] [Medline]
  13. Finucane AM, Swenson C, MacArtney JI, Perry R, Lamberton H, Hetherington L, et al. What makes palliative care needs "complex"? A multisite sequential explanatory mixed methods study of patients referred for specialist palliative care. BMC Palliat Care. Jan 15, 2021;20(1):18. [FREE Full text] [CrossRef] [Medline]
  14. García Abejas A, Geraldes Santos D, Leite Costa F, Cordero Botejara A, Mota-Filipe H, Salvador Vergés À. Ethical challenges and opportunities of AI in end-of-life palliative care: integrative review. Interact J Med Res. May 14, 2025;14:e73517. [FREE Full text] [CrossRef] [Medline]
  15. Migiddorj B, Batterham M, Win KT. Systematic literature review on the application of explainable artificial intelligence in palliative care studies. Int J Med Inform. Aug 2025;200:105914. [FREE Full text] [CrossRef] [Medline]
  16. Stanley S, Higginbotham K, Finucane A, Nwosu AC. A grounded theory study exploring palliative care healthcare professionals' experiences of managing digital legacy as part of advance care planning for people receiving palliative care. Palliat Med. Oct 2023;37(9):1424-1433. [FREE Full text] [CrossRef] [Medline]
  17. Nwosu AC, McGlinchey T, Sanders J, Stanley S, Palfrey J, Lubbers P, et al. Identification of digital health priorities for palliative care research: modified Delphi study. JMIR Aging. Mar 21, 2022;5(1):e32075. [FREE Full text] [CrossRef] [Medline]
  18. Parchmann N, Hansen D, Orzechowski M, Steger F. An ethical assessment of professional opinions on concerns, chances, and limitations of the implementation of an artificial intelligence-based technology into the geriatric patient treatment and continuity of care. Geroscience. Dec 04, 2024;46(6):6269-6282. [CrossRef] [Medline]
  19. Mills J, Fox J, Damarell R, Tieman J, Yates P. Palliative care providers' use of digital health and perspectives on technological innovation: a national study. BMC Palliat Care. Aug 07, 2021;20(1):124. [FREE Full text] [CrossRef] [Medline]
  20. Fereday J, Muir-Cochrane E. Demonstrating rigor using thematic analysis: a hybrid approach of inductive and deductive coding and theme development. Int J Qual Methods. 2006;5(1):80-92. [CrossRef]
  21. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. Dec 16, 2007;19(6):349-357. [CrossRef] [Medline]
  22. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. Jul 21, 2008;3(2):77-101. [CrossRef]
  23. Charmaz K. Constructing Grounded Theory. Thousand Oaks, CA. SAGE Publications; 2014.
  24. Pope C, Ziebland S, Mays N. Analysing qualitative data. In: Pope C, Mays N, editors. Qualitative Research in Health Care, Third Edition. Oxford, UK. Blackwell Publishing Ltd; 2006.
  25. Cadeddu SB, Dare LO, Denis JL. Employee-driven innovation in health organizations: insights from a scoping review. Int J Health Policy Manag. 2023;12:6734. [FREE Full text] [CrossRef] [Medline]
  26. Ghani B, Hyder SI, Yoo S, Han H. Does employee engagement promote innovation? The facilitators of innovative workplace behavior via mediation and moderation. Heliyon. Nov 2023;9(11):e21817. [FREE Full text] [CrossRef] [Medline]
  27. Henzler D, Schmidt S, Koçar A, Herdegen S, Lindinger GL, Maris MT, et al. Healthcare professionals' perspectives on artificial intelligence in patient care: a systematic review of hindering and facilitating factors on different levels. BMC Health Serv Res. May 01, 2025;25(1):633. [FREE Full text] [CrossRef] [Medline]
  28. Reddy V, Nafees A, Raman S. Recent advances in artificial intelligence applications for supportive and palliative care in cancer patients. Curr Opin Support Palliat Care. Jun 01, 2023;17(2):125-134. [CrossRef] [Medline]
  29. Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. Sep 22, 2023;23(1):689. [FREE Full text] [CrossRef] [Medline]
  30. Adegbesan A, Akingbola A, Ojo O, Jessica OU, Alao UH, Shagaya U, et al. Ethical challenges in the integration of artificial intelligence in palliative care. J Med Surg Public Health. Dec 2024;4:100158. [CrossRef]
  31. Floridi L. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford, UK. Oxford University Press; 2023.
  32. Beauchamp TL, Childress JF. Principles of Biomedical Ethics. Oxford, UK. Oxford University Press; 2001.
  33. Varkey B. Principles of clinical ethics and their application to practice. Med Princ Pract. Jun 4, 2021;30(1):17-28. [FREE Full text] [CrossRef] [Medline]
  34. Chonko L. Ethical theories. Direct Selling Education Foundation. 2012. URL: https://dsef.org/wp-content/uploads/2012/07/EthicalTheories.pdf [accessed 2025-11-22]
  35. Preparing the healthcare workforce to deliver the digital future. NHS Health Education England. 2018. URL: https://www.hee.nhs.uk/sites/default/files/documents/Topol%20Review%20interim%20report_0.pdf [accessed 2025-11-22]
  36. Nwosu AC. Telehealth requires improved evidence to achieve its full potential in palliative care. Palliat Med. Jul 2023;37(7):896-897. [FREE Full text] [CrossRef] [Medline]
  37. Nwosu A, Stanley S, Norris J, Taubert M. Digital legacy and palliative care: using technology, design and healthcare partnerships to research how digital information is managed after death. BMJ Support Palliat Care. 2023;13:A41-A42. [FREE Full text] [CrossRef]


AI: artificial intelligence
COREQ: Consolidated Criteria for Reporting Qualitative Research


Edited by A Choudhury; submitted 23.Jun.2025; peer-reviewed by M Tierney, SK Nandipati, J John Thayil; comments to author 07.Sep.2025; revised version received 26.Sep.2025; accepted 19.Nov.2025; published 08.Dec.2025.

Copyright

©Osamah Ahmad, Stephen Mason, Sarah Stanley, Amara Callistus Nwosu. Originally published in JMIR Human Factors (https://humanfactors.jmir.org), 08.Dec.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.