Original Paper
Abstract
Background: Emerging technologies such as artificial intelligence (AI) require an early-stage assessment of potential societal and ethical implications to increase their acceptability, desirability, and sustainability. This paper explores and compares 2 of these assessment approaches: the responsible innovation (RI) framework originating from technology studies and the co-design approach originating from design studies. While the RI framework has been introduced to guide early-stage technology assessment through anticipation, inclusion, reflexivity, and responsiveness, co-design is a commonly accepted approach in the development of technologies to support the care for older adults with frailty. However, there is limited understanding about how co-design contributes to the anticipation of implications.
Objective: This paper empirically explores how the co-design process of an AI-based decision support system (DSS) for dementia caregivers is complemented by explicit anticipation of implications.
Methods: This case study investigated an international collaborative project that focused on the co-design, development, testing, and commercialization of a DSS that is intended to provide actionable information to formal caregivers of people with dementia. In parallel to the co-design process, an RI exploration took place, which involved examining project members’ viewpoints on both positive and negative implications of using the DSS, along with strategies to address these implications. Results from the co-design process and RI exploration were analyzed and compared. In addition, retrospective interviews were held with project members to reflect on the co-design process and RI exploration.
Results: Our results indicate that, when involved in exploring requirements for the DSS, co-design participants naturally raised various implications and conditions for responsible design and deployment: protecting privacy, preventing cognitive overload, providing transparency, empowering caregivers to be in control, safeguarding accuracy, and training users. However, when comparing the co-design results with insights from the RI exploration, we found limitations to the co-design results, for instance, regarding the specification, interrelatedness, and context dependency of implications and strategies to address implications.
Conclusions: This case study shows that a co-design process that focuses on opportunities for innovation rather than balancing attention between positive and negative implications may result in knowledge gaps related to social and ethical implications and how they can be addressed. In the pursuit of responsible outcomes, co-design facilitators could broaden their scope and reconsider the specific implementation of the process-oriented RI principles of anticipation and inclusion.
doi:10.2196/55961
Introduction
Background
In the long-term care for older adults with frailty, caregivers and clients are increasingly being assisted by artificial intelligence (AI)–based technologies [
- ]. AI-based technologies can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or web-based environments, thereby using machine or human-based data and input [ ]. For instance, AI is being used in decision support systems (DSSs) that acquire relevant data about care needs or processes; present the relevant data to users (eg, caregivers); and translate raw data into actionable information, such as alerts, risk assessments, or recommendations about care strategies [ - ]. Notwithstanding the opportunities and advantages, it is broadly acknowledged that the use of AI-based technologies entails societal and ethical implications. The long-term data collection in the context of monitoring older people’s health and well-being and the mediating or even leading role of algorithms in interpreting these data to arrive at care-related decisions pose implications related to, among others, undermining people’s privacy, autonomy, and self-determination; the discrimination and stigmatization of old age; and surveillance capitalism [ , - ].
Due to the impact technologies such as DSSs have on people’s lives and the potential resistance that might emerge during implementation, an early-stage assessment of their implications is called for. This paper explores and compares 2 of these assessment approaches: the responsible innovation (RI) framework originating from technology studies and the co-design approach originating from design studies. The term RI refers to the aim to ensure the ethical acceptability, societal desirability, and sustainability of innovation processes and outcomes [
, ]. To put RI into practice, Owen et al [ ] suggest that 4 process-oriented principles should guide technology research and development: (1) anticipation of the potential positive and negative implications; (2) inclusion of users and other stakeholders; (3) reflexivity of actors upon their own practices, assumptions, values, and interests; and (4) responsiveness to insights that emerge during the innovation process.
Co-design can be used as an umbrella term for approaches that actively involve users and other stakeholders of innovations in any stage of the design process to ensure that the outcomes meet their needs [
, ]. It is a commonly accepted approach in the development of technologies to support the long-term care for older adults [ - ]. On a conceptual level, co-design resonates with RI. Both approaches share a focus on developing technologies to match human needs and abilities, similar to research fields such as human factors, human-computer interaction, and cognitive engineering. In fact, co-design has increasingly received attention as a way to support RI [ ]. Similar to RI, the co-design approach describes a research and development process in which innovators inclusively deliberate and reflect on the needs and values of different stakeholders and iteratively design and adapt innovations based on these insights [ ]. However, in contrast to RI, co-design does not explicitly impose on innovators the need to anticipate potential societal and ethical implications (henceforth, abbreviated as “implications”). Co-design can yield insights into potential unintended side effects and value creation that stakeholders do not want from innovation, but this is generally not an explicit aim in co-design. Against this background, this paper empirically explores how the explicit anticipation of implications can complement co-design.
More specifically, this paper presents a case study on an international collaborative project that focuses on the development of a DSS to support formal caregivers involved in long-term dementia care. A co-design process involving intended users and other stakeholders (henceforth, abbreviated as “users”) is central to the development of the DSS. In addition, a separate line of research of the project under investigation explicitly anticipated implications of using DSSs in dementia care, along with strategies to address these implications, thereby fostering RI in AI-assisted decision-making. This so-called RI exploration largely took place in parallel to (ie, not as part of) the co-design activities and focused on soliciting the perspectives of project members (PMs) rather than those of users. This paper describes the empirical exploration of how the co-design process of an AI-based DSS for dementia caregivers is complemented by the explicit anticipation of implications.
The Healthy Ageing Eco-system for People With Dementia Project
The case presented in this paper is the Healthy Ageing Eco-system for People With Dementia (HAAL) project, which is part of the European Active and Assisted Living (AAL) program (AAL Europe, 2021; project AAL-2020-7-229-CP). In HAAL, an international consortium comprising care organizations, research institutes, and commercial firms from the Netherlands, Italy, Taiwan, and Denmark collaborates on the co-design, development, testing, and commercialization of a DSS that is intended to provide actionable information to formal caregivers of people with dementia, with the aim of reducing their workload and increasing the quality of care [
]. The DSS developed in HAAL concerns a dashboard that integrates various types of data about the physical activity, eating and sleeping patterns, cognitive functioning, mood, social contact, and medication intake of people with dementia. These data can be collected via several digital technologies (henceforth, “HAAL technologies”) throughout various stages of dementia. Besides integrating the data from HAAL technologies into 1 dashboard, possibilities were explored to provide caregivers with only the most relevant data in the form of summary overviews, alerts, predictions about emergency situations, and recommendations about care strategies. To this end, both preprogrammed, rule-based algorithms and data-driven algorithms rooted in machine learning are used to process data.
With these predefined directions as a starting point, a series of iterative co-design activities involving dementia caregivers, or more correctly “proxy users” who represent these eventual users (see the study by Stewart and Hyysalo [
]), and other stakeholders were organized to feed the actual design and development of the dashboard. The co-design activities focused on exploring the relevance and possibilities of translating the data from HAAL technologies into useful information and on prioritizing the data most relevant to present in the dashboard [ , ]. In addition, the co-design activities focused on determining functionalities of the dashboard and designing and evaluating different pages of the dashboard’s user interface.
The RI exploration in HAAL, which took place largely in parallel to the co-design activities, initially focused on raising PMs’ general awareness about RI and exploring their perspectives on both positive and negative implications of using the HAAL dashboard, along with strategies to address these implications.
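To make the data processing described above concrete, the following minimal sketch (our own illustration; the DailyRecord structure, measures, and thresholds are hypothetical and not taken from the HAAL project) shows how a simple rule-based algorithm might compare a client’s integrated daily data against their own recent baseline to generate alerts, before any machine learning layer is added:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class DailyRecord:
    """One client's integrated daily measurements from several monitoring devices."""
    client_id: str
    steps: int          # physical activity tracker
    sleep_hours: float  # sleep sensor
    meals: int          # eating pattern monitor

def flag_deviations(history, today, tolerance=0.5):
    """Rule-based check: flag any measure that falls below (1 - tolerance)
    of the client's own recent average. Thresholds are purely illustrative."""
    alerts = []
    for field_name in ("steps", "sleep_hours", "meals"):
        baseline = mean(getattr(r, field_name) for r in history)
        value = getattr(today, field_name)
        if baseline > 0 and value < (1 - tolerance) * baseline:
            alerts.append(
                f"{today.client_id}: {field_name} unusually low "
                f"({value} vs ~{baseline:.1f} baseline)"
            )
    return alerts

history = [DailyRecord("room-12", 4000, 7.0, 3), DailyRecord("room-12", 3800, 6.5, 3)]
print(flag_deviations(history, DailyRecord("room-12", 1200, 6.8, 3)))
# -> ['room-12: steps unusually low (1200 vs ~3900.0 baseline)']
```

A data-driven variant could replace the fixed tolerance with thresholds learned from historical data, which is where the project’s machine learning algorithms would come in.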
Methods
Overview
For this case study, results from the co-design process and RI exploration within the HAAL project were compiled and analyzed. In addition, retrospective interviews were held with individual PMs to reflect on the co-design process and RI exploration. Because the co-design process and RI exploration were largely organized in parallel, the HAAL project provided sufficient data within a specific time and context to perform a retrospective analysis on how the explicit anticipation of implications can complement co-design.
The accompanying figure shows a timeline of activities.
Co-Design Process
The table below describes the 4 specific steps taken in the co-design process. The co-design activities in HAAL were conducted in 4 countries: the Netherlands, Italy, Taiwan, and Denmark. The organizations from Denmark are unsubsidized partners in the HAAL project and did not participate in co-design steps 3 to 4. Despite differences in dementia care systems across these countries, such as the types of caregivers involved in home-based and institutionalized care settings, formal caregivers of people with dementia were perceived as the primary target group for (using) the dashboard in all countries. Hence, a variety of formal caregivers of people with dementia, such as (homecare) nurses, case managers, psychologists, psychotherapists, social workers, and specialists in the care of older adults, were involved in the co-design activities. In addition, other stakeholders, such as innovation staff, data analysts at care organizations, and people working in care alarm centers, were involved in some steps of the co-design process to broadly explore requirements for the dashboard. As indicated in the table, two intermediate steps were taken without the direct involvement of users. Further, at the end of step 4, participants were asked about 2 RI-related themes (autonomy and transparency). Throughout the co-design activities, data were collected in the form of notes, audio and video recordings, photos, drawings, and (web-based) canvasses and by conducting surveys.
| Step | Methods | Research focus | Participants |
| --- | --- | --- | --- |
| 1a | Focus group sessions (3 web based, 1 hybrid, and 16 physical) | — | Nurses, day-care workers, psychologists, physiotherapists, technical stakeholders, innovation managers and directors of care organizations, representatives from various municipalities, people with dementia, and informal caregivers (n=146; the Netherlands: n=18, 12.3%; Italy: n=18, 12.3%; Taiwan: n=108, 74%; Denmark: n=2, 1.4%) |
| 2 | Demonstration, try-outs, and survey (7 physical and 1 hybrid) | — | Nurses, day-care workers, psychologists, physiotherapists, data specialists, and innovation staff and directors from care organizations (n=48; the Netherlands: n=6, 12%; Italy: n=9, 19%; Taiwan: n=30, 62%; Denmark: n=3, 6%) |
| 3d | Co-design sessions (3 physical and 2 web based) | — | Data specialists and innovation staff, including part-time nurses (n=21; the Netherlands: n=6, 29%; Italy: n=4, 19%; Taiwan: n=11, 52%) |
| 4 | Usability study (8 physical sessions, including survey) | — | Formal caregivers, digital care ambassadors, alarm center operators, and innovation staff (n=33; the Netherlands: n=9, 27%; Italy: n=14, 42%; Taiwan: n=10, 30%) |
aIntermediate step: after analyzing results from step 1, user personas and desired dashboard functionalities were defined and translated into a preliminary mock-up for the dashboard (iteration 1). The motivational goal model of Taveter et al [ ] was used for this translation.
bHAAL: Healthy Ageing Eco-system for People With Dementia.
cMoSCoW: must have, should have, could have, and won’t have this time.
dIntermediate step: after analyzing results from step 3, insights about user requirements were again plotted on the motivational goal model to define design requirements. These design requirements were used to translate the preliminary mock-up into a clickable mock-up (iteration 2).
eHUBBI: eHealth usability benchmarking instrument.
fRI: responsible innovation.
RI Exploration
The RI exploration was primarily based on a qualitative survey among PMs, which was preceded by 2 workshops and followed by a third workshop with PMs. The first 2 workshops with PMs were held in a hybrid setting (web based and physical) during collective consortium meetings. The goal of the first workshop was to explain the notion of RI to PMs and discuss their thoughts about the relevance of and ways to address RI in HAAL. In the second workshop, based on the guidance ethics approach of Verbeek and Tijink [
], potential positive and negative implications of using the envisioned HAAL dashboard were explored, along with ways to address these implications.
Next, a dedicated qualitative RI survey was developed and conducted among PMs (Multimedia Appendix 1).
The goal of the RI survey was to reveal PMs’ viewpoints on how to responsibly develop AI-based analytical functionalities and the dashboard user interface in the HAAL project. The survey first explained that AI, as in the HAAL dashboard, provides opportunities for descriptive, diagnostic, predictive, and prescriptive analyses with differing levels of complexity and automation [ , ]. Next, questions were asked in relation to 2 distinct imaginary scenarios that outline different roles for AI within the HAAL dashboard. The first scenario (A) described a descriptive and largely rule-based dashboard through which users can assess the data from HAAL technologies and how the situations of clients have changed over time. This scenario was inspired by the dashboard that the HAAL project aimed to develop. The second scenario (B) took a more speculative turn and described a proactive and partially self-learning dashboard that automatically translates the data into diagnostic, predictive, and prescriptive information to prompt caregivers to take certain actions. The scenarios were used as input to inspire respondents about directions the project could take in terms of developing AI and to enable them to articulate their expectations and considerations regarding the opportunities and implications of an advanced AI-based DSS (see also the study by Noortman et al [ ]). After presenting each scenario, questions were asked about the positive and negative implications of using the respective dashboard. Thereafter, respondents were asked which scenario they preferred in terms of ethical acceptability, societal desirability, and technical feasibility and why they preferred it. Next, the survey introduced 6 principles for responsible AI innovation, adopted from World Health Organization guidelines: (1) protecting human autonomy; (2) promoting human well-being and safety and the public interest; (3) ensuring transparency, explainability, and intelligibility; (4) fostering responsibility and accountability; (5) ensuring inclusiveness and equity; and (6) promoting AI that is responsive and sustainable [ ]. Respondents were asked how these principles might be relevant to and could be applied in the HAAL project. The survey was completed by 12 respondents representing 7 different organizations from all 4 countries. In addition, 5 respondents partially filled in the survey anonymously.
Finally, the RI survey was followed by a third hybrid workshop in which PMs were invited to jointly discuss what they learned from answering the RI survey.
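To illustrate the contrast between the 2 scenarios, the following sketch (our own invention; the functions, threshold, and data are hypothetical and do not reflect the actual survey materials or HAAL code) shows a descriptive analysis in the spirit of scenario A next to a proactive, prescriptive one in the spirit of scenario B:

```python
# Scenario A: describe what happened; scenario B: interpret it and prompt action.
def scenario_a_descriptive(sleep_hours_last_week: list[float]) -> str:
    avg = sum(sleep_hours_last_week) / len(sleep_hours_last_week)
    return f"Average sleep this week: {avg:.1f} h"

def scenario_b_proactive(sleep_hours_last_week: list[float]) -> str:
    avg = sum(sleep_hours_last_week) / len(sleep_hours_last_week)
    if avg < 5.5:  # invented threshold standing in for a learned model
        return "Sleep is deteriorating; consider reviewing the evening routine."
    return "No action needed."

week = [7.0, 6.5, 5.0, 4.5, 4.0, 4.2, 3.9]
print(scenario_a_descriptive(week))  # scenario A: shows what happened
print(scenario_b_proactive(week))    # scenario B: prompts the caregiver
```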
Retrospective Interviews With PMs
In addition to the co-design activities and RI exploration, semistructured interviews were held with 6 PMs: 4 co-design facilitators (n=1, 25% working in the Netherlands; n=2, 50% working in Taiwan; and n=1, 25% working in Italy) and 2 software developers (working in Italy). The goal of the interviews was to uncover possible rationales behind the co-design process, choices made throughout the co-design process, and input given by co-design participants. All interviews lasted between 30 and 40 minutes and were fully transcribed by a professional transcription service.
Analysis
The analysis of data was performed by DRML, SIA, NES, and BMH. The data collected during the co-design activities and RI exploration were first analyzed independently by these 4 researchers. While the co-design data were previously analyzed by HAAL PMs to learn about the dashboard requirements, they were analyzed again for the purposes of this paper. Taking the 6 responsible AI principles from the World Health Organization guidelines [
] as a starting point, the researchers performed an inductive thematic analysis [ ] to uncover conditions for the responsible design and deployment of the HAAL dashboard, including potential negative implications and strategies to address them. In doing so, they examined how certain insights regarding these conditions emerged in the co-design activities, the RI exploration, or both. In other words, the analysis focused, first, on identifying themes common within and between the co-design and RI exploration results and, second, on examining how the results from the RI exploration complement those from the co-design activities, or vice versa, in terms of RI. Subsequently, the transcripts of the retrospective interviews were analyzed independently by DRML, SIA, and BMH to uncover new conditions for RI and explore the complementarity between the co-design process and RI exploration. An additional focus was on why certain insights about conditions for RI may have emerged less explicitly in either the co-design process or the RI exploration. While analyzing the data, the researchers applied open coding and kept track of their reflections by writing them down as memos. After the data were independently analyzed by the researchers, the findings and memos were regularly discussed and reviewed by the researchers to reconcile major discrepancies in the coding and to reach agreement on the final coding scheme. Both physical and digital meetings were held to ensure the consistency of the analysis and reach convergence.
Ethical Considerations
The authors of this study followed the guidelines in the Declaration of Helsinki and the Dutch code of conduct for scientific integrity. Ethical approval for the interviews, which were not subject to the Dutch Medical Research Involving Human Subjects Act, was granted by an independent board of the lead author's department (Vilans), including a privacy officer and a legal expert [
].
For each co-design step, general information about the goal and procedure was provided, and the participants were asked to read and sign an informed consent form. The original consent covers secondary analysis of the data for the purposes of this study. The data gathered through the co-design steps and RI exploration were pseudonymized before analysis. Study participants did not receive any financial compensation.
Results
Overview
Seven overarching and interlinked themes representing conditions for the responsible development and deployment of the HAAL dashboard were extracted: (1) develop a proactive dashboard, (2) prevent cognitive overload, (3) protect privacy, (4) provide transparency, (5) empower caregivers to be in control, (6) safeguard accuracy, and (7) train users. We explicate how insights related to each theme emerged in the co-design activities, the RI exploration, or both. In addition, insights from the interviews with PMs are provided. In doing so, for each theme, we discuss how the explicit anticipation of implications (ie, the RI exploration) complements the co-design process in the HAAL project.
The following textbox excerpts the results.
- Develop a proactive dashboard
- The co-design results clearly indicate a perceived need for a proactive dashboard and provide concrete arguments to this end. The RI exploration also indicated the need for a proactive dashboard, albeit with less concrete arguments. Besides, limitations were raised regarding the short-term feasibility of a proactive dashboard.
- Prevent cognitive overload
- The co-design process and RI exploration yielded similar insights, that is, that too much data in one place would cognitively overload caregivers and that the dashboard should focus on providing actionable and only the most relevant information. However, this insight emerged only late in the co-design process (step 4 of 4).
- Protect privacy
- The need for privacy protection emerged strongly in the co-design process, and participants explicitly linked the need for a proactive dashboard to privacy protection. The theme was discussed only briefly in the RI exploration, although some practical suggestions were provided, such as the use of encryption and passwords.
- Provide transparency
- While the importance of the transparency of the dashboard’s information emerged in the co-design process, practical suggestions on how to provide transparency (eg, training users in correctly interpreting information and explanations) were given only in the RI exploration.
- Empower caregivers to be in control
- The main contribution from co-design was the proposition to gradually expand the application of artificial intelligence (AI) functions in practice so that users can get used to an increasing role of AI. In comparison, the RI exploration yielded more in-depth insights and suggestions. The RI exploration stressed that it is important for caregivers not to become too reliant on the results of AI and to have a critical mindset and keep the context in mind.
- Safeguard accuracy
- During co-design, the importance of accurate dashboard information was mentioned but not discussed in depth. In the RI exploration, concrete suggestions were made to ensure accuracy, such as including feedback buttons for users.
- Train users
- The importance of training, also in relation to other themes such as empowering caregivers to be in control and safeguarding accuracy, frequently appeared in the RI exploration but was raised by only one of the participants in the co-design process. In the RI exploration, suggestions were also provided regarding the focus of training, for instance, on creating awareness about the mediating role of AI in decision-making.
Theme 1: Develop a Proactive Dashboard
The co-design participants generally agreed that the HAAL dashboard should support decision-making proactively, by actively generating and pointing users to relevant insights, rather than passively, by merely showing data from the HAAL technologies. In contrast, the results from the RI survey showed varying viewpoints among PMs regarding the dashboard’s required level of proactiveness with regard to supporting decision-making.
Co-design steps 1 and 2 showed that the data from HAAL technologies could be potentially useful for both daily caregivers and caregivers who are less frequently involved (eg, general practitioners). In these co-design steps, there was limited reflection on the possibilities of a dashboard beyond data integration. However, in co-design steps 3 and 4, most participants expressed an interest in a dashboard that also interprets data to provide new information and inspire users. That is, participants suggested that the dashboard should provide insights into or predictions about outliers from usual patterns and distinguish between urgent (eg, a fall) and nonurgent (eg, a deviation in sleeping pattern) outliers to prompt caregivers to take appropriate action. As one of the caregivers at a Taiwanese care center argued, “What I would like is an alert service, more centered on urgency than on daily, routine patient follow-up.” In addition, the dashboard was seen as a way to encourage caregivers to consider signs that might otherwise have been neglected or noticed too late. Besides, some participants in co-design steps 3 and 4 proposed that the dashboard could provide recommendations on how to prevent or address certain deviations from usual patterns.
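The urgency distinction that participants asked for could, in its simplest form, look like the following sketch (our own illustration; the event names and rule set are invented, not HAAL code):

```python
from enum import Enum

class Urgency(Enum):
    URGENT = "urgent"        # prompt immediate caregiver action
    NONURGENT = "nonurgent"  # surface in the daily summary instead

# Illustrative rule set: which detected outliers warrant an immediate alert.
URGENT_EVENTS = {"fall_detected", "wandering_at_night", "missed_medication_3x"}

def triage(event: str) -> Urgency:
    """Map a detected outlier to an urgency level (purely illustrative rules)."""
    return Urgency.URGENT if event in URGENT_EVENTS else Urgency.NONURGENT

for event in ("fall_detected", "sleep_pattern_deviation"):
    print(event, "->", triage(event).value)
# fall_detected -> urgent
# sleep_pattern_deviation -> nonurgent
```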
In the RI survey, most PMs shared pros and cons related to both a descriptive dashboard (scenario A) and a proactive dashboard (scenario B). Most PMs argued that a proactive dashboard could potentially add the most value, especially in terms of enhancing prevention and reducing caregivers’ cognitive load (see also theme 2). At the same time, all PMs expressed doubts about the feasibility of developing a proactive dashboard due to the complexity and relatively limited time span of the HAAL project. Some PMs stressed that the initial acceptance and adoption of a proactive dashboard by caregivers might be low, arguing that the more proactive the dashboard is, the more it may infringe on job satisfaction. As one of the PMs explained, “Caregivers might enjoy the part in their work where they investigate the status of the client, and this is then (partially) taken over by machines.” However, although market introduction was questioned, some PMs advocated exploring possibilities for and experimenting with the more progressive concept of a proactive dashboard to iteratively learn and generate ideas and lessons for future research and development. In an interview, one of the PMs explained, “We know that we could do bigger, smarter things with AI, but you cannot start with high-level AI...But I think that these kinds of projects are useful also to build knowledge and literacy, by making people consider what technology and artificial intelligence could do.”
Theme 2: Prevent Cognitive Overload
The need to prevent cognitive overload was another recurring argument for developing a proactive dashboard in both the co-design process and RI survey. In co-design step 4, it was stressed by multiple participants that too much data or information in one place could exceed caregivers’ cognitive capacity and cause problems regarding the prioritization of which client, or what aspect of a client’s life, needs attention first. Similarly, in the RI survey, PMs suggested multiple times that a descriptive dashboard may require additional time from caregivers for checking the data, rather than save time, and increase mental strain. As one of the PMs stated, “Adding more data in one place without elaborating on it would not really reduce the caregiver burden.”
Theme 3: Protect Privacy
While privacy was a prominent theme throughout all co-design steps, it was only briefly discussed in the RI exploration. During co-design step 3, multiple participants suggested that from a privacy perspective, a (proactive) dashboard that provides only the most relevant data patterns, notifications, and alerts may be preferred over a (descriptive) dashboard that directly discloses all data about the evolving status of clients in relation to various indicators. This link between the need for a proactive dashboard (scenario B) and privacy concerns was not discussed in the RI survey.
Further, privacy concerns raised in the co-design activities were related to the storage of large amounts of data collected about people with dementia and how these data would be handled. As one of the participants stated, “A lot of personal information is gathered, so you can get to know a lot about people.” In line with this, most participants stated that compliance with the European General Data Protection Regulation should be ensured, and some practical suggestions were made, for instance, to show the client’s home address or room number in the dashboard rather than their names in case of alarms.
The importance of privacy protection was mentioned by various PMs in the first 2 RI workshops, but in the RI survey, only 3 (18%) of the 17 PMs provided input on privacy issues. One of the PMs stated that ways must be found to balance the benefits of large-scale and long-term data collection (eg, in terms of prevention) with downsides such as a feeling of intrusion. Complementary to the co-design process, PMs also provided practical suggestions on privacy protection in the RI survey, such as using a private log-in to the dashboard for caregivers, encryption, or even facial recognition to protect data.
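To illustrate how these suggestions from the co-design process and RI survey might combine, the following sketch (our own hypothetical example; the roles, names, and rendering logic are invented) shows an alarm view that identifies clients by room number and reveals identities only to authorized, logged-in roles:

```python
from dataclasses import dataclass

@dataclass
class Client:
    name: str  # stored securely; never shown in alarm views by default
    room: str

def render_alarm(client: Client, message: str, viewer_role: str) -> str:
    """Render an alarm that identifies the client by room number only."""
    alarm = f"ALERT [{client.room}]: {message}"
    # Only explicitly authorized roles may resolve the identity
    # (standing in for the private log-in suggested by the PMs).
    if viewer_role in {"primary_nurse", "case_manager"}:
        alarm += f" (client: {client.name})"
    return alarm

client = Client(name="A. Jansen", room="Room 12")
print(render_alarm(client, "possible fall detected", viewer_role="day_care_worker"))
# -> ALERT [Room 12]: possible fall detected
```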
Another privacy concern, raised during co-design step 3, was data accessibility. Several participants raised the question of who should have access to the dashboard. Some participants proposed that access should be limited to specific caregivers with the specific assignment to learn from the dashboard. In contrast, others argued that all caregivers, including informal caregivers (eg, family members), should have access to the dashboard, if desired. There was no consensus among participants about whether a distinction should be made between different users who are able to see different client data.
Theme 4: Provide Transparency
In co-design step 4, participants proposed that a condition for the use of a proactive dashboard is that users need to understand the reasons (eg, data patterns) behind information provided by the dashboard. In this respect, one of the PMs remarked in an interview that caregivers should not be overloaded with too many details about how specific dashboard information comes about (see also theme 2). In contrast, some co-design participants stressed that users should always be able to examine all data from the different HAAL technologies. Hence, this could conflict with the previously discussed insight from co-design that making all data available may be less preferable from a privacy perspective (see also theme 3).
The co-design participants also made various remarks regarding the context specificity of transparency needs. Multiple participants expressed that a need for transparency may not always, or for every user, mean the same. For instance, in case of alarms about certain urgent situations, it may be irrelevant or even distracting to immediately show all data that triggered the alarm. However, users may want to view all the data at a later stage to gain insights into the context and possible causes for the urgent situation, for instance, for training and prevention purposes. A similar insight was raised in the RI exploration, where it was, for example, suggested that in-depth explanations could be provided but only after users ask for it, for instance, by clicking through.
Further, during co-design step 4, it was suggested that once caregivers have built a certain level of trust in the dashboard, less detailed explanations clarifying how the dashboard reaches its conclusions might be sufficient. However, as one of the PMs added in an interview, in the long run, excessive trust might lead to caregivers making certain decisions too easily based on the dashboard’s information without critical reflection: “The long-term risk is that users end up trusting the system too much” (see also theme 5).
Although co-design participants highlighted the importance of transparency in HAAL, they did not provide practical suggestions about ways to provide transparency. In the retrospective interviews, various possible explanations were given. For instance, 2 PMs argued that issues such as transparency may have been discussed with limited depth throughout co-design because they pertain more to the backend of the system (ie, algorithms and web services) than the front end (ie, interface) with which users directly interact and because participants may place a certain degree of trust in developers to deal with such issues. Besides, 2 PMs discussed that it may have been hard for co-design participants to formulate requirements regarding transparency during early phases of design because the dashboard concept was still relatively abstract. As suggested, gaining in-depth insights into issues such as these may be easier when practically demonstrating and testing the dashboard in field tests, as users can then actually experience the system and its limitations.
While practical suggestions on providing transparency in HAAL were absent in the co-design results, they were discussed in the RI survey. For instance, PMs suggested (1) showing which specific data were included by algorithms to provide certain information; (2) creating abstractions easy to understand for users to explain the logic behind data analyses, for instance, by giving explanatory examples of common use cases; and (3) training users in interpreting the information and their explanations (see also theme 7).
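The first 2 of these suggestions could take a form such as the following minimal sketch (our own illustration; the Insight structure and example values are invented and are not part of the HAAL dashboard): the caregiver first sees only a summary, and the contributing data are listed on demand, in line with the click-through idea raised under this theme:

```python
from dataclasses import dataclass, field

@dataclass
class Insight:
    summary: str  # what the caregiver sees first
    contributing_data: dict = field(default_factory=dict)

    def explain(self) -> str:
        """On-demand explanation listing which data produced the insight."""
        lines = [f"Why you are seeing '{self.summary}':"]
        lines += [f"  - {key}: {value}" for key, value in self.contributing_data.items()]
        return "\n".join(lines)

insight = Insight(
    summary="Sleep pattern deviates from usual",
    contributing_data={
        "average sleep, last 14 days": "7.1 h",
        "sleep last night": "3.9 h",
        "night-time activity events": 6,
    },
)
print(insight.summary)    # shown by default, keeping cognitive load low
print(insight.explain())  # shown only after the user clicks through
```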
Theme 5: Empower Caregivers to Be in Control
It was raised in co-design step 4 that people should be in charge of decision-making, regardless of whether human decisions are in line with the dashboard’s information. Along the same lines, multiple PMs argued in the RI survey that people (ie, caregivers) should always be making the final decisions, and they should make these decisions only after carefully valuing the dashboard’s information in light of the specific context. It was also suggested during co-design that caregivers may not initially be ready to receive extensive advice from a dashboard. A gradual expansion of AI functions in real practice was suggested. For instance, in the beginning, the dashboard could provide only generic insights (eg, patterns), alarms, and predictions. In a later stage, when reliability has improved and trust in and experience with the system have been gained, recommendations or conclusions about follow-up steps could be provided. Apart from the above, the importance of people making the final decisions was not further reported by co-design participants.
In contrast, the importance of caregivers being and remaining in control of decision-making was more prominent in the RI exploration. In the RI survey, 3 PMs suggested that the long-term use of a proactive dashboard might slowly erode the intuition of caregivers and impose an automated and predefined focus whereby one might overlook the person (ie, the person with dementia) behind the data. One of the PMs even stated, “There may be a tendency to rely more on AI than own observations and assessments because ‘the computer is always right.’” To encourage caregivers to make autonomous decisions while using the dashboard, training was put forward as an important factor by several PMs (see also theme 7).
Theme 6: Safeguard Accuracy
The importance of accurate dashboard information was reflected to a limited extent in the co-design process. During all co-design steps, participants reported a couple of times that the accuracy of the data and data analyses should be regularly evaluated. However, in an interview, a PM suggested that co-design participants mainly shared this requirement as a general condition that must be met before the dashboard could be put into practice, rather than giving concrete ideas on how to achieve this.
The importance of accurate dashboard information and ways to achieve this were more prominently discussed in the RI survey. Multiple PMs argued that information provided by the dashboard should not lead to any faulty judgments by caregivers and that both the data and the algorithms processing data should, therefore, be accurate, without significant biases. For instance, one of the PMs stated, “The dashboard should not give unnecessary warnings to caregivers because the false warning could stimulate the caregivers to impose unnecessary boundaries to people with dementia.” One of the PMs explicitly linked accuracy to being sensitive toward the diversity among clients and suggested that the dashboard be fed with data from heterogeneous clients to reduce bias. In contrast to the co-design process, PMs also provided practical suggestions about particular ways of involving users to safeguard accuracy, such as enabling users to (1) provide feedback on data or insights through a button, (2) personalize certain thresholds for alarms to the individual client, (3) keep track of their responses and follow-up actions on the dashboard’s information, (4) report nonplausible suggestions and malfunctions, and (5) periodically evaluate the dashboard’s functioning. Again, training was put forward as an important factor in this case for users to be able to be involved (see also theme 7).
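Two of these suggestions, personalizing alarm thresholds per client (suggestion 2) and logging user feedback on individual insights (suggestions 1 and 5), are sketched below (our own hypothetical illustration; the default thresholds, field names, and methods are invented):

```python
from dataclasses import dataclass, field

# Illustrative defaults; a real system would derive these per care context.
DEFAULT_THRESHOLDS = {"min_sleep_hours": 5.0, "min_daily_steps": 2000}

@dataclass
class ClientProfile:
    client_id: str
    thresholds: dict = field(default_factory=lambda: dict(DEFAULT_THRESHOLDS))
    feedback_log: list = field(default_factory=list)

    def personalize(self, key: str, value: float) -> None:
        """Caregivers tune alarm thresholds to the individual client."""
        self.thresholds[key] = value

    def report_feedback(self, insight_id: str, accurate: bool, note: str = "") -> None:
        """Feedback button on an insight; the log can feed periodic
        evaluations of the dashboard's functioning."""
        self.feedback_log.append({"insight": insight_id, "accurate": accurate, "note": note})

profile = ClientProfile("room-12")
profile.personalize("min_daily_steps", 1200)  # this client walks little at baseline
profile.report_feedback("alert-0417", accurate=False, note="Sensor was charging")
print(profile.thresholds, len(profile.feedback_log))
```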
Theme 7: Train Users
During the co-design activities, one of the participants commented that the proper use of the dashboard would require training and practical learning. In the RI survey, multiple PMs pointed out that training users is an important measure to tackle challenges related to the autonomy of users and the accuracy of the dashboard’s information (see also themes 5 and 6). It was suggested that the training should focus on familiarizing users with the HAAL technologies, data types, and information provided by the dashboard, including underlying data analyses, and on understanding the impact that the use of the dashboard might have on decision-making. One of the PMs said, “Caregivers should be taught that they will always in some degree be influenced by the information on the dashboard, and be recommended to make their own judgements first.” Another PM argued that training should prevent caregivers from becoming overreliant on the dashboard. In addition, training was suggested to prepare some users for active involvement in maintaining the accuracy of the dashboard information (see also theme 6).
Discussion
Principal Findings
This paper empirically explores how the co-design process of an AI-based DSS for dementia caregivers is complemented by the explicit anticipation of implications. A total of 7 overarching and interlinked themes representing conditions for the responsible development and deployment of the DSS were extracted: develop a proactive dashboard, prevent cognitive overload, protect privacy, provide transparency, empower caregivers to be in control, safeguard accuracy (eg, by reducing false positives), and train users. Because these conditions are interlinked, it is essential for various actors, including developers and users of the DSS, to work together to cohesively address them in practice. Moreover, some conditions, such as to develop a proactive dashboard and empower caregivers to be in charge or to provide transparency through detailed information and prevent cognitive overload, can be at odds with each other and need to be carefully balanced. To gain a deeper understanding about appropriate and responsible levels of proactivity by the DSS, where the contributions of AI and human input in decision-making are balanced, future studies could expand upon prior research in fields such as human factors by exploring and contextualizing notions such as automation bias [
, ] and human-automation coordination [ , ] in the context of AI-assisted decision-making in long-term dementia care. Scenarios that may lead to excessive reliance on the automated execution of functions, such as AI-driven data interpretation, could be anticipated, and strategies could be devised to mitigate such scenarios [ ].
As our analysis points out, the general expectation of both co-design participants and PMs was that a dashboard that proactively supports decision-making would be most valuable to dementia caregivers. In this regard, the perspectives of co-design participants were fairly aligned; there was a consensus that the dashboard should not show all available data from care technologies. Rather, it should focus on information about significant changes in the data that, for instance, indicate a deterioration of well-being. AI itself was positioned as a technical fix (see also the study by Wehrens et al [
]) to mitigate specific risks related to the remote technology-based monitoring of people with dementia, that is, the infringement of clients’ privacy and cognitive overload of caregivers. This is in line with previous studies that show that too much information [ - ] and insufficient time can lead to information overload [ ]. The same suggestion of using AI to actually support the responsible embedding of technology in care practice was also found in a scoping review on practical approaches to responsible AI innovation in the context of long-term care [ ]. In comparison to the co-design results, the perspectives of PMs in the RI exploration were less unanimous; some PMs shared doubts about the short-term feasibility and acceptance of a proactive dashboard. This discrepancy between results may have been owing to the co-design process being focused on exploring opportunities for innovation, while the RI exploration explicitly invited PMs to reflect on opportunities as well as risks of AI-based analytical functionalities.
Throughout both the co-design process and the RI exploration, various conditions were defined for the responsible development and deployment of a proactive DSS. Similar conditions emerged in the co-design process and RI exploration. However, despite considering and addressing usability requirements, such as minimizing memory load [
, ], in the co-design process, co-design participants generally went into less detail. Compared to PMs in the RI exploration, co-design participants provided fewer practical suggestions on how to meet the RI conditions, except for conditions related to privacy protection. In addition, multiple conditions (ie, preventing cognitive overload, empowering caregivers to be in control, and safeguarding accuracy) emerged at a relatively late stage of the co-design process, once prototyping and reflection on prototypes became central. Relevant input on implications and conditions for RI emerged more naturally in these phases of co-design, independently of the 2 RI questions related to autonomy and transparency that were asked at the end of the last co-design step. Again, these differences in results could potentially be explained by the focus of co-design activities being mainly on opportunities, while the RI exploration was focused on both opportunities and risks.
Hence, the explicit anticipation of implications (ie, the RI exploration) was found to complement the insights from the co-design process in the project under investigation. At the same time, a number of deficiencies can be mentioned regarding the insights that have been gained about social and ethical implications of the DSS. For instance, potential tensions were found between conditions set by different co-design participants. More specifically, to protect privacy, some co-design participants proposed to limit access to information provided by the DSS to specific caregivers. Other participants advocated more transparency and data availability. It is premature to draw conclusions from such contrasting insights. However, it can be stated that insufficient insights were gained into people’s individual views on such matters, the interrelatedness of conditions, and potential trade-offs between them. Further, it stood out that both the co-design process and RI exploration yielded limited insights into the dependency of different conditions on context (eg, time, place, and culture). Although it was indicated that trust in the dashboard and transparency needs may change over time, limited insights were gained into how conditions for RI may depend on other contextual factors, such as place and culture. Despite the co-design activities being carried out in multiple countries, no cross-country differences in conditions for the responsible design and deployment of the dashboard were found.
Practical Implications
As argued by Fischer et al [
], differences regarding who is involved in the co-design of care technologies, and how, when, and why they are involved, result in different types of outcomes. In this respect, we discuss 4 considerations that designers and co-design facilitators could take into account to increase the potential for co-design processes to contribute to ethically acceptable, societally desirable, and sustainable deployments of AI-based care technologies.
First, one could strive for balanced attention to both positive and negative implications throughout co-design processes. The co-design process in this case study was focused mostly on functional (ie, what the technology must do) and nonfunctional (eg, usability and reliability) requirements. However, rather than merely eliciting information on the needs, preferences, and requirements of users, co-design processes should go back and forth between needs and opportunities for innovation on the one hand and associated implications on the other hand. In addition, RI necessitates striking a balance in co-design practices between focusing on design aspects, such as usability and esthetics, and considering ethical and social implications. Adhering to specific design standards is important for meaningful field tests and the implementation of innovations in practice. However, excessive emphasis on these aspects during early phases of innovation may detract from fostering the innovation’s desirability and acceptability. Although research and development projects that integrate anticipatory elements into co-design may yield more in-depth insights and be able to more flexibly adapt to insights than projects that anticipate implications separately from the co-design process, a few remarks can be made here. For instance, implications of innovation may need to be anticipated and addressed not only as part of co-design but also in parallel to and beyond the co-design process through methods such as impact assessments, ethical reviews, and foresight exercises. Besides, caution should be exercised to prevent co-design processes from becoming dominated by the anticipation of long-term and wider societal implications, as this may come at the expense of fast iterative design cycles exploring and addressing requirements and direct benefits for users. Further, Sumner et al [
] argued that co-design may require the commitment of a significant amount of time and resources and that some projects may have to rationalize limited resources. Naturally, the same applies to anticipating implications as part of or in parallel to co-design.
Second, one could engage with the perspectives of people who are willing and able to imagine how their interests and their role as users of technology evolve over time (ie, future users), rather than merely involve people from contemporary care practices in co-design. Innovators should not just examine the needs of current users because they may then be insufficiently able to respond to future needs [
]. For instance, in the context of the HAAL project, which was investigated in this study, this could concern the involvement of progressive and technology-savvy dementia caregivers who reflect on how the adoption of increasingly advanced DSSs and other AI technologies will change their work.
Third, one could deliberate on which stakeholders, apart from users, should actually participate in co-design and regularly evaluate how their views guide the underlying direction of innovation. Due to the focus of co-design often being on the needs, expectations, and contexts of individual users, innovators may fail to address potential negative implications, especially implications for other stakeholders or in the long run [
]. Accordingly, it might be relevant to involve certain stakeholders such as intermediary user organizations or social advocacy groups in co-design to articulate societal demands and consider societal implications from a systemic perspective [ , , ]. For instance, in the context of the HAAL project, this could concern involving nongovernmental organizations that are committed to the privacy interests of older people.
Fourth, one could not only invite but also actively enable users to contribute to the anticipation of implications in co-design. As users are often not experts in (responsible) innovation, they may have difficulties in explicating implications and how they could be addressed, even when explicitly asked. In this case study, it became more natural for co-design participants to come up with implications in the later phases of co-design (ie, steps 3 and 4) when the dashboard concept had become more tangible. To enable the anticipation of implications early in the co-design process, it may be useful to develop inspirational tools that use, for instance, examples of negative impacts of AI technologies [
], envisioning cards [ ], or design fiction [ , ] to evoke consideration of the possible intended and unintended short- and long-term effects of future technologies. In addition, in the context of AI-based innovation, one could ensure through training that co-design participants have a basic understanding of what AI can do and how its behavior may be unpredictable and change over time while accumulating data [ , ].
In sum, for co-design processes to result in more responsible outcomes, designers and co-design facilitators may need to broaden their scope and reconsider the specific implementation of the process-oriented RI principles of anticipation and inclusion [
, ]. Even though there are still many uncertainties about the potential uses and consequences of technology during early phases of co-design and before users can “experience” the technology in practice, the anticipation of implications with users ideally starts early, before the technology design has been locked in and change becomes difficult, time-consuming, and expensive [ ]. Besides, anticipation should be a recurring element of the innovation process, as people’s values and perspectives on what is responsible may evolve over time and under the influence of technological innovation [ ].
Limitations and Suggestions for Future Research
Given that this paper studies merely a single case, our aim is not to generalize, but rather to illustrate a typical co-design process of an AI-based technology to support the care for older adults and contribute to building a nuanced view on the relation between co-design and RI [
]. Although we use a broad definition of co-design, we acknowledge that there are multiple ways, methods, and instruments to integrate users into the innovation process [ ]. Therefore, our findings about the role of anticipating implications in co-design are not generally applicable to co-design. For instance, it is plausible that projects that adopt the value-sensitive design approach yield different results, as this approach aims to explicitly consider the values of users and other stakeholders and how these values are affected by the envisioned technology [ - ]. In other words, some approaches to co-design may in themselves require facilitators to explore the values at stake and thereby the implications of innovation. Future research could examine to what extent such approaches support RI.
Further, we recognize that there are limitations to the RI exploration that was part of our study and thus to the insights gained into conditions for the responsible development and deployment of DSSs in dementia care. Our RI exploration initially focused on the perspectives of PMs to stimulate and facilitate whole-team participation in exploring how RI could be addressed throughout the HAAL project. The underlying assumption was that RI cannot be prescribed to innovators but needs to be conceptualized and addressed “in context” by those who actually perform the research, design, development, and testing with users [
, ]. However, soliciting PMs’ perspectives provided neither a complete nor necessarily an accurate picture of implications and ways they can be addressed. To this end, future studies could consider embedding trained ethicists in the research team who can provide top-down guidance and inspiration (eg, contextualized ethics principles) during bottom-up engagement with users and other stakeholders [ , ]. Besides, future research could explore the perspectives of users on RI in the context of AI-based care technologies, such as DSSs, for instance, what values come to matter most to them, what positive and negative implications they foresee, how they perceive the urgency of (other) known implications in their context, and how they look at certain strategies to address implications (eg, see the study of Lukkien et al [ ]). In doing so, the perspectives of stakeholders from different care contexts (eg, care organizations or countries) can be captured with sufficient detail and be compared to learn how to account for the context specificity of values in technology design and deployment [ , ]. In addition, the perspectives of people with dementia should be clarified, even when they are only a passive user of the technology (as is often the case with DSSs), and despite these people often having difficulties in expressing their needs [ , ].
Finally, even though all co-design activities and the RI exploration had already been completed by the time the objectives for this case study were established, the RI exploration had a minor effect on the co-design process. For instance, some co-design researchers were also participants in the RI exploration, which could have affected the co-design activities. Besides, at the request of DRML (who led the RI exploration), the usability study (co-design step 4) included 2 RI-related questions. In our results, we explicated that co-design participants already discussed more implications before these 2 questions were asked. Without this minor effect, there may have been a greater knowledge gap between the results from the co-design process and RI exploration in HAAL. However, to gain more robust insights into the role of the anticipation of implications in co-design, future research could study co-design processes completely separately from an exploration of associated implications.
Conclusions
In this paper, we explored how the co-design process of an AI-based DSS for dementia caregivers is complemented by the explicit anticipation of social and ethical implications. Co-design is an essential means to feed the development and deployment of AI-based care technologies with insights about the needs of targeted users and collectively translate these needs into requirements for technology design. Besides, as found in this empirical study, certain implications and strategies to address these implications may be naturally anticipated in co-design, even though users may not necessarily think in terms of implications or risks, but rather in terms of conditions to be met before the technology can be used. At the same time, this case study indicates that a co-design process that focuses on opportunities rather than balancing attention between positive and negative implications may result in knowledge gaps related to implications and how they can be addressed. In the pursuit of responsible outcomes, co-design facilitators could consider broadening the scope of co-design processes, for instance, by moving back and forth between opportunities and associated implications of innovation, involving future users and social advocacy groups in such an inquiry, and ensuring that co-design participants are provided with inspiration and have basic knowledge and skills to contribute to anticipating implications. Explicit anticipation of implications in co-design and broader inclusion of stakeholders in doing so increase opportunities for innovators to start addressing implications of innovation before the technology design has been locked in.
Acknowledgments
The authors gratefully acknowledge support from the Active and Assisted Living (AAL) program, cofinanced by the European Commission through the Horizon 2020 Societal Challenge Health, Demographic Change, and Wellbeing. In particular, the work reported here has been supported by the AAL Healthy Ageing Eco-system for People With Dementia (HAAL) project (AAL-2020-7-229-CP). In addition, the authors thank their HAAL project partners in the Netherlands, Italy, Taiwan, and Denmark for organizing and participating in the research activities that provided the basis for this study.
Authors' Contributions
DRML contributed to conceptualization, methodology, validation, investigation, formal analysis, writing the original draft, reviewing and editing the manuscript, and funding acquisition. SIA contributed to methodology, investigation, formal analysis, and reviewing and editing the manuscript. NES contributed to methodology, investigation, and reviewing and editing the manuscript. BMH contributed to formal analysis and reviewing and editing the manuscript. HHN contributed to conceptualization, methodology, validation, reviewing and editing the manuscript, project administration, and funding acquisition. WPCB contributed to conceptualization, methodology, and reviewing and editing the manuscript. AP contributed to conceptualization, methodology, and reviewing and editing the manuscript. EHMM contributed to conceptualization, methodology, and reviewing and editing the manuscript. MMNM contributed to conceptualization and methodology. All authors contributed to writing (original draft).
Conflicts of Interest
None declared.
The responsible innovation survey (DOCX File, 161 KB).
References
- Rubeis G. The disruptive power of artificial intelligence. Ethical aspects of gerontechnology in elderly care. Arch Gerontol Geriatr. 2020;91:104186. [FREE Full text] [CrossRef] [Medline]
- Ienca M, Jotterand F, Elger B, Caon M, Scoccia Pappagallo A, Kressig RW, et al. Intelligent assistive technology for Alzheimer's disease and other dementias: a systematic review. J Alzheimers Dis. 2017;60(1):333. [CrossRef] [Medline]
- Mukaetova-Ladinska EB, Harwood T, Maltby J. Artificial intelligence in the healthcare of older people. Arch Psychiatr Ment Health. 2020;4:7-13. [FREE Full text] [CrossRef]
- Loveys K, Prina M, Axford C, Domènec Ò, Weng W, Broadbent E, et al. Artificial intelligence for older people receiving long-term care: a systematic review of acceptability and effectiveness studies. Lancet Healthy Longev. Apr 2022;3(4):e286-e297. [FREE Full text] [CrossRef] [Medline]
- Xie BO, Tao C, Li J, Hilsabeck RC, Aguirre A. Artificial intelligence for caregivers of persons with Alzheimer's disease and related dementias: systematic literature review. JMIR Med Inform. Aug 20, 2020;8(8):e18189. [FREE Full text] [CrossRef] [Medline]
- Yeung K. Recommendation of the council on artificial intelligence (OECD). Int Leg Mater. Mar 12, 2020;59(1):27-34. [FREE Full text] [CrossRef]
- Akbar S, Lyell D, Magrabi F. Automation in nursing decision support systems: a systematic review of effects on decision making, care delivery, and patient outcomes. J Am Med Inform Assoc. Oct 12, 2021;28(11):2502-2513. [FREE Full text] [CrossRef] [Medline]
- Lee S. Features of computerized clinical decision support systems supportive of nursing practice: a literature review. Comput Inform Nurs. Oct 2013;31(10):477-496. [CrossRef] [Medline]
- Middleton B, Sittig DF, Wright A. Clinical decision support: a 25 year retrospective and a 25 year vision. Yearb Med Inform. Mar 06, 2018;25(S 01):S103-S116. [FREE Full text] [CrossRef]
- Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med. 2020;3:17. [FREE Full text] [CrossRef] [Medline]
- Berridge C, Grigorovich A. Algorithmic harms and digital ageism in the use of surveillance technologies in nursing homes. Front Sociol. 2022;7:957246. [FREE Full text] [CrossRef] [Medline]
- Stypinska J. AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. AI Soc. 2023;38(2):665-677. [FREE Full text] [CrossRef] [Medline]
- Zuboff S. Big other: surveillance capitalism and the prospects of an information civilization. J Inf Technol. Mar 01, 2015;30(1):75-89. [FREE Full text] [CrossRef]
- Morley J, Machado CC, Burr C, Cowls J, Joshi I, Taddeo M, et al. The ethics of AI in health care: a mapping review. Soc Sci Med. Sep 2020;260:113172. [FREE Full text] [CrossRef] [Medline]
- Peine A, Neven L. From intervention to co-constitution: new directions in theorizing about aging and technology. Gerontologist. Jan 09, 2019;59(1):15-21. [FREE Full text] [CrossRef] [Medline]
- von Schomberg R. A vision of responsible research and innovation. In: Owen R, Bessant J, Heintz M, editors. Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. Chichester, UK. John Wiley & Sons; 2013:51-74.
- Owen R, Stilgoe J, Macnaghten P, Gorman M, Fisher E, Guston D. A framework for responsible innovation. In: Owen R, Bessant J, Heintz M, editors. Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. Chichester, UK. Wiley-Blackwell; 2013:27-50.
- Sanders EB, Stappers PJ. Co-creation and the new landscapes of design. CoDesign. Mar 2008;4(1):5-18. [FREE Full text] [CrossRef]
- Vargas C, Whelan J, Brimblecombe J, Allender S. Co-creation, co-design, co-production for public health - a perspective on definition and distinctions. Public Health Res Pract. Jun 15, 2022;32(2):3222211. [FREE Full text] [CrossRef] [Medline]
- Merkel S, Kucharski A. Participatory design in gerontechnology: a systematic literature review. Gerontologist. Jan 09, 2019;59(1):e16-e25. [FREE Full text] [CrossRef] [Medline]
- Sumner J, Chong LS, Bundele A, Wei Lim Y. Co-designing technology for aging in place: a systematic review. Gerontologist. Sep 13, 2021;61(7):e395-e409. [FREE Full text] [CrossRef] [Medline]
- Fischer B, Peine A, Östlund B. The importance of user involvement: a systematic review of involving older users in technology design. Gerontologist. Sep 15, 2020;60(7):e513-e523. [FREE Full text] [CrossRef] [Medline]
- Jansma SR, Dijkstra AM, de Jong MD. Co-creation in support of responsible research and innovation: an analysis of three stakeholder workshops on nanotechnology for health. J Responsible Innov. Nov 23, 2021;9(1):28-48. [FREE Full text] [CrossRef]
- Nap HH, Lukkien DR, Lin CC, Lin CJ, Chieh HF, Wong YT, et al. HAAL: a healthy ageing eco-system for people with dementia. Gerontechnology. 2022;21:1-5. [FREE Full text] [CrossRef]
- Stewart J, Hyysalo S. Intermediaries, users and social learning in technological innovation. Int J Innov Manag. Nov 20, 2011;12(03):295-325. [FREE Full text] [CrossRef]
- Barbarossa F, Amabili G, Margaritini A, Morresi N, Casaccia S, Marconi F, et al. Design, development, and usability evaluation of a dashboard for supporting formal caregivers in managing people with dementia. In: Proceedings of the 16th International Conference on PErvasive Technologies Related to Assistive Environments. 2023. Presented at: PETRA '23; July 5-7, 2023:154-161; Corfu, Greece. URL: https://doi.org/10.1145/3594806.3594820 [CrossRef]
- Kuhn J. Decrypting the MoSCoW analysis. ItSM Solut. 2009. URL: http://www.itsmsolutions.com/newsletters/DITYvol5iss44.htm [accessed 2024-04-29]
- Bradbury K, Watts S, Arden-Close E, Yardley L, Lewith G. Developing digital interventions: a methodological guide. Evid Based Complement Alternat Med. 2014;2014:561320. [FREE Full text] [CrossRef] [Medline]
- Broekhuis M, van Velsen L. Improving usability benchmarking for the eHealth domain: the development of the eHealth UsaBility Benchmarking instrument (HUBBI). PLoS One. 2022;17(2):e0262036. [FREE Full text] [CrossRef] [Medline]
- Bastien JM, Scapin DL. A validation of ergonomic criteria for the evaluation of human‐computer interfaces. Int J Hum Comput Interact. Apr 1992;4(2):183-196. [FREE Full text] [CrossRef]
- Nielsen J. Reliability of severity estimates for usability problems found by heuristic evaluation. In: Proceedings of the Posters and Short Talks of the 1992 SIGCHI Conference on Human Factors in Computing Systems. 1992. Presented at: CHI '92; May 3-7, 1992:129-130; Monterey, CA. URL: https://doi.org/10.1145/1125021.1125117 [CrossRef]
- Taveter K, Sterling L, Pedell S, Burrows R, Taveter EM. A method for eliciting and representing emotional requirements: two case studies in e-Healthcare. In: Proceedings of the 27th International Requirements Engineering Conference Workshops. 2019. Presented at: REW '19; September 23-27, 2019:100-105; Jeju, South Korea. URL: https://doi.org/10.1109/REW.2019.00021 [CrossRef]
- Verbeek PP, Tijink D. Guidance ethics approach: an ethical dialogue about technology with perspective on actions. ECP: Platform voor de InformatieSamenleving. 2020. URL: https://ris.utwente.nl/ws/portalfiles/portal/247401391/060_002_Boek_Guidance_ethics_approach_Digital_EN.pdf [accessed 2024-04-29]
- Mosavi NS, Santos MF. How prescriptive analytics influences decision making in precision medicine. Procedia Comput Sci. 2020;177:528-533. [FREE Full text] [CrossRef]
- El Morr C, Ali-Hassan H. Healthcare, data analytics, and business intelligence. In: El Morr C, Ali-Hassan H, editors. Analytics in Healthcare: A Practical Introduction. Cham, Switzerland. Springer; 2019:1-13.
- Noortman R, Schulte BF, Marshall P, Bakker S, Cox AL. HawkEye - deploying a design fiction probe. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019. Presented at: CHI '19; May 4-9, 2019:1-14; Glasgow, UK. URL: https://doi.org/10.1145/3290605.3300652 [CrossRef]
- Ethics and governance of artificial intelligence for health: WHO guidance. World Health Organization. URL: https://www.who.int/publications/i/item/9789240029200 [accessed 2024-04-29]
- Braun V, Clarke V. Thematic analysis. In: Cooper H, Camic PM, Long DL, Panter AT, Rindskopf D, Sher KJ, editors. APA Handbook of Research Methods in Psychology, Vol. 2. Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological. New York, NY. American Psychological Association; 2012:57-71.
- Your research: Is it subject to the WMO or not? Central Committee on Research Involving Human Subjects. URL: https://english.ccmo.nl/investigators/legal-framework-for-medical-scientific-research/your-research-is-it-subject-to-the-wmo-or-not [accessed 2024-07-23]
- Parasuraman R, Riley V. Humans and automation: use, misuse, disuse, abuse. Hum Factors. Nov 23, 2016;39(2):230-253. [FREE Full text] [CrossRef]
- Goddard K, Roudsari A, Wyatt JC. Automation bias: empirical results assessing influencing factors. Int J Med Inform. May 2014;83(5):368-375. [FREE Full text] [CrossRef] [Medline]
- Dekker SW, Woods DD. MABA-MABA or abracadabra? progress on human–automation co-ordination. Cogn Technol Work. 2002;4:240. [FREE Full text] [CrossRef]
- Dafoe A, Bachrach Y, Hadfield G, Horvitz E, Larson K, Graepel T. Cooperative AI: machines must learn to find common ground. Nature. May 2021;593(7857):33-36. [FREE Full text] [CrossRef] [Medline]
- Wehrens R, Stevens M, Kostenzer J, Weggelaar AM, de Bont A. Ethics as discursive work: the role of ethical framing in the promissory future of data-driven healthcare technologies. Sci Technol Hum Values. Nov 08, 2021;48(3):606-634. [FREE Full text] [CrossRef]
- Hiltz SR, Turoff M. Structuring computer-mediated communication systems to avoid information overload. Commun ACM. 1985;28:680-689. [FREE Full text] [CrossRef]
- Hettinger LJ, Nelson WT, Haas MW. Applying virtual environment technology to the design of fighter aircraft cockpits: pilot performance and situation awareness in a simulated air combat task. Proc Hum Factors Ergon Soc Annu Meet. Nov 06, 2016;38(1):115-118. [FREE Full text] [CrossRef]
- Rosekind MR, Gander PH, Miller DL, Gregory KB, Smith RM, Weldon KJ, et al. Fatigue in operational settings: examples from the aviation environment. Hum Factors. Jun 1994;36(2):327-338. [FREE Full text] [CrossRef] [Medline]
- Schick AG, Gordon LA, Haka S. Information overload: a temporal approach. Account Organ Soc. Jan 1990;15(3):199-220. [FREE Full text] [CrossRef]
- Lukkien DR, Nap HH, Buimer HP, Peine A, Boon WP, Ket JC, et al. Toward responsible artificial intelligence in long-term care: a scoping review on practical approaches. Gerontologist. Jan 24, 2023;63(1):155-168. [FREE Full text] [CrossRef] [Medline]
- Nielsen J, Molich R. Heuristic evaluation of user interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1990. Presented at: CHI '90; April 1-5, 1990:249-256; Washington, DC. URL: https://doi.org/10.1145/97243.97281 [CrossRef]
- Christensen CM. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA. Harvard Business Review Press; 2013.
- van Velsen L, Ludden G, Grünloh C. The limitations of user-and human-centered design in an eHealth context and how to move beyond them. J Med Internet Res. Oct 05, 2022;24(10):e37341. [FREE Full text] [CrossRef] [Medline]
- Boon WP, Moors EH, Kuhlmann S, Smits RE. Demand articulation in emerging technologies: intermediary user organisations as co-producers? Res Policy. Mar 2011;40(2):242-252. [FREE Full text] [CrossRef]
- Kivimaa P, Boon W, Hyysalo S, Klerkx L. Towards a typology of intermediaries in sustainability transitions: a systematic review and a research agenda. Res Policy. May 2019;48(4):1062-1075. [FREE Full text] [CrossRef]
- Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. Oct 25, 2019;366(6464):447-453. [FREE Full text] [CrossRef] [Medline]
- Friedman B, Hendry DG. The envisioning cards: a toolkit for catalyzing humanistic and technical imaginations. In: Proceedings of the 2012 SIGCHI Conference on Human Factors in Computing Systems. 2012. Presented at: CHI '12; May 5-10, 2012:1145-1148; Austin, TX. URL: https://doi.org/10.1145/2207676.2208562 [CrossRef]
- Dunne A, Raby F. Speculative Everything: Design, Fiction, and Social Dreaming. New York, NY. MIT Press; 2013.
- Bratteteig T, Verne G. Does AI make PD obsolete?: exploring challenges from artificial intelligence to participatory design. In: Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial - Volume 2. 2018. Presented at: PDC '18; August 20-24, 2018:1-5; Hasselt and Genk, Belgium. URL: https://doi.org/10.1145/3210604.3210646 [CrossRef]
- Donia J, Shaw JA. Co-design and ethical artificial intelligence for health: an agenda for critical research and practice. Big Data Soc. Dec 17, 2021;8(2):205395172110652. [FREE Full text] [CrossRef]
- Fraaije A, Flipse SM. Synthesizing an implementation framework for responsible research and innovation. J Responsible Innov. Nov 21, 2019;7(1):113-137. [FREE Full text] [CrossRef]
- Genus A, Stirling A. Collingridge and the dilemma of control: towards responsible and accountable innovation. Res Policy. Feb 2018;47(1):61-69. [FREE Full text] [CrossRef]
- Kudina O, Verbeek PP. Ethics from within: Google Glass, the Collingridge dilemma, and the mediated value of privacy. Sci Technol Hum Values. Aug 21, 2018;44(2):291-314. [FREE Full text] [CrossRef]
- Flyvbjerg B. Five misunderstandings about case-study research. Qual Inq. Jun 29, 2016;12(2):219-245. [FREE Full text] [CrossRef]
- van den Hoven J, Manders-Huits N. The need for a value-sensitive design of communication infrastructures. In: Miller KW, Taddeo M, editors. The Ethics of Information Technologies. London, UK. Routledge; 2020:329-332.
- Friedman B, Kahn Jr PH, Borning A, Huldtgren A. Value sensitive design and information systems. In: Doorn N, Schuurbiers D, van de Poel I, Gorman ME, editors. Early Engagement And New Technologies: Opening Up the Laboratory. Cham, Switzerland. Springer; 2013:55-95.
- Smits M, Nacar M, Ludden GDS, van Goor H. Stepwise design and evaluation of a values-oriented ambient intelligence healthcare monitoring platform. Value Health. Jun 2022;25(6):914-923. [FREE Full text] [CrossRef] [Medline]
- Forsberg EM, Thorstensen E, Dias Casagrande F, Holthe T, Halvorsrud L, Lund A, et al. Is RRI a new R and I logic? a reflection from an integrated RRI project. J Responsib Technol. May 2021;5:100007. [FREE Full text] [CrossRef]
- Stahl BC. Responsible innovation ecosystems: ethical implications of the application of the ecosystem concept to artificial intelligence. Int J Inf Manage. Feb 2022;62:102441. [FREE Full text] [CrossRef]
- McLennan S, Fiske A, Celi LA, Müller R, Harder J, Ritt K, et al. An embedded ethics approach for AI development. Nat Mach Intell. Jul 31, 2020;2(9):488-490. [FREE Full text] [CrossRef]
- Lukkien DRM, Stolwijk NE, Ipakchian Askari S, Hofstede BM, Nap HH, Boon WPC, et al. AI-assisted decision-making in long-term care: qualitative study on prerequisites for responsible innovation. JMIR Nurs. Jul 25, 2024;7:e55962. [FREE Full text] [CrossRef] [Medline]
- ÓhÉigeartaigh SS, Whittlestone J, Liu Y, Zeng Y, Liu Z. Overcoming barriers to cross-cultural cooperation in AI ethics and governance. Philos Technol. May 15, 2020;33(4):571-593. [FREE Full text] [CrossRef]
- Pollock N, Williams R, D’Adderio L. Global software and its provenance: generification work in the production of organizational software packages. Soc Stud Sci. Jun 29, 2016;37(2):254-280. [FREE Full text] [CrossRef]
- Papoutsi C, Wherton J, Shaw S, Morrison C, Greenhalgh T. Putting the social back into sociotechnical: case studies of co-design in digital health. J Am Med Inform Assoc. Feb 15, 2021;28(2):284-293. [FREE Full text] [CrossRef] [Medline]
- Suijkerbuijk S, Nap HH, Cornelisse L, IJsselsteijn WA, de Kort YA, Minkman MM. Active involvement of people with dementia: a systematic review of studies developing supportive technologies. J Alzheimers Dis. Jun 18, 2019;69(4):1041-1065. [FREE Full text] [CrossRef]
Abbreviations
AAL: Active and Assisted Living
AI: artificial intelligence
DSS: decision support system
HAAL: Healthy Ageing Eco-system for People With Dementia
PM: project member
RI: responsible innovation
Edited by A Kushniruk, E Borycki; submitted 31.12.23; peer-reviewed by M Coccia, S Mitra, A Hidki; comments to author 18.02.24; accepted 02.06.24; published 31.07.24.
Copyright©Dirk R M Lukkien, Sima Ipakchian Askari, Nathalie E Stolwijk, Bob M Hofstede, Henk Herman Nap, Wouter P C Boon, Alexander Peine, Ellen H M Moors, Mirella M N Minkman. Originally published in JMIR Human Factors (https://humanfactors.jmir.org), 31.07.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.