Published in Vol 12 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/66632.
Designing Chatbots to Treat Depression in Youth: Qualitative Study

1Human-Centered Systems Lab, Institute for Information Systems (WIN), Karlsruhe Institute of Technology, Karlsruhe, Germany

2Department of Clinical Psychology and Psychotherapy, Institute of Psychology, University of Greifswald, Franz-Mehring-Straße 47, Greifswald, Germany

3Chair of Explainable AI-Based Business Information Systems, School of Business, Economics and Information Systems, University of Passau, Passau, Germany

4Department of Clinical Psychology and Psychotherapy, University of Tübingen, Tübingen, Germany

Corresponding Author:

Stefan Lüttke, Dipl-Psych


Background: Depression is a severe and prevalent mental disorder among youth that requires professional care; however, various barriers hinder access to effective treatments. Chatbots, one of the latest innovations in the research on digital mental health interventions, have shown potential in addressing these barriers. However, most studies on how to design chatbots to treat depression have focused on adult populations or prevention in the general population.

Objective: This study aimed to investigate the problems faced by youth with depression and their adaptive coping strategies, as well as attitudes, expectations, and design preferences for chatbots designed to treat depression.

Methods: We conducted a qualitative study, consisting of a semistructured interview and a concurrent think-aloud session, in which participants interacted with a chatbot prototype with 14 youth with a current or remitted depressive episode.

Results: The participants reported a wide range of problems beyond core depressive symptoms, such as interpersonal challenges, concerns about school and the future, and problems with human therapists. Adaptive coping strategies varied, with most seeking social support or engaging in pleasant activities. Attitudes toward chatbots for depression treatment were predominantly positive, with participants expressing less anxiety about using a chatbot than about seeing a human therapist. Participants showed diverse and partially contradictory design preferences, which included varied dialogue topics, such as discussing daily life, acute problems, and therapeutic exercises, as well as various preferences for personality, language use, and personalization of the chatbot.

Conclusions: Our study provides a comprehensive foundation for designing chatbots that meet the unique needs and design preferences of youth with depression. These findings can inform the design of engaging and effective chatbots tailored to this vulnerable population.

JMIR Hum Factors 2025;12:e66632

doi:10.2196/66632


Depression is a prevalent mental disorder in youth with significant personal and socioeconomic consequences that requires professional care [1,2]. Despite the availability of effective treatments, such as cognitive behavioral therapy (CBT) [3], accessing them remains challenging. Even when professional services are free and accessible [4], many youth avoid seeking professional support due to perceived stigma, a preference to solve problems by themselves, and fear of psychotherapeutic settings [5-8]. Interestingly, such attitudinal barriers may be more important reasons for not seeking help than structural barriers, such as limited treatment resources and long waiting periods [8]. Digital mental health interventions (DMHIs) are a promising and effective way to overcome these barriers because they provide anonymous and self-empowered access to effective professional care [9]. However, to leverage the full potential of DMHIs in the treatment of depressive symptoms in youth, 2 major limitations need to be overcome: low adherence [10] and difficulty in establishing a therapeutic alliance, which is viewed as a crucial factor for effective psychotherapy [11].

Chatbots, software systems that interact with users using natural language [12], have shown the potential to address these limitations. They are well accepted and feasible, and they have shown promising effectiveness in strengthening mental health [13-16]. In addition, incorporating chatbots into DMHIs improves user engagement [17] and mental health outcomes [18]. Notably, users seem to develop a therapeutic alliance with chatbots [19-21] partly because of social cues such as empathetic messages and humor [15,22]. Despite these encouraging results, most studies on mental health chatbots, including those targeting depression symptoms, have focused on adult populations [13-16,23]. This is a key shortcoming because the results from adult populations cannot be generalized to youth. Youth face significant developmental changes in their biological, psychological, and social systems [24], and depression symptoms differ from those in adulthood, especially at the onset of puberty [25]. Furthermore, youth interact with smartphones and chatbots differently than adults [26,27], and have expressed that existing DMHIs often fail to address their specific concerns adequately [28].

To address this research gap, recent studies have explored the design of chatbots for youth mental health. Høiland et al [29] involved youth in designing a chatbot for high-school health services aimed at preventing mental disorders. Through focus groups, they identified four key support needs: (1) receiving information about mental health, (2) relating to someone beyond their immediate network, (3) receiving support for self-help, and (4) being referred to mental health services. Similarly, Grové [30] developed a preventive mental health chatbot with high-school students. Participants suggested topics such as school, family, friends, sexuality, and identity as well as resources for adaptive coping strategies, mindfulness, and distractions. They expressed a preference for chatbots with inspiring, charismatic, and fun personalities using emojis, humor, and GIFs. While these 2 studies provide valuable insights, they focused on the prevention rather than the treatment of mental disorders and addressed a broad spectrum of mental health issues rather than specifically targeting depression. Thus, there is a critical need for research on how to design chatbots that focus on specific problems of youth with depressive symptoms to achieve sufficient engagement and optimal treatment outcomes.

Our study aims to address this research gap by investigating the following research questions: (1) What problems do youth with depressive symptoms face? (2) What adaptive coping strategies do they apply? (3) What attitudes and expectations do they have for chatbots designed to treat depression among youth? (4) What are their design preferences?

By addressing these questions, we aim to provide a comprehensive foundation for developing chatbot-based DMHIs tailored to the unique needs and preferences of youth with depression. This foundation will facilitate the development of engaging and effective DMHIs for this vulnerable population.


Study Design

We conducted a qualitative study to examine how to design a chatbot to treat depression in youth. The study included a questionnaire, a semistructured interview, and a concurrent think-aloud session with a chatbot prototype. We chose interviews because they allowed us to gather rich, detailed data on participants’ problems, coping strategies, attitudes and expectations, and chatbot design preferences.

Participants

Participants were eligible if they were between 14 and 17 years of age, owned a smartphone, and met diagnostic criteria for a current or remitted depressive episode. Participants with suicidal ideation or psychotic symptoms were excluded. Participants were recruited through a resident child and adolescent psychiatrist and the University’s newsletter between June and August 2021.

The target sample size was calculated based on the goal of data saturation [31], considering age group (14‐15 and 16‐17 y), depression status (remitted and acutely depressed), and gender (women and men), assuming homogeneity within each subgroup (2 × 2 × 2 = 8 subgroups). Given that data saturation can be achieved at 6 cases per homogeneous group [31], the study aimed to recruit 48 participants (8 subgroups × 6 cases). The sample was recruited through convenience sampling, but the sampling was constrained by the predetermined recruitment period and available recruitment channels. The final sample consisted of 14 participants (12 women and 2 nonbinary).

Procedure

After participants and caregivers provided informed consent, we assessed their eligibility to participate in the study. Next, enrolled participants answered a questionnaire on sociodemographic information and mental health on a laptop. We conducted semistructured interviews to explore participants’ experiences with depression, their adaptive coping strategies, attitudes and expectations toward chatbots for depression, and their design preferences. Finally, the participants interacted with a prototype chatbot using the concurrent think-aloud method.

Material

Eligibility Interview

We conducted a semistructured interview to evaluate participants’ eligibility to participate in the study. The interview guide included questions regarding age, smartphone ownership, suicidal ideation, and symptoms of depression and psychotic disorders. The questions on suicidal ideation and symptoms of depression and psychotic disorders were based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria [32] and the Kiddie-Schedule for Affective Disorders and Schizophrenia Present and Lifetime [33]. The complete eligibility interview is presented in Multimedia Appendix 1.

Questionnaire

Participants completed a questionnaire on demographic characteristics (age, gender, and level of education), history of mental disorders, and previous experience with psychotherapy. We assessed current symptoms of depression using the 8-item Patient Health Questionnaire (PHQ-8) [34] and current symptoms of anxiety using the Screen for Child Anxiety Related Disorders (SCARED) [35]. To complement the insights from the semistructured interview on the design preferences, participants answered a questionnaire on potential chatbot capabilities. This questionnaire was based on CBT manuals and literature on the content of DMHIs for depression [36-38] and comprised 13 capabilities that a chatbot to treat depression in youth could implement. For each capability, we asked participants to indicate the extent to which the chatbot should support them on a scale from 1 (“strongly disagree”) to 5 (“strongly agree”). The capabilities are presented in Textbox 1. The participants completed all questionnaires on a laptop using SoSci Survey (SoSci Survey GmbH).

Textbox 1. Items listed in the questionnaire on chatbot capabilities.

The chatbot should support me with…

  • how I can become more physically active or do sports.
  • how to sleep better.
  • how I can change negative or self-critical thinking.
  • how I can do more activities that are important to me or that I have enjoyed in the past.
  • tracking my mood.
  • learning more about my depression.
  • improving my social skills.
  • reminding me to take my medication.
  • connecting with an expert (eg, psychotherapist) if I feel very bad.
  • connecting with other people who have similar problems.
  • how I can use my breath to make me feel better.
  • writing a journal with things that concern me or that I am grateful for.
  • writing about events from my life.
Study Interview

We conducted semistructured interviews to investigate the problems participants faced due to depression, their adaptive coping strategies, their attitudes and expectations toward chatbots for depression, and their chatbot design preferences. Table 1 provides an excerpt of the interview guide. We asked several questions about each topic and prepared subquestions to follow up on specific details or offer suggestions based on the participants’ initial responses.

Table 1. Excerpt from the interview guide (translated from German).
Topic | Question
Problems
  • Think of a period when you felt down. What did it look like?
Adaptive coping strategies
  • What have you tried in the past to feel better?
Attitudes and expectations toward therapy chatbots
  • Do you know what a chatbot is?
  • What do you think it would be like for you to use a chatbot that is there to help you with one of the problems you have mentioned? For example, what could be the advantages and disadvantages?
Design preferences for therapy chatbots
  • Imagine you are thinking about using one of these chatbots. How would it need to be designed so that you would download and use it?
  • How do you imagine an ideal conversation with the chatbot? For example, what topics would you like to discuss, or what should its personality be like?
  • If it was accessible as a mobile app, what should the app look like?
  • (What) Would you like to personalize?
  • How often would you like to use such a chatbot (per week)?
  • How long should every session last?
Chatbot Prototype: Cady

Cady is the prototype of a chatbot for the treatment of depressive symptoms in youth that we developed for this and subsequent studies. Cady guides participants through a behavioral activation exercise, which is a core component of CBT to treat depression in youth [3]. Behavioral activation aims to encourage patients to engage in pleasant activities to overcome positive reinforcement deficits [39]. The conversation comprised the following sections: (1) introduction; (2) mood check on a scale from 1 (lowest) to 5 (highest) with adaptive empathetic responses; (3) psychoeducation on the relationship between behavior, thoughts, and feelings; (4) finding pleasant activities; (5) planning pleasant activities; (6) advice on how to overcome barriers when performing activities; and (7) feedback and goodbye. The conversation was designed by licensed psychologists based on established CBT manuals for youth [40,41] with a focus on a positive, activating, and age-adequate language style. Cady also used emojis in its messages and sent GIFs. Figure 1 shows a screenshot of an example conversation between Cady and a participant. We named the chatbot Cady in accordance with the title of our research project and did not specify its age, gender, or other demographic characteristics to prevent specific demographic characteristics from influencing the results. We developed Cady using the prototyping software Botsociety [42]. The chatbot was built with a rule-based architecture, where each user input triggers a predetermined response pathway following a decision tree structure. To ensure appropriate responses and maintain therapeutic quality, users primarily interacted with the chatbot by selecting from predefined options presented as buttons. We allowed free-text input in specific situations, for example, to identify and schedule pleasant activities. However, these free-text responses did not alter the chatbot’s conversation pathway.

Figure 1. Screenshot of an example conversation between Cady and a participant.
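To illustrate the rule-based, decision-tree architecture described above, the following Python sketch shows how predefined button presses select predetermined response pathways and how the mood check triggers adaptive empathetic responses. All state names and messages are hypothetical; the actual prototype was built with the Botsociety prototyping tool, not coded in Python.

```python
# Hypothetical sketch of a rule-based dialogue engine; states and messages
# are illustrative, not Cady's actual content.

# Each state maps the bot's message to a dict of predefined buttons, so every
# user input selects a predetermined pathway (a decision tree).
TREE = {
    "intro": ("Hi, I'm Cady! Shall we start?", {"Yes": "mood"}),
    "mood": ("How is your mood today, from 1 (lowest) to 5 (highest)?",
             {str(i): "psychoeducation" for i in range(1, 6)}),
    "psychoeducation": ("When we stop doing pleasant things, our mood often "
                        "drops further. Shall we look for an activity you enjoy?",
                        {"Okay": "end"}),
    "end": ("Great job today. See you soon!", {}),
}

# Adaptive empathetic responses to the 1-5 mood rating.
EMPATHETIC = {
    "1": "I'm sorry you feel so low. Thanks for telling me.",
    "2": "That sounds like a hard day.",
    "3": "Okay, a so-so day.",
    "4": "Nice, you're doing quite well!",
    "5": "Great to hear you feel so good!",
}

def step(state: str, button: str) -> tuple[str, str]:
    """Advance the conversation by one predefined button press.

    Returns the next state and the bot's reply; after the mood check,
    an empathetic message is prepended to the reply.
    """
    _, options = TREE[state]
    if button not in options:
        raise ValueError(f"{button!r} is not a predefined option in state {state!r}")
    next_state = options[button]
    reply = TREE[next_state][0]
    if state == "mood":
        reply = EMPATHETIC[button] + " " + reply
    return next_state, reply
```

Free-text input, as in Cady, could be collected and stored alongside this loop without changing the pathway, since only the button mapping determines the next state.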
Think-Aloud

We applied the concurrent think-aloud method [43], in which participants are asked to express their thoughts and feelings aloud while interacting with an app. At the beginning of the think-aloud session, the interviewer introduced the chatbot prototype Cady to the participants and explained the think-aloud method. Subsequently, participants interacted with Cady using a laptop. While the participants interacted with the prototype, we recorded the laptop screen and their comments. The interviewer stayed in the room, as all participants found it easier to share their thoughts and feelings with the interviewer present. The interviewer asked for further explanation if the statements were unclear or nonspecific.

Data Analysis

The interview and think-aloud session were recorded using an audio recorder (Zoom Q2n [Zoom Corporation]) and were transcribed verbatim. Furthermore, 2 coders analyzed the semistructured interviews and concurrent think-aloud sessions using qualitative content analyses by Mayring and Fenzl [44] and Mayring [45] with the QCAmap software (Fenzl and Mayring) [46]. Similar to Zehetmair et al [47], we chose an inductive coding approach to achieve the most unbiased and thorough description of the data, which we deemed important for the exploratory nature of our study [44,45]. Both coders (Lilli Feline Metelmann and LB) coded the material independently, discussed their coding results, and jointly developed a category system. When disagreements between the coders could not be resolved, a third coder (FOK) was consulted. We chose this process because intercoder comparison is not feasible with inductive category development [44,45]. In an inductive approach, categories are created bottom-up from the material rather than selected before data analysis, as in deductive approaches [44,45]. We report the frequencies of the categories and codes to increase transparency and demonstrate the saliency of the codes in the data. The category system is provided in Multimedia Appendix 2. The questionnaires were analyzed using R (version 4.1.3; R Core Team). For PHQ-8, SCARED, and the potential capabilities questionnaire, we calculated descriptive statistics (mean and SD). For the potential capabilities, we also generated frequency distributions to visualize the response patterns.
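The per-item descriptive statistics for the questionnaires can be computed in a few lines. The study itself used R 4.1.3; the following Python sketch is an illustrative equivalent (the function name and any example data are hypothetical):

```python
# Illustrative Python version of the descriptive analysis (the study used R);
# summarizes one 1-5 Likert item with mean, sample SD, and frequency counts.
from collections import Counter
from statistics import mean, stdev

def describe_item(ratings: list[int]) -> dict:
    """Mean, sample standard deviation (as computed by R's sd()), and
    response frequencies for one questionnaire item."""
    return {
        "mean": round(mean(ratings), 2),
        "sd": round(stdev(ratings), 2),
        "freq": dict(sorted(Counter(ratings).items())),
    }

# Example with made-up ratings for one capability item (n=14):
print(describe_item([5, 5, 4, 5, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5]))
```

The `freq` dictionary corresponds to the frequency distributions visualized for the capabilities questionnaire; for the PHQ-8 and SCARED, only the mean and SD of the sum scores were reported.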

Ethical Considerations

The Institutional Review Board of the Medical Faculty of the University of Tübingen approved the study (project ID 595/2021B01). The participants and caregivers provided written informed consent (Multimedia Appendix 3) before screening for eligibility and enrollment. All participant data were deidentified before analysis by removing personal identifiers such as names, addresses, and contact information, and replacing them with unique study codes to ensure participant confidentiality. After the study, participants received a reimbursement of 30€ (equivalent to US $33) for taking part in the study. The study lasted between 1 and 2 hours.


Participant Characteristics

In total, 14 youth between 14 and 17 years old (mean 16.1, SD 1.14 years) participated in the study. Furthermore, 12 participants identified as women and 2 as nonbinary. All participants reported a current or remitted depressive disorder. In addition, 10 participants had a PHQ-8 score of 10 or higher, indicating acute depressive symptoms [48]. Of the total, 12 participants had a SCARED score of 25 or higher, indicating current symptoms of anxiety [49]. Furthermore, 9 participants showed symptoms of both depression and anxiety. Out of 14, 9 (64%) participants were currently receiving psychotherapy, whereas 8 (57%) had received psychotherapy in the past. A comprehensive overview of the participant characteristics is illustrated in Table 2.

Table 2. Participant characteristics.
Characteristic | Statistical values
Age (y), mean (SD) | 16.1 (1.14)
Gender, n (%)
  Women | 12 (86)
  Nonbinary | 2 (14)
PHQ-8a (sum score)
  Mean (SD) | 13.57 (5.58)
  ≥Cut-off (10), n (%) | 10 (71)
SCAREDb (sum score)
  Mean (SD) | 40.86 (17.55)
  ≥Cut-off (25), n (%) | 12 (86)
Psychotherapy experience, n (%)
  No experience | 2 (14)
  Current (≤6 mo) | 9 (64)
  Past (>6 mo) | 8 (57)
  Both current and past | 6 (43)
Diagnosis of depression, n (%)
  Current (≤6 mo) | 8 (57)
  Remitted (>6 mo) | 10 (71)
  Recurrent (minimum 2 episodes) | 5 (36)

aPHQ-8: 8-item Patient Health Questionnaire.

bSCARED: Screen for Child Anxiety Related Disorders.

Research Question 1: Problems With Depression

All participants (N=14, 100%) experienced problems categorized as depressive symptoms, including lack of motivation and energy, depressed mood, and self-doubt. Furthermore, 2 participants explained their main problems:

I just don’t have the strength to do anything.
I’m just tired of everyday life, or of all these everyday activities like brushing my teeth.

Some participants (9/14, 64%) reported comorbidities, such as panic attacks, anxiety, and excessive alcohol consumption, while 4 participants (29%) reported physical problems, such as tension, headaches, or stomach pain. All 14 participants (100%) also reported interpersonal problems, particularly withdrawal from social relationships, stress caused by parents, and problems with friends. Furthermore, 11 participants (79%) expressed concerns about school or their future. They shared negative experiences at school and felt pressure to plan their future, as reflected in statements such as:

When it comes to school, I have a very, very big fear of the future.
Success is a big topic right now, because of the end of school year grades.

In total, 6 participants (43%) reported experiencing stigma associated with depression. They mentioned that adults stigmatized or trivialized their problems, and they feared negative reactions when sharing their problems. One participant was concerned that disclosing that they were seeing a psychotherapist would lead others to perceive them as “sick in the head.” Most participants reported encountering difficulties with mental health care: 12 individuals (86%) reported experiencing treatment barriers, both attitudinal and structural. Attitudinal barriers included fear of being judged, the self-assessment that one’s problems were not severe enough to deserve support, and negative experiences during psychotherapy (n=9, 64%), such as ineffective techniques, difficulty disclosing personal information, pressure to perform, trust violations, and condescending treatment from therapists. Structural barriers included long waiting lists and being deemed not sufficiently severe by health care providers or parents to qualify for professional support.

Research Question 2: Adaptive Coping Strategies

All 14 participants (100%) identified social support as an adaptive coping strategy. The majority received social support from friends, family members, partners, teachers, or people they interacted with online. In total, 13 participants (93%) engaged in activities to distract themselves or to have a positive experience, such as spending time outdoors or consuming media. A total of 9 participants (64%) reported receiving professional treatment, and 9 (64%) used cognitive strategies, including positive self-talk, self-reflection, and reducing rumination. Furthermore, 6 participants (43%) implemented a daily structure, including forming habits, getting up early, scheduling positive activities, and setting goals. In addition, 7 (50%) prioritized their needs as a coping mechanism, for example, by intentionally allocating time to self-care or avoiding stressful social situations. Finally, 3 individuals (21%) engaged in mindfulness practice, such as breathing techniques and meditation, and 2 (14%) sought online information regarding their depressive symptoms and interrelated problems.

Research Question 3: Attitudes and Expectations Toward Chatbots to Treat Depression

Most participants had positive attitudes toward and expectations of chatbots to treat depression. In total, 12 participants (86%) stated that they would be less anxious about using a chatbot than seeing a human therapist. They pointed out that using a chatbot would be a suitable option for discussing sensitive topics that they would not share with others, primarily because they would not fear negative reactions. In addition, texting was considered less intimidating than speaking with a therapist. Furthermore, 11 participants (79%) pointed out the unlimited capacity and flexibility of chatbots:

I think it’s also an advantage that you can really chat with it at any time, because in therapy you just have one appointment per week. If you feel bad in the evening or at night or something, then you can still text the chatbot.

In addition, participants indicated that a chatbot is more accessible and requires less effort than seeing a human therapist. A total of 8 participants (57%) expressed confidence in the effectiveness of chatbots for the treatment of depression. They indicated that such a chatbot would be capable of addressing a wide range of issues, and welcomed the idea of having a helpful everyday chatbot. All participants demonstrated keen interest in using a chatbot to treat depression either because they expected it to be effective or because they were curious about using it. One participant stated that a daily usable chatbot could alleviate feelings of loneliness. Several others noted that a chatbot’s personal and human-like nature would increase the motivation to use it, particularly compared with other less interactive DMHIs.

However, participants also reported several concerns regarding the use of chatbots to treat depression. Some participants (n=9, 64%) were skeptical about the chatbot’s intelligence and natural language capabilities. They expressed concern that the chatbot would not be able to address individual, diverse, or unusual problems effectively, and feared being disappointed if the chatbot was unable to do so. Participants were particularly worried about inappropriate answers to emotional and intimate topics or inappropriate advice for their problems. Some participants (n=9, 64%) were concerned that a conversation with the chatbot would not feel like a conversation with a human therapist, due to a potential lack of emotional intelligence or overly robotic or analytical responses. However, chatbots that appeared too human were believed to be uncanny. In total, 2 participants (14%) were skeptical about sufficient data security and privacy. Finally, 2 participants (14%) worried that their symptoms of depression, such as little motivation or forgetfulness, would result in low engagement with the chatbot, thereby preventing them from effectively using it. They also pointed out that the lack of social pressure when using a chatbot, compared to seeing a human therapist, could contribute to low engagement, which might not be solvable by the chatbot.

Research Question 4: Chatbot Design Preferences

Category 1: Dialogue Topics and Therapeutic Content

All participants (n=14, 100%) shared preferences regarding dialogue topics that the chatbot should be able to cover. These suggestions include comprehensive assessments of depressive symptoms and interrelated problems, psychoeducation, therapeutic exercises that address specific issues, reminders to address basic needs, support for regulating emotions, tackling intrusive thoughts, and being distracted when needed. Furthermore, they emphasized the importance of discussing their daily lives and, more specifically, sharing current problems and receiving suggestions on how to resolve them. One participant highlighted the importance of the chatbot explicitly asking the user what type of support they require, such as emotional support (ie, listening and validation) or solution-oriented support:

Getting advice on how to solve a problem isn’t always helpful, even if it’s well-intentioned. […] It would be important for the chatbot to ask or understand if you want advice or if you only want to share your feelings.

Others elaborated on how they imagined talking about daily life and receiving advice:

I would just talk about the things that depress me at the time, to which I don’t know the answer.
I would probably just chat about everyday situations that were unpleasant to me or something like that.

These preferences were complemented by responses to a questionnaire regarding the chatbot’s desired capabilities. Figure 2 presents the full results. The responses indicate the significance of three key components of CBT: (1) cognitive restructuring (“changing negative thinking;” mean 4.9, SD 0.4), (2) behavioral activation (“pursuing activities that are important to me or have brought me joy in the past;” mean 4.5, SD 0.7), and (3) psychoeducation (“learning about depression;” mean 4.5, SD 1). In addition, participants expressed a preference for “improving social skills” (mean 4.4, SD 0.9). On the other hand, therapeutic writing, represented by “writing about events in my life” (mean 4, SD 1.3) and “writing a diary about things that bother me or that I’m grateful for” (mean 3.7, SD 1.5), was rated lower than most items. Finally, “connecting with other people who face similar problems” was ranked the lowest (mean 3.2, SD 1.2).

Figure 2. Results for how much support participants would like for several proposed chatbot capabilities.
Category 2: Personality, Interaction Style, and Social Role

Participants made various suggestions for the chatbot’s personality (n=14, 100%), such as understanding, caring, friendly, empathic, encouraging, humorous, interested, neutral, and nonjudgmental. During the think-aloud sessions, most participants characterized Cady’s personality as one of its main strengths. One participant said:

I like that you communicate in a friendly way, like an internet friend.

Furthermore, 2 participants were pleased that Cady was interested and asked them personal questions. For example, 1 participant appreciated that Cady asked whether she herself had experienced becoming less active:

I just think it’s good that she is asking if I know something like that or haven’t experienced it yet.

In terms of the interaction style and language used by the chatbot, participants suggested that it should be appropriate for the user’s age without complicated terms. This issue was also raised during think-aloud sessions. One participant asked for more age-specific language, such as short sentences:

So, this was a therapeutic chat, but if it is supposed to be like a friend, I think it is better if it writes like people my age, eg, using short sentences.

Another participant perceived the language as too therapeutic:

It was relatively authentic, but you also notice that it is not written for my age.

The human-likeness of the chatbot was also a significant factor, as participants found it important to react to feelings and to understand irony and humor. However, participants also expressed concerns about the chatbot being too human-like, which could lead to irritation or fear. During the think-aloud sessions, 1 participant appreciated that Cady disclosed personal information such as enjoying chats with nice people, which resembled a human-like trait.

Regarding the social role, the participants (n=14, 100%) held divergent views: some desired the chatbot to resemble a friend, whereas others preferred it to assume the role of a therapist or a combination of both. One participant explained their preference for a therapist-like role:

I actually think it’s better if it were like a therapist [...], because with friends you don’t necessarily want to talk about everything.

One participant proposed using situation-specific roles:

A therapist if you’re doing therapeutic exercises and a friend if you just want to talk.
Category 3: Personalization

Participants frequently discussed the importance of personalization (n=13, 93%). They reported various aspects that should be personalized, including the chatbot’s personality, gender, avatar, message style, dialogue topics, therapeutic content, the mobile app’s color theme, notifications, and the user’s profile picture and username. However, the participants had divergent opinions on who should control the personalization of the chatbot. While some suggested that the user should be in control, others believed that the chatbot itself should control personalization. Others, in turn, proposed a mixed approach in which both the user and the chatbot share control. Some participants preferred personalization to occur only once at the beginning, whereas others preferred dynamic personalization, where personalization occurs continuously.

In addition to these explicit preferences, personalization also emerged from diverging design preferences. All participants expressed a preference for an appealing user interface, but they had differing opinions on the specific design elements, with some suggesting a bright and colorful design, whereas others preferred a plain and minimalistic design with black and white colors. Most participants preferred the chatbot as a standalone app, but some preferred integration into a popular instant messaging app such as WhatsApp (Meta). Regarding the interaction modalities, most participants preferred a chatbot over a voice agent. In terms of text input, the participants believed that a mix of predefined answers and free-text input was optimal, as it combines the minimal effort of clicking predefined answers with the flexibility of unrestricted text input. In total, 2 participants (14%) expressed concerns about personalization, stating that personalized content could lead to avoidance behavior, in which users avoid topics that they find difficult or uncomfortable. They also believed that personalization of the chatbot’s appearance, such as its gender or avatar, could reduce its seriousness as a tool for treating depression.

Table 3 summarizes key chatbot design recommendations derived from results across all 4 research questions.

Table 3. Key design recommendations.
Topic | Design recommendation
Assessment
  • Incorporate repeated multidimensional assessment, covering depressive symptoms and related problems, to monitor and understand the key issues a user faces.
  • Ensure assessments are comprehensive yet concise to minimize the effort for users.
Scope and limitations
  • Communicate the chatbot’s scope and limitations clearly, for instance, when it cannot properly handle a topic. Redirect users to external resources (eg, helplines and professional interventions) in such cases to ensure users receive appropriate support.
Dialogue content
  • Curate a diverse content database covering a wide range of topics that are relevant to youth with depression.
  • Integrate evidence-based therapeutic techniques alongside user-preferred coping strategies (eg, behavioral activation, cognitive restructuring, and social support).
  • Enable the chatbot to engage in conversations about users’ daily life, including discussions on urgent problems.
  • Support follow-up discussions on user-reported problems, track progress, and encourage reflection on how successfully users applied the learned techniques.
Personality, social role, and language style
  • Design a chatbot with an understanding, caring, friendly, empathic, encouraging, humorous, interested, and nonjudgmental personality.
  • Ensure the chatbot uses age-appropriate language without complicated terminology.
  • Balance human-likeness (eg, recognizing feelings, irony, humor) with clear boundaries to avoid unrealistic or overly human-like interactions.
Personalization
  • Personalize the content selection and presentation.
  • Personalize the chatbot’s persona (eg, personality, social role, gender, avatar, message style).
  • Explore the impact of persona personalization (eg, switching between formal and casual depending on user feedback, interaction style preferences and situation).
  • Explore the impact of who controls personalization (the user, the chatbot or both) and when or how often personalization is performed (initial vs ongoing).
  • Ensure personalization enhances user experience while maintaining the chatbot’s therapeutic credibility and seriousness.

Primary Findings

This study explored the problems youth face with depression, their adaptive coping strategies, and their attitudes and design preferences for chatbots to treat depression. Our findings indicate that youth experience diverse problems beyond core depressive symptoms, employ a range of coping strategies, hold predominantly positive attitudes toward chatbots for depression, and express diverse, sometimes contradictory, design preferences. In the following section, we discuss the results according to the 4 research questions guiding this study: (1) the problems youth with depression face, (2) their adaptive coping strategies, (3) their attitudes and expectations toward chatbots to treat depression, and (4) their design preferences for chatbots to treat depression. As presented in Table 3, our findings across all research questions offer specific guidance for designing chatbots tailored to youth experiencing depression.

Problems of Youth With Depression

First, participants reported various problems with depression beyond the symptoms described in the DSM-5 [50], including mental health and somatic comorbidities, interpersonal issues, and concerns about school and the future. These findings align with previous research showing that youth with depression face an increased risk for anxiety disorders, substance abuse, and physical health issues [51], as well as social withdrawal, lack of friendship [52], diminished academic achievement, school dislike, and pessimism [52]. To address these problems, engaging and effective chatbots should (1) assess both core depressive symptoms and related problems repeatedly to inform therapeutic decisions and strengthen the therapeutic relationship by understanding the users’ individual challenges, and (2) curate a diverse content database to address the wide range of problems, aligning with Li et al [53].

Second, participants reported significant barriers to receiving professional treatment, including poor mental health literacy (eg, awareness of symptom severity), attitudinal barriers (eg, perceived stigma), and structural barriers (eg, waiting times). These findings align with research showing that youth struggle to recognize symptoms [54], hesitate to disclose personal information to professionals due to fear of judgment or not being taken seriously [7], and question the effectiveness of professional treatment [7], a concern supported by evidence that 60% of youth with depression do not benefit from psychological treatments [55]. To address these barriers, chatbots should (1) encourage emotional disclosure by providing a nonjudgmental conversational environment, something static, nonconversational DMHIs cannot fully achieve, and (2) clearly communicate their scope and limitations to set realistic user expectations and direct users to external support when needed.

Adaptive Coping Strategies

Participants reported various adaptive coping strategies, including seeking social support, establishing positive activities, using cognitive strategies, and receiving professional treatment. Many also emphasized the importance of focusing on their needs and establishing a daily structure. Most of these strategies correspond to the core CBT strategies of behavioral activation, cognitive restructuring, and problem-solving [56], which are effective, first-line treatments for depression in youth [57].

The prevalence of CBT-based coping strategies may be due to the previous psychotherapy experience of most participants. Similarly, among youth with nonsuicidal self-injury, those with experience in dialectical behavior therapy have suggested incorporating it into a DMHI designed for them [58], showing how previous therapy experience influences preferences for DMHIs. The use of CBT-aligned adaptive coping strategies is important, as research shows that youth with depression use adaptive cognitive strategies less frequently than healthy controls [59] and that applying these strategies is associated with fewer depressive symptoms [60]. Chatbots for youth with depression should incorporate these strategies to create engaging and effective content that aligns with users’ existing coping strategies and effective therapeutic techniques. For example, the chatbot could facilitate social support by helping users identify supportive individuals and suggesting personalized ways to reach out, such as a video call with a friend who is good at cheering them up or a message to a family member who provides good advice. Chatbots can even draft customizable messages to support users who struggle to reach out.

Attitudes and Expectations Toward Chatbots to Treat Depression

Participants predominantly held positive attitudes toward chatbots to treat depression. Many participants reported feeling less anxious about using a chatbot than seeing a human psychotherapist, especially when discussing sensitive topics. This finding extends research on post-traumatic stress disorder, showing that digital agents lead to greater disclosure of sensitive or stigmatized information [61,62], and supports evidence that youth value the privacy and anonymity offered by DMHIs [63]. Given these findings, chatbots could help users become comfortable with sharing sensitive information and practicing therapeutic conversations, with potential benefits for subsequent sessions with psychotherapists. However, some participants were skeptical about the chatbots’ ability to address individual problems and provide appropriate advice on sensitive topics, which is supported by evidence that commercial chatbots for adults with depression often fail to match user inputs, understand messages, or respond appropriately [64]. Although privacy and data security are frequently cited as primary risks of DMHIs for young people [65], few participants raised these concerns. Instead, they viewed chatbots as privacy-enhancing compared with human therapists. Nevertheless, robust privacy and data security remain essential, and it is unclear whether participants were genuinely unconcerned or simply assumed chatbots would have strong protections in place.

Design Preferences

Our study revealed important insights into design preferences for chatbots for youth with depression. First, participants expressed diverse preferences for dialogue topics, with 3 areas standing out: (1) chatting about daily life, (2) discussing urgent problems and receiving advice, and (3) working through therapy exercises that address specific problems. Interestingly, while youth with nonsuicidal self-injury [58], youth with emotional problems [66], and youth from the general population [63] identified being connected to others facing similar challenges as a key feature of their DMHI, our participants ranked such social connection as their lowest priority, highlighting how design preferences may vary across mental health conditions. In summary, a chatbot to treat depression in youth should reflect these preferences. However, available applications have not implemented chats about daily life and have relied on rule-based approaches for problem-solving [14,15], likely due to insufficient conversational capabilities. Structured therapy techniques, such as behavioral activation or cognitive restructuring, are standard components of DMHIs [38] and chatbots [67]. However, their current implementation remains predominantly static [68], without personalized advice or feedback during therapeutic exercises. Large language models (LLMs) such as OpenAI’s GPT or Anthropic’s Claude promise to overcome current limitations of DMHIs and chatbots. LLMs enable chatbots to better address our participants’ design preferences, such as natural conversations about daily life and advice on urgent problems. In addition, LLMs can enhance static implementations of therapeutic exercises by offering personalized guidance and feedback. In behavioral activation, LLMs can personalize the explanation of the relationship between behavior and feelings and evaluate the proposed activity plan for feasibility and therapeutic appropriateness.
In cognitive restructuring, Sharma et al [69] demonstrated that LLMs can assist users in identifying thinking traps and generating reframed thoughts. Although these enhanced capabilities have the potential to improve understanding and skill development, key challenges remain. LLMs can generate false or harmful messages [70], posing risks to vulnerable users. Future research needs to explore how to capitalize on their advanced conversational abilities while ensuring therapeutic quality and safety [71].
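One way such a safeguard might be operationalized is a minimal pre-LLM screening step: user messages are checked for crisis indicators before any generated reply is shown, and matching messages are redirected to external support, in line with the scope-and-limitations recommendation in Table 3. The keyword list and helpline text below are hypothetical placeholders, not a clinically validated screening instrument:

```python
# Illustrative sketch only: screen a user message for crisis indicators
# before passing it to an LLM, and redirect to external support when one
# is found. The keyword list is a placeholder, not a validated instrument.
CRISIS_KEYWORDS = ("suicide", "kill myself", "end my life", "self-harm")

HELPLINE_MESSAGE = (
    "This sounds really serious, and I want you to get the best support "
    "possible. Please contact a crisis helpline or talk to a trusted adult."
)

def route_message(user_message: str) -> str:
    """Return 'helpline' if a crisis indicator is detected, else 'llm'."""
    text = user_message.lower()  # case-insensitive matching
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return "helpline"
    return "llm"
```

In a deployed system, keyword matching of this kind would only be a first line of defense, complemented by classifier-based risk detection and human oversight.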

Second, our findings show that engaging and effective chatbots for youth with depression need personalization beyond the conversational capabilities of LLMs. Drawing on the framework by Cohen et al [72] for psychotherapy personalization, effective chatbots require personalization in content selection and interaction style. Content selection ensures that chatbots address the diverse problems and dialogue topics our participants reported, aligning with findings from Li et al [53] and Ludlow et al [66]. A personalized interaction style accommodates the preferences for different social roles and language use. The benefits of personalizing the chatbot persona have been demonstrated [73,74], and LLMs enable the personalization of interaction styles via prompt instructions, reducing the need to manually craft different responses for each style [75]. Despite the clear need for personalization, participants disagreed on who should control it [76], preventing a clear design recommendation. While some preferred user-led personalization, aligning with Kenny et al [63], others favored chatbot-led personalization or a hybrid approach. Given this ambiguity and the lack of empirical evidence [77], future research is needed to guide chatbot personalization.
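To make the prompt-based approach concrete, the following minimal sketch assembles an LLM system prompt from stored user preferences. The preference keys (`social_role`, `tone`, `language_style`) and their defaults are hypothetical; a production system would populate them from the personalization dialogue discussed above:

```python
# Illustrative sketch: build an LLM system prompt from stored user
# preferences so the chatbot's persona and interaction style can be
# personalized per user. All preference keys are hypothetical examples.
def build_persona_prompt(prefs: dict) -> str:
    """Assemble a system prompt from a user's stored style preferences."""
    role = prefs.get("social_role", "supportive companion")
    tone = prefs.get("tone", "casual and friendly")
    language = prefs.get("language_style", "simple, age-appropriate wording")
    return (
        f"You are a mental health chatbot for youth, acting as a {role}. "
        f"Use a {tone} tone and {language}. "
        "Stay within your scope: you are not a replacement for a therapist."
    )
```

Because the persona lives in a single prompt string, switching from, say, a casual friend to a more formal coach requires only updating the stored preferences rather than authoring separate response variants for each style.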

Limitations

Our study has 2 main limitations. First, our sample size was limited, and the participants were predominantly women. Although depression is more prevalent in women [78-80], the absence of men limits the generalizability of our findings across genders. The gender imbalance resulted from convenience sampling through a single youth psychiatrist. One man was informed about the study but declined to participate. While our study included 2 nonbinary participants, an underrepresented demographic in research, recruiting men would have required targeted efforts beyond our study’s constraints. In addition, our sample consisted entirely of participants who had actively sought mental health support, most of whom had previous psychotherapy experience. As a result, perspectives from those who avoid professional help or resist treatment may be underrepresented. Due to these sampling limitations, we likely did not achieve full data saturation. Although qualitative research can reach saturation with as few as 12 interviews [31], our sample composition suggests that some perspectives may have been missed. However, our findings on youth depression and related problems replicated findings from studies with much larger sample sizes [81], indicating that our results are comprehensive. Future studies should recruit a more diverse sample, specifically including men, individuals reluctant to seek professional treatment, and those without previous psychotherapy experience, to validate and extend our findings.

Second, our study captured attitudes, preferences, and hypothetical usage scenarios rather than actual usage behavior. While participants interacted with a prototype during the think-aloud sessions, this brief, controlled interaction may not reflect real-world interactions. The gap between stated preferences and actual behavior is well-documented [82], and our study design does not allow us to determine whether the design preferences of the participants would translate into long-term engagement or effective use. Future research should follow the next steps of an iterative human-centered design process [83,84] with functional prototypes to assess whether the identified design preferences lead to actual user engagement and therapeutic benefits. Despite these limitations, our study provides in-depth findings on the design of chatbots to treat depression in youth, highlighting the value of qualitative research in the iterative development of DMHIs [83,84].

Conclusions and Future Work

Our study provides valuable insights into the problems and coping strategies of youth with depression, and their attitudes, expectations, and design preferences for a chatbot to treat depression in youth. We found complex user needs, predominantly positive attitudes toward chatbots, and various design preferences, including the need for diverse dialogue topics and personalization. Our findings led to concrete design recommendations that lay a crucial foundation for developing engaging and effective chatbots to treat depression in youth.

Despite these contributions, several research directions remain. First, future studies should validate and extend these findings with larger, more diverse samples to ensure broader representation of youth with depression. Second, examining actual usage patterns and long-term engagement with functional chatbot prototypes will help assess the effectiveness of our design recommendations. Third, investigating the responsible integration of LLMs is important, including the development of robust safeguards and the evaluation of dialogue quality and therapeutic outcomes. Finally, researchers must design and evaluate effective personalization features, particularly regarding whether the user, the chatbot, or both should control personalization, while balancing implementation effort and impact. By addressing these research directions, we can further improve chatbots to treat depression in youth and ultimately contribute to more accessible, engaging, and effective mental health support for this vulnerable population.

Acknowledgments

We thank (1) all the participants for their participation in this study, (2) Ulrike Detzner, Reiter, MD for supporting the recruitment, and (3) Lilli Feline Metelmann for her efforts in planning, conducting, and analyzing the study with us. This work was supported by a donation from Cents for help e.V. to SL (project title: “An intelligent therapy chatbot for children suffering from depression and suicidal ideation”). We also acknowledge financial support from the Open Access Publication Fund of the University of Tübingen.

Conflicts of Interest

SL has received public funding, along with other scientists, for an app-based aftercare intervention for youth and young adults with depression (iCAN study). He has received consultancy fees from companies for advice on study and intervention design in the context of e-mental health. He has also received payments for lectures on e-mental health. None of the other authors declare any conflicts of interest.

Multimedia Appendix 1

Interview Guide.

DOCX File, 62 KB

Multimedia Appendix 2

Categories.

DOCX File, 66 KB

Multimedia Appendix 3

Informed written consent.

PDF File, 351 KB

  1. Clayborne ZM, Varin M, Colman I. Systematic review and meta-analysis: adolescent depression and long-term psychosocial outcomes. J Am Acad Child Adolesc Psychiatry. Jan 2019;58(1):72-79. [CrossRef] [Medline]
  2. Thapar A, Collishaw S, Pine DS, Thapar AK. Depression in adolescence. Lancet. Mar 17, 2012;379(9820):1056-1067. [CrossRef] [Medline]
  3. Oud M, de Winter L, Vermeulen-Smit E, et al. Effectiveness of CBT for children and adolescents with depression: a systematic review and meta-regression analysis. Eur Psychiatry. Apr 2019;57:33-45. [CrossRef] [Medline]
  4. Eisenberg D, Golberstein E, Gollust SE. Help-seeking and access to mental health care in a university student population. Med Care. Jul 2007;45(7):594-601. [CrossRef] [Medline]
  5. Ebert DD, Mortier P, Kaehlke F, et al. Barriers of mental health treatment utilization among first-year college students: first cross-national results from the WHO World Mental Health International College student initiative. Int J Methods Psychiatr Res. Jun 2019;28(2):e1782. [CrossRef] [Medline]
  6. Gulliver A, Griffiths KM, Christensen H. Perceived barriers and facilitators to mental health help-seeking in young people: a systematic review. BMC Psychiatry. Dec 30, 2010;10(1):1-9. [CrossRef] [Medline]
  7. Radez J, Reardon T, Creswell C, Lawrence PJ, Evdoka-Burton G, Waite P. Why do children and adolescents (not) seek and access professional help for their mental health problems? A systematic review of quantitative and qualitative studies. Eur Child Adolesc Psychiatry. Feb 2021;30(2):183-211. [CrossRef] [Medline]
  8. Domhan D, In-Albon T, Pfeiffer S. Erfassung von Barrieren und Faszilitatoren zur Aufnahme einer Psychotherapie im Kontext ambulanter Kinder- und Jugendlichenpsychotherapie. Psychotherapie. Nov 2023;68(6):466-474. [CrossRef]
  9. Wu Y, Fenfen E, Wang Y, et al. Efficacy of internet-based cognitive-behavioral therapy for depression in adolescents: a systematic review and meta-analysis. Internet Interv. Dec 2023;34:100673. [CrossRef] [Medline]
  10. Leech T, Dorstyn D, Taylor A, Li W. Mental health apps for adolescents and young adults: a systematic review of randomised controlled trials. Child Youth Serv Rev. Aug 2021;127:106073. [CrossRef]
  11. Cameron SK, Rodgers J, Dagnan D. The relationship between the therapeutic alliance and clinical outcomes in cognitive behaviour therapy for adults with depression: a meta‐analytic review. Clin Psychology and Psychoth. May 2018;25(3):446-456. URL: https://onlinelibrary.wiley.com/toc/10990879/25/3 [CrossRef]
  12. Vaidyam AN, Wisniewski H, Halamka JD, Kashavan MS, Torous JB. Chatbots and conversational agents in mental health: a review of the psychiatric landscape. Can J Psychiatry. Jul 2019;64(7):456-464. [CrossRef] [Medline]
  13. Bendig E, Erb B, Schulze-Thuesing L, Baumeister H. Die nächste generation: chatbots in der klinischen psychologie und psychotherapie zur förderung mentaler gesundheit – ein scoping-review. Verhaltenstherapie. Dec 23, 2019;29(4):266-280. [CrossRef]
  14. Inkster B, Sarda S, Subramanian V. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR Mhealth Uhealth. Nov 23, 2018;6(11):e12106. [CrossRef] [Medline]
  15. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health. Jun 6, 2017;4(2):e19. [CrossRef] [Medline]
  16. Abd-Alrazaq AA, Rababeh A, Alajlani M, Bewick BM, Househ M. Effectiveness and safety of using chatbots to improve mental health: systematic review and meta-analysis. J Med Internet Res. Jul 13, 2020;22(7):e16021. [CrossRef] [Medline]
  17. Perski O, Crane D, Beard E, Brown J. Does the addition of a supportive chatbot promote user engagement with a smoking cessation app? An experimental study. Digit Health. Jan 2019;5:1-13. [CrossRef]
  18. Linardon J, Torous J, Firth J, Cuijpers P, Messer M, Fuller-Tyszkiewicz M. Current evidence on the efficacy of mental health smartphone apps for symptoms of depression and anxiety. A meta-analysis of 176 randomized controlled trials. World Psychiatry. Feb 2024;23(1):139-149. [CrossRef] [Medline]
  19. Darcy A, Daniels J, Salinger D, Wicks P, Robinson A. Evidence of human-level bonds established with a digital conversational agent: cross-sectional, retrospective observational study. JMIR Form Res. May 11, 2021;5(5):e27868. [CrossRef] [Medline]
  20. Bae Brandtzæg PB, Skjuve M, Kristoffer Dysthe KK, Følstad A. When the social becomes non-human: young people’s perception of social support in chatbots. Presented at: CHI ’21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems; May 6, 2021; Yokohama Japan. [CrossRef]
  21. Beatty C, Malik T, Meheli S, Sinha C. Evaluating the therapeutic alliance with a free-text CBT conversational agent (Wysa): a mixed-methods study. Front Digit Health. 2022;4:847991. [CrossRef] [Medline]
  22. Feine J, Gnewuch U, Morana S, Maedche A. A taxonomy of social cues for conversational agents. Int J Hum Comput Stud. Dec 2019;132:138-161. [CrossRef]
  23. Vaidyam AN, Linggonegoro D, Torous J. Changes to the psychiatric chatbot landscape: a systematic review of conversational agents in serious mental illness: changements du paysage psychiatrique des chatbots: une revue systématique des agents conversationnels dans la maladie mentale sérieuse. Can J Psychiatry. Apr 2021;66(4):339-348. [CrossRef] [Medline]
  24. Cicchetti D, Toth SL. A developmental psychopathology perspective on adolescent depression. In: Handbook of Depression in Adolescents. Routledge/Taylor & Francis Group; 2009:3-32. ISBN: 9780429235054
  25. Rice F, Riglin L, Lomax T, et al. Adolescent and adult differences in major depression symptom profiles. J Affect Disord. Jan 15, 2019;243:175-181. [CrossRef] [Medline]
  26. Andone I, Błaszkiewicz K, Eibes M, Trendafilov B, Montag C, Markowetz A. How age and gender affect smartphone usage. Presented at: UbiComp ’16: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct; Sep 12, 2016:9-12; Heidelberg Germany. [CrossRef]
  27. Huffman S. OMG! Mobile voice survey reveals teens love to talk. URL: https://blog.google/products/search/omg-mobile-voice-survey-reveals-teens/ [Accessed 2014-10-14]
  28. Agapie E, Chang K, Patrachari S, Neary M, Schueller SM. Understanding mental health apps for youth: focus group study with Latinx youth. JMIR Form Res. Oct 18, 2022;6(10):e40726. [CrossRef] [Medline]
  29. Høiland CG, Følstad A, Karahasanovic A. Hi, can I help? Exploring how to design a mental health chatbot for youths. Human Technology. Aug 31, 2020;16(2):139-169. URL: https://humantechnology.jyu.fi/archive/vol-16/issue-2/vol16_iss2_ht_full_issue_august2020.pdf [CrossRef]
  30. Grové C. Co-developing a mental health and wellbeing chatbot with and for young people. Front Psychiatry. 2020;11:606041. [CrossRef] [Medline]
  31. Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods US: Sage Publications. 2006;18(1):59-82. [CrossRef]
  32. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders (DSM-5®). American Psychiatric Pub; 2013. ISBN: 0-89042-557-4
  33. Kaufman J, Birmaher B, Brent D, et al. Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime Version (K-SADS-PL): initial reliability and validity data. J Am Acad Child Adolesc Psychiatry. Jul 1997;36(7):980-988. [CrossRef] [Medline]
  34. Martin A, Rief W, Klaiberg A, Braehler E. Validity of the Brief Patient Health Questionnaire Mood Scale (PHQ-9) in the general population. Gen Hosp Psychiatry. 2006;28(1):71-77. [CrossRef] [Medline]
  35. Weitkamp K, Romer G, Rosenthal S, Wiegand-Grefe S, Daniels J. German Screen for Child Anxiety Related Emotional Disorders (SCARED): reliability, validity, and cross-informant agreement in a clinical sample. Child Adolesc Psychiatry Ment Health. Jun 30, 2010;4(1):1-8. [CrossRef] [Medline]
  36. Towery J. The Anti-Depressant Book: A Practical Guide for Teens and Young Adults to Overcome Depression and Stay Healthy. Jacob Towery; 2016. ISBN: 978-0-692-64154-5
  37. Auerbach RP, Webb CA, Stewart JG. Cognitive Behavior Therapy for Depressed Adolescents: A Practical Guide to Management and Treatment. Routledge; 2016. [CrossRef] ISBN: 978-1-138-81614-5
  38. Huguet A, Rao S, McGrath PJ, et al. A systematic review of cognitive behavioral therapy and behavioral activation apps for depression. PLOS ONE. 2016;11(5):e0154248. [CrossRef] [Medline]
  39. Lejuez CW, Hopko DR, Hopko SD. A brief behavioral activation treatment for depression. Treatment manual. Behav Modif. Apr 2001;25(2):255-286. [CrossRef] [Medline]
  40. Abel U, Hautzinger M. Kognitive Verhaltenstherapie Bei Depressionen Im Kindes- Und Jugendalter. Springer-Verlag; 2013. ISBN: 978-3-642-29790-8
  41. Groen G, Petermann F. Therapie Tools Depression Im Kindes-Und Jugendalter. Beltz; 2015. ISBN: 978-3-621-28267-3
  42. Botsociety. Design, preview and prototype your next chatbot or voice assistant. URL: https://botsociety.io [Accessed 2021-06-18]
  43. Jaspers MWM, Steen T, van den Bos C, Geenen M. The think aloud method: a guide to user interface design. Int J Med Inform. Nov 2004;73(11-12):781-795. [CrossRef] [Medline]
  44. Mayring P, Fenzl T. Qualitative inhaltsanalyse. In: Baur N, Blasius J, editors. Handbuch Methoden Der Empirischen Sozialforschung. Springer Fachmedien; 2019:633-648. [CrossRef] ISBN: 978-3-658-21308-4
  45. Mayring P. Qualitative Inhaltsanalyse: Grundlagen Und Techniken 12, Überarbeitete Auflage. Beltz; 2015. ISBN: 978-3-407-29393-0
  46. Fenzl T, Mayring P. QCAmap: eine interaktive webapplikation für qualitative inhaltsanalyse. ZPID (Leibniz Institute for Psychology Information). 2017. [CrossRef]
  47. Zehetmair C, Nagy E, Leetz C, et al. Self-practice of stabilizing and guided imagery techniques for traumatized refugees via digital audio files: qualitative study. J Med Internet Res. Sep 23, 2020;22(9):e17906. [CrossRef] [Medline]
  48. Kroenke K, Strine TW, Spitzer RL, Williams JBW, Berry JT, Mokdad AH. The PHQ-8 as a measure of current depression in the general population. J Affect Disord. Apr 2009;114(1-3):163-173. [CrossRef] [Medline]
  49. Caporino NE, Sakolsky D, Brodman DM, et al. Establishing clinical cutoffs for response and remission on the Screen for Child Anxiety Related Emotional Disorders (SCARED). J Am Acad Child Adolesc Psychiatry. Aug 2017;56(8):696-702. [CrossRef] [Medline]
  50. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. American Psychiatric Association; 2013. ISBN: 978-0-89042-555-8
  51. Agnafors S, Norman Kjellström A, Torgerson J, Rusner M. Somatic comorbidity in children and adolescents with psychiatric disorders. Eur Child Adolesc Psychiatry. Nov 2019;28(11):1517-1525. [CrossRef] [Medline]
  52. Mullarkey MC, Marchetti I, Beevers CG. Using network analysis to identify central symptoms of adolescent depression. J Clin Child Adolesc Psychol. 2019;48(4):656-668. [CrossRef] [Medline]
  53. Li SH, Achilles MR, Spanos S, Habak S, Werner-Seidler A, O’Dea B. A cognitive behavioural therapy smartphone app for adolescent depression and anxiety: co-design of ClearlyMe. tCBT. 2022;15. [CrossRef]
  54. Radez J, Reardon T, Creswell C, Orchard F, Waite P. Adolescents’ perceived barriers and facilitators to seeking and accessing professional help for anxiety and depressive disorders: a qualitative interview study. Eur Child Adolesc Psychiatry. Jun 2022;31(6):891-907. [CrossRef] [Medline]
  55. Cuijpers P, Karyotaki E, Ciharova M, et al. The effects of psychological treatments of depression in children and adolescents on response, reliable change, and deterioration: a systematic review and meta-analysis. Eur Child Adolesc Psychiatry. Jan 2023;32(1):177-192. [CrossRef] [Medline]
  56. Wenzel A. Basic strategies of cognitive behavioral therapy. Psychiatr Clin North Am. Dec 2017;40(4):597-609. [CrossRef] [Medline]
  57. Luxton R, Kyriakopoulos M. Depression in children and young people: identification and management (NICE guidelines). Arch Dis Child Educ Pract Ed. 2020;107. [CrossRef]
  58. Čuš A, Edbrooke-Childs J, Ohmann S, Plener PL, Akkaya-Kalayci T. “Smartphone apps are cool, but do they help me?”: a qualitative interview study of adolescents’ perspectives on using smartphone interventions to manage nonsuicidal self-injury. Int J Environ Res Public Health. Mar 22, 2021;18(6):3289. [CrossRef] [Medline]
  59. Mihailescu I, Efrim-Budisteanu M, Andrei LE, et al. Cognitive coping strategies among inpatient adolescents with depression and psychiatric comorbidity. Children (Basel). Nov 29, 2023;10(12):1870. [CrossRef] [Medline]
  60. Schäfer J, Naumann E, Holmes EA, Tuschen-Caffier B, Samson AC. Emotion regulation strategies in depressive and anxiety symptoms in youth: a meta-analytic review. J Youth Adolescence. Feb 2017;46(2):261-276. [CrossRef]
  61. Lucas GM, Rizzo A, Gratch J, et al. Reporting mental health symptoms: breaking down barriers to care with virtual human interviewers. Front Robot AI. 2017;4. [CrossRef]
  62. Pickard MD, Roster CA, Chen Y. Revealing sensitive information in personal interviews: Is self-disclosure easier with humans or avatars and under what conditions? Comput Human Behav. Dec 2016;65:23-30. [CrossRef]
  63. Kenny R, Dooley B, Fitzgerald A. Developing mental health mobile apps: exploring adolescents’ perspectives. Health Informatics J. Jun 2016;22(2):265-275. [CrossRef] [Medline]
  64. Martinengo L, Lum E, Car J. Evaluation of chatbot-delivered interventions for self-management of depression: content analysis. J Affect Disord. Dec 15, 2022;319:598-607. [CrossRef] [Medline]
  65. Wies B, Landers C, Ienca M. Digital mental health for young people: a scoping review of ethical promises and challenges. Front Digit Health. 2021;3:697072. [CrossRef] [Medline]
  66. Ludlow K, Russell JK, Ryan B, et al. Co-designing a digital mental health platform, “Momentum”, with young people aged 7-17: a qualitative study. Digit Health. 2023;9:20552076231216410. [CrossRef] [Medline]
  67. Ahmed A, Hassan A, Aziz S, et al. Chatbot features for anxiety and depression: a scoping review. Health Informatics J. 2023;29(1):14604582221146719. [CrossRef] [Medline]
  68. Denecke K, Schmid N, Nüssli S. Implementation of cognitive behavioral therapy in e-mental health apps: literature review. J Med Internet Res. Mar 10, 2022;24(3):e27791. [CrossRef] [Medline]
  69. Sharma A, Rushton K, Lin IW, Nguyen T, Althoff T. Facilitating self-guided mental health interventions through human-language model interaction: a case study of cognitive restructuring. Association for Computing Machinery; 2024. Presented at: CHI ’24; May 11, 2024:1-29; Honolulu, HI, USA. URL: https://dl.acm.org/doi/proceedings/10.1145/3613904 [CrossRef]
  70. Birkun AA, Gautam A. Large language model (LLM)-powered chatbots fail to generate guideline-consistent content on resuscitation and may provide potentially harmful advice. Prehosp Disaster Med. Dec 2023;38(6):757-763. [CrossRef] [Medline]
  71. Stade EC, Stirman SW, Ungar LH, et al. Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation. Npj Ment Health Res. Apr 2, 2024;3(1):12. [CrossRef] [Medline]
  72. Cohen ZD, Delgadillo J, DeRubeis RJ. Personalized treatment approaches. In: Bergin and Garfield’s Handbook of Psychotherapy and Behavior Change. John Wiley & Sons, Inc; 2021:673-703. ISBN: 978-1-119-53658-1
  73. Nißen M, Rüegger D, Stieger M, et al. The effects of health care chatbot personas with different social roles on the client-chatbot bond and usage intentions: development of a design codebook and web-based study. J Med Internet Res. Apr 27, 2022;24(4):e32630. [CrossRef] [Medline]
  74. Ahmad R, Siemon D, Gnewuch U, Robra-Bissantz S. Designing personality-adaptive conversational agents for mental health care. Inf Syst Front. 2022;24(3):923-943. [CrossRef] [Medline]
  75. Jiang H, Zhang X, Cao X, Breazeal C, Roy D, Kabbara J. PersonaLLM: investigating the ability of large language models to express personality traits. Association for Computational Linguistics; 2024. Presented at: Findings of the Association for Computational Linguistics; Jun 16-21, 2024:3605-3627; Mexico City, Mexico. URL: https://aclanthology.org/2024.findings-naacl [CrossRef]
  76. Fan H, Poole MS. What is personalization? Perspectives on the design and implementation of personalization in information systems. JOCEC. Jan 2006;16(3-4):179-202. [CrossRef]
  77. Kocaballi AB, Berkovsky S, Quiroz JC, et al. The personalization of conversational agents in health care: systematic review. J Med Internet Res. Nov 7, 2019;21(11):e15360. [CrossRef] [Medline]
  78. Avenevoli S, Swendsen J, He JP, Burstein M, Merikangas KR. Major depression in the national comorbidity survey-adolescent supplement: prevalence, correlates, and treatment. J Am Acad Child Adolesc Psychiatry. Jan 2015;54(1):37-44. [CrossRef] [Medline]
  79. Seedat S, Scott KM, Angermeyer MC, et al. Cross-national associations between gender and mental disorders in the World Health Organization World Mental Health Surveys. Arch Gen Psychiatry. Jul 2009;66(7):785-795. [CrossRef] [Medline]
  80. Hua Z, Wang S, Yuan X. Trends in age-standardized incidence rates of depression in adolescents aged 10-24 in 204 countries and regions from 1990 to 2019. J Affect Disord. Apr 1, 2024;350:831-837. [CrossRef] [Medline]
  81. Chevance A, Ravaud P, Tomlinson A, et al. Identifying outcomes for depression that matter to patients, informal caregivers, and health-care professionals: qualitative content analysis of a large international online survey. Lancet Psychiatry. Aug 2020;7(8):692-702. [CrossRef] [Medline]
  82. de Corte K, Cairns J, Grieve R. Stated versus revealed preferences: an approach to reduce bias. Health Econ. May 2021;30(5):1095-1123. [CrossRef] [Medline]
  83. Yardley L, Morrison L, Bradbury K, Muller I. The person-based approach to intervention development: application to digital health-related behavior change interventions. J Med Internet Res. Jan 30, 2015;17(1):e30. [CrossRef] [Medline]
  84. Harte R, Glynn L, Rodríguez-Molinero A, et al. A human-centered design methodology to enhance the usability, human factors, and user experience of connected health systems: a three-phase methodology. JMIR Hum Factors. Mar 16, 2017;4(1):e8. [CrossRef] [Medline]


CBT: cognitive behavioral therapy
DMHI: digital mental health intervention
DSM-5: Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition
LLM: large language model
PHQ-8: 8-item Patient Health Questionnaire
SCARED: Screen for Child Anxiety Related Disorders


Edited by Andre Kushniruk; submitted 10.10.24; peer-reviewed by Eugene Kee Onn Wong, Min H Cheah; final revised version received 31.03.25; accepted 31.03.25; published 19.06.25.

Copyright

© Florian Onur Kuhlmeier, Luise Bauch, Ulrich Gnewuch, Stefan Lüttke. Originally published in JMIR Human Factors (https://humanfactors.jmir.org), 19.6.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.