Published in Vol 10 (2023)

Assessing Mood With the Identifying Depression Early in Adolescence Chatbot (IDEABot): Development and Implementation Study


Original Paper

1Department of Psychiatry, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil

2Child and Adolescent Psychiatry Division, Hospital de Clínicas de Porto Alegre, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil

3Center for Technological Advancement, Universidade Federal de Pelotas, Pelotas, Brazil

4Social, Genetic & Developmental Psychiatry Centre, Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom

5Economic and Social Research Council Centre for Society and Mental Health, King’s College London, London, United Kingdom

6Division of Global Mental Health, Department of Psychiatry, School of Medicine and Health Sciences, The George Washington University, Washington, DC, United States

7Department of Psychological Medicine, Institute of Psychiatry, Psychology, King’s College London, London, United Kingdom

8Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa

*these authors contributed equally

Corresponding Author:

Christian Kieling, MD, PhD

Department of Psychiatry

Universidade Federal do Rio Grande do Sul

Rua Ramiro Barcelos, 2400

Porto Alegre, 90035003

Brazil

Phone: 55 5133085624


Background: Mental health status assessment is mostly limited to clinical or research settings, but recent technological advances provide new opportunities for measurement using more ecological approaches. Leveraging apps already in use by individuals on their smartphones, such as chatbots, could be a useful approach to capture subjective reports of mood in the moment.

Objective: This study aimed to describe the development and implementation of the Identifying Depression Early in Adolescence Chatbot (IDEABot), a WhatsApp-based tool designed for collecting intensive longitudinal data on adolescents’ mood.

Methods: The IDEABot was developed to collect data from Brazilian adolescents via WhatsApp as part of the Identifying Depression Early in Adolescence Risk Stratified Cohort (IDEA-RiSCo) study. It supports the administration and collection of self-reported structured items or questionnaires and audio responses. The development explored WhatsApp’s default features, such as emojis and recorded audio messages, and focused on scripting relevant and acceptable conversations. The IDEABot supports 5 types of interactions: textual and audio questions, administration of a version of the Short Mood and Feelings Questionnaire, unprompted interactions, and a snooze function. Six adolescents (n=4, 67% male participants and n=2, 33% female participants) aged 16 to 18 years tested the initial version of the IDEABot and were engaged to codevelop the final version of the app. The IDEABot was subsequently used for data collection in the second- and third-year follow-ups of the IDEA-RiSCo study.

Results: The adolescents assessed the initial version of the IDEABot as enjoyable and made suggestions for improvements that were subsequently implemented. The IDEABot’s final version follows a structured script with the choice of answer based on exact text matches throughout 15 days. The implementation of the IDEABot in 2 waves of the IDEA-RiSCo sample (140 and 132 eligible adolescents in the second- and third-year follow-ups, respectively) evidenced adequate engagement indicators, with good acceptance for using the tool (113/140, 80.7% and 122/132, 92.4% for second- and third-year follow-up use, respectively), low attrition (only 1/113, 0.9% and 1/122, 0.8%, respectively, failed to engage in the protocol after the initial interaction), and high compliance in terms of the proportion of responses to the total number of elicited prompts (mean 12.8, SD 3.5; 91% of 14 possible interactions and mean 10.57, SD 3.4; 76% of 14 possible interactions, respectively).

Conclusions: The IDEABot is a frugal app that leverages an existing app already in daily use by our target population. It follows a simple rule-based approach that can be easily tested and implemented in diverse settings and possibly diminishes the burden of intensive data collection for participants by repurposing WhatsApp. In this context, the IDEABot appears as an acceptable and potentially scalable tool for gathering momentary information that can enhance our understanding of mood fluctuations and development.

JMIR Hum Factors 2023;10:e44388

Introduction




The challenges and limitations of the current tools of mental health assessment—mostly performed using standardized scales—have increased the interest in alternative monitoring tools. Traditional assessment often fails to incorporate the dynamic nature of psychological constructs and other relevant clinical features [1] and is not capable of capturing prognostic and therapeutic differences among patients [2] as well as the personalized aspects that are essential to address mental health issues.

Over recent decades, technology has created an opportunity to expand data collection and analysis beyond clinical and research facilities and centers, with flexibility to create participative, 2-way communication applications that can be easily adapted and used in everyday settings for a variety of target populations [3]. Considering the central role of language in the diagnosis and assessment of mental health, a shift toward a technology focused on conversational aspects may be key to systematizing natural language domains that are not currently explored in clinical settings [4].

In this sense, we propose that using chatbots—digital systems that rely on a conversational interaction that mimics human conversation [5]—may be an alternative to using traditional assessment methods. Chatbots are capable of capturing real-time accounts of events (ie, at the moment the event is being experienced) [6] and thus may further our current understanding of time- and context-contingent associations among activities, moods, and experiences [7]. Primarily, it has been theorized that chatbots both facilitate disclosure [8,9] and provide an opportunity to collect real-time information on mood and behavior in real-world settings with lower perceived burden for participants and researchers, increasing ecological validity, minimizing recall biases [10], and taking advantage of human-like conversation features to assess psychological constructs (such as depression) in a scalable, systematic fashion that is not possible with the usual application of instruments and scales.

One important advantage of chatbots is that they may be integrated into existing applications that are routinely used by the general public and designed as affordable, potentially scalable tools, following a frugal innovation model [11]. In addition, chatbots could be explored to reduce barriers that typically prevent identification of mental health disorders among, and help-seeking by, young people, a group especially susceptible to these conditions [12]. Given the scarcity of resources allocated to mental health care, particularly in middle-income countries such as Brazil, the development of frugal chatbot apps is a promising alternative.


Chatbots have been used in mental health research for purposes such as therapy, training, and screening [13,14]. Nevertheless, most studies on user-chatbot interactions have focused on adults [15], although adolescents are often more familiar with smartphones than other populations [16]. Thus, exploring the feasibility of using chatbots to collect data on adolescent mood and behavior in an ecological fashion may be a promising avenue of inquiry. We hypothesize that, by leveraging already existing technologies, chatbots are a feasible, viable form of monitoring changes in mood and symptoms over time in adolescent populations. Moreover, we believe that their use lessens participant burden, possibly augmenting sustained engagement with the tool.

Therefore, we aimed to develop a chatbot tool to collect real-life data on mood and behavior from adolescents using text and audio messages. Here, we present the development and feasibility pilot of and initial results obtained with the implementation of the WhatsApp-based Identifying Depression Early in Adolescence Chatbot (IDEABot).

Methods

Study Setting: Identifying Depression Early in Adolescence Risk Stratified Cohort

The IDEABot was developed as part of the Identifying Depression Early in Adolescence Risk Stratified Cohort (IDEA-RiSCo) study [17]. The IDEA-RiSCo study includes 150 Brazilian adolescents (n=75, 50% female participants and n=75, 50% male participants) aged 14 to 16 years at baseline, stratified into 3 groups: low risk for developing depression (50/150, 33.3%), high risk for developing depression (50/150, 33.3%), and experiencing a current untreated major depressive episode (50/150, 33.3%). Participants were selected for each group using the Identifying Depression Early in Adolescence Risk Score (IDEA-RS), an empirically generated algorithm developed to estimate the individual-level probability of a unipolar depressive episode 3 years after initial assessment [17-19]. Additional details on procedures used in the IDEA-RiSCo study are described elsewhere [17].

Rationale and Feasibility Pilot

The IDEABot was developed to collect data from Brazilian adolescents via WhatsApp (Meta) [11]. In 2019, WhatsApp was reported to have been used at least once every hour by 81% of Brazilians [20]. Moreover, among adolescents from public state schools in the city of Porto Alegre, Rio Grande do Sul, Brazil (the population from which the IDEA-RiSCo sample was derived), WhatsApp was the most popular web-based platform, used at least once a day by 90% of the sample [21].

The IDEABot was devised to collect daily data on current mood via both structured items or questionnaires and free audio reporting of the aspects of daily life considered by participants (Multimedia Appendix 1). An interdisciplinary team was engaged in the project, including mental health practitioners (psychiatrists and psychologists), computer scientists, and writers. The prototype version of the IDEABot was designed and implemented in Brazilian Portuguese using inputs from the research team, followed by a feasibility pilot that generated a round of adjustments.

For the feasibility pilot, 6 adolescents were invited to test a prototype version of the IDEABot and comment on their user experience. They tested the chatbot system for 5 days, during which they answered the Short Mood and Feelings Questionnaire (sMFQ) and participated in 2 days of brief audio recordings. All features and possible response modes were tested. After test completion, the adolescents participated in an individual interview and a focus group discussion, conducted on the web by 2 researchers (AV and CK).

The interviews focused on the overall experience, feasibility, and acceptability of using the IDEABot (including concerns about data safety and privacy). In addition, the adolescents were engaged in jointly exploring and proposing improvements and solutions for perceived problems. In the focus group, anchored vignettes were used [22] to explore participants’ perceptions of the chatbot (Multimedia Appendix 2).

Implementation of the Final Version of the IDEABot

After the pilot test, the final version of the IDEABot was generated and subsequently implemented in the second- and third-year follow-ups of the IDEA-RiSCo study [17]. On the basis of a review of the literature, the following usability indicators [23] were evaluated to define successful implementation [24,25]: (1) acceptance (ie, the proportion of participants who were invited to take part in the IDEABot data collection and agreed to use the tool); (2) initial attrition (ie, failure to further engage in the protocol after agreeing to participate in the data collection and complete the initial steps); and (3) compliance, defined as the proportion of days on which participants generated at least 1 data point over the 15 days of data collection.
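The three usability indicators can be computed directly from per-participant engagement records. The following sketch illustrates the definitions above using a hypothetical record structure (the field names are illustrative, not taken from the study's actual data pipeline):

```python
# Sketch of the three usability indicators: acceptance, initial attrition,
# and compliance. Record fields ("agreed", "days_with_data") are assumptions
# chosen for illustration.

def usability_indicators(participants, days=15):
    invited = len(participants)
    accepted = [p for p in participants if p["agreed"]]
    # Initial attrition: agreed and completed the initial steps,
    # but never generated data afterward.
    attrited = [p for p in accepted if p["days_with_data"] == 0]
    engaged = [p for p in accepted if p["days_with_data"] > 0]
    acceptance = len(accepted) / invited if invited else 0.0
    attrition = len(attrited) / len(accepted) if accepted else 0.0
    # Compliance: mean proportion of days with at least 1 data point.
    compliance = (
        sum(p["days_with_data"] for p in engaged) / (len(engaged) * days)
        if engaged else 0.0
    )
    return acceptance, attrition, compliance
```

For example, a cohort of 3 invited adolescents where 2 agree and 1 of those never engages again yields an acceptance of 2/3 and an attrition of 1/2.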

Socioeconomic status was also assessed with data collected at baseline using the Brazilian Criterion of Economic Classification [26], along with administration of a 9-item questionnaire on the frequency of the participants’ use of 8 social media platforms, including the frequency of WhatsApp use [21,27]. Responses were aggregated into 3 strata (1=never, 2=several times/week, and 3=several times/day or constantly).

Categorical and numerical variables were compared using the chi-square and Mann-Whitney U tests, respectively. In addition, the Spearman correlation coefficient was used to verify correlations among continuous variables. All analyses were performed using SPSS software (version 26.0; IBM Corp).
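The analyses named above can be reproduced with open-source tooling. The sketch below uses SciPy on synthetic data purely to illustrate the three tests (the study itself used SPSS 26; the variables here are invented):

```python
# Illustrative versions of the chi-square, Mann-Whitney U, and Spearman
# analyses, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Chi-square test on a 2x3 contingency table
# (e.g., sex x WhatsApp-use stratum; counts are invented).
table = np.array([[20, 30, 25], [18, 28, 29]])
chi2, p_chi, dof, _expected = stats.chi2_contingency(table)

# Mann-Whitney U test comparing days of interaction between two groups.
days_group_a = rng.integers(0, 15, size=60)
days_group_b = rng.integers(0, 15, size=60)
u_stat, p_mw = stats.mannwhitneyu(days_group_a, days_group_b)

# Spearman correlation between two continuous variables.
x = rng.normal(size=100)
rho, p_sp = stats.spearmanr(x, x + rng.normal(size=100))
```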

Ethics Approval

The development and research use of the IDEABot was approved by the Hospital de Clínicas de Porto Alegre ethics committee (50473015.9.0000.5327).

Informed Consent and Participation

All adolescents and caregivers provided written assent or consent to participate in each stage of data collection and were given the opportunity to withdraw assent or consent at any time. For participants aged >18 years, written consent was obtained directly. If participants wished to stop receiving messages from the chatbot before the completion of the 15-day trial, they were instructed to contact a research team member. In addition, participants were instructed to use the WhatsApp delete button if they preferred to delete sent messages or audio files. Along with the research team’s explanation of the functioning of the IDEABot, the chatbot’s first interaction with the user explicitly stated the nature of the exchange that would take place. Participants were thus aware that the audio recordings were not listened to immediately and that the chatbot was not a channel for seeking help. Participants were provided with an additional telephone number and instructed to contact a team member (a board-certified psychiatrist) in case they were actively seeking information related to mental health issues. Furthermore, participants received information regarding the national helpline for health and safety emergencies. Following Brazilian legislation, participants did not receive financial incentives for taking part in the study but were offered compensation for mobile internet data use during their participation.

Results

Results of the Feasibility Pilot

Six adolescents (n=4, 67% male participants and n=2, 33% female participants; n=1, 25% of the 4 male participants had lived experience of depression, as did n=1, 50% of the 2 female participants) aged 16 to 18 years participated in the feasibility pilot. They were selected by convenience among the group of adolescents who had already participated in other projects conducted by our research team. Despite their heterogeneous socioeconomic backgrounds, all had a smartphone with internet access. Parental consent was obtained for all underage participants (those aged <18 years). As most of the participants (5/6, 83%) had already participated in other stages of the research, they were familiar with the investigators and knew about the IDEA-RiSCo objectives and procedures. The interviews lasted 20 to 30 minutes, whereas the duration of the focus group was 50 minutes.

Overall, participants considered the IDEABot easy to use and enjoyable. All 6 adolescents completed at least 4 (80%) of the 5 interactions and sent an average of 54.5 (range 2-97) seconds of audio recordings per day. The adolescents expressed that directed questions (such as those asking about their daily routine) were easier to answer than more open questions (such as the initial request for participants to introduce themselves). In addition, the adolescents considered the prompts that targeted the collection of at least 1 minute of audio recordings over the day to be adequate.

Overall, they perceived the burden of integrating the chatbot into their daily routine as low. In fact, they highlighted a positive effect of talking about their daily lives:

It was a good experience...I felt I was talking about my things to someone—it even sounded like there was someone there wanting to know how my day was. Sometimes you spend your day without anyone asking you that. But the chatbot asked.
[Female participant, aged 17 years]

Regarding the sMFQ, the adolescents found that some of the instructions provided by the chatbot were unclear and made suggestions on how to fix these issues. The chatbot asked participants to answer the sMFQ using the numbers 0, 1, or 2. The adolescents suggested further anchoring of these responses (eg, through reminders of the meaning of each number during the completion of the questionnaire). The instructions were adjusted accordingly after these difficulties and possible solutions were explored with the adolescents. In the final version, an explanation of each possible choice of answer was provided (0=no, 1=sometimes, and 2=yes) before the participants were asked to complete each item of the sMFQ, using, for example, the statement “I feel sad today.” In addition, a short reminder of the meaning of each numeric answer (0, 1, or 2) was added after each chatbot prompt.

An important adjustment made possible by the feasibility pilot was as follows: the adolescents tended to respond to the chatbot’s final interaction by either thanking it or sending an emoji. In the chatbot’s initial programming, this was interpreted as an unsolicited interaction to which the IDEABot responded by requesting an audio message to explain what the participant had said. This chatbot response would often confuse the adolescents. To avoid this, we developed a content-based rule: if participants responded with a predefined set of words (“ok,” “see you,” “thank you,” or variations), this was interpreted as a conversation closure, and the chatbot’s probe would not be triggered.
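The content-based closure rule can be sketched as a simple normalize-and-match check. The word list and text normalization below are assumptions for illustration; the actual IDEABot matched a predefined set of Brazilian Portuguese closure words:

```python
# Minimal sketch of the conversation-closure rule: if a normalized message
# matches a predefined word list, the audio probe is not triggered.
# CLOSURE_WORDS and the normalization steps are illustrative assumptions.
import re
import unicodedata

CLOSURE_WORDS = {"ok", "okay", "see you", "thank you", "thanks", "bye"}

def normalize(message: str) -> str:
    # Lowercase, strip accents, and drop punctuation/emoji so that
    # variants such as "Ok!!" still match.
    text = unicodedata.normalize("NFKD", message.lower())
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    return re.sub(r"[^a-z\s]", "", text).strip()

def is_closure(message: str) -> bool:
    """True if the message should end the conversation without a probe."""
    return normalize(message) in CLOSURE_WORDS
```

Any message that does not match the list would still trigger the chatbot's request for an audio explanation, preserving the original probing behavior for substantive replies.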

Another aspect that required changing was suggested by the adolescents in relation to the schedule of interactions. The adolescents argued that they would most likely be at school or asleep at 10:30 AM and therefore would probably not feel comfortable responding to the questions owing to their current environment (especially if they were at school). The adolescents then suggested that the first interaction of the day be moved from 10:30 AM to 1:30 PM, which was implemented in the final version of the IDEABot.

Implementation of the IDEABot

Development of the IDEABot

The IDEABot was successfully developed to perform prescripted interactions requesting audio and text responses from participants to the questions it posed. The chatbot questions and responses were expressed only in text format, regardless of the format of user input. The IDEABot was also designed to delay answers proportionally to the length of the text being sent to users to simulate a more natural typed conversation. Using a rule-based approach, four types of interactions were developed: (1) mood ratings, (2) emoji mood ratings, (3) brief audio recordings, and (4) questionnaire answers (Multimedia Appendix 3).
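The typing-delay idea can be sketched as a delay proportional to message length, capped so that long prompts do not stall the conversation. The rate and cap below are illustrative assumptions, not the IDEABot's actual parameters:

```python
# Sketch of the simulated-typing delay: pause before sending in proportion
# to message length. chars_per_second and max_delay are assumed values.

def typing_delay_seconds(text: str, chars_per_second: float = 30.0,
                         max_delay: float = 5.0) -> float:
    """Seconds to wait before sending `text`, to mimic human typing."""
    return min(len(text) / chars_per_second, max_delay)
```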

As a first step to activate the chatbot, users were required to send a WhatsApp text message (any content was acceptable) to the chatbot’s mobile number. To ensure both the standardization of instructions given to users and clarity regarding the nature of the conversation, as well as to prevent misconceptions (such as participants believing that the chatbot was a real person or that the audio recordings would be listened to immediately), the first interaction with the chatbot was designed to review overall functionality. This initial interaction was named day 0 and covered the routines that users should expect over the subsequent 14 days and how they were supposed to respond. Because of the IDEABot’s nature and objective, data generated on day 0 will be excluded from future analyses.

The chatbot follows a time-contingent sampling for each participant. In this sense, it is designed to initiate interactions at fixed times: every day, beginning at 1:30 PM, participants receive a message asking whether they are available to answer the scheduled questions. They may answer immediately after the first message prompt or use a snooze function to schedule a reminder for a later time in the day (the IDEABot allows snoozing until 3 AM the next day). If participants ignore the first prompt, additional messages are sent at 3-hour intervals. Participants have until 6 AM the following day to respond to the questions of each daily cycle. If the interaction is not completed, at 10 AM the following day, the chatbot informs the participant that the daily cycle will end without completion and that a new daily cycle will begin, also providing the time when the next message would be sent. In addition to scheduled interactions, participants are also given the option to send unprompted audio recordings throughout the day (Figure 1).
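The daily prompt cycle described above can be sketched as a fixed schedule: a first prompt at 1:30 PM, reminders every 3 hours while the participant has not answered, and a 6 AM next-day cutoff. The times come from the text; the scheduling code itself is an illustrative assumption, not the IDEABot implementation:

```python
# Sketch of one daily prompt cycle. A real scheduler would stop sending
# reminders once the participant answers or snoozes.
from datetime import datetime, timedelta

def reminder_times(day: datetime):
    """Return the prompt times for one cycle and the response cutoff.

    First prompt at 1:30 PM, then reminders at 3-hour intervals while
    unanswered, up to 6 AM the following day.
    """
    first = day.replace(hour=13, minute=30, second=0, microsecond=0)
    cutoff = first.replace(hour=6, minute=0) + timedelta(days=1)
    t, times = first, []
    while t < cutoff:
        times.append(t)
        t += timedelta(hours=3)
    return times, cutoff
```

Under this schedule, a fully ignored cycle would generate prompts at 1:30 PM, 4:30 PM, 7:30 PM, 10:30 PM, 1:30 AM, and 4:30 AM before the 6 AM cutoff.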

The chatbot’s schedule is divided into five interaction modes: (1) introduction (the first interaction with users), (2) audio questions, (3) administration of a version of the sMFQ, (4) other messages, and (5) the snooze function (Multimedia Appendix 3). On 7 (47%) of the 15 days, the IDEABot asks broad questions about daily life, social interactions, and preferences (Textbox 1), and participants are invited to answer through audio recordings. The goal is to collect at least 1 minute of audio recordings per day from each participant. If the answers provided by participants to the 2 daily questions do not add up to 1 minute in duration, the chatbot asks 2 standard follow-up questions, encouraging the participant to say more. If, after the first follow-up question (“Thank you for sending this audio! Tell us a little bit more about it, [participant]!”), the total duration of the audio recordings still does not reach 1 minute, the chatbot sends the second question (“It would be very important if you could tell us a little more, okay?”). Regarding this last question, participants can choose whether to send another audio recording (typing “yes” or “no” before sending the audio recording). One example is provided in Figure 2.
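The 1-minute audio target reduces to a simple decision: send another standard follow-up prompt only while the accumulated duration is below target and follow-ups remain. The function below is an illustrative sketch of that rule, not the bot's actual code:

```python
# Sketch of the follow-up decision for the 1-minute audio target.
# `durations` holds the lengths (in seconds) of audios sent so far
# in the current daily cycle; at most 2 standard follow-ups are sent.

def next_followup(durations, followups_sent, target=60, max_followups=2):
    """True if another standard follow-up prompt should be sent."""
    return sum(durations) < target and followups_sent < max_followups
```

For example, two answers totaling 45 seconds would trigger the first follow-up, whereas answers totaling 70 seconds would not.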

On the 7 days without audio prompts, participants are asked to complete the sMFQ [28,29]. The 13 questions of the sMFQ cover the current day (instead of the last 2 weeks as in the original sMFQ; Multimedia Appendix 4). Participants are instructed to type 0, 1, or 2 to answer each question, and they have the option to correct their answers (for relevant aspects of the processing of the collected data and analyses, refer to Multimedia Appendix 5 [30-36]).
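Because the final IDEABot relies on exact text matches, validating a typed sMFQ answer amounts to accepting only "0", "1", or "2" and re-prompting otherwise. The sketch below illustrates this; the mapping labels mirror the anchors described earlier (0=no, 1=sometimes, 2=yes), while the function itself is an assumption:

```python
# Sketch of exact-match validation for a typed sMFQ item answer.
VALID_ANSWERS = {"0": 0, "1": 1, "2": 2}

def parse_smfq_answer(message: str):
    """Return the coded answer (0=no, 1=sometimes, 2=yes), or None if the
    chatbot should re-prompt with the reminder of what each number means."""
    return VALID_ANSWERS.get(message.strip())
```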

Figure 1. Overview of the functioning of the Identifying Depression Early in Adolescence Chatbot over the period of a day.
Textbox 1. Questions (question 1 [Q1] and question 2 [Q2]) or prompts for audio responses requested by the Identifying Depression Early in Adolescence Chatbot (the original questions are in Brazilian Portuguese).

Day 1

  • Q1. Can you introduce yourself?
  • Q2. What have you done today? Is your day going according to your usual routine?

Day 3

  • Q1. Are you at home?
    • [If the response is “yes”] What are you doing? Is someone else around?
    • [If the response is “no”] Who do you live with? Do you get along with the people you live with?
  • Q2. Can you tell me more about your house? Do you like living there?

Day 5

  • Q1. Did you go outside today at all, [participant]? Do you spend more time inside, or do you sometimes go out? When you’re out, what do you normally do?
  • Q2. And how’s your neighborhood? Are there nice things around?

Day 7

  • Q1. Today I want to know about your favorite story. What is it? You can choose a movie, a series, a book...whatever you want!
  • Q2. And why is this your favorite story, [participant]?

Day 9

  • Q1. Do you use your mobile phone a lot, [participant]? What are your favorite things to do on the mobile phone?
  • Q2. And how much time do you think you spend on the internet each day? Do you use the internet mostly during the day or at night? Why?

Day 11

  • Q1. Not counting the audio recordings you send here [grinning face with sweat emoji], who do you talk to about things that happen in your life? How’s your relationship with this person?
  • Q2. And why do you trust this person?

Day 13

  • Q1. It’s been almost 2 weeks since we started talking, [participant]! How did you feel about answering these questions?
  • Q2. And how have you been in these last 2 weeks? Has anything different happened?
Figure 2. Example of the interaction with users of the Identifying Depression Early in Adolescence Chatbot: day 2.
Initial Results of a Full-Sample Implementation of the IDEABot

The IDEABot was first implemented as part of the IDEA-RiSCo second-year follow-up assessment, which took place between August 1, 2020, and January 31, 2022. It was subsequently also used in the third-year follow-up of the IDEA-RiSCo sample, which occurred between August 1, 2021, and September 30, 2022.

To explain the chatbot’s functioning and features to participants, an animated video (Multimedia Appendix 6) was developed by the research team, providing a comprehensive overview of the research process. It reminded participants about the previous waves of data collection and the overall research goal, as well as presented the various steps of data collection that they could engage in (including the IDEABot). In addition, the video provided information regarding data confidentiality, including end-to-end encryption by WhatsApp for all chats, and the measures taken by the research team to ensure data protection. After the video was sent, if participants agreed to use the IDEABot, a research team member sent a link that directed users to initiate the interaction.

For the second- and third-year follow-up assessments, 9.7% (11/113) and 11.5% (14/122) of the adolescents, respectively, did not have a smartphone and agreed to receive a device from the study team to enable data collection completion. All other participants used their own smartphones and already had WhatsApp installed. In terms of technical challenges experienced during the IDEABot implementation, we recorded 6 and 14 occurrences or technical malfunctions in the second- and third-year follow-up assessments, respectively.

In the second-year follow-up, there were 5 issues with the integration with WhatsApp’s application programming interface (API; September 11 and 15, 2020; December 11, 2020; April 4, 2021; and June 15, 2021) and 1 instance in which WhatsApp was offline around the world owing to an instability in Meta’s servers (October 4, 2021) [37]. All issues were resolved within 24 hours, but the interactions of 6.2% (7/113) of the participants were directly affected. As a result, these participants lost 8 interaction days in total. In addition, in the second-year follow-up, there were 3 instances in which the chatbot’s malfunctioning prevented participants from completing the scheduled interactions. In all cases, participants repeated the affected interaction days. Finally, there was 1 occasion on which a participant was not able to complete the day’s interaction owing to a problem with telephone billing, which was later resolved.

In the third-year follow-up, there were 12 issues with the integration with WhatsApp’s API (March 18 and 19, 2022; April 5 and 20, 2022; May 5, 18, and 20, 2022; June 14 and 26, 2022; July 8, 2022; and August 16 and 28, 2022), as well as 2 instances in which the chatbot was unable to access the internet (October 10, 2021, and February 18, 2022). In addition, the instance in which WhatsApp was offline worldwide (October 4, 2021) also affected the third-year follow-up. Only 1 occurrence was not resolved within 24 hours (March 18 and 19, 2022), owing to the API’s instability. Interactions were affected for 33.6% (41/122) of the participants, resulting in the loss of 16 occasions on which these participants could have completed the day’s interaction. Most of these occurrences were caused by changes in WhatsApp Web, the web-based WhatsApp interface required for running the API.

In the second-year follow-up, 140 adolescents took part in some aspects of data collection and were therefore eligible to use the IDEABot. Of the 140 adolescents, 113 (80.7%) agreed to use the IDEABot and completed the initial interaction. Of these 113 participants, 1 (0.9%) interacted with the chatbot only on the first interaction. The 112 adolescents who continued interacting with the chatbot engaged on average 12.8 (SD 3.5) of the 14 possible days, corresponding to a compliance rate of 91.4%. The snooze function was used 609 times, resulting in 331 completed interactions. In addition, participants sent on average 65 (SD 37.7) seconds of audio recordings per day, resulting in an average of 7.6 (SD 4.3) minutes of audio recordings per participant.

For the third-year follow-up, 132 adolescents took part in some aspects of data collection and were therefore eligible to use the IDEABot. Of the 132 adolescents, 122 (92.4%) agreed to use the IDEABot and completed the initial interaction. Of these 122 participants, 1 (0.8%) interacted with the chatbot only on the first interaction. The 121 adolescents who continued interacting with the chatbot engaged on average 10.57 (SD 3.4) of the 14 possible days, corresponding to a compliance rate of 75.5%. The snooze function was used 569 times, resulting in 258 completed interactions. In addition, participants sent an average of 69.2 (SD 66.1) seconds of audio recordings per day, resulting in an average total of 8.1 (SD 7.8) minutes of audio recordings per participant.

No significant association between socioeconomic status and the number of days of interaction with the IDEABot was found (P=.88); the number of days on which responses were recorded also did not differ when participants were stratified according to the pattern of previous WhatsApp use (ie, never, several times/week, or several times/day; P=.98) or by sex (male or female; P=.66).

Discussion

Principal Findings

This study outlines the development, feasibility pilot, and initial results obtained with the implementation of a chatbot to support mood assessment in adolescents. Although chatbots are becoming increasingly more common in health care settings [38], few studies have provided detailed analyses and empirical discussions of specific design elements and development techniques [39]. In this sense, we believe that reporting the development and implementation of the IDEABot is a novel and relevant contribution, especially given the overall good acceptance for using the tool, low attrition, and high compliance in terms of the proportion of responses in relation to the total number of elicited prompts.

To the best of our knowledge, the IDEABot is the first chatbot specifically tailored to aid multimodal research data collection with adolescent populations. Our decision to use an existing platform made it possible to design, develop, and implement the IDEABot in a way that directly addresses the constraints that the use of new mobile apps may pose to research teams and users, in addition to saving development and adjustment time. The IDEABot runs on any smartphone with WhatsApp, regardless of operating system, as long as internet connectivity is available. The IDEABot thus qualifies as a frugal innovation: it is significantly cheaper than other alternatives (such as the development of a new stand-alone app); it has proven sufficient for the proposed level of data collection; and by using it, we were able to reach participants who would otherwise remain underrepresented [11]. Moreover, the proposed approach to data collection is highly flexible and could potentially leverage all forms of interactions available on WhatsApp, including photographs and video recordings.

The initial administration of the IDEABot indicates engagement rates of >80%, with approximately half of the participants (59/113, 52.2% and 52/122, 42.6% for second- and third-year follow-up use, respectively) completing all 15 days of collection. In ecological momentary assessment studies (ie, studies that are designed to collect individual data at several time points), 80% has been proposed as an indicator of adequate compliance [40]. Although compliance tends to vary in ecological momentary assessment studies (also depending on the number of measures made over time) [41], we believe that the rate obtained with the IDEABot matches the expected rates in similar studies and is adequate, considering the target population and that no financial or other direct incentive was used.
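The completion proportions reported above can be reproduced directly from the raw counts. The snippet below is a simple arithmetic check (the wave labels are ours, for illustration only):

```python
# Verify the per-wave completion rates reported in the text:
# participants completing all 15 days of collection, out of all
# participants in that follow-up wave.
completed = {
    "second_year": (59, 113),  # 59 of 113 participants
    "third_year": (52, 122),   # 52 of 122 participants
}

# Percentage completing all 15 days, rounded to one decimal place
rates = {
    wave: round(100 * done / total, 1)
    for wave, (done, total) in completed.items()
}
```

This yields 52.2% and 42.6%, matching the figures in the text.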

In this sense, we believe that repurposing an already ubiquitous tool in the life of adolescents to collect research data can increase overall engagement as well as diminish the perceived burden of data collection. Moreover, we highlight the importance of youth participation in the creation, adaptation, and implementation of the IDEABot. A chatbot’s personality, interaction flow, conversation length, and dialogue structure are important aspects and can influence user satisfaction [39]. In the case of the IDEABot, all these aspects were created and tailored with the aid of a group of adolescents, who were active in pointing out any strangeness or discomfort and were ready to brainstorm solutions. Thus, the final chatbot was not only tailored to collect relevant research data but was also pleasant, in both appearance and manner of interaction, to adolescents themselves, which can greatly decrease the burden of research participation.

All things considered, the IDEABot still has important limitations that need to be addressed. Despite good engagement rates among Brazilian adolescents, the IDEABot is a basic chatbot that uses a rule-based approach. Although this gives the researchers optimal control over conversation flow and topics, the limited response range may decrease usability by adolescents (who may, for example, become frustrated with repeated error messages) [42]. In addition, as a WhatsApp-based chatbot, the IDEABot is susceptible to changes in policies and bugs affecting the platform. In this sense, the usability of the IDEABot becomes heavily linked to WhatsApp as a commercial product, and researchers have no control over aspects such as data security policies and other features. The instance in which WhatsApp was offline worldwide, preventing data collection, is also an indication of the bot’s susceptibility to the platform’s functioning, which may hinder its applicability.
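The trade-off described above — tight researcher control over conversation flow versus a narrow range of accepted replies — can be illustrated with a minimal rule-based sketch. This is not the IDEABot's actual implementation (which is not reproduced here); all names (`RULES`, `FALLBACK`, `handle_message`) are hypothetical:

```python
# Minimal, illustrative rule-based dialogue step. Each conversation
# state maps a fixed set of expected replies to the next scripted
# prompt; anything outside that set triggers a fallback error message,
# which is the source of the repeated-error frustration noted in the text.
RULES = {
    "mood_rating": {
        "expects": {"1", "2", "3", "4", "5"},  # eg, a 1-5 mood rating
        "next_prompt": "Thanks! What was the best part of your day?",
    },
}

FALLBACK = "Sorry, I didn't understand. Please reply with a number from 1 to 5."

def handle_message(state: str, text: str) -> str:
    """Return the bot's scripted reply given the current state and user text."""
    rule = RULES[state]
    if text.strip() in rule["expects"]:
        return rule["next_prompt"]
    return FALLBACK
```

The researcher fully determines every reachable prompt, but any free-text answer ("pretty good, I guess") falls through to the fallback, illustrating the usability limitation discussed above.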

Furthermore, although the chatbot’s user-oriented design may contribute to higher self-disclosure [43], privacy concerns regarding the use of the data are a relevant topic. WhatsApp policies include “end-to-end encryption” [44], and the IDEABot also stores information (audio recordings and conversation logs) on secure encrypted servers with additional anonymization of sensitive information in reports. However, all conversation logs and sent audio files remain accessible to other users on the mobile phone or on any other device that may be used to connect to WhatsApp (such as WhatsApp Web). Local backups may also store this information on users’ mobile phones, creating the risk of confidentiality breaches that cannot be controlled by the research team.

Another important aspect is the chatbot’s response to serious health concerns. As the IDEABot often queries participants on mood and daily events, we might expect sensitive information to be disclosed at the moment when distressing events occur. However, the IDEABot’s rule-based approach may not be suitable for fully and effectively responding to these events. In our project, mitigation efforts included full disclosure that audio messages would not be listened to immediately by the research team and that the IDEABot was not equipped to deal with mental health emergencies. Participants were also provided with the national emergency service hotline number for acute cases, and they were also able to call a research team psychiatrist in case of significant distress during the data collection process. However, this particular safety measure was never used by participants during the data collection process in either follow-up wave.

Also important is the susceptibility of the interface to technical error, such as bugs in the chatbot response routine (it does not respond, or it provides responses that do not fit the conversation context). As people may anthropomorphize chatbots [43], perceiving them as having a mind with intention, consciousness, and goals [45], these instances may generate negative feelings or distress responses, with a potential negative impact on participants who could become attached to the chatbot [46], or even hinder retention and continuous use. For the IDEABot, preventive measures include continuous function supervision by both humans and software monitoring the integration with WhatsApp’s API. In addition, using the platform as a medium for data collection also gives researchers little control over the quality of the data while they are being collected. This can be critical, for example, during data analysis, in which the selection, extraction, and assessment of acoustic features are dependent on the quality of the audio files and the data obtained [30]. This highlights the need for further research to explore the data collected as well as the techniques that are best suited for collecting and analyzing the data.
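The "continuous function supervision by both humans and software" mentioned above can take the form of a simple watchdog: if the bot has not processed any message events within a tolerance window, the integration is flagged as possibly down so a human can investigate. The sketch below is purely illustrative (the IDEABot's actual monitoring code is not shown in this paper, and the class and method names are ours):

```python
import time

class IntegrationMonitor:
    """Hypothetical watchdog for a chatbot's messaging-API integration.

    Flags the integration as unhealthy when no message events have been
    observed within a configurable tolerance window.
    """

    def __init__(self, tolerance_seconds):
        self.tolerance = tolerance_seconds
        self.last_event = time.monotonic()

    def record_event(self):
        """Call whenever a message is successfully sent or received."""
        self.last_event = time.monotonic()

    def is_healthy(self, now=None):
        """Return True if an event was seen within the tolerance window."""
        if now is None:
            now = time.monotonic()
        return (now - self.last_event) <= self.tolerance
```

A check like this catches silent failures (the bot stops responding) but not semantically wrong replies, which is why human supervision remains part of the preventive measures described in the text.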

Therefore, the IDEABot presents limitations that may be considered inherent to the methods chosen. However, its development was guided by the principle of user transparency, and challenges regarding privacy and adverse incidents have been, and continue to be, closely assessed throughout development, implementation, and use. In addition, we believe that, as a tool, the IDEABot supports stakeholder values [47]. Nonetheless, the ethical considerations involving chatbot use will change with time and technical development, and continuous reassessment is vital to address any ethical concerns that may arise.

Conclusions
The IDEABot is a novel WhatsApp chatbot developed to aid intensive longitudinal collection of mood data among adolescents. The collection of audio recordings and information on mood and behavior throughout 15 days may enable analyses of adolescents’ data that would otherwise not be possible. The completion rate shows that the IDEABot was able to collect information in a manner that is attuned to the adolescents’ lives. In this sense, the use of sequenced audio recordings may be considered similar to an audio diary, capturing much of the sense making and representation of experiences at different time points [48].

It is worth noting that the choice of a multimodal data collection approach that combines audio recordings of prompted speech, daily information on mood, and traditional assessment methods (such as questionnaires) sheds light on aspects of depression—such as the temporal evolution of symptomatology—that have only recently become a focus of research and are also rapidly advancing. Thus, the IDEABot generates a rich database that combines different types of input information that can be compared and triangulated.

The IDEABot is a frugal innovation and, as such, aims to meet the basic needs of a population that would otherwise remain underserved [11]; a strength of the IDEABot is its reliance on an available ubiquitous medium as a way to reach a population that is still underrepresented in research [49,50]. Adaptability is also key, and thus we chose a simple rule-based approach, allowing the IDEABot to be easily implemented, both technically and economically. As a result, the IDEABot is a feasible tool for data collection that can be adapted, tested, and implemented in different settings and for different purposes.

Another strength of the IDEABot is its capability for intensive data collection over extended periods within a longitudinal 3-year research project with a careful phenotypic characterization of the sample, including multiple informants. Such intensive and momentary data collection can elucidate aspects of the overall trajectory of different groups of individuals, such as those taking part in the IDEA-RiSCo study. This group approach can be useful for monitoring change and fluctuations in mood and to address the overall trajectories of different groups over time. In addition, periods of intensive data collection in individual participants may capture unique changes or symptom fluctuation patterns that would not otherwise be detected [7], contributing important information regarding symptom connectivity and centrality over time. The contrast between group and idiographic findings provides a further level of information not usually available in traditional research designs. In this sense, in addition to furthering our understanding of individual and group trajectories, the characterization of the sample also provides an opportunity to further explore the patterns of chatbot-assisted data collection.

In summary, the initial applications of the IDEABot were successful. The IDEABot seems to be a feasible, potentially scalable tool to collect data that can further our understanding of how mood changes and develops over time among adolescents.

Acknowledgments
The authors are extremely grateful to the schools and individuals who participated in this study and to all members of the Identifying Depression Early in Adolescence (IDEA) team for their dedication, hard work, and insights. This project was supported by the Royal Academy of Engineering under the Frontiers Follow-On Funding scheme (FF\1920\1\61). The original IDEA project was funded by an MQ Brighter Futures grant (MQBF/1 IDEA). Additional support was provided by the UK Medical Research Council (MC_PC_MR/R019460/1) and the Academy of Medical Sciences (GCRFNG_100281) under the Global Challenges Research Fund. This work was also supported by research grants from Brazilian public funding agencies Conselho Nacional de Desenvolvimento Científico e Tecnológico (477129/2012-9 and 445828/2014-5), Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (62/2014), and Fundação de Amparo à Pesquisa do Estado do Rio Grande do Sul (17/2551-0001009-4). CK is a Conselho Nacional de Desenvolvimento Científico e Tecnológico researcher and an Academy of Medical Sciences Newton Advanced Fellow. CK and BAK are supported by the US National Institute of Mental Health (R21MH124072). HLF was partly supported by the Economic and Social Research Council Centre for Society and Mental Health at King’s College London (ES/S012567/1). VM was supported by the National Institute for Health and Care Research Maudsley Biomedical Research Centre hosted by South London and Maudsley NHS Foundation Trust and King’s College London and MQ funding (MQBF/4). The views expressed are those of the authors and not necessarily those of the funders, the National Health Service, the National Institute for Health and Care Research, the Department of Health and Social Care, the Economic and Social Research Council, or King’s College London.

Conflicts of Interest

VM has received research funding from Johnson & Johnson, but the research described in this paper is unrelated to this funding. All other authors declare no other conflicts of interest.

Multimedia Appendix 1

Technical aspects of the development of the Identifying Depression Early in Adolescence Chatbot (IDEABot).

DOCX File , 14 KB

Multimedia Appendix 2

Anchoring vignettes.

DOCX File , 2390 KB

Multimedia Appendix 3

The types of interactions users can have with the Identifying Depression Early in Adolescence Chatbot (IDEABot).

DOCX File , 15 KB

Multimedia Appendix 4

Chatbot script for the Short Mood and Feelings Questionnaire instructions.

DOCX File , 13 KB

Multimedia Appendix 5

Processing and analysis of data and questionnaires.

DOCX File , 15 KB

Multimedia Appendix 6

Animated video.

MP4 File (MP4 Video), 78470 KB

  1. Fava GA. Forty years of clinimetrics. Psychother Psychosom. 2022;91(1):1-7. [FREE Full text] [CrossRef] [Medline]
  2. Feinstein AR. An additional basic science for clinical medicine: IV. The development of clinimetrics. Ann Intern Med. Dec 1983;99(6):843-848. [CrossRef] [Medline]
  3. Tudor Car L, Dhinagaran DA, Kyaw BM, Kowatsch T, Joty S, Theng YL, et al. Conversational agents in health care: scoping review and conceptual analysis. J Med Internet Res. Aug 07, 2020;22(8):e17158. [FREE Full text] [CrossRef] [Medline]
  4. Rezaii N, Wolff P, Price BH. Natural language processing in psychiatry: the promises and perils of a transformative approach. Br J Psychiatry. Jan 07, 2022:1-3. [CrossRef] [Medline]
  5. Viduani A, Cosenza V, Araújo RM, Kieling C. Chatbots in the field of mental health: challenges and opportunities. In: Passos IC, Rabelo-da-Ponte FD, Kapczinski F, editors. Digital Mental Health: A Practitioner's Guide. Cham, Switzerland. Springer; 2023;133-148.
  6. Moskowitz DS, Young SN. Ecological momentary assessment: what it is and why it is a method of the future in clinical psychopharmacology. J Psychiatry Neurosci. Jan 2006;31(1):13-20. [FREE Full text] [Medline]
  7. Russell MA, Gajos JM. Annual research review: ecological momentary assessment studies in child psychology and psychiatry. J Child Psychol Psychiatry. Mar 2020;61(3):376-394. [FREE Full text] [CrossRef] [Medline]
  8. Lee S, Lee N, Sah YJ. Perceiving a mind in a chatbot: effect of mind perception and social cues on co-presence, closeness, and intention to use. Int J Hum Comput Interact. 2020;36(10):930-940. [FREE Full text] [CrossRef]
  9. Lukoff KH, Li T, Zhuang Y, Lim BY. TableChat: mobile food journaling to facilitate family support for healthy eating. Proc ACM Hum Comput Interact. Nov 2018;2(CSCW):1-28. [FREE Full text] [CrossRef]
  10. Fulmer R, Joerin A, Gentile B, Lakerink L, Rauws M. Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: randomized controlled trial. JMIR Ment Health. Dec 13, 2018;5(4):e64. [FREE Full text] [CrossRef] [Medline]
  11. Hossain M. Frugal innovation: a review and research agenda. J Clean Prod. May 2018;182:926-936. [FREE Full text] [CrossRef]
  12. Bakker D, Kazantzis N, Rickwood D, Rickard N. Mental health smartphone apps: review and evidence-based recommendations for future developments. JMIR Ment Health. Mar 01, 2016;3(1):e7. [FREE Full text] [CrossRef] [Medline]
  13. Abd-Alrazaq AA, Alajlani M, Alalwan AA, Bewick BM, Gardner P, Househ M. An overview of the features of chatbots in mental health: a scoping review. Int J Med Inform. Dec 2019;132:103978. [FREE Full text] [CrossRef] [Medline]
  14. Vaidyam AN, Wisniewski H, Halamka JD, Kashavan MS, Torous JB. Chatbots and conversational agents in mental health: a review of the psychiatric landscape. Can J Psychiatry. Jul 2019;64(7):456-464. [FREE Full text] [CrossRef] [Medline]
  15. Mariamo A, Temcheff CE, Léger PM, Senecal S, Lau MA. Emotional reactions and likelihood of response to questions designed for a mental health chatbot among adolescents: experimental study. JMIR Hum Factors. Mar 18, 2021;8(1):e24343. [FREE Full text] [CrossRef] [Medline]
  16. Crutzen R, Peters GJ, Portugal SD, Fisser EM, Grolleman JJ. An artificially intelligent chat agent that answers adolescents' questions related to sex, drugs, and alcohol: an exploratory study. J Adolesc Health. May 2011;48(5):514-519. [CrossRef] [Medline]
  17. Kieling C, Buchweitz C, Caye A, Manfro P, Pereira R, Viduani A, et al. The Identifying Depression Early in Adolescence Risk Stratified Cohort (IDEA-RiSCo): rationale, methods, and baseline characteristics. Front Psychiatry. Jun 21, 2021;12:697144. [FREE Full text] [CrossRef] [Medline]
  18. Kieling C, Adewuya A, Fisher HL, Karmacharya R, Kohrt BA, Swartz JR, et al. Identifying depression early in adolescence. Lancet Child Adolesc Health. Apr 2019;3(4):211-213. [FREE Full text] [CrossRef] [Medline]
  19. Rocha TB, Fisher HL, Caye A, Anselmi L, Arseneault L, Barros FC, et al. Identifying adolescents at risk for depression: a prediction score performance in cohorts based in 3 different continents. J Am Acad Child Adolesc Psychiatry. Feb 2021;60(2):262-273. [FREE Full text] [CrossRef] [Medline]
  20. Global mobile consumer survey 2019. Deloitte. 2020. URL: https://www2.deloitte.com/bh/en/pages/technology-media-and-telecommunications/articles/global-mobile-consumer-survey-2019.html [accessed 2023-01-15]
  21. Pereira RB, Martini TC, Buchweitz C, Kieling RR, Fisher HL, Kohrt BA, et al. Self-reported social media use by adolescents in Brazil: a school-based survey. Trends Psychiatry Psychother (Forthcoming). Dec 20, 2022 [FREE Full text] [CrossRef] [Medline]
  22. Kapteyn A, Smith JP, Van Soest A, Vonkova H. Anchoring vignettes and response consistency. Report no: 1799563. RAND Working Paper Series WR-840. Feb 2011. URL: [accessed 2022-12-20]
  23. Ng MM, Firth J, Minen M, Torous J. User engagement in mental health apps: a review of measurement, reporting, and validity. Psychiatr Serv. Jul 01, 2019;70(7):538-544. [FREE Full text] [CrossRef] [Medline]
  24. Eysenbach G. The law of attrition. J Med Internet Res. Mar 31, 2005;7(1):e11. [FREE Full text] [CrossRef]
  25. Linardon J, Fuller-Tyszkiewicz M. Attrition and adherence in smartphone-delivered interventions for mental health problems: a systematic and meta-analytic review. J Consult Clin Psychol. Jan 2020;88(1):1-13. [CrossRef] [Medline]
  26. Critério de Classificação Econômica Brasil. Associação Brasileira de Empresas de Pesquisa. 2018. URL: [accessed 2022-08-17]
  27. Anderson M, Jiang J. Teens, Social Media and Technology 2018. Pew Research Center. URL: [accessed 2022-08-17]
  28. Pinto E. Short mood and feelings questionnaire: tradução para língua portuguesa, adaptação cultural e validação. Universidade Federal de São Paulo. 2014. URL: [accessed 2022-12-20]
  29. Rosa M, Metcalf E, Rocha TB, Kieling C. Translation and cross-cultural adaptation into Brazilian Portuguese of the Mood and Feelings Questionnaire (MFQ) - long version. Trends Psychiatry Psychother. Apr 05, 2018;40(1):72-78. [FREE Full text] [CrossRef] [Medline]
  30. Kerkeni L, Serrestou Y, Mbarki M, Raoof K, Mahjoub MA, Cleder C. Automatic speech emotion recognition using machine learning. In: Cano A, editor. Social Media and Machine Learning. London, UK. IntechOpen; 2019.
  31. Matheson JL. The voice transcription technique: use of voice recognition software to transcribe digital interview data in qualitative research. Qual Rep. Jan 15, 2015;12(4):547-560. [FREE Full text] [CrossRef]
  32. Ingale A, Chaudhari D. Speech emotion recognition. Int J Soft Comput Eng. Mar 2012;2(1):235-238. [FREE Full text]
  33. Torres Neto JR, Filho GP, Mano LY, Ueyama J. VERBO: voice emotion recognition database in Portuguese language. J Comput Sci. Nov 01, 2018;14(11):1420-1430. [FREE Full text] [CrossRef]
  34. Verde L, De Pietro G, Sannino G. A methodology for voice classification based on the personalized fundamental frequency estimation. Biomed Signal Process Control. Apr 2018;42:134-144. [FREE Full text] [CrossRef]
  35. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. Apr 2009;42(2):377-381. [FREE Full text] [CrossRef] [Medline]
  36. Harris PA, Taylor R, Minor BL, Elliott V, Fernandez M, O'Neal L, et al. REDCap Consortium. The REDCap consortium: building an international community of software platform partners. J Biomed Inform. Jul 2019;95:103208. [FREE Full text] [CrossRef] [Medline]
  37. Milmo D, Anguiano D. Facebook, Instagram and WhatsApp working again after global outage took down platforms. The Guardian. Oct 05, 2021. URL: [accessed 2023-06-30]
  38. Laranjo L, Dunn AG, Tong HL, Kocaballi AB, Chen J, Bashir R, et al. Conversational agents in healthcare: a systematic review. J Am Med Inform Assoc. Sep 01, 2018;25(9):1248-1258. [FREE Full text] [CrossRef] [Medline]
  39. Fadhil A, Schiavo G. Designing for health chatbots. arXiv. Preprint posted online on February 24, 2019. [FREE Full text] [CrossRef]
  40. Jones A, Remmerswaal D, Verveer I, Robinson E, Franken IH, Wen CK, et al. Compliance with ecological momentary assessment protocols in substance users: a meta-analysis. Addiction. Apr 2019;114(4):609-619. [FREE Full text] [CrossRef] [Medline]
  41. Howard AL, Lamb M. Compliance trends in a 14-week ecological momentary assessment study of undergraduate alcohol drinkers. Assessment (Forthcoming). Mar 13, 2023:10731911231159937. [FREE Full text] [CrossRef] [Medline]
  42. Chan WW, Fitzsimmons-Craft EE, Smith AC, Firebaugh ML, Fowler LA, DePietro B, et al. The challenges in designing a prevention chatbot for eating disorders: observational study. JMIR Form Res. Jan 19, 2022;6(1):e28003. [FREE Full text] [CrossRef] [Medline]
  43. Lee YC, Yamashita N, Huang Y, Fu W. "I hear you, I feel you": encouraging deep self-disclosure through a chatbot. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Presented at: CHI '20; April 25-30, 2020; 1-12; Honolulu, HI, USA. URL: [CrossRef]
  44. Segurança do WhatsApp. WhatsApp. URL: [accessed 2022-06-19]
  45. Heider F, Simmel M. An experimental study of apparent behavior. Am J Psychol. Apr 1944;57(2):243-259. [FREE Full text] [CrossRef]
  46. Duffy BR. Anthropomorphism and the social robot. Rob Auton Syst. Mar 31, 2003;42(3-4):177-190. [FREE Full text] [CrossRef]
  47. Bender EM, Gebru T, McMillan-Major A, Shmitchell S. On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Presented at: FAccT '21; March 3-10, 2021; 610-623; Virtual Event. URL: [CrossRef]
  48. Bernays S, Rhodes T, Jankovic Terzic K. Embodied accounts of HIV and hope: using audio diaries with interviews. Qual Health Res. May 2014;24(5):629-640. [CrossRef] [Medline]
  49. Battel L, Cunegatto F, Viduani A, Fisher HL, Kohrt BA, Mondelli V, et al. Mind the brain gap: the worldwide distribution of neuroimaging research on adolescent depression. Neuroimage. May 01, 2021;231:117865. [FREE Full text] [CrossRef] [Medline]
  50. Kieling C, Baker-Henningham H, Belfer M, Conti G, Ertem I, Omigbodun O, et al. Child and adolescent mental health worldwide: evidence for action. Lancet. Oct 22, 2011;378(9801):1515-1525. [CrossRef] [Medline]

API: application programming interface
IDEABot: Identifying Depression Early in Adolescence Chatbot
IDEA-RiSCo: Identifying Depression Early in Adolescence Risk Stratified Cohort
IDEA-RS: Identifying Depression Early in Adolescence Risk Score
sMFQ: Short Mood and Feelings Questionnaire

Edited by Y Quintana; submitted 24.11.22; peer-reviewed by B Chaudhry, R Pine; comments to author 19.02.23; revised version received 03.04.23; accepted 02.05.23; published 07.08.23.


©Anna Viduani, Victor Cosenza, Helen L Fisher, Claudia Buchweitz, Jader Piccin, Rivka Pereira, Brandon A Kohrt, Valeria Mondelli, Alastair van Heerden, Ricardo Matsumura Araújo, Christian Kieling. Originally published in JMIR Human Factors, 07.08.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.