Original Paper
Abstract
Background: Cognitive bias modification for interpretation (CBM-I) is a technique to modify interpretation and is used to reduce unhelpful negative biases. CBM-I has been extensively studied in anxiety disorders where interpretation bias has been shown to play a causal role in maintaining the condition. Successful Treatment of Paranoia (STOP) is a CBM-I smartphone app targeting interpretation bias in paranoia. It has been developed following research on the feasibility and acceptability of a computerized version. This qualitative study extended that research by investigating the acceptability of STOP in individuals with paranoia. The study design and implementation were informed by the Evidence Standards Framework for Digital Health Technologies (DHTs) published by the UK National Institute for Health and Care Excellence (NICE).
Objective: The aim of the study was to involve service users in the design, development, and testing of STOP and understand the degree of satisfaction with the product. We aimed to establish the extent to which STOP met the NICE minimum and best practice standards for DHTs, specifically its acceptability to intended end users.
Methods: In total, 12 participants experiencing mild to moderate levels of paranoia were recruited to complete 6 weekly sessions of STOP before being invited to a feedback interview to share their experiences. Interview questions revolved around the acceptability of the app, its perceived usefulness, and barriers to the intervention, as well as practicality and views on the use of a digital intervention in principle. Interviews were coded and analyzed using the framework analysis method, combining both deductive and inductive approaches.
Results: Framework analysis yielded 6 themes: independent use and personal fit; digital versus traditional approaches; user reactions and emotional impact; impact on thinking, awareness, and well-being; design, engagement, and usability; and intervention relevance and practical fit.
Conclusions: STOP was found to be a broadly acceptable intervention and was positively received by most participants. The study findings are in line with the NICE Evidence Standards Framework for DHTs, as intended end users were involved in the development, design, and testing of STOP and were mostly satisfied with it. These findings will contribute to the further iterative development of this intervention that targets interpretation bias in paranoia.
International Registered Report Identifier (IRRID): RR2-10.1186/s13063-024-08570-3
doi:10.2196/70181
Introduction
Interpretation Bias and Cognitive Bias Modification
Paranoia refers to unfounded beliefs that others intend to cause harm and exists on a continuum from mild suspiciousness to severe persecutory delusions []. Interpretation bias toward threat, defined as the tendency to interpret ambiguous information in a threatening or negative way, is thought to be one of the key cognitive processes involved in the development and maintenance of paranoia. Specifically, individuals with paranoia are more likely to interpret neutral or ambiguous situations as intentionally harmful, often exhibiting hostility interpretation bias (perceiving others as deliberately malicious) and external-personal attribution bias (attributing negative events to the harmful intent of others), as well as biases related to physical threats (such as those involving security devices or medical interventions) [-]. While similar biases are seen in anxiety disorders, paranoia is distinguished by its focus on intentional harm, rather than fear of negative evaluation or embarrassment as seen in social anxiety. Cognitive biases, including interpretation bias, have therefore been identified as critical maintenance factors in psychosis and paranoia, highlighting the need for targeted interventions that address these specific processing styles [,,].
Cognitive bias modification for interpretation (CBM-I) is one such technique, designed to reduce interpretation bias toward threat. Research has demonstrated that it is possible to change bias through training [,], helping individuals replace maladaptive cognitive patterns with more neutral or positive interpretations. Successful Treatment of Paranoia (STOP) is a smartphone app that delivers CBM-I through a series of ambiguous scenarios, encouraging the user to generate more adaptive interpretations []. CBM-I has shown consistent benefits in reducing anxiety and repetitive negative thinking [-], but studies applying this approach to paranoia remain limited. One feasibility randomized controlled trial evaluated a desktop version of cognitive bias modification for paranoia (CBM-pa) [], and a subsequent qualitative study highlighted the need for improvements in the task’s audiovisual design to enhance attention and engagement []. These findings directly informed the development of STOP, the smartphone app evaluated in this study.
While STOP draws on CBM principles, it differs significantly from the earlier CBM-pa platform. CBM-pa was delivered in a controlled laboratory setting, whereas STOP is a stand-alone smartphone app designed for independent use. Unlike CBM-pa, STOP enables participants to schedule sessions autonomously, receive automated reminders, and engage with a web-based interface designed to enhance immersion. These structural and experiential differences mean that the findings from the earlier CBM-pa evaluation cannot be assumed to generalize to STOP. A distinct evaluation is therefore warranted.
Thus, a qualitative study, such as the one reported here, is essential to determine whether STOP, in its current form, is a product that would be acceptable to users and suitable for use in routine clinical practice. The importance of gathering this type of evidence for any potential new digital therapy is emphasized in the Evidence Standards Framework for Digital Health Technologies published by the UK National Institute for Health and Care Excellence (NICE) [], which requires evidence of “acceptability with users” and their involvement in development, design, and testing. The qualitative data were contextualized using the User Experience Questionnaire (UEQ), which quantified participants’ experience of the app, such as their perception of its attractiveness, ease of use, and creativity []. Therefore, descriptive statistics of the UEQ are reported to indicate the overall experience of participants using the digital health technology (DHT).
In addition to assessing acceptability, a growing body of literature highlights the importance of applying a human factors lens when evaluating DHTs. This includes attention to usability, accessibility, and alignment with user needs and preferences. Human factors research considers not only whether an intervention works but also how individuals interact with it, factoring in context of use, autonomy, and digital literacy. For example, the concept of the digital therapeutic alliance has gained prominence in recent years, recognizing that users may form relational connections with digital tools, which in turn can affect engagement and perceived effectiveness. Evaluating STOP through both experiential and design lenses enables a more comprehensive understanding of its acceptability and potential for real-world implementation.
The Rise and Use of DHTs
Digital interventions have become increasingly common, and some have proven to be effective []. In a scoping review, Zhang et al [] identified 22 randomized controlled trials exploring the effect of CBM techniques delivered through a web-based intervention and found that CBM could help reduce biases in anxiety disorders and depression in adolescents.
Despite these advancements, there remains a lack of available treatments specifically targeting psychosis and paranoia. Other than pharmacological interventions, the only treatment that NICE recommends for psychosis is cognitive behavioral therapy, a psychological intervention. Smartphone app interventions offer an alternative treatment approach for paranoia. Additional benefits of using smartphone app interventions include their potential to help address long waiting times to access psychological services and to reduce the costs associated with in-person interventions. Furthermore, digital technologies could be considered low-stigma interventions, as service users can access them privately from any location. They could additionally be seen as empowering, enabling service users to take a more active and autonomous role in their care. More specifically for individuals with paranoia, digital health interventions may facilitate help seeking by reducing the need to build a therapeutic alliance and trust in a therapist, both of which can be especially challenging for this population [,].
Aims of This Study
This qualitative study explored users’ views and opinions of STOP, a DHT that targets interpretation bias in individuals with paranoia. The study provided an opportunity for individuals reporting distress from paranoia to voice their attitudes and experiences of using STOP and to offer feedback. The purpose of the study was therefore to explore different views and opinions of the digital intervention and to identify key strengths and weaknesses of the DHT as well as potential improvements. More specifically, the aim was to involve end users in the design, development, and testing of STOP and to assess the extent to which they were satisfied with it. The research questions were therefore formulated as follows:
- To what extent does the smartphone app STOP meet the NICE minimum or best practice standards for DHTs?
- Is STOP acceptable to its intended end users?
Methods
Ethical Considerations
This qualitative study received ethics approval from the London-Stanmore Research Ethics Committee (21/LO/0896). Potential participants were emailed a digital copy of the study information sheet and had at least 24 hours to consider participating in the study. All potential participants had the opportunity to ask questions and were free to withdraw at any stage without giving reasons. Interested, eligible participants were then asked to complete an electronic consent form. Participants received £40 (US $52) in vouchers as compensation for their time and effort.
Screening and Recruitment Process
Participants were recruited from various sources: the Psychological Interventions Clinic for Outpatients with Psychosis at the South London and Maudsley NHS Foundation Trust; the Psychological Interventions Clinic for Outpatients with Psychosis at the South West London and St George's Mental Health NHS Trust; MQ (MQ Foundation, a mental health research charity); and the McPin Foundation (a mental health research charity that specializes in integrating lived experiences into research).
Research assistants working on the main STOP randomized clinical trial (ISRCTN17754650) screened participants against the main trial eligibility criteria. Some of the main trial criteria were not relevant for this study, including the following: no change in psychotropic medication for at least the previous 3 months (where applicable); an interpretation bias screening score of ≥−2; and a Positive and Negative Syndrome Scale (PANSS) [] item 6 score of >3. Therefore, suitable participants who were ineligible for the main trial were redirected to this qualitative substudy and were then assessed against this study’s inclusion and exclusion criteria ( []).
Inclusion criteria
- Self-reported paranoid concerns
- Aged >18 y
- Fluent in written English
- Capacity to consent
Exclusion criteria
- Severe cognitive impairment
- Major current physical illness likely to impact participation
- Substance dependence
- Currently receiving another psychological intervention
- Scoring 7 (defined as “extreme”) on the Positive and Negative Syndrome Scale
- At high risk of suicide as per the Columbia Suicide Severity Rating Scale []
The choice of the sample size was guided by the concept of information power, which suggests that the number of participants needed for qualitative interview studies decreases as the amount of information held by the sample increases []. A smaller sample can be sufficient when the study aim is narrow, the sample is specific, the quality of dialogue is strong, and the analysis is theory informed []. Given the focused aim of exploring user experiences of STOP and the relevance of the sample to this aim, the final sample size was deemed appropriate for capturing key insights. Of note, patient and public involvement researcher and coauthor AK also took part in this study as a participant.
The STOP Intervention
STOP was delivered through a smartphone app. It consisted of 40- to 60-minute sessions (depending on reading speed) in which patients were presented with ambiguous scenarios that could be interpreted in either a threatening or a benign way. Each session consisted of 40 scenarios, presented as blocks of 10 items, followed by trivia items to allow users to take a short break (refer to for an intervention item example, a screenshot of STOP’s home page, and a trivia example). After reading each scenario, participants were asked to complete a word with missing letters and a comprehension question, both of which aimed to encourage them to interpret the scenario in a nonthreatening way. Further details regarding the intervention and its development have been published elsewhere [].
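The session flow described above (40 scenario items delivered in blocks of 10, with a trivia item after each block as a short break) can be sketched as a simple chunking routine. This is an illustrative sketch only, assuming hypothetical item texts and a `build_session` helper; it is not STOP's actual implementation.

```python
# Illustrative sketch of the session structure described above:
# 40 scenario items split into blocks of 10, with a trivia item
# appended after each block as a short break. All item texts are
# placeholders, not real STOP content.

def build_session(scenarios, trivia, block_size=10):
    """Interleave a trivia break after each fixed-size block of scenarios."""
    session = []
    for i in range(0, len(scenarios), block_size):
        session.extend(scenarios[i:i + block_size])              # one block of items
        session.append(trivia[(i // block_size) % len(trivia)])  # short break
    return session

scenarios = [f"scenario {n}" for n in range(1, 41)]  # 40 ambiguous scenarios
trivia = ["trivia A", "trivia B", "trivia C", "trivia D"]

session = build_session(scenarios, trivia)
print(len(session))  # 40 scenarios + 4 trivia breaks = 44 items
```

In the real app, each scenario item also carries its word-completion task and comprehension question; those are omitted here for brevity.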

Materials
The UEQ is a validated and reliable scale designed to measure users’ experiences of using digital products []. It consists of 26 items grouped into 6 scales: attractiveness (overall impression; do users like or dislike the product?), perspicuity (is it easy to learn how to use it?), efficiency (can users complete tasks without unnecessary effort?), dependability (does the user feel in control of the interaction with the product?), stimulation (is it exciting and motivating to use?), and novelty (is it innovative and creative? Does it catch users’ interest?). Each item presents a pair of contrasting attributes (eg, annoying-enjoyable and confusing-clear), and respondents are asked to express their agreement with the attributes by selecting the point that most closely reflects their impression of the product. The UEQ demonstrates satisfactory reliability and construct validity [] and provides normative data from 452 prior digital product evaluations, enabling benchmarking against existing products.
Procedure
Participants who provided consent were sent a study Android smartphone with the STOP app preinstalled and ready for use. In line with medical device study requirements, hard copy instructions to use the smartphone and the app were mailed to participants’ home addresses. They were then given the opportunity to test the STOP app and were asked to complete 6 weekly sessions of the STOP intervention. They were also offered the option to meet with the researcher (LE) at any time for assistance with navigating the smartphone or app.
All participants had fortnightly check-in calls with the researcher (LE). They were also given the opportunity to ask questions and were offered support for any technical difficulties they encountered.
After the 6 STOP sessions ended, a time was scheduled for a feedback interview and completion of the UEQ. The feedback interviews were semistructured and lasted between 30 and 60 minutes. A topic guide () was used to structure the interview, while allowing flexibility for additional comments. The topic guide was informed by the NICE Evidence Standards Framework for DHTs []. Questions revolved around users’ experiences of using the digital intervention, its perceived strengths and weaknesses, their opinions of the visual display of the app as well as the CBM scenarios used, session duration and ease of use, and suggestions for improvement. All interviews were conducted via Microsoft Teams, as all participants had access to a computer and a working internet connection.
After the interviews, participants were asked to complete the UEQ. They could choose to receive a digital version or a printed copy sent to their home address and to complete it either at the beginning of the interview together with the researcher or on their own.
Data Analysis
UEQ quantitative data were aggregated across participants, and mean scores were plotted against normative data [] to enable within-sector comparisons.
The qualitative data from the interviews were coded and analyzed using the framework method []. The NICE Evidence Standards Framework for DHTs [] was used to inform the topic guide, codes, and deductive themes. To enhance quality and rigor, the Reflexive Thematic Analysis Reporting Guidelines were followed [].
Interview recordings were automatically transcribed by the videoconferencing software and then manually corrected by LE. Data familiarization was achieved through repeated reading of the transcripts alongside any additional notes made during meetings.
Initial coding was surface level and descriptive and was conducted by LE using NVivo 14 (Lumivero).
A working analytical framework was developed by grouping and categorizing codes. New or revised codes were developed iteratively throughout coding. The data were then summarized in a matrix. Several group discussions between LE, JY, and PJ led to agreement on a final analytical framework and corresponding matrix. Themes were exclusively and intentionally related to users’ experiences of testing, acceptability, and satisfaction with the STOP app. Deductive codes were informed by the topic guide and the NICE Evidence Standards Framework for DHTs []. Inductive codes were driven by the data and reflected participants’ unique experiences and language.
Research Team and Reflexivity
A critical realist postpositivist stance was adopted to account for the fact that the coding and interpretation of data would be impacted by the researchers’ roles in the study. Reflexivity was therefore viewed not as a means of eliminating subjectivity but of making it visible and critically examined throughout the research process. Indeed, based on previous research [], most researchers involved in the study held beliefs that the app would help reduce interpretation bias in people with paranoia. In addition, the researchers’ own views as well as findings from previous research on individuals’ experiences with smartphone apps and CBM [,] would likely influence the analysis and interpretation of data. Using the topic guide informed by the NICE guidelines, interview questions were phrased to avoid guiding or stigmatizing language. The researcher attempted to embrace each participant’s unique and important experiences, as opposed to trying to reach a consensus. A reflective log was maintained throughout recruitment, data collection, and analysis to record reflective thoughts, which were used to inform analysis and interpretation and understand how personal preconceptions and assumptions may influence findings.
Results
Overview
We contacted 34 potential participants about completing 6 sessions of the STOP intervention followed by a feedback interview. Of these 34 individuals, 16 (47%) met the eligibility criteria and consented to take part in the qualitative study. Of these 16 participants, 4 (25%) dropped out (n=3, 75% reported that they did not feel that they had the time to commit to the study; n=1, 25% was uncontactable), resulting in a final sample of 12 (75%) participants. The participants’ mean age was 36.33 (SD 10.74) years. Of the 12 participants, 7 (58%) identified as female, 8 (67%) were White, and 2 (17%) identified as being from a minority ethnic background (n=2, 17% did not disclose their ethnicity). The mean score on PANSS item 6 was 3.58 (SD 0.79) []. Participants’ characteristics and individual scores are summarized in .
| Participant | Sex | Age (y) | Race and ethnicity | Score on item 6 of PANSSa | UEQ attractiveness | UEQ perspicuity | UEQ efficiency | UEQ dependability | UEQ stimulation | UEQ novelty | Sessions completed (n=6), n (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Male | NRb | NR | 4 | −0.33 | −0.25 | −0.33 | 0.25 | −0.25 | 0.00 | 4 (67) |
| 2 | Female | 57 | White British | 3 | −1.17 | −1.25 | −2.00 | −0.75 | −1.00 | −1.00 | 5 (83) |
| 3 | Male | 29 | Pakistani | 3 | 2.17 | 2.00 | 0.75 | 1.50 | 1.00 | 2.25 | 1 (17) |
| 4 | Female | 25 | Asian | 3 | 1.00 | 2.00 | 0.00 | 0.75 | 0.75 | 1.25 | 3 (50) |
| 5 | Male | 40 | White Irish | 4 | 1.00 | 1.00 | 1.50 | 1.25 | 0.75 | 1.25 | 2 (33) |
| 6 | Male | 29 | White British | 5 | —c | — | — | — | — | — | 3 (50) |
| 7 | Female | 38 | NR | 4 | — | — | — | — | — | — | 3 (50) |
| 8 | Female | NR | White British | 4 | — | — | — | — | — | — | 3 (50) |
| 9 | Female | 37 | White British | 4 | 1.67 | 2.00 | 2.00 | 1.75 | 1.75 | 2.00 | 5 (83) |
| 10 | Female | NR | White British | 2 | — | — | — | — | — | — | 1 (17) |
| 11 | Male | 47 | White British | 4 | 1.33 | 1.75 | 2.00 | 1.50 | 1.00 | 0.75 | 2 (33) |
| 12 | Female | 25 | White British | 3 | 2.67 | 2.75 | 1.75 | 2.00 | 2.25 | 1.75 | 2 (33) |
aPANSS: Positive and Negative Syndrome Scale (item 6 relates to suspiciousness and persecution).
bNot reported.
cMissing data.
UEQ Results
After the feedback interview, 8 (67%) of the 12 participants completed the UEQ. The results are provided in . UEQ scores range from −3 (maximally negative evaluation) through 0 (neutral) to +3 (maximally positive evaluation). Scores between −0.8 and +0.8 indicate a neutral evaluation, with scores below and above this band indicating overall negative and positive evaluations, respectively. Participants who completed the UEQ in this study scored averages between approximately 0.86 and 1.46 on the different scales, indicating that they described their experience as positive. The UEQ indicated generally positive user perceptions across all subscales, with the highest mean scores observed for perspicuity (mean 1.464, SD 1.302), attractiveness (mean 1.238, SD 1.224), and novelty (mean 1.179, SD 1.087). Lower, yet still positive, ratings were recorded for dependability (mean 1.143, SD 0.923), stimulation (mean 0.929, SD 1.018), and efficiency (mean 0.857, SD 1.457).

Participant 1 showed inconsistencies on 5 scales (attractiveness, perspicuity, efficiency, dependability, and stimulation; , row 1) and was therefore excluded from the reported averages and from the scale mean and variance plots in and , as suggested by the UEQ handbook []. Specifically, the handbook defines inconsistency as a lack of uniformity in item ratings within a scale. While an inconsistency on a single scale may arise from a misunderstanding of a question or an error, this is less likely to explain inconsistencies across multiple scales. The handbook therefore suggests excluding such participants from analysis because the pattern more likely indicates random or nonserious completion of the questionnaire.
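The aggregation and exclusion logic can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the UEQ analysis tool itself: the item groupings, the ratings, the flagging threshold of 3 (the maximum allowed gap between a scale's best- and worst-rated items), and the one-flagged-scale tolerance are all assumptions chosen for demonstration.

```python
# Minimal sketch of UEQ scale aggregation with a simple consistency check.
# Ratings are assumed to be already rescaled to the -3..+3 range; the item
# groupings and the threshold below are illustrative assumptions.
from statistics import mean

# Hypothetical per-participant item ratings, keyed by scale (the real UEQ
# maps 26 fixed items onto 6 scales).
responses = {
    "p9": {"attractiveness": [2, 2, 1, 2, 1, 2], "perspicuity": [2, 2, 2, 2]},
    "p1": {"attractiveness": [3, -3, 2, -2, 1, -3], "perspicuity": [3, -2, 0, 2]},
}

def is_inconsistent(items, threshold=3):
    """Flag a scale whose best and worst item ratings differ too much."""
    return max(items) - min(items) > threshold

def scale_means(responses, max_flagged_scales=1):
    """Average each scale across participants, excluding anyone whose
    ratings are inconsistent on more than `max_flagged_scales` scales."""
    kept = {
        pid: scales
        for pid, scales in responses.items()
        if sum(is_inconsistent(v) for v in scales.values()) <= max_flagged_scales
    }
    per_scale = {}
    for scales in kept.values():
        for scale, items in scales.items():
            per_scale.setdefault(scale, []).append(mean(items))
    return {scale: mean(vals) for scale, vals in per_scale.items()}

# "p1" is flagged on both scales and excluded; "p9" is retained.
print(scale_means(responses))
```

An inconsistency on a single scale is tolerated (it may reflect a misread item), whereas flags on multiple scales trigger exclusion, mirroring the rationale described above.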
Summary statistics are presented graphically by scale () and benchmarked against digital sector norms ().


Qualitative Results
Overview
Through framework analysis of the interview transcripts, 6 themes were identified, along with topic summaries (summarized in the framework analysis): “independent use and personal fit; digital versus traditional approaches; user reactions and emotional impact; impact on thinking, awareness, and well-being; design, engagement, and usability; and intervention relevance and practical fit.” presents the framework analysis, and Figure S1 in illustrates the relationships between themes.
| Themes and topic summaries | Example quotations |
| --- | --- |
| Independent use and personal fit | |
| Having agency, taking responsibility, self-discipline, and self-growth | |
| Matches lifestyle and preferences | |
| Provides privacy, safety, and flexibility | |
| Ability, or otherwise, to complete without assistance: neurodiversity, language barrier, and technological proficiency | |
| Digital versus traditional approaches | |
| Advantages of apps over traditional therapies | |
| Disadvantages compared to traditional therapies: digital exclusion, dropout rate, lack of feedback, distractions, and potential dependence on the app | |
| Focused, structured, and specific intervention | |
| User reactions and emotional impact | |
| Perceived as judgmental, patronizing, and invalidating | |
| Validating and normalizing | |
| Elicited mixed feelings: confusion, excitement, fear and embarrassment, hope, sadness, or neutral | |
| Evocative and triggering of actual experiences | |
| Perceived as restrictive | |
| Impact on thinking, awareness, and well-being | |
| Helpful impact on daily life and thinking patterns, sense of achievement, and general positive feelings | |
| Offers new perspectives | |
| Self-reflection, awareness, and understanding of paranoia and its presence in the user’s life | |
| Design, engagement, and usability | |
| Design and look of the app | |
| Engagement with the app | |
| Suggestions for improvement: managing expectations, personalizing the experience, suggestions for design changes, and access to the app | —a |
| Intervention relevance and practical fit | |
| Acceptability and testing of scenarios | |
| Perceived difficulty of the intervention | |
| Session frequency and duration | |
| Views and suggestions related to implementation | |

aSuggestions are presented in .
Theme 1: Independent Use and Personal Fit
This theme depicted participants’ attitudes toward completing the intervention on their own and explored the extent to which an autonomous treatment was acceptable. An autonomous treatment was defined here as an intervention to be completed independently, meaning on one’s own and without additional support from professionals or other individuals, aside from any necessary practical or safety checks (eg, in STOP, fortnightly safety check-in calls with the researcher were required). This theme captured users’ views on both obstacles and benefits. Overall, participants seemed satisfied with the intervention being an autonomous treatment.
Some participants indicated that completing STOP on their own gave them a sense of agency and allowed them to take responsibility for their issues and for their progress by deciding to engage with the app and complete sessions on a weekly basis. Some described that STOP helped them monitor their progress and visualize it in the form of badges that they accumulated as they completed sessions. Others thought it could positively impact their self-confidence and allow them to grow:
I would say responsibility because...one of the things that can really happen is that you give up responsibility or you lack confidence, whereas with this it’s like I’m trying, I’m doing it.
[Participant 1]
Some participants reported satisfaction with the way the intervention was delivered as it fit their preferences and lifestyles. Often, the preferences were associated with mental health conditions getting in the way of seeking help. These participants reported that engaging in a smartphone app was helpful in that it allowed them to access help from their homes. For others, the intervention being an autonomous treatment fit with their personalities and offered them privacy and the space to be honest with themselves as well as the option to complete it at their own pace:
It gives you the independence, the privacy, the self-motivation and discipline.
[Participant 10]
Theme 2: Digital Versus Traditional Approaches
This theme described participants’ views on the acceptability of STOP compared with other interventions, such as cognitive behavioral therapy or counseling. It captured users’ views on both obstacles and benefits related to the intervention’s delivery through a smartphone app. Compared to other treatments, participants seemed to be partly satisfied with the intervention, particularly the fact that it offered flexibility and was specifically focused on paranoia:
This treatment is different because it’s very specific to paranoia and I have not seen any treatment specific to paranoia before.
[Participant 8]
A few noted disadvantages of smartphone apps in general. These included the need for an internet connection and concerns about digital exclusion. In addition, some of the participants highlighted the high dropout rates associated with app-based interventions, while others reported that it can be distracting, potentially lead to dependence on the app, or not be in line with their preferences. Several comments revolved around the lack of human contact and feedback associated with using an app:
I mean there is an element, I suppose, of the isolation being extended and not having any human support around you, that you would miss out on.
[Participant 10]
Theme 3: User Reactions and Emotional Impact
This theme depicted the extent to which users were generally satisfied with the app after having completed a few sessions. It encompassed a wide range of emotions and reactions that were elicited and labeled. Two contrasting views emerged in the interviews. Most participants described their experiences as positive; however, some expressed negative views, particularly perceiving the app as invalidating or restrictive because it did not allow them the option to disagree with the answers provided within the therapeutic content:
You [are] only allow[ed] to answer the question one way. So not only is it restricting, what are you honestly expecting everyone to have the same opinion?...I can’t give you my opinion if you don’t let me answer the question in a way I feel right in answering it.
[Participant 6]
I found it validating.
[Participant 8]
Overall, emotions ranged from confusion and sadness to hopefulness and excitement. Such a range of emotions, even at an individual level, highlighted the unique experience of each participant and the complexities involved. Some participants explored the feelings elicited and used the opportunity to understand themselves better. Others reported that the scenarios presented in the app were evocative because of their relatability. A few described the experience as triggering their paranoia (which is an expected and possible element of STOP’s therapeutic process), whether due to the content of the scenarios or the use of an external smartphone provided by the researcher. However, this was not reported as particularly harmful or distressing. Some participants described STOP as validating because it normalized paranoid thoughts as well as the negative interpretations commonly associated with paranoia.
Theme 4: Impact on Thinking, Awareness, and Well-Being
This theme described users’ satisfaction with the app in terms of its impact on their daily lives, changes in thinking patterns, self-reflection, and awareness. Most participants reported that STOP had been helpful in some way.
Some described the impact as emerging from the fact that the app served as a distraction, while others described a sense of achievement from completing the weekly sessions. Some participants highlighted the positive impact on their daily lives, noting changes in their thinking patterns. Some described feeling generally more positive and less stressed after using the app. Some participants, including a few who described the app as restrictive, found it helpful in that it offered alternative ways to interpret situations that they would usually interpret in a threatening or negative manner.
Participants explicitly and spontaneously reported that they perceived the app as a tool for modifying their thinking patterns and also noted that it encouraged reflection and helped them identify themes of paranoia that they related to. They described STOP as helpful in recognizing personal thinking patterns and understanding the extent to which paranoia features in their day-to-day life. Some participants disclosed that the app helped them gain a better understanding of paranoia and how it is perceived by the general population.
Theme 5: Design, Engagement, and Usability
This theme outlined participants’ views of the design of the app and explored the extent to which this would be acceptable and satisfying to end users. Participants also spontaneously shared their suggestions for improvement.
Most participants reported feeling satisfied with the design, look, and feel of STOP, including the trivia, audiovisual features, and scheduling and reminder functions. They expressed that they found it “clear” (participant 7) and “not overloading of information” (participant 5). Some reported that they enjoyed the color schemes, the images, and the fact that “it involves all senses” (participant 10), especially if one had “poor eyesight” (participant 1). One participant noted that the images used reinforced binary gender stereotypes, which made the app less relatable. Most users described the app as engaging and interesting, easy to use, and friendly, although a few found it childish. One participant found the app helpful for addressing stigma, noting that its anonymous delivery made it feel discreet. In contrast, another participant felt that the app’s colors and sounds made it stand out and attract unwanted attention (the sounds can be switched off within the app).
Although STOP seemed largely well accepted, participants offered suggestions for improvement. The most common suggestions revolved around offering more response options for the scenarios, personalizing the app, and encouraging more flexible use. Suggestions also included managing expectations and providing more introductory and contextual information about the intervention’s aims, mode of action, and length. Some participants offered alternative design ideas that they believed would make the app more attractive and could help address possible issues of digital exclusion. Additional suggestions included making the app more stimulating and challenging to avoid boredom and to increase engagement:
If it was me, I would leave a section for another option where you could put your own feedback into, like feedback on what you think of the question or if your idea on the scenario didn’t fit into the boxes, boxes of yes or no.
[Participant 6]
Theme 6: Intervention Relevance and Practical Fit
The final theme focused on the core intervention and how it was experienced by users. It reflected the extent to which STOP was perceived as therapeutic, generally acceptable, and satisfying. Some participants indicated that they could strongly relate to the scenarios, while others found some scenarios personally irrelevant. A few reported feeling marginalized or excluded when the scenarios did not reflect the experiences of individuals with disabilities, restrictive lifestyles, or different lifestyle preferences. Some felt that the intervention was too easy and needed to be more stimulating and challenging; they found the more difficult or less relatable scenarios to be the most beneficial. Participants also noted that, once completing the missing-letter tasks became automatic and routine, they stopped putting in effort, which limited any positive impact on their way of thinking.
Most participants commented on the length of the intervention, the frequency of sessions, and the duration of each session. In terms of session duration, most participants felt that the sessions were long and that they would benefit from shorter sessions. Some reported that the session duration interfered with their day-to-day activities and was difficult to fit into their busy schedules, while others said that it suited their lifestyle. Some participants highlighted the benefits of having a weekly session and compared it to therapy, while others expressed a wish to complete the content at their own pace and in shorter blocks throughout the week. In terms of the length of the intervention, some participants reported that 6 sessions would be enough in terms of changing thinking patterns, while others suggested more sessions to enhance the learning. Some participants offered suggestions related to the core intervention, such as shortening the duration of individual sessions, amending the frequency of sessions and length of the intervention, encouraging participants to complete the session, offering the intervention on an indefinite basis, integrating some human contact and additional support if needed, or targeting a specific population.
Other suggestions were related to how participants would envisage the implementation of STOP outside of a research trial:
Well, I guess like if the app was going to sort of develop, maybe integrating some sort of like breathing techniques or something. If things get too much, almost like a little like panic button crisis button on it or something.
[Participant 12]
I think it should be prescribed by your GP or your psychiatrist or your psychologist or occupational therapist. You should be signposted to it and there’s a mandatory one-to-one onboarding session to answer any questions, to explain and yeah.
[Participant 8]
The full list of recommendations arising from themes 5 and 6 is provided in .
Discussion
Principal Findings
This study explored the acceptability of STOP for individuals struggling with paranoia, in the context of the NICE Evidence Standards Framework for DHTs. Overall, participants representing the intended end users expressed broadly positive views of the smartphone app, describing aspects of the experience and design as acceptable and well thought out. These findings are consistent with a previous qualitative study exploring the acceptability of an earlier, computer-based version of the intervention []. However, given the modest engagement levels and variation in participant feedback, these results should be interpreted cautiously and viewed as indicative rather than conclusive.
More specifically, after up to 6 weeks of using STOP, participants acknowledged its potential as an autonomous treatment that was not directly supported by a therapist. Most participants reported that the way the intervention was delivered fit their preferences and offered them flexibility and privacy, in line with the perceived advantages of internet-based interventions in previous studies [,].
The transdiagnostic approach adopted by STOP avoids the use of stigmatizing diagnostic labels and targets a single symptom by seeking to alter its associated underlying maintaining mechanism.
Our findings suggest that the autonomy of STOP (theme 1) gave users a sense of agency, in contrast to the lack of empowerment frequently experienced by individuals with severe mental health conditions. In this respect, our findings concur with those of others suggesting that DHTs can help empower individuals, offering them more control of their condition and a better understanding of it [,].
Of the 12 participants, 1 (8%) reported concerns about data protection, which is common in the context of using DHTs in similar populations []. Digital therapies designed for use with those experiencing paranoia will need to pay special attention to transparency and reassurance around data privacy and security if these technologies are to benefit this user group.
Some participants reported that factors related to technical proficiency, neurodiversity, and individual preferences may interfere with users’ ability to benefit from the intervention, highlighting the need to address these considerations in future versions of STOP. While only 1 (8%) of the 12 participants explicitly reported concerns about digital exclusion, this is a commonly reported barrier in the digital mental health literature. By contrast, other circumstances, such as interpersonal challenges, can lead to digital technology improving access to appropriate health care [,]. It is likely, then, that some health inequalities may be mitigated by DHTs such as STOP, while others will inevitably be introduced. For this reason, STOP, and DHTs in general, are best seen not as a panacea but as a complement to traditional therapies, with the potential to fill some important gaps in provision.
Some disadvantages of using STOP were also reported (captured in the Theme 2: Digital Versus Traditional Approaches subsection), the most common being the lack of human contact and feedback—absent from many DHTs and often reported in the literature []. Participants offered some suggestions to resolve this perceived disadvantage, such as providing the option to contact a mental health professional as needed while using the app or combining sessions of STOP with traditional psychological therapy. The preference for having alternative options to seek help seems to be common in the context of DHT use [].
Acceptability was also related to user experience (theme 3). Some participants perceived STOP as judgmental and restrictive, mainly due to the language used within the app (despite it having been codeveloped with users), whereas others described the intervention as validating and normalizing. Some participants held what seemed to be opposing views of the experience, such as frustration and validation or sadness and hope. This may reflect a common experience of therapy, in which patients can hold ambivalent and contradictory feelings toward the approach used, the process of therapy, or their therapist. In traditional therapy, however, the therapeutic alliance is examined, feelings about therapy are explored, and approaches are adapted to patients’ needs and feedback. Such a process may not be achievable in digital interventions, where negative aspects of the experience are not addressed in the moment. Future digital therapies may benefit from accommodating this need, for example, by giving participants the opportunity to report their experience in real time. Indeed, the concept of the “digital therapeutic alliance” is gaining traction in the psychotherapy literature and provides a suitable theoretical platform from which to explore and improve the digital therapeutic experience [].
Moreover, some users reported noticing a positive impact of STOP on their lives (theme 4), for example, in terms of their well-being, their thinking patterns, self-reflection, understanding, and awareness of their paranoia. Users also reported general satisfaction with both the core therapeutic intervention and its digital interface (themes 5 and 6). However, some participants expressed that they found a 30- to 40-minute session too long to complete in 1 sitting. These findings echo previous literature on CBM, which has noted that the approach can feel repetitive and monotonous [,]. Participants offered some suggestions for improvement, such as shortening the intervention, integrating some additional support, amending the duration of each session, personalizing the scenarios, or making the intervention more challenging and stimulating to optimize changes in thinking patterns.
As shown in , STOP received average or above average ratings for attractiveness, perspicuity, dependability, and novelty but scored below average on efficiency and stimulation. These findings align with the qualitative themes in which participants described the task as repetitive or lacking engagement and occasionally reported that the interface did not facilitate smooth or efficient use. The absence of “good” or “excellent” ratings across scales may reflect the early-stage design of STOP and highlights areas for improvement, particularly in enhancing engagement and interactivity. Given the exploratory nature of this qualitative study and the limited UEQ response rate, these results are presented descriptively and cautiously, as feedback to inform future iterations of STOP.
Finally, it is important to note that engagement with the intervention was low, with only 3 (25%) of the 12 participants completing ≥4 sessions and none completing the full 6-session program. Previous reviews of unguided digital mental health tools have consistently highlighted high dropout rates, with engagement often declining sharply after initial use []. In this context, the lack of full intervention completion among our participants aligns with broader trends but still raises important questions about the real-world feasibility of STOP as a stand-alone tool.
Comparisons with similar digital interventions show mixed engagement outcomes; for example, the EMPOWER study reported relatively high engagement when supported by peer involvement [], while other unguided CBM interventions have faced attrition challenges [,]. These findings suggest that embedding some form of human support may be critical for boosting sustained use, particularly in populations with paranoia. Future versions of STOP may therefore benefit from exploring hybrid models that balance autonomy with light-touch support or therapeutic onboarding. Furthermore, the role of the digital navigator is becoming increasingly recognized as a valuable component in the delivery of digital interventions, and this will be an important element to explore in future iterations of STOP [].
Strengths and Limitations
A strength of this study is that a critical realist perspective [] was adopted during the coding and analysis stages, allowing the researchers to reflect on their views and experiences throughout the study and understand how these may have influenced the analytic process. In line with this perspective, a reflective log was maintained throughout the study. To account for the possibility of response and desirability bias during the interviews, the researcher (LE) attempted to normalize all points of view, including negative experiences of using the app. This was achieved by using open-ended questions and encouraging participants to share their own experiences. To avoid steering responses toward the researcher’s expectations, participants were prompted and guided only when necessary. Finally, the researcher adopted a stance of curiosity and nonjudgment during the interviews to encourage participants to share their views openly and without fear of judgment. To our knowledge, this study is the first to explore the acceptability of a digital health intervention delivered via smartphone that targets interpretation bias in individuals with paranoia, in line with the NICE Evidence Standards Framework.
The findings need to be understood in light of possible limitations. Most participants reported mild levels of paranoia, as indicated by their PANSS scores (). The sample does not therefore reflect the full spectrum of paranoia that will be experienced by the intended end users. In addition, there was an underrepresentation of ethnic diversity in the sample because most of the participants (7/12, 58%) identified as White British. Moreover, none of the participants completed the full 6-week intervention before the interview. One possible barrier may be related to difficulties engaging with apps in psychosis [], particularly for service users with more severe paranoid symptoms. This partial exposure may have influenced their views on usability, engagement, and perceived benefit. However, even limited engagement was sufficient for participants to form meaningful opinions about key features of the intervention, and themes were largely consistent across those who completed more versus fewer sessions. Low completion rates are not uncommon in digital interventions for psychosis and may reflect broader challenges around attention, motivation, and trust in technology—particularly among individuals with more severe symptoms []. Participant feedback in this study pointed to task repetitiveness, lack of stimulation, and external factors (eg, life stressors and environment) as potential barriers. These insights are valuable for informing future iterations of STOP. Future studies could focus on implementing the suggestions offered by participants and reassessing acceptability in line with NICE guidelines. In addition, it could be helpful to explore the acceptability of different intervention doses (eg, 6 wk vs 12 wk), particularly in relation to the efficacy findings reported for each dose in the main trial []. 
It will also be important to understand clinicians’ experiences, opinions, and recommendations regarding STOP, both to improve the intervention further and to assess its “credibility with UK health and social care professionals,” as recommended in the NICE Evidence Standards Framework []. One other limitation of this study is that, although participants self-reported experiences of paranoia, their interpretation bias and paranoid symptoms were not formally assessed. While this means that our sample did not correspond to that of the main efficacy trial, it was arguably a more appropriate reflection of the likely population when in clinical use. Finally, the absence of contributors offering external critical perspectives represents a further limitation, which we acknowledge as an area for improvement in future work.
Conclusions
This study provides evidence of the central involvement of users experiencing paranoia in the qualitative evaluation and further development of the STOP digital mental health intervention. Participants were generally satisfied with STOP, as reflected in both the quantitative survey data (UEQ scores) and the views expressed during the qualitative interviews. They found the app broadly acceptable and were mostly satisfied with the way it was delivered, the core intervention itself, and the impact it had on their lives. The findings were in line with the target expectations outlined in the NICE Evidence Standards Framework for DHTs []. Participants reported that the intervention provided insight into and awareness of their paranoid thinking patterns and offered them alternative ways of interpreting situations they related to. They also offered suggestions for improvement that fit their preferences and condition, and the study therefore yields concrete recommendations for future development. Taking these findings into account will be important if STOP is to succeed as a new intervention targeting interpretation bias toward threat in individuals with paranoia.
Acknowledgments
This work was supported by the Medical Research Council Biomedical Catalyst: Developmental Pathway Funding Scheme (MR/V027484/1). The authors would like to express their gratitude to the National Institute for Health and Care Research Biomedical Research Centre hosted at South London and Maudsley NHS Foundation Trust in partnership with King’s College London. The views expressed are those of the authors and not necessarily those of the National Health Service, the National Institute for Health and Care Research, the Department of Health and Social Care, or King’s College London. The authors acknowledge with thanks the contributions made to this work by the Successful Treatment of Paranoia study lived experience panel supported by the McPin Foundation. The authors would like to thank Avegen Health, the digital health care product development company that developed and hosted Successful Treatment of Paranoia throughout this work.
Disclaimer
For the purposes of open access, the author has applied a Creative Commons attribution (CC BY) license to any accepted author manuscript version arising from this submission.
Authors' Contributions
Conceptualization: LE (lead), PJ (equal), JY (equal), AK (supporting), CWH (supporting), and RT (supporting).
Data curation: LE.
Formal analysis: LE.
Funding acquisition: LE, PJ, and JY.
Investigation: LE.
Methodology: LE, PJ, JY, AK, CWH, and RT.
Project administration: LE, AK, and RT.
Resources: JY.
Supervision: JY and PJ.
Validation: JY and PJ.
Visualization: LE and RT.
Writing – original draft: LE, PJ, JY, AK, CWH, and RT.
Writing – review & editing: LE, PJ, JY, AK, CWH, and RT.
Conflicts of Interest
Patient and public involvement researcher and coauthor AK took part as a participant in this study.
Conceptual relationship of themes.
[DOCX File, 187 KB]
Semistructured interview topic guide.
[DOCX File, 145 KB]
User interview recommendations for improvement.
[DOCX File, 36 KB]
References
- Freeman D. Suspicious minds: the psychology of persecutory delusions. Clin Psychol Rev. May 2007;27(4):425-457. [CrossRef] [Medline]
- Clark DM. Anxiety disorders: why they persist and how to treat them. Behav Res Ther. Jul 1999;37 Suppl 1:S5-27. [CrossRef] [Medline]
- Freeman D, Garety PA. Worry, worry processes and dimensions of delusions: an exploratory investigation of a role for anxiety processes in the maintenance of delusional distress. Behav Cogn Psychother. Jan 01, 1999;27(1):47-62. [CrossRef]
- Chen J, Short M, Kemps E. Interpretation bias in social anxiety: a systematic review and meta-analysis. J Affect Disord. Nov 01, 2020;276:1119-1130. [CrossRef] [Medline]
- Trotta A, Kang J, Stahl D, Yiend J. Interpretation bias in paranoia: a systematic review and meta-analysis. Clin Psychol Sci. Oct 21, 2020;9(1):3-23. [CrossRef]
- Yiend J, Allen P, Lopez ND, Falkenberg I, Tseng H, McGuire P. Negative interpretation biases precede the onset of psychosis. Behav Ther. Jul 2019;50(4):718-731. [CrossRef] [Medline]
- Savulich G, Edwards A, Assadi S, Guven H, Leathers-Smith E, Shergill S, et al. Biased interpretation in paranoia and its modification. J Behav Ther Exp Psychiatry. Dec 2020;69:101575. [CrossRef] [Medline]
- Henry JD, Von Hippel C, Ruffman T, Perry Y, Rendell PG. Threat perception in schizophrenia-spectrum disorders. J Int Neuropsychol Soc. Sep 2010;16(5):805-812. [CrossRef] [Medline]
- Yiend J, Mackintosh B, Mathews A. Enduring consequences of experimentally induced biases in interpretation. Behav Res Ther. Jun 2005;43(6):779-797. [CrossRef] [Medline]
- Mathews A, Mackintosh B. Induced emotional interpretation bias and anxiety. J Abnorm Psychol. Nov 2000;109(4):602-615. [Medline]
- Hsu CW, Stahl D, Mouchlianitis E, Peters E, Vamvakas G, Keppens J, et al. User-centered development of STOP (successful treatment for paranoia): material development and usability testing for a digital therapeutic for paranoia. JMIR Hum Factors. Dec 08, 2023;10:e45453. [FREE Full text] [CrossRef] [Medline]
- Hirsch CR, Krahé C, Whyte J, Bridge L, Loizou S, Norton S, et al. Effects of modifying interpretation bias on transdiagnostic repetitive negative thinking. J Consult Clin Psychol. Mar 2020;88(3):226-239. [CrossRef] [Medline]
- Jones EB, Sharpe L. Cognitive bias modification: a review of meta-analyses. J Affect Disord. Dec 01, 2017;223:175-183. [CrossRef] [Medline]
- Martinelli A, Grüll J, Baum C. Attention and interpretation cognitive bias change: a systematic review and meta-analysis of bias modification paradigms. Behav Res Ther. Oct 2022;157:104180. [CrossRef] [Medline]
- Yiend J, Trotta A, Meek C, Dzafic I, Baldus N, Crane B, et al. Cognitive Bias Modification for paranoia (CBM-pa): study protocol for a randomised controlled trial. Trials. Jun 29, 2017;18(1):298. [FREE Full text] [CrossRef] [Medline]
- Leung CJ, Fosuaah A, Frerichs J, Heslin M, Kabir T, Lee TM, et al. A qualitative study of the acceptability of cognitive bias modification for paranoia (CBM-pa) in patients with psychosis. BMC Psychiatry. Jul 23, 2019;19(1):225. [FREE Full text] [CrossRef] [Medline]
- Evidence standards framework for digital health technologies. National Institute for Health and Care Excellence. 2019. URL: https://www.nice.org.uk/Media/Default/About/what-we-do/our-programmes/evidence-standards-framework/digital-evidence-standards-framework.pdf [accessed 2024-11-19]
- Schrepp M. User experience questionnaire handbook version 11. Research Gate. URL: https://www.researchgate.net/publication/281973617_User_Experience_Questionnaire_Handbook [accessed 2025-05-29]
- Zhang M, Ying J, Song G, Fung DS, Smith H. Web-based cognitive bias modification interventions for psychiatric disorders: scoping review. JMIR Ment Health. Oct 24, 2019;6(10):e11841. [FREE Full text] [CrossRef] [Medline]
- Yanos PT, Roe D, Lysaker PH. Narrative enhancement and cognitive therapy: a new group-based treatment for internalized stigma among persons with severe mental illness. Int J Group Psychother. Oct 2011;61(4):577-595. [FREE Full text] [CrossRef] [Medline]
- Malla AK, Takhar JJ, Norman RM, Manchanda R, Cortese L, Haricharan R, et al. Negative symptoms in first episode non-affective psychosis. Acta Psychiatr Scand. Jun 2002;105(6):431-439. [CrossRef] [Medline]
- Kay SR, Fiszbein A, Opler LA. The positive and negative syndrome scale (PANSS) for schizophrenia. Schizophr Bull. 1987;13(2):261-276. [CrossRef] [Medline]
- Posner K, Brown GK, Stanley B, Brent DA, Yershova KV, Oquendo MA, et al. The Columbia-suicide severity rating scale: initial validity and internal consistency findings from three multisite studies with adolescents and adults. Am J Psychiatry. Dec 2011;168(12):1266-1277. [FREE Full text] [CrossRef] [Medline]
- Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qual Health Res. Nov 2016;26(13):1753-1760. [CrossRef] [Medline]
- Laugwitz B, Held T, Schrepp M. Construction and evaluation of a user experience questionnaire. In: Proceedings of the 4th Symposium of the Workgroup Human-Computer Interaction and Usability Engineering of the Austrian Computer Society. 2008. Presented at: USAB '08; November 20-21, 2008:63-76; Graz, Austria. URL: https://doi.org/10.1007/978-3-540-89350-9_6 [CrossRef]
- Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. Sep 18, 2013;13:117. [FREE Full text] [CrossRef] [Medline]
- Braun V, Clarke V. Supporting best practice in reflexive thematic analysis reporting in Palliative Medicine: a review of published research and introduction to the Reflexive Thematic Analysis Reporting Guidelines (RTARG). Palliat Med. Jun 2024;38(6):608-616. [FREE Full text] [CrossRef] [Medline]
- Beard C, Weisberg RB, Primack J. Socially anxious primary care patients' attitudes toward cognitive bias modification (CBM): a qualitative study. Behav Cogn Psychother. Oct 2012;40(5):618-633. [FREE Full text] [CrossRef] [Medline]
- Steare T, Giorgalli M, Free K, Harju-Seppänen J, Akther S, Eskinazi M, et al. A qualitative study of stakeholder views on the use of a digital app for supported self-management in early intervention services for psychosis. BMC Psychiatry. Jun 19, 2021;21(1):311. [FREE Full text] [CrossRef] [Medline]
- Wallin EE, Mattsson S, Olsson EM. The preference for internet-based psychological interventions by individuals without past or current use of mental health treatment delivered online: a survey study with mixed-methods analysis. JMIR Ment Health. Jun 14, 2016;3(2):e25. [FREE Full text] [CrossRef] [Medline]
- Beattie A, Shaw A, Kaur S, Kessler D. Primary-care patients' expectations and experiences of online cognitive behavioural therapy for depression: a qualitative study. Health Expect. Mar 2009;12(1):45-59. [FREE Full text] [CrossRef] [Medline]
- Bucci S, Morris R, Berry K, Berry N, Haddock G, Barrowclough C, et al. Early psychosis service user views on digital technology: qualitative analysis. JMIR Ment Health. Oct 31, 2018;5(4):e10091. [FREE Full text] [CrossRef] [Medline]
- Berry N, Lobban F, Bucci S. A qualitative exploration of service user views about using digital health interventions for self-management in severe mental health problems. BMC Psychiatry. Jan 21, 2019;19(1):35. [FREE Full text] [CrossRef] [Medline]
- Borghouts J, Eikey E, Mark G, De Leon C, Schueller SM, Schneider M, et al. Barriers to and facilitators of user engagement with digital mental health interventions: systematic review. J Med Internet Res. Mar 24, 2021;23(3):e24387. [FREE Full text] [CrossRef] [Medline]
- Lederman R, D'Alfonso S. The digital therapeutic alliance: prospects and considerations. JMIR Ment Health. Jul 20, 2021;8(7):e31385. [FREE Full text] [CrossRef] [Medline]
- Brosan L, Hoppitt L, Shelfer L, Sillence A, Mackintosh B. Cognitive bias modification for attention and interpretation reduces trait and state anxiety in anxious patients referred to an out-patient service: results from a pilot study. J Behav Ther Exp Psychiatry. Sep 2011;42(3):258-264. [CrossRef] [Medline]
- Torous J, Lipschitz J, Ng M, Firth J. Dropout rates in clinical trials of smartphone apps for depressive symptoms: a systematic review and meta-analysis. J Affect Disord. Feb 15, 2020;263:413-419. [FREE Full text] [CrossRef] [Medline]
- Gumley A, Bradstreet S, Ainsworth J, Allan S, Alvarez-Jimenez M, Beattie L, et al. Early signs monitoring to prevent relapse in psychosis and promote well-being, engagement, and recovery: protocol for a feasibility cluster randomized controlled trial harnessing mobile phone technology blended with peer support. JMIR Res Protoc. Jan 09, 2020;9(1):e15058. [FREE Full text] [CrossRef] [Medline]
- Wisniewski H, Gorrindo T, Rauseo-Ricupero N, Hilty D, Torous J. The role of digital navigators in promoting clinical care and technology integration into practice. Digit Biomark. 2020;4(Suppl 1):119-135. [FREE Full text] [CrossRef] [Medline]
- Sturgiss EA, Clark AM. Using critical realism in primary care research: an overview of methods. Fam Pract. 2020;37(1):143-145. [FREE Full text] [CrossRef]
- Berry N, Bucci S, Lobban F. Use of the internet and mobile phones for self-management of severe mental health problems: qualitative study of staff views. JMIR Ment Health. Nov 01, 2017;4(4):e52. [FREE Full text] [CrossRef] [Medline]
- Yiend J, Taher R, Fialho C, Hampshire C, Hsu C, Kabir T, MPIT, et al. Avegen. Assessing the efficacy and safety of STOP (successful treatment for paranoia)-an app-based cognitive bias modification therapy for paranoia: a randomised clinical trial protocol. Trials. Dec 02, 2024;25(1):806. [FREE Full text] [CrossRef] [Medline]
Abbreviations
| CBM-I: cognitive bias modification for interpretation |
| CBM-pa: cognitive bias modification for paranoia |
| DHT: digital health technology |
| NICE: National Institute for Health and Care Excellence |
| PANSS: Positive and Negative Syndrome Scale |
| STOP: Successful Treatment of Paranoia |
| UEQ: User Experience Questionnaire |
Edited by D Gooch; submitted 19.12.24; peer-reviewed by D Katz, A Hardy, AKC Chau; comments to author 29.03.25; revised version received 19.05.25; accepted 05.06.25; published 07.08.25.
Copyright©Laura Eid, Alexandra Kenny, Rayan Taher, Pamela Jacobsen, Che Wei Hsu, Jenny Yiend. Originally published in JMIR Human Factors (https://humanfactors.jmir.org), 07.08.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.

