This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.
Cardiovascular disease and type 2 diabetes mellitus are two of the most prevalent chronic conditions worldwide. An unhealthy lifestyle contributes substantially to an individual’s risk of developing these conditions. Mobile health is an emerging technology that can help deliver health promotion interventions to the population, for example, in the form of health apps.
The aim of this study was to test the feasibility of an app-based intervention for cardiovascular and diabetes risk awareness and prevention by measuring nonusage, dropout, adherence to app use, and usability of the app over 3 months.
Participants were eligible if they were aged 45 years or older, resided in Australia, were free of cardiovascular disease and diabetes, were fluent in English, and owned a smartphone. In the beginning, participants received an email with instructions on how to install the app and a user guide. After 3 months, they received an email with an invitation to an end-of-study survey. The survey included questions about general smartphone use and the user version of the Mobile Application Rating Scale. We analyzed app-generated and survey data by using descriptive and inferential statistics as well as thematic analysis for open-text comments.
Recruitment took place between September and October 2021. Of the 46 participants who consented to the study, 20 (44%) never used the app and 15 (33%) dropped out. The median age of the app users at baseline was 62 (IQR 56-67) years. Adherence to app use, that is, using the app at least once a week over 3 months, was 17% (8/46) of the total sample and 31% (8/26) of all app users. The mean app quality rating on the user version of the Mobile Application Rating Scale was 3.5 (SD 0.6) of 5 points. The app scored the highest for the information section and the lowest for the engagement section of the scale.
Nonusage and dropout rates were too high, and adherence was too low, to consider the intervention in its current form feasible. Potential barriers we identified include the research team not actively engaging with participants early in the study to verify that all participants could install the app, the absence of direct contact with health care professionals in the intervention, and the lack of sufficient interactive features in the app.
Cardiovascular disease (CVD) and type 2 diabetes mellitus (T2DM) have a high prevalence worldwide, although both diseases could often be prevented through a healthier lifestyle [
Kumar et al [
We received ethics approval from the University of New South Wales Australia Human Research Ethics Advisory Panel G: Health, Medical, Community and Social (approval HC210520) and reciprocal approval from the Commonwealth Scientific and Industrial Research Organisation Health and Medical Human Research Ethics Committee (approval 2021_071_RR). All participants provided consent to participate in this study.
Our predefined sample size was 40 participants. The number was based on a sample size calculation according to Hooper [
People were eligible to take part in the study if they were aged 45 years and older, resided in Australia, were fluent in written and spoken English, owned a smartphone (Android or iPhone) with internet access, and had an email address. We set the start age to 45 years according to the guidelines of the Royal Australian College of General Practitioners. These state that general practitioners should screen for chronic diseases in the low-risk population and potentially initiate preventive measures starting from that age [
We recruited participants with the help of a recruitment agency that identified and contacted potential participants from its panel. Panelists received a link to an eligibility survey. If people fulfilled the inclusion and exclusion criteria and indicated interest in participating, we contacted them via email. After providing consent via a web-based survey, participants received another email from us that included the study instructions, the user guide, and a unique identifier. Participants were asked to download the app from the app store on their phones and then use it for 3 months (approximately 90 days). How often they used the app was up to the participants; we only encouraged regular use. For questions or technical issues, participants could get in touch with us via email. After 3 months, participants received an invitation to an end-of-study survey. The app contained 4 core modules. The first module comprised risk scores for the 5-year risk of CVD and T2DM. These were the Framingham risk score for CVD and the Australian Type 2 Diabetes Risk tool for T2DM [
We collected 2 types of data: app-generated and survey data. The outcomes we measured were nonusage rate (defined as the proportion of participants who never used the app), dropout rate (defined as the proportion of participants who completely stopped using the app at least 14 days before they received the end-of-study survey invitation), adherence rate (defined as the proportion of participants who used the app each week at least once during the 3 months), and usability of the app. For the usability assessment, we used the user version of the Mobile Application Rating Scale (uMARS), a validated instrument to measure the quality of mobile health apps [
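The outcome definitions above can be expressed programmatically. The following is a minimal sketch of how one participant could be classified under these definitions from hypothetical app-usage logs; it is not the study’s actual analysis code (the study’s analyses were run in R), and all dates and function names are illustrative:

```python
from datetime import date, timedelta

# Minimal sketch of the paper's outcome definitions, applied to
# hypothetical usage logs. Not the study's actual analysis code.

def classify(usage_dates, survey_invite):
    """Return 'nonusage', 'dropout', or 'continued use' for one participant.

    usage_dates   -- dates on which the participant opened the app
    survey_invite -- date the end-of-study survey invitation was sent
    """
    if not usage_dates:
        return "nonusage"  # never used the app
    # Dropout: completely stopped using the app at least 14 days
    # before receiving the end-of-study survey invitation.
    if max(usage_dates) <= survey_invite - timedelta(days=14):
        return "dropout"
    return "continued use"

def adherent(usage_dates, study_start, weeks=13):
    """Adherence: at least one app use in each week of the ~3-month period."""
    used_weeks = {(d - study_start).days // 7
                  for d in usage_dates
                  if 0 <= (d - study_start).days < weeks * 7}
    return used_weeks == set(range(weeks))
```

Under these definitions, the adherence denominator can be either all consenting participants or app users only, which is why the paper reports both 8/46 and 8/26.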
After the data collection was completed, we conducted the data analyses in RStudio using the programming language R. We used the following functions from the R stats package. For differences between means, if data were normally distributed, we used the unpaired 2-sample t test.
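As an illustration of the mean comparison described above, the following standard-library Python sketch computes the unpaired 2-sample t statistic in its Welch form (which does not assume equal group variances); the study itself used R, and the group values in the example are hypothetical:

```python
from math import sqrt
from statistics import mean, variance

# Illustrative stdlib-Python analogue of an unpaired 2-sample t test
# statistic (Welch form, not assuming equal group variances).
# The study's analyses were run in R; all inputs here are hypothetical.

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df
```

The resulting statistic would then be compared against the t distribution with df degrees of freedom to obtain a P value; in practice, a statistics library (such as R’s `t.test`) handles that step.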
Score items for smartphone use.
Scoring items | Points |
Use smartphone multiple times a day | 1 |
Access the internet multiple times a day | 1 |
Have mobile data on the smartphone | 1 |
Type on the smartphone’s touchscreen without assistance | 1 |
Use a search engine on the smartphone without assistance | 1 |
Send an email with the smartphone without assistance | 1 |
Take and send a picture with the smartphone without assistance | 1 |
Install and update an app with the smartphone without assistance | 1 |
Message or chat using internet-based apps with the smartphone without assistance | 1 |
Make video calls with the smartphone without assistance | 1 |
This study took place between September 2021 and January 2022. We assessed 483 persons for eligibility, of whom 142 were eligible (
We received 35 end-of-study survey responses from the 46 participants at baseline regarding their general smartphone use. These consisted of 24 responses from participants who had used the app and 11 responses from participants who had not used the app (
Flow diagram for this study.
Tasks that participants (n=35) stated that they could do with a smartphone.
Tasks | Without assistance, n (%) | Require assistance, n (%) | Never tried before, n (%) |
Type on the touchscreen | 32 (91) | 1 (3) | 2 (6) |
Use a search engine | 34 (97) | 1 (3) | 0 (0) |
Send an email | 33 (94) | 2 (6) | 0 (0) |
Take and send a picture | 33 (94) | 2 (6) | 0 (0) |
Install and update an app | 32 (91) | 2 (6) | 1 (3) |
Message or chat using internet-based apps | 30 (86) | 1 (3) | 4 (11) |
Make video calls | 28 (80) | 0 (0) | 7 (20) |
The nonusage rate was 44% (20/46). Of the 26 participants who used the app, 16 participants were 45-64 years old and 10 were 65 years or older. There was no statistically significant association between app use and age group (
Baseline characteristics of app users (n=26) and their duration of app use.
Characteristics of app users | Values, n (%) |
Female | 8 (31) |
Age ≥65 years | 10 (39) |
Born in Australia | 17 (65) |
5-year CVD risk category | |
Low | 16 (64) |
Moderate | 4 (16) |
High | 5 (20) |
5-year T2DM risk category | |
Low | 0 (0) |
Moderate | 7 (27) |
High | 19 (73) |
Regular smoker | 7 (27) |
Physical activity ≥2.5 hours per week | 18 (69) |
Daily fruit and vegetable intake | 19 (73) |
Duration of app use | |
1 day | 5 (19) |
2-7 days | 2 (8) |
8-30 days | 3 (12) |
31-60 days | 3 (12) |
61-90 days | 13 (50) |
App users calculated their CVD and T2DM risk a median of 2 times (IQR 1-4), with a maximum of 14 times. For some app users, the risk changed over time, but only on 4 occasions did this lead to a different risk category being displayed in the app. After registration, app users were automatically directed to the goal-setting module. They set goals a median of 1 time (IQR 1-3) and a maximum of 11 times, which was an outlier. Six app users never set a goal, and 3 never tracked health-related behaviors. The median number of times app users tracked health-related behaviors was 14 (IQR 1-57), with a maximum of 137 times; this value was not an outlier. Among those (15/26, 58%) who tracked their health behaviors on at least 7 days, 12 persons (80%) tracked them a median of every day, 2 persons (13%) a median of every second day, and 1 person (7%) a median of every third day. Among those who regularly tracked health-related behaviors, 4 people (33%) never achieved all their health-related behavior goals on a single day. The maximum was 1 person who achieved all goals on 8 days, corresponding to 13% (8/61) of the days on which that person recorded health-related behaviors. The health-related behavior that app users achieved the least was minutes of physical activity per week.
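The “median every day / every second day / every third day” tracking frequencies reported above can be derived from the gaps between consecutive tracking days. The following small sketch illustrates one way to compute this; it is not the study’s code, and the day indices are hypothetical:

```python
from statistics import median

# Hypothetical illustration of deriving a user's median tracking interval
# ("every day" = 1, "every second day" = 2, ...) from the day indices on
# which they logged health-related behaviors. Not the study's code.

def median_tracking_interval(day_indices):
    """Median gap in days between consecutive tracking days, or None."""
    days = sorted(set(day_indices))
    gaps = [later - earlier for earlier, later in zip(days, days[1:])]
    return median(gaps) if gaps else None
```

For example, a user who logged behaviors on most consecutive days would have a median interval of 1 (“every day”), while a user logging on alternate days would have a median interval of 2.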
The results from the uMARS are based on 22 participants who had used the app and had completed the end-of-study survey in its entirety. The overall app quality rating on the uMARS was 3.5 (SD 0.6) points out of a maximum of 5 points.
Regarding the subjective quality of the app, of the 22 users, 2 (9%) stated that they would recommend the app to everyone, 3 (14%) would recommend it to many people, 3 (14%) to several people, 9 (41%) to very few people, and 5 (23%) would not recommend it to anyone. Of the 8 app users who would recommend the app to everyone, many people, or several people, 6 (75%) were 45-64 years old, 2 (25%) were 65 years or older, 6 (75%) were males, and 2 (25%) were females. They rated the app quality with a mean score of 4.07 (SD 0.41). Among the 14 app users who would recommend the app to only very few people or no one, 8 (57%) were 45-64 years old, 6 (43%) were 65 years or older, 12 (86%) were males, and 2 (14%) were females. They provided a mean app quality score of 3.11 (SD 0.32). The difference in the mean scores was the greatest for engagement (3.78 [SD 0.65] vs 2.54 [SD 0.61]), followed by aesthetics (4.17 [SD 0.59] vs 3.24 [SD 0.40]), information (4.25 [SD 0.42] vs 3.38 [SD 0.32]), and functionality (4.09 [SD 0.40] vs 3.27 [SD 0.39]).

When asked how often they thought they would use the app in the next 12 months, 10 (46%) app users answered never, 1 (5%) answered once or twice, 3 (14%) answered 3-10 times, 6 (27%) answered 10-50 times, and 2 (9%) answered more than 50 times. When asked about payment, 14 (64%) app users responded that they would definitely not pay for the app, 4 (18%) responded probably not, 3 (14%) responded that they might or might not, and 1 (5%) responded probably yes. The last set of uMARS questions was about the perceived impact on the users’ knowledge, attitudes, and intentions related to the target health behavior. Responses were based on the 5-point Likert scale (
Among the app users, 15 people left comments about the app at the end of the survey. We identified 6 themes: issues with self-monitoring, lack of interaction, credibility, user-friendliness, interaction with health care professionals, and privacy. One of the main themes was issues with self-monitoring of health-related behavior. Some app users could not see the health-related behavior trends shown over time. One described initial confusion over the difference between daily and weekly goals when entering the number of alcoholic drinks consumed. Others mentioned that it took them too long to enter the values manually. One said it would have been nice to link the app to a step counter app on the phone. Some participants would have liked reminders for self-monitoring.

This also relates to the theme of lack of interaction between users and the app. One user specifically stated that the app lacked features that incentivize app use. A further theme was the credibility of the information. One person found the app inaccurate because it only considered waist circumference but not BMI. However, others mentioned that they liked the health information provided and found it credible. Regarding the user-friendliness of the app, some found the app clunky, while others specifically said that they felt it was easy to use.

Concerning interaction with health care professionals, one person explained that using the app encouraged them to get their blood glucose levels checked and make an appointment with a cardiologist. Another person outlined that they were already working with their general practitioner on the health behaviors targeted by the app due to increased disease risk and reported using a diet-tracking app. One person raised privacy concerns and suggested that a password should be included to safeguard their information from other people who might use their smartphone. One person who did not use the app said they could not access it.
Other participants (10/46, 22%) had reached out to us via email at the beginning of the study to receive help downloading the app and registering.
Results of the app quality rating on the Likert scale in the user version of the Mobile Application Rating Scale (n=22).
Results of the perceived impact rating on the Likert scale.
Our objectives were to evaluate nonusage, dropout, adherence to app use, and usability of the app-based intervention for cardiovascular and diabetes risk awareness and prevention over 3 months. The nonusage and dropout rates were high, and the adherence rate was low. The overall quality rating on the uMARS was satisfactory. However, scores for interactivity and entertainment, which are part of the engagement section, were particularly low. We noticed differences between those who would recommend the app to everyone, many, or several people and those who would recommend it to only very few people or no one. Interestingly, the difference in the mean scores was the smallest for app functionality. Since our sample size of app users was quite small, one must interpret these differences cautiously.
Our results showed issues with the adoption of and engagement with the app-based intervention. We have different hypotheses about what might have contributed to these issues. Possible explanations for nonadoption are (1) problems installing the app, as stated by a participant in the survey; (2) the use of other health apps that better suited participants’ needs and preferences, as mentioned by 2 participants in the survey; and (3) other pressures such as those caused by the COVID-19 pandemic, with people potentially being more concerned about themselves or a family member contracting COVID-19 than about developing CVD or T2DM. A likely explanation for the low engagement is that the app lacked interactive features. Although the app included 2 types of push notifications for when users achieved their goals, the data analysis showed that participants rarely met the required conditions to see these messages; that is, participants received few or no notifications through the app. Although the registration process required app users to answer a total of 21 questions, we saw no indication that this affected adherence. Since each participant had to enter a unique identifier at the beginning of the registration process before proceeding to the questions, we could determine that all app users completed the registration. The non–app users never saw the questions. Participants did not indicate that they perceived the risk scores as disempowering or that they were overwhelmed by 2 conditions being integrated into the app. Even though the Framingham CVD risk score does not directly rely on data about physical activity and diet [
We estimated that a sample size of 40 participants would be sufficient to detect a 30% dropout rate and a 50% adherence rate. We did not factor nonusage into the sample size calculation because we did not anticipate nonusage to be an issue. When analyzing the data, we decided to differentiate between nonusage and discontinuation of use, that is, dropout. In retrospect, a larger sample size could have been beneficial; however, the sample size was sufficient to answer our research question. This study showed that asking people aged 45 years and older to download the app and expecting them to use it over 3 months without additional interaction was not feasible. In addition to the small sample size, another limitation of this study was that we recruited participants through a recruitment agency. We noticed that some participants had completed the end-of-study questionnaire in full, including the uMARS questions, even though they did not use the app; those answers were excluded from the analysis. A further limitation of this study was that we did not collect data about the participants’ educational level or socioeconomic status. Another factor that may have influenced the study is that it took place during the COVID-19 pandemic, and some participants might have been in a government-regulated lockdown. This might partly explain the high nonusage and dropout rates as well as the low level of engagement, as participants may have had other health priorities on their minds. However, others such as Wright et al [
In comparison to the findings in our study, Krishnamurthi et al [
Several studies focusing on weight loss reported differences in self-monitoring of diet, physical activity, and body weight by using apps. For example, Carpenter et al [
Carpenter et al [
This study demonstrated that it would not be feasible to implement the app-based intervention in its current form because we would not expect sufficient engagement with the app to achieve significant behavior change in participants. There are several ways we could adjust the intervention to reduce nonusage and dropout and to achieve higher adherence. One option would be to check in with participants at the beginning of the study to ensure that they could download the app; this could substantially reduce the number of people who never used the app. Another option is to increase the number of interactive features in the app so that app users feel more motivated to use it regularly. We could also enable voice input options to facilitate data entry; however, that would require access to the phone’s audio input, which may put the user’s privacy at risk, and it would increase the app’s storage requirements. Further, we could include interactions with health care professionals in the intervention to improve adherence. We considered this when developing the intervention; however, the evidence for its superiority was inconclusive, for example, as reported by Cucciniello et al [
Byambasuren et al [
The app-based intervention proved to be unfeasible in its current form because too many study participants never used the app or dropped out, and too few used the app weekly. We identified potential barriers, such as the research team not actively checking at the start of the study whether participants were able to install the app, insufficient interactive app features, and no direct interaction with health care professionals. We believe it was important to conduct this feasibility study before evaluating the intervention’s effectiveness in a larger trial; it saved the resources of a study that likely would not have shown intervention effectiveness owing to low user engagement.
CVD: cardiovascular disease
T2DM: type 2 diabetes mellitus
uMARS: user version of the Mobile Application Rating Scale
This work has been jointly funded by the Australian e-Health Research Centre (Commonwealth Scientific and Industrial Research Organisation) and the Centre for Primary Health Care and Equity (University of New South Wales). We would like to thank the study participants for their valuable contributions. Further, we would like to acknowledge the work of the engineering team who developed the app, namely, Karen Harrap, Derek Ireland, and Vanessa Smallbon from the Australian e-Health Research Centre.
None declared.