Original Paper
Abstract
Background: Patient autonomy through informed consent is a foundational ethical principle for health care practitioners. Online consent processes risk producing “consent in name only,” using manipulative or confusing user interfaces to extract consent artificially. This poses a significant risk to safe and ethical remote consultations in primary care, which often collect significant amounts of sensitive personal data.
Objective: This study aims to examine the quality of consent obtained through both currently used and novel consent acquisition interfaces for remote e-consultations between a patient and a primary care provider.
Methods: A total of 55 adult participants in the United Kingdom completed an interaction with a mock-up e-consultation system’s consent interface for data processing, with 54 completing the full study protocol. The participants were then asked questions regarding what they had provided consent for and the usability of the interface. These responses led to the calculation of an industry-standard System Usability Scale (SUS) score and a novel Quality of Informed Consent Collected Digitally (QuICCDig) score.
Results: Perceived interface usability (measured by the SUS) was statistically significantly correlated (n=54; P=.004) with the quality of consent collected (measured by the QuICCDig score): users who rated an interface as more usable tended to provide higher-quality consent. Nonetheless, both existing and novel user interfaces for collecting e-consultation consent were rated poorly, achieving a maximum SUS letter grade of “F.” In total, 45% (25/55) of the participants reported not recalling making a privacy-related decision at all during their consultation, and 87% (48/55) did not recall being offered any alternatives to e-consultation.
Conclusions: The findings demonstrate that current methods for collecting consent in telemedical applications may not be fit for purpose and potentially fail to collect valid informed consent. However, increased usability scores from users do appear to drive improvements in the quality of consent collected. Therefore, decision-makers should place importance on high-quality interface design when building or procuring these systems. We have also provided the QuICCDig score for further use.
doi:10.2196/78483
Introduction
Remote electronic systems for consultations between patients and their primary care providers have been growing rapidly in use since they were first conceptualized—growth that was accelerated by the pressures of the COVID-19 pandemic on the health care system []. These systems allow patients to obtain medical advice by answering questions and uploading images to an online system for a health care professional to review, without having to physically travel to a clinic or speak to a primary care provider over the phone, thereby expanding access to health care. They allow for rapid symptom triage, which may increase primary care operational efficiency []; among some patient populations, they may also lead to increased honesty with health care providers when consulting about symptoms perceived to be embarrassing [].
Adoption of these systems has varied across international primary care environments. Around the world, asynchronous primary care “e-consultations” conducted via online platforms are known to be used in Norway, Spain, Sweden, the United Kingdom, and the United States [,]. In the United Kingdom, access to online consultations from patients to their general practitioner (GP) has been compulsory for GP practices since the April 2021 financial year’s contract [], as part of the National Health Service (NHS) Long Term Plan from 2019 []. However, partly owing to the unforeseen pressures of the COVID-19 pandemic, which forced crisis-driven implementation of the technology, the success of the rollout has been variable: some practices have seen substantial efficiency improvements, while others have since reverted to a “strategically traditional” model for meeting the needs of their patient population [].
Ethical challenges relating to informed consent in telemedicine are nothing new—home monitoring systems have faced them for some time []. However, despite the increased uptake of e-consultations [], issues of safety and consent have not been well evaluated in existing literature []. This lack of focus on safety, combined with technocentric policymaking, risks leading clinicians and technologists alike to be overeager to implement new technologies without adequate consideration of the implications for patients’ bioethical and legal rights []. Regulators and lawmakers internationally have been slow to respond to advances in telemedicine, muddying the waters further []—even as disparities in patient understanding of digital health information across age groups and health numeracy have been demonstrated [].
In the development of websites more broadly, impetus for collecting consent from users for data processing largely comes from regulatory requirements rather than ethical frameworks. Standards for ethical practice for software engineers have been described as “toothless” and criticized for “ethics-washing” without actually changing corporate incentives []. Perhaps unsurprisingly, many examples exist of users’ data being collected on a “legal minimum” basis or even without proper consent at all [,]. Bollinger et al [] found that, of approximately 30,000 websites studied, 94.7% contained potential European Union General Data Protection Regulation violations related to cookie consent, although that figure is based on automated detection without human verification. In some cases, interfaces are deliberately designed so that users do not remember their choices, even when they regret them upon being reminded of them []; however, this has become such standard practice that users simply consider it part and parcel of their internet experience [].
This resigned acceptance of poor ethical standards in the context of the internet presents a challenge for facilitating ethical telemedical interactions, as the ethical and legal standards for consent in medical contexts are substantially more stringent. Notably, for example, guidance in the United Kingdom consistently highlights the need for clinicians to obtain informed consent from patients for examination and assessment and not just subsequent treatment [-]; indeed, government guidance makes it clear that even implied consent for measuring blood pressure by a patient holding out an arm may require the patient “receiving appropriate information” first []. The courts have confirmed this; in R v Hallstrom, Mr Justice McCullough writes “it goes without saying that, unless clear statutory authority to the contrary exists, no one is […] even to submit himself to a medical examination without his consent” []. Failure to collect consent appropriately may be considered a reportable adverse event [].
No currently accepted system evaluates clinical informed consent collected through entirely digital processes with no human-in-the-loop involvement. Usability, defined by International Organization for Standardization 9241-11:2018 as the “extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” [], is measured using well-established metrics such as the Post-Study System Usability Questionnaire or the System Usability Scale (SUS) [,]. However, these metrics, when used alone, could result in high scores for interfaces that are extremely user-friendly but fail to convey important information, as patients’ feelings of being informed in the context of clinical consent may not actually relate to objective measures of knowledge []. A clear gap therefore exists for a metric that assesses the quality of digitally collected informed consent.
This work analyzed the quality of consent that can be collected through remote patient–primary care provider e-consultations. To do this, we (1) developed a scoring system by which consent gathered for electronic patient–primary care provider medical consultations can reliably be evaluated, (2) established an understanding of the validity of current strategies for consent acquisition in remote patient–primary care provider e-consultation via the internet, and (3) evaluated whether changes in user interface design could impact the quality of consent collected.
Methods
Overview
This research developed a novel questionnaire and scoring system, Quality of Informed Consent Collected Digitally (QuICCDig), for evaluating the quality of health care informed consent collected digitally, based on a synthesis of existing systems from health and technology consent research. A selection of real online consultation systems was then evaluated, using both QuICCDig to evaluate the quality of the consent collected and the industry-standard SUS to evaluate the perceived usability of the interface.
QuICCDig Development
In this study, we synthesized QuICCDig from an analysis of multiple existing criteria for consent evaluation across both health care and digital technology. A narrative literature review was conducted, investigating current methods used for evaluating the quality of consent collection in medical consultations and digital interactions. PubMed, Google Scholar, and HeinOnline were searched for terms including “quality of informed consent,” “digital consent evaluation,” “medical consent evaluation,” “digital consent quality,” and “medical consent quality” without restricting to a particular date range, location of publication, or format of publication.
Search results were then assessed for relevance. In particular, results with no full text available to the researchers were excluded as were results whose full text did not contain evaluation methodologies for consent quality or whose methodology was based on parameters such as refusal rate rather than the quality of the consent process itself. Methodologies that were specific to the consent process for a particular subspeciality were excluded, but methodologies used for the evaluation of consent for clinical research were included, as it was hypothesized these might relate to more relevant data-sharing exercises. Where results were already referenced in a broader piece of work, such as a systematic review, the original work was excluded in favor of the broader work to simplify identification of themes.
Interface Evaluation Study Design
To understand how well existing digital consultation systems manage the challenge of acquiring informed consent, we (1) reviewed existing consultation systems approved for use in the United Kingdom to evaluate what consent interfaces they use, (2) created a series of interactive mock-up interfaces (defined as midfidelity interactive prototype user experiences that allow users to complete a specific interaction as relevant for the study) for consent collection in the context of a digital consultation, and (3) provided participants with the interface and subsequently administered the SUS and QuICCDig questionnaires to evaluate the usability of the interface and the quality of the consent collected through it.
All 9 online consultation systems approved for use in the United Kingdom NHS Digital Buying Catalogue [] as of January 2023 that supported “self-help and signposting” or “symptom checking” were reviewed, of which 7 were noted to be suitable for primary care patient-provider remote consultation use. Only 2 contemporary practices for consent were identified in these 7 systems: either “link and checkbox,” in which users are shown a checkbox to confirm their acceptance of privacy information, or “summary and checkbox,” in which users are shown a checkbox to confirm their acceptance along with a brief explanation of what they are providing consent for and a link to read more. The design of these interfaces reflects different modalities of user interaction as well as different amounts of content presented initially to the user; these differences may reflect attempts to promote user recognition of familiar interaction paths within known user interface paradigms while also allowing appropriate use of users’ background knowledge, mapping to the Nielsen design heuristics of “recognition rather than recall” and “match between system and real world” [].
On the basis of these real consultation systems, 9 mock-up user interfaces were designed, with 2 based on contemporary “link and checkbox” or “summary and checkbox” consent practices and the remainder being novel interfaces not seen to be used in existing approved software (; for screenshots, refer to ). Checkbox, drag-and-drop, and swipe interfaces were investigated, as previous research has shown that the quality of consent collected and the perceived usability of the interface may vary across these interface types [,].
Participants’ demographic data were collected, and they were then presented with a brief to complete a simulated consultation related to a low-acuity mental health presentation. This presentation was chosen because it may result in greater concerns about privacy and confidentiality than a physical health presentation [,]. Participants were randomized by computer to interact with 1 of the 9 interfaces, and metadata regarding interactions, such as click heat maps and interaction times, were recorded by the system. The only differences between the interfaces were their consent collection techniques—the other elements were identical. Each participant was only shown the single interface to which they were randomly allocated.
After completing the interaction, participants were presented with a survey evaluating their impressions. First, participants were asked to complete the standard SUS questions []; they were then presented with the QuICCDig questionnaire described earlier.
| | Blanket statement with link to privacy policy | Summary of privacy policy and details page | All policy information contained on a single page |
| Checkbox | Interface 1 | Interface 4 | Interface 7 |
| Drag and drop | Interface 2 | Interface 5 | Interface 8 |
| Swipe | Interface 3 | Interface 6 | Interface 9 |
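For reference, SUS scoring follows the standard procedure originally described by Brooke: each of the 10 items is rated from 1 to 5, positively worded (odd-numbered) items contribute their rating minus 1, negatively worded (even-numbered) items contribute 5 minus their rating, and the summed contributions are multiplied by 2.5 to give a score from 0 to 100. The Python sketch below illustrates this calculation; the function name and structure are ours for illustration, not part of the study materials.

```python
# Standard SUS scoring (Brooke, 1996): ten items rated 1-5.
# Odd items contribute (response - 1); even items contribute (5 - response);
# the summed contributions are multiplied by 2.5, yielding a 0-100 score.

def sus_score(responses: list[int]) -> float:
    """Compute a SUS score from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses, each between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 are positively worded
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Example: an all-neutral response pattern scores 50.
print(sus_score([3] * 10))  # 50.0
```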
Ethical Considerations
This research was approved by the University of Southampton Faculty of Engineering and Physical Sciences Ethics Committee (FEPS/ERGO/78443). Participants were presented with signposting information for support if the study caused any discomfort or distress and were informed of the voluntary, anonymous nature of the study. Informed consent was obtained from all participants before their participation, and participants were reminded of their ability to opt out at any time. Participants were not compensated for their participation. No personal data were used in the analysis.
Recruitment and Statistical Power
Participants aged 18 years or older who had previously interacted with a GP were recruited via snowball sampling from an initial set of posts on social media as well as from advertisements placed on a university campus. A target of 113 participants was set based on a planned linear multiple regression analysis (with an anticipated f² value of 0.15, an α value of .05, and a power of 0.8). In practice, 93 participants were recruited; 66 (71%) completed enough of the interaction to answer at least the SUS questions, and 55 (59%) completed enough of the survey to calculate a QuICCDig score. Of these 55 participants, 45% (25/55) had formal computer science–related education, 27% (15/55) had formal medical education, and 76% (42/55) were prior users of e-consultation technology.
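The a priori target above can be reproduced approximately with the conventional noncentral F formulation of the overall regression F test, in which the noncentrality parameter is f² multiplied by the total sample size. The sketch below assumes this formulation and a hypothetical number of predictors (k=9), which is not stated in the excerpt above; it is an illustration under those assumptions, not the study’s exact calculation.

```python
# A sketch of an a priori power calculation for linear multiple regression,
# assuming the conventional noncentral-F approach (lambda = f^2 * N).
# The number of predictors k is an assumption for illustration; the paper
# reports only f^2 = 0.15, alpha = .05, and power = 0.8.
from scipy import stats

def regression_power(n: int, k: int, f2: float = 0.15, alpha: float = 0.05) -> float:
    """Power of the overall F test in a multiple regression with k predictors."""
    df1, df2 = k, n - k - 1
    f_crit = stats.f.ppf(1 - alpha, df1, df2)       # critical value under H0
    return 1 - stats.ncf.cdf(f_crit, df1, df2, f2 * n)

def required_n(k: int, f2: float = 0.15, alpha: float = 0.05, power: float = 0.8) -> int:
    """Smallest N whose overall F test reaches the target power."""
    n = k + 2
    while regression_power(n, k, f2, alpha) < power:
        n += 1
    return n

print(required_n(k=9))  # approximately 114 under these assumptions
```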
In total, 1 (1%) participant who completed the QuICCDig questionnaire could not have their SUS score calculated due to a technical issue, leaving 54 (58%) of the recruited participants with valid scores for both SUS and QuICCDig. Demographic breakdowns of participants are included in .
Results
QuICCDig Synthesis
A total of 11 key papers were identified, and their content was then reviewed for themes. Themes were excluded if they were impossible to measure reliably without a human in the loop, and thus impractical to include in an online primary care patient-provider e-consultation system, such as the use of repeat-back testing [,], or if they could be collected automatically by a digital system without having to ask the patient (and therefore did not need to be explicitly asked), such as the time taken to complete a consent process or the parts of a website that were consulted. After these exclusions, 14 key themes were identified as being used in current evaluation criteria, and corresponding questions were synthesized from these themes for inclusion in the QuICCDig metric. The themes included and the questions produced are presented in , while themes identified but excluded are detailed in . The scoring metric for these questions is detailed in ; a purely illustrative sketch of the general shape of such a score is given after the tables below.
| Theme | Study | Question |
| Ability to complete the task | [,] | |
| Perceived ease of use | [,] | |
| Perceived quality of description and explanation of processes and how they work | [-] | |
| Objective understanding of processes and how they work | [,] | |
| Why it is being done and the benefits to the patient | [-] | |
| Risks to the patient inherent to the procedure | [,-] | |
| Alternatives | [,-] | |
| Recollections of consent | [,] | |
| Effect on privacy and confidentiality | [] | |
| Degree of satisfaction with the decision-making and consent process | [,,] | |
| Whether the patient knew who to ask questions to | [,] | |
| How to revoke consent | [] | |
| Patient’s feeling of involvement with the process | [,,] | |
| Anything the patient felt was missing | [] | |
| Theme | Study | Justification for lack of inclusion |
| Whether the patient was asked for consent | [] | In the context of an interaction that is guaranteed to present a prominent consent screen, asking users whether they were prompted for consent would not be expected to yield useful data compared to an in-person medical interaction, where it could be forgotten or made nonobvious. |
| Timing of information being given | [,] | Specific times at which consent information is presented can be automatically collected from an online system but are also less likely to be relevant in the context of an asynchronous patient–primary care provider interaction. |
| Who gave the information to the patient | [] | Consent information in online systems, such as primary care patient-provider e-consultation systems, is consistently provided through an automated online interface. |
| Whether the patient was asked to repeat the explanation | [] | Assessing repeated explanations for coherence and relevance would be essentially impossible to achieve without the involvement of a clinician, which is not practical for the purposes of a patient-provider e-consultation in primary care. |
| Questions relating to research consent | [] | The scope of the QuICCDig score is limited to clinical consent for consultation and assessment. |
| Questions relating to ignoring consent dialogues and continuing use of the application in any event | [] | In all the interfaces used and identified, it is not possible for the user to proceed with the consultation without interacting with the consent interface. |
| Questions relating to specific fine-grained control (eg, of cookies) | [] | Control of specific aspects of the technical function of the system is outside the scope of the QuICCDig score; rather, the overall consent process is evaluated. |
| Questions about which parts of the website were consulted | [] | In this study and in most similar interfaces identified in real-world use in the National Health Service Digital Buying Catalogue, interfaces are presented as single-page applications rather than multipage websites with multiple interaction paths. |
| Time to complete the consent and consultation process | [] | This can be captured automatically by online systems. |
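The exact QuICCDig questionnaire and scoring calculation are provided in the multimedia appendix and are not reproduced here. Purely as a hypothetical illustration of the general shape of such a metric, the sketch below assumes each themed item is scored on a scale normalized to [-1, 1] (with negative values penalizing incorrect or absent recall) and the item scores are averaged, which would bound the overall score to [-1, 1]; this is consistent with the reported theoretical maximum of 1 and the observed range of -0.775 to 0.775, but it is an assumption, not the published formula.

```python
# Hypothetical illustration only: the real QuICCDig calculation is defined in
# the study's multimedia appendix. This sketch assumes per-item scores
# normalized to [-1, 1] and averages them.

def quiccdig_like_score(item_scores: list[float]) -> float:
    """Average of per-item scores, each assumed normalized to [-1, 1]."""
    if not all(-1.0 <= s <= 1.0 for s in item_scores):
        raise ValueError("item scores must lie in [-1, 1]")
    return sum(item_scores) / len(item_scores)

print(quiccdig_like_score([1.0, 0.5, 0.0, -0.5]))  # 0.25
```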
Interface Evaluation Study
Among participants for whom both a QuICCDig and a SUS score could be calculated, a statistically significant moderate positive correlation was identified (n=54; ρ(52)=0.388; P=.004; 95% CI 0.120-0.604) between a user’s calculated SUS score and QuICCDig score (). There was no statistically significant difference in the number of study completions between mock-up interfaces (n=93; F(8,84)=1.122; P=.36), amount of information presented (n=93; F(2,90)=1.719; P=.19), or consent interface type used (n=93; F(2,90)=0.271; P=.76). The calculated QuICCDig scores ranged from –0.775 to 0.775, with a mean of 0.371 (SD 0.305); the SUS scores ranged from 13 to 40, with a mean of 32.11 (SD 6.864).
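As an illustration of this analysis, the sketch below computes a Spearman correlation with a 95% CI via the common Fisher z-transform approximation. The arrays are placeholders rather than study data, and the paper’s exact CI method is not stated in the excerpt above, so the Fisher approach is an assumption.

```python
# Sketch of a Spearman correlation with an approximate 95% CI.
# Arrays are placeholder values, not the study's data.
import numpy as np
from scipy import stats

def spearman_with_ci(x, y, alpha: float = 0.05):
    rho, p = stats.spearmanr(x, y)
    z = np.arctanh(rho)                 # Fisher z-transform
    se = 1.0 / np.sqrt(len(x) - 3)      # approximate standard error
    zcrit = stats.norm.ppf(1 - alpha / 2)
    lo, hi = np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)
    return rho, p, (lo, hi)

rng = np.random.default_rng(0)
sus = rng.uniform(13, 40, size=54)               # placeholder SUS scores
quiccdig = rng.uniform(-0.775, 0.775, size=54)   # placeholder QuICCDig scores
print(spearman_with_ci(sus, quiccdig))
```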
While there were substantial numerical differences in the mean scores between interfaces, as shown in , an ANOVA across the different mock-up interfaces for average QuICCDig score (N=55; F(8,46)=0.946; P=.49) and average SUS score (n=54; F(8,45)=1.629; P=.14) found that these differences were not statistically significant, although the averages did indicate some superiority of the first 2 information levels compared to all information being contained on the same page ( and ). Likewise, no significant difference was found in either SUS or QuICCDig scores based on previous e-consultation use (SUS: n=54; Mann-Whitney U=228.0; P=.44; QuICCDig: N=55; Mann-Whitney U=248.0; P=.62). Time elapsed interacting with the interfaces is shown in but was not statistically significantly different between experiments (N=55; F(8,46)=0.895; P=.53).
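A minimal sketch of these between-group comparisons follows, using scipy’s one-way ANOVA and Mann-Whitney U test; the data and group sizes are illustrative placeholders, not the study’s data.

```python
# Sketch of the between-interface ANOVA and the prior-use Mann-Whitney U test,
# on placeholder data grouped by interface.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = [rng.uniform(-0.775, 0.775, size=6) for _ in range(9)]  # 9 interfaces
f_stat, p_anova = stats.f_oneway(*groups)   # overall F test across interface groups

prior_users = rng.uniform(13, 40, size=42)  # placeholder SUS scores, prior users
non_users = rng.uniform(13, 40, size=12)    # placeholder SUS scores, non-users
u_stat, p_mwu = stats.mannwhitneyu(prior_users, non_users)
print(f_stat, p_anova, u_stat, p_mwu)
```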
Across the board, only 13% (7/55) of the participants recalled being offered alternatives to e-consultation, something presented only in the policy text itself. No correlation was observed between the number of participants who recalled being offered alternatives and the mock-up interface shown. Of the 7 participants who did remember alternatives and stated which ones they remembered, 6 correctly recalled at least 1 alternative option presented by the privacy notice, although only 1 recalled both alternatives (NHS 111 or speaking to the GP by another means), and 1 also produced an alternative which was not included in the notice (NHS 999).
A total of 55% (30/55) of the participants recalled making a privacy-related decision during their e-consultation. Of the 26 participants who responded when asked what they recalled deciding, 12 (46%) referenced the idea of their data being processed; 9 (35%) wrote that they had to agree to privacy information, while 6 (23%) recalled deciding not to read the privacy information. Only 5 (19%) participants gave a specific decision they had to make, for example, “agree [sic] to google analytics,” or “if I should use e-consultation at all […] as I was worried about data breaches.”
When asked about the impact that the use of e-consultation would have on the confidentiality of their medical records, of the 37 respondents who wrote a response, 3 (8%) gave a response identifying a specific change (“It’ll be visible to my medical provider”; “Only the transport risk”; and “The information […] may be accessible to the software company and Google”); 15 (41%) stated they did not believe there to be any change. Some participants (n=8, 22%) also expressed feeling that their data would be generically “more” or “less” secure, while 11 (30%) stated they did not know.
Notably, 29% (2/7) of the participants from interface 2 (the drag-and-drop group presented with a simple privacy policy link) wrote in free text that they would have liked what they described as a “simplified” or “short” version of the legal information before providing consent. No participants from any of the other groups made such a comment.
Most of the participants who responded when asked in free text whether their understanding of what an e-consultation was had changed after completing the consultation stated that it had not (13/23, 57%), while a substantial minority stated that they thought the mock-up interface was too simple for a full e-consultation (10/23, 43%).
| | Blanket with privacy policy, mean (SD) | Summary and details page, mean (SD) | All policy information contained on a single page, mean (SD) |
| Checkbox | Interface 1: 34.00 (5.249) | Interface 4: 36.00 (1.732)a | Interface 7: 26.84 (7.360) |
| Drag and drop | Interface 2: 34.43 (5.563) | Interface 5: 32.57 (5.940) | Interface 8: 26.67 (11.150)b |
| Swipe | Interface 3: 36.00 (3.162) | Interface 6: 31.84 (5.115) | Interface 9: 28.17 (10.815) |
a Italics indicate the best result.
b The worst result.
| | Blanket with privacy policy, mean (SD) | Summary and details page, mean (SD) | All policy information contained on a single page, mean (SD) |
| Checkbox | Interface 1: 0.405 (0.192) | Interface 4: 0.363 (0.063) | Interface 7: 0.192 (0.504) |
| Drag and drop | Interface 2: 0.371 (0.118) | Interface 5: 0.475 (0.155) | Interface 8: 0.075 (0.728)a |
| Swipe | Interface 3: 0.458 (0.135) | Interface 6: 0.500 (0.172)b | Interface 9: 0.329 (0.472) |
a The worst result.
b Italics indicate the best result.
| Interface number | Interaction duration (s), median (IQR) |
| 1 | 53.3 (35.9-72.4) |
| 2 | 61.0 (52.7-64.4) |
| 3 | 61.0 (59.2-62.8) |
| 4 | 57.9 (32.0-77.2) |
| 5 | 84.0 (56.1-89.7) |
| 6 | 69.3 (40.8-79.8) |
| 7 | 52.0 (39.2-71.1) |
| 8 | 95.3 (67.7-103.9) |
| 9 | 86.0 (53.2-97.2) |
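As a small worked example, the median (IQR) figures in the table above can be derived from raw per-participant interaction durations as follows; the values used are placeholders, not study data.

```python
# Deriving median (IQR) interaction durations from raw timings (placeholders).
import numpy as np

durations = np.array([53.3, 35.9, 72.4, 61.0, 48.2])  # seconds (illustrative)
q1, median, q3 = np.percentile(durations, [25, 50, 75])
print(f"{median:.1f} ({q1:.1f}-{q3:.1f})")
```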
Discussion
Principal Results
The link found between the quality of consent and perceived interface usability is a significant novel result and strongly supports the importance of high-quality interface design when building remote-access consent collection systems, such as those used for e-consultations. Designers of these systems should consider using user testing with the SUS, QuICCDig, and/or similar scoring systems to evaluate the effectiveness of their designs for gathering informed consent from users. Likewise, organizations responsible for their implementation should consider high-quality user interface design to be a requirement of such systems to effectively gather consent from patients and not merely a “nice-to-have.”
Nonetheless, across all user interfaces tested, both SUS and QuICCDig scores were objectively low. The maximum average SUS score attained by any of the interfaces was 36, for interfaces 3 and 4 (SDs 3.162 and 1.732, respectively), which falls between the fourth and sixth percentiles of raw SUS scores and equates to a letter grade of F []. Likewise, the maximum theoretical QuICCDig score is 1, yet the highest score attained by any of the interfaces was 0.5. A total of 17% (9/54) of the participants who completed the full study admitted in free text to choosing not to read the privacy information thoroughly or at all, even in the context of a study in which participants were informed beforehand of the aim to assess the informedness of consent; this may have affected the SUS scores obtained. An interface required to complete a task that is a prerequisite but ancillary to the user’s intended goal in the consultation may therefore always receive low scores. Equally, participants’ broader attitudes to privacy and medical consent may have affected their perceived usability or their desire to interact with the interface at all. The low number of participants conveying understanding of either positive or negative changes in the confidentiality of their medical records may have arisen for similar reasons. Alternative novel designs, such as those incorporating voice-over or slideshow interfaces, may warrant exploration to examine whether different modalities can deliver improvements.
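The letter grade referenced above follows the Sauro-Lewis curved grading scale for SUS. The grade boundaries in the sketch below are as commonly reported for that scale and are an assumption here, not values taken from this paper.

```python
# Mapping raw SUS scores to Sauro-Lewis curved letter grades.
# Boundaries are the commonly reported ones and are an assumption here.
GRADE_FLOORS = [  # (minimum score, grade), checked from highest to lowest
    (84.1, "A+"), (80.8, "A"), (78.9, "A-"), (77.2, "B+"), (74.1, "B"),
    (72.6, "B-"), (71.1, "C+"), (65.0, "C"), (62.7, "C-"), (51.7, "D"),
    (0.0, "F"),
]

def sus_grade(score: float) -> str:
    for floor, grade in GRADE_FLOORS:
        if score >= floor:
            return grade
    return "F"

print(sus_grade(36))  # "F", matching the best interface average reported
print(sus_grade(68))  # "C", roughly the often-cited average SUS score
```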
Fewer than 1 in 7 participants remembered being offered alternatives to completing an e-consultation, even when every design included them in the presented information. This lack of awareness may be particularly concerning for patients less confident in completing their health interactions online, such as older patients, or for patients with what they perceive to be more serious conditions, who may feel less comfortable with online contact with their health care provider []. Further research should be conducted targeting these groups specifically. Real-world patients may be aware of existing alternatives anyway, such as out-of-hours services or calling their medical practice by telephone []. However, some patients feel pushed into e-consultation use by their primary care provider to avoid alternative methodologies such as face-to-face appointments, which may affect their trust in the service []. One study conducted in Norway in 2025 found that 9.5% of the patients who used a primary care e-consultation platform said that sending an e-consultation was not their first choice []; clear display of alternative options as part of the process of beginning a consultation may benefit these patients. Similarly, while some patients who are not English speakers may feel more confident completing an e-consultation than a telephone or face-to-face consultation [], others may prefer to be seen in person []; challenges among this patient group with understanding how to contact primary care providers have been specifically highlighted in focus groups conducted by the NHS [], underscoring the importance of this information being properly conveyed.
Both the drag-and-drop summary and swipe summary interfaces had higher average QuICCDig scores, but lower average SUS scores, than all other interfaces except for the ones containing the entire policy on the consent page. This could potentially indicate a priming effect: perhaps seeing a small amount of information begets interest in more, owing to an increased level of concern, akin to the bulletproof glass effect identified by Brough et al []. However, this study’s low sample size makes drawing any strong inference from this difficult; future work could investigate this effect in greater detail, perhaps through a more qualitative analysis.
Comparison to Prior Work
Easier-to-use user interfaces appear to be associated with increased quality of consent in this study. This builds on the findings of Lindegren et al [], demonstrating the effect of consent design patterns on usability and user attention. Habib et al [] further found that different user interfaces could result in substantially different patterns in recall of consent information, although they did not test the usability of the interfaces they examined.
Research has previously highlighted issues surrounding a lack of reporting on the safety and quality of electronic health care consent collection. Ramos [], writing in the context of an HIV clinic setting, pointed out that although guidelines clearly assess the accessibility of medicolegal consent information provided in paper form, no standard has existed for evaluating patients’ consent following interactions with a digital consent collection user interface. This was once again raised as an important gap in a 2024 systematic review by Leighton et al [], which noted poor reporting on safety—underscoring the fact that in nearly a decade, there has been little progress in this field.
In research medicine, many studies have evaluated the use of technological consent acquisition user interfaces. Because the end users of these systems are research volunteers, they are likely to have more time and inclination to read consent information than members of the general public accessing health care services; yet even in this group, few definitive findings relating to usability exist []. However, usability as measured by SUS scores in research participants appears to be far better than that found in this study []. It is possible that participants view providing consent as the primary task in a research consent collection platform, whereas they view it as an ancillary one in a patient–primary care provider consultation interaction, and that this difference is reflected in the reduced SUS scores in this study. This would mirror the findings of Utz et al [] that some users provide consent for cookies to be placed by websites simply as the easiest path to completing the objective of their visit, with concerning implications for the quality and validity of medical consent.
This study focuses on traditional user interfaces and has not addressed the potential of conversational user interfaces (CUIs) to impact consent. Previous research has shown that users may accept or even prefer information given in this form []. CUIs can evoke feelings of social presence [], and while this has been shown to feel intrusive in the e-commerce context, personal connection to clinicians during consent conversations has been shown to reduce anxiety and improve clinical outcomes []. Some research suggests that “chatbot” interfaces that appear overtly as a bot may produce perceptions of expertise equal to those of interfaces that present themselves as a physician []. However, unless a bank of preset responses is used, presenting privacy information this way risks eventual “inevitable” errors [], potentiating invalid consent based on invalid information. Further research is required to evaluate the safety and efficacy of modern CUI systems in privacy-related decision-making interfaces.
Limitations
The interpretation of these results should be tempered by the nature and size of the sample and the snowball recruitment method. While the positive findings are still significant, the negative results may have limited generalizability owing to the underpowering of the study—the lower-than-targeted sample size may, for instance, have prevented the detection of actual differences in the SUS or QuICCDig scores between the interface types. The sample achieved is also heavily biased toward young, well-educated participants, including those with health care– or computer science–related education. Well-educated participants appear to consistently have higher levels of comprehension and better quality of consent as a consequence [,]; education may also have a more direct influence on privacy-protective behavior []. Results have varied internationally with respect to the effect that age has on measures of consent quality and recall, although older patients, underrepresented in this study, are known to have higher rates of neurocognitive impairment than younger patients, which presents unique issues for ensuring informed consent is collected effectively wherever possible []. Furthermore, some evidence suggests older people may have greater privacy concerns on average than younger people [].
The variability of health systems’ uptake of asynchronous e-consultation systems internationally and the apparent heterogeneity of different populations’ digital health literacy [] may also limit this study’s relevance outside of the UK NHS context. The effect of using a simulated clinical interaction in which participants were aware that consent and privacy were being studied, as opposed to using real-world patients, is also unknown and limits the direct applicability of this study to clinical practice. With rigorous ethical safeguards in place, future trials conducted directly in primary care settings with larger, more representative populations should consider using real patient interactions to evaluate these systems and explore in more detail the potential for an intention-behavior gap in these interactions with respect to informed consent.
The QuICCDig scoring system has not been formally externally reviewed, and this is not a formal validation study. There is currently no clear gold-standard metric in the field to validate against, which makes designing a robust validation study challenging. Future research could consider validating against clinician-perceived informedness, although it is important to be mindful of the biases this could introduce, or perhaps simultaneously against validated scores for digital consent, clinical consent, and/or health care research consent. However, this may be difficult to orchestrate in practice.
Conclusions
We find that contemporary user interfaces for collecting informed consent in remote electronic consultations for primary care may not reliably collect fully informed consent from patients. Furthermore, we find the perceived usability of a user interface to be significantly (P=.004) correlated with the quality of consent acquired through that interface. However, even in a study where participants knew consent was being studied, we find that almost half (25/55, 45%) of the participants did not remember making a privacy decision, and 87% (48/55) did not remember being offered alternatives to agreeing to complete their consultation. We recommend that primary care decision-makers consider the quality and usability of a remote consultation and triage application’s consent interface when making procurement decisions.
Acknowledgments
The authors thank Dr Reinder Broekstra of the Faculty of Medical Sciences at the University of Groningen in the Netherlands for his input concerning medicolegal aspects of digital consent and Dr Laura Carmichael of the School of Electronics and Computer Science at the University of Southampton for her input in the field of data protection and privacy law.
This study makes use of the System Usability Scale. The System Usability Scale was developed as part of the usability engineering program in integrated office systems development at Digital Equipment Co Ltd, Reading, United Kingdom.
This work is partially supported by the National Institute for Health Research Southampton Biomedical Research Centre.
Data Availability
The datasets generated or analyzed during this study are available from the corresponding author on reasonable request.
Conflicts of Interest
None declared.
Screenshots of the 9 user interfaces users were presented with, demonstrating the levels of privacy information presented and the methodologies used for the consent process in each.
PDF File (Adobe PDF File), 1367 KB
Quality of Informed Consent Collected Digitally questionnaire and calculation process.
PDF File (Adobe PDF File), 94 KB
References
- Clarke G, Wolters A, Dias A. Access to and delivery of general practice services: a study of patients at practices using digital and online tools. The Health Foundation. 2022. URL: https://www.health.org.uk/reports-and-analysis/briefings/access-to-and-delivery-of-general-practice-services [accessed 2022-10-10]
- Gottliebsen K, Petersson G. Limited evidence of benefits of patient operated intelligent primary care triage tools: findings of a literature review. BMJ Health Care Inform. May 2020;27(1):e100114. [FREE Full text] [CrossRef] [Medline]
- Wise J. Electronic consultations offer few benefits for GP practices, says study. BMJ. Nov 07, 2017:j5141. [CrossRef]
- Leighton C, Cooper A, Porter A, Edwards A, Joseph-Williams N. Effectiveness and safety of asynchronous telemedicine consultations in general practice: a systematic review. BJGP Open. Apr 25, 2024;8(1):BJGPO.2023.0177. [FREE Full text] [CrossRef] [Medline]
- Glock H, Jakobsson U, Borgström Bolmsjö B, Milos Nymberg V, Wolff M, Calling S. eVisits to primary care and subsequent health care contacts: a register-based study. BMC Prim Care. Aug 12, 2024;25(1):297. [CrossRef] [Medline]
- Kanani N, Vautrey R, Kettell R. Supporting general practice in 2021/22. National Health Service England. 2021. URL: https://www.england.nhs.uk/wp-content/uploads/2021/01/C1054-supporting-general-practice-in-21-22.pdf [accessed 2025-09-28]
- The NHS long term plan. National Health Service England. Jan 7, 2019. URL: https://webarchive.nationalarchives.gov.uk/ukgwa/20230418155402/https:/www.longtermplan.nhs.uk/publication/nhs-long-term-plan/ [accessed 2025-09-28]
- Greenhalgh T, Clarke A, Byng R, Dakin F, Faulkner S, Hemmings N, et al. After the disruptive innovation: how remote and digital services were embedded, blended and abandoned in UK general practice - longitudinal study. Health Soc Care Deliv Res. Jun 2025;13(31):1-37. [CrossRef] [Medline]
- Demiris G, Oliver DP, Courtney KL. Ethical considerations for the utilization of tele-health technologies in home and hospice care by the nursing profession. Nurs Adm Q. 2006;30(1):56-66. [CrossRef] [Medline]
- Solimini R, Busardò FP, Gibelli F, Sirignano A, Ricci G. Ethical and legal challenges of telemedicine in the era of the COVID-19 pandemic. Medicina (Kaunas). Nov 30, 2021;57(12):1314. [FREE Full text] [CrossRef] [Medline]
- Kaplan B. Ethics, guidelines, standards, and policy: telemedicine, COVID-19, and broadening the ethical scope. Camb Q Healthc Ethics. Jan 2022;31(1):105-118. [CrossRef] [Medline]
- Andreadis K, Buderer N, Langford AT. Patients' understanding of health information in online medical records and patient portals: analysis of the 2022 Health Information National Trends Survey. J Med Internet Res. May 30, 2025;27:e62696. [FREE Full text] [CrossRef] [Medline]
- Green B. The contestation of tech ethics: a sociotechnical approach to technology ethics in practice. J Soc Comput. Sep 2021;2(3):209-225. [CrossRef]
- Bollinger D, Kubicek K, Cotrini C, Basin D. Automating cookie consent and GDPR violation detection. In: Proceedings of the 31st USENIX Security Symposium. 2022. Presented at: USENIX Security '22; August 10-12, 2022; Boston, MA. [CrossRef]
- Bhoot AM, Shinde MA, Mishra WP. Towards the identification of dark patterns: an analysis based on end-user reactions. In: Proceedings of the 11th Indian Conference on Human-Computer Interaction. 2020. Presented at: IndiaHCI '20; November 5-8, 2020; Online. [CrossRef]
- Consent and refusal by adults with decision-making capacity: a toolkit for doctors. British Medical Association. 2019. URL: https://www.bma.org.uk/media/txrnpo3s/consent-and-refusal-by-adults-with-decision-making-capacity-guidance-updated-2025.pdf [accessed 2022-10-05]
- Guidance for providers on meeting the regulations. Care Quality Commission. 2015. URL: https://www.cqc.org.uk/sites/default/files/20150324_guidance_providers_meeting_regulations_01.pdf [accessed 2022-10-05]
- Arslan R. A review on ethical issues and rules in psychological assessment. J Fam Couns Educ. Jun 01, 2018;3(1):17-29. [CrossRef]
- COVID-19 Recovery Toolkit Tool 4: virtual consultations. Royal College of Surgeons of England. Jun 2020. URL: https://www.rcseng.ac.uk/coronavirus/recovery-of-surgical-services/tool-4/ [accessed 2022-10-10]
- Reference guide to consent for examination or treatment (second edition). United Kingdom Government Department of Health. 2009. URL: https://www.gov.uk/government/publications/reference-guide-to-consent-for-examination-or-treatment-second-edition [accessed 2022-11-09]
- R v Hallstrom ex p W (No 2), R v Gardner ex p L [1986] QB 1090. In: All England Law Reports vol 2. London. Butterworth & Co; 1986:306-320.
- Fonseca GM, Belmar-Durán M, Matamala-Santander C. Failure to obtain informed consent should also be considered an adverse event. J Dent Sci. Jun 2020;15(2):232-233. [FREE Full text] [CrossRef] [Medline]
- Ergonomics of human-system interaction part 11: usability: definitions and concepts. International Organization for Standardization. 2018. URL: https://www.iso.org/standard/63500.html [accessed 2026-01-22]
- Brooke J. SUS: a 'quick and dirty' usability scale. In: Jordan PW, Thomas B, McClelland IL, Weerdmeester B, editors. Usability Evaluation in Industry. Boca Raton, FL. CRC Press; 1996.
- Hajesmaeel-Gohari S, Khordastan F, Fatehi F, Samzadeh H, Bahaadinbeigy K. The most used questionnaires for evaluating satisfaction, usability, acceptance, and quality outcomes of mobile health. BMC Med Inform Decis Mak. Jan 27, 2022;22(1):22. [FREE Full text] [CrossRef] [Medline]
- Sepucha KR, Fagerlin A, Couper MP, Levin CA, Singer E, Zikmund-Fisher BJ. How does feeling informed relate to being informed? The DECISIONS survey. Med Decis Making. 2010;30(5 Suppl):77S-84S. [CrossRef] [Medline]
- Catalogue solutions. NHS Digital Buying Catalogue. URL: https://buyingcatalogue.digital.nhs.uk/catalogue-solutions?selectedCapabilityIds=43&selectedEpicIds=E00001.E00019 [accessed 2023-01-30]
- Nielsen J. Enhancing the explanatory power of usability heuristics. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1994. Presented at: CHI '94; April 24-28, 1994; Boston, MA. [CrossRef]
- Lindegren D, Karegar F, Kane B, Pettersson JS. An evaluation of three designs to engage users when providing their consent on smartphones. Behav Inf Technol. Dec 17, 2019;40(4):398-414. [CrossRef]
- Singh AK, Upadhyaya N, Seth A, Hu X, Sastry N, Mondal M. What cookie consent notices do users prefer: a study in the wild. In: Proceedings of the 2022 European Symposium on Usable Security. 2022. Presented at: EuroUSEC '22; September 29-30, 2022:28-39; Karlsruhe, Germany. [CrossRef]
- Rapaport J, Bellringer S, Pinfold V, Huxley P. Carers and confidentiality in mental health care: considering the role of the carer's assessment: a study of service users', carers' and practitioners' views. Health Soc Care Community. Jul 2006;14(4):357-365. [CrossRef] [Medline]
- Mork MJ, Price S, Best K. Privacy versus care--the shifting balance in mental health. Fam Syst Health. Mar 2016;34(1):56-57. [CrossRef] [Medline]
- Fink AS, Prochazka AV, Henderson WG, Bartenfeld D, Nyirenda C, Webb A, et al. Enhancement of surgical informed consent by addition of repeat back: a multicenter, randomized controlled clinical trial. Ann Surg. Jul 2010;252(1):27-36. [CrossRef] [Medline]
- Brezis M, Israel S, Weinstein-Birenshtock A, Pogoda P, Sharon A, Tauber R. Quality of informed consent for invasive procedures. Int J Qual Health Care. Oct 2008;20(5):352-357. [CrossRef] [Medline]
- Habib H, Li M, Young E, Cranor LF. “Okay, whatever”: an evaluation of cookie consent interfaces. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 2022. Presented at: CHI '22; April 29-May 5, 2022; New Orleans, LA. [CrossRef]
- Spatz ES, Bao H, Herrin J, Desai V, Ramanan S, Lines L, et al. Quality of informed consent documents among US hospitals: a cross-sectional study. BMJ Open. May 19, 2020;10(5):e033299. [FREE Full text] [CrossRef] [Medline]
- Joffe S, Cook EF, Cleary PD, Clark JW, Weeks JC. Quality of informed consent: a new measure of understanding among research subjects. J Natl Cancer Inst. Jan 17, 2001;93(2):139-147. [CrossRef] [Medline]
- Lühnen J, Mühlhauser I, Steckelberg A. The quality of informed consent forms-a systematic review and critical analysis. Dtsch Arztebl Int. Jun 01, 2018;115(22):377-383. [FREE Full text] [CrossRef] [Medline]
- Gröndahl L. Public knowledge of digital cookies: exploring the design of cookie consent forms. KTH Royal Institute of Technology. 2020. URL: https://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A1470723&dswid=-1949 [accessed 2022-11-12]
- Sauro J, Lewis JR. Standardized usability questionnaires. In: Sauro J, Lewis JR, editors. Quantifying the User Experience: Practical Statistics for User Research. Burlington, MA. Morgan Kaufmann; 2012:185-240.
- Mold F, Hendy J, Lai YL, de Lusignan S. Electronic consultation in primary care between providers and patients: systematic review. JMIR Med Inform. Dec 03, 2019;7(4):e13042. [FREE Full text] [CrossRef] [Medline]
- Kristiansen E, Atherton H, Austad B, Bergmo TS, Norberg BL, Salisbury C, et al. Patients' use of e-consultations as an alternative to other general practitioner services: cross-sectional survey study. J Med Internet Res. Jan 08, 2025;27:e55158. [FREE Full text] [CrossRef] [Medline]
- Khan N, Pitchforth E, Winder R, Abel G, Clark CE, Cockcroft E, et al. What helps patients access web-based services in primary care? Free-text analysis of patient responses to the Di-Facto questionnaire. BMC Prim Care. Jan 10, 2024;25(1):20. [FREE Full text] [CrossRef] [Medline]
- Ge X, Chappell P, Ledger J, Bakhai M, Clarke GM. The use of online consultation systems and patient experience of primary care: cross-sectional analysis using the general practice patient survey. J Med Internet Res. Jul 26, 2024;26:e51272. [FREE Full text] [CrossRef] [Medline]
- Leung K. Use of an electronic consultation system in an inner city general practice: a mixed-methods service evaluation. BMJ Open Qual. Aug 04, 2025;14(3):e002741. [FREE Full text] [CrossRef] [Medline]
- NHS South, Central and West Commissioning Support Unit, NHS England. Community languages translation and interpreting services. Support and Transformation for Health and Care. Dec 2024. URL: https://tinyurl.com/77jfktuv [accessed 2025-09-26]
- Brough AR, Norton DA, Sciarappa SL, John LK. The bulletproof glass effect: unintended consequences of privacy notices. J Mark Res. 2022;59(4):739-754. [CrossRef]
- Ramos SR. User-centered design, experience, and usability of an electronic consent user interface to facilitate informed decision-making in an HIV clinic. Comput Inform Nurs. Nov 2017;35(11):556-564. [FREE Full text] [CrossRef] [Medline]
- De Sutter E, Zaçe D, Boccia S, Di Pietro ML, Geerts D, Borry P, et al. Implementation of electronic informed consent in biomedical research and stakeholders' perspectives: systematic review. J Med Internet Res. Oct 08, 2020;22(10):e19129. [FREE Full text] [CrossRef] [Medline]
- Robins D, Brody R, Jeong IC, Parvanova I, Liu J, Finkelstein J. Towards a highly usable, mobile electronic platform for patient recruitment and consent management. Stud Health Technol Inform. Jun 16, 2020;270:1066-1070. [CrossRef] [Medline]
- Utz C, Degeling M, Fahl S, Schaub F, Holz T. (Un)informed consent: studying GDPR consent notices in the field. In: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 2019. Presented at: CCS '19; November 11-15, 2019; London, United Kingdom. [CrossRef]
- Brüggemeier B, Lalone P. Perceptions and reactions to conversational privacy initiated by a conversational user interface. Comput Speech Lang. Jan 2022;71:101269. [CrossRef]
- Sohn S. Can conversational user interfaces be harmful? The undesirable effects on privacy concern. In: Proceedings of the ICIS 2019. 2019. Presented at: ICIS '19; December 15-18, 2019; Munich, Germany. URL: https://aisel.aisnet.org/icis2019/cyber_security_privacy_ethics_IS/cyber_security_privacy/29/
- Anandaiah A, Rock L. Twelve tips for teaching the informed consent conversation. Med Teach. Apr 2019;41(4):465-470. [CrossRef] [Medline]
- Jin E, Eastin M. Towards more trusted virtual physicians: the combinative effects of healthcare chatbot design cues and threat perception on health information trust. Behav Inf Technol. May 02, 2024;44(4):829-842. [CrossRef]
- Harkous H, Fawaz K, Shin KG, Aberer K. PriBots: conversational privacy with chatbots. In: Proceedings of the Twelfth Symposium on Usable Privacy and Security. 2016. Presented at: SOUPS '16; June 22-24, 2016; Denver, CO. URL: https://infoscience.epfl.ch/server/api/core/bitstreams/a144b30e-a202-47a8-9772-5dbf472d8f7d/content
- Fons-Martinez J, Murciano-Gamborino C, Ferrer-Albero C, Vergara-Hernandez C, Diez-Domingo J. Digital informed consent/assent in clinical trials among pregnant women, minors, and adults: multicountry cross-sectional evaluation of comprehension and satisfaction. JMIR Hum Factors. Aug 15, 2025;12:e65569. [FREE Full text] [CrossRef] [Medline]
- Nnabugwu II, Ugwumba FO, Udeh EI, Anyimba SK, Ozoemena OF. Informed consent for clinical treatment in low-income setting: evaluating the relationship between satisfying consent and extent of recall of consent information. BMC Med Ethics. Dec 02, 2017;18(1):69. [FREE Full text] [CrossRef] [Medline]
- Boerman SC, Kruikemeier S, Zuiderveen Borgesius FJ. Exploring motivations for online privacy protection behavior: insights from panel data. Commun Res. Oct 05, 2018;48(7):953-977. [CrossRef]
- Ng IK. Informed consent in clinical practice: old problems, new challenges. J R Coll Physicians Edinb. Jun 2024;54(2):153-158. [CrossRef] [Medline]
- Prince C, Omrani N, Maalaoui A, Dabic M, Kraus S. Are we living in surveillance societies and is privacy an illusion? An empirical study on privacy literacy and privacy concerns. IEEE Trans Eng Manage. Oct 2023;70(10):3553-3570. [CrossRef]
- Zeeb H, Maaß J, Schultz T, Haug U, Pigeot I, Schüz B. Digital Public Health: Interdisciplinary Perspectives. Cham, Switzerland. Springer; 2025.
Abbreviations
| CUI: conversational user interface |
| GP: general practitioner |
| NHS: National Health Service |
| QuICCDig: Quality of Informed Consent Collected Digitally |
| SUS: System Usability Scale |
Edited by A Kushniruk; submitted 03.Jun.2025; peer-reviewed by O Akinsola, M Beltrão, R Marshall, M Al Zoubi, D Agbo; comments to author 10.Sep.2025; revised version received 15.Dec.2025; accepted 02.Jan.2026; published 09.Feb.2026.
Copyright©Curtis Parfitt-Ford, Lisa Ballard, Adriane Chapman. Originally published in JMIR Human Factors (https://humanfactors.jmir.org), 09.Feb.2026.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.