Search Results (1 to 4 of 4 Results)

Crowdsourced approaches to data set labeling are growing in popularity, and beneficial effects of crowdsourcing have been demonstrated in health care–related tasks, including biomedical imaging analysis [10-14].
Using crowdsourcing for biomedical image labeling is challenged by the complexity of the tasks and the need to ensure label quality control. The design of the user interface for collecting crowd opinions and the metrics used for assessing opinion quality are key to successful results.
J Med Internet Res 2024;26:e51397
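
To make the quality-control point above concrete, the sketch below aggregates redundant crowd labels for each image by majority vote and scores each worker's reliability as agreement with the consensus. It is a minimal, hypothetical example; the image IDs, worker IDs, and labels are illustrative assumptions and are not drawn from the cited study.

```python
# Minimal sketch (illustrative only): aggregate redundant crowd labels by
# majority vote and estimate each worker's quality as agreement with consensus.
from collections import Counter, defaultdict

# (image_id, worker_id, label) triples; all values are invented for the example
crowd_labels = [
    ("img_01", "w1", "lesion"), ("img_01", "w2", "lesion"), ("img_01", "w3", "normal"),
    ("img_02", "w1", "normal"), ("img_02", "w2", "normal"), ("img_02", "w3", "normal"),
]

def majority_vote(labels):
    """Consensus label per image: the most frequent crowd label."""
    by_image = defaultdict(list)
    for image_id, _, label in labels:
        by_image[image_id].append(label)
    return {img: Counter(votes).most_common(1)[0][0] for img, votes in by_image.items()}

def worker_agreement(labels, consensus):
    """Per-worker quality metric: fraction of labels matching the consensus."""
    hits, totals = Counter(), Counter()
    for image_id, worker_id, label in labels:
        totals[worker_id] += 1
        hits[worker_id] += int(label == consensus[image_id])
    return {worker: hits[worker] / totals[worker] for worker in totals}

consensus = majority_vote(crowd_labels)
print(consensus)                                   # {'img_01': 'lesion', 'img_02': 'normal'}
print(worker_agreement(crowd_labels, consensus))   # per-worker agreement rates
```

Workers whose agreement falls below a chosen threshold could then be down-weighted or excluded, which is one simple way an opinion-quality metric can feed back into label aggregation.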

One study reported that it took 6 months to obtain expert labels for 340 sentences from radiology reports written by 2 radiologists, whereas the authors obtained crowdsourced annotations of 717 sentences in under 2 days at a cost of less than $600. A classification algorithm trained on these crowdsourced annotations outperformed an algorithm trained on the expert-labeled data, owing to the increased volume of available training examples [32].
JMIR Med Inform 2023;11:e38412
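
As a rough illustration of the workflow described above, the sketch below trains a simple sentence classifier on crowdsourced annotations using scikit-learn. The sentences, label names, and model choice are assumptions made for the example and are not taken from the cited study [32].

```python
# Illustrative sketch only: train a text classifier on crowdsourced sentence
# labels (e.g., majority-voted annotations of radiology-report sentences).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

sentences = [
    "No acute cardiopulmonary abnormality.",
    "There is a small right pleural effusion.",
    "Lungs are clear bilaterally.",
    "Opacity in the left lower lobe may represent pneumonia.",
]
crowd_labels = ["normal", "abnormal", "normal", "abnormal"]  # aggregated crowd votes

# Bag-of-words features plus a linear classifier: a deliberately simple baseline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(model, sentences, crowd_labels, cv=2)
print("cross-validated accuracy:", scores.mean())

model.fit(sentences, crowd_labels)
print(model.predict(["Small left pleural effusion is noted."]))
```

The point of the sketch is the data flow, not the model: with enough crowdsourced labels, even a simple classifier can benefit from the larger training set that crowdsourcing makes affordable.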

Hosio et al [43] developed a crowdsourced online system named Back Pain Workshop. They collected 2 knowledge bases, 1 from clinical professionals and 1 from nonprofessionals. Professionals found the system beneficial for self-reflection and for educating new patients, while nonprofessionals valued decision support that was reliable yet also respected nonprofessional opinions [43].
JMIR Hum Factors 2022;9(3):e38265
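
To make the two-knowledge-base idea concrete, the sketch below merges ratings of treatment options from a professional and a nonprofessional knowledge base into a single ranked list. The option names, scores, and 50/50 weighting are purely hypothetical and are not taken from Back Pain Workshop.

```python
# Hypothetical sketch: combine ratings from two knowledge bases (clinical
# professionals and nonprofessionals) into one ranked list of options.
professional_kb = {"exercise therapy": 4.5, "heat therapy": 3.2, "bed rest": 1.8}
nonprofessional_kb = {"exercise therapy": 3.9, "heat therapy": 4.1, "bed rest": 2.9}

def combined_ranking(pro, lay, pro_weight=0.5):
    """Weighted average of the two rating sources, highest score first."""
    options = set(pro) | set(lay)
    scored = {
        opt: pro_weight * pro.get(opt, 0.0) + (1 - pro_weight) * lay.get(opt, 0.0)
        for opt in options
    }
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

for option, score in combined_ranking(professional_kb, nonprofessional_kb):
    print(f"{option}: {score:.2f}")
```

The weighting parameter is one way such a system could let the professional and nonprofessional perspectives both influence the final recommendation.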

The study is a crowdsourced initiative [20] that will involve remote enrollment. It will use a cross-platform phone app to deliver surveys; allow for the input of COVID-19–related data; and allow participants to connect to third-party sources of wearable data, such as Fitbit (Fitbit LLC). By prospectively collecting regular mental well-being and COVID-19 survey data alongside historic and ongoing health-related wearable device data, we hope to address the following objectives.
JMIR Res Protoc 2021;10(12):e32587
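
As a sketch of what connecting to a third-party wearable source can look like, the snippet below pulls a daily step-count time series from the Fitbit Web API. It assumes an OAuth 2.0 access token has already been obtained through a consent flow; the token value and date range are placeholders, and this is not the study's actual data pipeline.

```python
# Minimal sketch: fetch a daily step-count time series from the Fitbit Web API.
# Assumes a valid OAuth 2.0 access token obtained elsewhere (placeholder below).
import requests

ACCESS_TOKEN = "REPLACE_WITH_OAUTH2_ACCESS_TOKEN"   # placeholder, not a real token
START_DATE, END_DATE = "2021-01-01", "2021-01-31"   # placeholder date range

url = (
    "https://api.fitbit.com/1/user/-/activities/steps"
    f"/date/{START_DATE}/{END_DATE}.json"
)
response = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
response.raise_for_status()

# The response contains an "activities-steps" list of {"dateTime": ..., "value": ...}
for day in response.json()["activities-steps"]:
    print(day["dateTime"], day["value"])
```

In practice, such a pull would run on the study's backend for each consented participant and be stored alongside the app-delivered survey responses.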