Search Results (1 to 5 of 5 Results)
Results by journal: 2 Journal of Medical Internet Research, 1 JMIR Formative Research, 1 JMIR Human Factors, 1 JMIR Pediatrics and Parenting.

Lim et al [15] identified several features that can be extracted from eye-tracking data for use in machine learning models, including pupil size, saccades, fixations, velocity, blinks, pupil position, electrooculogram, and gaze point. Among these, fixation was the feature most commonly used in the studies reviewed.
Shamyuktha et al [16] developed a machine learning framework that classifies expert and nonexpert radiologists from eye gaze data such as saccade latency and amplitude.
JMIR Form Res 2025;9:e53928
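Neither paper's pipeline is reproduced in this excerpt; as a rough, hedged illustration of the general idea, the Python sketch below derives a few fixation- and saccade-style features from raw gaze samples (using a much-simplified dispersion-based fixation detector) and feeds them to an off-the-shelf scikit-learn classifier. The thresholds, toy data, and expert/nonexpert labels are all placeholder assumptions, not values from the cited studies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gaze_features(x, y, t, dispersion_thresh=0.01, min_fix_dur=0.1):
    """Return [fixation count, mean fixation duration, mean gaze velocity].

    x, y are normalized gaze coordinates, t is time in seconds. Fixations are
    found with a much-simplified dispersion (I-DT style) rule.
    """
    velocity = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
    fixations, start = [], 0
    for end in range(1, len(x)):
        wx, wy = x[start:end + 1], y[start:end + 1]
        dispersion = (wx.max() - wx.min()) + (wy.max() - wy.min())
        if dispersion > dispersion_thresh:  # window dispersed: close the fixation
            duration = t[end - 1] - t[start]
            if duration >= min_fix_dur:
                fixations.append(duration)
            start = end
    return np.array([len(fixations),
                     np.mean(fixations) if fixations else 0.0,
                     velocity.mean()])

# Toy data: two synthetic gaze recordings (60 Hz, 10 s) with placeholder labels.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 600)
recordings = [(rng.normal(0, s, t.size).cumsum() * 1e-3,
               rng.normal(0, s, t.size).cumsum() * 1e-3) for s in (0.5, 2.0)]
X = np.vstack([gaze_features(x, y, t) for x, y in recordings])
labels = np.array([1, 0])  # 1 = expert, 0 = nonexpert (placeholder)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
```

In practice the feature vector would mirror whichever signals the reviewed studies actually report (pupil size, blink rate, saccade latency, and so on), with many more recordings than this toy example.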

Prior gaze detection systems focused on estimating gaze direction from high-resolution images of eyes recorded on mobile phones, tablets, or laptops at distances of less than a meter [30-34]. Unfortunately, the FLASH-TV gaze detector had to work with small facial image sizes (typically …
We adapted the Gaze360 approach of Kellnhofer et al [35] for FLASH-TV gaze estimation using a publicly available code base [36].
JMIR Pediatr Parent 2022;5(1):e33569
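The actual adaptation lives in the Gaze360 code base [36], which defines its own multi-frame model and weights; the sketch below only illustrates the generic PyTorch inference pattern (normalize a face crop, predict a yaw/pitch gaze vector) with a small placeholder network, and should not be read as the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

class TinyGazeNet(nn.Module):
    """Placeholder network: face crop -> (yaw, pitch) gaze direction in radians."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))

    def forward(self, x):
        return self.backbone(x)

# Standard ImageNet-style preprocessing; Gaze360 itself defines its own pipeline.
preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = TinyGazeNet().eval()
# face_crop would be a (possibly small) PIL image of the detected face:
# with torch.no_grad():
#     yaw_pitch = model(preprocess(face_crop).unsqueeze(0))
```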

We then used OpenFace, an open-source facial landmark annotation platform, to obtain automated estimates of gaze directions [35,36]. Each frame with an identifiable face was assigned a coordinate pair (x, y) representing the direction of the individual's gaze. The value of x ranges from –1 (a leftward gaze) to 1 (a rightward gaze); similarly, the value of y ranges from –1 (a downward gaze) to 1 (an upward gaze), as shown in Figure 3.
J Med Internet Res 2022;24(2):e31830
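A minimal sketch of that post-processing step, assuming OpenFace's standard CSV output columns (frame, gaze_angle_x, gaze_angle_y, in radians); the clipping range used to map the angles onto [-1, 1] and the sign convention are assumptions rather than details taken from the study.

```python
import numpy as np
import pandas as pd

MAX_ANGLE = np.pi / 4  # assumption: ±45° of gaze angle spans the full [-1, 1] range

def normalized_gaze(csv_path: str) -> pd.DataFrame:
    """Map OpenFace gaze angles (radians) to per-frame (x, y) in [-1, 1]."""
    df = pd.read_csv(csv_path, skipinitialspace=True)
    return pd.DataFrame({
        "frame": df["frame"],
        # x: -1 leftward ... 1 rightward.
        "x": np.clip(df["gaze_angle_x"] / MAX_ANGLE, -1.0, 1.0),
        # y: -1 downward ... 1 upward; the sign flip assumes OpenFace's y angle
        # grows as the gaze moves down in the image.
        "y": np.clip(-df["gaze_angle_y"] / MAX_ANGLE, -1.0, 1.0),
    })
```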

Sex Differences in Electronic Health Record Navigation Strategies: Secondary Data Analysis
In these preliminary studies, we incorporated eye and screen tracking to determine whether gaze metrics could serve as surrogates for EHR performance. We demonstrated that several eye-tracking metrics correlated with recognition of embedded safety items within the chart across the entire cohort.
JMIR Hum Factors 2021;8(2):e25957
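As a hedged illustration of that kind of analysis, the sketch below correlates a single gaze metric (total fixation time on an embedded safety item) with a binary recognition outcome; the metric, toy data, and use of a point-biserial correlation (Pearson r with a binary variable) are assumptions, not the study's actual statistics.

```python
import numpy as np
from scipy.stats import pearsonr

# Toy per-participant data (placeholders): seconds of fixation on an embedded
# safety item, and whether the participant recognized that item (1) or not (0).
fixation_time = np.array([2.1, 0.4, 3.5, 1.2, 0.2, 2.8])
recognized = np.array([1, 0, 1, 1, 0, 1])

r, p = pearsonr(fixation_time, recognized)  # point-biserial correlation
print(f"r = {r:.2f}, p = {p:.3f}")
```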

This results in a binary classification (no screen gaze, screen gaze) for each 0.5-second segment of video.
Figure 2. Classifying the doctor's computer screen gaze using face key point estimation.
The purpose of the dialogue classifier was to detect when the doctor and patient were engaging in conversation.
J Med Internet Res 2021;23(5):e25218
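The paper's exact classifier is not described in this excerpt; the sketch below shows one plausible way to turn per-frame face key points into a screen-gaze label for each 0.5-second window, using a simple head-yaw proxy and a majority vote. The key-point names, threshold, frame rate, and the assumption that the camera sits near the screen are all placeholders.

```python
import numpy as np

FPS = 30                 # assumed video frame rate
WINDOW = int(0.5 * FPS)  # frames per 0.5-second segment

def frame_is_screen_gaze(keypoints, yaw_thresh=0.15):
    """keypoints: dict with 'left_eye', 'right_eye', 'nose' as (x, y) pixels.

    Uses the horizontal offset of the nose from the eye midpoint, normalized by
    eye separation, as a crude head-yaw proxy; assumes the camera is mounted
    near the doctor's screen, so a near-frontal head pose implies screen gaze.
    """
    left, right, nose = (np.asarray(keypoints[k]) for k in ("left_eye", "right_eye", "nose"))
    eye_mid = (left + right) / 2
    eye_span = np.linalg.norm(right - left) + 1e-6
    return abs(nose[0] - eye_mid[0]) / eye_span < yaw_thresh

def classify_windows(per_frame_keypoints):
    """Majority vote over each 0.5-second block: 1 = screen gaze, 0 = no screen gaze."""
    labels = []
    for i in range(0, len(per_frame_keypoints) - WINDOW + 1, WINDOW):
        votes = [frame_is_screen_gaze(kp) for kp in per_frame_keypoints[i:i + WINDOW]]
        labels.append(int(sum(votes) > len(votes) / 2))
    return labels
```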