Search Articles

Search Results (1 to 5 of 5 Results)

Discrimination of Radiologists' Experience Level Using Eye-Tracking Technology and Machine Learning: Case Study

Lim et al [15] identified several features that can be extracted from eye-tracking data, including pupil size, saccade, fixations, velocity, blink, pupil position, electrooculogram, and gaze point, to be used in machine learning models. Among these features, fixation was the most commonly used feature in the studies reviewed. Shamyuktha et al [16] developed a machine learning framework using eye gaze data such as saccade latency and amplitude to classify expert and nonexpert radiologists.
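As a rough illustration of how aggregated eye-tracking features of this kind might feed an expertise classifier, the sketch below trains a cross-validated model on placeholder fixation and saccade features. The feature names, the random-forest classifier, and all values are illustrative assumptions; this is not the framework of Shamyuktha et al [16].

```python
# Hypothetical sketch: classifying expert vs nonexpert readers from
# aggregated eye-tracking features (not the published pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per reading session: mean fixation duration (ms), fixation count,
# saccade latency (ms), saccade amplitude (deg) -- placeholder values.
X = rng.normal(size=(40, 4))
y = np.array([0] * 20 + [1] * 20)  # 0 = nonexpert, 1 = expert (placeholder labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```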

Stanford Martinez, Carolina Ramirez-Tamayo, Syed Hasib Akhter Faruqui, Kal Clark, Adel Alaeddini, Nicholas Czarnek, Aarushi Aggarwal, Sahra Emamzadeh, Jeffrey R Mock, Edward J Golob

JMIR Form Res 2025;9:e53928

An Objective System for Quantitative Assessment of Television Viewing Among Children (Family Level Assessment of Screen Use in the Home-Television): System Development Study

Prior gaze detection systems focused on estimating gaze direction from high-resolution images of eyes recorded on mobile phones, tablets, or laptops, where the distances were less than a meter [30-34]. Unfortunately, the FLASH-TV gaze detector had to work with small facial image sizes (typically …). We adapted the Gaze360 approach of Kellnhofer et al [35] for FLASH-TV gaze estimation using a publicly available code base [36].
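A minimal sketch of the general idea, assuming a pretrained gaze network that takes an upsampled face crop and returns (pitch, yaw) angles; the loading helper, input size, and angle-to-vector convention below are illustrative assumptions, not the Gaze360 code base [36] actually used by FLASH-TV.

```python
# Illustrative sketch of turning predicted gaze angles into a direction vector.
import numpy as np

def gaze_angles_to_vector(pitch: float, yaw: float) -> np.ndarray:
    """One common convention for converting pitch/yaw (radians) to a 3D unit gaze vector."""
    return np.array([
        -np.cos(pitch) * np.sin(yaw),
        -np.sin(pitch),
        -np.cos(pitch) * np.cos(yaw),
    ])

# face_crop = read_small_face_crop(frame)        # assumed helper: low-res crop from a distant camera
# face_input = upsample(face_crop, (224, 224))   # assumed helper: resize to the network input
# pitch, yaw = gaze_model(face_input)            # assumed pretrained gaze network
pitch, yaw = 0.1, -0.3                            # placeholder angles for illustration
gaze_vec = gaze_angles_to_vector(pitch, yaw)
print(gaze_vec, np.linalg.norm(gaze_vec))         # unit-length gaze direction
```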

Anil Kumar Vadathya, Salma Musaad, Alicia Beltran, Oriana Perez, Leo Meister, Tom Baranowski, Sheryl O Hughes, Jason A Mendoza, Ashutosh Sabharwal, Ashok Veeraraghavan, Teresia O'Connor

JMIR Pediatr Parent 2022;5(1):e33569

Identification of Social Engagement Indicators Associated With Autism Spectrum Disorder Using a Game-Based Mobile App: Comparative Study of Gaze Fixation and Visual Scanning Methods

We then used an open-source facial landmark annotation platform called OpenFace to obtain automated estimates of gaze directions [35,36]. Each frame with an identifiable face was assigned a coordinate pair (x,y) representing the direction of the individual's gaze. The value of x ranges from –1 (indicating a leftward gaze) to 1 (indicating a rightward gaze); similarly, the value of y ranges from –1 (indicating a downward gaze) to 1 (indicating an upward gaze), as shown in Figure 3.
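The sketch below illustrates how per-frame (x, y) gaze coordinates of this kind could be summarized into the fraction of frames spent gazing in each direction. The column names and table layout are assumptions for illustration, not the study's actual OpenFace output format.

```python
# Hypothetical summary of normalized per-frame gaze coordinates,
# with x in [-1, 1] (left to right) and y in [-1, 1] (down to up).
import pandas as pd

def summarize_gaze(frames: pd.DataFrame) -> dict:
    """Return the fraction of frames gazing left, right, down, and up."""
    return {
        "left":  float((frames["gaze_x"] < 0).mean()),
        "right": float((frames["gaze_x"] > 0).mean()),
        "down":  float((frames["gaze_y"] < 0).mean()),
        "up":    float((frames["gaze_y"] > 0).mean()),
    }

frames = pd.DataFrame({
    "gaze_x": [-0.4, -0.1, 0.3, 0.6],   # placeholder per-frame values
    "gaze_y": [0.2, -0.3, 0.1, -0.5],
})
print(summarize_gaze(frames))
```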

Maya Varma, Peter Washington, Brianna Chrisman, Aaron Kline, Emilie Leblanc, Kelley Paskov, Nate Stockham, Jae-Yoon Jung, Min Woo Sun, Dennis P Wall

J Med Internet Res 2022;24(2):e31830

Sex Differences in Electronic Health Record Navigation Strategies: Secondary Data Analysis

In these preliminary studies, we incorporated eye and screen tracking to determine whether gaze metrics could be used as surrogates for EHR performance. We demonstrated that a number of eye-tracking metrics correlated with the recognition of embedded safety items within the chart for the entire cohort.
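As a hedged illustration of relating a gaze metric to a binary recognition outcome, the sketch below computes a point-biserial correlation on placeholder data. The choice of metric, the values, and the statistical test are assumptions for illustration, not the study's specific analysis.

```python
# Illustrative correlation between a gaze metric and recognition of an
# embedded safety item (0/1), using placeholder data.
from scipy.stats import pointbiserialr

recognized = [1, 0, 1, 1, 0, 1, 0, 1]                      # 1 = safety item recognized
dwell_time_s = [12.1, 4.3, 9.8, 15.0, 3.7, 11.2, 5.1, 13.4]  # placeholder dwell times (s)

r, p = pointbiserialr(recognized, dwell_time_s)
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")
```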

Daniel R Allen Seifer, Karess Mcgrath, Gretchen Scholl, Vishnu Mohan, Jeffrey A Gold

JMIR Hum Factors 2021;8(2):e25957

Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation

This results in a binary classification (no screen gaze, screen gaze) for each 0.5 seconds of video (Figure 2: classifying the doctor's computer screen gaze using face key point estimation). The purpose of the dialogue classifier was to detect when the doctor and patient were engaging in conversation.
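A minimal sketch of aggregating per-frame screen-gaze predictions into one binary label per 0.5-second window. The 30 fps frame rate and majority-vote rule are illustrative assumptions, not the authors' exact procedure, which classifies gaze from face key point estimation.

```python
# Hypothetical windowing of per-frame 0/1 screen-gaze predictions.
from typing import List

def window_labels(frame_labels: List[int], fps: int = 30,
                  window_s: float = 0.5) -> List[int]:
    """Majority-vote per-frame 0/1 gaze labels into fixed-length windows."""
    win = max(1, int(round(fps * window_s)))
    out = []
    for start in range(0, len(frame_labels), win):
        chunk = frame_labels[start:start + win]
        out.append(1 if sum(chunk) * 2 >= len(chunk) else 0)
    return out

frames = [1] * 20 + [0] * 25 + [1] * 15   # placeholder per-frame predictions
print(window_labels(frames))              # one label per 0.5 s of video
```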

Samar Helou, Victoria Abou-Khalil, Riccardo Iacobucci, Elie El Helou, Ken Kiyono

J Med Internet Res 2021;23(5):e25218