TY - JOUR
AU - Meidani, Zahra
AU - Omidvar, Aydine
AU - Akbari, Hossein
AU - Asghari, Fatemeh
AU - Khajouei, Reza
AU - Nazemi, Zahra
AU - Nabovati, Ehsan
AU - Holl, Felix
PY - 2024
DA - 2024/9/9
TI - Evaluating the Usability and Quality of a Clinical Mobile App for Assisting Physicians in Head Computed Tomography Scan Ordering: Mixed Methods Study
JO - JMIR Hum Factors
SP - e55790
VL - 11
KW - mobile apps
KW - user-centered design
KW - user-computer interface
KW - physicians
KW - tomography
KW - x-ray computed
KW - mobile phone
AB - Background: Among the numerous factors contributing to health care providers’ engagement with mobile apps, including user characteristics (eg, dexterity, anatomy, and attitude) and mobile features (eg, screen and button size), the usability and quality of apps have been identified as the most influential. Objective: This study aims to investigate the usability and quality of the Head Computed Tomography Scan Appropriateness Criteria (HAC) mobile app for physicians’ computed tomography scan ordering. Methods: Our study design was based primarily on methodological triangulation, using mixed methods research involving quantitative and qualitative think-aloud usability testing, quantitative analysis of the Mobile App Rating Scale (MARS) for quality assessment, and debriefing across 3 phases. In total, 16 medical interns participated in the quality assessment and in testing the usability characteristics of the HAC app, including efficiency, effectiveness, learnability, errors, and satisfaction. Results: The efficiency and effectiveness of the HAC app were deemed satisfactory, with ratings of 97.8% and 96.9%, respectively. The MARS assessment indicated a favorable overall quality score for the HAC app (82 out of 100). Of the 4 MARS subscales, Information (73.37 out of 100) and Engagement (73.48 out of 100) had the lowest scores, while Aesthetics had the highest (87.86 out of 100). Item-level analysis of each MARS subscale revealed that in the Engagement subscale, the HAC app’s lowest-scoring item was “customization” (63.6 out of 100), and in the Functionality subscale, its lowest-scoring item was “performance” (67.4 out of 100). Qualitative think-aloud usability testing of the HAC app identified notable usability issues grouped into 8 main categories: lack of finger-friendly touch targets, poor search capabilities, input problems, inefficient data presentation and information control, unclear control and confirmation, lack of predictive capabilities, poor assistance and support, and unclear navigation logic. Conclusions: Evaluating the quality and usability of mobile apps using a mixed methods approach provides valuable information about their functionality and shortcomings. A more holistic, mixed methods strategy is highly recommended when evaluating mobile apps, because results from a single method alone do not provide trustworthy and reliable information about the usability and quality of apps.
SN - 2292-9495
UR - https://humanfactors.jmir.org/2024/1/e55790
UR - https://doi.org/10.2196/55790
DO - 10.2196/55790
ID - info:doi/10.2196/55790
ER -