
Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/77438.
An Automated Curriculum to Support Behavioral Health Counseling Among Pediatric Residents: Usability Study


1Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States

2University of Cincinnati, 3333 Burnet Ave, Cincinnati, OH, United States

3USC Institute for Creative Technologies, Los Angeles, CA, United States

4BreakAway, Ltd, Hunt Valley, MD, United States

Corresponding Author:

Francis Real, MEd, MD


Background: Behavioral health concerns are common in pediatric practice, yet pediatricians report lacking the skills to provide parents with effective behavioral management strategies. A prior human-facilitated, screen-based virtual reality training curriculum proved effective in enhancing behavioral health communication skills among pediatric residents. However, the need for human facilitation was a barrier to the spread and scale of the curriculum.

Objective: This study explored the usability of an automated virtual reality–based behavioral health anticipatory guidance curriculum for pediatric residents.

Methods: Through a partnership with BreakAway Ltd, an automated prototype was developed that required verbalization by users to support progress through simulated scenarios and included the receipt of personalized feedback to enable deliberate practice of communication skills. Usability of the prototype was assessed using mixed methods that included in-person completion of the prototype, a semistructured interview, and completion of survey instruments including the System Usability Scale and the Measurement, Effects, Conditions: Spatial Presence Questionnaire.

Results: Nine individuals completed usability testing. Qualitatively, users indicated that the system was easy to use, realistic, gamified, and likely most helpful for novice learners. Quantitatively, the ease of system usability was rated highly with some limitations related to spatial presence noted.

Conclusions: Usability testing of an automated curriculum to support behavioral health counseling skills in pediatric residents was completed, providing data to support adaptations in preparation for implementation.

JMIR Hum Factors 2026;13:e77438

doi:10.2196/77438

Keywords



Introduction

Background

Although pediatricians frequently discuss behavioral health concerns with families [1], they report a lack of training on how to effectively address these concerns using evidence-based approaches and collaborative communication [2,3]. This represents a critical training gap, as the implementation of effective behavioral management strategies by parents to address typical childhood behaviors (eg, tantrums) can mitigate the development of future behavioral health disorders [4]. With approximately 20% of children currently having a diagnosis of a behavioral health disorder [5], identification of training strategies to support future pediatricians’ skills in delivering behavioral health anticipatory guidance (BHAG) is needed [6]. Recent curricula aiming to support BHAG skills in pediatric residents have primarily relied upon experts to deliver content via didactic or case-based discussions, limiting the opportunity for spread and scale of effective interventions [7,8]. Simulation-based medical education in the form of patient actors has been incorporated into training pediatric residents on BHAG; however, limitations related to accessibility, realism, and psychological safety have been reported [9]. Advances in technology offer an opportunity for broad dissemination of theory-based education. Screen-based virtual reality (VR) is one technology that can be leveraged to support BHAG skill development in pediatric residents to optimize pediatric health outcomes.

Prior Work

We previously developed a novel screen-based VR curriculum to enhance BHAG competencies among pediatric residents through deliberate practice of skills [10-12]. A human facilitator both drove the VR responses of graphical characters in the scenarios and provided personalized performance feedback following each simulated scenario. The VR curriculum proved highly acceptable and efficacious in enhancing BHAG and health care communication skills [10,12-14]. During usability testing of the human-facilitated curriculum, residents reported that the curriculum was realistic, engaging, practical, appropriately scaffolded, and psychologically safe. They indicated that the curriculum was easy to use and reported high levels of immersion and spatial presence within the virtual environment [10]. However, the spread and scale of this education were limited by the need for highly trained human facilitators. Artificial intelligence (AI) offers an opportunity to explore how automation may support the maintenance and scale of such VR-based educational interventions.

AI allows users to interact with computer systems that use data sources to interpret user inputs and deliver curated responses [15]. To date, the use of AI in medical education and simulation has been limited, and prior research indicates challenges related to realism and accuracy [16,17]. Thus, we sought to assess the feasibility and acceptability of an automated curriculum to support BHAG and communication skills in pediatric residents.


Methods

Ethical Considerations

This study was reviewed and determined to be exempt by the Cincinnati Children's Hospital Medical Center Institutional Review Board (2023-0314). A waiver of documentation of informed consent was granted; therefore, prior to data collection, participants reviewed a study information sheet, and participation in study procedures indicated their consent. All study team members adhered to institutional procedures governing data access and privacy. Compensation was provided to all participants.

Curriculum Development

To develop the automated intervention, we partnered with BreakAway Ltd, a game developer that created a conversational interaction system titled Standard Patient Studio (SPS) [18]. SPS uses a cloud-based AI platform in which progression through a simulated scenario is driven by a user’s verbalizations. A multidisciplinary team that included 2 pediatricians, 1 pediatric psychologist, 3 software developers with expertise in AI, and 2 research assistants adapted the human-facilitated VR curriculum to the SPS platform. Procedures included scenario storyboarding with branching logic, using the prior human-facilitated VR scenarios as a guide, followed by iterative testing of the scenarios by study team members. The final curricular prototype yielded 2 scenarios that replicated the human-facilitated VR curriculum’s learning objectives and used identical verbal statements by the graphical parent. Potential answer options for the learner were displayed on the screen during each scenario to support scaffolded learning and to inform a learner’s spontaneous verbalizations, thereby ensuring progression through a scenario. The natural language understanding system selected the answer option most closely aligned with the user’s verbalizations; this system categorizes user input via a medical taxonomy reference that assigns an appropriate patient action and, in prior trials, has demonstrated a 92% appropriate response rate [19]. On the basis of the selected answer option, the user received immediate feedback (ie, excellent, acceptable, or poor response) via color-coded emojis that were previously embedded in the SPS platform. Upon scenario completion, users received feedback indicating why an answer selection was preferred or not preferred. The feedback mirrored the human-facilitated feedback by identifying the specific BHAG and motivational interviewing skills demonstrated during a scenario and where there were missed opportunities to demonstrate such skills. A low performance score required users to repeat the scenario, mirroring the deliberate practice approach used in the human-facilitated VR training.
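To make this interaction loop concrete, the following is a minimal sketch in Python of the flow described above: displayed answer options scaffold the learner's verbalization, the utterance is mapped to the closest option, immediate color-coded feedback is shown, rationale is given at scenario end, and a low score triggers repetition. This is an illustrative reconstruction, not BreakAway's SPS implementation; all names, the word-overlap matching heuristic (a stand-in for the medical-taxonomy natural language understanding system), and the passing threshold are hypothetical.

```python
# Illustrative sketch (not BreakAway's actual SPS code) of the described loop.
from dataclasses import dataclass

@dataclass
class AnswerOption:
    text: str       # option displayed on screen to scaffold the learner
    quality: str    # "excellent", "acceptable", or "poor"
    rationale: str  # end-of-scenario feedback on why this option is (not) preferred

EMOJI = {"excellent": "🟢", "acceptable": "🟡", "poor": "🔴"}
POINTS = {"excellent": 2, "acceptable": 1, "poor": 0}

def match_option(verbalization: str, options: list[AnswerOption]) -> AnswerOption:
    """Pick the displayed option sharing the most words with the utterance
    (a crude stand-in for the medical-taxonomy NLU system)."""
    words = set(verbalization.lower().split())
    return max(options, key=lambda o: len(words & set(o.text.lower().split())))

def run_scenario(steps: list[list[AnswerOption]], get_utterance, passing=0.75):
    """steps: one list of displayed answer options per interaction point.
    get_utterance: callable capturing the learner's speech for a step."""
    while True:  # repeat the scenario until mastery, mirroring deliberate practice
        score, picks = 0, []
        for options in steps:
            pick = match_option(get_utterance(options), options)
            print(EMOJI[pick.quality])  # immediate color-coded feedback
            score += POINTS[pick.quality]
            picks.append(pick)
        for pick in picks:              # end-of-scenario rationale
            print(f"{EMOJI[pick.quality]} {pick.rationale}")
        if score / (2 * len(steps)) >= passing:
            break                       # performance sufficient; otherwise repeat
```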

Usability Testing

Usability testing, a critical component of intervention development that systematically evaluates the extent to which an intervention achieves its intended purpose by engaging experts to identify weaknesses, was conducted using mixed methods [20,21]. Health care team members, including senior faculty, psychologists, simulation educators, and pediatric residents, were purposefully sampled to pilot the curriculum. Usability testing occurred in person, where users were asked to “think aloud,” verbalizing their perspectives as they navigated the automated curriculum [22]. Following completion, participants underwent a semistructured interview to explore their overall impressions (Textbox 1). Interviews were recorded, transcribed, and analyzed by 2 raters (AM and FR) using the rigorous and accelerated data reduction (RADAR) technique, a rapid qualitative analysis approach [23]. RADAR uses a 5-step approach to data reduction through the iterative refinement of data tables and is well suited for studies with narrow research questions. The raters ensured consensus on reduction decisions at each step. Trustworthiness of the qualitative data was also supported through the use of an interview guide that generated descriptive responses (credibility), multiple analyzers and peer debriefing (dependability), and the recording of detailed procedures (confirmability) [24]. Participants also completed the System Usability Scale (SUS) [25] to examine the platform’s ease of use and a subset of items from the Measurement, Effects, Conditions: Spatial Presence Questionnaire (MEC-SPQ) [26] to assess immersion in the virtual environment.

Textbox 1. Interview guide.

Usability interview guide items

  • What are your overall impressions of the automated curriculum?
  • How easy or difficult was the curriculum to use? Tell me more.
  • How was your interaction with the avatar characters?
    • What would you change?
  • What were your impressions of the feedback you received after the simulation?
    • Did it seem to accurately reflect your performance?
    • How easy or difficult was it to interpret?
  • What one change would you make to the automated curriculum?
  • Would you recommend the automated curriculum for learners? Why or why not?

Results

A total of 9 individuals, including 2 pediatric psychologists, 3 senior pediatric faculty, 2 pediatric residents, and 2 simulation educators, underwent testing during a 1-week period in August 2024. Participants had a mean age of 36 (SD 7.9) years and were mostly women (n=8, 89%), White (n=8, 89%), and non-Hispanic (n=9, 100%).

Qualitative

Of the 9 participants, 8 (89%) provided qualitative data. Participants described the scenarios as relevant and realistic. Overall, they reported that the system was easy to use and correctly coded their verbalizations. The study team observed, and participants reported, that there was a learning curve associated with using the online microphone. Participants indicated appreciation for the nuanced answer options and level of difficulty, although some participants desired decreased scaffolding and increased free agency. Some participants described a game-based experience, attributed to the immediate feedback in the form of color-differentiated emojis. Some viewed the integration of game mechanics positively, while others felt it detracted from realism. All participants (n=8) indicated that they would recommend the curriculum for learners. Due to the inclusion of game-based elements and the select-an-option format, some participants believed the curriculum was best suited for novice learners (eg, medical students and residents) rather than practicing pediatricians. All participants indicated that the feedback accurately reflected their performance. When asked what one change they would make to the automated curriculum, 3 (38%) requested more free agency and less scaffolding during the simulations, 2 (25%) wanted shorter answer options, 2 (25%) wanted more variability in the graphical character responses, and 1 (13%) wanted more specific feedback regarding incorrect answer selections.

Table 1 includes exemplar quotes informing these data interpretations.

Table 1. Principal themes and supporting quotes.
Theme and supporting quotes
Easy to use
  • “I thought it was very easy. I had no problems using the system and getting through the scenario.” [ID 26]
  • “It’s very self-explanatory, and you just have to like push a button. It’s not super high, not high-tech in a way that I’m like I have no clue what to do with this.” [ID 21]
Realistic scenarios
  • “I think it’s a practical way, some of the things that the parents said are very realistic and real life that we hear all the time, so I thought that was realistic.” [ID 24]
  • “I felt the situation was very realistic, and it felt like the, it just, it felt accurate. That’s my biggest reaction. These things come up all the time, and kids do have tantrums in the office all the time, so this is very relevant.” [ID 26]
Accurate character responses
  • “I feel like she responded appropriately to what I said.” [ID 22]
  • “I think I would say it feels representative in some ways of what like parents might say.” [ID 27]
Game-based design
  • “I felt more like it was like a game to find the best answer than an actual interaction with the person. It felt more like a trivia game.” [ID 26]
  • “It [the feedback] felt like a score, like a game and a score.” [ID 27]
Potential for more realistic characters
  • “I would like the room to look more realistic, and I’d like Phyllis [the graphical parent] to look more realistic, and I’d like Alex [the graphical child] to actually be with Phyllis and not a pop-out video.” [ID 30]
  • “I mean, if it looked more realistic, obviously, that would be nice. And her voice sounds a little bit monotone. Like if it could sound a little bit more natural, then I think I would feel more bought in that it feels like a more realistic patient interaction.” [ID 22]
Accurate and clear feedback
  • “I like this [feedback] page, especially because it breaks it up, and it gives specific feedback. I guess sometimes I didn’t pick the most egregious answers, so I didn’t always get like the bad feedback. But I do like that you could see kind of what the other choices are, and you could kind of see what the scale is.” [ID 29]
  • “I like that it’s relatively clear [the feedback]. Obviously, you know, there’s the green, yellow, red. Green is good. Yellow is kind of that middle of the road. Red is bad.” [ID 23]
Most helpful for novice learners
  • “I can see how it could have a lot of value for resident training and definitely helping residents get more experience.” [ID 27]
  • “I think I would, especially for like interns or early learners because, I mean, it’s low stakes with the avatar. And I also like the microphone aspect of it, because I think speaking the words instead of just where you’ve been clicking goes a long way.” [ID 29]

Quantitative

The overall SUS mean score among participants was 78.3 (SD 9.8; range 65.0-92.5), indicating good to excellent usability of the system [27]. On the MEC-SPQ, 44% (4/9) of participants agreed or strongly agreed that it seemed as though they actually took part in the action of the simulation. Only 11% (1/9) agreed or strongly agreed that they felt as though they were physically present in the environment (Table 2).

Table 2. Quantitative outcome metrics assessing system usability (System Usability Scale [25]) and spatial presence (adapted Measurement, Effects, Conditions: Spatial Presence Questionnaire [MEC-SPQ] [26]), which both use a 5-point Likert scale from strongly disagree (score=1) to strongly agree (score=5).
Item | Score, mean (SD)

System Usability Scale

I think that I would like to use this system frequently | 3.6 (0.9)
I found the system unnecessarily complex | 1.9 (1.1)
I thought the system was easy to use | 4.1 (0.6)
I think that I would need the support of a technical person to be able to use this system | 1.4 (0.5)
I found the various functions in this system were well integrated | 3.8 (0.8)
I thought there was too much inconsistency in this system | 1.9 (0.9)
I would imagine that most people would learn to use this system very quickly | 4.3 (0.5)
I found the system very cumbersome to use | 1.6 (0.5)
I felt very confident using the system | 4.0 (0.7)
I needed to learn a lot of things before I could get going with this system | 1.7 (0.5)
Total | 78.3 (9.8)

MEC-SPQ (subset)

The virtual reality experience captured my senses (ie, it held my attention) | 3.3 (1.0)
I dedicated myself completely to the virtual reality experience (ie, I was not distracted) | 3.4 (1.0)
I was able to make a good estimate of the size of the presented space | 3.2 (0.8)
I felt as though I was physically present in the environment of the simulation | 2.6 (1.0)
It seemed as though I actually took part in the action of the simulation | 3.1 (0.9)
The objects in the simulation gave me the feeling that I could do things with them | 2.4 (1.0)
Even now, I still have a concrete mental image of the spatial environment | 3.6 (1.0)
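As a check on the reported total: SUS scoring assigns each odd-numbered item a contribution of (score − 1) and each even-numbered item a contribution of (5 − score), then multiplies the sum by 2.5 to yield a 0-100 scale [25,27]. Because this transformation is linear, applying it to the item means in Table 2 should approximately reproduce the mean total, as this short sketch illustrates.

```python
# A minimal sketch verifying the SUS total in Table 2 from the item means.
# Odd-numbered items contribute (score - 1); even-numbered items contribute
# (5 - score); the sum is multiplied by 2.5 to give a 0-100 scale.
item_means = [3.6, 1.9, 4.1, 1.4, 3.8, 1.9, 4.3, 1.6, 4.0, 1.7]

contributions = [
    (m - 1) if i % 2 == 0 else (5 - m)  # index 0 is item 1 (odd-numbered)
    for i, m in enumerate(item_means)
]
sus_total = 2.5 * sum(contributions)
print(f"SUS total from item means: {sus_total:.1f}")
# Prints approximately 78.2, consistent with the reported 78.3
# (the small gap reflects rounding of the item means to one decimal).
```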

Discussion

Principal Results

The human-facilitated VR curriculum on BHAG was successfully adapted into an automated format using the SPS platform developed by BreakAway Ltd. Qualitative and quantitative data indicated that users could easily navigate the platform and interact with its components. Compared with the human-facilitated VR curriculum [10], users reported less spatial presence in the virtual environment. However, users still described the automated training as realistic and relevant. Users also endorsed a game-based aspect to the training experience, which was not reported with the prior human-facilitated VR curriculum [10]. Overall, users indicated that the current automated training might be most appropriate for novice learners. Our decision to scaffold the learning experience by displaying answer options supported reliable progression through the scenarios and safeguarded against misclassification of verbalizations, which can occur when using generative AI models [17,28-30]. However, more advanced learners may benefit from removing scaffolded answer options at a limited number of interaction points.
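One way to view this safeguard is as closed-set classification: the system only ever chooses among the displayed options rather than generating free text, and it can ask the learner to rephrase when no option matches confidently. Below is a hypothetical sketch under those assumptions; the similarity measure (Python's standard-library difflib), the threshold, and the function name are illustrative, not the SPS implementation.

```python
# A hypothetical sketch of closed-set classification with a confidence
# fallback; not the SPS implementation.
from difflib import SequenceMatcher

def classify_closed_set(utterance: str, options: list[str],
                        min_confidence: float = 0.4) -> int | None:
    """Return the index of the best-matching displayed answer option,
    or None when no option matches well enough to act on (in which case
    the system could prompt the learner to rephrase)."""
    scores = [SequenceMatcher(None, utterance.lower(), option.lower()).ratio()
              for option in options]
    best = max(range(len(options)), key=scores.__getitem__)
    return best if scores[best] >= min_confidence else None

# Example: the learner's utterance is matched only against displayed options.
options = [
    "Tantrums are a normal part of development at this age.",
    "You should punish him every time he has a tantrum.",
]
print(classify_closed_set("It sounds hard, but tantrums are really normal "
                          "for his age.", options))  # -> 0
```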

Limitations

This study had several limitations. First, our small, purposeful sample, although ideal for usability testing [31-33] and intervention development, limited generalizability. However, recent literature by Guest et al [31,32] indicates that 6 to 7 interviews are likely sufficient to support theme generation for homogeneous samples such as ours. Second, the rapid qualitative analysis method may have limited exploration of the interview data. However, the RADAR technique is well suited for research questions that are narrow in scope [23].

Comparison with Prior Work

Behavioral health medical education lacks the effective, innovative training curricula necessary to develop practice competencies [3]. By removing the need for a human facilitator or patient actors, AI-based communication training provides an opportunity for spread and scale [16,34]. Additional research is needed on how to best incorporate AI into educational interventions to optimize learning outcomes that support behavior change. We specifically sought to align the automated curriculum with deliberate practice principles, as this served as the theoretical foundation for the prior human-facilitated VR curriculum [11]. Deliberate practice is a personal, goal-oriented approach to training that focuses on attempting a behavior (eg, providing BHAG), receiving immediate feedback, and then repeating the behavior until demonstrating skill mastery. Consistent with this theoretical framework, the automated curriculum provided a platform for rehearsal of specific verbiage, feedback at each interaction point, repetition of skills across the 2 scenarios, and the opportunity to repeat scenarios. Aligning AI-based educational interventions with evidence-based adult learning theories will allow us to establish a literature base on how AI might most effectively support learning. This is particularly important for the topic of behavioral health, as recent studies describing curricular interventions often do not specifically indicate a theoretical foundation for the work [8,35,36]. As AI becomes routinely integrated into training, we should consider how best to facilitate learners' trust in AI systems as well as provide transparency regarding AI algorithms and source data [37].

Conclusions

Our study demonstrated the feasibility of developing an automated curriculum to support BHAG skills in pediatric residents through collaboration with an experienced industry partner. Automation of training curricula adds value by enhancing scalability through reduced dependence on human facilitation. Automation also decreases the cost of participation, supporting equity among learners in accessing novel, evidence-based education. Next steps include updating the curriculum based on usability testing and implementing it among pediatric residents to assess its acceptability and effectiveness in that population and to explore how its outcomes compare with those of other training modalities, such as human-facilitated VR interventions. An evaluation of implementation strategies will also be critical to understand how such novel education can be integrated into resident curricula. Moreover, given the varying perceptions of the game-based elements incorporated into the intervention, further exploration of how such features affect attitudes and learning outcomes is warranted. We also plan to add an orientation regarding learning goals prior to intervention completion to align with best practices for promoting a constructive digital feedback environment [38]. In the future, we aim to refine the automated curriculum, including adjusting the level of free agency and the gamified elements, to support learning across the continuum of health care providers.

Funding

This work was supported through the Cincinnati Children’s Hospital Medical Innovation Fund award. The funders played no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript.

Conflicts of Interest

TT and MR are employed by or consult with BreakAway, Ltd. To minimize bias, these authors did not participate in the collection, management, or analysis of data. All other authors declare no conflicts of interest.

References

  1. Committee on Psychosocial Aspects of Child and Family Health and Task Force on Mental Health. Policy statement--the future of pediatrics: mental health competencies for pediatric primary care. Pediatrics. Jul 2009;124(1):410-421. [CrossRef] [Medline]
  2. McMillan JA, Land MJ, Leslie LK. Pediatric residency education and the behavioral and mental health crisis: a call to action. Pediatrics. Jan 2017;139(1):e20162141. [CrossRef] [Medline]
  3. Green CM, Foy JM, Earls MF, Committee on Psychosocial Aspects of Child and Family Health, Mental Health Leadership Work Group. Achieving the pediatric mental health competencies. Pediatrics. Nov 2019;144(5):e20192758. [CrossRef] [Medline]
  4. Fostering Healthy Mental, Emotional, and Behavioral Development in Children and Youth: A National Agenda. National Academies Press; 2019. URL: https://books.google.co.in/books/about/Fostering_Healthy_Mental_Emotional_and_B.html?id=PhXHDwAAQBAJ&source=kp_book_description&redir_esc=y [Accessed 2026-04-02]
  5. Sappenfield O, Alberto C, Minnaert J, Donney J, Lebrun-Harris L, Ghandour R. Adolescent mental and behavioral health, 2023. In: National Survey of Children’s Health Data Briefs. Health Resources and Services Administration; 2018. URL: https://www.ncbi.nlm.nih.gov/books/NBK608531/ [Accessed 2026-04-02]
  6. McMillan JA, Land MJ, Tucker AE, Leslie LK. Preparing future pediatricians to meet the behavioral and mental health needs of children. Pediatrics. Jan 2020;145(1):e20183796. [CrossRef] [Medline]
  7. Manning A, Weingard M, Fabricius J, French A, Sendak M, Davis N. Be ExPeRT (behavioral health expansion in pediatric residency training): a case-based seminar. MedEdPORTAL. 2023;19:11326. [CrossRef] [Medline]
  8. Agazzi H, Dickinson S, Plant RM. Helping our toddlers, developing our children’s skills: innovative behavioral management training for pediatric residents. Adv Pediatr. Aug 2022;69(1):13-21. [CrossRef] [Medline]
  9. Jones MR, Dadiz R, Baldwin CD, Alpert-Gillis L, Jee SH. Integrated behavioral health education using simulated patients for pediatric residents engaged in a primary care community of practice. Fam Syst Health. Dec 2022;40(4):472-483. [CrossRef] [Medline]
  10. Herbst R, Rybak T, Meisman A, et al. A virtual reality resident training curriculum on behavioral health anticipatory guidance: development and usability study. JMIR Pediatr Parent. Jun 29, 2021;4(2):e29518. [CrossRef] [Medline]
  11. Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev. 1993;100(3):363-406. [CrossRef]
  12. Real FJ, Whitehead M, Ollberding NJ, et al. A virtual reality curriculum to enhance residents’ behavioral health anticipatory guidance skills: a pilot trial. Acad Pediatr. 2023;23(1):185-192. [CrossRef] [Medline]
  13. Rollnick S, Miller WR. What is motivational interviewing? Behav Cogn Psychother. Oct 1995;23(4):325-334. [CrossRef]
  14. Erickson SJ, Gerstle M, Feldstein SW. Brief interventions and motivational interviewing with children, adolescents, and their parents in pediatric health care settings: a review. Arch Pediatr Adolesc Med. Dec 2005;159(12):1173-1180. [CrossRef] [Medline]
  15. OpenAI. URL: https://openai.com [Accessed 2026-04-02]
  16. Merritt C, Glisson M, Dewan M, Klein M, Zackoff M. Implementation and evaluation of an artificial intelligence driven simulation to improve resident communication with primary care providers. Acad Pediatr. Apr 2022;22(3):503-505. [CrossRef] [Medline]
  17. Rodgers DL, Needler M, Robinson A, et al. Artificial intelligence and the simulationists. Simul Healthc. Dec 1, 2023;18(6):395-399. [CrossRef] [Medline]
  18. USC Standard Patient Studio. BreakAway Games. URL: https://www.breakawaygames.com/case-studies/usc-standard-patient-studio [Accessed 2026-04-02]
  19. Talbot TB, Kalisch N, Christoffersen K, Lucas G, Forbell E. Natural language understanding performance & use considerations in virtual medical encounters. Stud Health Technol Inform. 2016;220:407-413. [Medline]
  20. Jake-Schoffman DE, Silfee VJ, Waring ME, et al. Methods for evaluating the content, usability, and efficacy of commercial mobile health apps. JMIR Mhealth Uhealth. Dec 18, 2017;5(12):e190. [CrossRef] [Medline]
  21. Real FJ, Meisman A, Rosen BL. Usability matters for virtual reality simulations teaching communication. Med Educ. Nov 2020;54(11):1067-1068. [CrossRef] [Medline]
  22. Johnson WR, Artino ARJ, Durning SJ. Using the think aloud protocol in health professions education: an interview method for exploring thought processes: AMEE guide no. 151. Med Teach. Sep 2023;45(9):937-948. [CrossRef] [Medline]
  23. Watkins DC. Rapid and rigorous qualitative data analysis: the “RADaR” technique for applied research. Int J Qual Methods. 2017;16:1-9. [CrossRef]
  24. Hanson JL, Balmer DF, Giardino AP. Qualitative research methods for medical educators. Acad Pediatr. 2011;11(5):375-386. [CrossRef] [Medline]
  25. Brooke J. SUS: a 'quick and dirty' usability scale. In: Usability Evaluation In Industry. CRC Press; 1996. URL: https://www.taylorfrancis.com/chapters/edit/10.1201/9781498710411-35/sus-quick-dirty-usability-scale-john-brooke [Accessed 2026-04-02]
  26. Vorderer P, Wirth W, Gouveia FR, et al. MEC Spatial Presence Questionnaire (MEC-SPQ): short documentation and instructions for application. Project Presence; 2004.
  27. Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud. 2009;4(3):114-123. [CrossRef]
  28. Stokel-Walker C, Van Noorden R. What ChatGPT and generative AI mean for science. Nature. Feb 2023;614(7947):214-216. [CrossRef] [Medline]
  29. Shoja MM, Van de Ridder JM, Rajput V. The emerging role of generative artificial intelligence in medical education, research, and practice. Cureus. Jun 2023;15(6):e40883. [CrossRef] [Medline]
  30. Misra SM, Suresh S. Artificial intelligence and objective structured clinical examinations: using ChatGPT to revolutionize clinical skills assessment in medical education. J Med Educ Curric Dev. 2024;11:23821205241263475. [CrossRef] [Medline]
  31. Guest G, Bunce A, Johnson L. How many interviews are enough?: an experiment with data saturation and variability. Field Methods. 2006;18(1):59-82. [CrossRef]
  32. Guest G, Namey E, Chen M. A simple method to assess and report thematic saturation in qualitative research. PLoS One. 2020;15(5):e0232076. [CrossRef] [Medline]
  33. Sandars J, Lafferty N. Twelve Tips on usability testing to develop effective e-learning in medical education. Med Teach. 2010;32(12):956-960. [CrossRef] [Medline]
  34. Liaw SY, Tan JZ, Lim S, et al. Artificial intelligence in virtual reality simulation for interprofessional communication training: mixed method study. Nurse Educ Today. Mar 2023;122:105718. [CrossRef] [Medline]
  35. Kiger ME, Fowler L, Eviston M, et al. A case-based, longitudinal curriculum in pediatric behavioral and mental health. MedEdPORTAL. 2024;20:11400. [CrossRef] [Medline]
  36. Meyers N, Maletz B, Berger-Jenkins E, et al. Mental health in the medical home: a longitudinal curriculum for pediatric residents on behavioral and mental health care. MedEdPORTAL. 2022;18:11270. [CrossRef] [Medline]
  37. Zhang X, Wei X, Ou CX, Caron E, Zhu H, Xiong H. From human-AI confrontation to human-AI symbiosis in Society 5.0: transformation challenges and mechanisms. IT Prof. 2022;24(3):43-51. [CrossRef]
  38. Zhang X, Wang X, Tian F, Xu D, Fan L. Anticipating the antecedents of feedback-seeking behavior in digital environments: a socio-technical system perspective. Internet Res. Mar 28, 2023;33(1):388-409. [CrossRef]


Abbreviations

AI: artificial intelligence
BHAG: behavioral health anticipatory guidance
MEC-SPQ: Measurement, Effects, Conditions: Spatial Presence Questionnaire
RADAR: rigorous and accelerated data reduction
SPS: Standard Patient Studio
SUS: System Usability Scale
VR: virtual reality


Edited by Andre Kushniruk; submitted 16.May.2025; peer-reviewed by Masab Mansoor, Xi Zhang; final revised version received 09.Jan.2026; accepted 28.Feb.2026; published 22.Apr.2026.

Copyright

© Liam Fleck, Rachel Herbst, Andrea Meisman, Thomas Talbot, Michael Glisson, Max Remington, Micah Goldson, Francis Real. Originally published in JMIR Human Factors (https://humanfactors.jmir.org), 22.Apr.2026.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on https://humanfactors.jmir.org, as well as this copyright and license information must be included.