AI-enhanced guidance demonstrated improvement in novices’ Apical-4-chamber and Apical-5-chamber views
BMC Medical Education volume 25, Article number: 558 (2025)
Abstract
Introduction
Artificial Intelligence (AI) modules might simplify the complexities of cardiac ultrasound (US) training by offering real-time, step-by-step guidance on probe manipulation for high-quality diagnostic imaging. This study investigates a real-time, AI-based guidance tool for facilitating cardiac US training and its impact on novice users’ proficiency.
Methods
This independent, prospective randomized controlled trial enrolled participants who completed a six-hour cardiac US course, followed by a designated cardiac US proficiency exam. Both groups received in-person guided training using the same devices, with the AI-enhanced group receiving additional real-time AI feedback on probe navigation and image quality during both training and testing, while the non-AI group relied solely on the instructor’s guidance.
Results
Data were collected from 44 participants: 21 in the AI-enhanced group and 23 in the non-AI group. The AI-enhanced group scored higher than the non-AI group in acquiring the Apical-4-chamber and Apical-5-chamber views [mean 88% (± SD 10%) vs. mean 76% (± SD 17%), respectively; p = 0.016]. On the other hand, the AI-enhanced group took longer to complete the echocardiography exam [mean 401 s (± SD 51) vs. 348 s (± SD 81), respectively; p = 0.038].
Discussion
The addition of real-time, AI-based feedback demonstrated benefits in the cardiac POCUS teaching process for the more challenging echocardiography four- and five-chamber views. It also has the potential to overcome challenges related to in-person POCUS training. Additional studies are required to explore the long-term effect of this training approach.
Clinical trial number
Not applicable.
Introduction
Point-of-care ultrasonography (POCUS) is a rapidly evolving discipline that has been found to improve the diagnostic accuracy of patients presenting with hypotension, chest pain, and acute dyspnea [1,2,3] and significantly shorten the duration required for initiating appropriate treatment [1]. Thus, incorporating POCUS into medical education and ongoing professional development is essential [2].
Teaching ultrasound (US) is a time-consuming and labor-intensive process that traditionally necessitates small-group, bedside instruction, usually by skilled physician sonographers who are often clinically busy [3,4,5]. Additionally, US teaching is susceptible to variance and inconsistency, as instructors may differ in quality and teaching methodology. Different POCUS teaching methodologies, such as hands-on teaching [6], e-learning modules [7], or telemedicine [8], have not fully addressed this issue, as these methods still necessitate the presence of an instructor during hands-on practice. This rising demand for POCUS training drives the need for more efficient methods, including remote feedback systems that reduce reliance on in-person instructors.
Artificial Intelligence (AI)-enhanced POCUS guidance tools might facilitate consistent, cost-effective, and autonomous training for medical students and practitioners [9,10,11,12]. While AI-based measurements of critical cardiac US parameters, such as Ejection Fraction [13, 14], VTI [15], IVC overload [16], and RV function [17], have been extensively explored in recent years, the application of these tools is constrained because most require a skilled POCUS operator during operation. In contrast, the impact of AI in assisting novice operators with real-time adjustment of the US probe for image acquisition has been the subject of relatively few studies [18, 19], leaving limited data in this research area.
This trial aimed to evaluate whether adding an AI-based feedback and navigation tool could enhance the efficiency of cardiac US training for novice operators. We assessed its impact by comparing performance on a validated cardiac US proficiency exam between two groups [2, 7]. Our hypothesis was that the AI-enhanced group would obtain higher-quality echocardiographic images, indicating that AI tools could augment in-person training and accelerate skill acquisition.
Methods
Study participants and design
Participants were second- and third-year medical students from Ben Gurion University and interns from Soroka University Medical Center. Eligibility was assessed using a recruitment questionnaire developed specifically for this study, which gathered participants’ demographics and additional background information for equal allocation between study groups (Appendix 1). Participants who rated their experience with US operation above three on a subjective 1–5 scale (where 1 indicates ‘no prior experience or knowledge’ and 5 indicates ‘highly proficient’) were excluded. Additional exclusions applied to individuals who did not watch the pre-course recorded lectures, missed the in-person session, or had incomplete data due to technical issues. Consequently, the trial was capped at 48 participants (Fig. 1).
After enrollment, participants were randomized and assigned in a 1:1 ratio to the AI group and non-AI group, stratified by gender, year in medical school, paramedic training, simulator operating experience (of any kind), and prior US operation experience (using the 1–5 subjective scale noted above; Fig. 1; Table 1).
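For illustration, the stratified 1:1 allocation described above can be sketched in R, the language the authors report using for their analysis. This is a hypothetical reconstruction, not the study’s actual randomization code; the `participants` data frame and its columns are invented for the example and cover only a subset of the stratification variables.

```r
# Hypothetical sketch of stratified 1:1 allocation; not the study's actual code.
set.seed(2022)

# Invented example cohort with a subset of the stratification variables.
participants <- data.frame(
  id        = 1:48,
  gender    = sample(c("F", "M"), 48, replace = TRUE),
  year      = sample(c("2nd", "3rd"), 48, replace = TRUE),
  paramedic = sample(c(TRUE, FALSE), 48, replace = TRUE)
)

# Shuffle within each stratum, then alternate AI / non-AI assignments so the
# two arms remain balanced on every stratification factor.
participants$group <- NA_character_
strata <- interaction(participants$gender, participants$year, participants$paramedic)
for (s in levels(strata)) {
  idx <- which(strata == s)
  idx <- idx[sample.int(length(idx))]  # safe shuffle, even for single-member strata
  participants$group[idx] <- rep_len(c("AI", "non-AI"), length(idx))
}

table(participants$group, participants$gender)  # quick balance check
```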
The study received approval from the Ethics Review Board of the Faculty of Health Sciences at Ben-Gurion University of the Negev (Ethics Approval Number: 15-2022). The research was performed in accordance with the Declaration of Helsinki, and all methods were carried out in accordance with relevant guidelines and regulations. Written consent was obtained from all participants, who were fully informed about the study’s purpose and procedures, as well as their right to withdraw at any time without consequence. Participation was entirely voluntary; performance results remained confidential and were not disclosed to any overseeing organizations, ensuring the study had no impact on participants’ evaluations. Findings are reported in accordance with the CONSORT 2010 Statement: updated guidelines for reporting parallel group randomized trials [21].
Settings and interventions
This prospective randomized controlled study was conducted at the simulation center of Ben Gurion University of the Negev, Israel, during March 2022.
Participants from both groups were given equal training time, with an emphasis on maximizing hands-on instruction guided by trainers at a 1:3 guidance ratio. The cardiac US course included three 30-minute lectures as pre-course preparation, each focusing on a different cardiac US anatomical window: parasternal, apical, and subcostal views. The hands-on training session lasted a total of four hours and focused on echocardiographic probe maneuvers and view acquisition. Of this time, three hours were conducted under the guidance of an experienced POCUS instructor using healthy human models for demonstration, while the final hour was reserved for self-practice on the same models (Fig. 1).
Both groups began with an identical 90-minute hands-on session, practicing echocardiography views with feedback from a POCUS instructor. Afterward, the intervention group received a brief introduction to the AI software’s navigation guidance and quality indicator features. The AI software then provided real-time feedback alongside instructor guidance, offering dual feedback throughout the session, including the self-practice period. In contrast, the control group continued practicing with the same POCUS device but without AI support—the sole difference between the two groups (Fig. 2).
POCUS device and AI software
All participants in the control and intervention groups used the same Android-operated tablet device connected to a Philips Lumify™ probe. The AI-enhanced group was also aided by UltraSight’s pre-installed software on their Android devices, while the non-AI group used the same device without the AI software.
UltraSight’s FDA-approved AI software utilizes machine learning to offer healthcare professionals guidance on performing cardiac sonography. When used with a Philips Lumify™ probe, the software provides on-screen instructions that guide the user in adjusting the US probe, including actions such as tilting, rocking, rotating, and sliding, to achieve the best possible view (Fig. 2). Furthermore, the software conducts a real-time analysis of the cardiac US image, reflected in a quality bar indicator, and provides feedback on whether the image quality is adequate for diagnosing pathologies. The software suggests that the user maintain the current probe position when the image quality is ideal, thereby reducing uncertainty for the user (for further description of the AI tool, see Appendix 2).
Fig. 2 On-screen instructions for maneuvering the probe are shown beneath UltraSight’s logo. In the example shown, the operator is advised to hold position but may also slightly rotate the probe counterclockwise so that the dotted line overlaps the arrow. The system’s Quality Bar is at the top right of the screen. Essential ultrasound tools, such as gain and depth, are part of the default Philips Lumify application, shown on the left side of the screen.
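To make the feedback loop concrete, the R sketch below mimics the kind of per-frame decision such a tool could make: mapping a predicted image-quality score and probe-pose error to a single on-screen cue. This is purely illustrative; UltraSight’s actual model, cue set, and thresholds are proprietary and not described in this article, so every name and number here is an assumption.

```r
# Purely hypothetical cue logic; UltraSight's real model and thresholds are
# proprietary and not described in this article.
suggest_cue <- function(quality, rotation_err_deg, tilt_err_deg) {
  if (quality >= 0.9) {
    "Hold position"  # quality bar near full: keep the probe still
  } else if (abs(rotation_err_deg) >= abs(tilt_err_deg)) {
    if (rotation_err_deg > 0) "Rotate clockwise" else "Rotate counterclockwise"
  } else {
    if (tilt_err_deg > 0) "Tilt up" else "Tilt down"
  }
}

suggest_cue(quality = 0.95, rotation_err_deg = 2, tilt_err_deg = 1)   # "Hold position"
suggest_cue(quality = 0.60, rotation_err_deg = -8, tilt_err_deg = 3)  # "Rotate counterclockwise"
```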
Outcomes
The main objective of this study was to assess the effect of AI-based tools (navigation feedback and a quality indicator) on the efficiency of cardiac US training for novice operators. The primary outcome was the operator’s Cardiac US Score in the proficiency exam (Appendix 3). Secondary outcomes encompassed the time taken to complete the US exam, beginning with the first captured view (Parasternal Long Axis, PLAX) and ending with the last (IVC), along with individual scores for distinct echocardiography views: PLAX, Parasternal short axis (PSAX), Apical-4-chamber (A4C), Apical-5-chamber (A5C), Apical-2-chamber (A2C), Subcostal, and IVC views. The scoring system for each view included two components: visibility of pre-defined landmarks and image quality. Image quality was rated 0–2 (“not readable” = 0, “recognizable cardiac US view” = 1, “excellent quality” = 2). Landmark demonstration was scored by visibility (visible landmark = 1, missing landmark = 0). The Cardiac US Score for an individual operator summed all these components across the entire exam, with a maximum achievable score of 42 points (Appendix 3).
To facilitate interpretation, all scores are expressed as percentages of the maximum points attainable for each variable: for instance, the Cardiac US Score is presented as a percentage of the maximum 42 points, while specific views, such as the parasternal view with a maximum of 12 points, are calculated as percentages of their respective maximums (Appendix 3). The A4C and A5C scores were calculated together, as both views rely on the same anatomical landmarks to define the ideal image, the only distinction being that the A5C view additionally includes the aorta as a landmark. To avoid duplicating scores for the same anatomical structures, we computed them collectively (full description in Appendices 3 and 4).
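As a worked example of the arithmetic above, the following R sketch scores a single view and converts an exam total to a percentage. The landmark count per view is an illustrative assumption, since Appendix 3 is not reproduced here.

```r
# Sketch of the scoring arithmetic described above; landmark counts per view
# are illustrative assumptions (the full rubric is in Appendix 3).
score_view <- function(landmarks_visible, image_quality) {
  stopifnot(image_quality %in% 0:2, all(landmarks_visible %in% 0:1))
  sum(landmarks_visible) + image_quality  # landmarks (0/1 each) + quality (0-2)
}

# Example: a view with 4 of 5 assumed landmarks visible and "recognizable"
# image quality (1 of 2 points) scores 5 points.
score_view(landmarks_visible = c(1, 1, 1, 0, 1), image_quality = 1)

# The Cardiac US Score sums all components across the exam and is reported
# as a percentage of the 42-point maximum.
cardiac_us_score_pct <- function(total_points) 100 * total_points / 42
cardiac_us_score_pct(28)  # 28 of 42 points -> ~66.7%
```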
Assessments and surveys
Following the training, students from each group underwent a validated cardiac US exam to assess their proficiency [2, 7]. Each exam required participants to obtain and record nine views within eight minutes. Participants were tested on models different from the ones they had trained with. The cardiac US proficiency exam took place immediately after the training session, focusing on the short-term impact of the teaching process (Fig. 1).
During the exam, clips recorded by each participant were stored on a USB flash drive labeled with the student’s examination serial number and containing no personal identification details. Blinding was achieved by presenting all clips generated by the AI-enhanced and non-AI groups, in random order, for assessment by a senior US expert with over 10 years of experience. These videos contained no information that could have alerted the evaluator to which study group, AI-enhanced or non-AI, a recording came from.
Statistics: sample size, randomization, and methods
Descriptive statistics are presented in summary tables. Normally distributed variables are summarized as mean and standard deviation; non-normally distributed variables as median and interquartile range. Categorical variables are described with counts and percentages of all available observations. A t-test was used to compare two normally distributed groups, the Mann–Whitney U test for two non-normally distributed groups, and the Chi-square test for categorical variables. Percentages were rounded to two decimal places. The study required a minimum cohort of 44 participants, divided equally between two groups; this number was calculated to provide an 80% chance of detecting a statistically significant 5-point difference between groups under the assumed conditions. All statistical analyses were completed with R version 4.3.1.
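The named tests map directly onto base R, which the authors report using. A minimal sketch follows, with placeholder score vectors (not study data) and a `power.t.test()` call that back-solves the SD implied by the stated design, since the assumed SD is not reported in the text.

```r
# Placeholder per-participant scores; not the study's data.
score_ai   <- c(88, 70, 65, 59, 73, 81)
score_ctrl <- c(76, 64, 60, 55, 68, 62)

t.test(score_ai, score_ctrl)       # two-sample t-test for normally distributed outcomes
wilcox.test(score_ai, score_ctrl)  # Mann-Whitney for non-normally distributed outcomes
# chisq.test(table(group, gender)) would compare categorical variables

# Design check: with 22 per group, a 5-point difference, alpha = 0.05, and 80%
# power, sd = NULL asks power.t.test() to solve for the implied SD.
power.t.test(n = 22, delta = 5, sd = NULL, sig.level = 0.05, power = 0.8)
```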
Results
Baseline characteristics
Forty-eight participants underwent randomization. Two participants from the AI-enhanced group and one from the non-AI group were excluded from the trial because they did not attend the in-person training session. Additionally, due to technical issues, the examination clips of one AI-enhanced group participant were not properly recorded. Within the AI-enhanced group, 21 of 22 examinations were analyzed (95.5%), while in the non-AI group all 23 examinations were analyzed (100%). No harm or unintended effects were reported in either study group, and scores and participation in the study were kept confidential from participants’ superiors.
Primary outcome: cardiac US score
There was no statistically significant difference in the Cardiac US Score between the two study groups [mean 67% (± SD 15%) vs. mean 64% (± SD 13%); p = 0.17; Table 2; Fig. 3].
Secondary outcomes: time and specific views scores
A sub-analysis of specific views revealed that the group using AI-based instructions outperformed the non-AI group in the combined A4C and A5C views score [mean 88% (± SD 10%) vs. mean 76% (± SD 17%), respectively; p = 0.016]. Nevertheless, no statistically significant differences were observed between the groups in the other echocardiography views (Table 2; Fig. 4). Examining the duration required to complete the cardiac US exam, we found that scanning times were approximately one minute faster in the non-AI group than in the AI-enhanced group, as anticipated [AI-enhanced mean 401 s (± SD 51 s) vs. non-AI mean 348 s (± SD 81 s); p = 0.038; Table 2; Fig. 5].
Discussion
Our research found that AI guidance significantly aided novice cardiac US operators in acquiring the A4C and A5C views (88% vs. 76%, p = 0.016). Nevertheless, the group that did not use the AI-integrated cardiac acquisition tool completed the cardiac US examination faster (348 s vs. 401 s). No statistically significant differences were observed across the other echocardiography views, presumably due to the restricted sample size of our study.
The principal finding in our research was the improvement in capturing the A4C and A5C views, which we attribute to the incorporation of AI-based feedback as an enhancement to traditional training methods. Drawing from our decade of training experience and data derived from over 450 students examined over the years in cardiac US, it is evident that the A4C represents the most complex and demanding cardiac US view for beginners (comprehensive data can be found in Appendix 5). To our knowledge, this assumption has not been investigated in prior studies.
This pilot study aimed to evaluate whether AI-based feedback could effectively enhance traditional instructor-led guidance in training novice cardiac US operators. Rather than seeking to replace in-person guidance, our goal was to determine if AI feedback could serve as a valuable adjunct, paving the way for future research on the potential for AI to fully substitute instructor supervision. Although prior research has shown that AI can enhance the accuracy of critical echocardiography measurements [9,10,11], such as ejection fraction [13], VTI [15], IVC overload [14, 16], and RV function [17], these investigations primarily involved experienced physicians and required high-quality image acquisition for precise assessment. Few studies have examined the role of AI in aiding novices to achieve fundamental cardiac views. A similar study with a different real-time AI guidance tool demonstrated that nurses without ultrasonography experience were able to obtain diagnostic echocardiographic studies using real-time on-screen AI guidance [18]. Another trial showed that internal medicine residents carrying a POCUS device with AI guidance functionality for two weeks obtained superior A4C views [19]. These trials and ours highlight the potential of real-time AI guidance tools to improve cardiac US performance among novice operators.
Our study’s second significant observation was the faster completion of the cardiac US examination by the group that did not use AI. This result was expected by our team, as we initially assumed that handling the US probe while interpreting AI instructions would require a longer duration for each view. A contrasting result, of accelerated scanning time with a different application of real-time AI guidance, was reported by another team assessing its influence on novice operators [19]. It is worth noting that their measurement was limited to acquiring the A4C view and was taken after two weeks of scanning experience with the AI guidance. Furthermore, no statistically significant difference was found between our study groups when comparing the time required to acquire the A4C view alone (Appendix 3). In the context of image acquisition training in echocardiography, the integration of AI is currently in a phase of growth and learning [10]. The continuous learning and adaptability of AI are vital factors that will facilitate its improvement over time. Nonetheless, as these applications advance, novice operators may require additional time to familiarize themselves with the new tools, as the on-screen instructions provided by AI can initially be challenging for beginners to navigate.
The demonstrated AI tool, along with other automated tools that aid in image acquisition, holds potential for future use by diverse operators. Presently, remote automated ultrasonographic tools enable patients to conduct self-examinations and consult their primary physician without a physical clinic visit [22,23,24,25,26]. This progression enhances accessibility and convenience, with examples including devices for pregnant women to perform self-examinations [23] and lung ultrasonography self-examinations by dialysis [24], heart failure [26], and COVID-19 patients [22]. Remote automated guidance could engage a broader spectrum of inexperienced POCUS operators across multiple medical disciplines, such as primary care physicians, remote healthcare workers, paramedics, nurses, and cardiology trainees. Additionally, AI-supported devices might prove advantageous in low-income countries, where ongoing tutoring is often unavailable [27], opening up possibilities for a cost-effective imaging method compared with currently limited options [28, 29].
The uniqueness of our study lies in its real-time identification of correct hand movements for novice operators (primarily medical students with limited knowledge of cardiology, and of cardiac US operation in particular) after a short US training period, aiding them in effectively capturing challenging cardiac views. The limitations of this study warrant further examination. In contrast to the A4C and A5C, we could not attain a statistically significant enhancement in the other echocardiography views. This is likely due to the limited number of samples, but it could also reflect the short interval between the hands-on instruction and the proficiency test; a longer gap between initial training and testing may reveal greater value in the AI tool. Subsequent research involving larger sample sizes and a longer interval between training and testing is essential to confirm and expand on the effectiveness of AI guidance tools. Moreover, the short self-training session might have undermined the participants’ grasp of the guidance tool’s practical use, as some operators reportedly opted to disregard the guidance during the evaluation. Comparable studies employing real-time AI cardiac acquisition tools have addressed this issue by granting participants a more comprehensive introduction to the AI tool, including conducting multiple tests over an extended duration [18, 19].
Conclusion
Our findings indicate that a real-time, AI-based guidance tool for cardiac imaging significantly enhanced novice operators’ ability to acquire the technically challenging A4C and A5C views. Further studies incorporating extended self-practice periods are needed to assess the long-term effects of this training approach in clinical patient examinations and to explore its potential as a substitute for traditional in-person guidance, addressing the increasing demand for training a large number of physicians.
Data availability
The authors confirm that the database for this study will be shared upon formal request and subsequent approval by the Institutional Review Board (IRB).
Abbreviations
- AI: Artificial Intelligence
- US: Ultrasound
- POCUS: Point-of-Care Ultrasound
- A4C: Apical-4-chamber
- A5C: Apical-5-chamber
- PLAX: Parasternal Long Axis
- PSAX: Parasternal Short Axis
- IVC: Inferior Vena Cava
- RV: Right Ventricular (function)
- VTI: Velocity Time Integral
- FDA: Food and Drug Administration
- CONSORT: Consolidated Standards of Reporting Trials
- IRB: Institutional Review Board
- SD: Standard Deviation
- TM: Trademark (as in Philips Lumify™)
References
Ben-Baruch Golan Y, Sadeh R, Mizrakli Y, Shafat T, Sagy I, Slutsky T, et al. Early Point-of-Care ultrasound assessment for medical patients reduces time to appropriate treatment: A pilot randomized controlled trial. Ultrasound Med Biol. 2020;46(8):1908–15.
Kobal SL, Lior Y, Ben-Sasson A, Liel-Cohen N, Galante O, Fuchs L. The feasibility and efficacy of implementing a focused cardiac ultrasound course into a medical school curriculum. BMC Med Educ. 2017;17(1):1–9.
Kumar A, Kugler J, Jensen T. Evaluation of trainee competency with Point-of-Care ultrasonography (POCUS): a conceptual framework and review of existing assessments. J Gen Intern Med. 2019;34(6):1025–31.
Díaz-Gómez JL, Frankel HL, Hernandez A. National certification in critical care echocardiography: its time has come. Crit Care Med. 2017;45(11):1801.
Williams JP, Nathanson R, LoPresti CM, Mader MJ, Haro EK, Drum B, et al. Current use, training, and barriers in point-of-care ultrasound in hospital medicine: A National survey of VA hospitals. J Hosp Med. 2022;17(8):601–8.
Tuvali O, Sadeh R, Kobal S, Yarza S, Golan Y, Fuchs L. The long-term effect of short point of care ultrasound course on physicians’ daily practice. PLoS ONE. 2020;15(11):e0242084.
Fuchs L, Gilad D, Mizrakli Y, Sadeh R, Galante O, Kobal S. Self-learning of point-of-care cardiac ultrasound – Can medical students teach themselves? PLoS ONE. 2018;13(9):e0204087.
Kolbe N, Killu K, Coba V, Neri L, Garcia KM, McCulloch M, et al. Point of care ultrasound (POCUS) telemedicine project in rural Nicaragua and its impact on patient management. J Ultrasound. 2014;18(2):179–85.
Zhang J, Gajjala S, Agrawal P, Tison GH, Hallock LA, Beussink-Nelson L, et al. Fully automated echocardiogram interpretation in clinical practice. Circulation. 2018;138(16):1623–35.
Dey D, Slomka PJ, Leeson P, Comaniciu D, Shrestha S, Sengupta PP, et al. Artificial intelligence in cardiovascular imaging: JACC State-of-the-Art review. J Am Coll Cardiol. 2019;73(11):1317–35.
Chen X, Owen CA, Huang EC, Maggard BD, Latif RK, Clifford SP, et al. Artificial intelligence in echocardiography for anesthesiologists. J Cardiothorac Vasc Anesth. 2021;35(1):251–61.
Cheema BS, Walter J, Narang A, Thomas JD. Artificial Intelligence–Enabled POCUS in the COVID-19 ICU. JACC Case Rep. 2021;3(2):258–63.
Knackstedt C, Bekkers SCAM, Schummers G, Schreckenberg M, Muraru D, Badano LP, et al. Fully automated versus standard tracking of left ventricular ejection fraction and longitudinal strain: the FAST-EFs multicenter study. J Am Coll Cardiol. 2015;66(13):1456–66.
Gohar E, Herling A, Mazuz M, Tsaban G, Gat T, Kobal S, et al. Artificial intelligence (AI) versus POCUS expert: A validation study of three automatic AI-Based, Real-Time, hemodynamic echocardiographic assessment tools. J Clin Med. 2023;12(4):1352.
Zhai S, Wang H, Sun L, Zhang B, Huo F, Qiu S, et al. Artificial intelligence (AI) versus expert: A comparison of left ventricular outflow tract velocity time integral (LVOT-VTI) assessment between ICU Doctors and an AI tool. J Appl Clin Med Phys. 2022;23(8):e13724.
Damodaran S, Kulkarni AV, Gunaseelan V, Raj V, Kanchi M. Automated versus manual B-lines counting, left ventricular outflow tract velocity time integral and inferior Vena Cava collapsibility index in COVID-19 patients. Indian J Anaesth. 2022;66(5):368–74.
Liu S, Bose R, Ahmed A, Maslow A, Feng Y, Sharkey A, et al. Artificial Intelligence-Based assessment of indices of right ventricular function. J Cardiothorac Vasc Anesth. 2020;34(10):2698–702.
Narang A, Bae R, Hong H, Thomas Y, Surette S, Cadieu C, et al. Utility of a Deep-Learning algorithm to guide novices to acquire echocardiograms for limited diagnostic use. JAMA Cardiol. 2021;6(6):624–32.
Baum E, Tandel MD, Ren C, Weng Y, Pascucci M, Kugler J, et al. Acquisition of cardiac Point-of-Care ultrasound images with deep learning: A randomized trial for educational outcomes with novices. CHEST Pulm. 2023;1(3):100023.
Clem D, Anderson S, Donaldson J, Hdeib M. An exploratory study of spatial ability and student achievement in sonography. 2010 [cited 2021 Dec 18]. Available from: https://journals.sagepub.com/doi/10.1177/8756479310375119
Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332.
Self-performed lung ultrasound for home monitoring of a patient positive for coronavirus disease 2019. Chest. 2020;158(3):e93–7.
At-home ultrasounds for pregnant women now a reality: PulseNmore launches world’s first self-administered handheld tele-ultrasound device; company partners with Israel’s Clalit Health Services to reduce prenatal office visits during COVID-19 and beyond. PR Newswire; 2020.
Schneider E, Maimon N, Hasidim A, Shnaider A, Migliozzi G, Haviv YS, et al. Can dialysis patients identify and diagnose pulmonary congestion using self-lung ultrasound? J Clin Med. 2023;12(11):3829.
Malia L, Nye ML, Kessler DO. Exploring the feasibility of at-home lung ultra-portable ultrasound: parent-performed pediatric lung imaging. J Ultrasound Med [Internet]. [cited 2024 Feb 8]. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1002/jum.16398
Chiem AT, Lim GW, Tabibnia AP, Takemoto AS, Weingrow DM, Shibata JE. Feasibility of patient-performed lung ultrasound self-exams (Patient-PLUS) as a potential approach to telemedicine in heart failure. ESC Heart Fail. 2021;8(5):3997–4006.
Abrokwa SK, Ruby LC, Heuvelings CC, Bélard S. Task shifting for point of care ultrasound in primary healthcare in low- and middle-income countries-a systematic review. EClinicalMedicine. 2022;45:101333.
Becker DM, Tafoya CA, Becker SL, Kruger GH, Tafoya MJ, Becker TK. The use of portable ultrasound devices in low- and middle-income countries: a systematic review of the literature. Trop Med Int Health. 2016;21(3):294–311.
Hricak H, Abdel-Wahab M, Atun R, Lette MM, Paez D, Brink JA, et al. Medical imaging and nuclear medicine: a lancet oncology commission. Lancet Oncol. 2021;22(4):e136–72.
Acknowledgements
Not applicable.
Funding
This research received no specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The study was carried out independently using the Ben Gurion Simulation Center facilities, without the influence of external sponsorship or funding; the authors initiated and solely supported all research activities and data collection. The only exception to this independence is that UltraSight financially compensated the POCUS instructors and human models for participating in the study.
Author information
Contributions
O.K: Conceptualization, Methodology, Formal Analysis, Investigation, Writing—Original Draft Preparation I.B.S: Statistical Analysis, Writing—Review & Editing A.P: Writing—Review & Editing R.J: Instruction in POCUS Course, Initial Statistical Analysis O.W: Project Administration, Supervision L.F: Methodology, Project Administration, Supervision.
Ethics declarations
Ethics approval and consent to participate
The study received approval from the Ethics Review Board of the Faculty of Health Sciences at Ben-Gurion University of the Negev (Ethics Approval Number: 15-2022). The research was performed in accordance with the Declaration of Helsinki, and all methods were carried out in accordance with relevant guidelines and regulations. Written consent was obtained from all participants, who were fully informed about the study’s purpose and procedures, as well as their right to withdraw at any time without consequence. Participation was entirely voluntary; performance results remained confidential and were not disclosed to any overseeing organizations, ensuring the study had no impact on participants’ evaluations.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Karni, O., Shitrit, I.B., Perlin, A. et al. AI-enhanced guidance demonstrated improvement in novices’ Apical-4-chamber and Apical-5-chamber views. BMC Med Educ 25, 558 (2025). https://doi.org/10.1186/s12909-025-06905-5
DOI: https://doi.org/10.1186/s12909-025-06905-5