Leveraging Comprehensive Echo Data to Power Artificial Intelligence Models for Handheld Cardiac Ultrasound

Bibliographic Details
Title: Leveraging Comprehensive Echo Data to Power Artificial Intelligence Models for Handheld Cardiac Ultrasound
Authors: D.M. Anisuzzaman, PhD, Jeffrey G. Malins, PhD, John I. Jackson, PhD, Eunjung Lee, PhD, Jwan A. Naser, MBBS, Behrouz Rostami, PhD, Grace Greason, BA, Jared G. Bird, MD, Paul A. Friedman, MD, Jae K. Oh, MD, Patricia A. Pellikka, MD, Jeremy J. Thaden, MD, Francisco Lopez-Jimenez, MD, MSc, MBA, Zachi I. Attia, PhD, Sorin V. Pislaru, MD, PhD, Garvan C. Kane, MD, PhD
Source: Mayo Clinic Proceedings: Digital Health, Vol 3, Iss 1, Article 100194 (2025)
Publisher Information: Elsevier, 2025.
Publication Year: 2025
Collection: LCC:Computer applications to medicine. Medical informatics
Subject Terms: Computer applications to medicine. Medical informatics, R858-859.7
Abstract: Objective: To develop a fully end-to-end deep learning framework capable of estimating left ventricular ejection fraction (LVEF), estimating patient age, and classifying patient sex from echocardiographic videos, including videos collected using handheld cardiac ultrasound (HCU).
Patients and Methods: Deep learning models were trained on retrospective transthoracic echocardiography (TTE) data collected at Mayo Clinic in Rochester and surrounding Mayo Clinic Health System sites (training: 6432 studies; internal validation: 1369 studies). Models were then evaluated on retrospective TTE data from 3 Mayo Clinic sites (Rochester, n=1970; Arizona, n=1367; Florida, n=1562) before being applied to a prospective dataset of handheld ultrasound and TTE videos collected from 625 patients. Study data were collected between January 1, 2018, and February 29, 2024.
Results: Models showed strong performance on the retrospective TTE datasets (LVEF regression: root mean squared error (RMSE)=6.83%, 6.53%, and 6.95% for the Rochester, Arizona, and Florida cohorts, respectively; classification of LVEF ≤40% versus LVEF >40%: area under the curve (AUC)=0.962, 0.967, and 0.980 for Rochester, Arizona, and Florida, respectively; age: RMSE=9.44% for Rochester; sex: AUC=0.882 for Rochester) and performed comparably on prospective HCU versus TTE data (LVEF regression: RMSE=6.37% for HCU vs 5.57% for TTE; LVEF classification: AUC=0.974 vs 0.981; age: RMSE=10.35% vs 9.32%; sex: AUC=0.896 vs 0.933).
Conclusion: Robust TTE datasets can be used to effectively power HCU deep learning models, which in turn demonstrates that focused diagnostic images can be obtained with handheld devices.
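Note: The abstract pairs a continuous LVEF estimate (reported as RMSE) with a binary screening task at the 40% threshold (reported as AUC). The Python sketch below shows one common way such metrics could be computed from per-study model outputs; it is not taken from the article, and the function name, the scikit-learn calls, and the threshold-based scoring scheme are assumptions for illustration only.

# Minimal sketch (not from the article): computing an LVEF regression RMSE and
# a classification AUC at the LVEF <=40% threshold from per-study predictions.
import numpy as np
from sklearn.metrics import mean_squared_error, roc_auc_score

def evaluate_lvef(y_true, y_pred, threshold=40.0):
    """Return (RMSE, AUC) for LVEF estimates given true and predicted values in %."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    # Regression: root mean squared error in LVEF percentage points.
    rmse = float(np.sqrt(mean_squared_error(y_true, y_pred)))

    # Classification: studies with true LVEF <= threshold are positives; score
    # each study by how far the predicted LVEF falls below the threshold.
    labels = (y_true <= threshold).astype(int)
    scores = threshold - y_pred
    auc = float(roc_auc_score(labels, scores))
    return rmse, auc

# Toy usage with made-up values (not study data):
rmse, auc = evaluate_lvef([62, 35, 55, 28, 70], [60, 38, 52, 30, 66])
print(f"RMSE={rmse:.2f}%, AUC={auc:.3f}")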
Document Type: article
File Description: electronic resource
Language: English
ISSN: 2949-7612
Relation: http://www.sciencedirect.com/science/article/pii/S294976122500001X; https://doaj.org/toc/2949-7612
DOI: 10.1016/j.mcpdig.2025.100194
Access URL: https://doaj.org/article/84e08b1479764eda80534ce7fe8c6ec7
Accession Number: edsdoj.84e08b1479764eda80534ce7fe8c6ec7
Database: Directory of Open Access Journals