Zeynettin Akkus, Yousof H Aly, Itzhak Z Attia, Francisco Lopez-Jimenez, Adelaide M Arruda-Olson, Patricia A Pellikka, Sorin V Pislaru, Garvan C Kane, Paul A Friedman, Jae K Oh.
Abstract
Echocardiography (Echo), a widely available, noninvasive, and portable bedside imaging tool, is the most frequently used imaging modality for assessing cardiac anatomy and function in clinical practice. However, its operator dependence introduces variability in image acquisition, measurements, and interpretation. To reduce this variability, there is an increasing demand for an operator- and interpreter-independent Echo system empowered by artificial intelligence (AI), which has already been incorporated into diverse areas of clinical medicine. Recent advances in AI applications in computer vision have enabled the identification of conceptual and complex imaging features through the self-learning ability of AI models and efficient parallel computing power. This has opened vast opportunities, such as AI models that are robust to variation and generalize well enough for instantaneous image quality control, assistance in acquiring optimal images and diagnosing complex diseases, and improvements to the clinical workflow of cardiac ultrasound. In this review, we provide a state-of-the-art overview of AI-empowered Echo applications in cardiology and of future trends for AI-powered Echo technology that standardizes measurements, aids physicians in diagnosing cardiac diseases, optimizes Echo workflow in clinics, and, ultimately, reduces healthcare costs.
Keywords: artificial intelligence; cardiac ultrasound; echocardiography; portable ultrasound
Year: 2021 PMID: 33808513 PMCID: PMC8037652 DOI: 10.3390/jcm10071391
Source DB: PubMed Journal: J Clin Med ISSN: 2077-0383 Impact factor: 4.241
Figure 1. Sample US images showing different US modes. (A) B-mode image of the apical 4-chamber view of a heart. (B) Doppler image of mitral inflow. (C) Contrast-enhanced ultrasound image of the left ventricle. (D) Strain imaging of the left ventricle.
Figure 2. The context of artificial intelligence, machine learning, and deep learning. SVM: Support Vector Machine; CNN: convolutional neural networks; R-CNN: recurrent CNN; ANN: artificial neural networks.
Figure 3. A framework for training a deep-learning model for classification of myocardial diseases. Operations between layers are shown with arrows. SGD: Stochastic Gradient Descent.
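The training framework of Figure 3 can be sketched in miniature: a classifier fitted to labeled data with stochastic gradient descent (SGD). The deep models surveyed in this review are CNNs over echo frames; the softmax regression and synthetic data below are hypothetical stand-ins chosen only to make the SGD mechanics concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_sgd(X, y, n_classes, lr=0.1, epochs=20, batch=16):
    """Minimize cross-entropy loss with mini-batch SGD."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        order = rng.permutation(n)           # reshuffle each epoch
        for i in range(0, n, batch):
            idx = order[i:i + batch]
            p = softmax(X[idx] @ W + b)
            p[np.arange(len(idx)), y[idx]] -= 1.0   # dL/dlogits = p - onehot
            W -= lr * X[idx].T @ p / len(idx)
            b -= lr * p.mean(axis=0)
    return W, b

# Two synthetic "disease" classes, linearly separable by construction.
X = np.vstack([rng.normal(-1.0, 0.5, size=(100, 8)),
               rng.normal(+1.0, 0.5, size=(100, 8))])
y = np.array([0] * 100 + [1] * 100)

W, b = train_sgd(X, y, n_classes=2)
acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

A real pipeline would replace the linear model with convolutional layers and the synthetic arrays with echo frames, but the update rule per mini-batch is the same.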
Figure 4. The flowchart of the systematic review that includes identification, screening, eligibility, and inclusion.
Figure 5. The flowchart of the automated artificial-intelligence-empowered echo (AI-Echo) interpretation pipeline using a chain approach. QC: Quality Control.
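The chain approach of Figure 5 can be illustrated as a sequence of stages, where each stage consumes the previous stage's output and clips failing quality control never reach interpretation. The stage functions and clip fields below are hypothetical stand-ins, not an actual AI-Echo implementation.

```python
def quality_control(clip):
    return clip["quality"] >= 0.5            # stand-in for a QC model score

def classify_view(clip):
    return clip.get("view", "unknown")       # stand-in for a view classifier

def segment_and_measure(clip, view):
    # Stand-in for segmentation + quantification (e.g., LVEF from LV masks).
    return {"view": view, "lvef": clip.get("lvef")}

def ai_echo_pipeline(clips):
    reports = []
    for clip in clips:
        if not quality_control(clip):        # reject low-quality acquisitions
            continue
        view = classify_view(clip)
        if view != "A4C":                    # interpret only supported views
            continue
        reports.append(segment_and_measure(clip, view))
    return reports

clips = [
    {"quality": 0.9, "view": "A4C", "lvef": 55},
    {"quality": 0.2, "view": "A4C", "lvef": 60},   # fails QC, dropped early
    {"quality": 0.8, "view": "PLAX", "lvef": None},
]
reports = ai_echo_pipeline(clips)            # only the first clip is reported
```

Chaining the stages this way means a failure in an early stage (poor image quality, unsupported view) short-circuits the pipeline rather than propagating unreliable measurements downstream.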
Figure 6. A schematic diagram of AI (artificial intelligence) interpretation of echocardiography images for preliminary diagnosis and triaging of patients in emergency and primary care clinics. POCUS: point-of-care ultrasound.
Commercial software packages that provide automated measurements or diagnosis.
| Company | Software Package | AI-Empowered Tools |
|---|---|---|
| Siemens Medical Solutions Inc., USA | syngo Auto Left Heart | Auto EF, Auto LV and LA volumes, Auto Strain for manually selected views |
| GE Healthcare, Inc. | Ultra Edition Package | Auto EF, Auto LV and LA volumes, Auto Strain for manually selected views |
| TOMTEC Imaging Systems GmbH, Germany | Tomtec-Arena/Tomtec-Zero | Auto EF, Auto LV and LA volumes, Auto Strain for manually selected views |
| Ultromics Ltd. | Echo Go/Echo Go Pro | Auto EF, Auto LV and LA volumes, Auto Strain, Auto identification of CHD (fully automated) |
| Dia Imaging Analysis Ltd. | DiaCardio's LVivoEF Software/LVivo Seamless | Auto EF and Auto standard echo view identification (fully automated) |
| Caption Health, Inc., USA | The Caption Guidance software | AI tool that assists in capturing images of a patient's heart |
EF: ejection fraction. CHD: coronary heart disease.
Deep-learning-based AI studies for view identification and quality assessment. MAE: mean absolute error.
| Study | Task | DL Model | Data/Validation | Performance |
|---|---|---|---|---|
| Zhang et al. | 23 standard echo view classification | Customized 13-layer CNN model | 5-fold cross validation/7168 cine clips of 277 studies | Overall accuracy: 84% at individual image level |
| Madani et al. | 15 standard echo view classification | VGG | Training: 180,294 images of 213 studies | Overall accuracy: 97.8% at individual image level and 91.7% at cine-clip level |
| Akkus et al. | 24 Doppler image classes | Inception-ResNet | Training: 5544 images of 140 studies | Overall accuracy of 97% |
| Abdi et al. | Rating quality of apical 4-chamber views (0–5 scores) | A customized fully connected CNN | 3-fold cross validation/6196 images | MAE: 0.71 ± 0.58 |
| Abdi et al. | Quality assessment for five standard view planes | CNN regression architecture | Total dataset: 2435 cine clips | Average of 85% accuracy |
| Dong et al. | QC for fetal ultrasound cardiac four-chamber planes | Ensemble of three CNN models | 5-fold cross validation (7032 images) | Mean average precision of 93.52% |
| Labs et al. | Assessing quality of apical 4-chamber view | Hybrid model including CNN and LSTM layers | Training/validation/testing split (60/20/20%) of 1039 images in total | Average accuracy of 86% on the test set |
Deep-learning-based AI studies for image segmentation and quantification. MAD: mean absolute difference. LVEF: left ventricular ejection fraction.
| Study | Task | DL Model | Data/Validation | Performance |
|---|---|---|---|---|
| Zhang et al. | LV/LA segmentation; LVEF, LV and LA volumes, LV mass, global longitudinal strain | U-Net | LV segmentation: 5-fold cross validation on 791 images; LV volumes: 4748 measurements | IoU: 0.72–0.90 for LV segmentation; MAD of 9.7% for LVEF; MAD of 15–17% for LV/LA volumes and LV mass; MAD of 9% for strain |
| Leclerc et al. | LVEF, LV volumes | U-Net | 500 patients | LVEF: MAE of 5.6% |
| Jafari et al. | LV segmentation and bi-plane LVEF | A shallow U-Net with multi-task learning and adversarial training | 854 studies split into 80% training and 20% testing sets | DICE of 0.92 for LV segmentation |
| Chen et al. | LV segmentation in apical 2, 3, 4, or 5 chamber views | An encoder–decoder type CNN with multi-view regularization | Training set: 33,058 images | Average DICE of 0.88 |
| Oktay et al. | LV segmentation | Anatomically constrained CNN model | CETUS'14 3D US challenge dataset (training set: 15 studies; test set: 30 studies) | DICE of 0.91 ± 0.23 for LV segmentation |
| Ghorbani et al. | LV systolic and diastolic volumes | A customized CNN model (EchoNet) for semantic segmentation | Training set: 1.6 million images from 2850 patients | Systolic and diastolic volumes (R² = 0.74 and R² = 0.70) |
| Ouyang et al. | LVEF | 3D CNN model with residual connections | Training set: 7465 echo videos | MAE of 4.1% and 6% for internal and external datasets |
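The segmentation studies above quote overlap metrics whose definitions are standard: Dice = 2|A∩B|/(|A|+|B|) and IoU = |A∩B|/|A∪B|. A minimal sketch on toy binary masks (the masks are invented for illustration):

```python
import numpy as np

def dice(a, b):
    # Dice coefficient: twice the overlap divided by the total mask area.
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a, b):
    # Intersection over union (Jaccard index).
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

pred = np.zeros((8, 8), dtype=bool)
true = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True    # predicted LV mask: 16 pixels
true[3:7, 3:7] = True    # ground-truth mask: 16 pixels, 3x3 = 9 overlap

d = dice(pred, true)     # 2*9 / (16+16) = 0.5625
j = iou(pred, true)      # 9 / 23 ≈ 0.391
```

Note that Dice is always at least as large as IoU for the same pair of masks (Dice = 2·IoU/(1+IoU)), which is worth keeping in mind when comparing studies that report different overlap metrics.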
Deep-learning-based AI studies for disease diagnosis. AUC: area under the curve.
| Study | Task | DL Model | Data/Validation | Performance |
|---|---|---|---|---|
| Zhang et al. | Diagnosis of hypertrophic cardiomyopathy (HCM), cardiac amyloidosis (amyloid), and pulmonary arterial hypertension (PAH) | VGG | HCM: 495/2244 | Hypertrophic cardiomyopathy: AUC of 0.93 |
| Ghorbani et al. | Diagnose presence of pacemaker leads; enlarged left atrium; LV hypertrophy | A customized CNN model | Training set: 1.6 million images from 2850 patients | Presence of pacemaker leads with AUC = 0.89; enlarged left atrium with AUC = 0.86; left ventricular hypertrophy with AUC = 0.75 |
| Ouyang et al. | Predict presence of HF with reduced EF | 3D convolutions with residual connections | Training set: 7465 echo videos | AUC of 0.97 |
| Omar et al. | Detecting wall motion abnormalities | Modified VGG-16 | 120 echo studies; leave-one-out cross validation | Accuracy: RF = 72.1% |
| Kusunose et al. | Detecting wall motion abnormalities (WMA) | ResNet | 300 patients with WMA + 100 normal controls; training: 64%, validation: 16% | AUC of 0.99 |
| Narula et al. | Differentiate HCM from athlete's heart (ATH) | A customized ANN | 77 ATH and 62 HCM patients; ten-fold cross validation | Sensitivity: 87% |
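The AUC values quoted above can be read as the probability that a randomly chosen diseased case receives a higher model score than a randomly chosen control. A minimal sketch using the Mann–Whitney formulation, with invented scores:

```python
import numpy as np

def auc(scores, labels):
    # AUC as a rank statistic: fraction of (positive, negative) pairs where
    # the positive case is scored higher; ties count half.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]   # one positive case is outranked
a = auc(scores, labels)                    # 8 of 9 pairs correct ≈ 0.889
```

An AUC of 0.99, as reported by Kusunose et al. for wall motion abnormalities, therefore means the model almost always ranks an abnormal study above a normal one, regardless of any particular decision threshold.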