Masaaki Komatsu, Akira Sakai, Ai Dozen, Kanto Shozu, Suguru Yasutomi, Hidenori Machino, Ken Asada, Syuzo Kaneko, Ryuji Hamamoto.
Abstract
Artificial intelligence (AI) is increasingly being adopted in medical research and applications. Medical AI devices have continually been approved by the Food and Drug Administration (FDA) in the United States and by the responsible institutions of other countries. Ultrasound (US) imaging is commonly used across a wide range of medical fields. However, AI-based analysis of US images and its clinical implementation have not progressed as steadily as for other medical imaging modalities. Issues characteristic of US imaging, namely its dependence on manual operation and the presence of acoustic shadows, make image quality control difficult. In this review, we introduce global trends in medical AI research on US imaging from both clinical and basic perspectives. We also discuss US image preprocessing, algorithms well suited to US image analysis, AI explainability for obtaining informed consent, the approval process for medical AI devices, and future perspectives on the clinical application of AI-based US diagnostic support technologies.
Keywords: artificial intelligence; classification; deep learning; detection; explainability; machine learning; preprocessing; segmentation; ultrasound imaging
Year: 2021 PMID: 34201827 PMCID: PMC8301304 DOI: 10.3390/biomedicines9070720
Source DB: PubMed Journal: Biomedicines ISSN: 2227-9059
List of FDA-approved medical AI devices for US imaging.
| No. | FDA | Product Name/Company | Description | Body Area | Decision Date | Regulatory Class/Submission Type |
|---|---|---|---|---|---|---|
| 1 | K161959 | ClearView cCAD/ClearView Diagnostics, Inc., Piscataway, NJ, USA | Automatically classifies the shape and orientation characteristics of user-selected ROIs in breast US images with the BI-RADS category using machine learning. | Breast | 28 December 2016 | Class II/510(k) |
| 2 | K162574 | AmCAD-US/AmCAD BioMed Corporation, Taipei, Taiwan | Visualizes and quantifies US image data with backscattered signals echoed by tissue compositions. | Thyroid | 30 May 2017 | Class II/510(k) |
| 3 | K173780 | EchoMD AutoEF software/Bay Labs, Inc., San Francisco, CA, USA | Provides automated estimation of the LVEF on previously acquired cardiac US images using machine learning. | Heart | 14 June 2018 | Class II/510(k) |
| 4 | K180006 | AmCAD-UT Detection 2.2/AmCad BioMed Corporation, Taipei, Taiwan | Analyzes thyroid US images of user-selected ROIs. Provides detailed information with the quantification and visualization of US characteristics of thyroid nodules. | Thyroid | 31 August 2018 | Class II/510(k) |
| 5 | K190442 | Koios DS/Koios Medical, Inc., New York, NY, USA | Diagnostic aid using machine learning to characterize US image features with user-provided ROIs to generate categorical output that aligns to BI-RADS and the auto-classified shape and orientation. | Breast | 3 July 2019 | Class II/510(k) |
| 6 | K191171 | EchoGo Core/Ultromics Ltd., Oxford, UK | Automatically measures cardiac US parameters including EF, Global Longitudinal Strain, and LV volume using machine learning. | Heart | 13 November 2019 | Class II/510(k) |
| 7 | DEN190040 | Caption Guidance/Caption Health, Inc., Brisbane, CA, USA | Assists in the acquisition of anatomically correct cardiac US images that represent standard 2D echocardiographic diagnostic views and orientations using deep learning. | Heart | 7 February 2020 | Class II/De Novo |
| 8 | K200356 | MEDO ARIA/Medo.ai, Inc., Edmonton, Canada | Views and quantifies US image data to aid trained medical professionals in the diagnosis of developmental dysplasia of the hip using machine learning. | Hip | 11 June 2020 | Class II/510(k) |
| 9 | K200980 | Auto 3D Bladder Volume Tool/Butterfly Network, Inc., Guilford, CT, USA | Views, quantifies, and reports the results acquired on Butterfly Network US systems using machine learning-based 3D volume measurements of the bladder. | Bladder | 11 June 2020 | Class II/510(k) |
| 10 | K200621 | Caption Interpretation Automated Ejection Fraction Software/Caption Health, Inc., Brisbane, CA, USA | Processes previously acquired cardiac US images and provides machine learning-based estimation of the LVEF. | Heart | 22 July 2020 | Class II/510(k) |
| 11 | K201369 | AVA (Augmented Vascular Analysis)/See-Mode Technologies Pte. Ltd., Singapore, Singapore | Analyzes vascular US scans including vessel wall segmentation and measurement of the intima-media thickness of the carotid artery using machine learning. | Carotid | 16 September 2020 | Class II/510(k) |
| 12 | K201555 | EchoGo Pro/Ultromics Ltd., Oxford, UK | Decision support system for diagnostic stress ECG using machine learning to assess the severity of CAD using LV segmentation of cardiac US images. | Heart | 18 December 2020 | Class II/510(k) |
| 13 | K210053 | LVivo software application/DiA Imaging Analysis Ltd., Beer Sheva, Israel | Evaluates the LVEF using deep learning-based LV segmentation in cardiac US images. | Heart | 5 February 2021 | Class II/510(k) |
Abbreviations: ROI, region of interest; BI-RADS, Breast Imaging Reporting and Data System; LVEF, left ventricular ejection fraction; ECG, echocardiography; CAD, coronary artery disease.
Figure 1. Acoustic shadow detection: (a) The red areas represent the acoustic shadows segmented using the semi-supervised approach [43]. (b) As a candidate for clinical application, examiners can evaluate in real time whether the currently acquired US image is suitable for diagnosis. In the case of low image quality, rescanning can be performed within the same examination. This application may improve examiner workflow and reduce the patient burden.
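The real-time quality check described above can be reduced to a simple rule once a shadow-segmentation mask is available. The sketch below is purely illustrative (the 20% shadow-area threshold is a hypothetical value, not one taken from the cited study):

```python
import numpy as np

def shadow_fraction(shadow_mask: np.ndarray) -> float:
    """Fraction of pixels flagged as acoustic shadow in a binary mask."""
    return float(shadow_mask.mean())

def needs_rescan(shadow_mask: np.ndarray, threshold: float = 0.2) -> bool:
    """Flag a frame for rescanning when the shadowed area exceeds the threshold.

    The 0.2 (20%) threshold is a hypothetical value for illustration only.
    """
    return shadow_fraction(shadow_mask) > threshold

# Toy mask: 1 = shadow, 0 = clear tissue; bottom half shadowed -> fraction 0.5
mask = np.zeros((4, 4), dtype=np.uint8)
mask[2:, :] = 1
print(needs_rescan(mask))  # True -> prompt the examiner to rescan
```

In practice, the mask would come from the semi-supervised segmentation model [43] rather than a fixed pattern, and the threshold would be tuned per examination type.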
Figure 2Fundamental algorithms generally used in US imaging analysis. (a) Image classification of whether the fetal US image contains a diagnostically useful cross-section such as a four-chamber view (4CV). (b) Detection of the fetal heart for evaluation of fetal heart structure. (c) Segmentation of the boundaries or regions of the fetal heart to measure the fetal cardiac index such as cardiothoracic area ratio (CTAR).
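Once the fetal heart and thorax have been segmented as in panel (c), the cardiothoracic area ratio (CTAR) follows directly from the mask pixel counts. A minimal numpy sketch, assuming binary masks on a common pixel grid:

```python
import numpy as np

def area_ratio(heart_mask: np.ndarray, thorax_mask: np.ndarray) -> float:
    """Cardiothoracic area ratio: segmented cardiac area / thoracic area."""
    heart_area = int(heart_mask.sum())
    thorax_area = int(thorax_mask.sum())
    return heart_area / thorax_area

# Toy masks: a 100-pixel thorax containing a 20-pixel heart region
thorax = np.ones((10, 10), dtype=np.uint8)
heart = np.zeros_like(thorax)
heart[3:7, 3:8] = 1  # 4 rows x 5 cols = 20 pixels
print(area_ratio(heart, thorax))  # 0.2
```

Real measurements would additionally convert pixel counts to physical areas using the probe's spatial calibration.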
Figure 3Use of time-series information to reduce noisy artifacts and to perform accurate segmentation in US videos. CSC employs the time-series information of US videos and specific section information to calibrate the output of U-Net [51].
List of segmentation algorithms for US imaging analysis.
| Algorithm Name | Description | Ref. |
|---|---|---|
| U-Net | Based on a fully convolutional network; achieves more accurate segmentation from smaller amounts of training data than other methods. Several studies have reported superior segmentation performance using models based on U-Net, which is particularly suitable for biomedical image segmentation. | [ |
| DeepLab | Utilizes atrous convolution and demonstrates its state-of-the-art segmentation performance. DeepLabv3+ is the latest version developed by combining pyramidal pooling modules with an encoder-decoder model. | [ |
| CSC | Utilizes time-series information to reduce noisy artifacts and performs accurate segmentation on a small and deformable organ in US videos. | [ |
| MFCY | Uses time-series information and demonstrates high-performance segmentation on a target organ with a cylindrical shape in US videos. | [ |
| AIDAN | The attention-guided dual-path network improves segmentation performance in US imaging. | [ |
| Deep attention network | A U-Net-based network combining a channel attention module and VGG improves segmentation performance in US imaging. | [ |
| Contrastive rendering | A contrastive learning-based framework improves the boundary estimation in US imaging. | [ |
| GAN-based method | A GAN-based framework with progressive learning improves the boundary estimation in US imaging. | [ |
| 3D U-Net | The representative direct segmentation method for conventional 3D volumes is useful for accurate volume quantification. | [ |
| IFSS-NET | The interactive few-shot Siamese network uses a Siamese network and a recurrent neural network to perform 3D segmentation training from few-annotated 2D US images. | [ |
| Encoder–decoder architecture | A semi-automatic segmentation algorithm for freehand 3D US volumes that employs an encoder–decoder architecture with 2D US images and several 2D labels. | [ |
Abbreviations: CSC, cropping–segmentation–calibration; MFCY, multi-frame + cylinder method; AIDAN, attention-guided dual-path network; GAN, generative adversarial network; IFSS-NET, interactive few-shot Siamese network.
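The segmentation performance that the methods above compete on is commonly reported with overlap metrics such as the Dice similarity coefficient (2|A∩B| / (|A| + |B|)). A minimal numpy sketch of the metric, using toy masks rather than any dataset from the cited studies:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A n B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, truth).sum()
    total = int(pred.sum()) + int(truth.sum())
    return 2.0 * intersection / total if total else 1.0

# Toy example: a 16-pixel ground-truth square and a prediction shifted by one row
truth = np.zeros((8, 8), dtype=np.uint8)
truth[2:6, 2:6] = 1
pred = np.zeros_like(truth)
pred[3:7, 2:6] = 1
print(dice(pred, truth))  # overlap = 12 pixels -> 2*12 / (16+16) = 0.75
```

A score of 1.0 means perfect overlap; the empty-mask convention (returning 1.0 when both masks are empty) is one common choice, not a universal one.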
Figure 4Possible techniques for AI explainability. The cardiac substructures were detected with colored bounding boxes in a three-vessel trachea view in (a) a normal case, and (b) a tetralogy of Fallot (TOF) case. (c) An image of the class-specific heatmap indicates the discriminative regions of the image that caused the particular class activity of interest. (d) Barcode-like timeline in a TOF case. The vertical axis represents the 18 selected substructures and the horizontal axis represents the examination timeline in the rightward direction. A probability of ≥0.01 was set as well-detected and is indicated as the blue bar, whereas <0.01 was set as non-detected and is indicated by the gray bar in each frame. The pulmonary artery was not detected (red dotted box).
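The barcode-like timeline in panel (d) is, computationally, a per-frame thresholding of detection probabilities followed by a scan for substructures that never appear. A minimal sketch with the 0.01 threshold from the caption; the substructure names and probability values below are illustrative, not taken from the study:

```python
import numpy as np

# Illustrative subset of the 18 substructures described in the caption
SUBSTRUCTURES = ["pulmonary artery", "aorta", "trachea"]

def detection_barcode(probs: np.ndarray, threshold: float = 0.01) -> np.ndarray:
    """Binarize per-frame probabilities: True = well-detected (blue bar), False = gray bar."""
    return probs >= threshold

def never_detected(probs: np.ndarray, threshold: float = 0.01) -> list:
    """Substructures with no frame above the threshold (candidate anomaly markers)."""
    bars = detection_barcode(probs, threshold)
    return [name for name, row in zip(SUBSTRUCTURES, bars) if not row.any()]

# Rows: substructures; columns: frames along the examination timeline
probs = np.array([
    [0.000, 0.002, 0.004],  # pulmonary artery never crosses 0.01 (cf. TOF case)
    [0.900, 0.850, 0.700],
    [0.020, 0.005, 0.300],
])
print(never_detected(probs))  # ['pulmonary artery']
```

In the TOF case shown in the figure, such a persistently undetected pulmonary artery (the red dotted box) is exactly the pattern this kind of summary would surface to the examiner.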