K R Siegersma1,2, T Leiner3, D P Chew4,5, Y Appelman1, L Hofstra1,6, J W Verjans7,8,9.
Abstract
Healthcare, conceivably more than any other area of human endeavour, has the greatest potential to be affected by artificial intelligence (AI). This potential has been shown by several reports demonstrating equal or superhuman performance in medical tasks aimed at improving efficiency, diagnosis and prognosis. This review focuses on the state of the art of AI applications in cardiovascular imaging. It provides an overview of the current applications and studies performed, including the potential value, implications, limitations and future directions of AI in cardiovascular imaging. It is envisioned that AI will dramatically change the way doctors practise medicine. In the short term, it will assist physicians with routine tasks, such as automating measurements, making predictions based on big data, and putting clinical findings into an evidence-based context. In the long term, AI will not only assist doctors; it has the potential to significantly improve access to health and well-being data for patients and their caretakers, thereby empowering patients. From a physician's perspective, reliable AI assistance will be available to support clinical decision-making. Although cardiovascular studies implementing AI are increasing in number, the applications have only just started to penetrate contemporary clinical care.
Keywords: Artificial intelligence; Cardiac imaging techniques; Clinical decision-making; Machine learning; Medical imaging
Year: 2019 PMID: 31399886 PMCID: PMC6712136 DOI: 10.1007/s12471-019-01311-1
Source DB: PubMed Journal: Neth Heart J ISSN: 1568-5888 Impact factor: 2.380
Fig. 1 Artificial intelligence is able to impact all steps in the imaging chain
Fig. 2 a–c Images obtained from the research performed by Madani et al. [37]. a 2D representation of the different echocardiographic views. Different colours represent the different standard echocardiographic views. A deep-learning model enabled classification, which resulted in the clustering seen in the plot on the right. b The saliency maps (occlusion map not shown). The input pixels weighted most heavily in the neural network’s classification of the original images (left). The most important pixels (right) form an outline of relevant structures, demonstrating patterns similar to those humans use to classify the image. c The confusion matrices for different classifiers. The actual views are represented on the vertical axes. The horizontal axes represent the classification of views by a neural network with video input (c1), a neural network with still images as input (c2) and the classification performed by a board-certified echocardiographer (c3). The numbers in the squares represent the percentage of labels predicted for each category (percentages may not sum to 100 due to rounding) [37]
Fig. 3 a–e A stenosis (indicated by the arrowheads) displayed in different imaging methods. a Computed tomography coronary angiography (CTCA). b Calculation of fractional flow reserve (FFR) with a machine learning (ML) model. c Calculation of FFR with computational fluid dynamics (CFD). d Measurement of the stenosis during invasive coronary angiography. e From CTCA to an AI-based 3D model of the coronary tree, displaying the FFR at different locations along the coronary arteries. (Images reproduced with permission [15])
Fig. 4 Comparison of processing times for segmentation of the aortic valve in cardiovascular magnetic resonance phase-contrast imaging. Automated segmentation used a neural network approach, trained with 150 segmentations and validated in a cohort of 190 segmentations. Automated segmentation times were obtained with GPU acceleration; however, even without GPU acceleration, the average segmentation time was 19.04 s. (Images obtained from Bratt et al. [10])