M Alsharqi1, W J Woodward1, J A Mumith2, D C Markham2, R Upton2, P Leeson1.
Abstract
Echocardiography plays a crucial role in the diagnosis and management of cardiovascular disease. However, interpretation remains largely reliant on the subjective expertise of the operator; as a result, inter-operator variability and differences in experience can lead to incorrect diagnoses. Artificial intelligence (AI) technologies provide new possibilities for echocardiography to generate accurate, consistent and automated interpretation of echocardiograms, thus potentially reducing the risk of human error. In this review, we discuss a subfield of AI relevant to image interpretation, called machine learning, and its potential to enhance the diagnostic performance of echocardiography. We discuss recent applications of these methods and future directions for AI-assisted interpretation of echocardiograms. The research suggests it is feasible to apply machine learning models to provide rapid, highly accurate and consistent assessment of echocardiograms, comparable to that of clinicians. These algorithms can accurately quantify a wide range of features, such as the severity of valvular heart disease or the ischaemic burden in patients with coronary artery disease. However, such applications are still in their infancy within the field of echocardiography. Research to refine these methods and validate their use for automation, quantification and diagnosis is in progress. Widespread adoption of robust AI tools in clinical echocardiography practice should follow and has the potential to deliver significant benefits for patient outcomes.
Keywords: artificial intelligence; echocardiography; machine learning
Year: 2018 PMID: 30400053 PMCID: PMC6280250 DOI: 10.1530/ERP-18-0056
Source DB: PubMed Journal: Echo Res Pract ISSN: 2055-0464
Figure 1. Types of machine learning algorithms.
Definition of machine learning classes.
| Class of machine learning | Definition |
|---|---|
| Supervised learning | Uses human-coded information to train machine learning models to classify unseen data. |
| Random forest | An ensemble of decision trees: an item is classified according to the most common output across all of the trees. Because each tree is trained on a different sample of the training data, random forests are less prone to over-fitting than a single decision tree. |
| Support vector machines (SVMs) | SVMs construct models capable of separating the training data into different classes. When presented with new data, the model predicts to which class the data belong. |
| Artificial neural networks (ANNs) | ANNs are loosely modelled on the structure of the brain: they comprise interconnected layers of neurons that analyse and classify input data. The greater the number of layers a network has, the higher the level of analysis; this forms the basis of deep learning. ANNs learn which connections are the most useful for classifying data and weight them accordingly. |
| Unsupervised learning | The model is not provided with human-coded outcomes, so it has to classify data based on its own analysis. This has the potential to identify novel relationships within the data. |
| Clustering techniques | These methods aim to classify data much as SVMs do; however, because the data are unlabelled, the model cannot rely on human-coded information. Instead, it identifies the natural groupings, or clusters, in the data and uses these clusters to classify new data. |
| Naive Bayes | A family of techniques that apply Bayes' theorem (the probability of an event can be updated in the light of prior evidence) to classify data, under the assumption that the features are independent of one another. |
| Principal component analysis (PCA) | PCA is a technique that makes data easier to analyse by transforming potentially correlated variables into non-correlated variables, known as principal components. These principal components allow for feature extraction from the original dataset. |
| Autoencoders | These are a type of ANN that encode the input into a compressed dataset, learn from this compressed information, and then reconstruct this information as output. By compressing the input data, this technique aims to learn the most important features of the input data. |
| Reinforcement learning | The machine learns how to interact with its environment through trial and error, so as to maximize rewards. It is analogous to how a baby learns to interact with its environment. |
| Q-Learning | In Q-learning, the agent learns the value of taking each possible action in each state of its environment, allowing it to act so as to maximize future reward. (NB: although this is a powerful machine learning technique, its use in the medical field is currently limited.) |
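The contrast between the supervised and unsupervised classes in the table can be made concrete in a few lines of code. The sketch below is purely illustrative (it is not taken from any study in this review): it trains a random forest on labelled synthetic data, then applies PCA followed by clustering to the same data without labels. The dataset sizes, scikit-learn models and hyperparameters are all assumptions chosen for demonstration.

```python
# Illustrative sketch: supervised vs unsupervised learning on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

# Synthetic stand-in for echo-derived features: 300 samples, 10 features, 2 classes.
X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: a random forest trained on human-coded labels, evaluated on unseen data.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
accuracy = rf.score(X_test, y_test)

# Unsupervised: PCA transforms correlated features into principal components,
# then k-means finds natural groupings without ever seeing the labels.
components = PCA(n_components=2).fit_transform(X)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(components)

print(f"random forest test accuracy: {accuracy:.2f}")
print(f"cluster sizes: {np.bincount(clusters)}")
```

The key difference is visible in the calls themselves: the supervised model receives `y_train`, whereas the clustering step receives only the feature matrix.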
Figure 2. Advantages of machine learning assisted echocardiography interpretation.
Basic findings and validation of machine learning applications in the field of echocardiography.
| Study | Year | Application | Machine learning model used | Training/validation set | Test set | Limits of agreement and bias | Sensitivity/Specificity/Accuracy | AUC | Time required for measurement |
|---|---|---|---|---|---|---|---|---|---|
| ( | 2018 | Recognise 15 echocardiography views | Convolutional neural network | 200,000 images | 20,000 images | – | –/–/91.7% | 0.996 | 21 ms/image |
| ( | 2018 | Quantification of wall motion abnormalities | Double density-dual tree discrete wavelet transform | 279 images | – | – | 96.12%/96%/96.05% | – | – |
| ( | 2018 | Quantification of wall motion abnormalities | Convolutional neural network | 4392 maps | 61 subjects | – | 81.1%/65.4%/75% | – | – |
| ( | 2017 | Recognition/classification of apical views | Supervised dictionary learning | 210 clips | 99 clips | – | –/–/95% | – | 0.05 ± 0.003 s per clip |
| ( | 2017 | Assessment of myocardial velocity | Unsupervised multiple kernel learning | 55 subjects | – | Avg 51.7% | Avg 73.25%/78.4%/– | – | <30 s |
| ( | 2016 | Classification/discrimination of pathological patterns (HCM vs ATH) | Support vector machine, random forest, artificial neural network | – | – | – | 96%/77%/– | 0.795 | 8 s |
| ( | 2016 | Classification/discrimination of pathological patterns (RCM vs CP) | Associative memory-based machine-learning algorithm | – | – | – | –/–/93.7% | 0.962 | – |
| ( | 2016 | Quantification of MR | Support vector machine | 5004 frames | – | – | 99.38%/99.63%/99.45% | – | – |
| ( | 2015 | Calculation of EF and LS | AutoEF Software | – | 255 patients | 0.83 (0.78 to 0.86) and −0.3 (1.5 to 0.9) | – | – | 8 ± 1 s/patient |
| ( | 2013 | Automated detection of LV border | Random forest classifier with an active shape model | 50 images | 35 images | – | –/–/90.09% | – | – |
| ( | 2011 | Quantification of wall motion abnormalities | Relevance Vector Machine classifier | 173 patients | – | – | –/–/93.02% | – | – |
| ( | 2008 | Quantification of wall motion abnormalities | Hidden Markov model | 24 studies (720 frames) | 20 studies (600 frames) | – | –/–/84.17% | – | – |
| ( | 2008 | Calculation of EF | AutoEF Software | 10,000 images | 92 patients | 1% (−19% to 33%) | – | – | – |
| ( | 2007 | Calculation of EF | AutoEF Software | >10,000 images | 200 patients | 6% (−2.87 to 2.91) | – | – | <15 s per view |
ATH, athletes' heart; Avg, average; CP, constrictive pericarditis; EF, ejection fraction; HCM, hypertrophic cardiomyopathy; LS, longitudinal strain; LV, left ventricle; MR, mitral regurgitation; ms, milliseconds; RCM, restrictive cardiomyopathy; s, seconds.
Figure 3. An example of a convolutional neural network model for image classification. A2C, apical two chamber; A3C, apical three chamber; A4C, apical four chamber; PLAX, parasternal long axis; PSAX, parasternal short axis.
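To illustrate the kind of architecture shown in Figure 3, the sketch below defines a hypothetical small convolutional neural network for five-class view classification (A2C, A3C, A4C, PLAX, PSAX) in PyTorch. The layer sizes, channel counts and the 64 × 64 greyscale input are assumptions for demonstration, not the published model.

```python
# Hypothetical sketch of a CNN view classifier (not the network from Figure 3).
import torch
import torch.nn as nn

class ViewClassifier(nn.Module):
    def __init__(self, n_views=5):  # A2C, A3C, A4C, PLAX, PSAX
        super().__init__()
        # Convolution + pooling layers extract progressively higher-level features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # After two 2x2 poolings, a 64x64 input becomes 32 feature maps of 16x16.
        self.classifier = nn.Linear(32 * 16 * 16, n_views)

    def forward(self, x):
        x = self.features(x)      # feature extraction
        x = x.flatten(1)          # flatten feature maps into one vector per image
        return self.classifier(x) # logits, one per candidate view

model = ViewClassifier()
logits = model(torch.randn(4, 1, 64, 64))  # batch of 4 greyscale frames
probs = logits.softmax(dim=1)              # per-view probabilities
```

In practice such a network would be trained on labelled view clips with a cross-entropy loss; the softmax output gives the probability assigned to each of the five views.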
Figure 4. Diagram of an example machine learning model process.