
Development of Novel Artificial Intelligence to Detect the Presence of Clinically Meaningful Coronary Atherosclerotic Stenosis in Major Branch from Coronary Angiography Video.

Hiroto Yabushita1,2, Shinichi Goto1, Sunao Nakamura2, Hideki Oka1, Masamitsu Nakayama1, Shinya Goto1.   

Abstract

AIM: Clinically meaningful coronary stenosis is diagnosed by trained interventional cardiologists. Whether artificial intelligence (AI) can detect coronary stenosis from coronary angiography (CAG) video is unclear.
METHODS: A total of 199 consecutive patients who underwent coronary arteriography (CAG) for chest pain between December 2018 and May 2019 were enrolled. Each patient underwent CAG with multiple views, resulting in a total of 1,838 videos. A multilayer 3-dimensional convolutional neural network (CNN) was trained as an AI to detect clinically meaningful coronary artery stenosis diagnosed by an expert interventional cardiologist, using data from 146 patients (1,359 videos) randomly selected from the entire dataset (training dataset). This training dataset was further split into 109 patients (989 videos) for derivation and 37 patients (370 videos) for validation. The AI developed in the derivation cohort was tuned in the validation cohort to produce the final model.
RESULTS: The final model was selected as the model with the best performance in the validation dataset. The predictive accuracy of the final model was then tested on the remaining 53 patients (479 videos) as the test dataset. Our AI model showed a c-statistic of 0.61 in the validation dataset and 0.61 in the test dataset.
CONCLUSION: An artificial intelligence applied to CAG videos could detect clinically meaningful coronary atherosclerotic stenosis diagnosed by expert cardiologists with modest predictive value. Further studies with an improved AI and a larger sample size are necessary.


Keywords:  Artificial intelligence; Atherosclerotic coronary stenosis; Coronary angiogram; Diagnosis


Year:  2020        PMID: 33012741      PMCID: PMC8326176          DOI: 10.5551/jat.59675

Source DB:  PubMed          Journal:  J Atheroscler Thromb        ISSN: 1340-3478            Impact factor:   4.928


Introduction

Coronary arterial disease is common worldwide. Despite the availability of various non-invasive tests to detect myocardial ischemia [1-6], coronary angiography (CAG), originally by cine film and now by CAG video, is still considered the gold standard for selecting clinically meaningful coronary stenosis for coronary intervention [7, 8]. Traditionally, significant coronary stenosis has been defined as 75% or greater luminal stenosis in a major coronary arterial branch [9]. Automatic detection of clinically meaningful coronary stenosis has been realized in computed tomography (CT) coronary angiography [10-12], but remains a challenge for CAG video. Despite a long history of efforts to develop quantitative coronary angiography [13-15], no standard method has been established [16]. Indeed, the time-dependent change in the 2-dimensional images of a CAG video was difficult to handle even with a computer equipped with standard computer vision software. Recent advances in high-performance computing and artificial intelligence (AI), achieved with deep-learning technology using neural networks such as the convolutional neural network (CNN), have enabled multi-dimensional datasets to be handled. So far, CNN-based AI has been applied to various biomedical data, such as the 12-lead electrocardiogram [17-20], serially measured biomarkers [21], and various 2-dimensional images such as the echocardiogram [22], prediction of cardiac contractility [23], and others [24]. Here we present an attempt to develop an AI model to detect the presence of clinically meaningful coronary stenosis from CAG video with the use of a multilayer 3-dimensional CNN.

Methods

Patient Population

Consecutive patients who underwent coronary arteriography (CAG) for chest pain between December 2018 and May 2019 were enrolled retrospectively at a single center, New Tokyo Hospital. Eligible patients were adults aged >18 years who were referred to the hospital with the symptom of chest pain and underwent coronary angiography. Patients with a history of coronary arterial bypass surgery, acute coronary syndrome, or a known hereditary abnormality of the coronary artery, such as coronary arterial aneurysm, were excluded. The study protocol was approved by the Institutional Review Board at New Tokyo Hospital as of February 2019 (approval number 0173). The data analysis protocol was also approved by the Institutional Review Board at Tokai University School of Medicine (approval number 19R-282). The study was conducted in accordance with the Declaration of Helsinki and local regulatory requirements. Approximately 9 videos with various views were obtained for each patient.

Study Design

The study design was a single-center retrospective analysis. All CAG procedures in participating patients were performed as standard procedures at the study center. Clinically meaningful coronary stenosis was defined as 75% or greater coronary luminal stenosis in at least one of the 3 main branches of the coronary artery (the right coronary artery, the left anterior descending branch, and the circumflex branch of the left coronary artery, including the left main coronary trunk). A highly trained cardiologist outside of this study (Haruhito Yuki, MD) independently determined whether any of these three major branches had clinically meaningful 75% or greater stenosis.

Artificial Intelligence Model

The structure of the multilayer CNN constructing the AI model to detect coronary arterial stenosis is shown in Fig. 1. Videos are time series of 2-dimensional coronary arteriogram images. The 3D matrix of gray-scale density in each region of interest (ROI) in each frame of a CAG video is defined as density (D): (Tnk, Yni, Xnj), where D ranges from 0 to 255, nk from 1 to 45 frames, ni from 0 to 224, and nj from 0 to 224. The sets of data constructing each CAG video were used as input to the multilayer 3-dimensional CNN (Fig. 1). The 3-dimensional CNN is better suited to video data, for reasons previously published [25]. Each layer except the last dense layer was followed by a rectified linear unit (ReLU) activation and batch normalization. The last dense layer was followed by a sigmoid activation to handle the binary classification problem [26]. The model was trained to minimize the binary cross-entropy between the output and the label using the RMSProp optimizer [27, 28]. The binary cross-entropy was defined as ce = -(y log(p) + (1-y) log(1-p)), where y is the label (0 is negative and 1 is positive) and p is the probability of being positive calculated by the model. The RMSProp optimizer [29] was used as shown in the code described in the supplement section.
Fig.1. Structure of the neural network and input data for the model

Schematic illustration of the neural network model. The input information, the “CAG video”, is converted to a 3D matrix of density in each region of interest (ROI), shown as density (D): (Tnk, Yni, Xnj), where density ranges from 0 to 255, nk from 0 to 44 frames, ni from 0 to 224, and nj from 0 to 224. Conv 3-D represents a three-dimensional convolutional neural network (CNN) layer. MaxPooling represents the layer for down-sampling. Global Average Pooling is the layer that calculates the average for each channel. Dense represents the fully connected layer.

For model calculation, a combination of high-performance computers in our laboratory, an HPC5000-XSLGPU4TS (containing 4 NVIDIA® Tesla® V100 GPUs; HPC Systems Inc., Tokyo, Japan) and an HPC3000-XKL2Uquad (4 Xeon Phi 7210 processors; HPC Systems Inc., Tokyo, Japan), was used. The detailed code for model definition and running is shown in the supplemental code section.
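The binary cross-entropy loss defined above can be written directly in NumPy. This is a minimal sketch for illustration; the actual training used the loss built into Keras, as shown in the supplemental code, and the function name here is our own.

```python
import numpy as np

def binary_cross_entropy(y, p):
    """Binary cross-entropy as defined in the text:
    ce = -(y*log(p) + (1-y)*log(1-p)),
    where y is the label (0 or 1) and p the predicted probability."""
    p = np.clip(p, 1e-7, 1 - 1e-7)  # guard against log(0)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# A confident correct prediction gives a low loss; a confident wrong one is penalized heavily.
print(binary_cross_entropy(1, 0.9))  # low loss
print(binary_cross_entropy(1, 0.1))  # high loss
```

Minimizing this quantity over the training videos is what drives the RMSProp weight updates.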

Cohort for Model Derivation, Validation and Testing

Of the 1,838 videos from 199 patients, 146 patients (1,359 videos) were randomly selected as the training dataset, which was further split into 109 patients (989 videos) for derivation and 37 patients (370 videos) for validation. The derivation cohort was used to train the AI model; hyperparameters were then tuned in the validation cohort. The final prediction accuracy of our AI model was tested on the remaining 53 patients (479 videos) as the test cohort. As shown in Fig. 2, there was no overlap of patients among the derivation, validation, and test cohorts.
Fig.2. Patient Selection

From a total of 1,838 CAG videos from 199 patients, 146 patients (1,359 videos) were randomly selected as the training cohort. This cohort was further split into 109 patients (989 videos) for derivation and 37 patients (370 videos) for validation. The remaining 53 patients (479 videos) formed the test dataset. The AI model was trained solely on CAG videos from training-cohort patients. Hyperparameter tuning and selection of the best model within 30 epochs were done with the validation cohort. The test cohort was used solely for testing the performance of the final model. There was no overlap of patients between cohorts.

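The patient-level split described above can be sketched as follows. This is a minimal illustration only: the seed and array names are hypothetical, and the authors' actual randomization procedure is not specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical seed, for reproducibility of the sketch

patient_ids = np.arange(199)          # 199 patients in the full cohort
shuffled = rng.permutation(patient_ids)

train_patients = shuffled[:146]       # 146 patients -> training dataset
test_patients = shuffled[146:]        # remaining 53 patients -> test dataset

# The training dataset is split again, still at the patient level.
derivation_patients = train_patients[:109]   # 109 patients for derivation
validation_patients = train_patients[109:]   # 37 patients for validation

# Videos inherit the cohort of their patient, so no patient
# can appear in more than one cohort.
assert set(derivation_patients).isdisjoint(validation_patients)
assert set(train_patients).isdisjoint(test_patients)
```

Splitting by patient rather than by video is the design choice that prevents videos of the same patient from leaking between cohorts.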

Input Data

From each CAG video, the initial 45 frames were extracted and resized to make input data of 224 x 224-pixel 2-dimensional CAG images, as shown in Fig. 3 (panel A). Each pixel contains density information from 0 to 255, as shown in Fig. 3 (panel B). Non-square images were converted to square shape by padding with 0 (black). Thus, the resulting input data structure is a 3D matrix of density (D): (Tnk, Yni, Xnj), where density ranges from 0 to 255, nk from 1 to 45 frames, ni from 0 to 224, and nj from 0 to 224. Each coronary angiogram image was obtained every 33 milliseconds. This 3D matrix was fed to a multilayer 3-dimensional CNN. The model was trained at the video level, with a yes/no label for clinically meaningful 75% or greater coronary arterial stenosis, for all 989 video streams from the 109 patients in the derivation dataset.
Fig.3. Data Input for Generation of AI model

One frame of a CAG video was treated as a matrix of 225 x 225 regions of interest (ROI) (panel A), each with a gray-scale value from 0 to 255 (panel B). One set of video data is constructed from 45 frames (panel C) of 2-dimensional images obtained every 33 milliseconds.

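The preprocessing described above (first 45 frames, zero-padding to square, resizing to 224 x 224, one gray-scale channel) can be sketched in NumPy. The function name and the nearest-neighbour resize are illustrative assumptions; the paper does not state the exact resizing method used.

```python
import numpy as np

def to_input_tensor(video, n_frames=45, size=224):
    """Convert a grayscale CAG video (frames, height, width) into the
    fixed-size 3D input matrix described in the text. Assumes the video
    has at least n_frames frames."""
    clip = video[:n_frames]                        # keep the initial 45 frames
    h, w = clip.shape[1], clip.shape[2]
    side = max(h, w)
    padded = np.zeros((n_frames, side, side), dtype=np.uint8)  # pad with 0 (black)
    padded[:, :h, :w] = clip
    # Nearest-neighbour resize via index sampling (a sketch only).
    idx = np.arange(size) * side // size
    resized = padded[:, idx][:, :, idx]
    return resized[..., np.newaxis]                # add channel axis -> (45, 224, 224, 1)

# Example with a synthetic non-square video of 60 frames.
video = np.random.randint(0, 256, (60, 512, 480), dtype=np.uint8)
x = to_input_tensor(video)
print(x.shape)  # (45, 224, 224, 1)
```

The resulting array has exactly the shape `(45, 224, 224, 1)` expected by the network defined in the supplemental code.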

Model Training and Testing

The process of model training was performed similarly to the method described previously [21]. Briefly, training was performed only on videos from the derivation cohort. Training was performed for 20 epochs with a mini-batch of 20 CAG videos randomly selected from the training dataset. The performance of the trained model was evaluated by the c-statistic on the validation dataset at the end of each epoch. The model with the best c-statistic on the validation set within the 30 epochs was chosen as the “final model” for further evaluation. Finally, the best model was tested on the test dataset. Sensitivity, specificity, accuracy, and F-measure were calculated using the median of the predicted values as the cutoff.
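The per-epoch evaluation by c-statistic can be sketched as follows. The `c_statistic` helper below is a standalone pairwise-ranking implementation of the area under the ROC curve; the commented loop is only a schematic of the model-selection procedure, with placeholder names (`model`, `train_videos`, `val_videos`) rather than the authors' actual objects.

```python
import numpy as np

def c_statistic(y_true, y_score):
    """C-statistic (area under the ROC curve) via the rank-sum identity:
    the fraction of (positive, negative) pairs ranked correctly, ties counting 0.5."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Schematic of the selection loop described in the text:
# best_auc, best_weights = -1.0, None
# for epoch in range(30):
#     model.fit(train_videos, train_labels, batch_size=20, epochs=1)
#     auc = c_statistic(val_labels, model.predict(val_videos).ravel())
#     if auc > best_auc:                  # keep the epoch with the best validation c-statistic
#         best_auc, best_weights = auc, model.get_weights()
```

A quick check on a small example: for labels `[0, 0, 1, 1]` and scores `[0.1, 0.4, 0.35, 0.8]`, three of the four positive/negative pairs are ranked correctly, giving a c-statistic of 0.75.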

Statistical Analysis

The neural network was constructed and trained using the Keras framework version 2.1.6 (https://keras.io) with TensorFlow version 1.8.0 as the backend. The CNN was trained using the back-propagation supervised training algorithm [30]. Receiver operating characteristic (ROC) analysis was conducted to calculate the area under the curve (AUC) to quantify the predictive accuracy of the developed model.

Results

Dataset

A total of 199 patients were identified for the study. Of them, 146 patients (1,359 videos) were randomly selected as the training dataset and the remaining 53 patients (479 videos) formed the test dataset (Fig. 2). The training dataset was further randomly split into 109 patients (989 videos) and 37 patients (370 videos) for the derivation and validation cohorts, respectively. Of the 989 videos in the derivation cohort, 319 videos contained clinically meaningful stenosis; of the 370 videos in the validation cohort, 204 videos contained clinically meaningful stenosis. At the patient level, 45 of 109, 24 of 37, and 28 of 53 patients were diagnosed with clinically significant coronary stenosis of 75% or more in the derivation, validation, and test cohorts, respectively.

Predictive Value of Artificial Intelligence Model

ROC analysis of the final model's ability to detect physician-determined clinically meaningful 75% or greater coronary stenosis from CAG videos showed a c-statistic of 0.61 for the validation cohort of the training dataset. The c-statistic for the test cohort was also 0.61. The median of the predicted values in the test cohort was 0.376. The sensitivity and specificity of our model calculated at that cutoff were 0.61 and 0.60, respectively. The accuracy and F-value under the same condition were 0.60 and 0.60, respectively. The predictive accuracy of the AI model was tested by the receiver operating characteristic curve; the area under the curve (AUC) was calculated in the validation cohort of the training dataset and in the test cohort.
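The threshold-dependent metrics reported above (sensitivity, specificity, accuracy, and F-value at the median-of-predictions cutoff) can be reproduced schematically. The function and the toy scores below are illustrative only, not the study data.

```python
import numpy as np

def metrics_at_cutoff(y_true, y_score, cutoff):
    """Sensitivity, specificity, accuracy, and F-measure at a given cutoff,
    with predictions at or above the cutoff counted as positive."""
    y_true = np.asarray(y_true)
    pred = (np.asarray(y_score) >= cutoff).astype(int)
    tp = ((pred == 1) & (y_true == 1)).sum()
    tn = ((pred == 0) & (y_true == 0)).sum()
    fp = ((pred == 1) & (y_true == 0)).sum()
    fn = ((pred == 0) & (y_true == 1)).sum()
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    return sens, spec, acc, f1

# Toy example; the study used the median of the model's predicted values as the cutoff.
scores = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7])
labels = np.array([0, 0, 1, 0, 1, 1])
cutoff = np.median(scores)
sens, spec, acc, f1 = metrics_at_cutoff(labels, scores, cutoff)
```

Using the median as the cutoff forces half of the predictions to be called positive, which is one simple way to fix an operating point without optimizing the threshold on test data.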

Discussion

Here we created a new method to detect the presence of clinically meaningful 75% or greater coronary luminal stenosis from CAG video with the use of a 3-dimensional CNN as an AI. The developed AI was able to detect the presence of physician-determined clinically meaningful 75% or greater stenosis with a fair predictive value (AUC of 0.61 in the test cohort). In current clinical practice, clinically meaningful 75% or greater coronary stenosis is determined by non-standardized visual evaluation by expert interventional cardiologists. Various quantitative analyses for the detection of clinically meaningful 75% or greater coronary stenosis have been attempted, but no standard method has been established yet. We developed a new model to detect the presence or absence of clinically meaningful 75% or greater coronary stenosis from CAG video. Our results indicate that CAG videos contain information on the presence or absence of clinically meaningful 75% or greater coronary luminal stenosis that can be picked up by a computer without clinical intuition. CAG is widely used to assess the anatomical structure of the coronary artery branches. In patients with chest pain, CAG also provides important information on whether the patient requires interventional treatment. Even though CAG plays an important role in determining the necessity of coronary intervention, including percutaneous coronary intervention, its assessment has not yet been standardized. Standardization of the assessment of coronary stenosis in CT imaging is comparatively easy, because CT produces still images and the images themselves are much more standardized than CAG videos [12]. A recently developed three-dimensional CNN and recurrent neural network (RNN) based AI was able to detect the presence of coronary stenosis with a c-statistic of 0.80 in cardiac CT [31].
The common use of CAG video by interventional cardiologists even after coronary CT evaluation may suggest the presence of more clinically meaningful information in CAG video than in CT angiography. However, even using high-performance computers, the time series of still images constituting video data has been technically difficult to handle. We previously applied an AI model constructed from a combination of multilayer 1-dimensional convolutional neural networks (CNN) and a special recurrent neural network (RNN), long short-term memory (LSTM), to handle the time-dependent change in prothrombin time international normalized ratio (PT-INR) and clinical outcome [21]. We also applied a similar combination of neural networks to the 12-lead electrocardiogram (ECG), treated as 12 sets of voltage changes sampled every 2 milliseconds [19]. The strength of the computer is its ability to handle multi-dimensional data as readily as single-dimensional data. Recently updated 3-dimensional CNNs running on graphics processing units (GPU) enabled us to handle complex CAG video within a reasonable calculation time. Yet, our present study showed only modest predictive performance of the AI with the use of a 3-dimensional convolutional neural network. One obvious reason is the small sample size. However, less-than-perfect predictive accuracy may persist even with a huge clinical dataset, because "clinically meaningful coronary stenosis defined as 75% or greater coronary luminal stenosis" is determined by physicians and may be influenced by many factors, such as the physician's experience, the intention to conduct coronary intervention, and so on. Despite its methodological novelty, our study has several obvious limitations. First, our study included only approximately 200 patients, which is not large enough to develop and validate an AI model accurately.
The modest predictive value of our AI model, with a c-statistic of 0.61 along with relatively low sensitivity and specificity, suggests that our AI does not yet have sufficient predictive ability for clinical application. Our AI may have better predictive accuracy for some coronary arterial branches, such as the left anterior descending artery (LAD), than for others, such as the right coronary artery (RCA), but the sample size was too small to conduct such a comparative analysis within the current dataset. The retrospective nature of sample collection is also a limitation of our study. Further studies with larger samples are expected to confirm the validity of our AI model. It is of note that our study does not provide any information on whether the predictive performance of our AI model could be improved by increasing the sample size. Second, our study is based on a retrospectively accumulated cohort of patients at a single center, which makes it difficult to generalize our results globally. Since clinically meaningful 75% or greater coronary stenosis is currently determined by the eyes of expert interventional cardiologists and not objectively confirmed, a single-center registry may provide a more standardized evaluation than a multi-center collaboration; yet the external validity of our AI model remains an issue to be clarified in the future. Our conclusion that AI technology allows a computer to detect the presence or absence of clinically meaningful 75% or greater coronary luminal stenosis from CAG video is not affected by these limitations. In conclusion, we suggest that an artificial intelligence built from a 3-dimensional convolutional neural network is able to learn from CAG video to predict the presence or absence of physician-determined clinically meaningful 75% or greater coronary stenosis with modest accuracy. Further studies with larger sample sizes are necessary before clinical application of artificial intelligence to detect coronary stenosis from CAG video in general.

Acknowledgements

This study was conducted with financial support from the Vehicle Racing Commemorative Foundation. The authors acknowledge partial financial support from grant-in-aid MEXT/JSPS KAKENHI 19H03661, AMED grant number A368TS, the Bristol-Myers Squibb independent research support project (33999603), and a grant from the Nakatani Foundation for Advancement of Measuring Technologies in Biomedical Engineering.

IRB

IRB at New Tokyo Hospital (approval number 0173); IRB at Tokai University School of Medicine (approval number 19R-282).

COI

Shinya Goto discloses receiving grant support from Sanofi, Pfizer, and Ono Pharma. Shinya Goto is a consultant for Janssen Pharma and Bristol Myers Squibb for developing novel antithrombotic agents. Shinya Goto is an associate editor for Circulation, a section editor for the Journal of Biorheology and Archives of Medical Science, and a section editor for Thrombosis and Haemostasis. Shinya Goto received personal fees from the American Heart Association (Dallas, US) as an Associate Editor and from the Thrombosis Research Institute (London, UK) as a Steering Committee Member for the GARFIELD-AF and GARFIELD-VTE projects.

Supplemental Code for model definition and running the AI.

import tensorflow as tf
import keras as ks
import numpy as np
import sys
import math
import random
import pandas as pd
from keras.layers import Dense, Conv3D, BatchNormalization, MaxPooling3D
from keras.layers import Input, Dropout, concatenate, GlobalAveragePooling3D
from keras.models import Model
from keras.optimizers import RMSprop
import os
from keras import backend as K
from keras import regularizers

SHAPE = (45, 224, 224, 1)
testData = np.load('Test.npy')        # Video to run the model, formatted as a 4D numpy array (time, y, x, channel=1)
TestDF = pd.read_csv('Testname.txt')  # Names of each sample, sorted in the same order as testData
batch_size = 20
EPOCS = 20
initial_lrate = 0.0001

def get_models(Inbatchsize):
    inputEco = Input(shape=(45, 224, 224, 1))
    kreg = None
    pad = 'same'
    strd = None
    kernel_num = 32
    kernel_num_1D = 32
    trainable = True
    x = inputEco
    x = Conv3D(kernel_num, (3, 3, 3), activation='relu', padding=pad,
               trainable=trainable, kernel_regularizer=kreg)(x)
    x = BatchNormalization(trainable=trainable)(x)
    x = MaxPooling3D(pool_size=(2, 2, 2), strides=strd, trainable=trainable)(x)
    x = Conv3D(kernel_num, (3, 3, 3), activation='relu', padding=pad,
               trainable=trainable, kernel_regularizer=kreg)(x)
    x = BatchNormalization(trainable=trainable)(x)
    x = MaxPooling3D(pool_size=(2, 2, 2), strides=strd)(x)
    x = Conv3D(kernel_num * 2, (3, 3, 3), activation='relu', padding=pad,
               trainable=trainable, kernel_regularizer=kreg)(x)
    x = BatchNormalization(trainable=trainable)(x)
    x = Conv3D(kernel_num * 2, (3, 3, 3), activation='relu', padding=pad,
               trainable=trainable, kernel_regularizer=kreg)(x)
    x = BatchNormalization(trainable=trainable)(x)
    x = MaxPooling3D(pool_size=(2, 2, 2), strides=strd, trainable=trainable)(x)
    x = Conv3D(kernel_num * 4, (3, 3, 3), activation='relu', padding=pad,
               trainable=trainable, kernel_regularizer=kreg)(x)
    x = BatchNormalization()(x)
    x = Conv3D(kernel_num * 4, (3, 3, 3), activation='relu', padding=pad,
               trainable=trainable, kernel_regularizer=kreg)(x)
    x = BatchNormalization()(x)
    x = Conv3D(kernel_num * 4, (3, 3, 3), activation='relu', padding=pad,
               trainable=trainable, kernel_regularizer=kreg)(x)
    x = BatchNormalization(trainable=trainable)(x)
    x = MaxPooling3D(pool_size=(2, 2, 2), strides=strd)(x)
    x = Conv3D(kernel_num * 8, (3, 3, 3), activation='relu', padding=pad,
               trainable=trainable, kernel_regularizer=kreg)(x)
    x = BatchNormalization(trainable=trainable)(x)
    x = Conv3D(kernel_num * 8, (3, 3, 3), activation='relu', padding=pad,
               trainable=trainable, kernel_regularizer=kreg)(x)
    x = BatchNormalization(trainable=trainable)(x)
    x = Conv3D(kernel_num * 8, (3, 3, 3), activation='relu', padding=pad,
               trainable=trainable, kernel_regularizer=kreg)(x)
    x = BatchNormalization(trainable=trainable)(x)
    x = GlobalAveragePooling3D()(x)
    x = Dense(100, activation='relu')(x)
    x = BatchNormalization()(x)
    x = Dropout(0.4)(x)
    z = Dense(1, activation='sigmoid')(x)
    model = Model(inputEco, z)
    return model

def runModel():
    model = get_models(1)
    model.summary()
    print('=========================================================')
    model.load_weights(os.path.join("./n", 'weightFile.hdf5'))
    classes = model.predict(testData, batch_size=int(batch_size))
    TestDF['Predicted'] = classes
    TestDF.to_csv('resultsTest.txt', sep='\t')  # save results

1.  Exercise echocardiography or exercise SPECT imaging? A meta-analysis of diagnostic test performance.

Authors:  K E Fleischmann; M G Hunink; K M Kuntz; P S Douglas
Journal:  JAMA       Date:  1998-09-09       Impact factor: 56.272

2.  The Relation between Left Coronary Dominancy and Atherosclerotic Involvement of Left Anterior Descending Artery Origin.

Authors:  Samad Ghaffari; Babak Kazemi; Jalil Dadashzadeh; Bita Sepehri
Journal:  J Cardiovasc Thorac Res       Date:  2013-03-13

3.  Incremental value of dual-energy CT to coronary CT angiography for the detection of significant coronary stenosis: comparison with quantitative coronary angiography and single photon emission computed tomography.

Authors:  Rui Wang; Wei Yu; Yongmei Wang; Yi He; Lin Yang; Tao Bi; Jian Jiao; Qian Wang; Liquan Chi; Yang Yu; Zhaoqi Zhang
Journal:  Int J Cardiovasc Imaging       Date:  2011-05-06       Impact factor: 2.357

4.  Assessing and Mitigating Bias in Medical Artificial Intelligence: The Effects of Race and Ethnicity on a Deep Learning Model for ECG Analysis.

Authors:  Peter A Noseworthy; Zachi I Attia; LaPrincess C Brewer; Sharonne N Hayes; Xiaoxi Yao; Suraj Kapa; Paul A Friedman; Francisco Lopez-Jimenez
Journal:  Circ Arrhythm Electrophysiol       Date:  2020-02-16

5.  Diagnostic Accuracy of Stress Myocardial Perfusion Imaging in Diagnosing Stable Ischemic Heart Disease.

Authors:  G Varadaraj; G S Chowdhary; R Ananthakrishnan; M J Jacob; P Mukherjee
Journal:  J Assoc Physicians India       Date:  2018-08

6.  High-performance medicine: the convergence of human and artificial intelligence.

Authors:  Eric J Topol
Journal:  Nat Med       Date:  2019-01-07       Impact factor: 53.440

7.  Screening for cardiac contractile dysfunction using an artificial intelligence-enabled electrocardiogram.

Authors:  Zachi I Attia; Suraj Kapa; Francisco Lopez-Jimenez; Paul M McKie; Dorothy J Ladewig; Gaurav Satam; Patricia A Pellikka; Maurice Enriquez-Sarano; Peter A Noseworthy; Thomas M Munger; Samuel J Asirvatham; Christopher G Scott; Rickey E Carter; Paul A Friedman
Journal:  Nat Med       Date:  2019-01-07       Impact factor: 53.440

8.  An Efficient Three-Dimensional Convolutional Neural Network for Inferring Physical Interaction Force from Video.

Authors:  Dongyi Kim; Hyeon Cho; Hochul Shin; Soo-Chul Lim; Wonjun Hwang
Journal:  Sensors (Basel)       Date:  2019-08-17       Impact factor: 3.576

9.  An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

Authors:  Xiurui Xie; Hong Qu; Guisong Liu; Malu Zhang; Jürgen Kurths
Journal:  PLoS One       Date:  2016-04-04       Impact factor: 3.240

10.  Fully Automated Echocardiogram Interpretation in Clinical Practice.

Authors:  Jeffrey Zhang; Sravani Gajjala; Pulkit Agrawal; Geoffrey H Tison; Laura A Hallock; Lauren Beussink-Nelson; Mats H Lassen; Eugene Fan; Mandar A Aras; ChaRandle Jordan; Kirsten E Fleischmann; Michelle Melisko; Atif Qasim; Sanjiv J Shah; Ruzena Bajcsy; Rahul C Deo
Journal:  Circulation       Date:  2018-10-16       Impact factor: 29.690
