| Literature DB >> 33012741 |
Hiroto Yabushita1,2, Shinichi Goto1, Sunao Nakamura2, Hideki Oka1, Masamitsu Nakayama1, Shinya Goto1.
Abstract
AIM: Clinically meaningful coronary stenosis is diagnosed by trained interventional cardiologists. Whether artificial intelligence (AI) can detect coronary stenosis from coronary angiogram (CAG) video is unclear.
Keywords: Artificial intelligence; Atherosclerotic coronary stenosis; Coronary angiogram; Diagnosis
Year: 2020 PMID: 33012741 PMCID: PMC8326176 DOI: 10.5551/jat.59675
Source DB: PubMed Journal: J Atheroscler Thromb ISSN: 1340-3478 Impact factor: 4.928
Fig.1. Structure of the neural network and input data for the model
Schematic illustration of the neural network model. The input "CAG video" is converted into a 3D matrix of density in each region of interest (ROI), shown as density (D): (Tnk, Yni, Xnj), where density ranges from 0 to 255, nk from 0 to 44 frames, ni from 0 to 224, and nj from 0 to 224. Conv 3-D represents a three-dimensional convolutional neural network (CNN) layer. MaxPooling represents the downsampling layer. Global Average Pooling is the layer that calculates the average for each channel. Dense represents the fully connected layer.
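As a minimal sketch of how tensor shapes flow through the stack named in Fig. 1, the following traces the input matrix through a Conv 3-D layer, MaxPooling, and Global Average Pooling. The kernel size, filter count, padding, and pooling factor are illustrative assumptions, not values reported by the paper.

```python
# Hypothetical shape walk-through of the Fig. 1 stack; kernel=3,
# filters=32, padding=1, and pool=2 are assumed for illustration.
def conv3d_shape(shape, kernel=3, filters=32, padding=1, stride=1):
    """Output shape of a 3-D convolution over (T, Y, X, channels)."""
    t, y, x, _ = shape
    out = lambda n: (n + 2 * padding - kernel) // stride + 1
    return (out(t), out(y), out(x), filters)

def maxpool3d_shape(shape, pool=2):
    """Output shape of 3-D max pooling (the downsampling layer)."""
    t, y, x, c = shape
    return (t // pool, y // pool, x // pool, c)

shape = (45, 225, 225, 1)       # D(T, Y, X): 45 frames of 225 x 225 grayscale ROIs
shape = conv3d_shape(shape)     # Conv 3-D   -> (45, 225, 225, 32)
shape = maxpool3d_shape(shape)  # MaxPooling -> (22, 112, 112, 32)
channels = shape[-1]            # Global Average Pooling -> one value per channel (32)
```

Global Average Pooling collapses the remaining time and space axes, so the Dense (fully connected) layer only sees one averaged value per channel.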
Fig.2. Patient Selection
From a total of 1,838 CAG videos in 199 patients, 146 patients (1,359 videos) were randomly selected as the training cohort. This cohort was further split into 109 patients (989 videos) for derivation and 37 patients (370 videos) for validation. The remaining 53 patients (479 videos) formed the test dataset. The AI model was trained solely on CAG videos from training-cohort patients. Hyperparameter tuning and selection of the best model within 30 epochs were done with the validation cohort. The test cohort was used solely for testing the performance of the final model. No patient appeared in more than one cohort.
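The split described above is done at the patient level, which is what guarantees that no patient's videos appear in more than one cohort. A minimal sketch, using synthetic patient IDs and an arbitrary random seed (both assumptions, not the authors' code):

```python
import random

# Illustrative patient-level split following Fig. 2; patient IDs are
# synthetic and the seed is arbitrary.
random.seed(0)
patients = list(range(199))   # 199 patients in total
random.shuffle(patients)

training = patients[:146]     # training cohort: 146 patients
test = patients[146:]         # test cohort: 53 patients
derivation = training[:109]   # derivation set: 109 patients
validation = training[109:]   # validation set: 37 patients

# Splitting by patient (not by video) rules out patient overlap.
assert set(training).isdisjoint(test)
assert set(derivation).isdisjoint(validation)
```

Splitting by video instead would leak information, since multiple videos from the same patient could land in both training and test sets.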
Fig.3. Data Input for Generation of AI model
One frame of CAG video was considered as a matrix of 225 x 225 regions of interest (ROIs) (panel A) with grayscale values from 0 to 255 (panel B). One set of video data consists of 45 frames (panel C) of 2-dimensional images obtained every 33 milliseconds.
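The frame-stacking step in Fig. 3 can be sketched in NumPy as follows, using synthetic frames in place of real CAG data (the frame values here are random, not angiogram pixels):

```python
import numpy as np

# Sketch of the Fig. 3 input construction with synthetic frames:
# 45 grayscale frames (values 0-255) of 225 x 225 ROIs, one every 33 ms.
rng = np.random.default_rng(0)
frames = [
    rng.integers(0, 256, size=(225, 225), dtype=np.uint8)  # one 2-D frame
    for _ in range(45)
]
video = np.stack(frames, axis=0)  # density matrix D(T, Y, X): (45, 225, 225)
video = video[..., np.newaxis]    # channel axis for the 3-D CNN input
```

The stacked array corresponds to the density matrix D(Tnk, Yni, Xnj) of Fig. 1, with a trailing channel axis added for the convolutional input.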