Harry J Carpenter, Mergen H Ghayesh, Anthony C Zander, Jiawen Li, Giuseppe Di Giovanni, Peter J Psaltis.
Abstract
Coronary optical coherence tomography (OCT) is an intravascular, near-infrared light-based imaging modality capable of reaching axial resolutions of 10-20 µm. This resolution allows for accurate determination of high-risk plaque features, such as thin cap fibroatheroma; however, visualization of morphological features alone still provides unreliable positive predictive capability for plaque progression or future major adverse cardiovascular events (MACE). Biomechanical simulation could assist in this prediction, but this requires extracting morphological features from intravascular imaging to construct accurate three-dimensional (3D) simulations of patients' arteries. Extracting these features is a laborious process, often carried out manually by trained experts. To address this challenge, numerous techniques have emerged to automate these processes while simultaneously overcoming difficulties associated with OCT imaging, such as its limited penetration depth. This systematic review summarizes advances in automated segmentation techniques from the past five years (2016-2021) with a focus on their application to the 3D reconstruction of vessels and their subsequent simulation. We discuss four categories based on the feature being processed, namely: coronary lumen; artery layers; plaque characteristics and subtypes; and stents. Areas for future innovation are also discussed as well as their potential for future translation.
Keywords: atherosclerosis; biomechanics; border detection; coronary artery disease; optical coherence tomography; stents; vulnerable plaque
Year: 2022 PMID: 35645394 PMCID: PMC9149962 DOI: 10.3390/tomography8030108
Source DB: PubMed Journal: Tomography ISSN: 2379-1381
Figure 1. Schematic showing plaque features visible with optical coherence tomography (OCT) imaging as well as a visualization of A-lines in the cartesian and polar coordinates. The OCT images show a lipidic plaque (*) with fibrous cap, and the delineation of the three artery wall layers is shown inset in the polar image representation. The limited penetration depth can be seen behind the lipidic component, with significant attenuation preventing visualization of the backside of plaque components.
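As Figure 1 illustrates, OCT frames are acquired as A-lines over rotation angle (a polar image, rows = angle, columns = depth) and are typically displayed after conversion to cartesian coordinates. As an illustrative sketch only (not taken from any reviewed article; the function name and nearest-neighbour lookup are assumptions), the remapping could look like:

```python
import numpy as np

def polar_to_cartesian(polar_img, out_size=201):
    """Map a polar OCT frame (rows = A-lines over angle, columns = depth
    samples) onto a square cartesian grid by nearest-neighbour lookup.
    Pixels beyond the scanned depth range stay zero."""
    n_alines, n_depth = polar_img.shape
    cart = np.zeros((out_size, out_size), dtype=polar_img.dtype)
    c = (out_size - 1) / 2.0                      # image centre (catheter)
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    # radius from centre, rescaled to a depth index along the A-line
    r = np.hypot(xs - c, ys - c) * (n_depth - 1) / c
    # angle around the centre, rescaled to an A-line index
    theta = np.mod(np.arctan2(ys - c, xs - c), 2 * np.pi)
    a = np.round(theta / (2 * np.pi) * (n_alines - 1)).astype(int)
    d = np.round(r).astype(int)
    inside = d < n_depth                          # corners fall outside the scan
    cart[inside] = polar_img[a[inside], d[inside]]
    return cart
```

Many of the pipelines tabulated below run their filtering and edge detection in the polar domain (where the lumen border is a near-horizontal curve) before converting results back for display.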
Figure 2. Schematic of key components and their layout for a convolutional neural network architecture. The encoder component consists of convolution and activation functions to extract feature maps before pooling (downsampling) to the subsequent layer. The decoder up-samples feature map data before further convolutions. Skip connections allow feature map data to be passed between layers, which can reduce resolution degradation between layers and is a critical feature of the popular U-Net architecture.
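The pooling, upsampling, and skip-connection steps described in Figure 2 can be shown with a toy numpy sketch (illustrative only; a real encoder-decoder network interleaves these steps with learned convolutions, which are omitted here):

```python
import numpy as np

def max_pool2(x):
    """Encoder pooling step: 2x downsampling by taking block maxima."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample2(x):
    """Decoder step: 2x nearest-neighbour upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def skip_concat(decoder_map, encoder_map):
    """U-Net-style skip connection: stack the full-resolution encoder
    feature map onto the upsampled decoder map along a channel axis."""
    return np.stack([decoder_map, encoder_map], axis=0)

x = np.arange(16, dtype=float).reshape(4, 4)   # toy feature map
enc = max_pool2(x)            # (2, 2) bottleneck: spatial detail is lost
dec = upsample2(enc)          # back to (4, 4), but blocky
merged = skip_concat(dec, x)  # skip connection restores full-resolution detail
```

The point of the skip connection is visible in `merged`: its second channel carries the original full-resolution map that pooling alone would have destroyed.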
Figure 3. CONSORT diagram showing the review layout and the Appendix A tables for each section.
Classified articles investigating automated coronary lumen segmentation. 3D—Three-dimensional. ACC—Accuracy. ADAM—Gradient based adaptive optimization. ASSD—Average symmetric surface distance. AUC—Area under the curve. BHAT—Bhattacharya distance. BR—Bifurcation region. CK—Cohen’s kappa coefficient. CNN—Convolutional neural network. DA—Data augmentation. DICE—Dice loss coefficient. FFR—Fractional flow reserve. HD—Hausdorff distance. IVUS—Intravascular ultrasound. JS—Jaccard similarity index. KL—Kullback–Leibler divergence. MADA—Mean average difference in area. MV—Main vessel. NB—Naïve Bayes. NBR—Non-bifurcation region. NPV—Negative predictive value. OCT—Optical coherence tomography. PPV—Positive predictive value. R—Pearson’s correlation. R2—Coefficient of determination. RF—Random Forest. RMSD—Root mean square symmetric surface distance. SEN—Sensitivity. SPE—Specificity. SVM—Support vector machine. TNR—True negative ratio. TPR—True positive ratio. WSS—Wall shear stress. * Expert annotation implies an experienced researcher carried out the annotation. Articles varied their use of manual segmentation and expert annotation, and we match the description given in each article.
| First Author [Ref] | Aim | Dataset | Morphological/Filtering Operations | Feature Detection/Classification | Outcome | Comparison * |
|---|---|---|---|---|---|---|
| Akbar et al. [ | Automated lumen extraction and 3D FFR modelling | 5931 images (40 patients) | Polar transform, Bilateral smoothing filter, dilation, erosion | L- & C-mode interpolation and Sobel edge detection | R: 0.99 | Manual segmentation and individual L- and C-mode interpolation |
| Athanasiou et al. [ | Lumen detection through optimized segmentation and 3D WSS modelling | 11 patients, 613 annotated images | Polar transform, Bilateral smoothing filter | B-spline curve fit, K-means | 3D HD: 0.05 mm (±0.19) | Expert annotation and WSS results between expert annotated reconstruction |
| Balaji et al. [ | Efficient and low memory automated lumen segmentation for clinical application | 12,011 images (22 patients) | Gaussian derivative | PyTorch based deep capsules with ADAM optimizer | DICE: 0.97 ± 0.06 | Expert annotation, UNet-ResNet18, FCNResNet50 and DeepLabV3-ResNet50 |
| Cao et al. [ | Automated lumen segmentation in challenging geometries | 880 images (five patients) | Polar transform, Narrow image smoothing filter (Gaussian) | Distance regularized level set | DICE: 0.98 ± 0.01 | Manual segmentation |
| Cao et al. [ | Automatic side branch ostium and lumen detection | 4618 images (22 pullbacks) | Distance transform, differential filter | Dynamic programming | MV DICE: 0.96 | Manual segmentation |
| Cheimariotis et al. [ | Automated lumen segmentation in all image types (bifurcation, blood artefacts) | 1812 images (20 patients, 308 stented, 1504 native) | Polar transform, Median filtering, Gaussian filtering, opening, Otsu binarization, low-pass filtering | Gradient window enhancement | Stented: DICE: 0.94 | Expert annotation (area, perimeter, radius, diameter, centroid) |
| Essa et al. [ | Automatic lumen detection in OCT (and tissue characterization in IVUS) | 2303 images (13 pullbacks: Column-wise labelling 457, training 457, testing 1389) | Polar transform, A-line based dynamic tissue classification | Kalman filter based spatio-temporal segmentation method, RF | ACC: 96.27% | Expert annotation |
| Joseph et al. [ | Automated lumen contours using local transmittance-based enhancement | 8100 images (30 pullbacks, 270 images per pullback) | Polar transform, transmissivity-based mapping | Region-based level set active contour method | BR DICE: 0.78 ± 0.20 | Expert annotation |
| Macedo et al. [ | Automated lumen segmentation by morphological operations in plaque and bifurcation regions. | 1328 images (nine pullbacks, 141 BR, 1188 NBR) | Polar transform, Bilateral filtering, Otsu thresholding, Erosion/dilation | Sobel edge detection, Distance transform based automatic contour correction | NBR MADA: 0.19 ± 0.13 mm2 | Manual segmentation |
| Miyagawa et al. [ | Automated detection and outline of bifurcation regions | 2460 images (Nine patients, 157 BR, 1204 NBR, 1099 DA) | Global thresholding, closing, Hough transform | Four CNNs, three with transfer learning from lumen detection | ACC: 98.00 ± 1.00% | Expert annotation |
| Pociask et al. [ | Automated lumen segmentation | 667 images | Polar transform, Gaussian & Savitzky–Golay filtering, opening/closing | Linear interpolation | Relative difference in lumen area: 1.12% (1.55–0.68%) | Manual segmentation |
| Roy et al. [ | Random walks automatic segmentation of the lumen | Patients: six in vivo, 15 in vitro. 150–300 frames per patient | Polar transform, | Random walks based on edge weights and backscattering tracking | CK: 0.98 ± 0.01 | Expert annotation |
| Tang et al. [ | Automated lumen extraction using N-Net CNN | 20,000 images (400 for training from manual annotation) | | N-Net CNN with cross entropy loss function | ACC: 98.00 ± 0.00% | Expert annotation of 400 images |
| Yang et al. [ | Automated lumen extraction in abnormal lumen geometries | 14,207 images (54 patients) | Polar transform, Gaussian filtering | Active contour model, Gray-level co-occurrence matrix, SVM, AdaBoost, J48, RF, NB, Bagging | DICE: 0.98 ± 0.01 | Expert annotation on 1541 images |
| Yong et al. [ | Automated lumen extraction using linear regression CNN | 19,027 images (64 pullbacks, 28 patients) | Polar transform, | Linear regression CNN | Location accuracy: 22 µm | Expert annotation on 19 pullbacks (5685 images) |
| Zhao et al. [ | Automated lumen extraction using morphological operations | 268 images | Polar transform, Median filtering, Otsu binarization, closing/opening | | DICE: 0.99 | Expert annotation |
| Zhu et al. [ | Automated lumen segmentation to overcome blood artefacts | 216 images with blood artefacts (from 1436 images, 6 patients) | Polar transform, Gaussian filtering, adaptive block binarization, erosion/area opening | Connected A-line region filtering with bicubic interpolation and quadratic regression smoothing | DICE: 0.95 | Morphological only, dynamic programming, manual segmentation |
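Most of the studies in the table above report agreement with manual or expert annotation via the Dice coefficient (DICE). For reference, a minimal implementation for binary masks (the empty-mask convention is an assumption; articles rarely state it):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

For example, two 4x4 masks that each cover two rows but overlap on only one row yield a Dice score of 0.5, whereas identical masks score 1.0.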
Classified articles investigating automated artery layer segmentation. ACC—Accuracy. APe—Adventitia-peri-adventitial tissue border error. CNN—Convolutional neural network. DICE—Dice loss coefficient. IMe—Intima-media border error. IVUS—Intravascular ultrasound. JS—Jaccard similarity index. MADA—Mean absolute difference in area. MAe—Media-adventitia border error. OCT—Optical coherence tomography. R2—Coefficient of determination. RF—Random Forest. SEN—Sensitivity. SPE—Specificity. SVM—Support vector machine. * Results shown for the outer wall segmentation.
| First Author [Ref] | Aim | Dataset | Morphological Operations | Feature Detection/Classification | Outcome | Comparison |
|---|---|---|---|---|---|---|
| Abdolmanafi et al. [ | Automated intima and media classification in pediatric patients | 4800 regions of interest (26 patients) | | CNN (AlexNet), RF, SVM | CNN ACC: 97.00 ± 4.00% | Manual segmentation |
| Chen et al. [ | Automated wall morphology change analyses in heart transplant patients | 43,873 images (100 pullbacks, 50 patients) | | Caffe framework, LOGISMOS, Sobel edge detector | R2: 0.96 | Expert annotation |
| Haft-Javaherian et al. [ | Automated lumen, intima and media classification in polarization-sensitive OCT | 984 images (57 patients) | | CNN based on U-Net and deep residual learning model, combination of five loss functions | DICE *: 0.99 | Expert annotation and traditional OCT |
| Olender et al. [ | Automated delineation of outer elastic membrane using mechanical approach | 724 images (seven patients) | Contrast enhancement, image compensation, median filtering | Sobel-Feldman edge detection, anisotropic linear elastic mesh force balance | MADA: 0.93 mm2 (±0.84) | Expert annotation and IVUS |
| Pazdernik et al. [ | Automated wall morphology change analyses in heart transplant patients | 50 patients (~25,000 co-registered images) | | LOGISMOS | R2: 0.99 | Expert annotation |
| Zahnd et al. [ | Automatic segmentation of the three layers of the healthy coronary artery wall | 40 patients (400 classified images, 140 training, 260 validation) | Erosion, dilation | AdaBoost, front propagation scheme with cumulative cost function | DICE: 0.93 | Expert annotation |
Classified articles investigating automated plaque classification and segmentation. ACC—Accuracy. ADAM—Gradient based adaptive optimization. AFPDEFCM—Fourth-order PDE-based fuzzy c-means. ANN—Artificial neural network. AP—Average precision. AUC—Area under the curve. CNN—Convolutional neural network. CRF—Conditional random field. DA—Data augmentation. DB—Dual binary classifier. DICE—Dice loss coefficient. EEL—External elastic lamina. F1—F1-score. FC—Fibrocalcific plaque. FCM—Fuzzy c-means. FCN—Fully convolutional network. FRSCGMM—Fast and robust spatially constrained Gaussian mixture model. GMM—Gaussian mixture model. GMM-SMSI—GMM with spatial pixel saliency map. HEM—Hard example mining. HER—Healed erosion/rupture. MCR—Misclassification ratio. MIoU—Mean intersection over union. FIoU—Frequency weighted intersection over union. mRMR—Minimal-redundancy-maximal relevance. PB—Plaque burden. PIT—Pathological intimal thickening. PRE—Precision. PRI—Probabilistic Rand Index. REC—Recall. RF—Random Forest. SEN—Sensitivity. SMM—Student’s-t mixture model. SPE—Specificity. SVM—Support vector machine. TCFA—Thin-cap fibroatheroma. VH-IVUS—Virtual histology intravascular ultrasound. VOI—Volume of interest. * Overall classification accuracy for fibrous, lipid and background tissue. ** Mean values for presented algorithm, see text for other comparison metrics. ^ Results for the final contraction plus expansion CNN. ^^ Results for overall pathological tissue detection.
| First Author [Ref] | Aim | Dataset | Morphological Operations | Feature Detection/Classification | Outcome | Comparison |
|---|---|---|---|---|---|---|
| Abdolmanafi et al. [ | Tissue characterization in Kawasaki disease | 8910 images (33 pullbacks) | Polar transform | RF (AlexNet, VGG-19 & Inception-V3) & majority voting | ACC ^^: 99.00 ± 1.00% | Expert annotation |
| Abdolmanafi et al. [ | Tissue characterization in Kawasaki disease | 5040 images (45 pullbacks) | Polar transform | FCN, RF (VGG-19) | ACC ^^: 96.00 ± 4.00% | Expert annotation |
| Abdolmanafi et al. [ | Automatic plaque tissue classification | 41 pullbacks (~200 images per pullback) | | FCN (ResNet), ADAM optimizer | ACC: 93.00 ± 10.00% | Manual segmentation |
| Avital et al. [ | Deep learning-based calcification classification | 8000 images (540 frames for training) | | U-Net | ACC: 99.03 ± 9.00% | Manual segmentation |
| Cheimariotis et al. [ | Four-way plaque type classification | 183 images (33 patients) | Polar transform, Median filtering, Gaussian filtering, opening, Otsu binarization, low-pass filtering (ARC-OCT) | CNN (AlexNet), ADAM optimizer with attenuation coefficient | A-line transformed ACC: 83.47% | Manual segmentation |
| Gerbaud et al. [ | Plaque burden measurement with enhancement algorithm | 42 patients (96 pullbacks), 200 IVUS-OCT matched images | Adaptive attenuation compensation, frame averaging | | Mean difference. | Expert annotation and IVUS |
| Gessert et al. [ | Plaque detection and segmentation with multi-path architecture | 4000 images (49 patients) | Polar & cartesian | CNN | ACC: 91.70% | Expert annotation |
| Gharaibeh et al. [ | Classification and segmentation of lumen and calcification | 2640 images (34 pullbacks) | Polar transform, log-transform, Gaussian filtering | CNN (SegNet) & CRF | Calcific: | Manual segmentation |
| He et al. [ | Automatic classification of calcification | 4860 images (18 pullbacks) | Polar transform | CNN (ResNet-3D & 2D), cross-entropy loss, ADAM optimizer | PRE: 96.90 ± 1.30% | Manual segmentation |
| Huang et al. [ | Fibrous, calcific and lipidic tissue classification | 28 images (11 patients) | Polar transform, Otsu thresholding | SVM (RF feature selection) | ACC: 83.00% | Manual segmentation |
| Isidori et al. [ | Automated lipid core burden index assessment | Training: 23 patients. Testing: 40 patients | | CNN | SEN: 90.50% | Expert annotation and NIRS-IVUS |
| Kolluru et al. [ | CNN classification of plaque types (fibro-calcific and fibro-lipidic) | 4469 images (48 pullbacks) | Log transform, Gaussian filtering | CNN and ANN | ACC: 77.7% ± 4.1% for fibro-calcific, 86.5% ± 2.3% for fibro-lipid and 85.3% ± 2.5% for others | Expert annotation and ANN |
| Kolluru et al. [ | Reduce number of training images needed for deep learning | 3741 images (60 VOIs from 41 pullbacks) | Log transform, Gaussian filtering | U-Net, Image subset selection through deep-feature clustering and k-medoids algorithm | Clustering outperforms equal spacing methods for sparse annotations (F1: 0.63 vs. 0.52, AP: 66% vs. 50%) | Expert annotation |
| Lee et al. [ | Hybrid learning approach to classify fibro-lipidic and fibro-calcific tissue | 6556 images | Polar transform, Gaussian filtering | CNN (ADAM optimizer) & RF with hybrid learning approach, CRF & dynamic programming | Fibro-lipidic: | Manual segmentation, pre & post noise cleaning and active learning |
| Lee et al. [ | Automatic lipid/calcium characterization comparison | 4892 images (57 pullbacks, 55 patients) | Polar transform, non-local mean filtering | CNN (SegNet VGG16), Deeplab 3+, dynamic programming | Manual segmentation, pixel-wise vs. A-line | |
| Lee et al. [ | Fully automated 3D calcium segmentation and reconstruction | 8231 images (68 patients) 4320 ex vivo images (four cadavers) | Polar transform, Gaussian filtering, opening & closing | 3D CNN & | SEN: 97.70% | Manual segmentation, one-step approach |
| Li et al. [ | Segmentation of vulnerable plaque regions | 2000 images (50% vulnerable plaque) | Polar transform | Deep Residual U-Net | ACC: 93.31% | Manual segmentation, prototype U-Net; VGG16, ResNet50, |
| Liu et al. [ | Automated fibrous plaque detection | 1000 images | Polar & Hough transform | CNN (VGG16) | ACC ^: 94.12% | Expert annotation, SSD, YOLO-V3 |
| Liu et al. [ | Vulnerable plaque detection | 2000 training images, 300 testing images, data augmentation | Polar transform, erosion/dilation, de-noising | Deep CNN (Adaboost, YOLO, SSD, Faster R-CNN) | PRE: 88.84% | Manual segmentation |
| Liu et al. [ | Classification of six tissue types: mixed, calcification, fibrous, lipid-rich, macrophages, necrotic core | 135 images (ex vivo) | Polar transform, median filtering | Attenuation, backscatter, intensity | Attenuation and backscatter can differentiate six tissue types | Expert annotation & histology |
| Prabhu et al. [ | Detection of fibro-lipidic and fibro-calcific A-lines | 6556 in vivo images (49 pullbacks), 440 ex vivo images (10 pullbacks) | Polar transform, texture features from Leung–Malik filter bank | RF, SVM, DB, mRMR, binary Wilcoxon & CRF | ACC: 81.58% | Expert annotation |
| Rico-Jimenez et al. [ | Automated tissue characterization with A-line features | 513 images | Polar transform, entropy & frost filter | Linear Discriminant Analysis | ACC: 88.20% | Manual segmentation |
| Rico-Jimenez et al. [ | Macrophage infiltration detection | 28 ex vivo coronary segments | | Normalized-intensity standard deviation ratio | ACC: 87.45% | Manual segmentation and histological evaluation |
| Shibutani et al. [ | Automated plaque characterization in ex vivo sections | 1103 histological cross sections (45 autopsied hearts) | | CNN | FC AUC: 0.91 | Expert annotation and histological evaluation |
| Wang et al. [ | Fibrotic plaque area segmentation | 20 images (nine patients) | Adaptive diffusivity | Log-likelihood function of Gaussian mixture model (GMM) | MCR **: 0.65 ± 0.66 | Manual segmentation, GMM, FCM, SMM, FRSCGMM, AFPDEFCM, GMM-SMSI |
| Yang et al. [ | Automatic classification of plaque (fibrous, calcific and lipid-rich) | 1700 images (20 pullbacks, nine patients) | Mean filtering, graph-cut method | SVM (C-SVC) with HEM training, K-means, radial basis function | ACC: 96.80 ± 0.02% | Manual segmentation |
| Zhang et al. [ | Automated fibrous cap thickness quantification and plaque classification | 18 images (two patients, 1008 images after DA) | | CNN (U-Net), CNN (FC-DenseNet), SVM | U-Net ACC *: 95.40% | Manual segmentation guided by VH-IVUS |
| Zhang et al. [ | Comparison of automated lipid, fibrous and background tissue segmentation | 77 images (five patients) | | CNN (U-Net based architecture) and SVM; focal loss function, local binary patterns, gray-level co-occurrence matrices | CNN ACC *: 94.29% | Manual segmentation guided by VH-IVUS |
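Several of the studies in the table above characterize tissue from per-A-line optical properties such as attenuation and backscatter. As a hedged sketch only (this specific estimator, its assumption of near-complete attenuation within the imaging range, and the variable names are illustrative, not taken from any cited article), a depth-resolved attenuation estimate can be computed by dividing each pixel's intensity by the integral of the remaining A-line tail:

```python
import numpy as np

def depth_resolved_attenuation(aline, dz=0.005):
    """Illustrative depth-resolved attenuation estimate for one A-line:
    mu[i] ≈ I[i] / (2 * dz * sum(I[i+1:])), valid when the beam is almost
    fully attenuated within the imaging range. dz is the pixel depth in mm;
    mu is returned per mm."""
    aline = np.asarray(aline, dtype=float)
    # suffix sums: tail[i] = sum of I[i+1:] (cumsum of the reversed A-line)
    tail = np.cumsum(aline[::-1])[::-1] - aline
    mu = np.zeros_like(aline)
    valid = tail > 0
    mu[valid] = aline[valid] / (2.0 * dz * tail[valid])
    return mu
```

On a synthetic A-line decaying as exp(-2*mu*z), the estimate recovers a roughly constant mu at shallow depths, which is the property such methods exploit to separate, e.g., highly attenuating lipid from weakly attenuating calcium.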
Classified articles investigating automated stent segmentation. 3D—Three-dimensional. ADAM—Gradient based adaptive optimization. ANN—Artificial neural network. AP—Average precision. ASSD—Average symmetric surface distance. AUC—Area under the curve. CCC—Concordance-correlation-coefficient. CFD—Computational fluid dynamics. CT—Computed tomography. DA—Data augmentation. DICE—Dice loss coefficient. F1—F1-score. FPR—False positive ratio. JS—Jaccard similarity index. MADA—Mean average difference in area. OCT—Optical coherence tomography. PPV—Positive predictive value. PRE—Precision. R2—Coefficient of determination. REC—Recall. SEN—Sensitivity. SPE—Specificity. SVM—Support vector machine. TPR—True positive ratio. * Results for the best outcome are shown in the Table; please refer to the article for detailed inter/intra-observer variability and method comparisons.
| First Author [Ref] | Aim | Dataset | Morphological Operations | Feature Detection/Classification | Outcome | Comparison |
|---|---|---|---|---|---|---|
| Bologna et al. [ | Automated lumen contour and stent strut selection for 3D reconstruction | 1150 images (23 pullbacks) | Thresholding, opening, closing, nonlinear filtering | Sobel edge detection | Lumen: | Manual segmentation |
| Cao et al. [ | Automatic stent segmentation and malapposition evaluation | 4065 images (12,550 struts, 15 pullbacks) | | Cascade AdaBoost classifier, dynamic programming | DICE: 0.81 | Expert annotation |
| Chiastra et al. [ | Stent strut and lumen contour detection through OCT and micro-CT | Eight stented bifurcation phantom arteries (in vitro), four in vivo patients | Polar transform, opening, thresholding | Sobel edge detection | Stent *: | Manual segmentation |
| Elliot et al. [ | Automated 3D stent reconstruction through OCT and micro-CT | 2156 images, four stented phantom arteries (in vitro) | Polar transform | A-line intensity profile, peak intensity, number of peaks | ASSD: 184 ± 96 µm | Manual segmentation |
| Jiang et al. [ | Automatic segmentation of metallic stent struts | 165 images, 1200 after DA (10 pullbacks) | | YOLOv3 (binary cross-entropy loss) and region-based fully-convolutional network (R-FCN), Darknet53 | YOLOv3 vs. R-FCN | Manual segmentation; comparison between the two classifiers |
| Junedh et al. [ | Automation of polymeric stent strut segmentation | 1140 images (15 patients) | Polar transform, bilateral filter | K-means | R2: 0.88 | Expert annotation |
| Lau et al. [ | Segmentation of metallic and bioresorbable vascular scaffolds | 51 pullbacks (27 patients), 13,890 training images, 3909 test images | | U-Net with combined | DICE *: 0.86 | Manual segmentation |
| Lu et al. [ | Automatic classification of covered/uncovered stents | 7125 images (39,000 covered struts, 16,500 uncovered struts, 80 pullbacks) | Polar transform | SVM (LIBSVM), bagged decision trees classifier, pixel patch method, mesh growing, active learning relabeling | SPE: 94.00 ± 3.00% | Expert annotation |
| Lu et al. [ | Development of automated OCT image visualization and analysis toolkit for stents | (292 pullbacks) | Polar transform | SVM (LIBSVM), bagged decision trees classifier, pixel patch method, mesh growing, active learning relabeling | Lumen CCC: 0.99 | Expert annotation |
| Migliori et al. [ | Framework for automated stent segmentation and lumen reconstruction for CFD simulation | 540 images, one phantom (in vitro) | Polar transform, intensity/area thresholding | Fuzzy logic, Sobel edge detection and linear interpolation | Stent *: | Manual segmentation of 95 images |
| Nam et al. [ | Automatic stent apposition and neointimal coverage analysis | 5420 images (20 pullbacks) | Polar transform, Gaussian smoothing | ANN, image gradient and intensity | PPV: 95.60% | Manual segmentation on 800 images |
| O’Brien et al. [ | Enhanced stent and lumen 3D reconstruction for CFD simulation | Four swine pullbacks | | Decision tree, ramp edge detection | Lumen (62 frames) MADA: 0.42 ± 0.13 mm2 | Manual segmentation |
| Wu et al. [ | Automated stent strut detection in multiple stent designs | Training: 10,417 images (60 pullbacks) | Polar transform, Manual training mask | U-Net based deep convolutional model (ADAM optimizer, binary cross-entropy and Tversky loss functions) | DICE: 0.91 ± 0.04 | Expert annotation and QIvus v3.1 (Medis Medical Imaging System BV, Leiden, The Netherlands) |
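Several of the stent studies above evaluate strut apposition and neointimal coverage. A simplified, illustrative per-strut check along a single A-line (the function name, the 0.08 mm default strut thickness, and the zero-gap threshold are assumptions for illustration, not values from any cited article; real analyses compensate for strut design and blooming artefacts):

```python
def strut_malapposition(strut_depth_mm, lumen_depth_mm, strut_thickness_mm=0.08):
    """Illustrative malapposition check along one A-line: the strut surface
    sits at strut_depth_mm from the catheter and the lumen border at
    lumen_depth_mm. The strut is flagged malapposed when the gap between
    its abluminal edge and the vessel wall exceeds zero."""
    gap = lumen_depth_mm - (strut_depth_mm + strut_thickness_mm)
    return gap, gap > 0.0
```

For example, a strut surface 1.0 mm from the catheter with the lumen border at 1.2 mm leaves a 0.12 mm gap behind the strut and would be flagged, whereas a strut embedded against the wall would not.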
Figure 4. Visualization of the bifurcation identification method. (A) Original OCT image with bifurcation present. (B) Contour detection around lumen and branch. (C) Distance transform and the determined main vessel and side vessel centroids. (D) Final segmented image. (E) Detection of the side branch ostium location. (F) Normal vectors to the contour surface (red) and vectors pointing to the main vessel center (green). © [2017] IEEE. Reprinted, with permission, from [76].
Figure 5. A comparison between the proposed DeepCap model and two manually annotated reconstructions (H1 and H2). The proposed model agrees well with both manual reconstructions, with the 3D lumen surface visualizing the radius measured from the lumen centroids and the graph showing the cross-sectional area along the length of the vessel. The automated DeepCap segmentation was able to process the 200-image pullback in just 0.8 s on a GPU (19 s on CPU). Reprinted from [93], with permission from Elsevier.
Figure 6. Results obtained from both the automatic method (blue contours) and expert annotation (red contours) in PS-OCT images, with the automatic method showing robustness in difficult cases, including: (A) Thick calcium (GA) and near-wall blood residue (YA); (B) Fuzzy guidewire artefacts near the lumen boundary (GA) and side branch outside the main vessel wall (YA); (C) Changes in bright/dark tissue patterns at the outer boundary (GA) and side branch within the artery wall; (D) Lipidic (YA) and fibrous tissue (GA); (E) Side branch close to the outer wall (GA) and blood contrast near the lumen (YA); (F) Discontinuous outer wall (YA) segmentation still closely resembles expert annotation (GA); (G) Lipidic (YA) and fibrous thickening of the artery wall (GA); (H) Significant blood artefacts from improper flushing (both arrows); (I) Side branch connecting to the wall region (YA) and catheter touching the lumen wall (GA). Reprinted from [110], with permission, under the Creative Commons. YA = yellow arrow; GA = green arrow.
Figure 7. Outline of the surface fitting technique using four different spring stiffnesses (blue, green, yellow, and red) fitted either to visible sections of the outer elastic membrane or the detected lumen contour. Nodes (black circles) were connected to adjacent nodes within the image frame as well as both proximal and distal frames. Gray arrows represent the applied forces proportional to the sum of A-line pixel intensities. The surface fitting and force-balance optimization was carried out across the entire pullback (j direction) to generate a smooth and continuous outer wall over the entire artery section. © [2019] IEEE. Reprinted, with permission, from [113].
Figure 8. Visualization of the proof-of-concept automated segmentation and 3D rendering results for calcific (A) and lipidic (D) plaques. The original images and the corresponding automated segmentation for the calcific lesion and the fibrous cap over the lipid component are shown in (B,E) and (C,F), respectively. Reprinted from [115], with permission, under the Creative Commons.
Figure 9. Layout of the dual-path ResNet model for automated extraction, making use of both the cartesian and polar image representations. Points Cc represent varying concatenation locations which were assessed for the two paths. © [2019] IEEE. Reprinted, with permission, from [130].
Figure 10. Visualization of the five major calcified lesions (yellow arrows) after 3D reconstruction and comparison between the manually annotated ground truth (A) and the automated method (B). Reprinted from [169], with permission, under the Creative Commons.
Figure 11. Patches used to extract features for uncovered, thinly covered, and thickly covered struts. Side patches (orange) capture continuity of the tissue, while the green, blue, red, and purple patches highlight the front, middle, stent strut and backside pixel regions, respectively. Reprinted from [182], with permission, under the Creative Commons.
Figure 12. Layout of the presented model for stent strut segmentation. (A) The pseudo-3D polar image stack and manually annotated strut mask were taken as inputs. (B) Strut segmentation model composed of a start module, six encode and decode modules and an end module. (C) The predicted strut map including orientation, width, and position of struts. Reprinted from [175], with permission, under the Creative Commons.
Figure 13. Automatically generated 3D stented artery model. (A) OCT contours (blue) and stent struts (red) placed along the 3D centerline (black). (B) Generated 3D surface model. (C) Wall shear stress resulting from CFD simulation. Reprinted from [64], with permission, under the Creative Commons.
Figure 14. Framework layout for the automated reconstruction and 3D structural simulation of an artery. Initial OCT images were stacked to form a pseudo-3D image sequence before classification with a CNN and generation of label maps, which were subsequently smoothed into contours to generate the digital phantom, which was converted to a finite element mesh for structural simulation. Republished with permission of The Royal Society Publishing, from [206]; permission conveyed through Copyright Clearance Center, Inc.