Diana Mojahed1, Richard S Ha2, Peter Chang3, Yu Gan4, Xinwen Yao4, Brigid Angelini4, Hanina Hibshoosh5, Bret Taback6, Christine P Hendon4. 1. Department of Biomedical Engineering, Columbia University, New York, New York; Department of Electrical Engineering, Columbia University, New York, New York. 2. Department of Radiology, Columbia University Medical Center, 622 W 168th St, PB-1-301, New York, New York 10032. Electronic address: rh2616@columbia.edu. 3. Department of Radiological Sciences, University of California Irvine Medical Center, Orange, California. 4. Department of Electrical Engineering, Columbia University, New York, New York. 5. Department of Pathology and Cell Biology, Columbia University Medical Center, New York, New York. 6. Department of Surgery, Columbia University Medical Center, New York, New York.
Abstract
BACKGROUND: The purpose of this study was to develop a deep learning classification approach to distinguish cancerous from noncancerous regions within optical coherence tomography (OCT) images of breast tissue, for potential intraoperative use in margin assessment.
METHODS: A custom ultrahigh-resolution OCT (UHR-OCT) system with an axial resolution of 2.7 μm and a lateral resolution of 5.5 μm was used in this study. The algorithm used an A-scan-based classification scheme, and the convolutional neural network (CNN) was implemented with an 11-layer architecture consisting of serial 3 × 3 convolution kernels. Four tissue types were classified: adipose, stroma, ductal carcinoma in situ (DCIS), and invasive ductal carcinoma (IDC).
RESULTS: Binary classification of cancer versus noncancer with the proposed CNN achieved 94% accuracy, 96% sensitivity, and 92% specificity. The mean five-fold validation F1 score was highest for invasive ductal carcinoma (mean ± standard deviation, 0.89 ± 0.09) and adipose (0.79 ± 0.17), followed by stroma (0.74 ± 0.18) and ductal carcinoma in situ (0.65 ± 0.15).
CONCLUSION: It is feasible to use a CNN-based algorithm to accurately distinguish cancerous regions in OCT images. This fully automated method can overcome limitations of manual interpretation, including interobserver variability and speed of interpretation, and may enable real-time intraoperative margin assessment.
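The reported binary results follow from grouping the four tissue classes into cancer (ductal carcinoma in situ, invasive ductal carcinoma) versus noncancer (adipose, stroma) and computing accuracy, sensitivity, and specificity from the resulting confusion counts. A minimal sketch in plain Python illustrates that evaluation step; the label strings and toy data below are hypothetical and not taken from the study:

```python
# Group the four tissue classes into the binary cancer/noncancer task,
# then compute accuracy, sensitivity, and specificity from confusion counts.
CANCER = {"DCIS", "IDC"}  # ductal carcinoma in situ, invasive ductal carcinoma


def to_binary(label):
    """True if the tissue label counts as cancer, False otherwise."""
    return label in CANCER


def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on cancer), specificity (recall on noncancer)."""
    t = [to_binary(y) for y in y_true]
    p = [to_binary(y) for y in y_pred]
    tp = sum(a and b for a, b in zip(t, p))          # cancer correctly flagged
    tn = sum(not a and not b for a, b in zip(t, p))  # noncancer correctly cleared
    fp = sum(not a and b for a, b in zip(t, p))      # noncancer flagged as cancer
    fn = sum(a and not b for a, b in zip(t, p))      # cancer missed
    return {
        "accuracy": (tp + tn) / len(t),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }


# Toy per-A-scan labels (illustrative only).
truth = ["IDC", "IDC", "DCIS", "adipose", "stroma", "stroma"]
pred = ["IDC", "DCIS", "stroma", "adipose", "stroma", "IDC"]
m = binary_metrics(truth, pred)
```

Note that a DCIS A-scan predicted as IDC still counts as a true positive under the binary grouping, which is why the binary figures can exceed the per-class F1 scores.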