Julia P Owen, Marian Blazes, Niranchana Manivannan, Gary C Lee, Sophia Yu, Mary K Durbin, Aditya Nair, Rishi P Singh, Katherine E Talcott, Alline G Melo, Tyler Greenlee, Eric R Chen, Thais F Conti, Cecilia S Lee, Aaron Y Lee.
Abstract
This work explores a student-teacher framework that leverages unlabeled images to train lightweight deep learning models with fewer parameters for fast automated detection of optical coherence tomography (OCT) B-scans of interest. Twenty-seven lightweight models (LWMs) from four model families were first trained on expert-labeled B-scans (∼70 K), each labeled "abnormal" or "normal", to establish baseline performance. The LWMs were then retrained from random initialization in a student-teacher framework that incorporated a large number of unlabeled B-scans (∼500 K), with a pre-trained ResNet50 model serving as the teacher network. The ResNet50 teacher achieved 96.0% validation accuracy, while the validation accuracy of the LWMs ranged from 89.6% to 95.1%. The best-performing LWMs were 2.53 to 4.13 times faster than ResNet50 (0.109 s to 0.178 s vs. 0.452 s). All LWMs benefited from enlarging the training set with unlabeled B-scans in the student-teacher framework, with several models reaching validation accuracy of 96.0% or higher. The three best-performing models achieved sensitivity and specificity comparable to the teacher network on two hold-out test sets. We demonstrated the effectiveness of a student-teacher framework for training fast LWMs for automated detection of B-scans of interest by leveraging unlabeled, routinely available data.

Entities:
Year: 2021 PMID: 34692189 PMCID: PMC8515993 DOI: 10.1364/BOE.433432
Source DB: PubMed Journal: Biomed Opt Express ISSN: 2156-7085 Impact factor: 3.732
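The student-teacher scheme summarized in the abstract can be sketched in a few lines: a trained teacher assigns pseudo-labels to a large unlabeled pool, and a smaller student is then trained on the labeled and pseudo-labeled data combined. The sketch below is a toy, dependency-free illustration of that idea, not the paper's method: the "teacher" is a stand-in threshold rule rather than a ResNet50, the "student" is a one-parameter logistic regression rather than a lightweight CNN, and the sample sizes only mimic the labeled/unlabeled ratio.

```python
import math
import random

random.seed(0)

def teacher_predict(x):
    """Stand-in 'teacher': calls a scan 'abnormal' (1) if its feature x > 0."""
    return 1 if x > 0.0 else 0

def train_student(xs, ys, lr=0.5, epochs=300):
    """Train a tiny logistic-regression 'student' by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid output
            grad_w += (p - y) * x
            grad_b += (p - y)
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

# Small expert-labeled set (analogous to the ~70 K labeled B-scans).
x_labeled = [random.gauss(0, 1) for _ in range(50)]
y_labeled = [teacher_predict(x) for x in x_labeled]

# Large unlabeled pool (analogous to the ~500 K B-scans),
# pseudo-labeled by the teacher.
x_unlabeled = [random.gauss(0, 1) for _ in range(500)]
y_pseudo = [teacher_predict(x) for x in x_unlabeled]

# The student trains on labeled + pseudo-labeled data combined.
w, b = train_student(x_labeled + x_unlabeled, y_labeled + y_pseudo)

# On held-out data, the student should closely match the teacher's decisions.
x_test = [random.gauss(0, 1) for _ in range(200)]
agreement = sum(
    (1 if w * x + b > 0 else 0) == teacher_predict(x) for x in x_test
) / len(x_test)
```

The design choice this illustrates is the one the abstract reports: the student never needs expert labels for the large pool, only the teacher's outputs, which is why routinely available unlabeled scans can enlarge the effective training set.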