| Literature DB >> 31089974 |
Kenneth A Philbrick, Alexander D Weston, Zeynettin Akkus, Timothy L Kline, Panagiotis Korfiatis, Tomas Sakinis, Petro Kostandy, Arunnit Boonrod, Atefeh Zeinoddini, Naoki Takahashi, Bradley J Erickson.
Abstract
Deep-learning algorithms typically fall within the domain of supervised artificial intelligence and are designed to "learn" from annotated data. Deep-learning models require large, diverse training datasets for optimal model convergence. The effort to curate these datasets is widely regarded as a barrier to the development of deep-learning systems. We developed RIL-Contour to accelerate medical image annotation for and with deep learning. A major goal driving the development of the software was to create an environment which enables clinically oriented users to utilize deep-learning models to rapidly annotate medical imaging. RIL-Contour supports fully automated deep-learning methods, semi-automated methods, and manual methods to annotate medical imaging with voxel and/or text annotations. To reduce annotation error, RIL-Contour promotes the standardization of image annotations across a dataset. RIL-Contour accelerates medical imaging annotation through the process of annotation by iterative deep learning (AID). The underlying concept of AID is to iteratively annotate, train, and utilize deep-learning models during the process of dataset annotation and model development. To enable this, RIL-Contour supports workflows in which multiple image analysts annotate medical images, radiologists approve the annotations, and data scientists utilize these annotations to train deep-learning models. To automate the feedback loop between data scientists and image analysts, RIL-Contour provides mechanisms that enable data scientists to push newly trained deep-learning models to other users of the software. RIL-Contour and the AID methodology accelerate dataset annotation and model development by facilitating rapid collaboration between analysts, radiologists, and engineers.
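A minimal, illustrative skeleton of the AID loop described in the abstract (draft with the current model, have a radiologist approve the corrected annotation, retrain, and push the new model back to analysts), assuming the analyst, radiologist, and data-scientist steps are supplied as callables; none of these names correspond to RIL-Contour's actual API.

```python
from typing import Any, Callable, Iterable, List, Optional, Tuple

def aid_loop(
    images: Iterable[Any],
    annotate: Callable[[Any, Optional[Any]], Any],  # analyst corrects a draft
    review: Callable[[Any], Any],                   # radiologist approves
    train: Callable[[List[Tuple[Any, Any]]], Any],  # data scientist retrains
    predict: Callable[[Any, Any], Any],             # current model drafts an annotation
    rounds: int = 3,
):
    """Illustrative annotation-by-iterative-deep-learning (AID) loop."""
    labeled: List[Tuple[Any, Any]] = []
    model = None                                    # no model exists before the first round
    for _ in range(rounds):
        for image in images:
            draft = predict(model, image) if model is not None else None
            labeled.append((image, review(annotate(image, draft))))
        model = train(labeled)                      # new model is pushed to analysts
    return model, labeled
```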
Keywords: Annotation by iterative deep learning (AID); Classification; Deep-learning; Medical image annotation; Segmentation; Software tools
Year: 2019 PMID: 31089974 PMCID: PMC6646456 DOI: 10.1007/s10278-019-00232-0
Source DB: PubMed Journal: J Digit Imaging ISSN: 0897-1889 Impact factor: 4.056
Fig. 1 Screenshot of (a) dataset project viewer and (b) dataset annotation window. The dataset project viewer (a) shows a representative imaging project consisting of multiple data series, each consisting of one or more imaging exams, each further consisting of imaging series datasets. Imaging with annotation data is bolded and the selected series is shown in blue. The dataset annotation window (b) shows the currently selected dataset and that dataset's annotations. Dataset slices, in reference to the primary annotation view, are shown in the slice viewer. Slice voxel annotations are indicated by the presence of one or more colors within the slice indicator. Voxel annotation along the axis perpendicular to the annotation view is shown to the right. The axis shown in the annotation view defaults to the projection with the greatest in-slice voxel resolution and can be manually selected using the orientation view projection drop-down.
Fig. 2 Screenshot of (a) ROI manager dialog window and (b) ROI editor dialog window. ROI manager (a)—all existing ROIs defined for a project are shown in the ROI manager window. ROI editor (b)—the editor window allows the user to change the name, RadLex ID, and color for any ROI.
Fig. 3 Statistics window displays descriptive statistics for the selected ROI with reference to the entire volume or the selected slice.
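As a rough illustration of per-ROI descriptive statistics of this kind, the NumPy sketch below reports voxel count, volume, and intensity statistics for a masked region, over either the whole volume or a single slice. The function name, arguments, and choice of statistics are assumptions, not RIL-Contour's implementation.

```python
import numpy as np

def roi_statistics(volume, mask, voxel_volume_mm3=1.0, slice_index=None):
    """Descriptive statistics for the voxels inside an ROI mask (illustrative).

    volume, mask: 3-D arrays of identical shape (slices, rows, cols); mask is
    boolean or 0/1. If slice_index is given, statistics are restricted to that
    slice; otherwise the whole volume is used.
    """
    if slice_index is not None:
        volume = volume[slice_index:slice_index + 1]
        mask = mask[slice_index:slice_index + 1]
    values = volume[mask.astype(bool)]
    if values.size == 0:
        return None                              # empty ROI on this slice/volume
    return {
        "voxel_count": int(values.size),
        "volume_mm3": float(values.size * voxel_volume_mm3),
        "mean": float(values.mean()),
        "std": float(values.std()),
        "min": float(values.min()),
        "max": float(values.max()),
    }
```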
Fig. 4 Example of semi-automated edge refinement of the kidney: (a) manual segmentation; (b) segmentation following semi-automated edge refinement.
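The excerpt does not describe the refinement algorithm itself. As one plausible stand-in, the scikit-image sketch below snaps a rough manual 2-D mask onto nearby image edges with a morphological geodesic active contour; the function name and parameter values are illustrative assumptions, not RIL-Contour's method.

```python
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def refine_edges(image, rough_mask, iterations=50):
    """Snap a rough manual 2-D mask onto nearby edges (illustrative stand-in)."""
    # Edge-stopping map: values approach zero near strong image gradients.
    gimage = inverse_gaussian_gradient(image.astype(float))
    refined = morphological_geodesic_active_contour(
        gimage, iterations,
        init_level_set=rough_mask.astype(np.int8),   # start from the manual mask
        smoothing=2, balloon=0)
    return refined.astype(bool)
```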
Fig. 5 Example collaborative multiuser annotation workflow illustrating the controlled annotation of an individual series (red) by multiple users. Unannotated series are assigned to the analyst group at the start of the project. Analysts acquire unannotated series for annotation from the analyst pool. Analysts can (a) return partially annotated series to the analyst pool for further editing by other analysts or (b) assign the annotated series to the reviewer pool, after which the series can no longer be acquired by an analyst. (c) Reviewer 1 acquires the series from the reviewer pool; if the annotations look correct, (d) the reviewer assigns the image to the scientist pool. Alternatively (not pictured), if the reviewer deemed the annotations poor, they could re-assign the series back to the analyst pool or to a specific analyst. (e) Scientists use the available curated dataset to train a deep-learning model. (f) The trained deep-learning model is pushed to analysts to perform draft dataset annotation, an example of implementing the AID dataset annotation methodology.
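A hypothetical sketch of the pool-based hand-off shown in Fig. 5: each series moves between analyst, reviewer, and scientist pools, and only one user holds it at a time. The class and method names are invented for illustration and do not reflect RIL-Contour's internals.

```python
from enum import Enum, auto

class Pool(Enum):
    ANALYST = auto()     # unannotated or partially annotated series
    REVIEWER = auto()    # annotated series awaiting radiologist review
    SCIENTIST = auto()   # approved series available for model training

class Series:
    def __init__(self, series_id):
        self.series_id = series_id
        self.pool = Pool.ANALYST
        self.owner = None                 # user currently editing the series

    def acquire(self, user):
        self.owner = user                 # only one user edits a series at a time

    def submit_for_review(self):
        self.pool, self.owner = Pool.REVIEWER, None   # step (b)

    def approve(self):
        self.pool, self.owner = Pool.SCIENTIST, None  # step (d)

    def reject(self):
        self.pool, self.owner = Pool.ANALYST, None    # send back for re-editing
```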
Fig. 6 Importing a deep-learning model into RIL-Contour. Metadata required to load the ML model is defined in the model wizard: (a) defines the model name and loads the model and weights (HDF5 file) and optionally defines custom Python model-loading code; (b) defines affine transformations required to transform slice input into the model; (c) defines image normalization to perform prior to model execution; and (d) links model output with custom RIL-Contour annotations.
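A minimal sketch of the kind of wrapper the import wizard configures, assuming a Keras HDF5 model. The normalization constants, the assumption that the slice already matches the model's input geometry (the affine step of Fig. 6b), and the argmax mapping of outputs to labels (Fig. 6d) are placeholders rather than RIL-Contour's actual loading code.

```python
import numpy as np
from tensorflow.keras.models import load_model

def load_and_run(model_path, image_slice, mean=0.0, std=1.0):
    """Load a Keras HDF5 model and run inference on one 2-D slice (illustrative)."""
    model = load_model(model_path)                      # Fig. 6a: model + weights
    x = (image_slice.astype(np.float32) - mean) / std   # Fig. 6c: normalization
    x = x[np.newaxis, ..., np.newaxis]                   # add batch/channel axes; assumes the
                                                         # slice already matches the model geometry
    prob = model.predict(x)[0]
    labels = np.argmax(prob, axis=-1)                    # Fig. 6d: map output classes
    return labels                                        # to ROI annotations
```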
Fig. 7 Representative interactive model visualizations generated in RIL-Contour, illustrating the regions of an image that the model strongly activated on when performing inference; (a) saliency activation map (SAM); (b) saliency map visualizations of a deep-learning model designed to classify CT contrast enhancement. Visualizations are shown using a rainbow color palette; red = high, purple = low. This visualization indicates that portions of the left and right kidney are being used by the model to identify the imaging's renal contrast-enhancement phase.
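A generic gradient-based saliency sketch (TensorFlow/Keras) of the kind of visualization described for Fig. 7: the absolute gradient of the class score with respect to each input voxel, normalized for display. This is illustrative only and is not RIL-Contour's implementation.

```python
import tensorflow as tf

def saliency_map(model, image, class_index):
    """|d(class score)/d(input)| for one preprocessed input (batch axis included)."""
    x = tf.convert_to_tensor(image, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[:, class_index]        # score of the class of interest
    grads = tape.gradient(score, x)             # sensitivity of the score to each voxel
    sal = tf.abs(grads)[0]                      # drop the batch axis
    sal = sal / (tf.reduce_max(sal) + 1e-8)     # normalize to [0, 1] for display
    return sal.numpy()
```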
Fig. 8 Annotation by iterative deep learning (AID).