Sang Kyun Yoo, Tae Hyung Kim, Jaehee Chun, Byong Su Choi, Hojin Kim, Sejung Yang, Hong In Yoon, Jin Sung Kim.
Abstract
Recently, several efforts have been made to develop deep learning (DL) algorithms for automatic detection and segmentation of brain metastases (BM). In this study, we developed an advanced DL model for BM detection and segmentation, especially for small-volume BM. From the institutional cancer registry, contrast-enhanced magnetic resonance images of 65 patients with 603 BM were collected to train and evaluate our DL model. Of the 65 patients, 12 patients with 58 BM were assigned to the test set for performance evaluation. Ground truth for BM was established by one radiation oncologist, who manually delineated the BM, and cross-checked by another. Unlike previous studies, our study dealt with relatively small BM, so the area occupied by each BM in the high-resolution images was small. We applied training techniques such as the overlapping patch technique and 2.5-dimensional (2.5D) training to the well-known U-Net architecture to better learn small BM. As the DL architecture, a 2D U-Net trained in a 2.5D fashion was utilized. For better efficiency and accuracy of the two-dimensional U-Net, we applied effective preprocessing including the 2.5D overlapping patch technique. Sensitivity and average false positive rate were measured as detection performance; their values were 97% and 1.25 per patient, respectively. The Dice coefficient with dilation and the 95% Hausdorff distance were measured as segmentation performance; their values were 75% and 2.057 mm, respectively. Our DL model can detect and segment small-volume BM with good performance, providing considerable benefit for clinicians through automatic detection and segmentation of BM for stereotactic ablative radiotherapy.
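The abstract's segmentation metrics (Dice coefficient, 95% Hausdorff distance) can be sketched with NumPy/SciPy. The function names and the voxel-based surface-distance approximation below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance in mm (voxel-based approximation)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Distance from every voxel to the nearest foreground voxel of the other mask,
    # scaled by the physical voxel spacing so the result is in millimetres.
    d_to_gt = distance_transform_edt(~gt, sampling=spacing)
    d_to_pred = distance_transform_edt(~pred, sampling=spacing)
    dists = np.concatenate([d_to_gt[pred], d_to_pred[gt]])
    return float(np.percentile(dists, 95))
```

Passing the scanner's voxel spacing via `sampling` keeps the distances in physical units, which is what an HD95 of 2.057 mm implies.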
Keywords: autosegmentation; brain metastases; convolutional neural network; deep learning; magnetic resonance imaging; stereotactic ablative radiotherapy
Year: 2022 PMID: 35626158 PMCID: PMC9139632 DOI: 10.3390/cancers14102555
Source DB: PubMed Journal: Cancers (Basel) ISSN: 2072-6694 Impact factor: 6.575
Figure 1. Examples of T1Gd images with BM. The left image shows randomly selected samples of large-volume BM. The right image shows randomly selected samples of small-volume BM with multiple metastases. In each image, yellow arrows indicate the BM.
Patient Characteristics.
| Variables | Total (65) | Train (+Valid) Set (53) | Test Set (12) | p-Value |
|---|---|---|---|---|
| Age (years) | ||||
| Median (range) | 63 (19–87) | 63 (19–87) | 63 (26–81) | 0.869 |
| Sex | | | | 1 |
| Male | 35 (54) | 29 (55) | 6 (50) | |
| Female | 30 (46) | 24 (45) | 6 (50) | |
| Primary cancer | | | | 0.604 |
| Lung | 56 (86) | 45 (85) | 11 (92) | |
| Breast | 4 (6) | 4 (7) | - | |
| Others | 5 (8) | 4 (8) | 1 (8) | |
| Total number of BM | 603 | 545 | 58 | |
| >0.04 cc | 458 (76) | 414 (76) | 44 (76) | |
| ≤0.04 cc | 145 (24) | 131 (24) | 14 (24) | |
| Volumes of BM (cc) | ||||
| Max | 67.426 | 67.426 | 1.219 | |
| Min | 0.02 | 0.02 | 0.021 | |
| Median | 0.074 | 0.074 | 0.068 | |
| Mean | 0.552 | 0.592 | 0.158 |
Figure 2. 2D U-Net with preprocessing and postprocessing, which is effective for small volumes. Preprocessing composed of bias field correction (A), the 2.5D overlapping patch technique (B), and random gamma correction (C) is applied to the MR image. Postprocessing is applied to the prediction from the 2D U-Net to obtain the results. Cropping the 1024 × 1024 image into overlapping 128 × 128 patches is effective for small volumes, and adding 2 slices each above and below the reference slice configures 5 channels that reflect volume information (D). In step 1, a cropped patch is formed by sliding a 64 × 64 size patch from left to right and top to bottom of the slice. In step 2, the process of step 1 is applied to all slices of the patient individually. Finally, in step 3, the patches with the same x-y coordinates across 5 slices are stacked along the z-axis.
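One plausible reading of the patch procedure above, assuming 128 × 128 patches slid with a 64-pixel stride (i.e., 50% overlap) and 2 context slices on each side of the reference slice, can be sketched in NumPy. The function name and defaults are illustrative assumptions:

```python
import numpy as np

def extract_25d_patches(volume, patch_size=128, stride=64, context=2):
    """Sketch of the 2.5D overlapping patch technique from Figure 2.

    volume: (Z, H, W) MR volume. For each reference slice, the `context`
    slices above and below are stacked along the channel axis (5 channels
    for context=2), and each stack is tiled into overlapping
    patch_size x patch_size patches by sliding with the given stride.
    Returns an array of shape (num_patches, 2*context+1, patch_size, patch_size).
    """
    z, h, w = volume.shape
    # Pad along z so edge slices still get `context` neighbours on each side.
    padded = np.pad(volume, ((context, context), (0, 0), (0, 0)), mode="edge")
    patches = []
    for ref in range(z):
        # Stack the reference slice with its +/- context slices -> 5 channels.
        stack = padded[ref:ref + 2 * context + 1]          # (5, H, W)
        for y in range(0, h - patch_size + 1, stride):
            for x in range(0, w - patch_size + 1, stride):
                patches.append(stack[:, y:y + patch_size, x:x + patch_size])
    return np.stack(patches)
```

For a 1024 × 1024 slice, these defaults give 15 × 15 = 225 overlapping patches per slice, each with 5 channels carrying the through-plane context.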
Figure 3. Three-dimensional visualization of the results. The first row represents an example of a case where false positives were not found, and all 11 BM were detected. In the second row, all 4 BM were detected, but 2 false positives were found. Each column represents the ground-truth gross tumor volume delineated manually (A), true positives from prediction (B), and false positives from prediction (C).
Figure 4. Locations of the false positives. (A) Delineation created outside the brain; can be addressed with skull-stripping. (B) Delineation created in a structure with high intensity; can be addressed with extensive gamma correction. (C) Delineation created in the superior sagittal sinus; can be addressed with a black-blood sequence. In each image, red within the yellow bounding box indicates false positives.
Deep Learning Performance in Test-Set Patients according to the Number of Metastases.
| No. of BM Per Patient | No. of Patients | Sensitivity [%] |
|---|---|---|
| ≥10 | 3 | 93.5 |
| <10 | 9 | 100 |
| Total | 12 | 96.6 |
Summary of Detection and Segmentation Performance.
| Volume (cc) | No. of BM | Sensitivity [%] | No. of FPs | DICE | DWD (Dice with Dilation) | HD95 [mm] |
|---|---|---|---|---|---|---|
| >0.1 | 24 | 100 | 4 | 0.64 | 0.8 | 2.502 |
| ≤0.1 | 34 | 94.1 | 11 | 0.48 | 0.72 | 1.724 * |
| 0.08–0.1 | 1 | 100 | 2 | 0.82 | 0.9 | 1 |
| 0.06–0.08 | 10 | 100 | 2 | 0.53 | 0.78 | 1.979 |
| 0.04–0.06 | 9 | 100 | 2 | 0.56 | 0.75 | 1.608 |
| 0.02–0.04 | 14 | 85.7 | 5 | 0.38 | 0.63 | 1.689 * |
| Total | 58 | 96.6 | 15 | 0.55 | 0.75 | 2.057 |
* HD95 was calculated except for BM that were not detected.
Figure 5. Examples of the ground-truth delineation and predicted delineation. The yellow bounding box in the first row indicates the area where false positives occurred. The second row shows the manual delineation of the tumor volume and the manual delineation dilated to 3 mm. The third row shows the predicted delineation from DL and the predicted delineation dilated to 3 mm. In the second and third rows, red and green indicate manual and predicted delineation, respectively. The last row shows the 3D rendering overlapping manual and predicted delineation, and the 3D rendering dilated to 3 mm each. In the last row, rendering in red is the predicted delineation and rendering in yellow is the manual delineation.
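The "Dice with dilation" (DWD) metric suggested by the 3 mm dilation in Figure 5 can be sketched as follows. Treating DWD as Dice computed after dilating both masks by a fixed margin is an assumption about the exact definition, and the `binary_dilation` iteration count only approximates the margin in voxels:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dice(a, b):
    """Plain Dice similarity coefficient for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def dice_with_dilation(pred, gt, margin_mm=3.0, voxel_mm=1.0):
    """Dice computed after dilating both masks by ~margin_mm (assumed DWD definition)."""
    n = max(1, int(round(margin_mm / voxel_mm)))  # margin expressed in voxels
    return dice(binary_dilation(pred.astype(bool), iterations=n),
                binary_dilation(gt.astype(bool), iterations=n))
```

Dilating both masks forgives small boundary disagreements, which is why DWD (0.75) exceeds the plain Dice (0.55) for the same predictions, particularly on sub-0.1 cc lesions where a one-voxel offset dominates the overlap.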
Comparison between Our Study and Other Studies.
| Authors | Median Vol. of BM [cc] | Sensitivity [%] | Avg. No. of FPs | DICE |
|---|---|---|---|---|
| Losch, M. et al. | NA | 82.8 | 7.7 | 0.66 |
| Charron, O. et al. | 0.5 | 93 | 4.4 | 0.79 |
| Xue, J. et al. | 2.22 | 96 | NA | 0.85 |
| Grovik, E. et al. | NA | 83 | 3.4 | 0.79 |
| Dikici, E. et al. | 0.16 | 90 | 9.12 | NA |
| Bousabarah, K. et al. | 0.31 (train)/0.47 (test) | NA | NA | 0.71 |
| Our study | 0.074 (train)/0.068 (test) | 96.6 | 1.25 | 0.55 |
For our study, DWD is 0.75 and HD95 is 2.057 mm.