Literature DB >> 35548028

Segmentation of Organs and Tumor within Brain Magnetic Resonance Images Using K-Nearest Neighbor Classification.

S A Yoganathan1, Rui Zhang1,2.   

Abstract

Purpose: To fully exploit the benefits of magnetic resonance imaging (MRI) for radiotherapy, it is desirable to develop segmentation methods that delineate patients' MRI images quickly and accurately. The purpose of this work is to develop a semi-automatic method to segment organs and tumor within the brain on standard T1- and T2-weighted MRI images. Methods and Materials: Twelve brain cancer patients were retrospectively included in this study, and a simple rigid registration was used to align all the images to the same spatial coordinates. Regions of interest were created for organ and tumor segmentation. The K-nearest neighbor (KNN) classification algorithm was used to characterize the knowledge of previous segmentations using 15 image features (T1 and T2 image intensities, 4 Gabor-filtered images, 6 image gradients, and 3 Cartesian coordinates), and the trained models were used to predict organ and tumor contours. The Dice similarity coefficient (DSC), normalized surface dice, sensitivity, specificity, and Hausdorff distance were used to evaluate segmentation performance.
Results: Our semi-automatic segmentations matched the ground truths closely. The mean DSC value was between 0.49 (optic chiasm) and 0.89 (right eye) for organ segmentations and was 0.87 for tumor segmentation. The overall performance of our method is comparable or superior to previous work, and the accuracy of our semi-automatic segmentation is generally better for large-volume objects.
Conclusion: The proposed KNN method can accurately segment organs and tumor using standard brain MRI images, provides fast and accurate image processing and planning tools, and paves the way for clinical implementation of MRI-guided radiotherapy and adaptive radiotherapy. Copyright:
© 2022 Journal of Medical Physics.

Keywords:  Brain cancer; K-nearest neighbor; machine learning; magnetic resonance imaging; radiotherapy; segmentation

Year:  2022        PMID: 35548028      PMCID: PMC9084578          DOI: 10.4103/jmp.jmp_87_21

Source DB:  PubMed          Journal:  J Med Phys        ISSN: 0971-6203


INTRODUCTION

Magnetic resonance imaging (MRI) offers superior soft-tissue contrast compared to computed tomography (CT) and is often an indispensable imaging modality for brain radiotherapy planning. Because MRI does not reveal electron density information, CT images are used to define the attenuation characteristics while MRI images are used for soft-tissue contouring. Usually, CT and corresponding MRI images are registered to exploit their complementary benefits. With the development of the integrated MRI-linear accelerator (MRI-linac),[1] MRI-guided radiotherapy is emerging as a highly promising technique because the MRI-linac offers high-quality, real-time anatomic and physiologic imaging, which would allow treatment monitoring, tracking, online adaptive radiotherapy (ART), and tumor response assessment throughout the treatment course. Although MRI-based tracking and online ART are exciting, it is quite challenging to handle large numbers of daily images, especially the organ and tumor delineations, which are usually done manually by experienced dosimetrists or physicians. Manual segmentation is very time-consuming, is subjective with inter-observer variability, and can be the bottleneck for online tracking and ART. To fully exploit the benefits of MRI guidance, it is desirable to develop segmentation methods that delineate daily MRI images quickly and accurately. Most of the previous auto-segmentation work related to radiotherapy dealt with CT images only,[2-9] while most MRI-based auto-segmentation studies[10-18] were not related to radiotherapy but instead focused on classifying brain tissues or structures such as gray matter, white matter, cerebrospinal fluid, thalamus, and ventricles for neuroimaging purposes. This is because brain MRI has been the standard tool for the diagnosis and treatment of neurological and psychiatric disorders.
Auto-segmentation of organs at risk (OARs) on MRI for radiotherapy is understudied; possible reasons are that small and narrow organs can be much more challenging to segment than large tissues and structures, and that CT has been the default choice for OAR segmentation in radiotherapy. However, a patient who receives MRI-based radiotherapy, and especially online ART, will only have daily MRI images, so fast and accurate segmentation of OARs on MRI is critical. In this study, we developed a segmentation approach based on the K-nearest neighbor (KNN) machine learning algorithm. We chose the KNN algorithm because it has only two hyperparameters (the K value and the distance metric), which are very easy to tune,[19] is fast and simple, is robust to noise and missing values in the data, and works well for multiclass problems such as multiple-tissue segmentation.[20] Multiple investigators have used KNN for MRI segmentation. For example, Anbeek et al. used KNN in a series of studies[21-24] to segment white matter lesions, white matter, central gray matter, cortical gray matter, cerebrospinal fluid, ventricles, and multiple sclerosis lesions in cranial MRI, and they showed that KNN-based segmentation is an automatic and accurate approach that is applicable to standard clinical MRI. Mazzara et al.[25] compared a semi-automated KNN method with a fully automatic knowledge-guided method for gross tumor volume segmentation on MRI images (T1, proton density weighted, and T2) of glioma patients. The semi-automated KNN method required a manual selection of a region of interest (ROI) on each MRI slice for training, whereas the automatic knowledge-guided method did not require any manual intervention.
They found that the KNN method performed better (average accuracy 56% ± 6%) than the knowledge-guided method (52% ± 7%). Steenwijk et al.[26] improved KNN classification of white matter lesions in MRI by optimizing intensity normalization and using spatial tissue type priors, which showed excellent performance. Most of the previous studies used the KNN algorithm for brain tissue classification unrelated to radiotherapy, while using KNN to segment OARs for radiotherapy remains underinvestigated. The purpose of this work is to develop a KNN machine learning method to segment OARs and tumor within the brain using standard MRI sequences, i.e., T1 and T2 images. Gabor filter-derived features were used in our work to improve the performance of the KNN model segmentation. Evaluation of our segmentation results and comparison with previous studies were also performed.

MATERIALS AND METHODS

Image data

MRI data of 12 brain cancer patients were anonymized[27] and included in this study. MRI consisted of T1- and T2-weighted images acquired on a 1.5 Tesla Philips Intera scanner using three-dimensional (3D) gradient echo sequences with the following acquisition parameters: TE/TR = 3.414/7.33 ms, flip angle = 8°, voxel size 0.983 × 0.983 × 1.1 mm³, field of view 236 mm × 236 mm × 158.4 mm, and pixel bandwidth 241 Hz/pixel. MRI images in Digital Imaging and Communications in Medicine (DICOM) format were converted to Neuroimaging Informatics Technology Initiative format (.nii) using open-source image analysis software (3D Slicer, version 4.9, Slicer Community, USA).[28] MRI bias correction was applied using the N4ITK MRI bias correction module available within 3D Slicer with the following parameters: a B-spline order of 3, a B-spline grid resolution of (1, 1, 1), a shrink factor of 4, a maximum of 100 iterations at each of the 3 resolution levels, and a convergence threshold of 0.0001. MRI images were resampled to a 1 mm × 1 mm × 1 mm voxel size, and the T1 and T2 MRI images were rigidly aligned. MRI intensity variation across patients was standardized by a normalization process, which consisted of matching the intensity histograms of all patient images to the histogram of a randomly selected T1/T2 image.
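As an illustration of the histogram-matching normalization step described above, the following Python sketch uses scikit-image's match_histograms on synthetic stand-in volumes; the array shapes and intensity distributions are placeholders, not the study's data:

```python
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(0)
# synthetic stand-ins for two patients' T1 volumes
# (real data would be loaded from the converted .nii files)
reference = rng.gamma(2.0, 50.0, size=(32, 32, 32))  # randomly selected reference image
moving = rng.gamma(4.0, 30.0, size=(32, 32, 32))     # another patient's image

# map the moving image's intensity histogram onto the reference histogram,
# so corresponding intensity percentiles line up across patients
normalized = match_histograms(moving, reference)
```

After matching, the normalized volume has (approximately) the same intensity distribution as the reference, which is the standardization effect the paper describes.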

Feature extraction

In addition to the original T1 and T2 image intensity values, the following image features were derived to train KNN models: local energy and mean amplitude images based on Gabor filters, and gradient images (Gx, Gy, Gz). It was shown that including spatial information improved learning algorithm accuracy,[24,29] so Cartesian coordinates originating from the center of the whole-brain T1/T2 image were also included. In summary, a total of 15 image features were used for training: 6 features (1 original intensity, local energy and mean amplitude images based on Gabor filters, and 3 gradients) for each of T1 and T2 MRI, plus 3 Cartesian coordinates. There are certain advantages in using Gabor filters in MRI segmentation: for example, the noise in MRI can be smoothed by the Gaussian kernel in the Gabor filter, and the Gabor filter also enables accurate extraction of edge features.[30] A Gabor filter extracts multiple narrow frequency and orientation signals from textured images. A two-dimensional (2D) Gabor filter in the spatial domain is defined as a Gaussian kernel function modulated by a sinusoidal wave and can be written mathematically as follows:[31]

g(x, y) = \exp\!\left(-\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2}\right)\cos\!\left(2\pi f x' + \phi\right), \qquad x' = x\cos\theta + y\sin\theta, \quad y' = -x\sin\theta + y\cos\theta

where f is the sinusoidal wave frequency, γ is the spatial aspect ratio, which specifies the ellipticity of the support of the Gabor function, σ is the standard deviation of the Gaussian envelope, ϕ is the phase offset, and θ represents the direction of the normal to the parallel stripes of the Gabor function. Using the method in the literature,[31,32] we calculated a total of 40 2D Gabor filters [Figure 1] with 5 scales and 8 orientations for a 3 × 3 pixel window size. The Gabor filter is a frequency- and orientation-selective filter with a Gaussian envelope. The scale channels help capture a specific band of frequency components and scale the magnitude of the Gaussian envelope, and the orientation channels are used to extract directional features from the MRI images.
Figure 1

Gabor filters used in this study. Total 40 two-dimensional filters were calculated with 5 scales and 8 orientations for pixel window 3 × 3. The colors are used to show the difference in scale and orientation

The original T1 and T2 MRI images were convolved with each Gabor filter (real part), resulting in 40 different representations of each MRI image. These 40 response images were then converted to feature images, namely local energy (ψ) and mean amplitude (A):[33]

\psi = \sum_{i=1}^{40} (I \otimes G_i)^2, \qquad A = \frac{1}{40}\sum_{i=1}^{40} \left| I \otimes G_i \right|

where I is the MRI image, G_i is the i-th Gabor filter, and the symbol ⊗ represents the convolution operation. Directional gradient images of the T1 and T2 images along the x-axis (right-left), y-axis (anterior-posterior), and z-axis (inferior-superior) were also created using the Sobel gradient operator.[34] Further, three Cartesian coordinates x, y, and z were derived from the center of the whole-brain T1/T2 image. As the T1 and T2 images were registered, they shared the same spatial coordinates.
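The Gabor feature extraction above can be sketched as follows. This is an illustrative Python version using scikit-image's gabor filter; the exact frequency ladder for the 5 scales is an assumption, since the paper's filter parameters are not fully specified:

```python
import numpy as np
from scipy import ndimage
from skimage.filters import gabor

img = np.random.default_rng(0).random((32, 32))  # stand-in for one MRI slice

responses = []
for scale in range(5):                 # 5 scales (this frequency ladder is assumed)
    frequency = 0.1 * (scale + 1)
    for k in range(8):                 # 8 orientations, evenly spaced over pi
        theta = k * np.pi / 8
        real, _imag = gabor(img, frequency=frequency, theta=theta)
        responses.append(real)         # keep the real part of each response
responses = np.stack(responses)        # 40 response images

# reduce the 40 responses to two feature images
local_energy = np.sum(responses**2, axis=0)            # psi: sum of squared responses
mean_amplitude = np.mean(np.abs(responses), axis=0)    # A: mean absolute response

# Sobel gradient images along each in-plane axis
gx = ndimage.sobel(img, axis=1)
gy = ndimage.sobel(img, axis=0)
```

In the study these features would be computed per 3D volume and stacked with the raw intensities and Cartesian coordinates to form the 15-feature vector per voxel.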

Organ and tumor segmentations

The reference class labels for the right eye, right lens, right optic nerve, left eye, left lens, left optic nerve, brain stem, optic chiasm, and tumor had been contoured manually on all training patients by dosimetrists in our clinic. To improve the efficiency and accuracy of the KNN models, a 3D ROI was used to extract the image region containing only a particular organ or tumor, with a certain margin, for training. The generation of the ROI for each OAR was automated using an atlas-based approach: the MRI images of a reference patient were selected, and individual ROIs were manually created with a 2-cm margin around the organs. The training patients' and any test patient's MRI images were affine registered with the reference patient, and the ROIs were transferred from the reference patient to the training patients and the test patient. The ROI for tumor was selected manually because the tumor position differed for each patient; it was created as a region containing the tumor by visual observation, with an approximately 2-cm external margin. The aforementioned 15 image features were used as predictor variables in the KNN models. Eight separate KNN binary models were trained to contour each OAR inside its ROI region, and one KNN binary model was generated to contour the tumor inside the manually selected ROI region. Only features extracted within the ROI were used for training and prediction, which reduces computational complexity and improves KNN model performance. The KNN prediction classifies each pixel within the ROI (as in semantic segmentation), and the final predicted segmentation patch of an OAR is reshaped to match the full 3D MRI image. The KNN classifier was trained in MATLAB (MathWorks, Natick, MA, USA). Our initial evaluation of various K values and distance metrics showed that a K value of 50 and the Euclidean distance were the best-suited parameters for this segmentation study. The workflows for OAR and tumor segmentations are shown in Figure 2.
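The per-OAR binary KNN classification can be sketched with scikit-learn standing in for the MATLAB classifier used in the study; the feature vectors below are synthetic placeholders, while K = 50 and the Euclidean metric follow the paper:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# synthetic stand-in: 15 features per voxel, sampled inside one OAR's ROI
X_train = rng.normal(size=(2000, 15))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)  # 1 = organ, 0 = background

# K = 50 with Euclidean distance, as selected in the paper
knn = KNeighborsClassifier(n_neighbors=50, metric="euclidean")
knn.fit(X_train, y_train)

X_new = rng.normal(size=(100, 15))   # feature vectors of a new patient's ROI voxels
labels = knn.predict(X_new)          # per-voxel binary segmentation inside the ROI
```

One such model would be trained per OAR (eight in total) plus one for the tumor, each seeing only voxels inside its own ROI.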
Figure 2

Workflows for (a) organs at risk and (b) tumor segmentations

The overall architecture of our segmentation method is as follows: in order to bring all the images to a common coordinate system, the T1 and T2 MRI images of all patients are registered with the respective images of a randomly selected reference patient, and the ROIs of individual OARs are transferred from the reference patient to the other patients. The image region within an ROI is input into the OAR-specific KNN model, which predicts the particular OAR segmentation. Finally, the predicted individual OARs are combined to form a 3D segmentation matrix. A similar process is followed for tumor segmentation of a test patient, except the ROIs are created manually instead of using the atlas-based approach.
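Combining the per-OAR predictions into a single 3D segmentation matrix can be illustrated as below; the label values, ROI positions, and binary prediction patches are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
volume = np.zeros((32, 32, 32), dtype=np.uint8)  # final 3D segmentation matrix

# hypothetical per-OAR outputs: (label value, ROI location as slices, binary prediction patch)
oar_results = [
    (1, (slice(0, 8), slice(0, 8), slice(0, 8)), rng.random((8, 8, 8)) > 0.5),
    (2, (slice(12, 22), slice(12, 22), slice(12, 22)), rng.random((10, 10, 10)) > 0.5),
]

for label, roi, mask in oar_results:
    patch = volume[roi]   # basic slicing returns a view into the full volume
    patch[mask] = label   # write the OAR label back at its ROI position
```

Each predicted patch is thereby placed back at its ROI coordinates, reshaping the ROI-level predictions to the full 3D MRI grid as described above.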

Evaluation

Performance of the trained models was evaluated using a "leave-one-out cross-validation" approach: N-1 patients (training) were used to train the KNN model, segmentations were predicted for the one remaining patient (validation), and this procedure was repeated for all possible combinations of training and validation data. The Dice similarity coefficient (DSC)[35] and normalized surface dice (NSD)[36] were used to evaluate the accuracy of the segmentation results:

\mathrm{DSC} = \frac{2\,|V_t \cap V_p|}{|V_t| + |V_p|}

where V_t and V_p are the true and predicted segmented volumes, and

\mathrm{NSD} = \frac{|S_t \cap B_p^{(\tau)}| + |S_p \cap B_t^{(\tau)}|}{|S_t| + |S_p|}

where S_t and S_p are the surfaces of the true and predicted segmentations, and B_t^{(τ)} and B_p^{(τ)} are the border regions of the true and predicted segmentation surfaces at a tolerance τ. The tolerance value τ was chosen as 1 mm for small-volume OARs such as the lens, optic nerves, and optic chiasm, and 3 mm for the remaining larger volumes. For tumor, a tolerance value τ of 3 mm was used. DSC and NSD values range from 0 to 1, and a higher value indicates better segmentation performance. Sensitivity and specificity[37] were also used for evaluation:

\mathrm{Sensitivity} = \frac{TP}{TP + FN}, \qquad \mathrm{Specificity} = \frac{TN}{TN + FP}

where TP represents true positives (the intersection between the predicted segment and the reference segment), FN represents false negatives (parts of the reference segment not covered by the predicted segment), TN represents true negatives (pixels correctly detected as background), and FP represents false positives (parts of the predicted segment not covered by the reference segment). We also used the Hausdorff distance (HD) to measure the boundary similarity between the true and predicted segmentations; it quantifies the maximum distance from a point in X (predicted) to the nearest point in Y (true):

\mathrm{HD} = \max\{h(X, Y),\, h(Y, X)\}     (10)

h(X, Y) = \max_{a \in X} \min_{b \in Y} \lVert a - b \rVert     (11)
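The evaluation metrics above can be sketched directly from their definitions. This illustrative Python version computes DSC, sensitivity, and HD on toy masks; NSD is omitted for brevity, and the masks' voxel coordinate sets stand in for surface points in the HD computation:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(truth, pred):
    """DSC = 2|Vt ∩ Vp| / (|Vt| + |Vp|)."""
    t, p = truth.astype(bool), pred.astype(bool)
    return 2.0 * np.logical_and(t, p).sum() / (t.sum() + p.sum())

def sensitivity(truth, pred):
    """TP / (TP + FN): fraction of the reference segment that is covered."""
    t, p = truth.astype(bool), pred.astype(bool)
    return np.logical_and(t, p).sum() / t.sum()

def hausdorff(truth, pred):
    """HD = max{h(X, Y), h(Y, X)} over the masks' coordinate sets."""
    x, y = np.argwhere(pred), np.argwhere(truth)
    return max(directed_hausdorff(x, y)[0], directed_hausdorff(y, x)[0])

# toy 4x4 masks: two 8-pixel strips overlapping in a 2x2 corner
truth = np.zeros((4, 4), dtype=bool); truth[:2, :] = True
pred = np.zeros((4, 4), dtype=bool); pred[:, :2] = True
print(dice(truth, pred))         # 2*4 / (8+8) = 0.5
print(sensitivity(truth, pred))  # 4 / 8 = 0.5
print(hausdorff(truth, pred))    # 2.0
```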

RESULTS

Figure 3 shows the comparison of organ segmentations with the ground truth for a typical patient. Table 1 shows DSC, NSD, and sensitivity values, and Table 2 shows HD values for organ segmentations. Specificity values are always 1 and are not included in the tables. Our method generates slightly poorer results for small organs such as eye lens, optic chiasm, and optic nerve.
Figure 3

Comparison of segmentation of organs at risk (eyes, eye lens, optic nerves, optic chiasm, and brain stem) on different slices for patient number 8. Top row: Original magnetic resonance imaging. Bottom row: Segmented magnetic resonance imaging with solid lines representing the ground truth and dashed lines representing K-nearest neighbor predictions

Table 1

Dice similarity coefficient, normalized surface dice, and sensitivity values for organs at risk segmentations

| Patient | Metric | Right eye | Right lens | Right ON | Left eye | Left lens | Left ON | BS | OPC |
|---|---|---|---|---|---|---|---|---|---|
| 1 | DSC | 0.90 | 0.64 | 0.69 | 0.88 | 0.34 | 0.81 | 0.84 | 0.50 |
| 1 | NSD | 0.82 | 0.68 | 0.73 | 0.80 | 0.46 | 0.80 | 0.72 | 0.54 |
| 1 | Sensitivity | 0.92 | 0.52 | 0.55 | 0.99 | 0.22 | 1.00 | 0.88 | 0.46 |
| 2 | DSC | 0.88 | 0.82 | 0.83 | 0.84 | 0.86 | 0.86 | 0.85 | 0.60 |
| 2 | NSD | 0.89 | 0.67 | 0.72 | 0.89 | 0.67 | 0.80 | 0.69 | 0.48 |
| 2 | Sensitivity | 0.80 | 1.00 | 0.75 | 0.74 | 1.00 | 0.81 | 0.88 | 0.45 |
| 3 | DSC | 0.86 | 0.83 | 0.88 | 0.88 | 0.93 | 0.74 | 0.85 | 0.33 |
| 3 | NSD | 0.87 | 0.73 | 0.73 | 0.87 | 0.74 | 0.73 | 0.66 | 0.39 |
| 3 | Sensitivity | 0.98 | 0.96 | 0.96 | 1.00 | 0.95 | 0.89 | 0.86 | 0.24 |
| 4 | DSC | 0.83 | 0.90 | 0.82 | 0.88 | 0.84 | 0.86 | 0.89 | 0.72 |
| 4 | NSD | 0.93 | 0.72 | 0.71 | 0.93 | 0.73 | 0.73 | 0.69 | 0.61 |
| 4 | Sensitivity | 0.99 | 0.89 | 0.84 | 0.97 | 0.98 | 0.95 | 0.96 | 0.63 |
| 5 | DSC | 0.84 | 0.50 | 0.70 | 0.75 | 0.37 | 0.72 | 0.90 | 0.71 |
| 5 | NSD | 0.85 | 0.66 | 0.70 | 0.74 | 0.61 | 0.70 | 0.67 | 0.67 |
| 5 | Sensitivity | 1.00 | 0.34 | 0.91 | 0.97 | 0.24 | 0.81 | 0.99 | 0.60 |
| 6 | DSC | 0.94 | 0.88 | 0.80 | 0.93 | 0.86 | 0.77 | 0.85 | 0.40 |
| 6 | NSD | 0.86 | 0.68 | 0.67 | 0.85 | 0.67 | 0.66 | 0.63 | 0.41 |
| 6 | Sensitivity | 0.92 | 0.94 | 0.87 | 0.92 | 0.82 | 0.86 | 0.96 | 0.36 |
| 7 | DSC | 0.90 | 0.51 | 0.60 | 0.92 | 0.93 | 0.23 | 0.91 | 0.32 |
| 7 | NSD | 0.82 | 0.63 | 0.63 | 0.88 | 0.69 | 0.31 | 0.66 | 0.44 |
| 7 | Sensitivity | 0.98 | 0.34 | 0.45 | 0.91 | 0.91 | 0.13 | 0.90 | 0.22 |
| 8 | DSC | 0.93 | 0.87 | 0.62 | 0.89 | 0.88 | 0.73 | 0.87 | 0.65 |
| 8 | NSD | 0.87 | 0.71 | 0.73 | 0.87 | 0.71 | 0.74 | 0.66 | 0.58 |
| 8 | Sensitivity | 0.95 | 0.98 | 0.50 | 0.97 | 1.00 | 0.85 | 0.89 | 0.56 |
| 9 | DSC | 0.93 | 0.70 | 0.80 | 0.94 | 0.77 | 0.85 | 0.90 | 0.68 |
| 9 | NSD | 0.83 | 0.69 | 0.76 | 0.84 | 0.71 | 0.75 | 0.67 | 0.62 |
| 9 | Sensitivity | 0.94 | 0.99 | 0.76 | 0.97 | 1.00 | 0.83 | 0.97 | 0.76 |
| 10 | DSC | 0.86 | 0.76 | 0.80 | 0.89 | 0.80 | 0.72 | 0.88 | 0.34 |
| 10 | NSD | 0.85 | 0.66 | 0.69 | 0.85 | 0.73 | 0.69 | 0.64 | 0.43 |
| 10 | Sensitivity | 0.98 | 0.75 | 0.79 | 0.99 | 0.83 | 0.95 | 0.90 | 0.23 |
| 11 | DSC | 0.92 | 0.73 | 0.75 | 0.90 | 0.85 | 0.69 | 0.90 | 0.30 |
| 11 | NSD | 0.85 | 0.68 | 0.67 | 0.85 | 0.72 | 0.70 | 0.64 | 0.12 |
| 11 | Sensitivity | 0.98 | 0.59 | 0.79 | 0.99 | 0.76 | 0.54 | 0.87 | 0.02 |
| 12 | DSC | 0.90 | 0.38 | 0.71 | 0.91 | 0.43 | 0.77 | 0.91 | 0.39 |
| 12 | NSD | 0.81 | 0.55 | 0.69 | 0.81 | 0.58 | 0.71 | 0.64 | 0.48 |
| 12 | Sensitivity | 1.00 | 0.24 | 0.98 | 0.99 | 0.27 | 0.80 | 0.94 | 0.36 |
| Mean±SD | DSC | 0.89±0.04 | 0.71±0.17 | 0.75±0.09 | 0.88±0.05 | 0.74±0.22 | 0.73±0.17 | 0.88±0.03 | 0.49±0.17 |
| Mean±SD | NSD | 0.85±0.03 | 0.67±0.05 | 0.70±0.04 | 0.85±0.05 | 0.67±0.08 | 0.69±0.13 | 0.66±0.02 | 0.48±0.14 |
| Mean±SD | Sensitivity | 0.95±0.06 | 0.71±0.29 | 0.76±0.18 | 0.95±0.07 | 0.75±0.31 | 0.79±0.24 | 0.92±0.04 | 0.41±0.21 |

DSC: Dice similarity coefficient, NSD: Normalized surface dice, ON: Optic nerve, BS: Brain stem, OPC: Optic chiasm, SD: Standard deviation

Table 2

Hausdorff distance (mm) for organs at risk segmentations

| Patient | Right eye | Right lens | Right ON | Left eye | Left lens | Left ON | BS | OPC |
|---|---|---|---|---|---|---|---|---|
| 1 | 3.2 | 4.1 | 3.7 | 5.1 | 5.9 | 2.0 | 7.8 | 7.5 |
| 2 | 7.2 | 1.4 | 5.5 | 7.2 | 1.4 | 1.4 | 8.9 | 8.1 |
| 3 | 2.4 | 1.4 | 1.4 | 2.2 | 1.0 | 2.8 | 7.9 | 8.6 |
| 4 | 3.5 | 1.0 | 5.5 | 2.0 | 1.4 | 1.4 | 6.3 | 5.7 |
| 5 | 6.2 | 2.8 | 6.2 | 8.9 | 3.7 | 7.0 | 4.4 | 6.7 |
| 6 | 2.2 | 1.4 | 4.5 | 5.1 | 2.2 | 4.6 | 7.2 | 8.8 |
| 7 | 5.9 | 4.1 | 4.1 | 4.9 | 1.0 | 9.1 | 7.6 | 7.2 |
| 8 | 2.2 | 2.2 | 4.6 | 3.0 | 1.4 | 3.7 | 7.3 | 9.3 |
| 9 | 2.4 | 2.0 | 4.4 | 2.2 | 1.7 | 6.3 | 9.2 | 5.5 |
| 10 | 5.7 | 4.0 | 3.2 | 4.7 | 2.2 | 2.8 | 8.1 | 9.5 |
| 11 | 3.6 | 2.2 | 4.1 | 2.8 | 2.2 | 5.1 | 9.4 | 9.4 |
| 12 | 6.0 | 5.1 | 2.8 | 5.0 | 5.0 | 2.4 | 6.0 | 8.2 |
| Mean±SD | 4.2±1.8 | 2.7±1.4 | 4.2±1.3 | 4.4±2.1 | 2.4±1.6 | 4.1±2.4 | 7.5±1.4 | 7.9±1.4 |

ON: Optic nerve, BS: Brain stem, OPC: Optic chiasm, SD: Standard deviation

Figures 4 and 5 show the comparison of tumor segmentation on axial, sagittal, and coronal planes for the best and worst cases, and Table 3 shows DSC, NSD, sensitivity, and HD values for tumor segmentations. Specificity values are always 1 and are not included in the table.
Figure 4

Comparison of segmentation of tumor on different planes for patient number 9. Top row: Original magnetic resonance imaging. Bottom row: Segmented magnetic resonance imaging with solid red lines representing the ground truth and green dashed lines representing K-nearest neighbor predictions

Figure 5

Comparison of segmentation of tumor on different planes for patient number 2. Top row: Original magnetic resonance imaging. Bottom row: Segmented magnetic resonance imaging with solid red lines representing the ground truth and green dashed lines representing K-nearest neighbor predictions

Table 3

Dice similarity coefficient, normalized surface dice, sensitivity, and Hausdorff distance (mm) values for tumor segmentation

| Patient | DSC | NSD | Sensitivity | HD |
|---|---|---|---|---|
| 1 | 0.86 | 0.91 | 0.76 | 2.2 |
| 2 | 0.81 | 0.69 | 1.00 | 9.8 |
| 3 | 0.91 | 0.81 | 1.00 | 3.3 |
| 4 | 0.93 | 0.92 | 0.98 | 1.4 |
| 5 | 0.95 | 0.98 | 0.92 | 1.4 |
| 6 | 0.72 | 0.70 | 0.95 | 6.8 |
| 7 | 0.89 | 0.84 | 0.97 | 2.2 |
| 8 | 0.86 | 0.75 | 0.80 | 6.4 |
| 9 | 0.94 | 0.97 | 0.93 | 1.4 |
| 10 | 0.91 | 0.79 | 0.96 | 2.2 |
| 11 | 0.78 | 0.93 | 0.73 | 4.4 |
| 12 | 0.87 | 0.84 | 0.82 | 3.2 |
| Mean±SD | 0.87±0.07 | 0.84±0.10 | 0.90±0.10 | 3.7±2.7 |

DSC: Dice similarity coefficient, NSD: Normalized surface dice, SD: Standard deviation, HD: Hausdorff distance

Table 4 compares our study with previous automatic MRI segmentation studies using brain tumor patients.
Table 4

Comparison of dice similarity coefficient values in current work with previous studies for segmentation of organs at risks and tumor within the brain

| Study | Patients | Method and images | BS | ON | OPC | Eyes | Eye lens | Tumor |
|---|---|---|---|---|---|---|---|---|
| Isambert et al.[38] | 11 | Atlas-based; MRI | 0.85 (0.80-0.88) | 0.38 (0.4-0.53) | 0.41 (0-0.58) | 0.81 (0.78-0.85) | - | - |
| Deeley et al.[10] | 20 | Combination of atlas-based registration and atlas-navigated optimal medial axis and deformable model; MRI + CT | 0.83±0.06 | 0.52±0.14 | 0.37±0.18 | 0.84±0.07 | - | - |
| Agn et al.[39] | 70 | Atlas-based model for normal brain structure segmentation and convolutional restricted Boltzmann machine model for tumor segmentation; MRI + CT | 0.86 | 0.56 | 0.39 | 0.86 | - | 0.67 |
| Egger et al.[11] | 27 | Balloon inflation force method; MRI | - | - | - | - | - | 0.81±0.074 |
| Egger et al.[11] | 27 | Graph-based method; MRI | - | - | - | - | - | 0.83±0.082 |
| Demirhan et al.[12] | 20 | Self-organizing map; MRI | - | - | - | - | - | 0.56±0.27 |
| Liu et al.[15] | 36 | Supervoxel clustering; MRI | - | - | - | - | - | 0.86±0.09 |
| Narayana et al.[18] | 1008 | Deep learning based on convolutional neural network; MRI | - | - | - | - | - | 0.86±0.016 |
| Havaei et al.[40] | 70 | KNN; MRI | - | - | - | - | - | 0.80-0.85 |
| Current study | 12 | KNN; MRI | 0.88±0.03 | 0.74±0.13 | 0.50±0.17 | 0.89±0.04 | 0.72±0.19 | 0.87±0.07 |

BS: Brain stem, ON: Optical nerve, OPC: Optic chiasm, CT: Computed tomography, MRI: Magnetic resonance imaging, KNN: K-nearest neighbor

The KNN model training takes approximately 40 min for organs and 15 min for tumor on a laptop with a 2.5 GHz Intel i5 processor (Intel Corp., Santa Clara, CA, USA) and 8 GB of random-access memory, and the model training is required only once. Automatic segmentation of a new patient takes approximately 6 min (4 min for organs and 2 min for tumor).

DISCUSSION

In this work, we present a machine learning method to segment OARs and tumor in standard T1 and T2 brain MRI images. A simple rigid registration was used to align all the library images to the same spatial coordinates. The KNN models were used to characterize the knowledge of previous segmentations, and the trained models were used to predict OAR and tumor contours. Deeley et al.[10] compared the performance of automatic segmentation of OARs with human experts using T1 MRI images of 20 high-grade glioma patients and found that the differences were less than 5%. However, the segmentations of smaller tubular structures such as the chiasm and optic nerves showed higher variation and were challenging for both experts and automatic segmentation methods. Isambert et al.[38] also compared automatic segmentation with manual segmentation using T1 MRI images of 11 brain cancer patients. They observed excellent segmentation accuracy for OAR volumes >7 cc; for example, DSC values for larger OARs such as the eyes, brain stem, and cerebellum were >0.8, while they were <0.41 for smaller structures such as the optic nerves, optic chiasm, and pituitary gland. Similarly, our results showed that segmentation accuracy was generally better for large-volume objects, and we achieved slightly better segmentation accuracy for small-volume objects than other studies: the mean DSC was 0.49 ± 0.17 for the optic chiasm, 0.75 ± 0.09 for the right optic nerve, and 0.73 ± 0.17 for the left optic nerve in our study, while it was lower than 0.41 for these organs in Isambert et al.[38] and was around 0.4-0.5 in Deeley et al.[10] For some OARs (especially the optic chiasm and brain stem) and tumors, the DSC and HD were observed to be relatively poorer than for the others [Tables 1-3], which is mainly attributed to the considerable variation in the shape, size, and location of these OARs or tumors.
Furthermore, their boundaries were generally unclear and irregular with discontinuities, adding significant challenges to auto-segmentation. In addition, MRI scans acquired with clinical scanners generally show wide inter-patient variation due to heterogeneity in MRI protocols and differences in tissue contrast, which are a function of MRI field strength and are vendor specific.[29] Compared to previous studies, our research has multiple strengths. First, it develops a semi-automated segmentation method using the simple KNN learning algorithm. Our method can avoid manual contouring of new patients, reduce the uncertainties, and facilitate online plan adaptation. Second, it does not require multiple or special MRI sequences and only needs standard T1- and T2-weighted MRI, which makes our method easy to implement and avoids the issues associated with specialized sequences, such as limited availability, increased scan time, patient movement, and cost. Third, unlike previous studies,[10,38] it does not require deformable registration between training and new patients, which avoids the possible uncertainties or errors associated with image registration. Our study has some limitations. First, the ROI for tumor segmentation was created manually due to patient specificity. However, only a rough estimate of the tumor location is required, and the creation of the ROI takes only a few seconds, especially with prior knowledge of the patient's cancer condition. Second, all of the patients included in our study had a localized tumor, and all were treated with conventional fractionated radiotherapy. Multiple brain metastases are very common in stereotactic radiotherapy/radiosurgery treatments, and a separate study is required to explore the feasibility of segmenting multiple tumors automatically.
Third, our current study is a preliminary work, which used handcrafted image features and the simple KNN algorithm. Even though KNN requires less training time than deep learning approaches, its prediction process is usually slower: unlike deep learning models, KNN does not learn weights during training, is a memory-based classifier, and requires the entire training dataset for prediction. Note that a laptop with a low-level configuration was used in this study, and we expect that our method would be much faster if graphics processing unit (GPU) computing were used, considering that GPU-based high-performance computing has been increasingly used in radiotherapy.[41] Future studies should explore alternative algorithms such as deep learning approaches for fully automatic segmentation in the brain. However, we think MRI segmentation based on a simple learning method like KNN is still quite valuable because it is simple, easy to implement, requires minimal computing resources, and is very easy to tune. In addition, KNN is a good algorithm to start with in order to understand the issues posed by learning algorithms for automatic segmentation, and it can serve as a baseline for comparison with more sophisticated learning algorithms. Finally, the dataset used in our study is smaller than in most other studies in the literature, but our OAR and tumor segmentation results are comparable or superior to others [Table 4]. This is because we carefully evaluated various features and found the best features for this work, and we used ROIs to assist learning. In future work, data augmentation such as random rotation, translation, and scaling may also be applied, which would increase the effective size of the training data.[42]

CONCLUSIONS

In this paper, we presented a semi-automatic segmentation approach based on a KNN model to segment organs (brain stem, optic chiasm, optic nerves, eye lenses, and eye globes) and tumor on standard T1 and T2 brain MRI images. The overall performance of our method is comparable or superior to previous work. It provides fast and accurate image processing and planning tools and is one step forward toward MRI-guided radiotherapy.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
