Literature DB >> 34660938

An Automatic Framework to Create Patient-specific Eye Models From 3D Magnetic Resonance Images for Treatment Selection in Patients With Uveal Melanoma.

Mohamed Kilany Hassan1, Emmanuelle Fleury2,3, Denis Shamonin1, Lorna Grech Fonk1,4, Marina Marinkovic4, Myriam G Jaarsma-Coes1,4, Gregorius P M Luyten4, Andrew Webb1, Jan-Willem Beenakker1,4, Berend Stoel1.   

Abstract

PURPOSE: The optimal treatment strategy for uveal melanoma (UM) relies on many factors, the most important being tumor size and location. Building on recent developments in high-resolution 3D ocular magnetic resonance imaging (MRI), we developed an automatic image-processing framework to create patient-specific eye models and to subsequently determine the full 3D tumor shape and size automatically. METHODS AND MATERIALS: From 15 patients with UM, 3D inversion-recovery gradient-echo (T1-weighted) and 3D fat-suppressed spin-echo (T2-weighted) images were acquired with a 7T MRI scanner. First, the sclera and cornea were segmented from the T2-weighted image by mesh-fitting. The T1- and T2-weighted images were then coregistered. From the registered T1-weighted image, the lens, vitreous body, retinal detachment, and tumor were segmented. Fuzzy C-means clustering was used to differentiate the tumor from retinal detachments. The tumor model was verified and (if needed) edited by an ophthalmic MRI specialist. Subsequently, the prominence and largest basal diameter of the tumor were measured automatically based on the verified contours. These results were compared with manual assessments on the original images and with ultrasound measurements to show the errors in manual analysis.
RESULTS: The framework successfully created an eye model fully automatically in 12 cases. In these cases, a Dice similarity coefficient (mean surface distance) of 97.7%±0.84% (0.17±0.11 mm) was achieved for the sclera, 96.8%±1.05% (0.20±0.06 mm) for the vitreous body, 91.6%±4.83% (0.15±0.06 mm) for the lens, and 86.0%±7.4% (0.35±0.27 mm) for the tumor. The manual assessments deviated, on average, 0.39±0.31 mm in prominence and 1.7±1.22 mm in basal diameter from the automatic measurements.
CONCLUSIONS: The described framework combined information from T1- and T2-weighted images to accurately determine tumor boundaries in 3D. The proposed process may have a direct effect on clinical workflow, as it enables an accurate 3D assessment of tumor dimensions, which directly influences therapy selection.
© 2021 Published by Elsevier Inc. on behalf of American Society for Radiation Oncology.


Year:  2021        PMID: 34660938      PMCID: PMC8503565          DOI: 10.1016/j.adro.2021.100697

Source DB:  PubMed          Journal:  Adv Radiat Oncol        ISSN: 2452-1094


Introduction

Uveal melanoma (UM) is the most common primary intraocular malignancy in adults, with an incidence of approximately 5.2 cases per million per year in the US, of which 50% develop metastases. Apart from enucleation, therapeutic options include brachytherapy, stereotactic radiation therapy, and proton therapy. Treatment selection relies on many factors, such as tumor size and location. Tumor size is usually represented by its basal diameter (the maximum diameter of the tumor base) and tumor prominence (the minimum distance between the tumor apex and the outer boundary of the sclera). Brachytherapy is generally selected for small- to medium-sized UM (for ruthenium brachytherapy, the limits are defined as a largest basal diameter <16 mm and a tumor prominence <6 mm from the internal sclera) distal from the optic disc, whereas proton therapy is used if the tumor is more extensive. These tumor dimensions are typically determined using 2D ultrasound, with the transducer positioned perpendicularly to the tumor. Although ultrasound imaging is fast and inexpensive, it is hampered by low tissue contrast between the tumor and the sclera. Ultrasound can also underestimate or overestimate tumor dimensions, because it provides only 2D information via oblique planes through the tumor. Ocular magnetic resonance imaging (MRI), however, can produce high-resolution 3D images with high soft-tissue contrast, using dedicated receive coils and high magnetic field strengths. By comparing tumor prominence obtained from MRI with that from ultrasound (US), it has been shown that the higher accuracy of MRI measurements can significantly influence treatment selection. Treatment planning is usually based on 3D parametric models of the eye and tumor, constructed by combining spheres and ellipsoids in software packages such as EYEPLAN and OCTOPUS for proton therapy planning.
For brachytherapy, generally only the tumor prominence is used to calculate the time the applicator needs to be in situ, although 3D planning software such as Plaque Simulator (Eye Physics, LLC, Los Alamitos, California) is available. However, parametric models do not provide patient-specific information, which may lead to uncertainties and, subsequently, larger safety margins than necessary. Patient-specific MRI-based eye models have been developed to study the shape of the retina, and active shape models (ASMs) have also been used to segment the eye. However, these models did not include tumors. Moreover, ASMs may not capture all variations in the vitreous body (VB) because of the large variety in tumor shape and location, and they may have scalability problems if a test case has a different size or a more complex shape than the training set. A 3D U-net convolutional neural network has been used to segment eye structures and tumors, showing improvements in sclera and lens segmentation, but not in the tumor, compared with approaches using a mixture of ASMs and random forests. Recently, a weakly supervised framework based on a 2D convolutional neural network was proposed to segment the tumor only. However, the proposed slice-by-slice segmentation may suffer from discontinuities and underestimate the tumor size, especially for complex shapes. In this study, we used an automatic framework to create patient-specific eye models, including UM segmentation, without needing prior knowledge of the tumor's shape or location. For validation, these segmentations were verified and (if needed) edited by a specialist to create models for automatically determining the tumor prominence and basal diameter.

Materials and Methods

Clinical data set

The study protocol was in accordance with the Declaration of Helsinki and was approved by Leiden University Medical Center's medical ethics committee. Informed consent was obtained from all participants. Fifteen patients with UM (mean age, 59.3 ± 13.9 years) were included retrospectively. The study sample included both posterior and anterior tumors, with and without retinal detachments, and covered a wide variety of tumor sizes. Patients were examined on a 7T Philips Achieva whole-body MRI scanner (Best, The Netherlands) using a custom-built eye coil. Eye-motion artifacts were minimized by a cued-blinking protocol, with participants instructed to focus on a cross as a fixation target. Three-dimensional inversion-recovery gradient-echo (T1-weighted) and 3D fat-suppressed spin-echo (T2-weighted) images were acquired (see Table 1). In one case, a postcontrast-enhanced image was required to discriminate between the tumor and retinal detachment because of an indecisive T1-weighted image. The tumor classification, according to the American Joint Committee on Cancer, was based on fundus and ultrasound imaging (Table E1).
Table 1

MRI acquisition parameters

Parameter                      | 3D inversion-recovery gradient-echo (T1-weighted) | 3D T2-weighted fat-suppressed spin-echo (T2-weighted) | 3D T1-weighted fat-suppressed spin-echo (postcontrast-enhanced)
Inversion time, ms             | 1280           | -              | -
Repetition time, ms            | 5.4            | 2500           | 5.4
Echo time, ms                  | 2.4            | 194            | 2.4
Flip angle                     | 7°             | 90°            | 7°
Acquisition resolution, mm³    | 0.53×0.55×0.51 | 0.60×0.60×0.60 | 0.60×0.60×0.60
Reconstruction resolution, mm³ | 0.28×0.28×0.30 | 0.24×0.24×0.30 | 0.28×0.28×0.30
Field of view, mm³             | 40×42×38       | 47×47×38       | 45×45×38
Scanning direction             | Axial          | Axial          | Axial

Segmentation framework

The overall approach was to segment the eye structures by combining the complementary information of T1- and T2-weighted images. Figure 1 shows the proposed framework, developed in MeVisLab, version 2.7.1 (Fraunhofer MeVis, Bremen, Germany). First, the sclera and cornea were segmented in the T2-weighted image (termed the sclera mask); then the T1- and T2-weighted images were coregistered using this mask. The VB, lens, retinal detachment, and tumor were segmented in the registered T1-weighted image. Subsequently, fuzzy C-means clustering was applied to differentiate the tumor from the retinal detachment. Finally, the surface of each structure was determined by fitting a mesh to either a strong positive or negative edge, depending on the object's contrast, using an adaptive subdivision surface-fitting algorithm. In the following sections, each step is described in more detail.
Figure 1

Example of (A) 3D T1-weighted and (B) 3D T2-weighted images with anatomic annotations. (C) Framework of the UM segmentation.


Sclera segmentation in the T2-weighted image

First, the center of the eye was estimated using the fast radial symmetry transform algorithm, which searches for a sphere with the diameter of an average adult eye (25 mm). Subsequently, the inner boundaries of the sclera and cornea were detected in 2 phases: (1) a Hessian-based filter was applied to enhance sphere-like objects, with minimum connections to the surrounding extraocular muscles, and a mesh at the eye's center was expanded iteratively to fit to strong negative edges (bright-to-dark); and (2) a convex hull was computed for the mesh and expanded further on the T2-weighted image to fit to strong edges.

Image registration

To combine complementary information, the T1- and T2-weighted images were coregistered with intensity-based rigid registration, using normalized mutual information as the similarity metric (using the software package Elastix). The sclera mask was used to focus the registration on the eye. The dedicated eye coil and cued-blinking protocol produced high-resolution isotropic 3D data with minimal partial volume effects and motion artifacts, which allowed the application of intensity-based registration without any landmarks.
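To illustrate the similarity metric itself (this is a minimal NumPy sketch of normalized mutual information, not the Elastix implementation, and the function name is our own):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B) of two image arrays.
    Higher values indicate better alignment of the two intensity maps."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                      # joint probabilities
    px, py = p.sum(axis=1), p.sum(axis=0)      # marginal probabilities
    h = lambda q: -(q[q > 0] * np.log(q[q > 0])).sum()  # Shannon entropy
    return (h(px) + h(py)) / h(p)
```

For identical images the joint histogram is diagonal, so H(A, B) = H(A) and the metric reaches its maximum of 2; for independent images it approaches 1. A registration optimizer would search over rigid transforms for the pose maximizing this value within the sclera mask.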

VB segmentation

To discriminate the low-intensity VB from the high-intensity lens, tumor, and retinal detachment on the registered T1-weighted images, fuzzy C-means clustering was performed within the sclera mask, and the largest region was selected through connected component analysis. Similarly to the sclera segmentation, the final VB mask was obtained by expanding an initial mesh iteratively to match the positive edges (dark to bright).
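The clustering and largest-component steps above can be sketched as follows; this is a minimal, illustrative fuzzy C-means on scalar intensities with function names of our own choosing, not the framework's actual implementation:

```python
import numpy as np
from scipy import ndimage

def fuzzy_cmeans_1d(x, k=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means on scalar voxel intensities x (shape (N,)).
    Returns the cluster centers and the (k, N) membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((k, x.size))
    u /= u.sum(axis=0)                    # memberships sum to 1 per voxel
    for _ in range(iters):
        um = u ** m
        c = (um @ x) / um.sum(axis=1)     # fuzzily weighted cluster centers
        d = np.abs(x[None, :] - c[:, None]) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)         # standard FCM membership update
    return c, u

def largest_component(mask):
    """Keep only the largest connected component of a boolean mask."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))
```

In the framework's terms, the low-intensity cluster within the sclera mask would be passed to `largest_component` to isolate the VB before the mesh fitting refines its surface.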

Lens segmentation

The lens was detected based on the following criteria: (1) location in the anterior part of the eye, (2) high intensity on the T1-weighted images, (3) location nearest to the eye's optical axis, and (4) a volume of approximately 165 mm³. Therefore, we divided the eye into an anterior and a posterior mask, using a plane at the sclera's center perpendicular to the optical axis, and processed only the anterior part. The optical axis was estimated from principal component analysis on the sclera mask as the axis closest to an arbitrary point in the middle of the first coronal slice in front of the cornea. The high-intensity cluster containing the lens, tumor, and retinal detachment, obtained from the previous step, was subsequently used to select candidate objects in the anterior mask. Using connected component analysis and mesh fitting, the lens object was selected from these candidates based on its central location and volume.
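The principal-component step can be sketched as below. This illustrates only the axes computation on a binary mask (the anchoring to a point in front of the cornea described above is omitted), with a function name of our own choosing:

```python
import numpy as np

def principal_axes(mask, spacing=(0.3, 0.3, 0.3)):
    """Principal axes of a binary mask via PCA of its voxel coordinates.
    Returns the centroid (in mm) and a 3x3 matrix whose columns are the
    axes ordered by decreasing variance (first column ~ elongation axis)."""
    pts = np.argwhere(mask) * np.asarray(spacing)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    w, v = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return centroid, v[:, ::-1]       # reorder columns to descending variance
```

For a near-spherical sclera the axes are almost degenerate, which is presumably why the framework additionally uses an anterior reference point to pick and orient the optical axis among the candidates.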

Tumor segmentation

Because all other structures had been identified in the preceding steps, the tumor could be segmented by subtracting the VB and lens masks from the sclera mask. Subsequent fuzzy C-means clustering was used to distinguish the tumor from the anterior chamber and retinal detachments, because the tumor generally has a higher intensity. Morphologic operations (erosion and closing) were applied before the mesh fitting.
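The mask subtraction and morphologic cleanup can be sketched as follows; a hedged illustration only, with an illustrative kernel size and function name (the study's kernel is 5×5×5):

```python
import numpy as np
from scipy import ndimage

def tumor_candidate_mask(sclera_mask, vb_mask, lens_mask, kernel=3):
    """Candidate tumor region: inside the sclera but outside the vitreous
    body and lens, cleaned with erosion (removes isolated noise voxels)
    followed by closing (fills small gaps)."""
    cand = sclera_mask & ~vb_mask & ~lens_mask
    st = np.ones((kernel,) * 3, bool)
    cand = ndimage.binary_erosion(cand, structure=st)
    cand = ndimage.binary_closing(cand, structure=st)
    return cand
```

The fuzzy C-means step would then separate the tumor from the anterior chamber and any retinal detachment inside this candidate region, before the mesh fitting extracts its surface.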

Tumor prominence and basal diameter

Because tumor prominence is defined as the minimum distance between the tumor apex and the outer boundary of the sclera (instead of the detected inner boundary), the outer boundary of the sclera needed to be detected first, by expanding the segmented sclera contour further to fit edges from both T1- and T2-weighted images simultaneously. The prominence P was subsequently determined by a maximum-minimum search over the Euclidean distances between the outer sclera points S and the tumor points T:

P = \max_{t \in T} \min_{s \in S} \lVert t - s \rVert

The basal diameter D was defined as the maximum Euclidean distance between all tumor base points B, the subset of T containing the points within 0.3 mm of the inner sclera:

D = \max_{b_i, b_j \in B} \lVert b_i - b_j \rVert
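Assuming the surfaces are available as point clouds in millimeter coordinates, the two measurements follow directly from their definitions (a sketch; function names are our own):

```python
import numpy as np
from scipy.spatial.distance import cdist

def tumor_prominence(sclera_outer, tumor):
    """P = max over tumor points of the min distance to the outer sclera,
    i.e. the height of the apex above the outer scleral surface (mm)."""
    d = cdist(tumor, sclera_outer)          # pairwise Euclidean distances
    return d.min(axis=1).max()

def basal_diameter(sclera_inner, tumor, tol=0.3):
    """D = max pairwise distance among the tumor base points B: tumor
    points within `tol` mm of the inner sclera."""
    near = cdist(tumor, sclera_inner).min(axis=1) <= tol
    base = tumor[near]
    if len(base) < 2:
        return 0.0
    return cdist(base, base).max()
```

Because both measures operate on the full 3D point sets, they avoid the plane-selection problem of 2D measurements discussed later in the paper.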

Evaluation

An ophthalmic MRI specialist created ground-truth contours (G) by manually correcting the segmentations, based on information from the T1-weighted, T2-weighted, and (if available) postcontrast-enhanced images. The accuracy of the automatic segmentation (S) was assessed by computing the volume overlap between G and S using the Dice similarity coefficient (DSC):

DSC = \frac{2 \lvert G \cap S \rvert}{\lvert G \rvert + \lvert S \rvert}

Furthermore, we studied the distribution of the surface distance (SD) between G and S for each anatomic structure per patient, computing the mean absolute surface distance (mSD) and the frequency with which the surface distance fell within the image resolution (±0.3 mm). To highlight the influence of MRI on defining the true dimensions of the tumor, we compared the manual tumor-dimension measurements based on US with the manual measurements based on MR images. To evaluate the accuracy of the manual assessments of tumor dimensions performed on the original MR images, automatic measurements were performed on the G contours to create reference measurements that avoided the confounding effect of any segmentation error. Trivial manual inconsistencies in selecting points away from the G contours (ie, intraobserver variability) were quantified by matching the manual points automatically to the G contours and then comparing the resultant measurements with the manual assessments.
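The two evaluation metrics can be sketched for voxel masks as follows; a sketch under stated assumptions (surface distance is measured one-directionally from S's surface to G's surface using the scan's voxel spacing), not the evaluation code used in the study:

```python
import numpy as np
from scipy import ndimage

def dice(g, s):
    """Dice similarity coefficient: 2|G ∩ S| / (|G| + |S|)."""
    g, s = g.astype(bool), s.astype(bool)
    return 2.0 * np.logical_and(g, s).sum() / (g.sum() + s.sum())

def mean_surface_distance(g, s, spacing=(0.3, 0.3, 0.3)):
    """Mean distance (mm) from the surface voxels of S to the surface of G.
    A symmetric variant would average this with the G-to-S direction."""
    surface = lambda m: m & ~ndimage.binary_erosion(m)
    gs, ss = surface(g.astype(bool)), surface(s.astype(bool))
    # Euclidean distance map to G's surface, sampled at S's surface voxels
    dist_to_g = ndimage.distance_transform_edt(~gs, sampling=spacing)
    return dist_to_g[ss].mean()
```

The fraction of S-surface voxels with `dist_to_g` below 0.3 mm would give the "within image resolution" statistic reported in the Results.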

Results

Eye model

The eye segmentation ran fully automatically (ie, without any user interaction) in 12 out of 15 patients (see Figure 2). The data consisted of a wide variety of cases, eg, UM located on top of the optic disc (Subject002), a complex UM shape (Subject003), ciliary melanoma (Subject005), UM in an elongated eye (Subject006; axial length, 26.6 mm), tumors touching the lens (Subject004 and Subject011), tumors infiltrating the lens and ciliary body (Subject007 and Subject012), and necrotic UM (Subject008). Motion artifacts can be observed in the T2-weighted images of Subject004, Subject005, Subject009, and Subject012, and slightly in Subject006. Figure 3 shows the 3 cases in which the framework failed either to segment the tumor (Subject013 and Subject014) or to register the images (Subject015).
Figure 2

Segmentation results for each subject, overlaid on the corresponding T1- and T2-weighted images. The solid and dashed contours are the reference and segmentation contours, respectively. Sclera: cyan; vitreous body: yellow; lens: green; and tumor: red.

Figure 3

Three cases in which the fully automatic segmentation failed. Sclera: cyan; vitreous body: yellow; lens: green; and tumor: red. The red arrows on a T1-weighted image of Subject015 point to the motion artifact.

High DSC values were obtained for the sclera, VB, lens, and tumor: 97.7% ± 0.84%, 96.8% ± 1.05%, 91.6% ± 4.83%, and 86.0% ± 7.4%, respectively (Figure 4). The corresponding mSD values were 0.17 ± 0.11 mm, 0.20 ± 0.06 mm, 0.15 ± 0.06 mm, and 0.35 ± 0.27 mm, respectively. Figure E2 shows the distribution of the surface distances per patient for each segmented structure. The distributions show that, on average, 82%, 76%, 89%, and 68% of the segmentation errors in the sclera, VB, lens, and tumor, respectively, were within the voxel size of 0.3 mm.
Figure 4

Box plots show the quantitative evaluation of the automatic segmentation results for 12 cases. (A) Dice similarity coefficient. (B) Mean absolute surface distance error. Blue squares show the 50% confidence intervals, whiskers show the 90% confidence intervals, red lines indicate median values, and orange dots show mean values.

The manual and reference measurements of the tumor dimensions based on MR images are presented in Table 2 along with the corresponding manual measurements based on US images. For the tumor prominence, the difference between the manual and reference measurements was in the range of –0.92 to 1.12 mm, with an overall average absolute difference of 0.39 ± 0.31 mm. For the basal diameter, the range of the differences was –4.65 to 3.0 mm, with an overall average absolute difference of 1.70 ± 1.22 mm.
Table 2

Reference and manual measurements of the tumor prominence and basal diameter in all cases

                   Tumor prominence, mm                    Tumor basal diameter, mm
Case       | US   | Auto (ref) | Manual | Auto (seg) | US   | Auto (ref) | Manual | Auto (seg)
Subject001 | 6.0  | 6.4        | 6.4    | 6.3        | 13.0 | 14.1       | 13.0   | 13.7
Subject002 | 4.0  | 3.5        | 3.8    | 3.1        | 11.0 | 10.6       | 9.4    | 8.4
Subject003 | 13.0 | 9.3        | 9.5    | 9.3        | 21.0 | 18.7       | 19.5   | 17.8
Subject004 | 13.0 | 13.8       | 14.3   | 14.3       | 19.0 | 18.6       | 17.9   | 13.3
Subject005 | 3.0  | 3.3        | 3.7    | 3.1        | 6.0  | 5.9        | 4.1    | 4.1
Subject006 | 5.0  | 5.1        | 6.2    | 4.7        | 16.0 | 13.7       | 13.7   | 14.1
Subject007 | 9.0  | 8.2        | 8.1    | 7.9        | 14.0 | 13.3       | 8.6    | 18.2
Subject008 | 7.0  | 6.2        | 5.8    | 6.1        | 19.0 | 13.5       | 12.8   | 12.7
Subject009 | 4.0  | 3.7        | 3.9    | 3.6        | 9.0  | 9.8        | 7.1    | 8.1
Subject010 | 11.0 | 10.4       | 10.1   | 10.2       | 11.0 | 11.3       | 14.3   | 10.0
Subject011 | 12.0 | 9.9        | 10.2   | 9.6        | 22.0 | 21.9       | 20.5   | 22.2
Subject012 | 9.0  | 8.6        | 7.7    | 8.7        | 14.0 | 16.5       | 14.1   | 18.2
Subject013 | 1.0  | 2.6        | 2.8    | -          | 7.0  | 5.0        | 4.5    | -
Subject014 | 3.0  | 3.1        | 3.2    | -          | 8.0  | 6.6        | 5.0    | -
Subject015 | 3.0  | 3.3        | 3.9    | -          | 12.0 | 12.8       | 10.1   | -

US = ultrasound; Auto (ref) = automatic measurement on the reference (verified) contours; Auto (seg) = automatic measurement on the unedited segmentation contours.

Discussion

We introduced an automatic framework to segment 3D ocular MR images of patients with UM, which succeeded in registering and segmenting the targeted structures fully automatically in 12 out of the 15 cases (80%). In the remaining 3 cases, the eye models were easily created after manual interaction. Sclera segmentation of an elongated eye (Subject006) showed that the algorithm did not suffer from scalability problems. Furthermore, the capability of the mesh-fitting technique to preserve continuity helped to estimate missed tumor-sclera boundaries (Subject002). Motion artifacts in the T2-weighted images may have degraded the quality of the sclera segmentation, because defining the correct boundaries was difficult. However, including information from the T1-weighted images helped to mitigate the effect of motion artifacts, and adjusting the fitting parameters to put more constraints on the contour expansion can resolve the remaining segmentation errors that result from motion. To address the root of this problem, we are working on faster MRI acquisition protocols for future applications. Because our data set did not include any clips, clip artifacts were not encountered. Promising publications show only very small clip artifacts; in such cases, we expect that using coarser contours for fitting, and giving more weight to the contour's continuity term in the fitting optimization, may help avoid noisy edges so that edges missed owing to clip artifacts can be estimated. Small susceptibility artifacts, caused by air bubbles beneath the eyelids, were noticed in some patients, such as Subject015, and resulted in local image distortions of the anterior segment. The contour continuity term in the fitting equation, however, helped to mitigate the effect of this distortion on the sclera segmentation result.
The patients included in this study did not use mascara or other types of makeup that would cause significant artifacts and spatial distortions. The crystalline lens was segmented with high accuracy (high DSC and the lowest mSD). However, the algorithm underperformed when the tumor overlapped the lens, owing to the unclear boundary between the two. Consequently, the lens was undersegmented in Subject007 and oversegmented in Subject013, in which the ciliary melanoma infiltrated into the lens. The algorithm could therefore be improved by incorporating constraints on the lens shape. The algorithm succeeded in segmenting the VB. However, as the segmentation was performed in 3D, the smoothness parameter of the mesh-fitting algorithm limited the flexibility of the contour to fit the complex VB-tumor interface in Subject003. Tumor segmentation was performed without a priori information on tumor shape or location. In all cases, the mSD was within the reconstruction voxel size, except in Subject007 (1.14 mm) and Subject008 (0.49 mm), which caused a high standard deviation in the tumor's overall mSD. For Subject007, the low contrast between the lens, tumor, and retinal detachment in the T1-weighted image caused oversegmentation of the tumor. For the necrotic UM in Subject008, although parts of the necrosis were included in the tumor contour by the morphologic operations, the remainder of the necrotic area was not included, because it was larger than the morphologic mask, causing a high mSD. Overall, 90.7% ± 6.3% of the surface distances of the tumor contours were within the limit of 0.6 mm, which is the interobserver variability of US measurements in UM. We therefore believe that the current segmentation method is accurate enough for current clinical practice and can provide valuable additional clinical information, because it encompasses the complete 3D tumor shape instead of only a 2D cross-section.
It would, however, be valuable to also evaluate the reproducibility of manual tumor segmentation on these MR images, as this would be a more representative benchmark for the proposed automatic segmentation framework. Retinal detachment is a common complication of ocular tumors and can be difficult to discriminate from the main tumor in non-contrast-enhanced MR images, because, depending on the amount of melanin in the tumor, the tumor can be isointense with the retinal detachment on T1- and T2-weighted images. In some patients, however, the fuzzy clustering could correctly identify the tumor because it is the region of highest intensity on the T1-weighted images. As this may not hold for all cases, as can be observed in Subject007, inclusion of contrast-enhanced scans in future analyses may provide more robust differentiation between tumor and retinal detachment, as the signal intensity of only the tumor will increase on T1-weighted images. One limitation of this study is the limited number of cases; more patients need to be included in future prospective validation studies, because variation in tumor appearance is substantial among patients with UM. The patients included in this study already revealed some limitations of the proposed algorithms: the tumor in Subject014 could not be segmented automatically, because the tumor was smaller than the morphologic kernel (5×5×5 voxels = 2.1 mm³) used in the algorithm to filter out noisy voxels. The tumor-segmentation algorithm therefore has difficulty segmenting tumors with volumes less than 2.1 mm³. This problem has been reported before in other methods, where it was difficult to segment tumors with volumes less than 50 voxels (4.7 mm³). The increased resolution of our 7T imaging protocol significantly reduced this problem, but with the current imaging methods, the visualization, and therefore automatic segmentation, of very small UM remains challenging.
Additionally, the lens-segmentation algorithm is designed to segment the crystalline lens, which is hyperintense on the T1-weighted image. As a result, the algorithm cannot segment intraocular lenses (eg, Subject015), as their signal intensities are generally lower than those of a crystalline lens. Moreover, this study highlighted the influence of acquisition artifacts on the creation of an eye model. Although the sclera could be segmented automatically in Subject015, the registration failed because of a loss of mutual information between the T1- and T2-weighted images, owing to a combination of motion artifacts in the T1-weighted image and susceptibility artifacts in the cornea in the T2-weighted image caused by small air bubbles beneath the eyelid. In this case, however, we could roughly use the sclera contour to segment the VB and the tumor automatically on the T1-weighted image. Although the proposed algorithms were developed on 7T MRI, they can be used on 3T images as well, as the MR-imaging features are relatively similar across field strengths. Nonetheless, the mesh-fitting parameters and the size of the morphologic kernels would have to be optimized for 3T images, because the image resolution is slightly lower at 3T than at 7T. It has recently been shown that the MRI methodology can indeed be translated to clinical 3T scanners, but we also noticed small differences in image contrast, which would need to be incorporated into the automatic segmentation algorithms. We expect some degradation in segmentation quality, such as larger segmentation errors in most of the contours, owing to the lower resolution and contrast of 3T images compared with 7T. Table E2 provides an overview comparison between the proposed model and published models. To perform a fair comparison, however, a common data set would be needed, because the results are strongly influenced by the number of images used and their quality.
Nevertheless, as the proposed model is based more on image content and depends less on the shape of the anatomies involved, we expect it to be more flexible than ASMs and therefore better able to segment eyes with additional pathology, such as the myopic elongated eye (Subject006), or with differently located tumors, without needing a priori information about the tumor location. Furthermore, both ASMs and deep-learning models need a new training data set for every new imaging modality to create a different set of model parameters. We expect the proposed framework to be more generic: it would not need a new data set to adjust the segmentation parameters for a different type of MR scanner. From a clinical perspective, the generated models need to be checked by an expert before being used for therapy planning, but the minimal manual corrections needed would increase objectivity and reproducibility compared with a fully manually segmented model. Accordingly, the comparison of DSC and mSD values in Table E2 shows that the proposed model has promising performance in localizing different structures and needs minimal user interaction to correct the contours. Note that the average mSD of the tumor contours in the proposed work is higher than that of the ASM with a convolutional neural network. This is mainly caused by the results of Subject007, where the tumor was oversegmented. Had we considered the tumor segmentation of Subject007 on the postcontrast-enhanced image instead of the T1-weighted image, the average mSD would have been smaller (0.27 ± 0.09 mm). This average mSD is within the image reconstruction resolution and shows that the proposed contours can be used clinically with minimal user correction.
The 3D MR imaging allows for a more accurate assessment of tumor geometry than does 2D US, but it also shifts the difficulty of determining the correct orientation for the size measurements from the acquisition (ensuring the correct orientation of the US probe) to the image analysis (determining the correct 3D plane to perform the measurement). The manual determination of the correct plane to measure tumor prominence is relatively easy. Consequently, the difference between the manual and automatic prominence measurements was less than the acquisition resolution (0.6 mm) for 10 of 12 patients and less than the reconstruction resolution (0.3 mm) for 6 of 12 patients. The main source of the difference was the exact definition of the boundary of the tumor and/or sclera (ie, intraobserver variability), which can be seen by the average distance to the reference contours G of 0.35±0.31 mm. In the 2 patients with a relatively large difference of about 1 mm, Subject006 and Subject012, the main cause of the difference was a difference in definition of the outer sclera. The manual tumor prominence measurements were performed on the T1-weighted image, where the outer sclera boundary is often not clearly identifiable. As the automatic framework combines information from T1- and T2-weighted images, a more accurate determination of the tumor boundaries is possible. In contrast to measuring the tumor prominence, the largest basal diameter is difficult to measure manually on 3D MR images for 2 reasons. First, the direction of the largest basal diameter can best be determined in a slice parallel to the tumor base. However, in this scan plane, the sclera can be parallel to the image slice, which can mask the tumor boundaries owing to partial voluming (see Subject007 in Fig E1b). Second, the outer contours of the tumor base are generally curved in 3 dimensions, making it impossible to assess the complete tumor boundary in a 2D reconstructed slice. 
As a result, the manual determination of the largest basal diameter is not only a time-consuming task but also quite inaccurate for larger tumors. The intraobserver variability was high (0.89 ± 1.13 mm) because of the difficulty of defining the boundary of the tumor base, and the difference between the manual and automatic measurements was larger than both the acquisition and the reconstruction resolution in 11 out of 12 patients in this study. The tumor prominence measurements from this cohort confirmed the finding that manual prominence measurements based on MR images deviate approximately 1.1 ± 0.9 mm from manual measurements on US images. This overestimation of the tumor prominence likely originated from a slightly oblique orientation of the US probe, resulting in an apparent increase in tumor thickness. Moreover, we showed that the manual basal diameter measurements on MR images deviated by approximately 2.3 ± 1.7 mm from the manual measurements on US images, with US generally reporting a larger basal diameter. Along with the difficulty of determining the correct plane for this measurement, which is also a limiting step in the manual interpretation of the MR images, US is further hindered by a low contrast between the tumor and sclera. Additionally, the relatively small field of view of US images means it is not always possible to visualize the complete tumor base, increasing the inaccuracy of the method. As a result, the automatic MR-based determination of the tumor diameter is a relevant improvement over current clinical practice. Because current radiation therapy protocols have incorporated the different uncertainties of US in the margin and/or dose delivery, a more dedicated evaluation of the effect of 3D MR-based tumor models on radiation therapy planning is needed before they can be applied in clinical practice. The main advantages of the automatic framework are its reproducibility and objectivity.
Regardless of the user's level of experience, the automatic framework produces consistent contours that minimize an objective function, resulting in more consistent tumor-dimension measurements. It thereby minimizes the sources of subjectivity that are prevalent in manual segmentation and assessment. Moreover, with the advances in high-resolution MRI, a large number of slices across multiple contrasts is acquired, which can make manual segmentation a time-intensive task; the proposed automatic framework can significantly reduce the time required. In addition, the segmentation results of this study show the potential of the proposed methods to accommodate variations in both the shape of the UM and its location within the eye. Image quality and the extent of tumor infiltration into other structures can affect the segmentation quality; however, such errors can always be corrected manually. Our future work will therefore focus on resolving these difficulties and further reducing manual intervention. The framework makes it possible to separate boundary detection from the 3D geometric analysis, providing better insight into the tumor dimensions and resulting in a better treatment plan. The aforementioned difficulties in manual measurement illustrate that automatic size measurements are more accurate than manual ones.

Conclusion

We have proposed an automatic framework to segment high-resolution 3D ocular MR images of UM and to provide accurate information about the tumor dimensions. The presented segmentation results show the potential of the proposed framework to accommodate variability in eye size and in UM shape and location without requiring prior knowledge. The proposed framework may have a direct effect on the clinical workflow, as it enables an accurate 3D assessment of the tumor dimensions, directly influencing therapy selection, which currently relies on manual measurements and delineation of tumor margins on 2D ultrasound images. A personalized 3D MR-based model of the tumor and surrounding structures may contribute to more systematic and accurate treatment determination and planning.

Acknowledgements

We thank Dr Rahil Shahzad for his assistance with visualization.
