
Automatic Tumor Segmentation With a Convolutional Neural Network in Multiparametric MRI: Influence of Distortion Correction.

Lars Bielak1,2, Nicole Wiedenmann3,2, Nils Henrik Nicolay3,2, Thomas Lottner1, Johannes Fischer1, Hatice Bunea3,2, Anca-Ligia Grosu3,2, Michael Bock1,2.   

Abstract

Precise tumor segmentation is a crucial task in radiation therapy planning. Convolutional neural networks (CNNs) are among the highest-scoring automatic approaches for tumor segmentation. We investigated the difference in segmentation performance between geometrically distorted and distortion-corrected diffusion-weighted data in patients with head and neck tumors. Eighteen patients with head and neck tumors underwent multiparametric magnetic resonance imaging, including T2w, T1w, T2*, perfusion (ktrans), and apparent diffusion coefficient (ADC) measurements. Owing to strong geometrical distortions of diffusion-weighted echo planar imaging in the head and neck region, ADC data were additionally distortion corrected. To investigate the influence of geometrical correction, 14 CNNs were trained on data with geometrically corrected ADC maps and another 14 CNNs were trained on data without the correction, using different samples of 13 patients for training and 4 patients for validation each. Each network was trained from scratch with randomly initialized weights, but the training data distributions were pairwise equal for corrected and uncorrected data. Segmentation performance was evaluated on the remaining test patient of each of the 14 sets. The CNNs scored an average Dice coefficient of 0.40 ± 0.18 for data including distortion-corrected ADC maps and 0.37 ± 0.21 for uncorrected data. A paired t test revealed that the performance was not significantly different (P = .313). Thus, geometrical distortion of diffusion-weighted imaging data in patients with head and neck tumors does not significantly impair CNN segmentation performance.
© 2019 The Authors. Published by Grapho Publications, LLC.


Keywords:  Multi-parametric MRI; automatic tumor segmentation; convolutional neuronal network; radiation therapy

Year:  2019        PMID: 31572790      PMCID: PMC6752289          DOI: 10.18383/j.tom.2019.00010

Source DB:  PubMed          Journal:  Tomography        ISSN: 2379-1381


Introduction

Precise delineation and segmentation of tumors is an essential step in radiation therapy planning. Good segmentation accuracy is a prerequisite for both effective tumor treatment and preservation of functionality of surrounding healthy tissue and thereby for prolonged patient survival (1, 2). Manual segmentation of lesions is a tedious task, and hence automatic detection methods have been proposed as tools for diagnostics, treatment planning and response evaluation (3). With these automatic segmentation methods, problems such as interobserver variability in target volume definition, definition and assessment of tumor heterogeneity, and tumor classification may be overcome (4, 5). Early segmentation solutions were focused on image signal intensity–based methods or semiautomatic computer learning algorithms with manually selected or linearly learned image features (6–13). Many of these segmentation methods made use of multiparametric imaging based on data from multiple cross-sectional imaging modalities (eg, positron emission tomography, magnetic resonance imaging [MRI], computed tomography). A key feature of MRI however is the possibility to create multiparametric imaging data in a single modality and in a single imaging session—thus, physical, functional, and anatomical features can be imaged during the same examination session and in a (nearly) identical patient position, which facilitates the alignment of image data before segmentation. Today, the highest scoring algorithms for automatic tumor segmentation use (convolutional) neural networks [(C)NNs] (14). NNs feed a set of input data through a number of processing layers, where each layer consists of a number of neurons that are activated by a nonlinear function depending on a linear combination of input data and a bias. 
With an increasing number of layers, the ability to represent nonlinear relationships between input and output increases, effectively enabling a deep NN to learn any functional relationship, provided that enough input data are available. In addition, a CNN is capable of incorporating contextual information and can therefore learn higher-level representations of the data such as edge information (15). In multiparametric MRI for tumor segmentation, different anatomical image contrasts (T1- and T2-weighted) are combined with functional information acquired with perfusion and diffusion measurements. Particularly diffusion-weighted imaging (DWI) has proven to contribute valuable additional information for tumor delineation (16–18). For the DWI images, an echo planar imaging (EPI) pulse sequence is commonly used. Despite its advantages, the EPI technique has a major disadvantage: it is very sensitive to off-resonances caused by inhomogeneity of the B0 magnetic field, which leads to severe geometrical distortions (19). Several groups have worked on solutions on the pulse sequence level, such as readout-segmented (rs)EPI (20), which has been shown to dramatically decrease image distortions (21, 22). As these methods alone cannot remove image distortions completely, the necessity to quantify the effect of image distortion on automatic tumor segmentation becomes evident. Image distortions are especially pronounced in MRI of head and neck tumors, where the complex geometry of head, neck, and shoulders severely limits B0 shimming. This results in an increased field inhomogeneity and thus stronger image distortions than in other body regions such as the brain. In addition, in tumors with hypoxic subareas, [18F]-fluoromisonidazole positron emission tomography can be used as a metabolic marker for hypoxia localization (23–25), which is important for individualized treatment schemes, for example, by dose painting.
In these patients, MRI would be a desirable imaging alternative if the effect of geometric distortion on tumor segmentation performance could be controlled. In this work, CNNs were used for the segmentation of multiparametric MRI data of patients with head and neck tumor, and the effects of geometric distortion of diffusion-weighted input data on the segmentation performance were analyzed.
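The layer mechanics described above (a nonlinear activation of a linear combination of a local input neighborhood plus a bias) can be sketched in a few lines. The following Python/NumPy toy is our own illustration and is not part of the DeepMedic implementation used later; the function name, ReLU choice, and valid-padding convention are assumptions:

```python
import numpy as np

def conv_layer(x, kernels, bias):
    """Minimal forward pass of one 2D convolutional layer: each output
    channel is a nonlinear function (here ReLU) of a linear combination
    of a local input neighborhood plus a bias. Toy illustration only;
    real CNNs use optimized library primitives."""
    kh, kw, n_out = kernels.shape[0], kernels.shape[1], kernels.shape[3]
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1  # "valid" padding
    out = np.zeros((h, w, n_out))
    for i in range(h):
        for j in range(w):
            patch = x[i:i + kh, j:j + kw, :]               # local neighborhood
            out[i, j] = np.tensordot(patch, kernels, axes=3) + bias
    return np.maximum(out, 0.0)                            # ReLU activation

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 5))      # e.g., 5 input channels (T1w, T2w, ...)
k = rng.normal(size=(3, 3, 5, 4))   # 3 x 3 kernels, 4 output channels
y = conv_layer(x, k, bias=np.zeros(4))
print(y.shape)  # (6, 6, 4)
```

Stacking such layers is what lets the network build contextual, edge-like features from the raw channels.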

Materials and Methods

Head and Neck Tumor Patient Trial

Patient data were taken from a prospective clinical trial in patients with head and neck squamous cell carcinoma, which investigated the correlation between tumor response under radiotherapy and hypoxic tumor subvolumes. Written informed consent was obtained from each patient, and the institutional review board approved the study (Approval No. 479/12). Patients received anatomical and functional MRI before undergoing radiochemotherapy and again 2 and 5 weeks into treatment. In this work, the pretherapeutic MRI data were used for analysis to avoid therapy-related bias. In total, multiparametric MRI data from 18 patients were available. For MRI, a clinical 3 T whole-body magnetic resonance (MR) system (Siemens Tim Trio, Erlangen, Germany) was used. Patients were placed in an individually fitted therapy mask, which was fixed to the patient couch of the MR system. A flexible receive coil was wrapped around the anterior part of the neck and used in combination with the additional spine array coils for MR signal reception. The MR protocol of the study consisted of anatomical T1w and T2w MRI, T2* maps from multiecho gradient echo MRI, perfusion MRI including the vascular permeability ktrans, quantified using contrast-enhanced dynamic T1-weighted MRI, and the apparent diffusion coefficient (ADC), which was assessed with diffusion-weighted echo-planar imaging. DWI data were acquired using standard and readout-segmented diffusion-weighted EPI sequences. Conventional EPI used an echo time (TE) of 69 ms and an acquisition time (TA) of 5 min, while the rsEPI sequence (readout segmentation of long variable echo-trains, RESOLVE) used TE = 51 ms and TA = 7 min with 7 segments. Both diffusion sequences used a 3-direction trace scan with b-values of 50, 400, and 800 s/mm² to quantify the ADC, with phase-encoding (PE) along the anterior–posterior direction. All relevant sequence specifications are listed in Table 1.
Table 1.

List of Input Channels and Corresponding Sequence Details

Sequence                | TE [ms] | TR [ms] | Resolution [mm³] | Comments/Other
T1 Fast Spin Echo       | 11      | 504     | 0.7 × 0.7 × 4.0  |
T2 Fast Spin Echo       | 100     | 5000    | 0.7 × 0.7 × 4.0  |
Multi-Echo GRE          | 5–33    | 600     | 1.1 × 1.1 × 3.0  | nEchoes = 12, reconstructed map: T2*
Dynamic T1w Perfusion   | 1.56    | 4.65    | 1.4 × 1.4 × 3.0  | nTimepoints = 36, reconstructed map: ktrans
DWI (rsEPI)             | 51      | 2510    | 2 × 2 × 3        | b = {50, 400, 800} s/mm², reconstructed map: ADC, nSegments = 7
DWI (conventional EPI)  | 69      | 3500    | 2 × 2 × 3        | b = {50, 400, 800} s/mm², reconstructed map: ADC

Data Preprocessing

Owing to the additional acquisition time, only 12 of the 18 patients tolerated the additional rsEPI protocol. Where available, rsEPI images were used in the study; for the other patients, conventional EPI images were used. Perfusion ktrans was determined according to the Tofts model (26). Both ktrans and T2* were calculated with the software platform SyngoVia (Siemens Healthcare), while monoexponentially fitted ADC maps were determined with the MR system's postprocessing software. To improve the performance of the subsequent CNN analysis and to ensure comparability between subjects, T1- and T2-weighted images were normalized to zero mean and unitary standard deviation. Images were then interpolated to a 1 mm isotropic resolution using cubic splines, and image coregistration was performed using standard MATLAB (The MathWorks, Natick, MA; Version 2016b) tools (eg, imregister), based on similarity transformations with a mutual information metric.
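The monoexponential ADC fit mentioned above can be sketched as follows. This is an illustrative Python/NumPy reimplementation, not the scanner's postprocessing software; the voxel-wise log-linear least-squares approach and the function name are our assumptions:

```python
import numpy as np

def fit_adc(signals, bvals=(50.0, 400.0, 800.0)):
    """Voxel-wise monoexponential ADC fit, S(b) = S0 * exp(-b * ADC).
    `signals` carries the b-value samples along the last axis; taking
    ln S = ln S0 - b * ADC turns the fit into linear least squares.
    (Illustrative sketch, not the vendor implementation.)"""
    b = np.asarray(bvals, dtype=float)
    s = np.clip(np.asarray(signals, dtype=float), 1e-12, None)  # guard log(0)
    logs = np.log(s)
    # Coefficients are ordered [intercept, slope]; the slope is -ADC.
    coeffs = np.polynomial.polynomial.polyfit(b, logs.reshape(-1, b.size).T, 1)
    return -coeffs[1].reshape(logs.shape[:-1])

# A voxel with true ADC = 1.0e-3 mm^2/s is recovered by the fit:
b = np.array([50.0, 400.0, 800.0])
adc = fit_adc(1000.0 * np.exp(-b * 1.0e-3))  # ≈ 1.0e-3
```

The same call works on whole volumes, since the fit is vectorized over all leading axes.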

Additional DWI Preprocessing

Head and neck regions are especially challenging for DWI, as the complex geometry imposes severe limitations on magnetic field shimming (27). To study the influence of geometric accuracy on the CNN performance, an algorithm was developed to geometrically correct ADC maps. The problem of geometric distortions between 2 MR images due to field inhomogeneities is well known, but the imaging protocol did not include additional field map measurements, so standard correction schemes (19) could not be applied. Instead, a postprocessing method was adapted from optical microscopy (28), 2-photon imaging (29, 30), and particle imaging techniques (31), where it was developed to correct for nonrigid motion between acquisitions. The distorted DWI and a geometrically more precise T2w image are treated as 2 images of the same region. The distortion field between the 2 images is then estimated according to the Lucas–Kanade (32) method implemented in a pyramidal layout (33). Our 3D MATLAB implementation of the algorithm uses the mutual information metric to account for the different contrasts of the images. As distortions are expected only in the PE direction owing to the low effective PE bandwidth, the spatial degrees of freedom of the distortion field were limited to the PE direction. The implementation was validated with volunteer data acquired on a 3 T MRI system (Tim Trio, Siemens Healthineers) using T2w and DWI contrasts together with a B0 field map. With the correction algorithm, geometrically corrected ADC maps were calculated for all 18 patients as an additional preprocessing step for the CNN analysis. Distortion fields were extracted from the b = 50 s/mm² images only, as the low b-value provides optimal signal-to-noise ratio and the same image distortion is expected at higher b-values.
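Once a distortion field has been estimated, applying it reduces to resampling the image along the PE axis only, since that is the sole spatial degree of freedom. The following Python/SciPy sketch illustrates such a constrained warp; the function name and linear interpolation are our choices, and this is not the authors' Lucas–Kanade pyramid implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unwarp_pe(image, pe_shift, pe_axis=0):
    """Resample `image` with a per-voxel displacement `pe_shift` (in
    pixels) that acts only along the phase-encoding axis, mirroring
    the constraint that EPI distortions occur along AP. Illustrative
    sketch only."""
    coords = np.mgrid[tuple(map(slice, image.shape))].astype(float)
    coords[pe_axis] += pe_shift            # sample from the displaced location
    return map_coordinates(image, coords, order=1, mode='nearest')

img = np.zeros((8, 8)); img[4, :] = 1.0    # a bright row at index 4
shift = np.full(img.shape, 1.0)            # uniform 1-pixel shift along PE
out = unwarp_pe(img, shift, pe_axis=0)
print(int(np.argmax(out[:, 0])))           # the row now appears at index 3
```

In the study, the shift map varies voxel by voxel (mean 0.46, standard deviation 4.24 pixels over all patients), but only along the AP direction.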

CNN

Finally, a 3D CNN was configured to perform the segmentation task on the patient data. To study the effect of image distortion on the segmentation result, 2 separate sets of networks were trained: the first included the original, uncorrected ADC maps, while the second used the geometrically corrected ADC maps. For the calculations, the DeepMedic (34) CNN architecture was used. DeepMedic is a 3D CNN that uses 2 calculation pathways, a normal one and one with 3 times lower spatial resolution, to combine local fine structure with coarser contextual image information. Each pathway consisted of 8 hidden layers with {40, 40, 50, 50, 60, 60, 70, 70} channels using 3 × 3 × 3 kernels, followed by 2 fully connected layers of 100 channels each, which combine the high- and low-resolution pathways. In this layout, the following 5 input channels were used: T1-weighted images, T2-weighted images, ktrans maps, T2* maps, and ADC maps. As ground truth, gross tumor volumes (GTVs) were used that were contoured by a radiation oncologist and a radiologist on the basis of MR data. For contouring, all original MR data were available; however, most volumes were drawn on the basis of T1w imaging and copied to all other contrasts in the process. The data were divided into groups, with 13 patients in the training set, 4 patients in the validation set, and 1 patient in the testing set. A leave-1-out cross-validation was performed for 14 test patients, both with and without geometrically corrected ADC data. For better comparability, the 14 uncorrected and corrected data samples were chosen to have pairwise equal distributions in the validation, training, and testing sets. Using this set of networks, a statistical analysis for the 2 cases was performed using the Dice coefficient as a measure of segmentation performance. The Dice coefficient is calculated as Dice = 2 TP/(2 TP + FN + FP) (35), where TP are true positives, FN false negatives, and FP false positives.
A paired t test on the resulting Dice coefficients for the 14 training cases was used to test whether a significant difference could be observed.
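The evaluation described above can be sketched in Python. The Dice formula matches the one given in the text; the toy masks and the paired Dice values are illustrative stand-ins, not the study's data:

```python
import numpy as np
from scipy import stats

def dice(pred, truth):
    """Dice = 2 TP / (2 TP + FN + FP) for binary segmentation masks."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return 2 * tp / (2 * tp + fn + fp)

# Toy masks: the prediction hits 3 of 4 tumor voxels plus 1 false positive.
truth = np.array([1, 1, 1, 1, 0, 0], bool)
pred  = np.array([1, 1, 1, 0, 1, 0], bool)
print(dice(pred, truth))  # 2*3 / (2*3 + 1 + 1) = 0.75

# Paired t test over per-patient Dice scores (values are illustrative).
corrected   = [0.40, 0.55, 0.31, 0.62, 0.48]
uncorrected = [0.37, 0.50, 0.33, 0.60, 0.44]
t_stat, p_value = stats.ttest_rel(corrected, uncorrected)
```

A paired test is the right choice here because corrected and uncorrected networks share the same train/validation/test splits, so each pair of Dice scores refers to the same test patient.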

Results

The verification of the distortion correction algorithm on the randomly distorted MR image showed a substantial decrease in Euclidean image distance from 0.69 ± 0.06 to 0.21 ± 0.03. The volunteer experiment shows that the algorithm reproduces the general structure of a measured field map with minor deviations in the fine structures (Figure 1A). The Euclidean image distance between the measured field map and the calculated distortion field amounts to 2.1 ± 2.3 pixels. In a few regions of strong distortion, for example, on the boundaries of the trachea, distortions are so severe that neither registration method delivers clinically acceptable results; however, this was the case in only 6 patients, and it equally affected corrected and uncorrected data. As these irreversible distortions affect only parts of an image, the corresponding cases could still be used in the evaluation process. Figure 1 shows the results of subsequent correction: both methods realign anatomical areas well with the corresponding T2w reference image, while severe misalignments are seen without correction. The calculated distortion fields for all patient cases have a total mean of 0.46 and a standard deviation of 4.24 pixels, which clearly illustrates the need for correction (Figure 1B).
Figure 1.

(A) Top: Overlay of T2-weighted (T2w) image (purple) and readout segmented echo planar imaging (rsEPI) image (green). Left: Original image with distortions. Center: Corrected diffusion-weighted imaging (DWI) using the correction algorithm with the T2w image as a reference. Right: Corrected DWI using a measured B0 field map for correction. Bottom: The corresponding distortion fields used for correction. Both fields show the same general behavior, while some fine structure, especially in regions of strong distortions around the trachea, cannot be resolved using the algorithm. White arrows mark locations where the misalignment of T2w and DWI is clearly seen. (B) A histogram showing the relative amount of displacements within all diffusion images that were included in the study. The standard deviation is 4.2 pixels, which shows the large effect of the distortion correction.

The CNN was trained on the patient data for 35 epochs per sample case. Figure 2 shows the training progress for an exemplary case. The training progress appears largely the same for both input cases of corrected and uncorrected ADC data. However, as can be seen in the validation curve, there is a noticeable difference between the two cases, especially in the sensitivity metric. Figure 3 shows the subsequent segmentation result of the corresponding test sample. Both methods, with and without distortion correction, labeled some areas far from the GTV as tumor tissue, but in general, a good overlap between the ground truth (GTV) and the segmentation results was found, with Dice coefficients up to 0.68 with and 0.65 without distortion correction. Figure 4 shows the segmentation performance over all test sessions in a scatter plot. Despite the presence of severe image distortions in the ADC maps, the distortion correction improved the segmentation performance of the CNN, albeit not to a statistically significant degree (P = .313).
The mean Dice coefficient for segmentation with distortion-corrected ADC-maps was 0.40 ± 0.18, while for uncorrected data, it amounted to 0.37 ± 0.21.
Figure 2.

Training process of the convolutional neural network (CNN) for 1 training example. After training for 35 epochs, the network seemed to have reached peak performance. The plots for corrected and uncorrected training data show great similarity, which is reflected in the comparison of Dice coefficients for testing data.

Figure 3.

3D visualization of the CNN segmentation with (A) and without (C) distortion correction. In addition, corresponding transverse slices of the region of interest are shown (B, D). The ground truth is shown in green, and the segmentation results are plotted in red. Both segmentations show good overlap with the gross tumor volume (GTV). The overall segmentation of the geometrically corrected data (Dice coefficient 0.59) was considerably better than that of the uncorrected case (Dice coefficient 0.40). However, both segmentations generally included too much tissue on the anterior side, as well as some isolated areas in the neck.

Figure 4.

Comparison of Dice coefficients with and without geometrically corrected input data for all 14 training rounds. The dashed line marks the line of identity. A paired t test on the data did not show a significant difference in Dice coefficient for corrected or uncorrected data. Mean Dice coefficient with distortion correction is 0.40 ± 0.18, and 0.37 ± 0.21 without correction. Points below the line of identity indicate an improvement in segmentation performance for geometrically corrected ADC data. The 2 different DWI-sequences are shown in yellow and blue.


Discussion

In this work, a CNN was defined and trained to segment head and neck tumors using clinical data from patients undergoing radiation therapy. In particular, 2 input cases were compared with respect to segmentation performance: 1 with geometric distortion correction of the input DWI data, and 1 without. Even in this study with only 18 patients, good segmentation could be achieved, and no significant differences between the distortion-corrected and uncorrected cases were found with regard to segmentation performance. Still, the correction algorithm severely reduced image distortion. The approach is capable of registering different contrasts, such as T2w and DWI image data. Registration could not provide satisfactory results whenever signals from multiple voxels were mapped to the same location during the imaging process; neither method, algorithm- or field map-based, could then recover the original, distortion-free image. This happened at a few sharp tissue–air boundaries and is therefore only a small limitation of the study. Owing to the limited number of complete patient data sets, a modified leave-1-out cross-validation method was chosen for statistical analysis. The method is limited by the incomplete number of possible permutations of training, validation, and testing sets. A complete leave-1-out cross-validation could not be performed owing to high calculation times for each of the 42 840 possible combinations of the 3 sets. Therefore, 14 permutations with the given numbers of patients in the training, validation, and testing categories were used. Each permutation had a different data sample in the testing category, while the remaining patients were randomly distributed among the training and validation sets. This random selection was necessary owing to the long calculation times required to completely train a network, taking several days on a Tesla C2075 GPU.
To alleviate the challenge of small data sets, additional images acquired after the start of therapy could be used for training and testing. However, the tumors often shrink drastically, leading to changes in signal intensity for ADC and ktrans (36). Therefore, owing to vanishing tumors, the amount of available during-treatment data is too small for deep learning techniques. This can already be seen in the present data set, which shows failure of segmentation in 2 of the cross-validation sets (Figure 4). These kinds of statistical fluctuations are to be expected more frequently with a smaller amount of available data. Thorough use of cross-validation must then be applied to extract statistically relevant information. However, there is a lower limit on the amount of data that can be used with deep NNs, which can, in most cases for CNNs, be determined only experimentally. From other tumor entities, such as prostate or breast cancer, it is known that DWI plays a vital part in tumor segmentation and definition (37–39), and similar behavior is found in head and neck cancers (40–42). In a preliminary study, we could also show that the overall segmentation performance of head and neck tumors in MRI is critically dependent on diffusion data (43). Therefore, it is surprising that the analysis of the segmentation performance of the CNN with and without distortion correction does not show significant differences. This could have several reasons: In the training process, the CNN could have learned a correction scheme to undistort input data within its receptive field. Because each layer consists of a number of convolutions with input data taken from the previous layer, local translation of features can be implemented. In addition, the standard deviation of the displacement map within the primary tumor over all included subjects is 2.29 pixels, while the standard deviation over all other pixels within all subjects is 4.28 pixels.
This shows that distortions are far less pronounced within the tumor than in the rest of the FOV, especially in contrast to areas with tissue–air boundaries, such as the nasal cavities, where high distortions are to be expected, in particular for EPI methods. It is also important to note that the ADC maps constitute only 1 of 5 input channels. The high-resolution T2w images, for example, offer a much higher anatomical contrast and are nearly unaffected by distortion, whereas conventional DWI images can be heavily distorted. Hence, it is to be expected that feature maps linked to the ADC channel show an effective decrease in feature resolution, while high-resolution information is taken from other input channels such as T2w data. In general, the quality of the ADC data in this study was limited by noise, which reduces the ability to differentiate between tumor and normal tissue. To increase the signal-to-noise ratio, DWI acquisitions can be averaged, but this often increases acquisition times to durations that are no longer compatible with clinical study schedules. Alternatively, noise can be explicitly modeled during the ADC calculation, which has been shown to reduce ADC heterogeneity (44, 45). In addition, the choice of b-values for the DWI acquisition can be optimized, which requires prior knowledge about the target ADC values (46, 47). In general, a strong limitation of this study is the size of the training data set. The small size of only 18 patients can lead to false-positive segmentations far from the GTV owing to geometric distortions (as discussed above) and owing to the selection of the training regions: the algorithm was programmed such that, in the statistical mean, the same number of tumor-containing (foreground) and nontumor-containing (background) input patches is selected, leading to an effective underrepresentation of background in the training process.
A larger data set could help to train a CNN that can detect more subtle differences in segmentation performance, as the high standard deviation observed in the resulting Dice coefficients of the 14 data samples is expected to converge toward a common mean value. In many studies, data sets with >60 to >250 patients have been used (14, 34). Although other studies showed Dice coefficients in the range of 0.6 up to 0.9 (48), depending on the segmentation target (mostly brain tumors and subregions of brain tumors), our data set focused on a completely different tumor entity. This work offered special insight into the performance of a CNN in a body region with strong imaging challenges, as the head and neck region shows stronger field inhomogeneity than the brain. Also, in contrast to most brain regions, the head and neck area cannot be assumed to be rigid. Although the head is immobilized using a thermoplastic mask, swallowing and tongue movement lead to intrinsic misalignment of images taken at different time points, as can be seen in Figure 5. Because the CNNs were trained on multiparametric data, some errors, especially at the GTV edges, are present in the ground-truth labels, leading to worse segmentation results than in rigid body areas. In addition, the interobserver variability for head and neck cancer is already much higher than, for example, that for brain tumors (49, 50). Despite the intrinsic limitations on image quality, the trained network yielded good tumor segmentations, and it was shown that distortion correction of ADC data does not significantly improve segmentation performance. To reduce the effect of motion-related misalignment, a nonlinear registration method could be applied. However, successful application of these methods is particularly demanding in the head and neck area, and thus simultaneous signal acquisition, that is, intrinsic coregistration, would be preferred (51).
Simultaneous acquisition of multiple signal parameters could be implemented by MR-fingerprinting, as has been shown in the prostate before (52).
Figure 5.

T2w (left) and T1-weighted (T1w) (right) images showing the same anatomical area, but acquired 10 minutes after each other. Motion in the trachea leads to slightly differently located tumor borders. This effect introduces errors in the ground truth labels and decreases the maximum achievable segmentation performance.

In a next step, the contribution of each CNN input channel (eg, T2w or ADC images) to the segmentation performance needs to be quantified. This will not only allow for better analysis and understanding of the segmentation but also help optimize the imaging protocol with regard to increased patient comfort, that is, gathering more relevant information in less time, and treatment outcome. In summary, our data showed that within the highly challenging anatomic head and neck region, even a CNN trained on nondistortion-corrected data can provide good-quality tumor segmentation. Considering the strong changes in head and neck anatomy during radiochemotherapy, adaptive replanning strategies may help improve dose coverage of tumors and better spare organs at risk (53, 54). This might ultimately result in better locoregional control rates and decreased treatment-related toxicities (55). The advent of MR-guided radiotherapy concepts, especially using hybrid MR-LINAC systems, facilitates daily MR-based replanning strategies that in turn require swift segmentation tools to allow real-time treatment adaptation (56). To deliver daily imaging-adapted treatment plans, CNN-enabled MR-based autosegmentation strategies are crucial. Our data could therefore provide important information about the design and implementation of CNNs for MR-based autosegmentation.

Authors:  Satish Viswanath; B Nicolas Bloch; Mark Rosen; Jonathan Chappelow; Robert Toth; Neil Rofsky; Robert Lenkinski; Elisabeth Genega; Arjun Kalyanpur; Anant Madabhushi
Journal:  Proc SPIE Int Soc Opt Eng       Date:  2009-02-27

10.  Comparison of 68Ga-HBED-CC PSMA-PET/CT and multiparametric MRI for gross tumour volume detection in patients with primary prostate cancer based on slice by slice comparison with histopathology.

Authors:  Constantinos Zamboglou; Vanessa Drendel; Cordula A Jilg; Hans C Rischke; Teresa I Beck; Wolfgang Schultze-Seemann; Tobias Krauss; Michael Mix; Florian Schiller; Ulrich Wetterauer; Martin Werner; Mathias Langer; Michael Bock; Philipp T Meyer; Anca L Grosu
Journal:  Theranostics       Date:  2017-01-01       Impact factor: 11.556

Cited by (6 in total)

1.  Predicting Biochemical Failure in Irradiated Patients With Prostate Cancer by Tumour Volume Measured by Multiparametric MRI.

Authors:  Benedict Oerther; Moritz V Buren; Christina M Klein; Simon Kirste; Nils H Nicolay; Tanja Sprave; Simon Spohn; Deepa Darshini Gunashekar; Leonard Hägele; Lars Bielak; Michael Bock; Anca-L Grosu; Fabian Bamberg; Matthias Benndorf; Constantinos Zamboglou
Journal:  In Vivo       Date:  2020 Nov-Dec       Impact factor: 2.155

2.  Convolutional neural networks for head and neck tumor segmentation on 7-channel multiparametric MRI: a leave-one-out analysis.

Authors:  Lars Bielak; Nicole Wiedenmann; Arnie Berlin; Nils Henrik Nicolay; Deepa Darshini Gunashekar; Leonard Hägele; Thomas Lottner; Anca-Ligia Grosu; Michael Bock
Journal:  Radiat Oncol       Date:  2020-07-29       Impact factor: 3.481

3.  Automatic segmentation of head and neck primary tumors on MRI using a multi-view CNN.

Authors:  Jens P E Schouten; Samantha Noteboom; Roland M Martens; Steven W Mes; C René Leemans; Pim de Graaf; Martijn D Steenwijk
Journal:  Cancer Imaging       Date:  2022-01-15       Impact factor: 3.909

4.  Evaluation of deep learning-based multiparametric MRI oropharyngeal primary tumor auto-segmentation and investigation of input channel effects: Results from a prospective imaging registry.

Authors:  Kareem A Wahid; Sara Ahmed; Renjie He; Lisanne V van Dijk; Jonas Teuwen; Brigid A McDonald; Vivian Salama; Abdallah S R Mohamed; Travis Salzillo; Cem Dede; Nicolette Taku; Stephen Y Lai; Clifton D Fuller; Mohamed A Naser
Journal:  Clin Transl Radiat Oncol       Date:  2021-10-16

5.  Strategies for tackling the class imbalance problem of oropharyngeal primary tumor segmentation on magnetic resonance imaging.

Authors:  Roque Rodríguez Outeiral; Paula Bos; Hedda J van der Hulst; Abrahim Al-Mamgani; Bas Jasperse; Rita Simões; Uulke A van der Heide
Journal:  Phys Imaging Radiat Oncol       Date:  2022-08-13

6.  An Automated Segmentation Pipeline for Intratumoural Regions in Animal Xenografts Using Machine Learning and Saturation Transfer MRI.

Authors:  Wilfred W Lam; Wendy Oakden; Elham Karami; Margaret M Koletar; Leedan Murray; Stanley K Liu; Ali Sadeghi-Naini; Greg J Stanisz
Journal:  Sci Rep       Date:  2020-05-15       Impact factor: 4.379

