Literature DB >> 29244902

Feasibility of predicting tumor motion using online data acquired during treatment and a generalized neural network optimized with offline patient tumor trajectories.

Troy P Teo1,2, Syed Bilal Ahmed1,2, Philip Kawalec1,2, Nadia Alayoubi1,2, Neil Bruce3, Ethan Lyn4, Stephen Pistorius1,5.   

Abstract

PURPOSE: The accurate prediction of intrafraction lung tumor motion is required to compensate for system latency in image-guided adaptive radiotherapy systems. The goal of this study was to identify an optimal prediction model that has a short learning period, so that prediction and adaptation can commence soon after treatment begins, and that requires minimal reoptimization for individual patients. Specifically, the feasibility of combining a generalized (i.e., averaged) neural network, optimized offline using historical patient tumor trajectories, with real-time tumor positions acquired online during treatment delivery was examined.
METHODS: A 3-layer perceptron neural network was implemented to predict tumor motion for a prediction horizon of 650 ms. A backpropagation algorithm and a batch gradient descent approach were used to train the model. Twenty-seven 1-min lung tumor motion samples (selected from a CyberKnife patient dataset) were sampled at 7.5 Hz (0.133 s intervals) to emulate the frame rate of an electronic portal imaging device (EPID). A sliding temporal window was used to sample the data for learning; its length was set equal to the first breathing cycle detected in each trajectory. Performing a parametric sweep, an averaged error surface of mean square errors (MSE) was obtained from the prediction responses of the seven trajectories used to train the model (Group 1). An optimal input data size and number of hidden neurons were selected to represent the generalized model. To evaluate the prediction performance of the generalized model on unseen data, twenty tumor traces (Group 2) that were not involved in training the model were used for leave-one-out cross-validation.
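The setup described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a 3-layer perceptron trained by backpropagation with batch gradient descent to predict position ~650 ms ahead (about 5 samples at the 7.5 Hz EPID frame rate). A synthetic sinusoidal trace and a 5 mm amplitude stand in for a CyberKnife trajectory; the 35-input / 20-hidden-neuron sizes follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 7.5                        # EPID frame rate, Hz
horizon = 5                     # ~650 ms ahead at 7.5 Hz
n_in, n_hidden = 35, 20         # input window length, hidden neurons

t = np.arange(0, 60, 1 / fs)    # one 1-min trace
amp = 5.0                       # peak motion amplitude, mm (assumed)
trace = amp * np.sin(2 * np.pi * t / 4.0)   # ~4 s breathing period

# Sliding windows: each 35-sample window predicts the sample 5 steps
# past its last entry. Positions are normalized to unit amplitude.
N = len(trace) - n_in - horizon + 1
X = np.stack([trace[i:i + n_in] for i in range(N)]) / amp
y = np.array([trace[i + n_in - 1 + horizon] for i in range(N)]) / amp

# 3-layer perceptron: linear input -> tanh hidden -> linear output.
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 1));    b2 = np.zeros(1)

lr = 0.02
for _ in range(5000):                 # batch gradient descent
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = (h @ W2 + b2).ravel()      # linear output
    g_out = (pred - y)[:, None] / N   # dL/d(output), L = 0.5*mean(err^2)
    g_hid = (g_out @ W2.T) * (1.0 - h ** 2)   # backpropagate to hidden layer
    W2 -= lr * h.T @ g_out;  b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_hid;  b1 -= lr * g_hid.sum(axis=0)

mae = amp * np.mean(np.abs(pred - y))   # back to mm
print(f"MAE over training trace: {mae:.2f} mm")
```

On a regular synthetic breath the fit converges well below 1 mm; the irregular cycle-to-cycle periodicity of real patient traces is what makes the problem harder in practice.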
RESULTS: An input data size of 35 samples (4.6 s) and 20 hidden neurons were selected for the generalized neural network. An average sliding window length of 28 data samples was used. The average initial learning period before the first predicted tumor position became available was 8.53 ± 1.03 s. Average mean absolute errors (MAE) of 0.59 ± 0.13 mm and 0.56 ± 0.18 mm were obtained for Groups 1 and 2, respectively, giving an overall MAE of 0.57 ± 0.17 mm. The average root-mean-square error (RMSE) of 0.67 ± 0.36 mm over all traces (0.76 ± 0.34 mm for Group 1 and 0.63 ± 0.36 mm for Group 2) is comparable to previously published results. Prediction errors are mainly due to irregular periodicity between breathing cycles. Since the errors from Groups 1 and 2 are within the same range, the model can generalize and predict on unseen data.
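For reference, the two summary metrics quoted above are computed as follows; the position arrays here are illustrative stand-ins, not patient data.

```python
import numpy as np

# Toy actual/predicted tumor positions in mm (illustrative values only).
actual = np.array([1.2, 0.8, -0.5, -1.1, 0.3])
predicted = np.array([1.0, 0.9, -0.2, -1.0, 0.6])

mae = np.mean(np.abs(predicted - actual))            # mean absolute error
rmse = np.sqrt(np.mean((predicted - actual) ** 2))   # root-mean-square error
print(f"MAE = {mae:.2f} mm, RMSE = {rmse:.2f} mm")
# → MAE = 0.20 mm, RMSE = 0.22 mm
```

Note that RMSE is always at least as large as MAE and penalizes the occasional large miss more heavily, which is why both are reported.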
CONCLUSIONS: This is a first attempt to use an averaged MSE surface (obtained from predictions of different patients' tumor trajectories) to determine the parameters of a generalized neural network. Such a network could be deployed as a plug-and-play predictor of tumor trajectories during treatment delivery, eliminating the need to optimize individual networks with pretreatment patient data.
© 2017 American Association of Physicists in Medicine.

Keywords:  DMLC tracking; image-guided adaptive radiotherapy; sliding window neural network; system latency; tumor motion prediction

Year:  2018        PMID: 29244902     DOI: 10.1002/mp.12731

Source DB:  PubMed          Journal:  Med Phys        ISSN: 0094-2405            Impact factor:   4.071


  7 in total

1.  Technical Note: Deriving ventilation imaging from 4DCT by deep convolutional neural network.

Authors:  Yuncheng Zhong; Yevgeniy Vinogradskiy; Liyuan Chen; Nick Myziuk; Richard Castillo; Edward Castillo; Thomas Guerrero; Steve Jiang; Jing Wang
Journal:  Med Phys       Date:  2019-03-12       Impact factor: 4.071

2.  Real-time prediction of tumor motion using a dynamic neural network.

Authors:  Majid Mafi; Saeed Montazeri Moghadam
Journal:  Med Biol Eng Comput       Date:  2020-01-08       Impact factor: 2.602

3.  Respiratory Prediction Based on Multi-Scale Temporal Convolutional Network for Tracking Thoracic Tumor Movement.

Authors:  Lijuan Shi; Shuai Han; Jian Zhao; Zhejun Kuang; Weipeng Jing; Yuqing Cui; Zhanpeng Zhu
Journal:  Front Oncol       Date:  2022-05-27       Impact factor: 5.738

4.  Adaptive respiratory signal prediction using dual multi-layer perceptron neural networks.

Authors:  Wenzheng Sun; Qichun Wei; Lei Ren; Jun Dang; Fang-Fang Yin
Journal:  Phys Med Biol       Date:  2020-09-14       Impact factor: 3.609

5.  A Super-Learner Model for Tumor Motion Prediction and Management in Radiation Therapy: Development and Feasibility Evaluation.

Authors:  Hui Lin; Wei Zou; Taoran Li; Steven J Feigenberg; Boon-Keng K Teo; Lei Dong
Journal:  Sci Rep       Date:  2019-10-16       Impact factor: 4.379

6.  Development of AI-driven prediction models to realize real-time tumor tracking during radiotherapy.

Authors:  Dejun Zhou; Mitsuhiro Nakamura; Nobutaka Mukumoto; Hiroaki Tanabe; Yusuke Iizuka; Michio Yoshimura; Masaki Kokubo; Yukinori Matsuo; Takashi Mizowaki
Journal:  Radiat Oncol       Date:  2022-02-23       Impact factor: 3.481

7.  Automatic diaphragm segmentation for real-time lung tumor tracking on cone-beam CT projections: a convolutional neural network approach.

Authors:  David Edmunds; Greg Sharp; Brian Winey
Journal:  Biomed Phys Eng Express       Date:  2019-03-12