
Predicting machine's performance record using the stacked long short-term memory (LSTM) neural networks.

Min Ma1, Chenbin Liu2, Ran Wei1, Bin Liang1, Jianrong Dai1.   

Abstract

PURPOSE: The record of daily quality control (QC) items shows machine performance patterns and potentially provides warning messages for preventive actions. This study developed a neural network model that could predict the record and trend of data variations quantitatively. METHODS AND MATERIALS: The records of 24 QC items for a radiotherapy machine in our institute were investigated. The QC records were collected daily for 3 years. The stacked long short-term memory (LSTM) model was used to develop the neural network model. A total of 867 records were collected to predict the record for the next 5 days. For comparison with the stacked LSTM, an autoregressive integrated moving average (ARIMA) model was developed on the same data set. The accuracy of the model was quantified by the mean absolute error (MAE), root-mean-square error (RMSE), and coefficient of determination (R²). To validate the robustness of the model, the records of four QC items were collected for another radiotherapy machine and input into both the stacked LSTM model (without changing any hyperparameters) and the ARIMA model.
RESULTS: The mean MAE, RMSE, and R² with 24 QC items were 0.013, 0.020, and 0.853 in LSTM, versus 0.021, 0.030, and 0.618 in ARIMA, respectively. The results showed that the stacked LSTM outperforms the ARIMA. Moreover, the mean MAE, RMSE, and R² with four QC items were 0.102, 0.151, and 0.770 in LSTM, versus 0.162, 0.375, and 0.550 in ARIMA, respectively.
CONCLUSIONS: In this study, the stacked LSTM model accurately predicted the record and trend of QC items, and it remained robust when applied to another radiotherapy machine. Predicting future performance records can foresee possible machine failures, allowing early machine maintenance and reducing unscheduled machine downtime.
© 2022 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, LLC on behalf of The American Association of Physicists in Medicine.

Keywords:  long short-term memory networks (LSTM); predictive time series; quality control; radiotherapy

Year:  2022        PMID: 35170838      PMCID: PMC8906230          DOI: 10.1002/acm2.13558

Source DB:  PubMed          Journal:  J Appl Clin Med Phys        ISSN: 1526-9914            Impact factor:   2.102


INTRODUCTION

Linear accelerators (Linacs) undergo daily quality control (QC) checks to ensure that radiation treatments are delivered safely and accurately and that they meet the quality and safety criteria of AAPM TG 142. Daily QC items are normally performed by the radiotherapist using a conventional QC instrument or phantom. These sequential records of daily QC items measured over successive days can be considered time series; therefore, in the context of time-series predictive modeling, the question of predicting the future record and trend has been raised. Traditionally, statistical modeling techniques such as the autoregressive integrated moving average model (ARIMA) and its variants (the autoregressive (AR), moving average (MA), and autoregressive moving average (ARMA) models) capture only the linear elements of a time series and may not be sufficient for the daily QC record. Nonlinear time series are better analyzed using recurrent neural networks (RNNs). However, RNNs have difficulty handling long time series. The long short-term memory (LSTM) network, a type of RNN, was therefore proposed to tackle this forgetting problem. LSTM has shown good performance in various fields (finance, public transportation, astronomy, environmental science, and medicine). In previous studies of predictive model development for daily QC of Linacs, Li et al. applied artificial neural networks (ANNs) and ARMA to 5 years of daily beam QC records and showed that ANN had better prediction performance than ARMA. Puyati et al. used statistical process control and ARIMA to forecast QC. However, these models did not achieve good performance in predicting the QC record and trend of a Linac, and time lags existed in the predictions. The daily QC record is used to track the Linac's long-term stability when processing large quantities of records. With these records, medical physicists could calibrate the baseline and monitor the Linac's state to predict variation cycles and take preventive actions.
To understand the underlying structure and functions that produce the observed QC tests, an appropriate modeling tool is needed to extract and analyze the longitudinal record of daily QC items and predict future trends. In this study, a generalized LSTM model was developed to predict the record and trend of daily QC items for two radiotherapy machines. Additionally, this study emphasized discovering the common behaviors of Linac performance so that physicists can be more confident in predicting the machine's future behavior and taking action in a planned way before the tolerance level is reached. Finally, to compare and provide context for our results, we also developed a prediction model with ARIMA on the same data set.

METHODS AND MATERIALS

Data acquisition

A Varian Edge Linac (Varian Medical Systems, Inc., Palo Alto, CA) was installed and commissioned in our institute in May 2017. The machine performance check (MPC) is provided for daily QC tests on the Edge Linac. MPC is a fully integrated kV and MV image-based tool to examine and evaluate the machine's performance. On every workday, the MPC process was run by a radiation technician and took about 5 min. Twenty-four QC items for the 6 MV X-ray beam were run, including isocenter, collimation, gantry, and couch tests. The QC tests are highly automated: the user only needs to set up the IsoCal phantom and bracket on the treatment couch and then beam on for the predetermined energy. The MPC application has been evaluated as a Linac daily QC tool by several investigators. In this study, the records of 24 QC items based on MPC were collected at our institution using the Varian Edge for more than 3 years, from August 2017 to October 2020. A total of 867 records were collected to predict the record for the next 5 days.

Data preprocessing

Data preprocessing is a significant step before building a model. In this study, preprocessing included cleaning, interpolation, normalization, and data splitting. Duplicate data were deleted first. Cubic interpolation was used to double the amount of data to improve the prediction accuracy. The data were normalized to the range −1 to 1 for the model. The data were divided into three sets: 70% for training, 15% for validation, and 15% for testing. The training and validation sets were used to train the model with different hyperparameter combinations (see Section 2.3.2). The testing set was used to assess the performance of the model with the optimal hyperparameter combination.
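The preprocessing chain described above (de-duplication, cubic interpolation to double the data, normalization to [−1, 1], and a 70/15/15 chronological split) might be sketched as follows; the function name and the use of scipy's CubicSpline are assumptions for illustration, not the paper's code:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def preprocess(series, upsample=2):
    """Sketch of the paper's preprocessing pipeline. Names are illustrative."""
    x = np.asarray(series, dtype=float)
    # Drop consecutive duplicates (stand-in for the paper's duplicate removal).
    x = x[np.insert(np.diff(x) != 0, 0, True)]
    # Cubic interpolation to multiply the number of samples by `upsample`.
    t = np.arange(len(x))
    t_new = np.linspace(0, len(x) - 1, upsample * len(x))
    x = CubicSpline(t, x)(t_new)
    # Min-max normalization to the range [-1, 1].
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    # 70/15/15 chronological split into train/validation/test.
    n = len(x)
    i, j = int(0.70 * n), int(0.85 * n)
    return x[:i], x[i:j], x[j:]
```

The split is chronological rather than shuffled, since shuffling would leak future values into the training set of a time-series model.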

Building LSTM network

LSTM network

LSTM is very powerful in solving sequence prediction problems because it can store previous information, which is essential for predicting the future record and trend of daily QC items. Through the standard recurrent layer, self-loops, and its unique internal gate structure, the LSTM network effectively mitigates the forgetting and vanishing-gradient problems of the traditional RNN. Besides, LSTM can learn to make a one-shot multi-step prediction, which is useful for predicting time series. An LSTM unit combines an input gate, a forget gate, an output gate, and a cell state (Figure 1). The forget gate determines which messages pass through the cell; the input gate then decides how many new messages to add to the cell state; finally, the output gate decides the output message.
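The gate computation described above can be made concrete with a minimal numpy forward step for a single LSTM cell; the parameter layout and names below are our own illustration, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One forward step of a standard LSTM cell. W (4n x d), U (4n x n),
    and b (4n,) stack the parameters of the input (i), forget (f),
    candidate (g), and output (o) gates; n is the hidden size."""
    n = h_prev.size
    z = W @ x + U @ h_prev + b      # pre-activations for all four gates
    i = sigmoid(z[0 * n:1 * n])     # input gate: how much new info to add
    f = sigmoid(z[1 * n:2 * n])     # forget gate: what to keep from c_prev
    g = np.tanh(z[2 * n:3 * n])     # candidate cell update
    o = sigmoid(z[3 * n:4 * n])     # output gate: what to expose as h
    c = f * c_prev + i * g          # new cell state
    h = o * np.tanh(c)              # new hidden state
    return h, c
```

A stacked LSTM simply feeds the sequence of hidden states h produced by one such layer as the input sequence x of the next layer.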
FIGURE 1

The structure of long short-term memory (LSTM) as described by Varsamopoulos. Input gate (i_t), input module gate (g_t), forget gate (f_t), and output gate (o_t). b denotes the bias vectors, c_t the cell state, h_t the hidden state, and σ the sigmoid activation function

The original LSTM model comprises a single hidden LSTM layer followed by a standard feedforward output layer. The stacked LSTM extends this model with multiple hidden LSTM layers, where each layer contains multiple memory cells. The stacked hidden layers make the model deeper, more accurately earning the description of a deep learning technique, and it is this depth of neural networks that is credited for the approach's success on various challenging prediction problems. The stacked LSTM is now a stable technique for challenging sequence prediction problems. An LSTM model with multiple LSTM layers is a stacked LSTM architecture (Figure 2). Each LSTM layer provides a sequence output (one output time step for each input time step) rather than a single value to the layer above. Therefore, in this study, the stacked LSTM was selected.
FIGURE 2

The stacked long short‐term memory (LSTM) architecture

For the ARIMA, there are three critical parameters: p (the number of past values used to predict the next value), q (the number of past prediction errors used to predict future values), and d (the order of differencing). ARIMA parameter optimization requires much time; therefore, in this study, the combination (p, d, q) = (5, 1, 0) was selected.
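As a rough sketch of what ARIMA(5, 1, 0) does (an AR(5) model fitted to the first-differenced series, with no moving-average terms), the following minimal numpy implementation forecasts the next five values by least squares. A library implementation such as statsmodels' ARIMA adds an intercept and maximum-likelihood estimation, so function and parameter names here are illustrative only:

```python
import numpy as np

def arima_510_forecast(series, steps=5, p=5):
    """Illustrative ARIMA(p, 1, 0): least-squares AR(p) on first differences,
    recursive multi-step forecast, then undo the differencing."""
    x = np.asarray(series, dtype=float)
    d = np.diff(x)                                   # d = 1: first differencing
    # Design matrix: the p most recent lagged differences for each target.
    X = np.column_stack([d[p - k - 1:len(d) - k - 1] for k in range(p)])
    y = d[p:]
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)      # AR coefficients
    hist = list(d)
    for _ in range(steps):                           # recursive forecasting
        hist.append(float(np.dot(phi, hist[-1:-p - 1:-1])))
    return x[-1] + np.cumsum(hist[-steps:])          # integrate back to levels
```

On a perfectly linear series, the fitted AR coefficients reproduce the constant difference, so the forecast continues the line.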

Model training

The experiment's LSTM model was built on the Keras API package (TensorFlow 2.0) in a Python 3.6 environment (Python Software Foundation, Wilmington, DE). In this study, networks with two LSTM layers were investigated. The loss value was evaluated by the root-mean-square error (RMSE). The activation function was the rectified linear unit (ReLU). A greedy coordinate descent method was employed to find the optimal hyperparameters of the model. The tuning parameters comprised the length of time lags, the optimizer, the learning rate, the number of epochs, the number of hidden units, and the batch size. First, we sought the optimal length of time lags with the optimizer set to Adam, the learning rate to 0.01, the number of epochs to 150, the number of hidden units to 50, and the batch size to 32. Subsequently, we determined the type of optimizer with the optimal length of time lags. Next, the appropriate learning rate was determined by comparing results from various learning rates. Then, we sought the optimal number of epochs and hidden units in turn. Lastly, a similar comparison was performed to determine the optimal batch size; the batch size was adjusted to avoid errors from memory shortage. By testing different combinations of parameter values, the model suitable for the data was finally found. Hyperparameter selection and optimization play an important role in obtaining superior accuracy with the LSTM network. The validation set's mean absolute error (MAE) was used to evaluate the model's performance for each parameter combination. The investigated hyperparameters and their ranges are listed in Table 1.
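The greedy coordinate descent described above, which sweeps one hyperparameter at a time while holding the rest fixed and keeps the value with the lowest validation MAE, can be sketched generically (all names are illustrative; `evaluate` stands in for a full train-and-validate run):

```python
def greedy_tune(evaluate, grid, init):
    """Greedy coordinate descent over hyperparameters: for each parameter
    in `grid` (in order), try every candidate value with the other
    parameters fixed, keep the value minimizing the validation error
    returned by `evaluate(config)`, then move to the next parameter."""
    config = dict(init)
    for name, values in grid.items():
        config[name] = min(values, key=lambda v: evaluate({**config, name: v}))
    return config
```

Note that this one-pass sweep is cheap but, as the Discussion acknowledges, can miss optima when hyperparameters interact, since each parameter is tuned only once against the values fixed so far.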
TABLE 1

The summary of long short‐term memory (LSTM) hyperparameters investigated in this study, and the recommended configurations, and the impact level of each parameter

Hyperparameters | Range | Recommended configuration | Impact
Length of time lags | (1, 5, 10, 15, 20, 30, 50) | 15 | Middle
Optimizer | {Adam, RMSprop, Adagrad, Adadelta} | Adam | High
Learning rate | (0.0001, 0.001, 0.01, 0.1) | 0.001 | High
Number of epochs | (50, 100, 150, 200, 250, 300) | 300 | Low
Number of hidden units per layer | (10, 30, 50, 70, 100) | 50 | Middle
Batch size | (8, 16, 32, 64, 128, 256) | 32 | Low

Evaluation of predictive accuracy

To evaluate the error between the predicted and observed values in the testing set, the RMSE, MAE, and coefficient of determination (R²) were selected.
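For reference, the three metrics can be computed directly with numpy; these are the standard definitions (the paper does not publish its evaluation code):

```python
import numpy as np

def mae(y, yhat):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y, float) - np.asarray(yhat, float))))

def rmse(y, yhat):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((np.asarray(y, float) - np.asarray(yhat, float)) ** 2)))

def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    ss_res = np.sum((y - yhat) ** 2)            # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)      # total sum of squares
    return float(1.0 - ss_res / ss_tot)
```

MAE and RMSE measure absolute error in the units of the QC item (so lower is better), while R² measures the fraction of variance explained (so higher is better), which is why the tables report both.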

The trend lines

The trend lines were used to analyze the trend of the Linac operating status and thereby help medical physicists decide whether to take preventive actions. The stacked LSTM model was applied to predict the next 5-day record of 24 QC items in this study. The trend lines were plotted by a polynomial fit through the five-step-ahead predictive values. If the trend line exceeds the tolerance value of the QC item, preventive measures need to be taken.
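A minimal sketch of this trend-line check, a polynomial fit through the five-step-ahead predictions extrapolated and compared against the QC tolerance, could look like this; the first-order fit and the look-ahead horizon are assumptions, since the paper does not state the polynomial order:

```python
import numpy as np

def trend_exceeds_tolerance(preds, tolerance, days_ahead=5):
    """Fit a linear trend line through the multi-step predictions and flag
    whether the extrapolated trend crosses the (symmetric) QC tolerance
    within `days_ahead` further days."""
    t = np.arange(len(preds))
    slope, intercept = np.polyfit(t, preds, 1)            # trend line
    future = slope * (len(preds) - 1 + days_ahead) + intercept
    return bool(abs(future) > tolerance)
```

If the flag is raised, the physicist can schedule preventive maintenance before the item actually drifts out of tolerance.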

Effectiveness evaluation

To evaluate the performance of the stacked LSTM model, the record of daily QC items was investigated for another radiotherapy machine. The evaluation data set was collected from a Novalis Tx (Varian, Palo Alto, CA) machine between May 2020 and November 2021; a total of 415 records were collected. On every workday, the daily QC items were measured by the technician using the QUICKCHECKwebline (PTW, Freiburg, Germany) phantom. The records of four QC items for the 6 MV X-ray beam were selected: output constancy, beam symmetry along the gun-target (GT) and left-right (LR) directions, and the beam quality factor (BQF). These records were input into both the stacked LSTM model, without changing any of the hyperparameters chosen in Section 2.3.2, and the ARIMA model. Predictive accuracy was then evaluated by the RMSE, MAE, and R² between the predicted and observed values. Finally, the predicted and observed records of the four QC items based on Quickcheck were plotted for comparison. These steps aimed to assess the accuracy of the prediction and the robustness of the stacked LSTM model.

RESULTS

Hyperparameter tuning in LSTM

Figure 3 shows the MAE (in a relative unit) as a function of time lags, optimizers, learning rates, epochs, hidden units, and batch sizes. The optimal hyperparameter values are summarized in Table 1. Among them, the learning rate had the greatest impact on the model: the best performance was obtained with a learning rate of 0.001 and the worst with 0.1, causing up to a 0.039 difference in relative MAE. The type of optimizer had the second greatest impact on the model. In comparison, the length of time lags and the number of hidden units demonstrated only a modest impact on the model's predictive performance. Finally, the number of epochs and the batch size showed little impact on the predictive accuracy.
FIGURE 3

The mean absolute error (MAE) of predicted data (mean value in green, 95% confidence interval (CI) in blue) with different values of (a) the length of time lags, (b) the optimizers, (c) the learning rates, (d) the number of epochs, (e) the number of hidden units, and (f) the batch sizes


Predictive performance evaluation

Table 2 shows the performance of the stacked LSTM model with the optimal hyperparameters and of ARIMA in predicting daily QC items based on MPC. The mean MAE, RMSE, and R² with the 24 MPC items were 0.013, 0.020, and 0.853 in LSTM, versus 0.021, 0.030, and 0.618 in ARIMA, respectively. LSTM performed better than ARIMA in 23 MPC items, with smaller MAE, smaller RMSE, and higher R², except for gantry relative (LSTM: MAE = 0.006, RMSE = 0.007, and R² = 0.095; ARIMA: MAE = 0.004, RMSE = 0.006, and R² = 0.383). The best predictive performance of LSTM was for couch rotation (MAE = 0.001, RMSE = 0.004, and R² = 0.975), and the worst was for gantry relative (MAE = 0.006, RMSE = 0.007, and R² = 0.095). Additionally, Figure 4 compares model performance in terms of the coefficient of determination (R²): the R² value of LSTM is higher than that of ARIMA for every item except gantry relative. In general, the stacked LSTM outperformed the ARIMA.
TABLE 2

The accuracy of the stacked long short‐term memory (LSTM) and autoregressive integrated moving average model (ARIMA) model for 24 quality control (QC) items based on machine performance check (MPC)

Categories | QC items | MAE (LSTM) | MAE (ARIMA) | RMSE (LSTM) | RMSE (ARIMA) | R² (LSTM) | R² (ARIMA)
Isocenter | Size (mm) | 0.002 | 0.006 | 0.003 | 0.008 | 0.915 | 0.399
Isocenter | MV imager projection offset (mm) | 0.010 | 0.013 | 0.020 | 0.020 | 0.906 | 0.902
Isocenter | KV imager projection offset (mm) | 0.013 | 0.015 | 0.018 | 0.024 | 0.915 | 0.859
Collimation | Max offset leaves A (mm) | 0.010 | 0.009 | 0.012 | 0.013 | 0.906 | 0.901
Collimation | Max offset leaves B (mm) | 0.008 | 0.008 | 0.010 | 0.012 | 0.888 | 0.860
Collimation | Mean offset leaves A (mm) | 0.007 | 0.009 | 0.010 | 0.013 | 0.912 | 0.849
Collimation | Mean offset leaves B (mm) | 0.006 | 0.008 | 0.008 | 0.011 | 0.894 | 0.823
Collimation | Jaw X1 (mm) | 0.012 | 0.013 | 0.018 | 0.019 | 0.872 | 0.847
Collimation | Jaw X2 (mm) | 0.011 | 0.013 | 0.019 | 0.019 | 0.828 | 0.823
Collimation | Jaw Y1 (mm) | 0.017 | 0.044 | 0.022 | 0.062 | 0.931 | 0.465
Collimation | Jaw Y2 (mm) | 0.017 | 0.043 | 0.022 | 0.057 | 0.935 | 0.557
Collimation | Rotation offset (°) | 0.030 | 0.047 | 0.039 | 0.064 | 0.732 | 0.278
Gantry | Absolute (°) | 0.003 | 0.006 | 0.005 | 0.008 | 0.808 | 0.462
Gantry | Relative (°) | 0.006 | 0.004 | 0.007 | 0.006 | 0.095 | 0.383
Couch | Lateral (mm) | 0.005 | 0.012 | 0.006 | 0.016 | 0.918 | 0.393
Couch | Longitudinal (mm) | 0.004 | 0.010 | 0.006 | 0.014 | 0.947 | 0.658
Couch | Pitch (°) | 0.001 | 0.002 | 0.002 | 0.003 | 0.875 | 0.514
Couch | Roll (°) | 0.002 | 0.004 | 0.002 | 0.005 | 0.877 | 0.406
Couch | Rotation (°) | 0.001 | 0.002 | 0.001 | 0.004 | 0.975 | 0.436
Couch | Vertical (mm) | 0.008 | 0.016 | 0.011 | 0.021 | 0.870 | 0.520
Couch | Rotation-induced couch shift (mm) | 0.008 | 0.017 | 0.011 | 0.024 | 0.909 | 0.565
Beam | Center shift (mm) | 0.014 | 0.027 | 0.021 | 0.037 | 0.874 | 0.611
Beam | Beam output change (%) | 0.069 | 0.098 | 0.120 | 0.139 | 0.931 | 0.907
Beam | Uniformity change (%) | 0.054 | 0.082 | 0.078 | 0.129 | 0.765 | 0.412
Mean | | 0.013 | 0.021 | 0.020 | 0.030 | 0.853 | 0.618

Abbreviations: MAE, mean absolute error; RMSE, root‐mean‐square error.

FIGURE 4

Comparison graph of model performance for 24 quality control (QC) items based on machine performance check (MPC) in the coefficient of determination (R²). The purple line denotes long short-term memory (LSTM), and the blue line denotes the autoregressive integrated moving average model (ARIMA)

Figure 5 depicts three representative cases (beam center shift, beam output change, and beam uniformity change) of the observed versus the predicted curves using the stacked LSTM model with the optimal hyperparameter combination on the testing data.
FIGURE 5

Comparison of predicted and observed beam quality control (QC) records, including (a) beam center shift, (b) beam output change, and (c) beam uniformity change using the stacked long short‐term memory (LSTM) model with the optimal hyperparameters in testing data

The weekly trend line for the beam is shown in Figure 6. All predictive values were within the tolerance. The beam center shift trended downward but remained at normal levels, while the beam output change and beam uniformity change trended upward but stayed within the normal range. This provided the opportunity to adjust the machine in advance.
FIGURE 6

An example of the trend line to detect (a) the beam center shift, (b) beam output change, and (c) beam uniformity change


Validation of effectiveness

Table 3 shows the performance of the stacked LSTM model (without changing any hyperparameters) and of ARIMA in predicting four QC items based on Quickcheck. The mean MAE, RMSE, and R² with the four QC items were 0.102, 0.151, and 0.770 in LSTM, versus 0.162, 0.375, and 0.550 in ARIMA, respectively.
TABLE 3

The accuracy of the stacked long short‐term memory (LSTM) and autoregressive integrated moving average model (ARIMA) model for four quality control (QC) items based on Quickcheck

Categories | QC items | MAE (LSTM) | MAE (ARIMA) | RMSE (LSTM) | RMSE (ARIMA) | R² (LSTM) | R² (ARIMA)
Beam | Output dose | 0.231 | 0.458 | 0.373 | 1.223 | 0.741 | 0.309
Beam | Symmetry GT | 0.095 | 0.096 | 0.121 | 0.145 | 0.792 | 0.691
Beam | Symmetry LR | 0.072 | 0.075 | 0.094 | 0.103 | 0.704 | 0.641
Beam | Beam quality factor | 0.011 | 0.020 | 0.017 | 0.028 | 0.845 | 0.561
Mean | | 0.102 | 0.162 | 0.151 | 0.375 | 0.770 | 0.550

Abbreviations: GT, gun target direction; LR, left‐right direction; MAE, mean absolute error; RMSE, root‐mean‐square error.

The R² value of LSTM is higher than that of ARIMA for all four QC items in Figure 7. Figure 8 depicts the observed versus the predicted curves for the four QC items (output dose, symmetry GT, symmetry LR, and beam quality factor) using the stacked LSTM model without changing any hyperparameters, based on Quickcheck.
FIGURE 7

Comparison graph of model performance for four quality control (QC) items based on Quickcheck in the coefficient of determination (R²). The purple line denotes long short-term memory (LSTM), and the blue line denotes the autoregressive integrated moving average model (ARIMA)

FIGURE 8

Comparison of predicted and observed record of four quality control (QC) items based on Quickcheck, including (a) output dose, (b) symmetry gun target direction (GT), (c) symmetry left‐right direction (LR), and (d) beam quality factor using the stacked long short‐term memory (LSTM) model without changing any hyperparameters


DISCUSSION

This study demonstrates the need to tune the hyperparameters of a deep LSTM model for daily QC items to obtain good predictive results. The learning rate determines how fast the neural network learns. The challenge of using a fixed learning rate is that it must be defined in advance and depends heavily on the type of model and problem. Adaptive gradient descent algorithms (Adagrad, Adadelta, RMSprop, Adam) provide a heuristic approach without requiring expensive manual tuning of the learning rate. According to the MAE values (Figure 3), the Adam optimizer with a learning rate of 0.001 is recommended for the stacked LSTM model. Besides, when adjusting the length of time lags, the LSTM prediction can be delayed (Figure 9): the R² of the beam center shift is 0.603 with a time lag length of 1, versus 0.874 with a time lag length of 15. Lag observations of a univariate time series can be used as time lags for an LSTM model, which can improve forecast performance.
FIGURE 9

The predictive performance of the beam center shift based on machine performance check (MPC) with (a) the length of time lags = 15 and (b) the length of time lags = 1 in the stacked long short-term memory (LSTM) model

To the best of our knowledge, this is the first study to implement a stacked LSTM model for daily QC record prediction, and one of the first attempts to develop and evaluate a single generic stacked LSTM model. In contrast to earlier studies that only examined the power of ANNs, the stacked LSTM model allows connections through time and provides a way to feed the hidden states from previous steps (long-term and short-term) as additional inputs to the next stage. The stacked LSTM is effective at predicting the daily MPC record. However, the generic stacked LSTM performs poorly in predicting the record of the gantry relative with two-times cubic interpolation: in Figure 10a, the predicted range is significantly shifted up and slightly delayed. Following the study of Wang et al. on why ARIMA and SARIMA are not sufficient, we surmise that LSTM predictive performance is related to the signal frequency. Interpolation reduces high-frequency signals and can greatly improve the predictive ability of the stacked LSTM model. Therefore, we tried four-times and six-times cubic interpolation in the stacked LSTM model, which significantly improved the accuracy (Figure 10b,c). The predictive performance of the gantry relative with six-times cubic interpolation is excellent (R² = 0.978) in the stacked LSTM model.
FIGURE 10

The predictive performance of the gantry relative with (a) two-times cubic interpolation, (b) four-times cubic interpolation, and (c) six-times cubic interpolation in the stacked long short-term memory (LSTM) model

For all daily MPC tests, the predicted record lies within the clinical tolerances (AAPM TG-142), providing a window of opportunity to prevent performance issues in advance. However, in practice, besides keeping parameters within tolerance, a clinical physicist should monitor trends in machine performance to know when the Linac needs maintenance, thereby reducing the chance of Linac downtime. Here, a five-step-ahead prediction is appropriate to provide trends in Linac status. If the data point is within the tolerance, the newly entered data can simply be monitored. To illustrate the robustness of the LSTM model, the record of four QC items based on Quickcheck from another Varian Linac was applied. LSTM performed better than ARIMA on all four QC items, with smaller MAE, smaller RMSE, and higher R² (Table 3). This indicates that users do not have to re-optimize these parameters for each machine and that the model is reasonably robust; the optimal hyperparameters found here are recommended when applying the stacked LSTM model to another Linac. For clinical routine, it is unnecessary to retrain the neural network each day with the newly acquired QC record. If the stacked LSTM model works, it will be a great tool for medical physicists in charge of a Linac's routine QC. It is possible that the model (LSTM) is overfitted, resulting in better performance compared to the reference model (ARIMA). However, some limitations exist in this study. Firstly, some hyperparameters correlate with each other and can yield different performance when optimized simultaneously rather than sequentially.
Secondly, because the prediction models are based on large learning-phase data sets, they are not designed to detect large sudden one-off jumps in the data, such as might be expected with a Linac component failure. Predictive QC is better suited to detecting and predicting gradual drifts and failures that repeat at regular intervals. The present results suggest that the approach of predictive QC based on MPC tests is feasible. Moreover, the stacked LSTM model is robust when applied to another radiotherapy machine with four QC items based on Quickcheck. In future work, QC items from other types of radiotherapy machines will be applied to the stacked LSTM model, which should be fine-tuned to obtain better predictive performance.

CONCLUSIONS

This study developed and evaluated a generalized stacked LSTM model for daily QC prediction. The stacked LSTM model remains robust when applied to another radiotherapy machine. The model outperforms the ARIMA model and can reduce unscheduled Linac downtime while allowing Linac performance parameters to be kept within tolerances in the clinic.

CONFLICT OF INTEREST

The authors have no conflict of interest to disclose.

AUTHOR CONTRIBUTIONS

Study conception, design, data acquisition, and wrote paper: Min Ma. Drafted the manuscript: Jianrong Dai, Chenbin Liu, Ran Wei, and Bin Liang. Supervised the study: Jianrong Dai. Critical revision of the manuscript for important intellectual content: Min Ma, Chenbin Liu, Ran Wei, Bin Liang, and Jianrong Dai.
References: 16 in total

1.  Learning to forget: continual prediction with LSTM.

Authors:  F A Gers; J Schmidhuber; F Cummins
Journal:  Neural Comput       Date:  2000-10       Impact factor: 2.026

2.  Task Group 142 report: quality assurance of medical accelerators.

Authors:  Eric E Klein; Joseph Hanley; John Bayouth; Fang-Fang Yin; William Simon; Sean Dresser; Christopher Serago; Francisco Aguirre; Lijun Ma; Bijan Arjomandy; Chihray Liu; Carlos Sandin; Todd Holmes
Journal:  Med Phys       Date:  2009-09       Impact factor: 4.071

3.  A Review of Recurrent Neural Networks: LSTM Cells and Network Architectures.

Authors:  Yong Yu; Xiaosheng Si; Changhua Hu; Jianxun Zhang
Journal:  Neural Comput       Date:  2019-05-21       Impact factor: 2.026

4.  LSTM: A Search Space Odyssey.

Authors:  Klaus Greff; Rupesh K Srivastava; Jan Koutnik; Bas R Steunebrink; Jurgen Schmidhuber
Journal:  IEEE Trans Neural Netw Learn Syst       Date:  2016-07-08       Impact factor: 10.451

5.  Predictive time-series modeling using artificial neural networks for Linac beam symmetry: an empirical study.

Authors:  Qiongge Li; Maria F Chan
Journal:  Ann N Y Acad Sci       Date:  2016-09-14       Impact factor: 5.691

6.  Evaluation of the Machine Performance Check application for TrueBeam Linac.

Authors:  Alessandro Clivio; Eugenio Vanetti; Steven Rose; Giorgia Nicolini; Maria F Belosi; Luca Cozzi; Christof Baltes; Antonella Fogliata
Journal:  Radiat Oncol       Date:  2015-04-21       Impact factor: 3.481

7.  Evaluation of the TrueBeam machine performance check (MPC) beam constancy checks for flattened and flattening filter-free (FFF) photon beams.

Authors:  Michael P Barnes; Peter B Greer
Journal:  J Appl Clin Med Phys       Date:  2016-11-30       Impact factor: 2.102

8.  Evaluation of the truebeam machine performance check (MPC): mechanical and collimation checks.

Authors:  Michael P Barnes; Peter B Greer
Journal:  J Appl Clin Med Phys       Date:  2017-04-17       Impact factor: 2.102

9.  Evaluation of the truebeam machine performance check (MPC): OBI X-ray tube alignment procedure.

Authors:  Michael P Barnes; Dennis Pomare; Frederick W Menk; Buiron Moraro; Peter B Greer
Journal:  J Appl Clin Med Phys       Date:  2018-09-03       Impact factor: 2.102

10.  Predicting machine's performance record using the stacked long short-term memory (LSTM) neural networks.

Authors:  Min Ma; Chenbin Liu; Ran Wei; Bin Liang; Jianrong Dai
Journal:  J Appl Clin Med Phys       Date:  2022-02-16       Impact factor: 2.102
