
Interinstitutional beam model portability study in a mixed vendor environment.

Sean P Frigo1, Jared Ohrt2, Yelin Suh2, Peter Balter2.   

Abstract

A 6 MV flattened beam model for a Varian TrueBeamSTx c-arm treatment delivery system in RayStation, developed and validated at one institution, was implemented and validated at another institution. The only parameter value adjustments were to accommodate machine output at the second institution. Validation followed MPPG 5.a. recommendations, with particular attention paid to IMRT and VMAT deliveries. With this minimal adjustment, the model passed validation across a broad spectrum of treatment plans, measurement devices, and staff who created the test plans and executed the measurements. This work demonstrates the possibility of using a single template model in the same treatment planning system with matched machines in a mixed vendor environment.
© 2021 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, LLC on behalf of The American Association of Physicists in Medicine.

Keywords:  RayStation; TrueBeam; beam model; commissioning; parameter value optimization; portability; validation

Year:  2021        PMID: 34643323      PMCID: PMC8664150          DOI: 10.1002/acm2.13445

Source DB:  PubMed          Journal:  J Appl Clin Med Phys        ISSN: 1526-9914            Impact factor:   2.102


INTRODUCTION

A mixed vendor environment (MVE) for radiation therapy provides the potential to combine best‐in‐class tools while meeting each institution's technology preferences and needs. This environment comprises an image acquisition system (IAS, simulator), treatment planning system (TPS), treatment management system (TMS), and treatment delivery system (TDS, Linac), all working together. To cover a broad range of delivery platform technologies, TPS vendors must support generalized machines, for example, a c‐arm TDS. Without direct access to a specific TDS technology and its specifications, they must code to a generalized interface, and so an MVE comes at a cost in terms of potential integration challenges and added validation burdens. The TPS and TDS are developed and validated independently of one another by the different vendors. An example of this situation is the RayStation TPS paired with a Varian TrueBeamSTx TDS. There is a broad spectrum of reported RayStation machine model parameter values in use. This lack of consensus reflects a number of things. First, the software, compared to other broadly used TPSs, is relatively new. Second, the nascent user community has a correspondingly smaller collective knowledge and experience. Third, there has been variation in the interpretation of the multi‐leaf collimator (MLC) model parameter values as well as variation in input measured data. Though use has increased, and a number of RayStation machine models have been published, the results vary, consistent with a study by the Imaging and Radiation Oncology Core (IROC). Consequently, there still is not a readily available optimized RayStation model template for matched machines, although Hansen and Frigo demonstrated that this should be feasible. Building a fully validated machine model with a new TPS from scratch is a formidable task, involving data collection, parameter value optimization, and dosimetric validation.
The Medical Physics Practice Guideline for Commissioning and QA of Treatment Planning Dose Calculations (MPPG 5.a.) cites reasonable time estimates of 2–4 weeks to commission a single energy photon beam, assuming 12–16 h per day of 1.5–2.0 full‐time‐equivalent qualified medical physicist effort. First, the physicist must learn about the approximations and assumptions in the software. This is a challenge, as many clinical physicists do not have the time and background to achieve the intimate understanding needed to optimize model parameter values, especially for dynamic MLC beams. Second, the amount of work involved in creating a broad spectrum of test beams that cover a clinic's treatment approaches, executing measurements with those plans, and analyzing the results is significant. Much needs to be done outside of the TPS, using third‐party or home‐built tools. Compounding this effort are variations in measurement equipment as well as in its use. Errors in model construction are well documented. Jacqmin et al. presented an example implementation of the MPPG 5.a. guidance for two different TPSs (Pinnacle and Eclipse). Each TPS was tested in the context of a single institution, with a focus on MPPG 5.a. implementation aspects, including analysis tools. Model performance across matched TDSs and detailed intensity modulated radiation therapy (IMRT)/volume modulated arc therapy (VMAT) results across a broad spectrum of treatment plans fell outside that work's primary aim. To our knowledge, there are no comprehensive MPPG 5.a. photon validation studies that span multiple institutions for the same TPS machine model in a mixed‐vendor environment of the respective systems (IAS, TPS, TMS, and TDS). Single vendor environments (SVEs) help address integration and validation challenges by providing bundled solutions.
To bring some of the advantages of an SVE into an MVE, we present the results of a broad MPPG 5.a. validation of a single machine model, demonstrating MVE model portability for the first time. This was performed at two completely independent institutions, using different types of measurement equipment, multiple personnel, and multiple TDS instances that all meet a common vendor‐defined machine performance specification. As designs have evolved, and with advances in manufacturing processes and technology, modern TDSs now exhibit a consistent standard of performance. This makes it feasible to establish conformance to a single beam performance specification for each beam energy/modality. This has enabled the results of this work, which demonstrate the potential to use an unmodified RayStation machine model with any appropriately matched machine, without any need for further model parameter value optimization. Under these circumstances, the physicist can proceed directly to end‐to‐end validation testing. A type‐tested MVE template model needing only validation, heretofore available only in SVEs, is a benefit to the community.

METHODS

Treatment delivery systems

This study focuses on a single TDS class, the Varian TrueBeam with a high‐definition multi‐leaf collimator (TBSTx) (Varian Medical Systems, Palo Alto, CA, USA). Institution A has one TBSTx (Linac A1), and Institution B has two (Linac B1 and Linac B2). At each institution, the Linacs were demonstrated to pass standard vendor acceptance testing procedures and met the same performance specifications, including the vendor's Enhanced Beam Conformance specifications. In addition, standard beam commissioning data, including output factors, percent depth‐dose curves, and profiles, were compared to data in the literature from other institutions.

Treatment planning systems

Both institutions used the RayStation (RaySearch Laboratories, Stockholm, Sweden) TPS for test plan creation and dose calculation, using a collapsed‐cone convolution dose calculation algorithm. Institution A used version 7.0.0.19 (RayStation 7) with the CCDose 3.5 dose engine, and Institution B used version 8.0.1.10 (RayStation 8A SP1) with the CCDose 4.1 dose engine. Between these two versions, there were two updates to the CCDose algorithm. The first update had no effect on the TBSTx Linac class, but it did require the beam model to be recommissioned in the software when upgrading. The second update affected the dynamic MLC (DMLC) fluence calculation and fixed an issue with rotated collimators and asymmetric primary sources; it is characterized as minor and does not require the beam model to be recommissioned when upgrading. For Institution A, a TBSTx class machine was defined in the TPS, using vendor specifications for all mechanical properties. Dose engine (model) parameter values were determined by a three‐step process. First, non‐MLC parameter values were optimized against jaw‐defined beam measurement data using the modeling tools within the RayPhysics module of the TPS. Second, the MLC parameter values were initialized using values from ray tracing performed outside of the TPS. Third, the MLC parameter values were further optimized to obtain the best agreement with ion chamber measurements of VMAT deliveries. The fluence parameter values of the model are summarized in Table 1.
TABLE 1

Institution A fluence parameter value summary

Parameter                          Value
Primary Source X-width (cm)        0.060
Primary Source Y-width (cm)        0.045
MLC X Offset (cm)                  0.013
MLC X Gain                         0.000
MLC X Curvature (cm⁻¹)             0.000
MLC Leaf Tip Width (cm)            0.250
MLC Transmission                   0.0115
MLC Tongue-and-Groove Width (cm)   0.040
The Institution A model was then validated using AAPM Medical Physics Practice Guideline 5.a. for static, step‐and‐shoot (SAS) IMRT, and VMAT delivery techniques, following the formalism of Jacqmin et al. The resulting TBSTx machine with Institution A parameter values was then considered the candidate base model for portability testing. A copy of the Institution A model was provided to Institution B, which verified that all nondosimetric parameters were valid for its TBSTx machines. To ensure that the model would be representative of Institution B's clinical standards and measured data, all of Institution A's measured beam data were removed and replaced with Institution B's. This entailed all percent depth dose, profile, and output factor entries. Institution A's 6 MV absolute calibration coefficient value of 0.664 cGy/MU was changed to match Institution B's in‐house absolute dose specification of 0.667 cGy/MU, and the Institution A Dose Normalization factor of 3.8338 was changed to 3.8541 at Institution B to accommodate the slightly different calibration coefficient value. Because Institution B's measured output factors differed slightly from Institution A's, it was decided to recompute the output factor corrections (OFCs). These output‐related changes were the only site‐specific updates to the RayStation machine. The different values and their ratios are summarized in Tables 2 and 3.
TABLE 2

Output factor values

Field size (cm2)   Institution A   Institution B   A/B ratio
1x1                                0.7056
2x2                                0.7902
3x3                0.8336          0.8300          1.0043
4x4                                0.8640
5x5                0.8962          0.8950          1.0013
6x6                                0.9210
8x8                0.9662          0.9660          1.0002
10x10              1.0000          1.0000
12x12                              1.0280
15x15              1.0588          1.0610          0.9979
20x20              1.0995          1.1010          0.9986
25x25                              1.1300
30x30              1.1511          1.1540          0.9975
35x35                              1.1740
40x40              1.1729          1.1890          0.9865
TABLE 3

Output factor correction values

Field size (cm2)   Institution A   Institution B   A/B ratio
1x1                                0.9879
2x2                                0.9894
3x3                1.0010          0.9974          1.0036
4x4                                1.0006
5x5                1.0037          1.0010          1.0027
6x6                                1.0027
8x8                1.0042          1.0016          1.0026
10x10              1.0000          1.0000          1.0000
12x12                              0.9973
15x15              0.9979          1.0010          0.9969
20x20              1.0050          1.0042          1.0007
25x25                              1.0097
30x30              1.0107          1.0136
35x35                              1.0187
40x40              1.0078          1.0252          0.9830
The beam profile and depth curves were recomputed in the TPS physics module and compared with Institution B's measured data as an initial validation step. Institution B then independently validated the model using MPPG 5.a. The testing included all model performance specifications for static, SAS, and VMAT delivery techniques.
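The A/B ratios reported in Tables 2 and 3 are simple quotients of the two institutional values. As a quick arithmetic check, a short script (values transcribed from Table 2; the variable names are illustrative) reproduces the published ratios:

```python
# Selected (field size, Institution A, Institution B, published A/B ratio)
# rows transcribed from Table 2.
table2_rows = [
    ("3x3",   0.8336, 0.8300, 1.0043),
    ("5x5",   0.8962, 0.8950, 1.0013),
    ("8x8",   0.9662, 0.9660, 1.0002),
    ("15x15", 1.0588, 1.0610, 0.9979),
    ("20x20", 1.0995, 1.1010, 0.9986),
    ("30x30", 1.1511, 1.1540, 0.9975),
    ("40x40", 1.1729, 1.1890, 0.9865),
]

for size, of_a, of_b, published in table2_rows:
    # The tabulated ratio is A/B rounded to four decimal places.
    assert round(of_a / of_b, 4) == published, size
```

The same check applies to the OFC ratios in Table 3; only the 40 × 40 cm2 entries differ by more than 0.5%, matching the observation made in the Discussion.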

Treatment management systems

Each institution employed a different TMS. Institution A utilized ARIA 13.6 (Varian Medical Systems, Palo Alto, CA, USA), while Institution B used Mosaiq 2.65 (Elekta, Stockholm, Sweden). All TPS plan data were exported from each institution's RayStation instance to its TMS and then to the respective TDS for delivery. The integrity of the data chain was validated by standard end‐to‐end testing procedures.

Dose validation

To meet the testing recommendations in MPPG 5.a., both institutions used commercially available measurement systems for IMRT/VMAT deliveries. In total, four different devices were employed, each of a different design, two at Institution A and two at Institution B. All were calibrated per vendor procedures to produce absolute dose readings, using independent ADCL‐calibrated ion chambers at each institution. Institution A employed a basic VMAT test plan suite comprised of four geometrically based TG‐119 plans and three anatomically based patient care plans. A second, broader suite of 24 plans was created from earlier clinically delivered plans using institutional protocols and optimization techniques, designed to span the spectrum of potential treatment scenarios. Beam sets derived from anatomically based geometries were limited to targets 3 cm in diameter (15 cm3 volume) or larger; these tests did not consider smaller targets, for example, those treated with stereotactic radiosurgery (SRS) delivery techniques. A Tomo “Cheese” phantom (Accuray, Sunnyvale, CA, USA) with an array of six A1SL (Standard Imaging, Middleton, WI, USA) ion chambers was used at Institution A to measure the basic VMAT plans. All detectors were located in the low‐gradient, high‐dose regions within target volumes. The estimated uncertainty in the A1SL dose values was 1%. All measurements were corrected for machine output. A local dose percent difference (PD) was calculated between the derived A1SL ion chamber dose measurement (M) and the corresponding ROI average dose (C) in the RayStation TPS, defined as 100 × (M − C) / M. Institution A also utilized a Delta4‐Plus diode array (Scandidos, Uppsala, Sweden) for the broader suite of test plans. Both 3D gamma and median dose difference (MDD) analyses were performed for each measurement.
The MDD is defined as the median of the individual percent differences, 100 × (M − C) / M, over the distribution of N measured (M) and calculated (C) dose pairs, expressed as a percentage. Gamma analyses utilized both the current clinical levels of 3% global percent difference (GPD), 3 mm distance to agreement (DTA), and 20% dose threshold (DT), as well as tighter levels of 2% local percent difference (LPD), 2 mm DTA, and 20% DT. The Delta4 absolute dose measurement error is estimated to be 1%. All Delta4 measurements were corrected for machine output. Institution B measured a cohort of 60 clinically based plans representing the range of deliveries on their two TBSTx machines. These plans include SAS IMRT and VMAT, cover multiple treatment sites, and are generally organized into three categories: stereotactic spinal radiosurgery (SSRS), stereotactic body radiotherapy (SBRT), and nonstereotactic. An Octavius 4D ion chamber array (PTW, Freiburg, Germany) was used for SSRS deliveries. A gamma analysis was performed using clinical criteria (4% LPD, 2.5 mm DTA, and 30% DT), as well as more stringent ones (2% LPD, 2 mm DTA, and 30% DT). In addition to the gamma analysis, the MDD as defined above was also recorded. All measurements were corrected for machine output. An ArcCHECK (Sun Nuclear, Melbourne, FL, USA) diode array was used for all other cases (SBRT and nonstereotactic). Clinical analysis criteria (3% GPD, 3 mm DTA, and 10% DT) were used when comparing the measured dose at the detectors with that calculated by the TPS. The ArcCHECK software does not report an MDD, and it was not possible to reanalyze the dose distributions with stricter gamma criteria (2% LPD, 2 mm DTA, and 10% DT). All measurements were corrected for machine output. Both institutions currently employ more relaxed gamma criteria than recommended in the TG‐218 report (3% GPD, 2 mm DTA, and 10% DT), hence the inclusion of analyses using stricter criteria when possible (2% LPD, 2 mm DTA, and 20% DT).
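The dose-comparison metrics above are straightforward to state in code. The sketch below implements the PD and MDD as defined in the text, plus a minimal 1D gamma passing rate in the usual Low et al. formalism (local versus global dose normalization, DTA, and low-dose threshold). It is an illustrative simplification with hypothetical function names; the commercial devices evaluate gamma in 2D/3D with their own interpolation schemes.

```python
import numpy as np

def percent_difference(m, c):
    """Local percent difference as defined in the text: 100 * (M - C) / M."""
    return 100.0 * (m - c) / m

def median_dose_difference(measured, calculated):
    """MDD: the median of the per-detector percent differences, in percent."""
    m = np.asarray(measured, float)
    c = np.asarray(calculated, float)
    return float(np.median(percent_difference(m, c)))

def gamma_pass_rate(measured, calculated, positions_mm,
                    dose_pct=3.0, dta_mm=3.0, threshold_pct=20.0, local=False):
    """Minimal 1D gamma passing rate: percent of evaluated points with gamma <= 1."""
    m = np.asarray(measured, float)
    c = np.asarray(calculated, float)
    x = np.asarray(positions_mm, float)
    d_max = c.max()
    # Only points above the low-dose threshold are evaluated.
    eval_idx = np.nonzero(m >= threshold_pct / 100.0 * d_max)[0]
    gammas = []
    for i in eval_idx:
        norm = m[i] if local else d_max          # local vs global dose criterion
        dd = (c - m[i]) / (norm * dose_pct / 100.0)
        dr = (x - x[i]) / dta_mm
        gammas.append(np.sqrt(dd ** 2 + dr ** 2).min())
    return 100.0 * float(np.mean(np.asarray(gammas) <= 1.0))
```

For identical measured and calculated distributions, the passing rate is 100% and the MDD is zero, which is a useful self-test before comparing real device exports.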

Independent audit

Both institutions participated in an independent audit using the anthropomorphic SBRT Lung phantom provided by the IROC. The phantom contains dosimeters that measure absolute dose at a few points, film that measures relative planar dose, as well as a localization assessment. The phantom was scanned, planned, and treated by each institution being audited and returned to IROC for analysis and comparison with the 3D dose distribution from the TPS using point dose local percent difference (dosimeters) or a 2D gamma analysis (film).

RESULTS

The results are presented by institution, beginning with the validation performed at Institution A. These are followed by the changed model parameter values at Institution B and the subsequent validation results.

Institution A

The results for Institution A are presented in the following subsections. This includes MPPG 5.a. static beams with a 3D tank, and VMAT plans with a Tomo “Cheese” phantom as well as a Delta4 device.

MPPG 5.a. summary

All MPPG 5.a. results are presented in summary form in Table 4. This includes static and dynamic measurements.
TABLE 4

Institution A MPPG 5.a. result summary

Test   Comparison                                                                  Tolerance      Result
5.1    Dose distributions in planning module versus modeling (physics) module      Identical(a)   Pass
5.2    Dose in test plan versus clinical calibration condition(b)                  0.50%          Pass
5.3    Dose distribution calculated in planning system versus commissioning data   2%             Pass
5.4    Small MLC-shaped field (non-SRS)                                            Table(c)       Pass
5.5    Large MLC-shaped field with extensive blocking                              Table(c)       Pass
5.6    Off-axis MLC-shaped field, with maximum allowed leaf overtravel             Table(c)       Pass
5.7    Asymmetric field at minimal anticipated SSD                                 Table(c)       Pass
5.8    Field at oblique incidence (at least 20°)                                   Table(c)       Pass
5.9    Large (>15 cm) field for each nonphysical wedge angle                       Table(c)       Pass
6.1    Reported electron (or mass) densities against known values                                 Pass
6.2    Heterogeneity correction distal to lung tissue                              3%             Pass
7.1    Small field PDD                                                             Table(c)       Pass
7.2    Small MLC-defined field output                                              2%             Pass
7.3    TG-119 IMRT tests                                                           TG-218         Pass
7.4    Clinical tests                                                              TG-218         Pass
7.5    External review                                                             IROC           Pass

(a) Within the expected statistical uncertainty.
(b) TPS absolute dose at reference point.
(c) MPPG 5.a. Table 5.

TABLE 5

Institution A Delta4 diode array anatomical test plan results

                                                                      Gamma pass rate (%), 20% DT
ID  Site name            Volume (cm3)  Diameter (cm)  Technique  MDD (%)   2%L/2 mm   3%G/3 mm
01  Lung                         15.4            3.1  VMAT          −0.1       99.7      100.0
02  Brain                        19.6            3.3  VMAT          −0.6      100.0      100.0
03  Brain                        28.0            3.8  VMAT          −1.9       94.9       99.3
04  Spine                        50.9            4.6  VMAT          −1.7       96.4      100.0
05  Prostate                     51.7            4.6  VMAT          −1.3       96.4      100.0
06  Spine                        54.4            4.7  VMAT          −3.8       84.8       93.7
07  Prostate                     67.1            5.0  VMAT          −1.7       92.6      100.0
08  Headneck                    120.9            6.1  VMAT          −1.5       92.8       99.8
09  Pelvis                      178.3            7.0  VMAT          −1.0       96.3      100.0
10  Pelvis                      188.6            7.1  VMAT          −1.3       97.2      100.0
11  Central cylinder            221.4            7.5  VMAT          −1.9       79.7       99.8
12  Off-axis cylinder           221.5            7.5  VMAT          −1.4       95.4       99.3
13  C-Shape                     274.2            8.1  VMAT          −1.5       91.0       99.6
14  Brain                       331.7            8.6  SAS           −0.8       99.8      100.0
15  Brain                       360.3            8.8  VMAT          −0.3       99.8      100.0
16  Pancreas                    380.1            9.0  VMAT          −1.0       96.2       99.9
17  Headneck                    429.0            9.4  VMAT          −1.2       98.0      100.0
18  Lung                        506.6            9.9  VMAT          −0.2      100.0      100.0
19  Breast                     1288.1           13.5  VMAT          −1.6       65.0       95.5
20  Brain                      1297.6           13.5  SAS            0.6       96.7       99.7
21  Breast                     1504.7           14.2  VMAT          −0.9       92.5      100.0
22  Central cylinder           1621.7           14.6  VMAT          −1.4       93.5      100.0
23  Pelvis                     2655.2           17.2  VMAT          −0.2       97.3       99.0
24  Pelvis                     2815.1           17.5  VMAT           0.1       97.8      100.0
25  Breast             No PTV structure               SAS           −1.6       65.0       95.5
    Average                                                         −1.1       92.8       99.2
    StDev                                                            0.9        9.6        1.7
    Min                                                             −3.8       65.0       93.7
    Max                                                              0.6      100.0      100.0
    Spread                                                           4.4       35.0        6.3

Note: Plan ID 25 is a field‐in‐field tangent plan with no target volume. SAS is step‐and‐shoot IMRT delivery.
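The summary rows of Table 5 follow directly from the per-plan MDD column. A quick check (MDD values transcribed from the table; the sample standard deviation is assumed, since the source does not state which estimator was used):

```python
import statistics

# Median dose differences (%) for the 25 Delta4 plans in Table 5.
mdd = [-0.1, -0.6, -1.9, -1.7, -1.3, -3.8, -1.7, -1.5, -1.0, -1.3,
       -1.9, -1.4, -1.5, -0.8, -0.3, -1.0, -1.2, -0.2, -1.6, 0.6,
       -0.9, -1.4, -0.2, 0.1, -1.6]

average = round(statistics.mean(mdd), 1)   # -1.1
stdev   = round(statistics.stdev(mdd), 1)  # 0.9
spread  = round(max(mdd) - min(mdd), 1)    # 4.4
```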

Tomo “Cheese” phantom

A representative measurement using ion chambers in the Tomo “Cheese” phantom is shown in Figure 1, where calculated and measured doses are shown. Similar results were obtained for the remaining six VMAT plans created for four geometrically based and three anatomically based targets.
FIGURE 1

Calculated dose (line) and ion chamber (symbol) dose for a representative Tomo “Cheese” phantom measurement at Institution A for the TG‐119 C‐shape plan. The target in this case was within the –5 to 0 cm region. Error bars in both the horizontal and vertical are equal to the symbol diameter

Results using ion chambers in the Tomo “Cheese” phantom for the seven VMAT plans are shown in Figure 2, displaying calculated and measured dose percent differences. In this case, the calculated dose is the average dose to corresponding ion chamber structures lying within the target, and the graphed PD is the average across all ion chambers for a given plan.
FIGURE 2

Output‐corrected ion chamber dose percent difference for all Tomo “Cheese” phantom measurements at Institution A. Each point is the average PD for all chambers in the high‐level, low‐gradient target region of the dose distribution for each plan, that is, those reading 90% of maximum dose or higher and within the target. Error bars are the average of all the standard deviations across all eligible ion chambers for all targets


Delta4 phantom

Twenty‐five plans having target volumes of 15–2814 cm3 and equivalent sphere diameters of 3.1–17.5 cm were measured using the Delta4 device. Passing rates were 65–100% for the tighter gamma criteria (2% LPD, 2 mm DTA, and 20% DT) and 94–100% for the clinical criteria (3% GPD, 3 mm DTA, and 10% DT). The poorest performing targets were either a small, highly modulated SBRT spine site or larger breast sites with target volumes lying near the surface. These plans push the limits of the model and establish the boundary of model applicability. The average MDD was −1.1 ± 0.9%. A Delta4 result is shown in Figure 3 for a pelvis target (Plan ID 09), depicting a typical level of agreement. Results from all plans are summarized in Table 5.
FIGURE 3

Representative Institution A Delta4 diode array result for a pelvis test plan (ID 09), including planar dose (top), line dose (middle), and median dose difference, distance‐to‐agreement, and gamma distributions (bottom, left to right). Gamma parameters are 2% local percent difference, 20% threshold, and 2 mm distance to agreement


Independent audit

The Institution A IROC Lung phantom results are summarized in Table 6.
TABLE 6

Independent audit results for Institution A

TLD location   IROC-H versus institution   Criteria    Acceptable
PTV_TLD_sup    0.98                        0.92–1.05   Yes
PTV_TLD_inf    0.99                        0.92–1.05   Yes

Institution B

The results for Institution B are presented below. This includes MPPG 5.a. static beams measured with a 3D tank, and VMAT plans measured using the Octavius and ArcCHECK devices. All MPPG 5.a. results are presented in summary form in Table 7. This includes static and dynamic measurements.
TABLE 7

Institution B MPPG 5.a. measurement result summary

Test   Comparison                                                                  Tolerance      Result
5.1    Dose distributions in planning module versus modeling (physics) module      Identical(a)   Pass
5.2    Dose in test plan versus clinical calibration condition(b)                  0.50%          Pass
5.3    Dose distribution calculated in planning system versus commissioning data   2%             Pass
5.4    Small MLC-shaped field (non-SRS)                                            Table(c)       Pass
5.5    Large MLC-shaped field with extensive blocking                              Table(c)       Pass
5.6    Off-axis MLC-shaped field, with maximum allowed leaf overtravel             Table(c)       Pass
5.7    Asymmetric field at minimal anticipated SSD                                 Table(c)       Pass
5.8    Field at oblique incidence (at least 20°)                                   Table(c)       Pass
5.9    Large (>15 cm) field for each nonphysical wedge angle                       Table(c)       Pass
6.1    Reported electron (or mass) densities against known values                                 Pass
6.2    Heterogeneity correction distal to lung tissue                              3%             Pass
7.1    Small field PDD                                                             Table(c)       Pass
7.2    Small MLC-defined field output                                              2%             Pass
7.3    TG-119 IMRT tests                                                           TG-218         Pass
7.4    Clinical tests                                                              TG-218         Pass
7.5    External review                                                             IROC           Pass

(a) Within the expected statistical uncertainty.
(b) TPS absolute dose at reference point.
(c) MPPG 5.a. Table 5.


Octavius phantom

Twenty SSRS spine plans were measured using the Octavius phantom. A representative analysis is shown in Figure 4, and the results are summarized in Table 8. Passing rates were 96.8 ± 2.8% for the tighter criteria (2% LPD, 2 mm DTA, and 30% DT) and 99.6 ± 0.5% for the clinical ones (4% LPD, 2.5 mm DTA, and 30% DT). The MDD, as a percentage of the maximum plan dose, was −1.7 ± 0.5%.
FIGURE 4

Representative Octavius diode array result from Institution B featuring isodose line displays for the measured dose distribution (top‐left), planned dose distribution (bottom‐left), a profile comparison (top‐right), and gamma analysis (bottom‐right)

TABLE 8

Institution B Octavius results

                                                               Gamma pass rate (%), 10% DT
ID  Site name   Volume (cm3)  Diameter (cm)  Technique  MDD (%)   2%L/2 mm   4%L/2.5 mm
01  Brain               14.2            3.0  VMAT          −1.9       99.6        100.0
02  Brain               18.5            3.3  VMAT          −1.9       99.8        100.0
03  Brain               23.8            3.6  VMAT          −1.3       99.2         99.9
04  Brain               25.5            3.7  VMAT          −1.4       98.8        100.0
05  Brain               27.8            3.8  VMAT          −0.7       97.6         99.8
06  T-Spine             29.9            3.9  SAS           −2.5       97.8         99.7
07  T-Spine             35.0            4.1  SAS           −2.4       94.2         99.0
08  Brain               37.3            4.1  VMAT          −1.7      100.0        100.0
09  Brain               42.2            4.3  VMAT          −2.5       99.3        100.0
10  Brain               44.7            4.4  VMAT          −2.1       96.2         99.9
11  C-Spine             52.5            4.6  SAS           −1.2       94.4         99.3
12  Brain               53.9            4.7  VMAT          −1.4       96.5        100.0
13  Brain               54.3            4.7  VMAT          −1.1       99.6        100.0
14  T-Spine             63.3            4.9  SAS           −1.0       97.0         99.4
15  L-Spine             81.7            5.4  SAS           −1.3       95.5         99.6
16  T-Spine             93.5            5.6  SAS           −2.1       97.3         99.7
17  T-Spine            141.2            6.5  SAS           −1.5       96.4         99.5
18  C-Spine            146.3            6.5  SAS           −1.6       92.4         98.7
19  T-Spine            199.8            7.3  SAS           −1.8       96.3         99.8
20  L-Spine            406.2            9.2  SAS           −2.2       89.0         98.4
    Average                                                −1.7       96.8         99.6
    StDev                                                   0.5        2.8          0.5
    Min                                                    −2.5       89.0         98.4
    Max                                                    −0.7      100.0        100.0
    Spread                                                  1.9       11.0          1.6

Note: SAS is step‐and‐shoot IMRT delivery.


ArcCHECK phantom

Twenty SBRT lung and abdominal plans and 20 nonstereotactic plans were measured using the ArcCHECK. A representative analysis is shown in Figure 5, and the results are summarized in Table 9. Passing rates were 99.2 ± 1.1% for clinical criteria (3% GPD, 3 mm DTA, and 10% DT). It was not possible to reanalyze the data with stricter criteria, and the ArcCHECK software does not report MDD information.
FIGURE 5

Representative ArcCHECK diode array result from Institution B showing an isodose comparison between the measured and planned dose distributions. Red dots indicate points failing Gamma with a measured dose higher than calculated and the blue dots indicate points failing with lower than calculated dose

TABLE 9

Institution B ArcCHECK results

ID  Site name          Volume (cm3)  Diameter (cm)  Technique   Gamma 3%G/3 mm (%), 10% DT
01  Thoracic SBRT               2.5            1.7  VMAT          99.1
02  Head & Neck SBRT            3.0            1.8  VMAT          97.7
03  Thoracic SBRT               9.3            2.6  VMAT          99.6
04  Thoracic SBRT              10.4            2.7  SAS           99.7
05  Thoracic SBRT              11.0            2.8  VMAT          99.2
06  Thoracic SBRT              12.3            2.9  VMAT          99.2
07  Thoracic SBRT              12.5            2.9  VMAT          99.0
08  Thoracic SBRT              15.5            3.1  VMAT          99.6
09  GI SBRT                    17.3            3.2  SAS           96.5
10  Eye                        19.5            3.3  VMAT         100.0
11  Thoracic SBRT              20.6            3.4  SAS          100.0
12  Thoracic SBRT              23.8            3.6  SAS           99.7
13  Thoracic SBRT              25.9            3.7  VMAT          99.7
14  Thoracic SBRT              35.1            4.1  VMAT          99.2
15  Head & Neck                36.2            4.1  VMAT         100.0
16  Head & Neck                37.8            4.2  VMAT          96.1
17  Thoracic SBRT              39.4            4.2  VMAT         100.0
18  Thoracic SBRT              40.0            4.2  VMAT          98.0
19  GI SBRT                    50.3            4.6  SAS          100.0
20  Thoracic SBRT              56.4            4.8  VMAT          99.7
21  Brain                      69.7            5.1  VMAT          95.5
22  GI SBRT                    73.5            5.2  SAS           99.4
23  Brain                      97.9            5.7  VMAT          99.4
24  Brain                     109.3            5.9  VMAT          99.3
25  GI SBRT                   112.3            6.0  SAS           98.9
26  Head & Neck               126.3            6.2  VMAT          96.9
27  Brain                     163.3            6.8  VMAT          99.3
28  Brain                     164.5            6.8  VMAT          99.8
29  Lung                      184.5            7.1  VMAT         100.0
30  Brain                     193.1            7.2  VMAT         100.0
31  Brain                     197.3            7.2  VMAT         100.0
32  Brain                     200.4            7.3  VMAT          99.8
33  Brain                     254.5            7.9  VMAT          99.3
34  Brain                     304.7            8.3  VMAT          99.3
35  Brain                     347.9            8.7  VMAT         100.0
36  Brain                     350.9            8.8  VMAT         100.0
37  GI SBRT                   380.6            9.0  SAS           98.9
38  Lung                      456.6            9.6  VMAT          99.9
39  Brain                     486.8            9.8  VMAT          99.9
40  GI                        528.7           10.0  VMAT         100.0
    Average                                                      99.3
    StDev                                                         1.1
    Min                                                          95.5
    Max                                                         100.0
    Spread                                                        4.5

Note: SAS is step‐and‐shoot IMRT delivery.

The Institution B IROC Lung phantom results are summarized in Table 10.
TABLE 10

Independent audit results for Institution B

TLD location   IROC-H versus institution   Criteria    Acceptable
PTV_TLD_sup    0.99                        0.92–1.05   Yes
PTV_TLD_inf    0.98                        0.92–1.05   Yes

DISCUSSION

In RayStation, the calibration coefficient and output factors scale the input measured dose curve data, while the OFCs and normalization coefficient scale the dose calculation. Comparing the ratios in Tables 2 and 3, as well as those stated in the Results section, everything agrees to within 0.5% or better (except at the 40 × 40 cm2 field size). One could argue that Institution B did not need to replace the Institution A values with its own. Comparing the IMRT/VMAT QA results between institutions, there appears to be a systematic scaling offset for dynamic (VMAT) plans, as the MDD was in the neighborhood of −1% across the board. A future revision of the current model should take this scaling into account. We point out, however, that this consistency in the amount of offset actually points to the robustness of the model's performance across a broad plan and measurement spectrum.

Commissioning a TPS in an MVE is one of the most challenging and time‐intensive tasks a clinical medical physicist can perform. In an SVE, the physicist has the option to accept a vendor‐provided model with some confidence; in that case, the task is mainly acceptance, with few if any adjustments. In an MVE, the physicist is often faced with the challenge of having neither a matched machine nor an optimal preconfigured clinical model. The goal of this work is to demonstrate that a physicist in an MVE can have the same experience as in an SVE.

Another challenge in commissioning is the TPS itself. Developing and validating a clinical model requires an understanding of the TPS representation of the real‐world machine and of how the model parameters affect dose calculation in clinical situations. Parameter values ideally should begin with real‐world (physical) inputs, but often these values do not result in an acceptable clinical model, in part due to simplifications made in the TPS representation of the physical machine.
Two examples in RayStation are the representation of beam‐limiting devices, such as MLCs and jaws, as having zero height, and the use of nontilting dose kernels. The model parameter values must be tuned to accommodate these algorithmic assumptions and implementation approximations in the actual TPS dose calculation engine.

Clinical model development is rife with pitfalls. A clinical model contains a large number of parameter values that are needed to ensure dose calculation accuracy over a wide range of delivery scenarios. Identifying a set of values that is accurate and robust across a wide spectrum of treatment plans is a significant challenge, because parameter values are coupled, that is, the optimal value of any one depends on one or more of the others. As these parameter values move away from physically based ones, it becomes more likely that the clinical model will land in one of many mostly indistinguishable local minima. In this situation, “reasonable” parameter value adjustments do not improve model accuracy, and the likelihood of finding the parameter values that will result in a better clinical model is limited.

Clinical model accuracy is influenced by a number of factors. These include limitations in the measured beam data, the quality and implementation of patient‐specific QA devices, and the tools within the TPS, all of which can be significantly exacerbated by the capabilities and experience of the people involved. Measured scan beam data are limited in quality by the trade‐off between noise, detector resolution, and mechanical positioning. Model optimization using a routine QA device requires additional independent devices for validation, which may not be at every institution's disposal. Lastly, missing tools in the TPS for model parameter value optimization, especially for MLCs, hinder optimization of those parameter values. Koger et al. pointed out the associated pitfalls when the software does not provide such tools.
When significant work must be performed outside of the TPS, the effort can be burdensome, entailing substantial tool development by the end user.

As the IROC data clearly show, there is wide variation in clinical model performance across their surveyed institutions, suggesting that this variation exists within the broader radiation therapy community. Their surveys indicate that for any given TPS, a wide spectrum of model parameter values is in clinical use. All of these shortcomings of a locally developed model can be addressed by a portable, preconfigured, and optimized model. Many vendors have standardized TDS performance to the degree that TDSs of the same model can meet the same very tight performance specifications, and the TPS representations employed by most vendors can produce accurate and reliable clinical models. This leads to the conclusion that the main source of variability is the people driving the technology: they tune the model parameter values, measure the input beam data, and define what is acceptable. It is now much more probable that variation in measured beam data arises from variation in equipment setup and data acquisition than from variation in TDS performance. Consequently, the quality of the clinical model is driven by the quality of the beam data used to optimize it. If one is having difficulty developing an acceptable clinical model, it behooves the user to verify the quality of the measured beam data. In the past, machine performance and measurement equipment variability obscured the role that individuals played in the existence of differing beam models. Now, the greatest influence on variation is not the technology, but its use.

A critical point is that any model parameter value tuning must be performed against measurements made with well‐established, absolutely calibrated devices.
Care must be taken not to build measurement device uncertainty into a clinical model. Therefore, final validation should be performed against different, well‐established, absolutely calibrated devices; more confidence is gained when a broader spectrum of devices is utilized. In this work, the measurement devices spanned multiple vendors and two institutions.

Model validation is a significant and time‐consuming exercise distinct from model parameter value optimization, the latter of which can consume most of the allotted time, leaving less time for validation. Using a template model relieves this pressure significantly, allowing for more extensive plan‐based validation across a broader spectrum of test plans. In addition, many institutions have at most one QA device available, preventing the critical independent validation of an in‐house developed model. Although an independent audit, for example through a service such as IROC, can help satisfy independent validation requirements, such tests often serve as basic checks designed to catch gross errors and cannot replace a second independent device. A template that has undergone validation on multiple machines and QA devices can significantly reduce these risks while still supporting the necessary local model validation.

A clinical model must pass validation on a number of levels. First, it must reproduce the input beam data and meet institutional standards. Next, it must accurately calculate dose for simple field geometries. Then, it must perform well in clinically relevant scenarios, which should involve a suite of test plans specific to the spectrum of an institution's planning approaches and treatment site methodologies.
The model should pass the common clinical gamma test with typical criteria (e.g., 3% global percent difference [GPD], 2 mm distance‐to‐agreement [DTA], and 10% dose threshold [DT]), and should also be tested with more stringent criteria (e.g., 2% local percent difference [LPD], 2 mm DTA, and 20% DT) to reveal where the model begins to break down. Finally, the model should pass an audit, preferably with an independent organization such as IROC.

Our results focused on the validation of a single portable TrueBeamSTx model, and all of the considerations above apply. First, Institution A performed the requisite parameter value optimization and independent validations, that is, the extensive work any institution in a mixed‐vendor environment starting from scratch would do. Validations were performed with two completely different measurement systems by a number of staff over an extended period, and an independent IROC audit was passed. This set the stage for Institution B to utilize the Institution A model as‐is and proceed directly to MPPG 5.a. validation. Special attention was paid to measurement‐based IMRT/VMAT QA for clinical cases representative of those treated at Institution B. We note that Institution B had independently, and in parallel, developed their own clinical model with comparable effort; however, the portable model from Institution A performed slightly better, and they opted to go live with it. Adopting the Institution A (source) model required no additional optimization time at Institution B and allowed more time for validation.

The current work demonstrates that it is possible to use a single class‐level mixed‐vendor solution at two completely different institutions.
The same RayStation beam model for a TrueBeamSTx flattened beam, with minimal changes to output parameter values, was successfully validated for patient care following the same guidance, namely MPPG 5.a., but on different TPS and TDS instances and with differing equipment, measurement methodologies, and personnel. The validation results point to the feasibility of interinstitutional portability of a TPS model in a mixed‐vendor environment. The operational impact is significant: a faster TPS/TDS implementation and turn‐around can save substantial resources while at the same time ensuring high quality.
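The gamma criteria used throughout the validation discussion (e.g., 3% GPD, 2 mm DTA, 10% DT) can be illustrated with a minimal 1‐D sketch. This is a simplified, globally normalized implementation for a single dose profile, not the algorithm of any particular QA device or of the cited task group reports; the function name and defaults are our own.

```python
import numpy as np

def gamma_pass_rate(ref, meas, positions, dose_pct=3.0, dta_mm=2.0,
                    threshold_pct=10.0):
    """Simplified 1-D global gamma pass rate: percent of evaluated
    measurement points with gamma <= 1. ref and meas are dose profiles
    sampled on the same positions grid (mm)."""
    ref = np.asarray(ref, dtype=float)
    meas = np.asarray(meas, dtype=float)
    pos = np.asarray(positions, dtype=float)
    dose_tol = dose_pct / 100.0 * ref.max()    # global dose-difference tolerance
    cutoff = threshold_pct / 100.0 * ref.max() # low-dose threshold (DT)
    gammas = []
    for p, d in zip(pos, meas):
        if d < cutoff:
            continue  # exclude points below the dose threshold
        # gamma: minimum over all reference points of the combined
        # dose-difference / distance-to-agreement metric
        g_sq = ((ref - d) / dose_tol) ** 2 + ((pos - p) / dta_mm) ** 2
        gammas.append(np.sqrt(g_sq.min()))
    return 100.0 * float(np.mean(np.array(gammas) <= 1.0))
```

A profile identical to the reference passes at 100%, while a uniform 10% scaling error drives the high-dose points beyond the 3% global tolerance and lowers the pass rate, which is the kind of systematic offset a mean dose difference metric would also expose.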

CONCLUSION

We have validated that a single beam model can be used for three c‐arm TDSs (Linacs) of the same model at two completely independent institutions. The machines were not explicitly matched to each other, but meet the same vendor performance specifications. This indicates that, without any parameter value optimization work, it is possible to meet or exceed all MPPG 5.a. guidelines and TG‐218 criteria. This was achieved across a number of different measurement devices and planning techniques, indicating the robustness and broad applicability of the model. Developing a suite of portable beam models will improve the process of TPS implementation and opens up the possibility of more accurate and uniform dose modeling across the community.

FUNDING

This research did not receive any specific grant from funding agencies in the public, commercial, or not‐for‐profit sectors.

CONFLICTS OF INTEREST

The authors have no conflicts of interest to report.

AUTHOR CONTRIBUTIONS

All listed authors contributed equally to the intellectual content of the manuscript, including the design, acquisition, analysis, and interpretation. They participated equally in drafting and revising the material. All authors have read and approved the final submitted version of the manuscript. Additionally, none of the authors received any outside funding that contributed to this work, and the data and results have not been published previously.
REFERENCES

1.  Optimizing beam models for dosimetric accuracy over a wide range of treatments.

Authors:  Josephine Chen; Olivier Morin; Brandon Weethee; Angelica Perez-Andujar; Justin Phillips; Mareike Held; Vasant Kearney; Dae Yup Han; Joey Cheung; Cynthia Chuang; Gilmer Valdes; Atchar Sudhyadhom; Timothy Solberg
Journal:  Phys Med       Date:  2019-01-24       Impact factor: 2.685

2.  Analyzing the performance of ArcCHECK diode array detector for VMAT plan.

Authors:  Rajesh Thiyagarajan; Arunai Nambiraj; Sujit Nath Sinha; Girigesh Yadav; Ashok Kumar; Vikraman Subramani
Journal:  Rep Pract Oncol Radiother       Date:  2015-12-02

3.  Application of the TRS 483 code of practice for reference and relative dosimetry in tomotherapy.

Authors:  Maria do Carmo Lopes; Tania Santos; Tiago Ventura; Miguel Capela
Journal:  Med Phys       Date:  2019-10-29       Impact factor: 4.071

4.  Tolerance limits and methodologies for IMRT measurement-based verification QA: Recommendations of AAPM Task Group No. 218.

Authors:  Moyed Miften; Arthur Olch; Dimitris Mihailidis; Jean Moran; Todd Pawlicki; Andrea Molineu; Harold Li; Krishni Wijesooriya; Jie Shi; Ping Xia; Nikos Papanikolaou; Daniel A Low
Journal:  Med Phys       Date:  2018-03-23       Impact factor: 4.071

5.  Design, development, and implementation of the radiological physics center's pelvis and thorax anthropomorphic quality assurance phantoms.

Authors:  David S Followill; DeeAnn Radford Evans; Christopher Cherry; Andrea Molineu; Gary Fisher; William F Hanson; Geoffrey S Ibbott
Journal:  Med Phys       Date:  2007-06       Impact factor: 4.071

6.  Evaluation of the ArcCHECK QA system for IMRT and VMAT verification.

Authors:  Guangjun Li; Yingjie Zhang; Xiaoqin Jiang; Sen Bai; Guang Peng; Kui Wu; Qingfeng Jiang
Journal:  Phys Med       Date:  2012-05-12       Impact factor: 2.685

7.  Reference dataset of users' photon beam modeling parameters for the Eclipse, Pinnacle, and RayStation treatment planning systems.

Authors:  Mallory C Glenn; Christine B Peterson; David S Followill; Rebecca M Howell; Julianne M Pollard-Larkin; Stephen F Kry
Journal:  Med Phys       Date:  2019-11-15       Impact factor: 4.071

8.  Optimizing the MLC model parameters for IMRT in the RayStation treatment planning system.

Authors:  Shifeng Chen; Byong Yong Yi; Xiaocheng Yang; Huijun Xu; Karl L Prado; Warren D D'Souza
Journal:  J Appl Clin Med Phys       Date:  2015-09-08       Impact factor: 2.102

9.  AAPM Medical Physics Practice Guideline 5.a.: Commissioning and QA of Treatment Planning Dose Calculations - Megavoltage Photon and Electron Beams.

Authors:  Jennifer B Smilowitz; Indra J Das; Vladimir Feygelman; Benedick A Fraass; Stephen F Kry; Ingrid R Marshall; Dimitris N Mihailidis; Zoubir Ouhib; Timothy Ritter; Michael G Snyder; Lynne Fairobent
Journal:  J Appl Clin Med Phys       Date:  2015-09-08       Impact factor: 2.102

10.  Practical application of Octavius® -4D: Characteristics and criticalities for IMRT and VMAT verification.

Authors:  Patrizia Urso; Rita Lorusso; Luca Marzoli; Daniela Corletto; Paolo Imperiale; Annalisa Pepe; Lorenzo Bianchi
Journal:  J Appl Clin Med Phys       Date:  2018-07-16       Impact factor: 2.102

