Kelly Kisling, Carlos Cardenas, Brian M Anderson, Lifei Zhang, Anuja Jhingran, Hannah Simonds, Peter Balter, Rebecca M Howell, Kathleen Schmeler, Beth M Beadle, Laurence Court.
Abstract
PURPOSE: Automated tools can help identify radiation treatment plans of unacceptable quality. To this end, we developed a quality verification technique to automatically verify the clinical acceptability of beam apertures for 4-field box treatments of patients with cervical cancer. By comparing the beam apertures to be used for treatment with a secondary set of beam apertures developed automatically, this quality verification technique can flag beam apertures that may need to be edited to be acceptable for treatment.
METHODS AND MATERIALS: The automated methodology for creating verification beam apertures uses a deep learning model trained on beam apertures and digitally reconstructed radiographs from 255 clinically acceptable planned treatments (as rated by physicians). These verification apertures were then compared with the treatment apertures using spatial comparison metrics to detect unacceptable treatment apertures. We tested the quality verification technique on beam apertures from 80 treatment plans. Each plan was rated by physicians: 57 were rated clinically acceptable and 23 clinically unacceptable.
Year: 2020 PMID: 32450365 PMCID: PMC8133770 DOI: 10.1016/j.prro.2020.05.001
Source DB: PubMed Journal: Pract Radiat Oncol ISSN: 1879-8500
Figure 1. Comparison of treatment and verification beam apertures. The beam apertures (treatment apertures in red solid line and verification apertures [without postprocessing] in yellow dotted line) are shown for the anteroposterior and right lateral beams (left and right images, respectively). Panel A shows examples of beam apertures that were correctly classified as clinically acceptable by the quality assurance (QA) technique (a true negative result). Panels B and C show examples of beam apertures that were correctly classified as clinically unacceptable by the QA technique (true positive results).
Figure 2. Histogram of the mean surface distance (MSD) values. Comparison of the treatment and verification beam apertures, shown for apertures rated clinically acceptable (blue) or unacceptable (red) by physicians. In each subfigure, the mean is reported for both the acceptable and unacceptable beams, with the standard deviation in parentheses. Lower MSD values indicate better agreement. Abbreviations: AP = anteroposterior; LT = left lateral; PA = posteroanterior; RT = right lateral.
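The abstract describes comparing treatment and verification apertures with spatial metrics, and the figures name the mean surface distance (MSD) and Hausdorff distance (HD). As a rough illustration only (the paper does not give its exact implementation), these metrics can be computed between two aperture outlines represented as sampled 2D contour points; all function names here are my own:

```python
import numpy as np

def directed_distances(a, b):
    """For each point in contour a, the distance to the nearest point in contour b."""
    # Pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)

def mean_surface_distance(a, b):
    """Symmetric mean surface distance: average nearest-point distance, both directions."""
    return 0.5 * (directed_distances(a, b).mean() + directed_distances(b, a).mean())

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance: the worst-case nearest-point distance."""
    return max(directed_distances(a, b).max(), directed_distances(b, a).max())

# Two toy "apertures": identical segments offset by 3 mm in x
treat = np.array([[0.0, 0.0], [0.0, 1.0]])
verif = np.array([[3.0, 0.0], [3.0, 1.0]])
msd = mean_surface_distance(treat, verif)   # 3.0 mm for this pure translation
hd = hausdorff_distance(treat, verif)       # also 3.0 mm here
```

In practice the contours would be densely resampled before computing these metrics, since both MSD and HD depend on the sampling of the outline.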
Figure 5. Receiver operating characteristic (ROC) curves for each comparison metric. The areas under the curve (AUCs) for each metric and beam angle are shown in the corresponding subfigure. Abbreviations: AP = anteroposterior; LT = left lateral; PA = posteroanterior; RT = right lateral.
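The ROC curves in Figure 5 come from sweeping a decision threshold over a comparison metric and recording the true-positive fraction (TPF) and false-positive fraction (FPF) at each threshold, treating physician-rated "unacceptable" as the positive class (consistent with the TPF/FPF usage in the threshold table). A minimal sketch of that sweep and a trapezoidal AUC, not the authors' code:

```python
import numpy as np

def roc_points(scores, labels):
    """FPF/TPF at each descending score threshold.

    scores: metric values (higher = more suspicious), labels: 1 = unacceptable.
    Note: this simple sweep does not merge tied scores.
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(labels)[order]
    tpf = np.cumsum(y) / y.sum()
    fpf = np.cumsum(1 - y) / (1 - y).sum()
    # Prepend the (0, 0) point for the "flag nothing" threshold
    return np.concatenate([[0.0], fpf]), np.concatenate([[0.0], tpf])

def auc(fpf, tpf):
    """Area under the ROC curve by the trapezoidal rule."""
    return float(np.sum(np.diff(fpf) * (tpf[1:] + tpf[:-1]) / 2.0))

# Toy example with perfect separation: AUC should be 1.0
fpf, tpf = roc_points([19.0, 15.0, 4.0, 2.0], [1, 1, 0, 0])
area = auc(fpf, tpf)
```

Picking an operating point on such a curve is exactly what the threshold table below does: fix TPF = 0.90 (high sensitivity) or FPF = 0.10 (high specificity) and read off the corresponding HD threshold.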
HD thresholds for 2 verification scenarios
| Scenario | | AP beam | PA beam | RT beam | LT beam |
|---|---|---|---|---|---|
| High sensitivity (TPF = 0.90) | Threshold | 7.0 mm | 8.3 mm | 14.1 mm | 14.1 mm |
| | Corresponding FPF | 0.19 | 0.21 | 0.16 | 0.16 |
| High specificity (FPF = 0.10) | Threshold | 11.0 mm | 10.4 mm | 19.3 mm | 19.9 mm |
| | Corresponding TPF | 0.87 | 0.78 | 0.74 | 0.74 |
Abbreviations: AP = anteroposterior; FPF = false-positive fraction; HD = Hausdorff distance; LT = left lateral; PA = posteroanterior; RT = right lateral; TPF = true-positive fraction.
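Using the table's thresholds amounts to a simple per-beam rule: flag a treatment aperture for physician review when its Hausdorff distance to the verification aperture exceeds the beam's threshold. A hypothetical sketch (the dictionary values are copied from the high-sensitivity row; the function name is illustrative, not from the paper):

```python
# High-sensitivity HD thresholds from the table (TPF = 0.90), in mm per beam
HIGH_SENSITIVITY_HD_MM = {"AP": 7.0, "PA": 8.3, "RT": 14.1, "LT": 14.1}

def flag_for_review(beam, hd_mm, thresholds=HIGH_SENSITIVITY_HD_MM):
    """Return True if the beam's HD exceeds its threshold, i.e. flag the aperture."""
    return hd_mm > thresholds[beam]

# An AP beam with a 9.0 mm HD exceeds 7.0 mm and is flagged;
# a 6.0 mm HD on the same beam is not.
flag_hi = flag_for_review("AP", 9.0)
flag_lo = flag_for_review("AP", 6.0)
```

The high-sensitivity row trades more false flags (FPF 0.16 to 0.21) for catching 90% of unacceptable apertures; the high-specificity row accepts missing some (TPF 0.74 to 0.87) to keep false flags at 10%.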