Kenneth Sutherland, Masayori Ishikawa, Gerard Bengua, Yoichi M Ito, Yoshiko Miyamoto, Hiroki Shirato.
Abstract
The purpose of this study was to evaluate a custom portal image - digitally reconstructed radiograph (DRR) registration software application. The software transforms the portal image into the coordinate space of the DRR image using three control points placed on each image by the user, and displays the fused image. To test statistically whether the software actually improves setup error estimation, an intra- and interobserver phantom study was performed. Portal images of anthropomorphic thoracic and pelvis phantoms, with virtually placed irradiation fields at known setup errors, were prepared. A group of five doctors was first asked to estimate the setup errors by examining the portal and DRR images side by side, without using the software. A second group of four technicians then estimated the same set of images using the registration software. These two groups of human subjects were then compared with the auto-registration feature of the software, which is based on the mutual information between the portal and DRR images. For the thoracic case, the average distance between the actual and estimated setup error was 4.3 ± 3.0 mm for doctors using the side-by-side method, 2.1 ± 2.4 mm for technicians using the registration method, and 0.8 ± 0.4 mm for the automatic algorithm. For the pelvis case, the average distance between the actual and estimated setup error was 2.0 ± 0.5 mm for doctors using the side-by-side method, 2.5 ± 0.4 mm for technicians using the registration method, and 2.0 ± 1.0 mm for the automatic algorithm. The ability of humans to estimate offset values improved significantly when using our software for the chest phantom that we tested. Setup error estimation was further improved using our automatic error estimation algorithm. Estimates were not statistically different for the pelvis case. Consistency improved with the software for both the chest and pelvis phantoms.
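The abstract states that three user-placed control points on each image define the transform into DRR coordinates, but does not give the transform's form. As an illustration only: three point pairs determine a 2D affine map exactly (six unknowns, six equations). The helper names below (`affine_from_points`, `apply_affine`) are hypothetical, not from the paper.

```python
# Illustration: recover a 2D affine transform from three control-point pairs.
# Names and approach are a sketch, not the paper's actual implementation.

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(m, v):
    """Solve the 3x3 linear system m @ x = v by Cramer's rule."""
    d = det3(m)
    out = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = v[r]
        out.append(det3(mc) / d)
    return out

def affine_from_points(src, dst):
    """Coefficients (a, b, c, d, e, f) of x' = a*x + b*y + c, y' = d*x + e*y + f
    mapping the three src points onto the three dst points."""
    m = [[x, y, 1.0] for (x, y) in src]
    a, b, c = solve3(m, [x for (x, _) in dst])
    d, e, f = solve3(m, [y for (_, y) in dst])
    return (a, b, c, d, e, f)

def apply_affine(coeffs, p):
    """Map a point p = (x, y) into the destination (DRR) coordinate space."""
    a, b, c, d, e, f = coeffs
    return (a * p[0] + b * p[1] + c, d * p[0] + e * p[1] + f)
```

For example, the pairs (0,0)→(2,3), (1,0)→(2,4), (0,1)→(1,3) encode a 90° rotation plus a translation; `apply_affine` then maps any further portal-image point, e.g. (1,1)→(1,4), into DRR coordinates.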
We also tested the automatic algorithm with a database of over 5,000 clinical cases from our hospital. The algorithm performed well for head and breast cases but poorly for pelvis cases, probably due to the lack of contrast in the megavoltage portal image. The software incorporates an original algorithm to fuse portal and DRR images, which we describe in detail. The offset optimization algorithm used in the automatic mode of operation is also unique, and may be useful if the contrast of the portal images can be improved.
Year: 2011 PMID: 21844862 PMCID: PMC5718652 DOI: 10.1120/jacmp.v12i3.3492
Source DB: PubMed Journal: J Appl Clin Med Phys ISSN: 1526-9914 Impact factor: 2.102
Figure 1. The Portal‐DRR software. The user specifies three points on the portal (top left) and DRR (bottom left) images using grid-spacing tick marks. The merged image (right) is used to compare internal bony structures in order to determine the setup error, which is displayed at the lower right.
Figure 2. Conversion of rotation axis for image distortion.
Figure 3. Tilt experiment setup. The imaging plate was tilted 10° horizontally. The test point (5 cm along the x‐ and y‐axes) was inside the working area for (a) and outside the working area for (b). The test point was transformed into the DRR image coordinates, and the distance from the corresponding point specified on the DRR image was measured.
Figure 4. Portal image taken without field or scale tick marks (a). Automatically generated treatment field and scale ticks applied to the phantom portal image (b).
Figure 5(a). Results of AP chest study.
Figure 5(b). Results of AP pelvis study.
Average distance between estimated and actual setup error (values as reported in the abstract).

| Site | Method | Distance (mm) |
|---|---|---|
| Chest | Side‐by‐side | 4.3 ± 3.0 |
| Chest | Registration | 2.1 ± 2.4 |
| Chest | Auto | 0.8 ± 0.4 |
| Pelvis | Side‐by‐side | 2.0 ± 0.5 |
| Pelvis | Registration | 2.5 ± 0.4 |
| Pelvis | Auto | 2.0 ± 1.0 |
Average consistency of test subjects (values not recoverable from this extract).

| Site | Method | Consistency |
|---|---|---|
| Chest | Side‐by‐side | |
| Chest | Registration | |
| Pelvis | Side‐by‐side | |
| Pelvis | Registration | |
Comparison of estimation methods (chest).

| Comparison | p-value |
|---|---|
| Side‐by‐side vs. Registration | 0.0067 |
| Side‐by‐side vs. Auto | 0.0002 |
| Registration vs. Auto | 0.0001 |
Comparison of estimation methods (pelvis).

| Comparison | p-value |
|---|---|
| Side‐by‐side vs. Registration | 0.0047 |
| Side‐by‐side vs. Auto | 0.4547 |
| Registration vs. Auto | 0.0593 |
Comparison of auto‐shift function with clinical case database (per-site results beyond case counts were not recoverable from this extract).

| Site | Cases |
|---|---|
| All | 5101 |
| AP Head | 407 |
| LR Head | 537 |
| AP Neck | 158 |
| LR Neck | 214 |
| OB Neck | 95 |
| AP Chest | 264 |
| LR Chest | 101 |
| OB Breast | 160 |
| AP Pelvis | 148 |
| LR Pelvis | 148 |
Comparison of auto‐shift function with humans (head). (The label of the last row was lost in extraction.)

| Rating | Cases |
|---|---|
| Good | 35 (57%) |
| Fair | 23 (38%) |
| Poor | 1 (2%) |
| | 2 (3%) |
Comparison of auto‐shift function with humans (pelvis). (The label of the last row was lost in extraction.)

| Rating | Cases |
|---|---|
| Good | 4 (8%) |
| Fair | 9 (19%) |
| Poor | 16 (33%) |
| | 19 (40%) |
Result of tilt experiment.

| Test point location | Distance |
|---|---|
| Inside working area | 0.7 mm |
| Outside working area | 1.5 mm |