Juliane Goebel1, Julia Hoischen2, Carolin Gramsch3, Haemi P Schemuth4, Andreas-Claudius Hoffmann5, Lale Umutlu4, Kai Nassenstein4. 1. Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstrasse 55, 45122, Essen, Germany. Juliane.Goebel@uk-essen.de. 2. Department of Radiology, Evangelic Hospital Duesseldorf, Duesseldorf, Germany. 3. Institute for Neuroradiology, University Hospital Giessen, Giessen, Germany. 4. Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstrasse 55, 45122, Essen, Germany. 5. Department of Oncology and Hematology, Helios Hospital Ahrenshoop, Ahrenshoop, Germany.
Abstract
PURPOSE: Standardized computer-aided tumor response assessment is common in clinical trials. In contrast, unstructured free text reporting (UFTR) is common in daily routine. Therefore, this study aimed to discern and quantify differences between UFTR and computer-aided standardized tumor response evaluation based on RECIST 1.1 criteria (RECIST), serving as the gold standard, in the clinical workflow. METHODS: One hundred consecutive patients with cancer eligible for RECIST 1.1 evaluation, who each received five follow-up CTs of the trunk, were retrospectively included. All UFTRs were assigned to RECIST response categories [complete response, partial response (PR), stable disease (SD), progressive disease (PD)]. All CTs were re-evaluated using dedicated software (mint lesion™) applying RECIST 1.1. Agreement between the tumor response ratings was analyzed using Cohen's kappa. RESULTS: At the first follow-up, 47 cases were rated differently, with SD underrepresented and PR and PD overrepresented in UFTR. In the subsequent follow-ups, categorical differences were seen in 38%, 44%, 37%, and 44% of cases. Agreement between UFTR and RECIST was fair to moderate (Cohen's kappa: 0.356, 0.477, 0.390, 0.475, 0.376; each p < 0.001). Differences were mainly caused by the rating of even small tumor burden changes as PD or PR in UFTR, or by comparison to the most recent prior CT scan in UFTR instead of to the nadir or baseline. CONCLUSIONS: Significant differences in tumor response ratings were detected when comparing UFTR with computer-aided standardized evaluation based on RECIST 1.1. Thus, standardized reporting should be implemented in the daily routine workflow.
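The agreement statistic reported above, Cohen's kappa, corrects raw inter-rater agreement for the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of matching ratings and p_e is the chance agreement implied by each rater's category frequencies. A minimal sketch, with entirely hypothetical ratings (not the study's data) over the four RECIST categories:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of identical ratings
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal
    # category frequencies, summed over categories
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical UFTR vs. RECIST ratings for eight follow-ups
uftr   = ["PD", "SD", "PR", "PD", "SD", "PR", "SD", "PD"]
recist = ["PD", "SD", "SD", "PD", "SD", "PR", "SD", "SD"]
print(round(cohens_kappa(uftr, recist), 3))
```

Under the conventional interpretation scale, values of 0.21–0.40 are "fair" and 0.41–0.60 "moderate" agreement, which is how the study's kappa values of 0.356–0.477 are characterized.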