AIMS: To test the effectiveness of a teaching resource (a decision tree with diagnostic criteria based on published literature) in improving the proficiency of Gleason grading of prostatic cancer by general pathologists. METHODS: A decision tree with diagnostic criteria was developed by a panel of urological pathologists during a reproducibility study. Twenty-four general histopathologists tested this teaching resource. Twenty slides were selected to cover the Gleason score groups 2-4, 5-6, 7 and 8-10. Interobserver agreement was studied before and after a presentation of the decision tree and criteria, and the results were compared with those of the panel of urological pathologists. RESULTS: Before the teaching session, 83% of readings agreed within ±1 of the panel's consensus scores. Interobserver agreement was low (kappa = 0.33) compared with that of the panel (kappa = 0.62). After the presentation, 90% of readings agreed within ±1 of the panel's consensus scores, and interobserver agreement among the general pathologists increased to kappa = 0.41. The greatest improvement in agreement was seen for the Gleason score group 5-6. CONCLUSIONS: The lower level of agreement among general pathologists highlights the need to improve observer reproducibility. The improvement achievable with a single training session is likely to be limited; additional strategies include external quality assurance and second-opinion review within cancer networks.
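The abstract does not state which kappa statistic was used; with 24 readers grading the same 20 slides, Fleiss' kappa is a common choice for measuring agreement beyond chance among multiple raters. The following is a minimal sketch of that calculation, assuming per-slide tallies of readings across the four Gleason score groups are available; the ratings matrix here is hypothetical, generated only to make the example runnable.

import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for agreement among multiple raters.
    counts[i, j] = number of raters assigning slide i to category j;
    every row must sum to the same number of raters."""
    n_items = counts.shape[0]
    n_raters = counts[0].sum()
    p_j = counts.sum(axis=0) / (n_items * n_raters)   # category prevalence
    # Per-slide observed agreement among rater pairs
    P_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar = P_i.mean()                                # mean observed agreement
    P_e = (p_j ** 2).sum()                            # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical data: 24 readers x 20 slides, four score groups
# (2-4, 5-6, 7, 8-10); each row splits the 24 readings among the groups.
rng = np.random.default_rng(42)
ratings = rng.multinomial(24, [0.15, 0.35, 0.30, 0.20], size=20)
print(f"Fleiss kappa = {fleiss_kappa(ratings):.2f}")

A kappa near 0 indicates agreement no better than chance and a kappa of 1 perfect agreement, which is the scale on which the reported values (0.33, 0.41, 0.62) should be read.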