| Literature DB >> 23822583 |
Margaret M O'Keeffe, Todd M Davis, Kerry Siminoski.
Abstract
BACKGROUND: The surrogate indicator of radiological excellence that has become accepted is consistency of assessments between radiologists, and the technique that has become the standard for evaluating concordance is peer review. This study describes the results of a workstation-integrated peer review program in a busy outpatient radiology practice.
Year: 2013 PMID: 23822583 PMCID: PMC3711932 DOI: 10.1186/1471-2342-13-19
Source DB: PubMed Journal: BMC Med Imaging ISSN: 1471-2342 Impact factor: 1.930
Assigned and reviewed cases for each reviewing radiologist

| Radiologist | Assigned cases | Reviewed, n (%) | Not reviewed, n (%) |
| 1 | 254 | 254 (100) | 0 (0) |
| 2 | 85 | 71 (84) | 14 (16) |
| 3 | 133 | 133 (100) | 0 (0) |
| 4 | 388 | 315 (81) | 73 (19) |
| 5 | 39 | 23 (59) | 16 (41) |
| 6 | 529 | 479 (91) | 50 (9) |
| 7 | 261 | 220 (84) | 41 (16) |
| 8 | 87 | 56 (64) | 31 (36) |
| 9 | 177 | 79 (45) | 98 (55) |
| 10 | 288 | 75 (26) | 213 (74) |
| Total | 2,241 | 1,705 (76) | 535 (24) |
Numbers in brackets are percentages of total cases for each reviewing radiologist.
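The bracketed percentages above are simply each radiologist's review-completion rate, reviewed/assigned. A minimal sketch of that arithmetic, with the counts taken from the table above (variable names are illustrative, not from the paper):

```python
# Per-radiologist (assigned, reviewed) counts, from the table above.
assigned_reviewed = {
    1: (254, 254), 2: (85, 71), 3: (133, 133), 4: (388, 315), 5: (39, 23),
    6: (529, 479), 7: (261, 220), 8: (87, 56), 9: (177, 79), 10: (288, 75),
}

def completion_rate(assigned: int, reviewed: int) -> float:
    """Percentage of assigned cases that were actually reviewed."""
    return 100.0 * reviewed / assigned

for radiologist, (assigned, reviewed) in assigned_reviewed.items():
    print(f"Radiologist {radiologist}: {completion_rate(assigned, reviewed):.0f}% reviewed")
```

Rounding each rate to the nearest whole percent reproduces the bracketed values in the table (e.g. radiologist 2: 71/85 ≈ 84%; radiologist 10: 75/288 ≈ 26%).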
Assigned and reviewed cases by modality

| Modality | Assigned, n (% of total) | Reviewed, n (%) | Not reviewed, n (%) |
| Radiography | 770 (34) | 611 (79) | 159 (21) |
| Fluoroscopy | 57 (3) | 51 (89) | 6 (11) |
| Mammography | 499 (22) | 385 (77) | 114 (23) |
| Nuclear Medicine | 247 (11) | 222 (90) | 25 (10) |
| Ultrasound | 668 (30) | 436 (65) | 232 (35) |
| Total | 2,241 | 1,705 (76) | 535 (24) |
Scoring of reviewed cases

| Modality | Score 0 | Score 1 | Score 2 | Score 3 | Score 4 | Total reviewed | Non-discrepant (0–1) | Concordant (0–2) | Discrepant (2–4) | Clinically significant (3–4) |
| Radiography (%) | 0 (0) | 605 (99.0) | 2 (0.3) | 4 (0.7) | 0 (0) | 611 | 605 (99.0) | 607 (99.3) | 6 (1.0) | 4 (0.7) |
| Fluoroscopy (%) | 0 (0) | 50 (98.0) | 0 (0) | 0 (0) | 1 (2.0) | 51 | 50 (98.0) | 50 (98.0) | 1 (2.0) | 1 (2.0) |
| Mammography (%) | 0 (0) | 385 (100.0) | 0 (0) | 0 (0) | 0 (0) | 385 | 385 (100.0) | 385 (100.0) | 0 (0) | 0 (0) |
| Nuclear Medicine (%) | 1 (0.4) | 218 (98.2) | 0 (0) | 3 (1.4) | 0 (0) | 222 | 219 (98.6) | 219 (98.6) | 3 (1.4) | 3 (1.4) |
| Ultrasound (%) | 2 (0.4) | 432 (99.1) | 0 (0) | 2 (0.5) | 0 (0) | 436 | 434 (99.5) | 434 (99.5) | 2 (0.5) | 2 (0.5) |
| Total (%) | 3 (0.2) | 1690 (99.1) | 2 (0.1) | 9 (0.5) | 1 (0.1) | 1705 | 1693 (99.3) | 1695 (99.4) | 12 (0.7) | 10 (0.6) |
Numbers in brackets are percentages of each modality assigned the corresponding score. Score definitions are: 0 = positive feedback; 1 = agreement with original report; 2 = error in diagnosis – not usually made; 3 = error in diagnosis – should usually be made; 4 = error in diagnosis – should almost always be made.
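The four derived columns in the scoring table (non-discrepant, concordant, discrepant, clinically significant) are just percentages of cases whose score falls in the corresponding range. A minimal sketch of that calculation, using the totals row above (names are illustrative only):

```python
# Case counts per peer-review score (0-4), from the totals row of the table above.
counts = {0: 3, 1: 1690, 2: 2, 3: 9, 4: 1}
total = sum(counts.values())  # 1705 reviewed cases

def pct(scores):
    """Percentage of reviewed cases whose score falls in `scores`."""
    return 100.0 * sum(counts[s] for s in scores) / total

print(f"Non-discrepant (0-1): {pct((0, 1)):.1f}%")            # 99.3
print(f"Concordant (0-2): {pct((0, 1, 2)):.1f}%")             # 99.4
print(f"Discrepant (2-4): {pct((2, 3, 4)):.1f}%")             # 0.7
print(f"Clinically significant (3-4): {pct((3, 4)):.1f}%")    # 0.6
```

Note that the ranges overlap by design: score 2 counts as both concordant and discrepant, which is why the concordant and discrepant percentages sum to slightly more than 100.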
Comparison of scoring in current study to published data
| Reference | Current | 27 | 4 | 5 | 3 | 19 | 22 |
| Year | 2013 | 1998 | 2004 | 2004 | 2009 | 2012 | 2012 |
| Grades | | | | | | | |
| 0 | 0.2 | NA | NA | NA | NA | NA | NA |
| 1 | 99.1 | 95.6 | 96.3 | 96.5 | 97.1 | 96.2 | 96.5 |
| 2 | 0.1 | 1.4 | 2.9 | NA | 2.5 | 3.6 | NA |
| 3 | 0.5 | NA | NA | NA | 0.3 | 0.2 | NA |
| 4 | 0.1 | NA | NA | NA | 0.1 | 0.0 | NA |
| Non-discrepant (0–1) | 99.3 | 95.6 | 96.3 | 96.5 | 97.1 | 96.2 | 96.5 |
| Concordant (0–2) | 99.4 | 97.0 | 99.2 | NA | 99.6 | 99.8 | NA |
| Discrepant (2–4) | 0.7 | 4.4 | 3.7 | 3.5 | 2.9 | 3.8 | 3.5 |
| Clinically significant discrepancy (3–4) | 0.6 | 3.0 | 0.8 | NA | 0.4 | 0.2 | NA |
NA = not available.
Values are percentages of cases in each scoring category.
*Siegle et al. used a slightly different scoring system, but it has been accepted by RADPEER; values in the table for this paper are as reported in Borgstede [4].
**Soffa et al. used a 4-point rating system with nominally different definitions of each score, but they are very close to the RADPEER system [4,5].