| Literature DB >> 24285989 |
Kowshik Jain, Ravindra Badge, Rajeev Bansal, Anil Gambhir, Mukesh Hemmady.
Abstract
The aim of this study was to evaluate the intraobserver and interobserver variability in determining socket version using the wire marker. One hundred anteroposterior pelvis radiographs of cemented primary total hip replacements were each reviewed twice by two orthopaedic consultants and two registrars. Intrarater and interrater reliability were assessed using Cohen's kappa. Intrarater kappas were 0.78 and 0.80 for the two junior doctors, and 0.73 and 0.62 for the two consultants. Interrater kappas were 0.60 between the two consultants and 0.63 between the two junior doctors. The kappa between Consultant A and Junior Doctor A was 0.61, between Consultant A and Junior Doctor B 0.59, between Consultant B and Junior Doctor A 0.53, and between Consultant B and Junior Doctor B 0.46. Intrarater reliability was substantial for the two junior doctors and the two consultants. Interrater reliability was moderate to substantial between the two consultants, between the two junior doctors, and between each consultant/junior doctor pair.
Keywords: Interrater; Intrarater; Socket; THR; Version.
Year: 2013 PMID: 24285989 PMCID: PMC3841964 DOI: 10.2174/1874325001307010630
Source DB: PubMed Journal: Open Orthop J ISSN: 1874-3250
Landis and Koch’s Interpretation of Kappa
| Value | Interpretation |
|---|---|
| Below 0.0 | Poor |
| 0.00-0.20 | Slight |
| 0.21-0.40 | Fair |
| 0.41-0.60 | Moderate |
| 0.61-0.80 | Substantial |
| 0.81-1.00 | Almost perfect |
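As a minimal sketch of how the reliability statistics above are computed, the snippet below implements Cohen's kappa for two raters, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance from each rater's marginal grade frequencies, together with the Landis and Koch interpretation. The example ratings are hypothetical, not data from this study.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters graded identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's marginal grade frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(freq_a[g] * freq_b[g] for g in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

def landis_koch(kappa):
    """Landis and Koch verbal interpretation of a kappa value."""
    if kappa < 0.0:
        return "Poor"
    for cutoff, label in [(0.20, "Slight"), (0.40, "Fair"),
                          (0.60, "Moderate"), (0.80, "Substantial"),
                          (1.00, "Almost perfect")]:
        if kappa <= cutoff:
            return label

# Hypothetical version gradings (1 = anteversion, 2 = neutral, 3 = retroversion)
rater_a = [1, 1, 1, 2, 2, 3, 1, 1, 2, 1]
rater_b = [1, 1, 2, 2, 2, 3, 1, 1, 1, 1]
k = cohens_kappa(rater_a, rater_b)
print(round(k, 2), landis_koch(k))
```

Note that the published confidence intervals in the table below would typically come from an asymptotic standard error or a bootstrap, which this sketch omits.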
Overall Frequency of Ratings
| Grade | Frequency | Percent |
|---|---|---|
| 1 (anteversion) | 646 | 80.75 |
| 2 (neutral) | 126 | 15.75 |
| 3 (retroversion) | 27 | 3.38 |
| 4 (cannot say) | 1 | 0.13 |
| Total | 800 | 100.00 |
Frequency of Ratings by Doctor and Rating
| Grade | CA1 | CA2 | CB1 | CB2 | JDA1 | JDA2 | JDB1 | JDB2 | Total |
|---|---|---|---|---|---|---|---|---|---|
| 1 (anteversion) | 81 | 78 | 88 | 92 | 78 | 83 | 74 | 72 | 646 |
| 2 (neutral) | 16 | 19 | 7 | 5 | 19 | 14 | 21 | 25 | 126 |
| 3 (retroversion) | 3 | 3 | 4 | 3 | 3 | 3 | 5 | 3 | 27 |
| 4 (cannot say) | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
| Total | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 800 |
Abbreviations: CA1- Consultant A 1st rating; CA2- Consultant A 2nd rating; CB1- Consultant B 1st rating; CB2- Consultant B 2nd rating; JDA1- Junior doctor A 1st rating; JDA2- Junior doctor A 2nd rating; JDB1- Junior doctor B 1st rating; JDB2- Junior doctor B 2nd rating.
Pairwise Kappa Coefficients (95% Confidence Interval) Between Doctors; Diagonal Entries are Intra-Rater Coefficients; Off-Diagonal Entries are Inter-Rater Coefficients
| | CA | CB | JDA | JDB |
|---|---|---|---|---|
| CA | 0.73 (0.54 to 0.88) | 0.60 (0.38 to 0.79) | 0.61 (0.40 to 0.79) | 0.59 (0.40 to 0.76) |
| CB | | 0.62 (0.33 to 0.85) | 0.53 (0.32 to 0.73) | 0.46 (0.26 to 0.65) |
| JDA | | | 0.78 (0.60 to 0.92) | 0.63 (0.45 to 0.79) |
| JDB | | | | 0.80 (0.67 to 0.92) |