Hannah Bougher1, Petra Buttner2, Jonathon Smith3, Jennifer Banks1, Hyun Su Na3, David Forrestal4, Clare Heal1.
Abstract
HYPOTHESIS: This study aimed to examine whether three-dimensionally printed models (3D models) could improve interobserver and intraobserver agreement when classifying proximal humeral fractures (PHFs) using the Neer system. We hypothesized that 3D models would improve interobserver and intraobserver agreement compared with x-ray, two-dimensional (2D), and three-dimensional (3D) computed tomography (CT), and that agreement using 3D models would be higher for registrars than for consultants.
Keywords: 3D modeling; Neer system; Proximal humeral fracture; fracture classification; interobserver agreement; intraobserver agreement
Year: 2020 PMID: 33681838 PMCID: PMC7910723 DOI: 10.1016/j.jseint.2020.10.019
Source DB: PubMed Journal: JSES Int ISSN: 2666-6383
Figure 1. Representative images of proximal humeral fractures that observers were asked to classify using the Neer system: (A) X-ray, (B) 2D CT, (C) 3D CT, (D) 3D printed model. 2D, two-dimensional; 3D, three-dimensional; CT, computed tomography.
Landis and Koch criteria
| Kappa value | Agreement |
|---|---|
| Less than 0.00 | Poor agreement |
| 0.00-0.20 | Slight agreement |
| 0.21-0.40 | Fair agreement |
| 0.41-0.60 | Moderate agreement |
| 0.61-0.80 | Substantial agreement |
| 0.81-1.00 | Almost perfect agreement |
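For readers who want to apply these bands programmatically, the sketch below maps a kappa value onto the Landis and Koch descriptors in the table above; the function name and the example value are illustrative and are not taken from the study's analysis.

```python
# Minimal sketch: translate a kappa value into its Landis and Koch band.
# The function name is illustrative; the thresholds follow the table above.

def landis_koch_band(kappa: float) -> str:
    """Return the Landis and Koch agreement descriptor for a kappa value."""
    if kappa < 0.00:
        return "Poor agreement"
    if kappa <= 0.20:
        return "Slight agreement"
    if kappa <= 0.40:
        return "Fair agreement"
    if kappa <= 0.60:
        return "Moderate agreement"
    if kappa <= 0.80:
        return "Substantial agreement"
    return "Almost perfect agreement"

# Example: the overall 3D-model interobserver kappa reported below (0.47).
print(landis_koch_band(0.47))  # -> "Moderate agreement"
```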
Interobserver agreement
| | X-ray | 2D CT | 3D CT | 3D models |
|---|---|---|---|---|
| Overall (consultants and registrars, n = 14) | ||||
| kappa | ||||
| κ | 0.29 | 0.30 | 0.35 | 0.47 |
| 95% CI | 0.26-0.31 | 0.27-0.33 | 0.33-0.38 | 0.44-0.50 |
| Agreement | Fair | Fair | Fair | Moderate |
| P value | <.001 | <.001 | <.001 | <.001 |
| % agreement | ||||
| % | 57.2 | 57.8 | 58.8 | 66.5 |
| 95% CI | 55.1-59.3 | 55.5-60.2 | 56.7-60.9 | 64.6-68.4 |
| Number of images | 30 | 29 | 28 | 26 |
| Consultants (n = 7) | ||||
| kappa | ||||
| κ | 0.26 | 0.37 | 0.30 | 0.48 |
| 95% CI | 0.20-0.32 | 0.31-0.43 | 0.24-0.36 | 0.42-0.55 |
| Agreement | Fair | Fair | Fair | Moderate |
| P value | <.001 | <.001 | <.001 | <.001 |
| % agreement | ||||
| % | 56.0 | 62.9 | 57.1 | 66.8 |
| 95% CI | 50.2-61.9 | 56.3-69.4 | 52.0-62.3 | 62.3-71.3 |
| Number of images | 30 | 29 | 30 | 27 |
| Registrars (n = 7) | ||||
| kappa | ||||
| κ | 0.35 | 0.25 | 0.39 | 0.51 |
| 95% CI | 0.29-0.40 | 0.19-0.31 | 0.34-0.45 | 0.44-0.57 |
| Agreement | Fair | Fair | Fair | Moderate |
| P value | <.001 | <.001 | <.001 | <.001 |
| % agreement | ||||
| % | 60.3 | 54.1 | 60.1 | 68.3 |
| 95% CI | 57.3-63.4 | 49.0-59.3 | 55.1-65.0 | 64.4-72.3 |
| Number of images | 30 | 30 | 28 | 29 |
2D, two-dimensional; 3D, three-dimensional; CI, confidence interval; CT, computed tomography.
Agreement has been defined using the Landis and Koch criteria (Table I).
A P value of less than .05 indicates that kappa was statistically significantly different from 0.
Figure 2. Interobserver agreement using Landis and Koch criteria for each modality and group: all observers (top), consultants (middle), and registrars (bottom).
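The paper does not publish its analysis code, but interobserver agreement among multiple raters is commonly estimated with a Fleiss-style kappa. The sketch below shows one way to reproduce that kind of calculation with statsmodels; the rating matrix is an illustrative placeholder, not study data.

```python
# Hedged sketch of a multi-rater (interobserver) kappa, assuming a Fleiss-style
# statistic as implemented in statsmodels. The ratings below are placeholders.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per fracture image, one column per observer; values are Neer
# categories encoded as integers (illustrative data only).
ratings = np.array([
    [0, 0, 1, 0],
    [2, 2, 2, 3],
    [1, 1, 1, 1],
    [3, 2, 3, 3],
])

table, _ = aggregate_raters(ratings)  # image x category count table
kappa = fleiss_kappa(table)           # overall interobserver kappa
print(f"Fleiss' kappa = {kappa:.2f}")
```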
Intraobserver agreement
| | X-ray | 2D CT | 3D CT | 3D models |
|---|---|---|---|---|
| Consultants and registrars (n = 7) | ||||
| kappa | ||||
| Mean κ | 0.45 | 0.41 | 0.43 | 0.60 |
| Range of κ | 0.32-0.63 | 0.10-0.64 | 0.36-0.53 | 0.45-0.89 |
| 95% CI | 0.36-0.54 | 0.25-0.57 | 0.37-0.48 | 0.46-0.73 |
| Agreement | Moderate | Moderate | Moderate | Moderate |
| Range of P values | | | | |
| Mean % agreement | ||||
| % | 70.5 | 65.6 | 65.5 | 75.0 |
| 95% CI | 62.8-78.1 | 53.9-77.3 | 61.2-69.9 | 66.4-83.6 |
| Number of images | 30 | 30 | 30 | 30 |
| Consultants (n = 3) | ||||
| kappa | ||||
| Mean κ | 0.46 | 0.45 | 0.39 | 0.69 |
| Range of κ | 0.42-0.48 | 0.33-0.56 | 0.36-0.44 | 0.51-0.89 |
| Agreement | Moderate | Moderate | Moderate | Substantial |
| Mean % agreement | 74.4 | 70.8 | 65.6 | 82.0 |
| Registrars (n = 4) | ||||
| kappa | ||||
| Mean κ | 0.44 | 0.37 | 0.45 | 0.52 |
| Range of κ | 0.32-0.63 | 0.10-0.64 | 0.40-0.53 | 0.45-0.60 |
| Agreement | Moderate | Fair | Moderate | Moderate |
| Mean % agreement | 67.5 | 61.7 | 65.5 | 69.8 |
2D, two-dimensional; 3D, three-dimensional; CI, confidence interval; CT, computed tomography.
Agreement has been defined using the Landis and Koch criteria (Table I).
A P value of less than .05 indicates that kappa was statistically significantly different from 0; the range of P values reflects the assessment of kappa for each observer.
29 images for one observer.
29 images for two observers.
Figure 3. Intraobserver agreement using Landis and Koch criteria for each modality for all observers.
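Intraobserver agreement compares an observer's first and second classification of the same images; a common approach, sketched below, is Cohen's kappa per observer followed by averaging across observers, which mirrors the mean kappa reported in the table above. The observers and ratings here are illustrative, not study data.

```python
# Hedged sketch of intraobserver agreement: Cohen's kappa between the same
# observer's two classification rounds, averaged across observers.
from sklearn.metrics import cohen_kappa_score

# Neer classifications (encoded as integers) from two sittings per observer
# (illustrative data only).
round1 = {"obs_A": [0, 2, 1, 3, 2], "obs_B": [1, 2, 1, 3, 0]}
round2 = {"obs_A": [0, 2, 1, 2, 2], "obs_B": [1, 2, 2, 3, 0]}

kappas = [cohen_kappa_score(round1[o], round2[o]) for o in round1]
print("Per-observer kappa:", [round(k, 2) for k in kappas])
print("Mean intraobserver kappa:", round(sum(kappas) / len(kappas), 2))
```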