Ryota Nakaya1, Masaki Takao2, Hidetoshi Hamada3, Takashi Sakai3, Nobuhiko Sugano1. 1. Department of Orthopaedic Medical Engineering, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan. 2. Department of Orthopaedic Medical Engineering, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan. Electronic address: masaki-tko@umin.ac.jp. 3. Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan.
Abstract
BACKGROUND: The Dorr classification is widely used to evaluate femoral bone quality, but it has no clear quantitative criteria. This study aimed to evaluate the reproducibility of the Dorr classification and to identify quantitative indices on plain radiographs suitable for objective classification.
HYPOTHESIS: Reproducibility of the Dorr classification is influenced by the clinical experience of the examiners, and radiographic indices are required for this classification.
MATERIALS AND METHODS: One hundred and one patients were examined using their preoperative plain anteroposterior and lateral radiographs. To evaluate the reproducibility of the Dorr classification, the Dorr type of each patient was judged twice each by three expert hip surgeons and three junior hip surgeons. The indices measured on the plain radiographs were the canal-to-calcar ratio, cortical index (CI), and canal flare index. Receiver operating characteristic curves were used to evaluate which measured parameters were suitable as indices for the Dorr classification, which was determined by consensus among the three expert hip surgeons.
RESULTS: Regarding intra-examiner reproducibility, kappa coefficients for the three junior hip surgeons were 0.36, 0.62, and 0.65, whereas those for the three expert hip surgeons were 0.70, 0.86, and 0.87. Regarding inter-examiner reproducibility, the kappa coefficient for the junior hip surgeons was 0.32, whereas that for the expert hip surgeons was 0.52. The CI on the lateral radiograph had the largest area under the curve (AUC) between types A and B, whereas the CI on the anteroposterior radiograph had the largest AUC between types B and C. The respective cutoff points of the CI on the anteroposterior radiograph were 0.58 between types A and B and 0.49 between types B and C. The respective cutoff points of the CI on the lateral radiograph were 0.45 between types A and B and 0.28 between types B and C.
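The kappa coefficients reported above measure agreement between two sets of categorical ratings beyond what chance alone would produce. As a minimal illustration (not the study's actual statistical software), Cohen's kappa for a single rater pair can be computed from the observed agreement and the chance agreement implied by each rater's marginal frequencies:

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa for two raters grading the same cases.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from the raters' marginals.
    """
    assert len(ratings1) == len(ratings2) and ratings1
    n = len(ratings1)
    # Observed agreement: fraction of cases where the two raters matched.
    p_o = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # Chance agreement: product of each rater's marginal frequency per category.
    c1, c2 = Counter(ratings1), Counter(ratings2)
    p_e = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters assign Dorr types to four femurs.
kappa = cohens_kappa(["A", "A", "B", "B"], ["A", "A", "B", "C"])
print(kappa)  # 0.6, i.e. "moderate" agreement on the Landis-Koch scale
```

The intra-examiner values in the study compare each surgeon's two readings of the same radiographs; the inter-examiner values compare readings across surgeons.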
CONCLUSION: The intra-examiner reproducibility of the Dorr classification ranged from "fair" to "almost perfect", whereas the inter-examiner reproducibility ranged from "fair" to "moderate". Both were influenced by the level of clinical experience of the examiners. The most suitable index for classification using plain radiographs of the hip is the CI on anteroposterior and lateral radiographs. LEVEL OF EVIDENCE: IV, retrospective study.
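The reported cutoffs translate directly into a threshold rule. A minimal sketch of how the anteroposterior CI cutoffs (0.58 between types A and B, 0.49 between types B and C) could be applied is shown below; note that the handling of values falling exactly on a cutoff is an assumption, as the abstract does not specify boundary behavior:

```python
def dorr_type_from_ci_ap(ci_ap):
    """Assign a Dorr type from the cortical index on the AP radiograph,
    using the study's reported cutoffs (0.58 for A/B, 0.49 for B/C).

    A higher CI means thicker cortices relative to canal width, hence
    better bone quality (type A). Assigning a value exactly at a cutoff
    to the higher-quality type is an assumption for illustration only.
    """
    if ci_ap >= 0.58:
        return "A"
    if ci_ap >= 0.49:
        return "B"
    return "C"

# Hypothetical CI values on either side of the two cutoffs.
print(dorr_type_from_ci_ap(0.62))  # A
print(dorr_type_from_ci_ap(0.52))  # B
print(dorr_type_from_ci_ap(0.40))  # C
```

The same pattern applies to the lateral-radiograph cutoffs (0.45 and 0.28); in practice the AP and lateral indices would be read together.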