OBJECTIVE: The aim of this study was to evaluate the interobserver reliability and intraobserver reproducibility of the Universal, AO, Fernandez, and Frykman classifications for distal radius fractures. METHODS: Fifty standard sets of posteroanterior and lateral roentgenograms of displaced distal radius fractures were classified twice, at a 2-month interval, by two groups of evaluators. The first group consisted of 10 orthopedic surgeons with a minimum of 5 years of experience; the second consisted of 10 orthopedic residents in their first two years of training. Interobserver reliability and intraobserver reproducibility were assessed with Cohen's kappa statistic. RESULTS: None of the classifications reached good or very good reliability. The Fernandez classification showed moderate interobserver agreement, whereas the other classifications showed only fair agreement. All classifications showed fair intraobserver agreement, although the Frykman and Fernandez classifications performed better than the others. CONCLUSION: No classification system was superior in terms of reliability and reproducibility, and all four classifications showed insufficient reliability and reproducibility.
Authors: Valdênia das Graças Nascimento; Antonio Carlos da Costa; Diego Figueira Falcochio; Leonardo Depiere Lanzarin; Sérgio Luiz Checchia; Ivan Chakkour Journal: Hand (N Y) Date: 2015-06-04
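The agreement statistic used in the methods above is Cohen's kappa, which corrects observed agreement between two raters for the agreement expected by chance. A minimal sketch of the computation follows; the fracture labels and rater data are illustrative assumptions, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases where both raters assign the same class.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal class frequencies, summed.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical classifications of 10 fractures by two observers (labels A/B/C stand
# in for classification categories such as AO types).
a = ["A", "A", "B", "B", "C", "C", "A", "B", "C", "A"]
b = ["A", "B", "B", "B", "C", "A", "A", "B", "C", "C"]
print(round(cohens_kappa(a, b), 3))
```

On conventional benchmarks, kappa below 0.40 is "fair", 0.41–0.60 "moderate", and above 0.60 "good to very good", which is the scale the results above refer to.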