
Intra and interobserver concordance of the AO classification system for fractures of the long bones in the pediatric population.

Artur Yudi Utino1, Douglas Rene de Alencar1, Leonardo Fernadez Maringolo1, Julia Machado Negrão1, Francesco Camara Blumetti1, Eiffel Tsuyoshi Dobashi1.   

Abstract

OBJECTIVE: The AO classification for fractures of the long bones in the pediatric population was developed and validated in 2006. However, the complexity of this system has limited its use in clinical practice and few studies in the literature have evaluated its reproducibility and applicability. The present study had the objective of determining the intra and interobserver agreement using the pediatric AO system, among physicians with different levels of experience.
METHODS: After performing the sample size calculation, 108 consecutive radiographs of long-bone fractures in patients aged 0-16 years were selected from the digital files of a quaternary-level hospital. The radiographs were classified by five examiners with different levels of experience, after prior explanation of the system. A chart containing images from the classification was available for consultation. Each observer performed the evaluations at two different times. The Fleiss kappa index was used to ascertain the intra and interobserver agreement.
RESULTS: Intraobserver agreement that was at least substantial was obtained for all the items of the classification and it reached excellent levels for all observers in relation to five of the seven items considered. The interobserver evaluation presented excellent levels of agreement in two items, substantial in two items, moderate to substantial in one item and poor to moderate in one item. No influence from the observer's experience was observed with regard to obtaining higher or lower levels of agreement, either in the intraobserver or in the interobserver evaluation.
CONCLUSIONS: In this study, the intra and interobserver agreement for the pediatric AO classification system was considered to be good or excellent for the parameters of bone, segment, paired bone, subsegment, pattern and displacement. However, the intra and interobserver agreement was statistically unsatisfactory for the parameter of severity/side of avulsion. The levels of agreement obtained did not depend on the observer's level of experience within pediatric orthopedics.

Keywords:  Bone fractures/classification; Child; Method; Orthopedics

Year:  2015        PMID: 26535194      PMCID: PMC4610979          DOI: 10.1016/j.rboe.2015.08.001

Source DB:  PubMed          Journal:  Rev Bras Ortop        ISSN: 2255-4971


Introduction

Fractures of the long bones are the main reason for hospitalization within pediatric orthopedics. Classification of these fractures is essential for determining their epidemiology, facilitating communication between orthopedists and defining treatment algorithms. Several classification systems have been developed, based on the location and morphology of the injuries, to categorize each type of long-bone injury in children. The AO classification for long-bone fractures in adults is not used for the pediatric population because it does not take into consideration bone elasticity, the presence of the growth plate or the anatomical characteristics of the epiphysis. The same trauma mechanism may produce different fracture patterns in children, such as plastic deformities, greenstick fractures and complex fractures. Another important characteristic is the greater fragility of the growth plate, which is less resistant than the surrounding bone and is therefore more easily injured.

Any orthopedic classification system needs to be clinically relevant, reproducible and valid. To meet these objectives, the system needs to go through three investigative stages, as proposed by Audigé et al. In the case of pediatric fractures, the first stage should involve experienced pediatric orthopedists, in order to define a common language for describing the fracture patterns and the classification process. The second stage relates to developing international multicenter agreement studies that involve surgeons with different levels of experience. The third stage relates to implementation of a prospective clinical study.

The pediatric AO classification takes into consideration the AO system for long-bone fractures in adults and the most relevant pediatric fractures. Both the location of the fracture and its morphology are taken into consideration. The bone is subdivided into three segments: proximal (epiphysis + metaphysis), diaphyseal and distal (epiphysis + metaphysis).
Regarding morphology, the "child code" and the codes for fracture severity and displacement, which depend on the type of fracture, are considered. The authors of the pediatric AO classification have already reached the third stage of the validation process, i.e. application of the proposed system within a prospective clinical study. However, the complexity of this method and the difficulty of incorporating it into clinical practice lead us to believe that studies evaluating its reproducibility and accuracy are still needed, especially among less experienced orthopedists. We therefore conceived this study with the aim of estimating the intra and interobserver agreement of the AO classification system for long bones in children, among examiners with different levels of experience.

Materials and methods

This research project was submitted for assessment and approval to the research ethics committee via the Brazil Platform (approval number: 29073114.3.0000.5505).

Sample calculation

Firstly, we determined the number of radiographs that would be needed to obtain kappa values greater than 0.70, through tests with a significance level of 5% and power of 80%. The calculation showed that we would need to evaluate at least 95 radiographs. In the formula used for this calculation, z(alpha) and z(beta) are obtained from the normal distribution; Q0 and Q1 are obtained from the table of the reference article for the sample size; and K0 and K1 are the kappa values under the hypotheses of the test.
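The formula itself is not reproduced in the source. A minimal sketch of such a calculation, assuming the commonly used kappa sample-size expression n = ((z_alpha*sqrt(Q0) + z_beta*sqrt(Q1)) / (K1 - K0))^2 (an assumption on our part; the actual formula image is elided), could look like this. The Q0/Q1 values below are placeholders, not the values the authors read from their reference table:

```python
import math
from statistics import NormalDist

def kappa_sample_size(alpha: float, power: float,
                      k0: float, k1: float,
                      q0: float, q1: float) -> int:
    """Subjects needed to detect kappa >= k1 against H0: kappa = k0.

    Assumes n = ((z_a*sqrt(Q0) + z_b*sqrt(Q1)) / (k1 - k0))**2, where
    Q0 and Q1 are variance factors taken from published tables.
    """
    z_a = NormalDist().inv_cdf(1 - alpha)  # one-sided test at level alpha
    z_b = NormalDist().inv_cdf(power)
    n = ((z_a * math.sqrt(q0) + z_b * math.sqrt(q1)) / (k1 - k0)) ** 2
    return math.ceil(n)

# Illustrative call only: Q0 = Q1 = 0.5 are placeholder variance factors.
print(kappa_sample_size(alpha=0.05, power=0.80, k0=0.70, k1=0.85,
                        q0=0.5, q1=0.5))  # -> 138 with these placeholder inputs
```

As expected, a wider gap between the null and alternative kappa values sharply reduces the number of radiographs required.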

Sample selection

The examinations were obtained consecutively between January 2013 and March 2014, with prior authorization, from the imaging diagnostics department of a quaternary-level university hospital. All the radiographs produced during this period that were identified in the digital files as images of segments of the appendicular skeleton were obtained for evaluation. These segments included the pelvis, thigh, knee, lower leg, ankle, shoulder, upper arm, elbow, forearm and wrist. Examinations performed on children aged 0–16 years who presented long-bone fractures were included. The radiographs were selected, by two orthopedists who did not participate in the classification process, so as to include examinations with two views and good radiographic quality. Thus, 119 radiographs of long-bone fractures were collected, in anteroposterior and lateral views. Among these, six were excluded due to poor quality and five because the growth plate had already closed. The study therefore included 108 radiographs.

Process of classifying the radiographs

The radiographs were classified by five examiners with different levels of experience. One was at expert level (>10 years of experience as a pediatric orthopedist – examiner 5), one was at advanced level (>5 years – examiner 4), one was at medium level (>1 year – examiner 3) and two were at basic level (general orthopedists – examiners 1 and 2). To minimize bias due to difficulties in interpretation and inexperience with the classification system, the observers were given prior explanations regarding the system used. Furthermore, during the classification process, a brochure containing the entire AO classification for pediatric long-bone fractures was available to each participant. The radiographs were organized in chronological order in a closed digital file. Each of the five observers evaluated and classified the radiographs independently, at two different times, with an interval of 15 days between evaluations, and was given all the time needed. The participants were instructed not to discuss the classification system until the end of the classification stage, and they did not have access to the patients' histories or to any clinical data.

Statistical analysis

The statistical analysis of the results was performed by a professional specializing in medical statistics. The Fleiss kappa test was used to assess the intra and interobserver agreement for each scale.5, 6 The Fleiss kappa coefficient is considered the most appropriate method for situations in which multiple evaluations are made and the scale evaluated presents several categories. The tests were interpreted in accordance with Altman as "proportional agreement with correction for chance". The kappa agreement coefficient ranges from +1 (perfect agreement), through 0 (agreement equal to chance), to –1 (complete discordance). There are no universally accepted definitions of agreement levels, but some studies have suggested that results in the range of 0–0.2 indicate very low agreement; 0.21–0.40, poor agreement; 0.41–0.60, moderate agreement; and 0.61–0.80, substantial agreement. Values greater than 0.80 are considered to be practically perfect agreement.4, 7, 8, 9
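Since agreement was computed with the Fleiss kappa and interpreted with the bands above, a minimal Python sketch (our own, illustrative; real analyses would normally use a statistical package) shows both the coefficient and the band mapping:

```python
from typing import List

def fleiss_kappa(counts: List[List[int]]) -> float:
    """Fleiss' kappa for counts[subject][category] = number of raters
    who placed that subject in that category (same rater total per subject)."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    total = n_subjects * n_raters
    # Overall proportion of assignments to each category.
    p_cat = [sum(row[j] for row in counts) / total
             for j in range(len(counts[0]))]
    # Mean per-subject agreement among rater pairs.
    p_bar = sum((sum(c * c for c in row) - n_raters)
                / (n_raters * (n_raters - 1))
                for row in counts) / n_subjects
    p_exp = sum(p * p for p in p_cat)  # chance agreement
    return (p_bar - p_exp) / (1 - p_exp)

def agreement_band(kappa: float) -> str:
    """Interpretation bands as quoted in the Methods section."""
    if kappa <= 0.20:
        return "very low"
    if kappa <= 0.40:
        return "poor"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "practically perfect"

# Two subjects, two categories, three raters in full agreement: kappa = 1.0.
print(fleiss_kappa([[3, 0], [0, 3]]), agreement_band(0.75))
```

The chance-correction term p_exp is what distinguishes kappa from raw percentage agreement: raters who agree no more often than category frequencies predict score near zero.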

Fracture classification system

The overall structure of the classification is based on the location of the fracture and its morphology. The fracture locations covered are the different long bones and their respective segments and subsegments. The morphology of the fracture is described by a specific code that represents the fracture pattern, with a code for the severity and an additional code that is used for certain types of fracture (displaced supracondylar fractures of the humerus, displaced fractures of the head and neck of the radius and fractures of the femoral neck). The numbering system for the long bones (1–4) and for the segments (proximal = 1, diaphyseal = 2 and distal = 3) is similar to that of the AO system described by Müller for fractures of the long bones in adults. It differs in relation to the coding of malleolar fractures, which are coded as fractures of the distal tibia or fibula. Moreover, the definitions of the three bone segments differ from those for adults. The letters R, U, T and F refer to the radius, ulna, tibia and fibula and are added to the code for the segment, in the case of paired bones, when only one bone is fractured or when both bones are fractured but with different patterns. With regard to the subsegments, segments 1 and 3 are subdivided into two subsegments: the epiphysis (E) and the metaphysis (M). Segment 2 corresponds to the diaphyseal subsegment (D). The metaphysis is defined as a square whose sides have the same length as the widest part of the growth plate. For paired bones such as the radius/ulna and tibia/fibula, both bones should be included in the square. The proximal femur is an exception: its metaphysis is not defined as a square but is located between the growth plate and the subtrochanteric line. If the center of the fracture line is located inside the abovementioned square, it is a metaphyseal fracture. If the epiphysis and the respective growth plate (physis) are involved, it is an epiphyseal fracture.
Intra and extra-articular ligament avulsions are epiphyseal and metaphyseal injuries, respectively. A certain number of fracture patterns that are important in children are described by the so-called "child code". These fracture patterns are specific to the subsegments in which they are located and are thus grouped as E, M or D. This code also takes into consideration some internationally accepted classification systems for pediatric fractures (such as the Salter–Harris classification).3, 10, 12 The severity code distinguishes between two grades: simple (.1) and multifragmented (.2). To describe the side of an avulsion, when necessary, the letter M indicates medial ligament avulsion and the letter L, lateral. Supracondylar fractures of the humerus, which are classified as 13-M/3, are described using an additional code for the degree of displacement (I–IV), very similar to the Gartland classification. When both paired bones (radius/ulna or tibia/fibula) present the same fracture pattern, they are documented by a single classification code; in this case, the severity code is that of the more severely fractured bone. When only one bone is fractured, a lower-case letter defining this bone (r, u, t or f) is added to the code for the segment. For example, 22u describes an isolated diaphyseal fracture of the ulna. When the two bones are fractured with different patterns, each fracture is classified separately, with the corresponding lower-case letter included. For example, a complete spiral fracture of the radius with plastic deformity of the ulna is classified as 22r-D/5.1 and 22u-D/1.1. Fractures of the head and neck of the radius are described by an additional code (I–III) that takes into account the angle and grade of displacement.
Fractures of the femoral neck are proximal metaphyseal fractures (M), with an intertrochanteric line that limits the metaphysis. These metaphyseal fractures can be divided into three types, which are represented by an additional code (I–III) that takes into account the position of the fracture in the proximal metaphysis: transcervical, basicervical and transtrochanteric.
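As an illustration of how the elements above combine into a code, here is a hedged Python sketch. The dataclass and its field names are our own invention, not part of the AO documentation, and it covers only the simple cases described in the text:

```python
from dataclasses import dataclass

@dataclass
class PediatricAOCode:
    """Illustrative assembly of a pediatric AO code from the elements
    described in the text; the class and fields are ours, not official."""
    bone: int        # 1-4, as in Mueller's adult AO numbering
    segment: int     # 1 = proximal, 2 = diaphyseal, 3 = distal
    subsegment: str  # "E", "M" or "D"
    child_code: int  # pattern number within the subsegment
    severity: int    # 1 = simple, 2 = multifragmented
    paired: str = "" # "r", "u", "t" or "f" when only one paired bone is broken
    extra: str = ""  # additional code (e.g. displacement grade I-IV), if any

    def code(self) -> str:
        base = (f"{self.bone}{self.segment}{self.paired}"
                f"-{self.subsegment}/{self.child_code}.{self.severity}")
        return base + (f" {self.extra}" if self.extra else "")

# The example pair from the text: a complete spiral fracture of the radius
# and a plastic deformity of the ulna, classified separately.
print(PediatricAOCode(2, 2, "D", 5, 1, paired="r").code())  # 22r-D/5.1
print(PediatricAOCode(2, 2, "D", 1, 1, paired="u").code())  # 22u-D/1.1
```

The lower-case paired-bone letter sits between the segment digit and the hyphen, exactly as in the 22u example in the text.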

Results

Intraobserver agreement

The data relating to the statistical evaluation on intraobserver agreement and the respective results according to the Fleiss kappa index are shown in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7. Each item that forms part of the classification was analyzed independently and is presented in a specific table.
Table 1

Statistical analysis on intraobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the bone. CI, confidence interval.

Bone | Fleiss kappa index | 95% CI (lower–upper)
Examiner 1 | 1 | 1–1
Examiner 2 | 1 | 1–1
Examiner 3 | 1 | 1–1
Examiner 4 | 0.99 | 0.9704–1
Examiner 5 | 1 | 1–1
Table 2

Statistical analysis on intraobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the segment. CI, confidence interval.

Segment | Fleiss kappa index | 95% CI (lower–upper)
Examiner 1 | 0.9141 | 0.8461–0.9822
Examiner 2 | 1 | 1–1
Examiner 3 | 0.9864 | 0.9597–1
Examiner 4 | 0.9864 | 0.96–1
Examiner 5 | 1 | 1–1
Table 3

Statistical analysis on intraobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the paired bone. CI, confidence interval.

Paired bone | Fleiss kappa index | 95% CI (lower–upper)
Examiner 1 | 0.95 | 0.8935–1
Examiner 2 | 0.9811 | 0.9441–1
Examiner 3 | 0.9199 | 0.8429–0.9918
Examiner 4 | 1 | 1–1
Examiner 5 | 1 | 1–1
Table 4

Statistical analysis on intraobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the subsegment. CI, confidence interval.

Subsegment | Fleiss kappa index | 95% CI (lower–upper)
Examiner 1 | 0.8467 | 0.7495–0.9439
Examiner 2 | 1 | 1–1
Examiner 3 | 0.9890 | 0.9673–1
Examiner 4 | 0.9483 | 0.8953–1
Examiner 5 | 0.8685 | 0.7890–0.9480
Table 5

Statistical analysis on intraobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the pattern. CI, confidence interval.

Pattern | Fleiss kappa index | 95% CI (lower–upper)
Examiner 1 | 0.8035 | 0.7110–0.8959
Examiner 2 | 0.9142 | 0.8496–0.9788
Examiner 3 | 0.9612 | 0.9279–0.9945
Examiner 4 | 0.9113 | 0.8444–0.9781
Examiner 5 | 0.8597 | 0.7828–0.9367
Table 6

Statistical analysis on intraobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the severity and side of the avulsion. CI, confidence interval.

Severity and side of the avulsion | Fleiss kappa index | 95% CI (lower–upper)
Examiner 1 | 0.8178 | 0.6196–1
Examiner 2 | 0.6524 | 0.2857–1
Examiner 3 | 0.7391 | 0.4650–1
Examiner 4 | 0.8347 | 0.6776–0.9917
Examiner 5 | 0.7554 | 0.5899–0.9209
Table 7

Statistical analysis on intraobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the displacement. CI, confidence interval.

Displacement | Fleiss kappa index | 95% CI (lower–upper)
Examiner 1 | 0.9068 | 0.7632–1
Examiner 2 | 1 | 1–1
Examiner 3 | 0.9524 | 0.9025–1
Examiner 4 | 0.8779 | 0.7560–0.9998
Examiner 5 | 0.9361 | 0.8716–1
In general terms, at least substantial agreement was found for practically all the items addressed in the classification. Excellent agreement levels were obtained by all the observers for the items of bone, segment, paired bone, subsegment, pattern and displacement. On the other hand, severity and side of avulsion presented substantial agreement for three observers and excellent agreement for the other two. Lastly, greater observer experience did not necessarily imply a higher level of agreement.

Interobserver agreement

Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14 show the results from the Fleiss kappa index relating to the interobserver analysis on the first and second assessments by the examiners involved in this study.
Table 8

Statistical analysis on interobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the bone. CI, confidence interval.

Bone | Examiner 2 | Examiner 3 | Examiner 4 | Examiner 5
Examiner 1 | 1 (95% CI: 1–1) | 1 (95% CI: 1–1) | 1 (95% CI: 1–1) | 0.99 (95% CI: 0.97–1)
Examiner 2 | – | 1 (95% CI: 1–1) | 1 (95% CI: 1–1) | 0.99 (95% CI: 0.97–1)
Examiner 3 | – | – | 1 (95% CI: 1–1) | 0.99 (95% CI: 0.97–1)
Examiner 4 | – | – | – | 0.99 (95% CI: 0.97–1)
Table 9

Statistical analysis on interobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the segment. CI, confidence interval.

Segment | Examiner 2 | Examiner 3 | Examiner 4 | Examiner 5
Examiner 1 | 0.8886 (95% CI: 0.81–0.96) | 0.8886 (95% CI: 0.81–0.96) | 0.86 (95% CI: 0.77–0.94) | 0.83 (95% CI: 0.74–0.92)
Examiner 2 | – | 0.9729 (95% CI: 0.93–1) | 0.9727 (95% CI: 0.93–1) | 0.9457 (95% CI: 0.89–0.99)
Examiner 3 | – | – | 0.9454 (95% CI: 0.89–0.99) | 0.9186 (95% CI: 0.85–0.98)
Examiner 4 | – | – | – | 0.9453 (95% CI: 0.89–0.99)
Table 10

Statistical analysis on interobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the paired bone. CI, confidence interval.

Paired bone | Examiner 2 | Examiner 3 | Examiner 4 | Examiner 5
Examiner 1 | 0.7988 (95% CI: 0.69–0.91) | 0.6593 (95% CI: 0.52–0.79) | 0.95 (95% CI: 0.89–1) | 0.95 (95% CI: 0.89–1)
Examiner 2 | – | 0.6439 (95% CI: 0.51–0.78) | 0.8497 (95% CI: 0.75–0.94) | 0.8497 (95% CI: 0.75–0.94)
Examiner 3 | – | – | 0.6510 (95% CI: 0.52–0.78) | 0.6510 (95% CI: 0.52–0.78)
Examiner 4 | – | – | – | 1 (95% CI: 1–1)
Table 11

Statistical analysis on interobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the subsegment. CI, confidence interval.

Subsegment | Examiner 2 | Examiner 3 | Examiner 4 | Examiner 5
Examiner 1 | 0.8378 (95% CI: 0.74–0.94) | 0.8114 (95% CI: 0.71–0.91) | 0.7977 (95% CI: 0.68–0.91) | 0.6445 (95% CI: 0.51–0.78)
Examiner 2 | – | 0.8718 (95% CI: 0.79–0.95) | 0.9585 (95% CI: 0.90–1) | 0.7442 (95% CI: 0.63–0.86)
Examiner 3 | – | – | 0.8318 (95% CI: 0.74–0.93) | 0.7414 (95% CI: 0.62–0.86)
Examiner 4 | – | – | – | 0.7464 (95% CI: 0.64–0.86)
Table 12

Statistical analysis on interobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the pattern. CI, confidence interval.

Pattern | Examiner 2 | Examiner 3 | Examiner 4 | Examiner 5
Examiner 1 | 0.7567 (95% CI: 0.64–0.87) | 0.7118 (95% CI: 0.60–0.82) | 0.7531 (95% CI: 0.64–0.87) | 0.4327 (95% CI: 0.34–0.53)
Examiner 2 | – | 0.7117 (95% CI: 0.59–0.84) | 0.8971 (95% CI: 0.82–0.97) | 0.4534 (95% CI: 0.36–0.55)
Examiner 3 | – | – | 0.7486 (95% CI: 0.63–0.86) | 0.4451 (95% CI: 0.35–0.54)
Examiner 4 | – | – | – | 0.4924 (95% CI: 0.40–0.59)
Table 13

Statistical analysis on interobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the severity and side of the avulsion. CI, confidence interval.

Severity and side of the avulsion | Examiner 2 | Examiner 3 | Examiner 4 | Examiner 5
Examiner 1 | 0.1547 (95% CI: –0.07 to 0.38) | 0.4992 (95% CI: 0.18–0.82) | 0.8347 (95% CI: 0.68–0.99) | 0.3286 (95% CI: 0.09–0.57)
Examiner 2 | – | –0.03 (95% CI: –0.05 to 0) | 0.27 (95% CI: 0–0.54) | 0.1818 (95% CI: 0–0.38)
Examiner 3 | – | – | 0.4296 (95% CI: 0.08–0.77) | 0.0912 (95% CI: –0.07 to 0.25)
Examiner 4 | – | – | – | 0.3223 (95% CI: 0.09–0.55)
Table 14

Statistical analysis on interobserver agreement according to the Fleiss kappa index, described for each examiner and for each of the parameters analyzed in the pediatric AO classification: in this table, the displacement. CI, confidence interval.

Displacement | Examiner 2 | Examiner 3 | Examiner 4 | Examiner 5
Examiner 1 | 0.8160 (95% CI: 0.67–0.96) | 0.7840 (95% CI: 0.65–0.92) | 0.8160 (95% CI: 0.67–0.96) | 0.7584 (95% CI: 0.61–0.91)
Examiner 2 | – | 0.7850 (95% CI: 0.63–0.94) | 1 (95% CI: 1–1) | 0.6611 (95% CI: 0.46–0.86)
Examiner 3 | – | – | 0.8056 (95% CI: 0.66–0.95) | 0.6084 (95% CI: 0.41–0.80)
Examiner 4 | – | – | – | 0.6611 (95% CI: 0.46–0.86)
The interobserver agreement index was considered excellent for the items of bone and segment, and substantial for the items of paired bone and subsegment. For the item of pattern, one observer showed only moderate agreement with the others, two examiners showed excellent agreement with each other, and the remaining comparisons showed substantial agreement. Lastly, the item of severity and side of the injury presented the greatest disparity of results: it reached an excellent agreement index only in the comparison between two of the observers, while the other comparisons ranged from poor to moderate at most. Once again, the results do not allow any correlation between the agreement levels obtained and the observers' experience.

Discussion

The pediatric AO classification is a relatively new method for grouping and standardizing the descriptions of the different types of long-bone fractures in children. Only a very limited number of studies in the orthopedic literature have addressed this topic, which stimulated our group to conduct the present study, with the aim of assessing the applicability and reproducibility of this system within our setting. An ideal classification system should conform to very well defined criteria: being easy to apply, highly reproducible and accurate, and capable of adequately guiding treatment and indicating the prognosis of the injuries.2, 14, 15, 16, 17 In addition, an ideal classification should enable comparisons between the results obtained from different series and allow better documentation of epidemiological data. The AO group put forward a systematic method that covered all long-bone injuries in children, using Müller's classification for adults as its basis. This method is based on an alphanumeric system and aims to categorize the main descriptive elements of these fractures, such as their location and type. This classification was validated in a study published by Slongo et al. and started to be used in studies conducted by the authors who conceptualized it. Until then, each body segment of the immature skeleton had been studied in isolation, and the classifications of the different types of fracture were determined by authors with particular interest in each of the regions studied. For this reason, there was a large number of classifications for childhood and adolescence, guided by different criteria. For growth plate injuries, for example, we can cite the systems of Poland, Bergenfeldt, Aitken, Salter and Harris, and Peterson.
We are aware that this multiplicity of classification methods exists for fractures of a variety of segments of the immature skeleton. However, Slongo et al. emphasized that almost none of these systems have been subjected to proper validation before clinical application. Independently of the classification method, a high level of agreement among the professionals who use it is ideally expected. We observed in our study that for the variables of severity and pattern in the pediatric AO classification system, the level of agreement achieved was lower among some of the examiners. For the variable of pattern, there are nine subtypes for the epiphysis and seven for the diaphysis. We therefore take the view that the large number of options for each of these variables gives each examiner more choices to make, independently of the expertise and/or experience of those involved. The inference we can make is that, despite the logic of the available classification systems, as advocated by their respective authors, they can be considered very complex, regardless of the detailing of each category, and this did not allow an adequate level of agreement between the observers when the system was applied. A smaller number of options might generate a more reliable classification system, but this may not resolve the problem of classifications in general. For example, in the study by Sidor et al., reducing the number of fracture types in order to apply the modified Neer classification for the proximal humerus did not provide any increase in agreement. We believe that, in general, our study presents several important points. Firstly, we brought together a large number of cases (108) presenting great variability of injuries.
We observed that other studies presented series ranging in size from 10 to 275 cases.10, 14 In the studies in the literature that involved this type of analysis, there was an average of five evaluators for every 50 cases. Secondly, our observers had a variety of levels of experience, which also made it possible to ascertain whether the degree of training might interfere with application of the classification system. In our study, greater experience among the examiners did not increase the agreement on the items evaluated, which suggests that the classification system could be used by the entire community of orthopedic surgeons, independently of their experience in managing pediatric fractures. We support the idea that simplified classification systems would be expected to present higher levels of intra and interobserver agreement than the system evaluated in this study. They would also be expected to predict more efficiently what the best treatment method would be and which fracture types would give rise to the lowest late complication rates. Thus, a system that encompasses the predicates of an ideal classification still needs to be designed for long-bone fractures of the immature skeleton; in our opinion, an ideal classification system has not yet been achieved. The complexity of analyzing fractures of the locomotor apparatus during childhood and adolescence is directly related to several factors: age; differences in growth between bone segments; growth patterns; bone remodeling rates; mechanical action on the bone; the state of adjacent structures; differences in growth rates between the proximal and distal growth plates; growth of the epiphysis; the status of the circulation; the energy of the trauma involved; and so on.
The need to understand the influence of all these variables, which change as the locomotor apparatus grows, makes the creation of a single acceptable classification system a very complex task.

Conclusions

In this study, the intra and interobserver agreement for the pediatric AO classification system was considered to be good or excellent for the parameters of bone, segment, paired bone, subsegment, pattern and displacement. However, the intra and interobserver agreement relating to the parameters of severity and side of the avulsion was statistically unsatisfactory.

Conflicts of interest

The authors declare no conflicts of interest.
References (13 in total)

1. Gartland JJ. Management of supracondylar fractures of the humerus in children. Surg Gynecol Obstet. 1959.

2. Viera AJ, Garrett JM. Understanding interobserver agreement: the kappa statistic. Fam Med. 2005.

3. Slongo T, Audigé L, Schlickewei W, Clavert JM, Hunter J. Development and validation of the AO pediatric comprehensive classification of long bone fractures by the Pediatric Expert Group of the AO Foundation in collaboration with AO Clinical Investigation and Documentation and the International Association for Pediatric Traumatology. J Pediatr Orthop. 2006.

4. Audigé L, Bhandari M, Hanson B, Kellam J. A concept for the validation of fracture classifications. J Orthop Trauma. 2005.

5. Martin JS, Marsh JL. Current classification of fractures. Rationale and utility. Radiol Clin North Am. 1997.

6. Slongo T, Audigé L, Clavert JM, Lutz N, Frick S, Hunter J. The AO comprehensive classification of pediatric long-bone fractures: a web-based multicenter agreement study. J Pediatr Orthop. 2007.

7. Sidor ML, Zuckerman JD, Lyon T, Koval K, Cuomo F, Schoenberg N. The Neer classification system for proximal humeral fractures. An assessment of interobserver reliability and intraobserver reproducibility. J Bone Joint Surg Am. 1993.

8. Audigé L, Bhandari M, Kellam J. How reliable are reliability studies of fracture classifications? A systematic review of their methodologies. Acta Orthop Scand. 2004.

9. Peterson HA. Physeal fractures: Part 3. Classification. J Pediatr Orthop. 1994.

10. Meling T, Harboe K, Enoksen CH, Aarflot M, Arthursson AJ, Søreide K. Reliable classification of children's fractures according to the comprehensive classification of long bone fractures by Müller. Acta Orthop. 2012.