
Evaluation of interobserver agreement in Albertoni's classification for mallet finger.

Vinícius Alexandre de Souza Almeida, Carlos Henrique Fernandes, João Baptista Gomes Dos Santos, Francisco Alberto Schwarz-Fernandes, Flavio Faloppa, Walter Manna Albertoni.

Abstract

OBJECTIVE: To measure the reliability of Albertoni's classification for mallet finger.
METHODS: Agreement study. Forty-three radiographs of patients with mallet finger were assessed by 19 responders (12 hand surgeons and seven residents). Injuries were classified according to Albertoni's classification. For the agreement comparison, lesions were grouped both into groups, (A) tendon avulsion, (B) avulsion fracture, (C) fracture of the dorsal lip, and (D) physis injury, and into subgroups (each group divided into two subgroups). Agreement was assessed by Fleiss's modification of the kappa statistic.
RESULTS: Agreement was excellent for Group A (k = 0.95 [0.93–0.97]) and remained good when separated into A1 and A2. Agreement for Group B was moderate (k = 0.42 [0.39–0.44]) and became poor when separated into B1 and B2. In Group C, agreement was good (k = 0.72 [0.70–0.74]) but became moderate when separated into C1 and C2. Agreement for Group D was always poor (k = 0.16 [0.14–0.19]). The general agreement was moderate (k = 0.57 [0.56–0.58]).
CONCLUSION: Based on the interobserver agreement measured by the method used in this research, Albertoni's classification can be considered reproducible.


Keywords:  Acquired hand deformities; Classification; Finger injuries; Reproducibility of results; Rupture; Tendon injuries

Year:  2017        PMID: 29367899      PMCID: PMC5771784          DOI: 10.1016/j.rboe.2017.12.001

Source DB:  PubMed          Journal:  Rev Bras Ortop        ISSN: 2255-4971


Introduction

Lesions of the extensor mechanism of the fingers are among the most prevalent in orthopedic practice. The terminal extensor tendon, formed by the union of two lateral slips, inserts into the dorsal surface of the base of the distal phalanx. Injury to this tendon, or an intra-articular fracture at the base of the distal phalanx, leads to a flexion deformity of the distal interphalangeal joint (DIPJ) known as mallet finger. This lesion mainly affects the young population, is common in sports, and may lead to a significant functional deficit if not treated properly.

Several clinical classifications have been described to categorize this condition. In 1957, Pratt et al. classified mallet finger by etiology: laceration, crushing, and indirect trauma. In 1984, Wehbé and Schneider described a system that categorized these lesions into three types. Doyle et al. described another system widely used in the literature. In Brazil, Albertoni's clinical-radiological classification, described in 1986, is widely used.

A good classification should primarily be written in simple language and provide reliable guidelines to aid treatment, establish prognosis, and reduce the possibility of complications. Moreover, it must be feasible, reliable, and reproducible; the latter characteristic is measured by interobserver agreement.1, 6 A classification is reproducible when several individuals are able to reproduce the same result at any time, anywhere; this makes it possible to compare the results of different centers, with different patients, and the respective outcomes of each type of treatment.

Reproducibility studies are classic in the literature for measuring the quality of classification systems, especially in orthopedics. These studies usually include few observers, owing to the difficulty of maintaining a reliable assessment: the agreement of any classification system worsens as the numbers of observers and categories increase. Limited observer experience with the condition under assessment and multicenter designs also tend to decrease agreement.

No study on the reproducibility of the Albertoni classification was retrieved in the literature, nor any study on the reproducibility of any mallet finger classification. The authors conjectured that this classification has good interobserver agreement. This study aimed to evaluate the interobserver agreement of the Albertoni classification for mallet finger and to quantify its reproducibility in the management of this condition.

Materials

This study was approved by the Research Ethics Committee of the institution where it was conducted (CAAE No. 49960815.8.0000.5505). A questionnaire survey was carried out in which 43 photographs of lateral-view DIPJ radiographs of hands with mallet finger were assessed. All radiographs were considered by the researchers to be of good quality.

The Albertoni classification was presented at the beginning of the questionnaire. It divides the lesions according to the findings on a lateral-view DIPJ radiograph into four types: (A) pure tendon lesion, without fracture; (B) bone avulsion lesion; (C) lesion associated with fracture of the dorsal region of the base of the distal phalanx, comprising one-third or more of the articular surface; and (D) epiphyseal detachment in children. Each type is divided into two subtypes. In types A and B, subtype 1 is characterized by a flexion deformity of less than 30° and subtype 2 by a flexion deformity of 30° or more; deformities of 30° or more indicate injury to the retinacular ligaments and capsular structures in types A2 and B2. Type C is subdivided into C1, congruent joint (stable), and C2, subluxated or dislocated joint (unstable). Type D is subdivided into D1, epiphyseal detachment (Salter-Harris type 1), and D2, fracture-detachment (Salter-Harris type 3).7, 8

Below each photograph of a radiograph, the options A1, A2, B1, B2, C1, C2, and D were presented, so that the observer could choose only one of them; given the rarity of type D, it was not subdivided into D1 and D2. A goniometer and a pen were provided for the measurements needed to determine subtype 1 or 2 in types A and B. It was assumed that all observers were able to measure the angles, as all held specialty degrees in orthopedics and traumatology.

The questionnaire was applied to 19 observers, all from the same institution: 12 hand surgeons and seven hand surgery residents. Each observer answered the questionnaire separately, with no discussion among them.
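As an illustration of how these rules fit together, the sketch below encodes them schematically in Python (the dataclass fields and function name are hypothetical, introduced only for this example):

    # Hypothetical encoding of the Albertoni decision rules described above;
    # the field and function names are illustrative only.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Finding:
        avulsion_fragment: bool        # small avulsed bone fragment (type B)
        dorsal_lip_fracture: bool      # >= 1/3 of the articular surface (type C)
        physeal_injury: bool           # epiphyseal detachment in a child (type D)
        flexion_deformity_deg: float   # measured with a goniometer
        joint_congruent: bool          # DIPJ congruence on the lateral view
        salter_harris: Optional[int] = None  # 1 or 3, for type D only

    def albertoni_type(f: Finding) -> str:
        if f.physeal_injury:
            # D1: epiphyseal detachment (Salter-Harris 1); D2: fracture-detachment (Salter-Harris 3)
            return "D1" if f.salter_harris == 1 else "D2"
        if f.dorsal_lip_fracture:
            # Type C is subdivided by joint congruence, not by the deformity angle
            return "C1" if f.joint_congruent else "C2"
        # Pure tendon lesion (A) vs. bone avulsion (B), subdivided by the angle
        base = "B" if f.avulsion_fragment else "A"
        return base + ("1" if f.flexion_deformity_deg < 30 else "2")

    # e.g. a tendon-only lesion with a 35-degree deformity is classified as A2
    print(albertoni_type(Finding(False, False, False, 35.0, True)))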

Statistical methods

The interobserver agreement was calculated using the Fleiss kappa agreement coefficient, a generalization of Scott's agreement measure for more than two evaluators.9, 10 The standard error, and consequently the confidence intervals, were calculated according to Fleiss' algorithm. The agreement across the seven possible classification options was compared with the agreement obtained when the classification was grouped into four major categories: A1 and A2 into A; B1 and B2 into B; C1 and C2 into C; and D. In this way, it was possible to assess whether the difficulty lay in differentiating the groups (A, B, C, D) or the subgroups (1 and 2). In all evaluations, in addition to the results from all 19 professionals, the results of the residents and of the hand surgeons were also analyzed separately, to assess differences in agreement between levels of professional experience. For all inferential tests, the alpha error was set at 0.05. The k-value ranges from −1 to 1, where 1 means total agreement, −1 means total disagreement, and zero means that the evaluators classified the items at random. The agreement scale of Landis and Koch was adopted (Table 1).
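As a minimal sketch of this computation, assuming Python with numpy and statsmodels (whose fleiss_kappa implements Fleiss's generalization; the standard-error and confidence-interval step of Fleiss' algorithm is not reproduced here), using three illustrative count rows from Table 2:

    import numpy as np
    from statsmodels.stats.inter_rater import fleiss_kappa

    # Rows = radiographs; columns = how many of the 19 observers chose each of
    # the seven options (A1, A2, B1, B2, C1, C2, D). Three rows from Table 2.
    counts = np.array([
        [19,  0, 0, 0,  0, 0, 0],   # radiograph 1
        [ 0,  0, 1, 0, 12, 6, 0],   # radiograph 3
        [ 5, 14, 0, 0,  0, 0, 0],   # radiograph 6
    ])
    print("7-category kappa:", fleiss_kappa(counts, method="fleiss"))

    # Grouping subtypes into the four major categories (A, B, C, D) simply
    # collapses pairs of columns, as in the grouped analysis described above.
    grouped = np.column_stack([
        counts[:, 0] + counts[:, 1],   # A = A1 + A2
        counts[:, 2] + counts[:, 3],   # B = B1 + B2
        counts[:, 4] + counts[:, 5],   # C = C1 + C2
        counts[:, 6],                  # D
    ])
    print("4-category kappa:", fleiss_kappa(grouped, method="fleiss"))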
Table 1

Level of agreement, according to Landis and Koch's classification.

k-value       Strength of agreement
<0.00         No agreement
0.01–0.20     Very poor
0.21–0.40     Poor
0.41–0.60     Moderate
0.61–0.80     Good
0.81–1.00     Excellent
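For reference, the scale can be encoded as a small helper function (a hypothetical convenience, not part of the paper):

    def landis_koch(k: float) -> str:
        """Map a kappa value onto Landis and Koch's strength-of-agreement labels."""
        if k < 0.00:
            return "No agreement"
        if k <= 0.20:
            return "Very poor"
        if k <= 0.40:
            return "Poor"
        if k <= 0.60:
            return "Moderate"
        if k <= 0.80:
            return "Good"
        return "Excellent"

    # e.g. the study's overall coefficient, k = 0.57, maps to "Moderate"
    print(landis_koch(0.57))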

Results

Table 2 presents the responses of the observers.
Table 2

Number of responses for each Albertoni type, for each radiograph.

Radiograph    A1   A2   B1   B2   C1   C2    D
 1            19    0    0    0    0    0    0
 2             0   19    0    0    0    0    0
 3             0    0    1    0   12    6    0
 4            12    1    0    0    0    0    6
 5             0   19    0    0    0    0    0
 6             5   14    0    0    0    0    0
 7            18    1    0    0    0    0    0
 8             0   19    0    0    0    0    0
 9             6   13    0    0    0    0    0
10             0   19    0    0    0    0    0
11             0    0    0    0   13    6    0
12             0    0    7    0    8    4    0
13             0    0   14    1    4    0    0
14             0    0   12    1    6    0    0
15             0    0    0    0   18    1    0
16             0   19    0    0    0    0    0
17            19    0    0    0    0    0    0
18             0    0    3    1    7    8    0
19             0    0    4   10    1    4    0
20             0    0    2    0    9    8    0
21             0    0    0    1   11    7    0
22             0    0    1    1    1   16    0
23             0    0    1    1    9    8    0
24             0    0    0    0    8   11    0
25             0    0   16    0    3    0    0
26             0    0    0    0   18    1    0
27             0    0    0    0   11    8    0
28             0    0    1    0   18    0    0
29             0    0    0    0   18    1    0
30             0    0    2    0   17    0    0
31             0    0    0    0   18    1    0
32             0    0    3    0    2   14    0
33             3   16    0    0    0    0    0
34             0    0    7    0   10    2    0
35             0    0    9    0    7    0    3
36             0    0    8    0    8    0    3
37             0    0    0    0   17    2    0
38            12    0    7    0    0    0    0
39             0    0    1    1    1   16    0
40            19    0    0    0    0    0    0
41             0    0    2    0   14    3    0
42            14    5    0    0    0    0    0
43             0    0    1    0    1   14    3
Regarding the distribution of types per observer group, the most recurrent type was C1 in both the surgeons' and the residents' groups; the least recurrent type was D for surgeons and B2 for residents (Table 3).
Table 3

Distribution of the Albertoni classification types attributed by hand surgeons and residents to 43 images.

Observer       A1          A2          B1          B2         C1           C2          D
               n (%)       n (%)       n (%)       n (%)      n (%)        n (%)       n (%)
Surgeons
 1             6 (14.0)    8 (18.6)    1 (2.3)     0 (0.0)    12 (27.9)    16 (37.2)   0 (0.0)
 2             7 (16.3)    7 (16.3)    1 (2.3)     0 (0.0)    19 (44.2)     7 (16.3)   2 (4.7)
 3             8 (18.6)    6 (14.0)    5 (11.6)    0 (0.0)    15 (34.9)     9 (20.9)   0 (0.0)
 4             5 (11.6)    9 (20.9)    6 (14.0)    1 (2.3)    14 (32.6)     8 (18.6)   0 (0.0)
 5             9 (20.9)    6 (14.0)    1 (2.3)     0 (0.0)    11 (25.6)    16 (37.2)   0 (0.0)
 6             8 (18.6)    6 (14.0)   10 (23.3)    2 (4.7)    14 (32.6)     2 (4.7)    1 (2.3)
 7             8 (18.6)    6 (14.0)    8 (18.6)    0 (0.0)    16 (37.2)     5 (11.6)   0 (0.0)
 8             6 (14.0)    8 (18.6)    7 (16.3)    1 (2.3)    12 (27.9)     8 (18.6)   1 (2.3)
 9             7 (16.3)    7 (16.3)    2 (4.7)     0 (0.0)    22 (51.2)     5 (11.6)   0 (0.0)
 10            5 (11.6)   10 (23.3)    8 (18.6)    5 (11.6)   11 (25.6)     4 (9.3)    0 (0.0)
 11            5 (11.6)    9 (20.9)    7 (16.3)    1 (2.3)    12 (27.9)     8 (18.6)   1 (2.3)
 12            9 (20.9)    6 (14.0)    3 (7.0)     3 (7.0)    18 (41.9)     4 (9.3)    0 (0.0)
Total         83 (16.1)   88 (17.1)   59 (11.4)   13 (2.5)   176 (34.1)    92 (17.8)   5 (1.0)

Residents
 13            7 (16.3)    7 (16.3)   12 (27.9)    0 (0.0)    11 (25.6)     5 (11.6)   1 (2.3)
 14            7 (16.3)    8 (18.6)    2 (4.7)     0 (0.0)    13 (30.2)    11 (25.6)   2 (4.7)
 15            6 (14.0)    9 (20.9)    4 (9.3)     1 (2.3)    11 (25.6)    12 (27.9)   0 (0.0)
 16            6 (14.0)    9 (20.9)    7 (16.3)    1 (2.3)    15 (34.9)     4 (9.3)    1 (2.3)
 17            5 (11.6)    8 (18.6)    5 (11.6)    1 (2.3)    12 (27.9)    10 (23.3)   2 (4.7)
 18            6 (14.0)    8 (18.6)    7 (16.3)    0 (0.0)    17 (39.5)     3 (7.0)    2 (4.7)
 19            7 (16.3)    8 (18.6)    6 (14.0)    1 (2.3)    15 (34.9)     4 (9.3)    2 (4.7)
Total         44 (14.6)   57 (18.9)   43 (14.4)    4 (1.3)    94 (31.2)    49 (16.3)  10 (3.3)

Grand total  127 (15.5)  145 (17.7)  102 (12.6)   17 (2.1)   270 (33.0)   141 (17.3)  15 (1.8)
When the Albertoni classification was grouped into the larger categories A, B, C, and D, the most prevalent type was C, followed by A, B, and D, in both the hand surgeons' and the residents' groups (Table 4).
Table 4

Distribution of the Albertoni classification types, grouped into larger categories, attributed by hand surgeons and residents to 43 images.

Observer       A            B           C            D
               n (%)        n (%)       n (%)        n (%)
Surgeons
 1            14 (32.6)     1 (2.3)    28 (65.1)     0 (0.0)
 2            14 (32.6)     1 (2.3)    26 (60.5)     2 (4.7)
 3            14 (32.6)     5 (11.6)   24 (55.8)     0 (0.0)
 4            14 (32.6)     7 (16.3)   22 (51.2)     0 (0.0)
 5            15 (34.9)     1 (2.3)    27 (62.8)     0 (0.0)
 6            14 (32.6)    12 (27.9)   16 (37.2)     1 (2.3)
 7            14 (32.6)     8 (18.6)   21 (48.8)     0 (0.0)
 8            14 (32.6)     8 (18.6)   20 (46.5)     1 (2.3)
 9            14 (32.6)     2 (4.7)    27 (62.8)     0 (0.0)
 10           15 (34.9)    13 (30.2)   15 (34.9)     0 (0.0)
 11           14 (32.6)     8 (18.6)   20 (46.5)     1 (2.3)
 12           15 (34.9)     6 (14.0)   22 (51.2)     0 (0.0)
Total        171 (33.1)    72 (14.0)  268 (51.9)     5 (1.0)

Residents
 13           14 (32.6)    12 (27.9)   16 (37.2)     1 (2.3)
 14           15 (34.9)     2 (4.7)    24 (55.8)     2 (4.7)
 15           15 (34.9)     5 (11.6)   23 (53.5)     0 (0.0)
 16           15 (34.9)     8 (18.6)   19 (44.2)     1 (2.3)
 17           13 (30.2)     6 (14.0)   22 (51.2)     2 (4.7)
 18           14 (32.6)     7 (16.3)   20 (46.5)     2 (4.7)
 19           15 (34.9)     7 (16.3)   19 (44.2)     2 (4.7)
Total        101 (33.6)    47 (15.6)  143 (47.5)    10 (3.3)

Grand total  272 (33.3)   119 (14.6)  411 (50.3)    15 (1.8)
Table 5 presents the results for the agreement in the Albertoni classification according to the Fleiss agreement coefficient.
Table 5

General agreement coefficient for the evaluation of images with p-value < 0.0001.

Classification   k (95% confidence interval)   Classification   k (95% confidence interval)

Surgeon
 A        0.95 (0.91–0.99)    A1        0.75 (0.71–0.78)
                              A2        0.84 (0.80–0.87)
 B        0.34 (0.31–0.38)    B1        0.34 (0.30–0.37)
                              B2        0.19 (0.15–0.23)
 C        0.71 (0.67–0.75)    C1        0.51 (0.48–0.55)
                              C2        0.44 (0.41–0.48)
 D        0.10 (0.06–0.14)    D         0.10 (0.06–0.14)
 General  0.72 (0.69–0.74)    General   0.56 (0.54–0.58)

Resident
 A        0.96 (0.89–1.00)    A1        0.82 (0.76–0.89)
                              A2        0.90 (0.83–0.96)
 B        0.55 (0.48–0.61)    B1        0.46 (0.39–0.52)
                              B2        0.49 (0.43–0.56)
 C        0.77 (0.70–0.83)    C1        0.49 (0.43–0.56)
                              C2        0.37 (0.30–0.43)
 D        0.24 (0.18–0.31)    D         0.24 (0.18–0.31)
 General  0.76 (0.72–0.81)    General   0.59 (0.55–0.62)

Surgeon + Resident
 A        0.95 (0.93–0.97)    A1        0.77 (0.74–0.79)
                              A2        0.86 (0.84–0.88)
 B        0.42 (0.39–0.44)    B1        0.38 (0.36–0.40)
                              B2        0.28 (0.26–0.30)
 C        0.72 (0.70–0.74)    C1        0.52 (0.50–0.54)
                              C2        0.42 (0.39–0.44)
 D        0.16 (0.14–0.19)    D         0.16 (0.14–0.19)
 General  0.73 (0.71–0.74)    General   0.57 (0.56–0.58)
Among the hand surgeons, types A1 (k = 0.75 [0.71–0.78]) and A2 (k = 0.84 [0.80–0.87]) presented higher agreement than the other groups, good and excellent, respectively. In types B1 (k = 0.34 [0.30–0.37]) and B2 (k = 0.19 [0.15–0.23]), agreement was poor and very poor, respectively. Types C1 (k = 0.51 [0.48–0.55]) and C2 (k = 0.44 [0.41–0.48]) had moderate agreement, whereas type D (k = 0.10 [0.06–0.14]) had very poor agreement. The general agreement, k = 0.56 (0.54–0.58), was considered moderate.

Among the residents, types A1 (k = 0.82 [0.76–0.89]) and A2 (k = 0.90 [0.83–0.96]) also presented better agreement than the others and were considered excellent. Types B1 (k = 0.46 [0.39–0.52]), B2 (k = 0.49 [0.43–0.56]), and C1 (k = 0.49 [0.43–0.56]) presented moderate agreement, while types C2 (k = 0.37 [0.30–0.43]) and D (k = 0.24 [0.18–0.31]) presented poor agreement. Taking the confidence intervals into account, however, B1 and B2, as well as C1 and C2, can be considered to have the same agreement. The general agreement was k = 0.59 (0.55–0.62), considered moderate.

When hand surgeons and residents were analyzed together, the results were similar. In A1 (k = 0.77 [0.74–0.79]) and A2 (k = 0.86 [0.84–0.88]), agreement was good and excellent, respectively; in B1 (k = 0.38 [0.36–0.40]) and B2 (k = 0.28 [0.26–0.30]), it was poor. Types C1 (k = 0.52 [0.50–0.54]) and C2 (k = 0.42 [0.39–0.44]) presented moderate agreement, and type D (k = 0.16 [0.14–0.19]), very poor agreement. The general agreement was k = 0.57 (0.56–0.58), considered moderate.

When the classification was assessed in the large groups (A, B, C, D), agreement improved in all groups, most markedly in group C; taking the 95% confidence intervals into account, combining subtypes 1 and 2 improved agreement in all analyses. Among the surgeons, agreement in A (k = 0.95 [0.91–0.99]) was excellent; in B (k = 0.34 [0.31–0.38]), poor; in C (k = 0.71 [0.67–0.75]), good; and in D it remained very poor. The overall agreement was good, with k = 0.72 (0.69–0.74). Among the residents, agreement in A (k = 0.96 [0.89–1.00]) was excellent; in B (k = 0.55 [0.48–0.61]), moderate; in C (k = 0.77 [0.70–0.83]), good; and in D it remained poor. The overall agreement was good, with k = 0.76 (0.72–0.81). When hand surgeons and residents were combined, agreement in A (k = 0.95 [0.93–0.97]) was excellent; in B (k = 0.42 [0.39–0.44]), moderate; in C (k = 0.72 [0.70–0.74]), good; and in D it remained very poor. The general agreement in this grouped evaluation was good, with k = 0.73 (0.71–0.74). The data plotted in the graphs present the respective confidence intervals (Fig. 1, Fig. 2).
Fig. 1

Generalized coefficient of agreement for each subgroup. The dots represent the coefficient value and the dashes, the 95% confidence interval. (a) Surgeons and residents, (b) hand surgeons, and (c) hand surgery residents.

Fig. 2

Generalized coefficient of agreement for the grouped data. The dots represent the coefficient value and the dashes, the 95% confidence interval. (a) Surgeons and residents, (b) hand surgeons, and (c) hand surgery residents.


Discussion

Mallet finger is a highly prevalent condition that mainly affects the economically active population. An efficient and reproducible classification can guide the professional toward more effective treatment. Among the previously described classifications for this deformity, that of Pratt et al. only categorizes the etiology and does not guide treatment or prognosis. The classification of Wehbé and Schneider, which uses the extent of joint surface involvement, has the same limitation: it stratifies each of its three types into three subtypes (A, less than one-third of the articular surface involved; B, between one-third and two-thirds; and C, more than two-thirds), but provides no guidance regarding treatment or prognosis. Doyle's classification, the most widely used in the literature worldwide, stratifies lesions by clinical parameters not contemplated in the other systems, but does not categorize the radiographic patterns for all types.

Compared with these classifications, the Albertoni classification defines a therapeutic scheme that changes according to the lesion in question. Each type has a specific treatment: types A1 and B1 are classically treated with immobilization; types C1 and D, with non-surgical reduction and immobilization with a metal splint; and types A2, B2, and C2 usually require surgical treatment.5, 8 This classification is widely used among Brazilian orthopedic surgeons, hence the importance of assessing its reproducibility. No study on the reproducibility of the Albertoni classification, or of any mallet finger classification, was retrieved in the literature, which demonstrates the originality of the present study.

The Albertoni classification is based on radiographs, and several interobserver agreement studies in the literature have evaluated radiographic classifications. Audigé et al. evaluated 44 reproducibility studies on orthopedic classifications based on imaging criteria and found little methodological uniformity, which hinders comparisons of reproducibility. Belloti et al. assessed the reproducibility of distal radius fracture classifications, and Utino et al. studied agreement in the AO classification for long-bone fractures in the pediatric population.

The number of radiographs under evaluation is an important factor when assessing agreement: both too few and too many evaluations tend to worsen it. In Audigé's systematic review of orthopedic reproducibility studies, the number of radiographs per study varied widely (from 14 to 200 evaluations). Berger et al., in a systematic review of the reproducibility of the Eaton classification for rhizarthrosis, assessed four studies in which the number of radiographs ranged from 40 to 43. Based on these studies, the authors consider that the 43 radiographs used in the present study were sufficient to evaluate interobserver agreement in the Albertoni classification.

The number of observers also interferes with the coefficient of agreement: the greater the number of observers, the lower the expected agreement. The literature shows no uniformity here either: Thomsen et al. used only four observers, Randsborg and Sivertsen worked with 12, and Audigé et al. analyzed studies with 2–36 observers (median, 5).
The present study included 19 observers, in subgroups of residents and hand surgeons. The numbers of radiographs and observers are therefore within the norms found in the literature.6, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24

The higher the number of categories in a classification, the worse the agreement.6, 18 The Albertoni classification, with seven possible options, would thus tend to show worse agreement than classifications with fewer categories. This was demonstrated when types A1 and A2 were grouped into A, B1 and B2 into B, and C1 and C2 into C: agreement increased in all analyses, since the number of categories decreased to four.

There is no consensus in the literature on the cut-off value of k above which a classification is considered reproducible;13, 15 these values are defined arbitrarily by the authors. Fleiss considers k-values between 0.40 and 0.75 to represent moderate to good agreement. Svanholm et al. only consider k-values greater than 0.75 as good. In turn, Brage et al. consider k-values above 0.50 as reproducible. The scale of Landis and Koch, adopted in the present study (Table 1) and the most widely used today, considers values between 0.4 and 0.6 as moderate agreement and values above 0.6 as good.

When assessed as a single group, Albertoni types A1 and A2 presented k = 0.95 (0.93–0.97), indicating excellent agreement. When stratified into A1 (k = 0.77 [0.74–0.79]) and A2 (k = 0.84 [0.80–0.87]), the coefficient decreased slightly (by 16%) but remained good; that is, agreement changes little when the categories are combined and the parameter that differentiates them is removed. This shows that the angle threshold of <30° (A1) versus ≥30° (A2) can be considered a reproducible parameter.

Albertoni types B1 and B2, evaluated as a single group, presented a moderate agreement coefficient (k = 0.42 [0.39–0.44]). When stratified into B1 (k = 0.38 [0.36–0.40]) and B2 (k = 0.28 [0.26–0.30]), the coefficient decreased by 21% and was classified as poor. As with type A, there was no particular difficulty in differentiating B1 from B2, which corroborates the reproducibility of the angle parameter. Regarding the poor agreement itself, the authors believe the main reason is the low prevalence of type B in the questionnaire (14.6%), as seen in Table 4.

When evaluated as a single group, Albertoni types C1 and C2 presented a good agreement coefficient (k = 0.77 [0.70–0.83]). When stratified into C1 (k = 0.52 [0.50–0.54]) and C2 (k = 0.42 [0.39–0.44]), the coefficient showed a pronounced decrease of 41%, becoming moderate for C1 and poor for C2. This suggests that the joint congruence of the DIPJ is difficult to define: with respect to joint congruence, agreement decreased considerably. The authors believe that one way to improve agreement in type C would be to define the criterion of joint congruence more precisely, which may guide future modifications of this classification.

Another relevant finding is that in 22 of the 43 radiographs the observers hesitated between B and C (Table 2). In contrast, when type A (A1 or A2) was chosen, it was always concordant, except for radiographs 4 and 38 (Table 2). This is reflected in the lower agreement for types B and C compared with type A.
The authors believe that the parameter used to distinguish between bone avulsion (type B) and fracture of the dorsal region of the base of the distal phalanx (type C) is not well understood by the observers.

Owing to the low incidence of this type of lesion, type D was not separated into D1 and D2. Its agreement was very poor, with k = 0.16 (0.14–0.19). The low prevalence of this type (1.8%, in only four radiographs) also helps to explain the low agreement.

The evaluators' experience with the condition under assessment tends to change agreement; Mattos et al. concluded that observers' lack of experience decreased agreement. In the present study, however, the general agreement was k = 0.76 (0.72–0.81) among the residents and k = 0.72 (0.69–0.74) among the hand surgeons. Although the residents presented higher k-values, the 95% confidence intervals (Fig. 1, Fig. 2) do not allow the conclusion that agreement was higher in that group than among the surgeons. This is an advantage of the Albertoni classification: agreement does not change with the observers' experience. The authors believe there is a tendency toward greater agreement among residents because they have similar levels of knowledge and experience and are training in the same center, which leads to uniformity; however, this was not demonstrated in the present study.

Audigé mentions that, in the evaluation of 44 studies on the reproducibility of orthopedic classifications, of the 86 coefficients of agreement calculated, only four were excellent (k > 0.80), 17 were good (0.60–0.80), 32 were moderate (0.40–0.60), and 33 were fair or poor (<0.40). The overall agreement of the Albertoni classification was moderate, with k = 0.57 (0.56–0.58) on the Landis and Koch scale. Despite the moderate agreement, when compared with the literature and taking all the factors discussed into account, the authors consider the Albertoni classification reproducible.

A selection bias must be considered, as the radiographs in the present study were not randomized but chosen by the researchers for their quality. Another limitation is that it cannot be guaranteed that the measurements were made with the appropriate methods, even though the observers were correctly instructed and provided with the appropriate measuring instruments. Finally, this was a single-center study, which tends to homogenize responses and improve agreement.

Conclusion

The Albertoni classification presented good or excellent interobserver agreement for types A1 and A2, moderate agreement for types C1 and C2, and poor agreement for types B1, B2, and D. Based on the statistical methods employed and on comparison with the published literature, the authors consider the Albertoni classification reproducible. A better definition of the criteria for joint congruence would, in the authors' view, substantially improve agreement.

Conflicts of interest

The authors declare no conflicts of interest.
References (18 in total)

1.  Observer variation in the radiographic classification of ankle fractures.

Authors:  N O Thomsen; S Overgaard; L H Olsen; H Hansen; S T Nielsen
Journal:  J Bone Joint Surg Br       Date:  1991-07

2.  Reproducibility of histomorphologic diagnoses with special reference to the kappa statistic.

Authors:  H Svanholm; H Starklint; H J Gundersen; J Fabricius; H Barlebo; S Olsen
Journal:  APMIS       Date:  1989-08       Impact factor: 3.205

3.  The kappa statistic in reliability studies: use, interpretation, and sample size requirements.

Authors:  Julius Sim; Chris C Wright
Journal:  Phys Ther       Date:  2005-03

4.  The reliability of a simplified Garden classification for intracapsular hip fractures.

Authors:  D Van Embden; S J Rhemrev; F Genelin; S A G Meylaerts; G R Roukema
Journal:  Orthop Traumatol Surg Res       Date:  2012-05-03       Impact factor: 2.256

5.  Intra- and interobserver reliability of the Eaton classification for trapeziometacarpal arthritis: a systematic review.

Authors:  Aaron J Berger; Arash Momeni; Amy L Ladd
Journal:  Clin Orthop Relat Res       Date:  2014-04       Impact factor: 4.176

6.  How reliable are reliability studies of fracture classifications? A systematic review of their methodologies.

Authors:  Laurent Audigé; Mohit Bhandari; James Kellam
Journal:  Acta Orthop Scand       Date:  2004-04

7.  Are distal radius fracture classifications reproducible? Intra and interobserver agreement.

Authors:  João Carlos Belloti; Marcel Jun Sugawara Tamaoki; Carlos Eduardo da Silveira Franciozi; João Baptista Gomes dos Santos; Daniel Balbachevsky; Eduardo Chap Chap; Walter Manna Albertoni; Flávio Faloppa
Journal:  Sao Paulo Med J       Date:  2008-05-01       Impact factor: 1.044

8.  Current concepts: mallet finger.

Authors:  Sreenivasa R Alla; Nicole D Deal; Ian J Dempsey
Journal:  Hand (N Y)       Date:  2014-06

9.  Intra and interobserver concordance between the different classifications used in Legg-Calvé-Perthes disease.

Authors:  André Cicone Liggieri; Marcos Josei Tamanaha; José Jorge Kitagaki Abechain; Tiago Moreno Ikeda; Eiffel Tsuyoshi Dobashi
Journal:  Rev Bras Ortop       Date:  2015-10-23

10.  Intra and interobserver concordance of the AO classification system for fractures of the long bones in the pediatric population.

Authors:  Artur Yudi Utino; Douglas Rene de Alencar; Leonardo Fernadez Maringolo; Julia Machado Negrão; Francesco Camara Blumetti; Eiffel Tsuyoshi Dobashi
Journal:  Rev Bras Ortop       Date:  2015-08-15
