
Interobserver and intraobserver agreement of three-dimensionally printed models for the classification of proximal humeral fractures.

Hannah Bougher1, Petra Buttner2, Jonathon Smith3, Jennifer Banks1, Hyun Su Na3, David Forrestal4, Clare Heal1.   

Abstract

HYPOTHESIS: This study aimed to examine whether three-dimensionally printed models (3D models) could improve interobserver and intraobserver agreement when classifying proximal humeral fractures (PHFs) using the Neer system. We hypothesized that 3D models would improve interobserver and intraobserver agreement compared with x-ray, two-dimensional (2D) and three-dimensional (3D) computed tomography (CT) and that agreement using 3D models would be higher for registrars than for consultants.
METHODS: Thirty consecutive PHF images were selected from a state-wide database and classified by fourteen observers. Each imaging modality (x-ray, 2D CT, 3D CT, 3D models) was grouped and presented in a randomly allocated sequence on two separate occasions. Interobserver and intraobserver agreements were quantified with kappa values (κ), percentage agreement, and 95% confidence intervals (CIs).
RESULTS: Seven orthopedic registrars and seven orthopedic consultants classified 30 fractures on one occasion (interobserver). Four registrars and three consultants additionally completed classification on a second occasion (intraobserver). Interobserver agreement was greater with 3D models than with x-ray (κ = 0.47, CI: 0.44-0.50, 66.5%, CI: 64.6-68.4% and κ = 0.29, CI: 0.26-0.31, 57.2%, CI: 55.1-59.3%, respectively), 2D CT (κ = 0.30, CI: 0.27-0.33, 57.8%, CI: 55.5-60.2%), and 3D CT (κ = 0.35, CI: 0.33-0.38, 58.8%, CI: 56.7-60.9%). Intraobserver agreement appeared higher for 3D models than for other modalities; however, results were not significant. There were no differences in interobserver or intraobserver agreement between registrars and consultants.
CONCLUSION: Three-dimensionally printed models improved interobserver agreement in the classification of PHFs using the Neer system. This has potential implications for using 3D models for surgical planning and teaching.
© 2020 The Author(s).

Keywords:  3D modeling; Neer system; Proximal humeral fracture; fracture classification; interobserver agreement; intraobserver agreement

Year:  2020        PMID: 33681838      PMCID: PMC7910723          DOI: 10.1016/j.jseint.2020.10.019

Source DB:  PubMed          Journal:  JSES Int        ISSN: 2666-6383


Three-dimensional (3D) printing is an emerging technology in orthopedics, with uses ranging from customized implants, surgical templates, and bioprinted bone to models for teaching and surgical planning. Proximal humeral fractures (PHFs) are the fourth most common osteoporotic fracture, affecting Australian men and women at rates of 40.6 and 73.2 per 100,000 person-years, respectively. Currently, 43% of patients are hospitalized, 21% receive surgery, and 15% die within 3 months; with an aging population, the incidence and burden of disease will likely increase. Accurate fracture classification has important implications for diagnosis, surgical management, and planning, as well as for estimating patient prognosis. PHFs are most commonly classified using the Neer system, which counts the number of parts displaced by greater than 1 cm or angulated by more than 45° and grades complexity from one- to four-part fractures. Without a gold standard for the Neer system, interobserver and intraobserver agreement can be used as surrogates for validity and reliability. Like other PHF classification systems, the Neer system has shown limited interobserver and intraobserver agreement. A recent systematic literature review found that the level of interobserver and intraobserver agreement for PHF classification was lowest for x-ray, higher with two-dimensional (2D) computed tomography (CT), and highest with 3D CT. The same study suggested that CT may increase interobserver agreement to a greater extent for less experienced observers.
Conventional imaging modalities may limit the ability of surgeons to interpret in vivo anatomy. Three-dimensionally printed models (herein referred to as 3D models) have theoretical advantages over x-rays and CT: they allow tactile examination of anatomy and unlimited 360° visualization of the fracture, thereby avoiding the need to interpret 3D patho-anatomy from a 2D screen. The Neer system was designed to be applied after examining intraoperative anatomy. By replicating the fracture and simulating the intraoperative findings, 3D models allow the classification to be applied in a manner close to the original design of the Neer system. The aim of our study was to investigate orthopedic surgeons' interobserver and intraobserver agreement with 3D models using the Neer system. The primary hypothesis was that 3D models would improve interobserver agreement by a kappa value of 0.1 in comparison with x-ray. The secondary hypotheses were that 1) 3D models would improve agreement compared with 2D and 3D CT and 2) agreement using 3D models would be higher (by a kappa value of 0.15) for registrars than for consultants.

Materials and methods

Setting

The study was conducted from March to July 2019, at an Australian regional general hospital.

Participants

Fourteen observers (seven orthopedic registrars and seven orthopedic consultants, comprising all relevant staff members at the participating regional hospital) were invited to participate. Registrars (the Australian equivalent of US residents) were principal house officers or held an Australian Orthopaedic Association Surgical Education and Training Program position. Consultants were general orthopedic surgeons employed as specialists in the private or public system. The head of the department retrospectively selected thirty eligible PHFs from a state-wide database, beginning December 18, 2018, until equal numbers of consecutive two-, three-, and four-part fractures were available (age range: 49-96 years; median age: 73 years; 28 female). To be eligible for inclusion, x-rays and 2D and 3D CT scans must have been available. For the purpose of fracture selection only, fracture severity was determined by the head of the department using all available imaging according to the Neer system. One-part fractures were excluded because many of these patients do not receive a CT, and the lack of displacement would have made fracture lines difficult to visualize with 3D models.

All imaging had been used clinically and was captured before callus formation. Two-dimensional CT had an axial primary image plane and a slice thickness of 1 mm or less. Two-dimensional CT DICOM (Digital Imaging and Communications in Medicine) files were converted to STL (Standard Tessellation Language) files using Slicer, version 4.10.1, and Blender, version 2.79, before being printed with a 3D printer. Models were printed from polylactic acid thermoplastic with an Ultimaker 2+ (Ultimaker B.V., Utrecht, Netherlands) printer using the fused filament fabrication method. STL files containing the 3D model data were imported into Ultimaker Cura (4.2.1) software and converted to GCODE files containing the machine instructions for manufacturing the models.
Models were printed with a 0.4-mm nozzle, a 0.15-mm layer height, and 20% grid infill density, and sacrificial support material was added to regions of the model with an overhang angle of 50° or greater. To minimize the amount of required support material, models were aligned with the humeral head on the build plate and the shaft extending vertically upward. After printing, the support material was removed before use in the study.
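The DICOM-to-STL segmentation and mesh export were performed in Slicer and Blender (GUI tools), and slicing in Cura; none of that pipeline appears in the text. As a rough illustration of what the mesh-export step produces, the pure-Python sketch below turns an already segmented (binary) CT volume into an ASCII STL file by emitting two triangles for every exposed voxel face. It is a deliberately simplified stand-in for the marching-cubes surface extraction that Slicer actually performs; the function name, voxel-face tables, and winding are our own assumptions, not the authors' workflow.

```python
def voxel_surface_to_stl(volume, path, voxel_size=1.0):
    """Write an ASCII STL of the exposed faces of a binary 3D volume.

    volume: nested lists of 0/1 values indexed [z][y][x] (a segmented scan).
    Each voxel face not shared with another solid voxel becomes two triangles.
    Returns the number of triangles written.
    """
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])

    def solid(x, y, z):
        return 0 <= x < nx and 0 <= y < ny and 0 <= z < nz and volume[z][y][x]

    # (neighbor offset, four corner offsets of the face between them)
    FACES = [
        ((1, 0, 0),  [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 0, 1)]),
        ((-1, 0, 0), [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)]),
        ((0, 1, 0),  [(0, 1, 0), (0, 1, 1), (1, 1, 1), (1, 1, 0)]),
        ((0, -1, 0), [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)]),
        ((0, 0, 1),  [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]),
        ((0, 0, -1), [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0)]),
    ]
    n_tri = 0
    with open(path, "w") as f:
        f.write("solid voxels\n")
        for z in range(nz):
            for y in range(ny):
                for x in range(nx):
                    if not volume[z][y][x]:
                        continue
                    for (dx, dy, dz), corners in FACES:
                        if solid(x + dx, y + dy, z + dz):
                            continue  # internal face, not part of the surface
                        pts = [((x + cx) * voxel_size,
                                (y + cy) * voxel_size,
                                (z + cz) * voxel_size) for cx, cy, cz in corners]
                        # split the quad face into two triangles
                        for tri in ((pts[0], pts[1], pts[2]),
                                    (pts[0], pts[2], pts[3])):
                            f.write(f"facet normal {dx} {dy} {dz}\n outer loop\n")
                            for vx, vy, vz in tri:
                                f.write(f"  vertex {vx} {vy} {vz}\n")
                            f.write(" endloop\nendfacet\n")
                            n_tri += 1
        f.write("endsolid voxels\n")
    return n_tri
```

For a solid 2 × 2 × 2 voxel block, each of the 8 voxels exposes 3 faces, so the function writes 24 faces (48 triangles); a real fracture segmentation would instead be smoothed and decimated in Blender before printing.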

Procedure

Before classification session one, observers watched a 5-minute prerecorded PowerPoint presentation defining the Neer system. Classification was recorded on fixed-response surveys (Appendix 1). Images were deidentified and presented in a randomly allocated sequence for each grouped imaging modality to prevent observers from correlating images across modalities. Observers classified fractures individually without time restriction (representative images for each modality are provided in Figure 1). No clinical details were provided. Anterior-posterior and lateral x-rays were displayed as JPEG files. Coronal, sagittal, and longitudinal 2D CT scans, as well as axially rotating 3D CT scans, were displayed on interactive software (InteleViewer, Intelerad, Montreal, Canada). Observers could manipulate imaging and handle 3D models.
Figure 1

Representative images of proximal humeral fractures that observers were asked to classify using the Neer system: (A) X-ray, (B) 2D CT, (C) 3D CT, (D) 3D printed model. 2D, two-dimensional; 3D, three-dimensional; CT, computed tomography.

Observers completed a second, identical classification session three to eight weeks later. The same images were viewed in a different randomly allocated sequence. No feedback was provided at any point in the study, and images were not available between sessions.

Sample size

It was calculated that a sample size of 30 fractures (ten each of two-, three-, and four-part fractures) would result in 95% confidence intervals (95% CIs) for kappa smaller than 0.1 in width when the assessments of fourteen observers were combined, and 95% CIs for kappa of 0.15 in width for the subgroups of seven registrars and seven consultants separately. This sample size was calculated to confirm or reject the primary hypothesis, comparing x-rays with 3D models. It assumed that the study would recruit fourteen observers (seven registrars and seven consultants) and that two-, three-, and four-part fractures (three categories) would be differentiated, each occurring with similar frequency. The sample size estimation was adjusted for multiple testing (k = 6), allowing assessment of interobserver agreement for all fourteen observers together and for the subgroups of registrars and consultants, separately for x-rays and 3D models.

Statistical analysis

Stata IC 13 (StataCorp, College Station, TX, USA) was used to calculate interobserver and intraobserver agreement kappa (κ) values (nonunique raters, no weighting) and percentage agreement with 95% CIs. Interobserver and intraobserver percentage agreements were calculated as the mean values of overall agreement for all possible combinations of assessors within each imaging modality. Kappa for interobserver agreement was based on all assessors in the respective analysis. Intraobserver agreement κ values were calculated for each of the seven observers who repeated the classification session and then averaged. Kappa values were interpreted using Landis and Koch criteria (Table I). The difference between two κ values was considered statistically significant if 95% CIs did not overlap (P < .05). Furthermore, P values were calculated for each κ value to test for statistically significant difference from “0.”
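The study used Stata's kappa routine (nonunique raters, no weighting). For readers who want to reproduce the two core quantities for a single pair of rating vectors, the sketch below computes percentage agreement and Cohen's kappa from first principles. This is an illustrative reimplementation of the two-rater case only, not the multi-rater statistic Stata applies when all observers are combined; the example ratings are hypothetical.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of cases given the same classification in both rating vectors."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement estimated from each vector's marginal
    category frequencies."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical Neer classifications (number of parts) of 10 fractures
# by one observer on two occasions:
first  = [2, 2, 3, 3, 4, 4, 2, 3, 4, 2]
second = [2, 2, 3, 4, 4, 4, 3, 3, 4, 2]
print(percent_agreement(first, second))        # 0.8
print(round(cohens_kappa(first, second), 2))   # 0.7
```

Because kappa discounts the agreement expected by chance, it is always lower than raw percentage agreement, which is why the paper reports both side by side.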
Table I

Landis and Koch criteria

Kappa value      Agreement
Less than 0.00   Poor agreement
0.00-0.20        Slight agreement
0.21-0.40        Fair agreement
0.41-0.60        Moderate agreement
0.61-0.80        Substantial agreement
0.81-1.00        Almost perfect agreement
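Table I translates directly into a small lookup. The helper below is our own illustration (not from the paper) of how a kappa value maps to its Landis and Koch category.

```python
def landis_koch(kappa):
    """Return the Landis and Koch agreement category for a kappa value (Table I)."""
    if kappa < 0.00:
        return "Poor agreement"
    if kappa <= 0.20:
        return "Slight agreement"
    if kappa <= 0.40:
        return "Fair agreement"
    if kappa <= 0.60:
        return "Moderate agreement"
    if kappa <= 0.80:
        return "Substantial agreement"
    return "Almost perfect agreement"

# The study's overall interobserver kappas:
print(landis_koch(0.29))  # x-ray     -> Fair agreement
print(landis_koch(0.47))  # 3D models -> Moderate agreement
```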

Results

Fourteen observers (seven orthopedic registrars and seven orthopedic consultants) participated in the initial interobserver agreement study. Of these, seven (three consultants and four registrars) completed the intraobserver component.

Interobserver agreement

Three-dimensionally printed models significantly improved overall interobserver agreement compared with x-ray (κ = 0.47, CI: 0.44-0.50, 66.5%, CI: 64.6-68.4% and κ = 0.29, CI: 0.26-0.31, 57.2%, CI: 55.1-59.3%, respectively, Table II, Figure 2). They also produced significantly better agreement than 2D CT (κ = 0.30, CI: 0.27-0.33, 57.8%, CI: 55.5-60.2%) and 3D CT (κ = 0.35, CI: 0.33-0.38, 58.8%, CI: 56.7-60.9%, Table II, Figure 2). Three-dimensionally printed models achieved moderate interobserver agreement with the Landis and Koch criteria compared with fair agreement for x-rays and 2D and 3D CT (Table II, Figure 2).
Table II

Interobserver agreement

                      X-ray        2D CT        3D CT        3D models
Overall (consultants and registrars, n = 14)
 kappa
  κ                   0.29         0.30         0.35         0.47
  95% CI              0.26-0.31    0.27-0.33    0.33-0.38    0.44-0.50
  Agreement           Fair         Fair         Fair         Moderate
  P value             <.001        <.001        <.001        <.001
 % agreement
  %                   57.2         57.8         58.8         66.5
  95% CI              55.1-59.3    55.5-60.2    56.7-60.9    64.6-68.4
 Number of images     30           29           28           26
Consultants (n = 7)
 kappa
  κ                   0.26         0.37         0.30         0.48
  95% CI              0.20-0.32    0.31-0.43    0.24-0.36    0.42-0.55
  Agreement           Fair         Fair         Fair         Moderate
  P value             <.001        <.001        <.001        <.001
 % agreement
  %                   56.0         62.9         57.1         66.8
  95% CI              50.2-61.9    56.3-69.4    52.0-62.3    62.3-71.3
 Number of images     30           29           30           27
Registrars (n = 7)
 kappa
  κ                   0.35         0.25         0.39         0.51
  95% CI              0.29-0.40    0.19-0.31    0.34-0.45    0.44-0.57
  Agreement           Fair         Fair         Fair         Moderate
  P value             <.001        <.001        <.001        <.001
 % agreement
  %                   60.3         54.1         60.1         68.3
  95% CI              57.3-63.4    49.0-59.3    55.1-65.0    64.4-72.3
 Number of images     30           30           28           29

2D, two-dimensional; 3D, three-dimensional; CI, confidence interval; CT, computed tomography.

Agreement has been defined using the Landis and Koch criteria (Table I).

P value less than .05 shows that kappa was statistically significantly different from “0”.

Figure 2

Interobserver agreement using Landis and Koch criteria for each modality and group – all observers (top), consultants (middle), and registrars (bottom).

There was no significant difference in the level of interobserver agreement produced by 3D models in registrars compared with consultants (κ = 0.51, CI: 0.44-0.57, 68.3%, CI: 64.4-72.3% and κ = 0.48, CI: 0.42-0.55, 66.8%, CI: 62.3-71.3%, respectively, Table II, Figure 2).

Intraobserver agreement

Three-dimensionally printed models produced better intraobserver agreement than x-rays (mean κ = 0.60, CI: 0.46-0.73, mean agreement: 75.0%, CI: 66.4-83.6% and mean κ = 0.45, CI: 0.36-0.54, mean agreement: 70.5%, CI: 62.8-78.1%, respectively, Table III, Figure 3), but results were not statistically significant. Three-dimensionally printed models also provided higher intraobserver agreement than 2D CT and 3D CT (Table III, Figure 3), although results were also not statistically significant. Consultants achieved higher intraobserver agreement than registrars with 3D models (mean κ = 0.69 and mean agreement: 82.0% for consultants versus mean κ = 0.52 and mean agreement: 69.8% for registrars, Table III, Figure 3). We did not analyze these results further because of the small sample size.
Table III

Intraobserver agreement

                      X-ray                2D CT                3D CT                3D models
Consultants and registrars (n = 7)
 kappa
  Mean κ              0.45                 0.41                 0.43                 0.60
  Range of κ          0.32-0.63            0.10-0.64            0.36-0.53            0.45-0.89
  95% CI              0.36-0.54            0.25-0.57            0.37-0.48            0.46-0.73
  Agreement           Moderate             Moderate             Moderate             Moderate
  Range of P values   P = .016 to <.0001   P = .201 to <.0001   P = .004 to <.0001   P = .0005 to <.0001
 Mean % agreement
  %                   70.5                 65.6                 65.5                 75.0
  95% CI              62.8-78.1            53.9-77.3            61.2-69.9            66.4-83.6
 Number of images     30                   30                   30                   30§
Consultants (n = 3)
 kappa
  Mean κ              0.46                 0.45                 0.39                 0.69
  Range of κ          0.42-0.48            0.33-0.56            0.36-0.44            0.51-0.89
  Agreement           Moderate             Moderate             Moderate             Substantial
 Mean % agreement     74.4                 70.8                 65.6                 82.0
Registrars (n = 4)
 kappa
  Mean κ              0.44                 0.37                 0.45                 0.52
  Range of κ          0.32-0.63            0.10-0.64            0.40-0.53            0.45-0.60
  Agreement           Moderate             Fair                 Moderate             Moderate
 Mean % agreement     67.5                 61.7                 65.5                 69.8

2D, two-dimensional; 3D, three-dimensional; CI, confidence interval; CT, computed tomography.

Agreement has been defined using the Landis and Koch criteria (Table I).

P value less than .05 shows that kappa was statistically significantly different from “0”. Range of P values assessing kappa for each observer.

29 images for one observer

29 images for two observers.

Figure 3

Intraobserver agreement using Landis and Koch criteria for each modality for all observers.


Discussion

For the interobserver agreement component of the study, the primary hypothesis was confirmed: 3D models significantly increased interobserver agreement compared with x-ray. Three-dimensionally printed models also significantly improved interobserver agreement in comparison with 2D CT and 3D CT. There was no significant difference in agreement using 3D models between consultants and registrars. Under the Landis and Koch criteria, interobserver agreement was classified as moderate with 3D models, whereas fair agreement was achieved with x-rays, 2D CT, and 3D CT. Only seven of the original fourteen observers completed the intraobserver component of the study. While 3D models produced higher intraobserver agreement than x-ray, 2D CT, and 3D CT, this did not reach statistical significance. Three-dimensionally printed models achieved substantial intraobserver agreement under the Landis and Koch criteria, while all other imaging modalities achieved moderate agreement.

This study had a number of strengths. First, the consecutive selection of fractures from a state-wide database resulted in clinically realistic images. Second, for the interobserver agreement component of the study, the predetermined sample size was achieved, and the study was adequately powered for the primary hypothesis, resulting in narrow CIs and significant results. This study also included more observers than many previous studies. Finally, the use of a standard presentation before rating ensured that a standard definition of the Neer system was applied by raters.

This study also had a number of limitations. In the absence of a gold standard, the original selection of ten each of two-, three-, and four-part fractures for the purpose of the study was conducted by the head of the department; however, equality of the numbers of each type of fracture cannot be assured.
With the absence of a gold standard for classification, and given low baseline levels of agreement, allocating fractures to a level of complexity to conduct a subanalysis by complexity would have been arbitrary; therefore, this subanalysis was not performed. The intraobserver part of the study was conducted with only seven observers, reducing statistical power for the analysis. As a consequence, comparisons between observer groups were not conducted. In clinical practice, different imaging modalities are used simultaneously to assess a fracture. However, our study required grouping by imaging modality to allow agreement to be attributed to a specific modality rather than to cumulative familiarity with the fracture. The time between repeat fracture classifications varied among observers from three to eight weeks. Although identical timing would have been ideal, this was not feasible owing to doctor availability. Images were not available between sessions, and feedback was not given after the first session. We believe it is unlikely that observers could recall their previous classifications and consequently unlikely that the difference in timing affected intraobserver results. Kappa values were used to allow comparison with previous studies. Although kappa statistics correct for agreement occurring by chance, they have limitations. Prevalence and bias effects, which result from the marginal proportions inherent in calculating kappa, mean that kappa can be low even when percentage agreement is high. Because the Landis and Koch criteria are arbitrary, the effects of prevalence and bias on kappa should still be considered. Supplementing kappa with percentage agreement adds statistical credibility, as percentage agreement is not affected by prevalence and bias effects. Although the Neer classification system is known to have limited reliability and validity, it is the most widely used system and therefore the most appropriate for our study.
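The prevalence effect described above is easy to demonstrate numerically. The sketch below (hypothetical ratings, our own construction) shows two pairs of rating vectors with identical 80% raw agreement: balanced category marginals yield a kappa around 0.70, while heavily skewed marginals drive kappa below zero because the expected chance agreement is already very high.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Two-rater Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

# Balanced marginals: 8/10 raw agreement -> kappa about 0.70.
balanced_a = [2, 2, 3, 3, 4, 4, 2, 3, 4, 2]
balanced_b = [2, 2, 3, 4, 4, 4, 3, 3, 4, 2]

# Skewed marginals: the same 8/10 raw agreement, but 9 of 10 ratings fall in
# one category, so chance agreement p_e = 0.82 and kappa turns negative.
skewed_a = [2] * 9 + [3]
skewed_b = [2] * 8 + [3, 2]

print(round(cohens_kappa(balanced_a, balanced_b), 2))  # 0.7
print(round(cohens_kappa(skewed_a, skewed_b), 2))      # -0.11
```

This is exactly why the study reports percentage agreement alongside kappa: the raw agreement is unaffected by how the fracture categories happen to be distributed.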
There has been very limited research addressing 3D models as an imaging modality for PHF classification, and to the authors' knowledge, this is the first study specifically measuring agreement. However, there has been more extensive research for other types of fractures. Three-dimensionally printed models have been found to increase interobserver agreement in the classification of acetabular, calcaneal, and coronoid and distal humeral fractures in comparison with 2D CT. A previous study hypothesized that 3D models might be more helpful for less experienced surgeons diagnosing PHFs. However, in the classification of acetabular and calcaneal fractures, 3D models did not produce greater agreement in less experienced observers. Similarly, registrars did not appear to benefit from the use of 3D models to a greater extent than consultants in our study. A key factor contributing to limited agreement with the Neer system is a lack of clarity regarding the threshold for displacement. Historically, this threshold has varied, leading to some uncertainty regarding its definition. A displacement of 1 cm appears more substantial if the humeral head diameter is 3.5 cm rather than 5 cm, which could contribute to disagreement. Fractures with displacement close to the 1-cm threshold are likely to be more contentious. Muscles attached to fragments may cause displacement to change between different images. Owing to the round structure of the humeral head, it is challenging to appreciate its angulation or displacement. There is likely more disagreement about complex fractures, where it becomes increasingly difficult to assess displacement.
However, the amount of displacement helps determine management and is integral to PHF classification systems. Fracture classification is important for reporting injury severity, surgical management, surgical planning, and estimating the likely prognosis, but there is currently no evidence that fracture classifications improve patient outcomes. However, the ability to visualize individual patient anatomy using 3D models for PHF surgery has been shown to decrease operating time, blood loss, and radiation exposure (owing to reduced imaging) and to improve functional outcomes (including shoulder range of motion and Short Form-36 physical component summary scores) compared with conventional 2D and 3D imaging. The improvement in agreement using the Neer system in our study is, in essence, a surrogate marker for surgeons' increased ability to visualize and understand fractures using 3D models compared with conventional imaging.

Clinical implications

We suggest that x-rays should remain first line in view of their low cost and limited radiation exposure. The improved interobserver agreement in our study supports the current practice of adding further imaging when more information is required. The addition of 3D models requires no additional radiation or patient discomfort. With newer technology, 3D models will be quicker to produce, cheaper, and more widely available, and therefore more feasible to use as an additional modality. Surgical planning with PHF 3D models has been shown to improve patient outcomes and reduce operative time, blood loss, and the use of intraoperative x-rays, thereby reducing costs and potentially offsetting the cost of 3D printing. Use of 3D models for informed consent may also improve patient understanding and satisfaction and reduce potential litigation.

Research implications

Limited agreement was found with the Neer system, suggesting that new classification systems should be investigated, including the HGLS system, which has shown superior interobserver and intraobserver reliability compared with both the Neer and AO systems. Three-dimensionally printed models may also increase agreement for other classification systems. Future, adequately powered research should determine whether 3D models are more useful for complex fractures. The use of an expert panel as a gold standard to define fracture complexity could be useful in this setting. Prospective research is required to confirm whether improved agreement about PHF classification with 3D models improves treatment consistency and patient outcomes. A formal cost-effectiveness study could assess whether improved outcomes justify the cost of 3D printing. The ability of PHF 3D models to improve patient understanding and expectations, and to aid in gaining informed consent, should also be formally assessed.

Conclusions

The use of 3D models significantly improved interobserver agreement on the Neer classification system in comparison with x-rays. Three-dimensionally printed models also significantly improved interobserver agreement over 2D and 3D CT. Intraobserver agreement was also higher with 3D models, although this was not statistically significant, likely owing to lack of statistical power. Three-dimensionally printed models did not benefit the agreement of registrars more than that of consultants. Future research into the use of 3D models in PHF management should further investigate the ability of this technology to improve patient outcomes.

Acknowledgments

The authors thank the orthopedic consultants and registrars who participated in this study.

Disclaimer

No financial remunerations were received by the authors or their family members related to the subject of this article. The authors have no conflicts of interest to declare.