
Reliability of a structured method of selecting abstracts for a plastic surgical scientific meeting.

Lydia P E van der Steen, J Joris Hage, Moshe Kon, Riccardo Mazzola.

Abstract

There is no generally accepted method for assessing abstracts submitted to a medical scientific meeting. This article describes the development and prospective evaluation of such a method, applied to the 220 abstracts submitted for the 2000 Annual Meeting of the European Association of Plastic Surgeons. Structured abstracts were evaluated in three categories: aesthetic surgery, basic research, and clinical study. Each anonymized abstract was assessed separately by 10 reputable European plastic surgeons. These reviewers used a structured rating questionnaire that yielded, from each reviewer, a score between -6 and +6 for each abstract. The 10 reviewers' scores were summed for each abstract, and papers were accepted in each of the three categories on the basis of this summed score. To evaluate the reliability of this structured method of selection, interrater agreement among the reviewers was tested by means of kappa analysis and the Cronbach alpha coefficient. The kappa values for agreement among reviewers regarding acceptability of abstracts were low, but the alpha coefficient indicated an acceptable degree of reliability of the average reviewers' ratings in all categories. A structured questionnaire can aid the objective assessment of abstracts for a scientific meeting and may facilitate comparison of abstracts. Dichotomous (accept/reject) rating of abstracts on merit by the reviewers is advocated to further improve the reliability of the rating. Even though reliability generally increases with the number of reviewers, the annual increase in submitted abstracts may necessitate a decrease in the number of reviewers per abstract.
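The Cronbach alpha coefficient used in the study measures how consistently a panel of reviewers rank-orders the same set of abstracts. The following is a minimal sketch of that computation, assuming a small hypothetical score matrix (abstracts × reviewers, on the paper's -6 to +6 scale); the data and function name are illustrative, not taken from the study.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (abstracts x reviewers) score matrix.

    alpha = k/(k-1) * (1 - sum of per-reviewer variances / variance of summed scores),
    where k is the number of reviewers (treated as "items").
    """
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each reviewer's scores
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of per-abstract summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 6 abstracts rated by 4 reviewers on the -6..+6 scale
scores = np.array([
    [ 4,  3,  5,  4],
    [-2, -1, -3, -2],
    [ 1,  0,  2,  1],
    [ 6,  5,  6,  5],
    [-5, -4, -6, -5],
    [ 0,  1, -1,  0],
])
alpha = cronbach_alpha(scores)
print(round(alpha, 3))  # → 0.988 (reviewers here agree closely, so alpha is high)
```

Note that alpha reflects the reliability of the *averaged or summed* ratings and rises with the number of reviewers, which is consistent with the paper's observation that fewer reviewers per abstract would trade away some reliability.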


Year:  2003        PMID: 12794462     DOI: 10.1097/01.PRS.0000061092.88629.82

Source DB:  PubMed          Journal:  Plast Reconstr Surg        ISSN: 0032-1052            Impact factor:   4.730


Related articles:  6 in total

1.  Poster exhibitions at national conferences: education or farce?

Authors:  Gabriele Salzl; Stefan Gölder; Antje Timmer; Jörg Marienhagen; Jürgen Schölmerich; Johannes Grossmann
Journal:  Dtsch Arztebl Int       Date:  2008-02-01       Impact factor: 5.594

2.  Selecting the best clinical vignettes for academic meetings: should the scoring tool criteria be modified?

Authors:  Jeremiah Newsom; Carlos A Estrada; Danny Panisko; Lisa Willett
Journal:  J Gen Intern Med       Date:  2011-09-17       Impact factor: 5.128

3.  Leveling the field: Development of reliable scoring rubrics for quantitative and qualitative medical education research abstracts.

Authors:  Jaime Jordan; Laura R Hopson; Caroline Molins; Suzanne K Bentley; Nicole M Deiorio; Sally A Santen; Lalena M Yarris; Wendy C Coates; Michael A Gisondi
Journal:  AEM Educ Train       Date:  2021-08-01

4.  A reliability-generalization study of journal peer reviews: a multilevel meta-analysis of inter-rater reliability and its determinants.

Authors:  Lutz Bornmann; Rüdiger Mutz; Hans-Dieter Daniel
Journal:  PLoS One       Date:  2010-12-14       Impact factor: 3.240

5.  Analysis of full-text publication and publishing predictors of abstracts presented at an Italian public health meeting (2005-2007).

Authors:  S Castaldi; M Giacometti; W Toigo; F Bert; R Siliquini
Journal:  BMC Res Notes       Date:  2015-09-29

6.  How do Medical Societies Select Science for Conference Presentation? How Should They?

Authors:  Thomas M Kuczmarski; Ali S Raja; Daniel J Pallin
Journal:  West J Emerg Med       Date:  2015-07-02
