Upreet Dhaliwal1, Rajeev Kumar. 1. Department of Ophthalmology, University College of Medical Sciences and GTB Hospital, New Delhi-110 095, India. upreetdhaliwal@yahoo.com
Abstract
AIMS: To determine the quality of reporting in the proceedings of the All India Ophthalmological Conference (AIOC) 2000, the subsequent rate of publication in an indexed journal and differences between the proceedings and journal versions of these papers. DESIGN: Observational study. MATERIALS AND METHODS: All papers presented at the AIOC 2000 were retrieved from the proceedings and assessed for completeness of reporting. To determine subsequent full publication, a Medline search was performed as of January 2007; consistency between the proceedings paper and the final publication was evaluated. STATISTICAL ANALYSIS: Chi-square and Fisher's exact tests were used to compare publication rates based on geographical location, subspecialty and study design; Student's t-test was used to compare differences based on the number of authors and sample size. RESULTS: Two hundred papers were retrieved; many failed to include study dates, design or statistical methods employed. Thirty-three (16.5%) papers were subsequently published in indexed journals by January 2007. The published version differed from the proceedings paper in 27 (81.8%) instances, mostly relating to changes in author name, number or sequence. CONCLUSIONS: The overall quality of reporting of scientific papers in the proceedings of the AIOC 2000 was inadequate and many did not result in publication in an indexed journal. Differences between the journal and proceedings versions were seen in several instances. Ophthalmologists should be cautious about using the information provided in conference proceedings in their ophthalmic practice.
Dissemination of research results is important in the medical sciences, as it conveys new information to the scientific community and helps focus future research efforts.1-3
Dissemination, traditionally, is achieved either by presentation
of the results at a scientific meeting or publication in a
scientific journal. Presentations are valuable as they rapidly
provide new information. However, these data are not available
to the entire scientific community unless it is published in an
indexed journal. Although some societies publish conference
proceedings, in general, the information included is insufficient
to allow critical appraisal of the work.2 Also, conference
abstracts usually do not undergo rigorous peer-review prior
to acceptance. Therefore, data found only in abstracts or
proceedings may be misleading or inappropriate.4,5
Studies have shown that results presented at scientific meetings may be
different from the versions that appear later in peer-reviewed
publications.6,7 This can have serious implications, as scientists and clinicians who attend annual specialty conferences to keep abreast of current research may use the findings presented there to make decisions about patient management.8

The All India Ophthalmological Society annual conference
(AIOC) is the primary research conference of ophthalmology
in India. Although acceptance of an abstract at a large scientific
gathering such as the AIOC is prestigious, it is publication
of this research in a peer-reviewed journal that validates
the significance of these data and methods.9 In addition, the
publication rate of presentations is claimed to be an indicator of the scientific quality of a meeting.10,11

We sought to determine the completeness of reporting of
papers presented at the AIOC in the year 2000, the proportion
that were ultimately published in peer-reviewed journals and
differences between the presented paper and that published
in a journal, if any.
Materials and Methods
Using the proceedings of the AIOC, the full text of papers
presented at the annual conference of the All India
Ophthalmological Society in 2000 was obtained. The year 2000
was chosen to allow sufficient time for the presented papers
to reach publication. Both authors independently assessed
each proceedings paper for completeness of reporting that
included the following key features: whether the authors
provided adequate correspondence details, dates defining the
period of study, objectives stated clearly enough to measure
the outcomes, appropriate study design, appropriate statistical
methods and adequate results. For randomized clinical trials
(RCTs), in addition to the features mentioned above, we assessed
whether the authors provided details of the method of random
allocation, sample size calculation and allocation concealment
and masking. A score of 1 was given for each feature that was
appropriately described; based on the features assessed, the
maximum possible score varied with the study design. It was
5 for a descriptive paper, 6 for an observational or experimental
study and 9 for an RCT. To make them comparable across the
study designs, the scores were standardized by dividing the
actual score by the maximum score possible for that study
design and multiplying by 10. A score of 10 was taken to
indicate a methodologically sound paper.
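The standardization described above is simple proportional scaling; the short sketch below (ours, not the study's, written in Python) illustrates the calculation. The design labels and the example score are illustrative only.

    # Illustrative sketch (not from the study): standardizing raw
    # completeness scores across study designs, as described above.
    # Maximum possible raw score for each study design:
    MAX_SCORE = {"descriptive": 5, "observational": 6,
                 "experimental": 6, "rct": 9}

    def standardized_score(raw_score, design):
        """Scale a raw completeness score to a 0-10 range.

        Each appropriately described feature contributes 1 point;
        the raw score is divided by the design-specific maximum
        and multiplied by 10, so 10 denotes a fully reported paper.
        """
        return raw_score / MAX_SCORE[design] * 10

    # Example: an RCT reporting 6 of the 9 assessed features
    print(round(standardized_score(6, "rct"), 1))  # 6.7

Under this scaling, the RCT scores of 3.3 to 6.7 reported in the Results correspond to between 3 and 6 of the 9 assessed features being adequately described.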
To determine if completeness of reporting had changed with time, papers of the proceedings of 2000 were compared with
more recent proceedings (2006). For this purpose, 100 out of 264
papers were chosen from the proceedings of 2006. We settled on 100 papers rather than all 264 for several reasons: the primary outcome measure was completeness of reporting in the proceedings of 2000; reviewing papers for completeness of reporting was time-consuming; and 100 was a statistically valid number to compare with 200. To avoid subspecialty bias,
we included the first five papers of the first 20 subspecialties
published in the proceedings of 2006.

To determine subsequent full publication, a detailed
computerized search of articles indexed by Index Medicus
was performed using the PubMed server as of January 2007.
Therefore, the evaluation period extended for a maximum
of 7 years. Appropriate key words from the title combined
with each author's name were used to identify the corresponding publication. If no hit was obtained, the process was repeated with each author individually, with groups of authors, and
finally with all authors. A published manuscript was considered
to be a full publication of a proceedings paper when it satisfied
both of the following criteria: (i) at least one of the authors of
the proceedings paper was an author of the publication and
(ii) at least one of the outcomes from the proceedings paper
was an outcome of the publication.
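For readers who wish to replicate this kind of search programmatically, the sketch below is a minimal illustration, assuming the Biopython Entrez client, of a title-keyword plus author query restricted to the evaluation window; the original study describes its strategy only in general terms, and the keywords, author name, and e-mail address here are placeholders.

    # Illustrative sketch only: scripting a PubMed search similar to the
    # strategy described above, assuming the Biopython Entrez client.
    # Keywords, author name, and e-mail address are placeholders.
    from Bio import Entrez

    Entrez.email = "searcher@example.org"  # NCBI asks tools for a contact

    def find_candidate_publications(title_keywords, author):
        """Return PubMed IDs of possible journal versions of a paper."""
        term = ('(%s) AND %s[Author] AND '
                '("2000/01/01"[PDAT] : "2007/01/31"[PDAT])'
                % (title_keywords, author))
        handle = Entrez.esearch(db="pubmed", term=term, retmax=20)
        result = Entrez.read(handle)
        handle.close()
        return result["IdList"]  # to be screened against the two criteria

    # If no hit is obtained, the call is repeated with other authors,
    # author groups, and finally all authors, as described above.
    print(find_candidate_publications("pterygium excision", "Dhaliwal U"))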
The data were recorded independently by both authors. The type of journal (national or international; ophthalmological or
other), month and year of publication, time lag to publication,
number of authors, sample size, geographical location of
the study, organization of origin (medical college or other),
design of the study (descriptive, observational or experimental
[RCT or non-RCT]) and subspecialty (as categorized in
the proceedings) were noted. Both authors independently
recorded any differences between the proceedings paper and
the published version with regard to sample size, analysis
methods, results, number of authors and change in authorship
sequence or names. The authors of this article were not masked
to the authors and institutions. In case of discrepancy in the
findings between the two authors, the results were discussed
to determine the type of difference, if any. When it pertained to
difference in the type of journal, month and year of publication,
number of authors, sample size, geographical location and
organization of origin of the study, completeness score,
subspecialty and difference between the proceedings paper and
the published version, the proceedings or the published paper
was revisited and the discrepancy appropriately corrected.
When it pertained to design of the study, the decision of the
statistical author was final.

The data were entered into an Excel spreadsheet and
the mean, standard deviation, and range for each category
were determined. Univariate significance testing with the
chi-square test and Fisher's exact test was used to compare
completeness of reporting in the Proceedings 2000 with that
in Proceedings 2006 and to determine whether there were
significant differences between papers with completeness
scores of 10/10 and those with lower scores, in publication
rates and study design. Univariate significance testing was
also used to determine if there were significant differences
in publication rates based on geographical location of the
study, organization of origin, subspecialty and study design.
To determine influence of geographical region of the study,
study design, and subspecialty on subsequent publication in
an indexed journal, we computed the P-value and odds ratio
by comparing one category with all other categories combined,
making a two-by-two contingency table.12 One-way ANOVA
with Tukey test was used to compare completeness scores
based on geographical region and subspecialty. Student's t-test
was used to determine differences in publication rates based
on the number of authors and sample size and differences in
completeness scores based on organization of origin.
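As an illustration of the two-by-two comparison just described, the sketch below (assuming Python with scipy; the counts are hypothetical, not the study's data) computes the odds ratio and Fisher's exact P-value, alongside the chi-square P-value, for one category versus all others combined.

    # Illustrative sketch with hypothetical counts: comparing one
    # category (e.g., a single subspecialty) against all other
    # categories combined for subsequent publication, as described
    # above. The counts below are not the study's data.
    from scipy.stats import chi2_contingency, fisher_exact

    #          published  not published
    table = [[8,         12],    # papers in the category of interest
             [25,        155]]   # papers in all other categories combined

    odds_ratio, p_fisher = fisher_exact(table)   # exact test, small cells
    chi2, p_chi2, dof, expected = chi2_contingency(table)

    print("OR = %.2f, Fisher P = %.3f, chi-square P = %.3f"
          % (odds_ratio, p_fisher, p_chi2))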
Results
The number of papers published in the Proceedings of the
AIOC 2000 was 200; all were retrieved for the study. A large
majority of studies failed to include the dates of the study,
description of study design or statistical methods employed
[Table 1]. In comparison, the Proceedings of 2006 showed
significant improvement in reporting of objectives and study
design, while correspondence details were less likely to be
reported. Of the 200 papers in the Proceedings of 2000, 13
were RCT; none of these gave information on methods used
to generate the random allocation, calculate sample size or
methods for allocation concealment and masking.
Table 1
Completeness of reporting in the proceedings of 2000 compared with that in proceedings of 2006
Scores for completeness are shown in Table 2; when the 13
RCTs were considered, scores were low and ranged between
3.3 and 6.7 (average 4.9 ± 0.97). Only 14 (7%) papers scored 10
out of 10 points for completeness. These included five descriptive, seven observational, and two experimental studies (12.5%, 14.6%, and 1.8% of each design, respectively;
P = 0.004). Their publication rates were comparable with those of papers that had lower scores (P = 0.18). Ninety-three papers
originated from medical colleges and 107 from private
centers. Completeness scores were comparable regardless
of organization of origin (P = 0.21). Papers from South India
had significantly higher completeness scores than those from
the West (P = 0.03), all other regions being comparable. When
subspecialty was considered, papers from the uvea session had
significantly higher scores than those from refractive surgery
(P = 0.03), all other subspecialties being comparable.
Table 2
Standardized completeness of reporting scores based on study design
Thirty-three (16.5%) of the papers published in Proceedings
2000 were subsequently published as 34 papers in journals
indexed by Index Medicus by January 2007 (within 7 years
of presentation; Table 3). Twelve papers were published in
indexed national journals; 10 (83.3%) of them in an ophthalmic
journal. Twenty-two papers were published in indexed
international journals (one study was published in two journals,
each reporting different aspects); 19 (86.4%) of these were ophthalmic
journals. Time from presentation to publication for 31 papers
ranged from 2 to 77 months (average 22.8 ± 16.4 months, median
20 months); the majority were published within the first 3 years of
presentation [Table 4]. Three papers were published 1 month,
5 months, and 6 months before they were presented.
Table 3
Indexed journals in which 33* proceedings papers were published
Table 4
Time period to publication of 34 papers* in indexed journals after presentation at the conference in 2000
The number of authors in papers published in the
proceedings varied from 1 to 9 (average 3.5 ± 1.7); those that
were not subsequently published in indexed journals had an
average of 3.4 ± 1.6 authors, while those that were published
had 4.1 ± 1.8 authors on average (P = 0.02). The sample
size in papers published in the proceedings varied from 1 to
7733 (average 219 ± 736, median 60); those that were not
subsequently published in indexed journals had an average
sample size of 214.3 ± 774.3, while it was 244.3 ± 511.9 in those
that were published (P = 0.07).

The majority of presentations at the conference were by
authors from South and North India. However, geographical
location did not influence the rate of subsequent publication in an
indexed journal [Table 5]. Publication rate was not influenced by
the organization of origin (P = 0.31). Experimental studies were
the most common study design but their publication rates were
not significantly different from other study designs [Table 6].
Of the 13 RCTs, three (23.1%) were published as full papers.
There was no difference in publication rates between RCTs and
non-RCT experimental studies (P = 0.39). Presentations dealing
with glaucoma were significantly more likely to be subsequently
published in an indexed journal [Table 7].
Table 5
Influence of geographical region of the study on subsequent publication in an indexed journal
Table 6
Influence of study design on subsequent publication in an indexed journal
Table 7
Influence of subspecialty on subsequent publication in an indexed journal
The published version differed from the proceedings paper
in 27 (81.8%) instances. The types of differences are depicted in
[Table 8]; they were mostly related to changes in author name,
number or sequence.
Table 8
Ways in which the proceedings paper differed from the version published in an indexed journal (n = 27)
Discussion
A look at the abstract book of AIOC 2000 shows that 278
abstracts were submitted as free papers for the conference. That
only 200 were published in the proceedings indicates that many
authors either did not eventually present or did not submit a
full version for the proceedings. Papers presented at scientific
meetings have one important purpose: to disseminate research
findings as soon as possible. However, medical scientists have
questioned the quality of such presentations, as many are not
reported in sufficient detail to enable judgments to be made
about the validity of their results.4,13,14 Our study, too, reveals
that details of study design were available in only about
one-third of the proceedings papers in the year 2000; even
fewer experimental studies reported the statistical methods
employed. Moreover, authors of RCTs omitted the methods used to generate the random allocation and to calculate sample size, as well as details of allocation concealment and masking. Thus, scientists desirous
of duplicating the methodology in their own set-up would not
have access to sufficient information. To compound the issue,
correspondence details were missing from 20% of the studies,
making it difficult for others to contact researchers for more
information. The situation was not much better in the year 2006,
when study design, though reported significantly more often,
was still not available in half of the studies and correspondence
details were missing from one-third.

When completeness scores were considered, only 7% of
papers scored 10/10; experimental studies, both RCTs and
non-RCTs, were reported significantly more poorly than other designs. Poor reporting
in the proceedings did not influence publication rates and
was not related to organization of origin of the paper. Thus,
it may simply have resulted from a casual attitude by the
researchers towards the version published in the proceedings
as opposed to the presented version of the paper. On the other
hand, it may be a reflection of unsound methodology. Either
way, these findings assume significance as practitioners may
choose to alter their clinical practice based on results presented
at scientific meetings.7 Hence, clinicians should interpret
information presented at meetings with caution. Conference
organizers could provide clearer guidelines, which outline the
key elements that must be reported in all studies.

The subsequent rate of publication of papers presented at the
AIOC 2000 (16.5%) was much lower than that reported by other
specialty conferences (33-44.6%).1,7-9 To avoid a temporal bias,
we considered for comparison only those conferences that had been conducted at about the same time (between 1998 and 2001) as the AIOC 2000. However, none of these 'other specialty'
conferences were conducted in India. A Medline literature
search did not reveal any article assessing publication rates of
papers presented at conferences in India. Since publication rates
may be influenced by geographical region of origin of the article,
it may not be appropriate to compare publication rates across
the globe.14-17 Recent studies show that publication rates continue to vary widely, between 25% and 68%;17-20 however, some specialty conferences report rates higher than 80%.21,22 These authors conclude that high publication rates reflect well
on the abstract selection process and the scientific quality of the
meeting. Caution is advised when referencing or generalizing
from abstracts that have not been published in full.

To improve the quality of meetings, scientific committees
should be encouraged to be more selective.1,23
However, the number of papers submitted for presentation keeps increasing each year, which presumably makes the selection process more difficult.9 At the same time, investigators should be
encouraged to publish their data after presentation. It has
been suggested that failure to publish an adequate account of
a well-designed clinical trial is a form of scientific misconduct
that can lead those caring for patients to make inappropriate
treatment decisions.13

Though the AIOC publication rates were low, the time lag
to publication was comparable to other studies; most were
published within the first 3 years of presentation, usually
in a journal of the same specialty.1,7-10 It has been suggested
that presenters need help in submitting and publishing their
work.10 To that end, the Indian Journal of Ophthalmology, being
the only national, indexed, ophthalmic journal, could offer
further peer review and guidance to conference papers after
presentation in an effort to encourage researchers to publish.
More papers (nearly two-thirds) were subsequently published
in international journals than in Indian journals. Perhaps the impact factor of the journal prompted researchers to choose one
journal over another. Several specialty ophthalmic journals are available internationally, as seen in Table 3. It is likely that researchers working in a particular
specialty prefer to send their papers to a journal specializing
in that topic. Finally, the prestige attached to an international
publication may have influenced the choice of journal. These
factors were not specifically studied and may form the basis
for future research.

Though the difference in the average number of authors
between proceedings papers that were subsequently published
and those that were not is only about half an author, it was
statistically significant. The significance is possibly because
of the large range (between 1 and 9 authors per paper). Other
reports24 have also found that papers with a larger author byline
are significantly more likely to be published in an indexed
journal. Presumably, having more authors on the byline ensures
that one or the other carries the paper to full publication.

Subspecialties providing the largest number of abstracts
have higher publication rates.25 However, in the AIOC,
although cataract and retinal subspecialties provided the
largest volume of presentations, studies dealing with glaucoma
were more likely to be published. We were unable to explain
this discrepancy. Completeness of reporting was not deemed to
be responsible as scores were comparable for all subspecialties
except uvea, which had the most methodologically sound
papers, and refractive surgery, which had the worst. Perhaps
glaucoma research requires a more sophisticated set-up
that might co-exist with an awareness or requirement for
publication of research results. However, the same may be true
for other specialties like retina or squint.

Authors report that randomized or controlled clinical
trials are more likely to be published.12,25,26 However, our
study found that they were published at the same rate as non-RCT experimental studies. Since large clinical trials are the standard for making treatment decisions, the consequences of non-publication of the results of trials are significant; non-publication can lead to bias in the literature and contribute
to inappropriate medical decisions.27 Sample size was not predictive of publication in our study; it was extremely variable, ranging between one case and several thousand, perhaps accounting for its lack of predictive value.

Researchers from South India had higher completeness
of reporting scores than those from the West. However,
geographical location did not affect publication rates. Perhaps,
a larger sample size could bring out statistically significant
geographical differences in publication rates. This study
also highlights that there are geographical differences in the
number of papers presented at the conference. This may in
part be due to the fact that the conference was held in South
India, but if confirmed, the All India Ophthalmological Society
could consider measures to rectify this regional disparity.
Publication rates were not influenced by organization of
origin. This probably is an indication that researchers in private
institutions are under some pressure to publish, no different
from researchers in medical colleges.

Differences between the paper presented at the AIOC 2000
and the version published in an indexed journal were found
in more than 80% of instances. This figure is much higher than
that reported in other studies (18-59%).8,19,26 However, the
type of difference varies between studies. The most common
discrepancy seen in our study was in the author byline, with
81.4% of papers having replaced one or more proceedings'
authors with new ones. Other studies have shown that change
in author names and number is not uncommon.8,26,28 Such
changes may result from pressure to grant gift authorship to
persons in a position of relative power; persons who might
otherwise cause conflict or mar the chances of presentation
or publication.29 On the other hand, it is possible that authors
contribute to the presentation but not to the actual research
and publication, or have moved on and are difficult to trace.
It might be logical and helpful to apply the same authorship
criteria to conference presentations as are required for
indexed publications.30 Though seen infrequently in our study, differences in study design, results, and outcomes have been reported to occur in 10-19% of papers.25,31 Such
differences may result when investigators fail to review and carefully report
their work at the time of abstract submission owing to the
pressure of submission deadlines. Moreover, results may be
poorly interpreted at the time of presentation. On the other
hand, investigators tend to be more careful when submitting
data for publication in peer-reviewed journals, while some
changes in the data occur during the peer-review process due
to editing by the editorial staff of journals.8 Some changes
between the presented and the published paper may result
from audience feedback at the conference, which possibly helps
improve the quality of the finished study.26

There were some limitations in the present study. It is
possible that we missed articles published more than 7 years
after the AIOC 2000. Moreover, we restricted our search to
Medline-indexed journals and may have missed some articles
published in journals that are not indexed in Medline. We did
not specifically go into the reasons for conference presentations
failing to get published in indexed journals. Other studies
suggest that presented material may not be published, as
investigators do not submit it for publication or because the
work is not scientifically valid and may not meet the scrutiny
of the peer-review process required for full publication.10,32

One possible reason that scientists do not publish their
presentation material in indexed journals is that they need the presentation only to be permitted to attend, and be reimbursed for, the conference. They may not be interested in doing anything
further with the data after that. This speculation is supported by
the fact that 278 abstracts were selected for presentation at the
AIOC 2000, but only 200 were published in the proceedings.

While the proceedings are an appropriate measure to
study, they are at best a surrogate for the presentations at
the conference. Thus, the actual presentations may have been quite different and may not be reflected in the print versions that followed (the proceedings and subsequent publications). However, since the
actual presentation content is impossible to revisit at a later
date, it is the proceedings that other researchers will access;
this makes it imperative that presenters give the same attention
to the version they submit for the proceedings as they give to
the presentation.

We conclude that the overall quality of reporting of scientific
papers in the proceedings of the AIOC 2000 was inadequate
and many did not go on to publication in an indexed journal.
Differences between the published paper in a journal and in
the proceedings were common. Ophthalmologists should
be cautious about using information provided in conference
proceedings in their ophthalmic practice.

Guidelines to scientific committees for more rigorous
selection of abstracts could improve the reporting of studies and
increase the publication rate. Researchers should be encouraged
to publish their data. Future studies should look at barriers to
the publication of research findings and identify ways to assist
the publication process.