
Media and social media attention to retracted articles according to Altmetric.

Stylianos Serghiou, Rebecca M Marton, John P A Ioannidis.

Abstract

The number of retracted articles has grown rapidly. However, the extent to which researchers and the public are made adequately aware of these retractions, and how the media and social media respond to them, remains unknown. Here, we aimed to evaluate the media and social media attention received by retracted articles and to assess the attention they receive post-retraction versus pre-retraction. We downloaded all records of retracted literature maintained by the Retraction Watch Database and originally published between January 1, 2010 and December 31, 2015. For all 3,008 retracted articles with a separate DOI for the original article and its retraction, we downloaded the respective Altmetric Attention Score (AAS) (from Altmetric) and citation count (from Crossref) for the original article and its retraction notice on June 6, 2018. We also compared the AAS of a random sample of 572 retracted full journal articles available on PubMed to that of unretracted full articles matched from the same issue and journal. 1,687 (56.1%) of the retracted research articles received some Altmetric attention, and 165 (5.5%) were even considered popular (AAS > 20). Of 2,953 articles with a Crossref record, 31 (1.0%) had received >100 citations by June 6, 2018. Popular articles received substantially more attention than their retraction, even after adjusting for attention received post-retraction (Median difference, 29; 95% CI, 17–61). Unreliable results were the most frequent reason for retraction of popular articles (32; 19%), while fake peer review was the most common reason (421; 15%) for the retraction of other articles. In comparison to matched articles, retracted articles tended to receive more Altmetric attention (23/31 matched groups; P-value, 0.01), even after adjusting for attention received post-retraction.
Our findings reveal that retracted articles may receive high attention from media and social media and that for popular articles, pre-retraction attention far outweighs post-retraction attention.


Year:  2021        PMID: 33979339      PMCID: PMC8115781          DOI: 10.1371/journal.pone.0248625

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Retraction refers to the formal withdrawal of a publication, most often due to scientific misconduct or an error that invalidates the purported conclusions [1]. The number of retracted articles has increased dramatically over the last decade, from fewer than 100 reported per annum before 2000 to almost 1,000 in 2014 [2] and 1,772 in 2019 [3]. Such retractions are often publicised by the journal itself in the form of a retraction notice (albeit not all journals issue a retraction notice upon retraction), and initiatives such as Retraction Watch [4] of the Center for Scientific Integrity keep track of these retractions. However, the extent to which researchers or the public are made aware of these retractions and the amount of attention that they receive is unknown. Concerningly, current evidence suggests widespread misinformation. As of May 2019, the most highly cited retracted article had received 371 citations since its retraction in 2018, and seven of the ten most highly cited retracted articles had received at least 100 citations since their retraction [5]. Some of these citations may cite the work as being unreliable and acknowledge its retraction, but this is not necessarily the case in many citations. For example, a study of all 25 retracted papers by anaesthesiologist Scott S. Reuben found that 74% of citations received post-retraction did not clearly state that the work they were referring to had been retracted [6]. These results align well with literature on case studies [7, 8], specific disciplines [9] and the broader literature [10], which identifies that at least 80% of retracted articles receive positive post-retraction citations.
Such perpetuated misinformation is not inconsequential: guidelines and meta-analyses seem to be very rarely updated to remove retracted articles [11], and a recent preprint suggests that doing so would lead to a median reduction in estimated effect size of 13% and an average reduction of 30% [12]. One particular type of impact for scientific articles is the attention they receive in media and social media. This type of impact is complementary to citations in the scientific literature and may be more relevant when it comes to understanding how an article fares in the wider community, beyond just expert scientists. It would be very interesting to understand how much attention retracted articles receive and how retraction affects that attention. This can be examined, and also compared against citation counts, using readily available databases. The Altmetric database [13] tracks any media or social media attention to articles with a digital object identifier (DOI), and Crossref [14] maintains a citation count for such articles. We integrated the Retraction Watch Database [3], which systematically captures retracted articles, with data from Altmetric and Crossref to (a) describe retracted article characteristics and the associated amount of impact and attention received, (b) compare whether the amount of attention received changed before and after retraction, (c) describe the amount of attention received by retracted articles in comparison to the amount received by their retraction notice, and (d) compare the amount of attention that eventually retracted articles received to that of similar matched unretracted articles published in the same journal issues.

Results

The Retraction Watch database

As of August 14, 2020, the Retraction Watch database contained 22,200 publications with a unique Digital Object Identifier (DOI), PubMed ID (PMID) or title (when DOI or PMID were not available) published between 1923 and 2020. Of these, we retained 11,807 unique publications published in 2010–2015 (S1 Table), a period we chose a priori as a representative sample with sufficient time to accrue retractions and data about the impact of those retractions. Most of these were designated by Retraction Watch as conference abstracts (6,561; 56%), research articles (4,046; 34%) or clinical studies (450; 3.8%); overall, we identified 4,603 (39%) studies that we define as research articles (see Materials and Methods). Research articles were most often classified by Retraction Watch under the Biological sciences (2,387; 52%), followed by the Health sciences (2,031; 44%) and the Physical sciences (1,233; 27%) (Table 1; S2 Table); each article could be classified under more than one field. The most common subcategory was cellular biology (1,060; 23%). A very large number of journals was represented (n = 2,239) and, of 392 publishers, the most common were Elsevier (939; 20%), followed by Springer (719; 16%) and Wiley (333; 7.2%). The most highly represented countries were China (1,260; 27%), the United States (891; 19%) and India (402; 8.7%) (Fig 1).
Table 1

Descriptive statistics for 4,603 unique eligible research articles.

Category             Value                    Count    Percent
Date (original)      2010                       650      14%
                     2011                       674      15%
                     2012                       827      18%
                     2013                       703      15%
                     2014                       912      20%
                     2015                       837      18%
Date (retraction)    2010                       128       3%
                     2011                       301       7%
                     2012                       473      10%
                     2013                       576      12%
                     2014                       619      13%
                     2015                       949      21%
                     2016                       860      19%
                     2017                       526      11%
                     2018                       175       4%
Article type         Research Article         4,044      88%
                     Clinical Study             450      10%
                     Meta-analysis              120       3%
Country              China                    1,260      27%
                     United States              915      19%
                     India                      402       9%
                     Iran                       309       7%
                     South Korea                227       5%
                     Other (n = 110)          2,047      45%
Retraction reason    Duplication of Article     667      15%
                     Fake Peer Review           594      13%
                     Plagiarism of article      412       9%
                     Other (n = 88)           3,737      81%

Category             Value                    Median   IQR
Date                 Original                 2013     2011–2014
                     Retraction               2015     2013–2016
AAS                  Original                 0.25     0.00–7.00
                     Retraction               0.25     0.00–8.79
Citation count       Original                 3        1–10
                     Retraction               0        0–0

For Article type, Country and Retraction reason the proportions do not add up to 100% because each article could be classified under multiple article types, have multiple reasons for retraction and have affiliations from multiple countries. Missing values: Country (1, 0%), AAS—Original (758, 17%), AAS—Retraction (847, 18%), Citation count—Original (817, 18%) and Citation count—Retraction (914, 20%). The large number of missing AAS and citation counts is due to the subsequent addition of articles not initially present in our version of the Retraction Watch Database (see Materials and Methods). IQR = Interquartile Range. AAS = Altmetric Attention Score.

Fig 1

Proportion of retracted research articles by country.

Proportion of retracted research in relation to all peer-reviewed documents published in 2010–2015, for countries with >500 peer-reviewed documents within those 6 years, as indicated by the National Science Foundation (see Materials and Methods). This proportion varies substantially by country and by continent. The continent with the most retractions is Asia and the continent with the fewest is Europe. The country with the highest proportion of retractions was the Republic of Congo (3/922; 0.3%) and the country with the lowest was Hungary (3/55,609; 0.005%). Note that NSF counts of total peer-reviewed documents do not include letters, which represent 21/4,603 (0.5%) of our research articles as per our eligibility criteria. Grey signifies no data for those countries.

Out of 4,142 retraction notices with a unique DOI or PMID, 44 referred to the retraction of more than one original article—the largest retraction was published in Tumor Biology, retracting 103 unique articles due to fake peer review [15]. The commonest reasons for retraction were Duplication of article (667; 15%), Fake peer-review (594; 13%) and Plagiarism of article (412; 9%) (S3 Table). For all 4,609 unique article-retraction pairs, the median time from publication to retraction was 457 days (IQR, 179–956 days) (S1 Fig).
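The time-to-retraction summary (median and interquartile range of days from publication to retraction) can be sketched as follows. This is a minimal illustration with hypothetical date pairs, not the study's actual pipeline, and the quartile convention (median of each half) is an assumption; other software may compute quartiles slightly differently.

```python
from datetime import date
from statistics import median

# Hypothetical (publication, retraction) date pairs; the real analysis
# used 4,609 article-retraction pairs from the Retraction Watch Database.
pairs = [
    (date(2012, 3, 1), date(2013, 6, 1)),
    (date(2010, 1, 15), date(2010, 7, 20)),
    (date(2014, 5, 5), date(2017, 1, 2)),
    (date(2011, 9, 9), date(2012, 2, 1)),
]

# Days elapsed between publication and retraction for each pair.
days = sorted((retracted - published).days for published, retracted in pairs)
med = median(days)

def quartiles(xs):
    """Q1 and Q3 as medians of the lower and upper halves (a Tukey's
    hinges variant; conventions differ across statistical software)."""
    xs = sorted(xs)
    half = len(xs) // 2
    return median(xs[:half]), median(xs[half + (len(xs) % 2):])

q1, q3 = quartiles(days)
print(f"median {med} days, IQR {q1}-{q3} days")
```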

Altmetric attention score and citations

Of 4,324/4,603 original articles with a DOI, 3,363 (81%) had a different DOI for the original article than for the retraction notice. For 3,097 (92%) of these we extracted data about Altmetric attention and citations; the discrepancy exists because we report on a more recent version of the Retraction Watch Database than the one originally used to extract AAS and citation counts on June 6, 2018. Within these, the median Altmetric Attention Score (AAS) was 0.50 (Interquartile range (IQR), 0.00–7.3) for an original article and 0.25 (IQR, 0.0–9.0) for a retraction notice (Fig 2); the AAS is a composite measure of total media (e.g. news outlets) and social media (e.g. Twitter) attention (see Materials and Methods). Of the 3,097 research articles, 1,733 (56%) received some media and social media attention (AAS > 0) and 168 (5.4%) received substantial media and social media attention (AAS > 20). These popular articles were published in 108 different journals, the most common being Science (12/168; 7%) and Nature (10; 6%) (Fig 3A). The publisher with the most popular retracted articles was Springer-Nature (26/168; 15%) (Fig 3A). Articles with AAS < 20 were published in a much larger array of journals (n = 1,445). The publisher with the most such retractions was Elsevier (542/2,923; 19%). The commonest reason for retraction of popular articles was Unreliable results (33/168; 20%), while for other articles it was Fake peer review (487/2,923; 17%).
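The attention thresholds used above (any attention: AAS > 0; "popular": AAS > 20) are straightforward to tabulate from a list of scores; a minimal sketch with hypothetical AAS values (not the study's data):

```python
from statistics import median

# Hypothetical AAS values; the study analysed 3,097 original articles.
aas = [0.0, 0.25, 1.0, 0.0, 7.3, 35.0, 0.5, 22.0, 0.0, 3.0]

any_attention = sum(a > 0 for a in aas)   # received any media/social media attention
popular = sum(a > 20 for a in aas)        # "popular" per the paper's AAS > 20 cutoff
print(any_attention, popular, median(aas))
```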
Fig 2

AAS and citation count across original articles and retraction notices.

The distributions of AAS for 3,097 original articles and their retraction notices are fairly similar. However, a small number of original articles received more extreme AAS scores (5 articles with very high original AAS (Range, 1019–3166) are not shown for clarity). Unlike AAS, the citation count of the 3,008 original articles is far greater than that of their retraction notices.

Fig 3

Features across popularity and impact.

(A) The distribution of Country, Journal, Publisher and Reason for retraction is different across levels of popularity in 3,097 original articles. Popular retracted articles often came from the USA, were published in journals such as Nature and Science and were mostly retracted because of unreliable results. On the contrary, other retracted articles often came from China, were published in journals such as Tumor Biol and J Biol Chem and were primarily retracted because of fake peer review. (B) The distribution of Country, Journal, Publisher and Reason for retraction across levels of impact for 3,008 original articles had a similar pattern as the pattern seen across levels of Altmetric attention. However, the commonest reason for retraction in highly cited research (>100 citations) was duplication or manipulation of images.

Of 3,570 unique articles with Crossref citation data for both the original article and its retraction notice, 3,008 (84%) had a separate DOI for the original article and its retraction notice. The median citations for these 3,008 articles were 4 for original articles (IQR, 1–12) and 0 for retraction notices (IQR, 0–0) as of June 6, 2018 (Fig 2). 28 of the original articles (0.9%), but none of the retraction notices, received at least 100 citations. The most common journal for highly cited (>100 citations) retracted articles was Cell (4/28; 14%) (Fig 3B). The commonest reason for retraction of highly cited articles was Manipulation/Duplication of Images (8/28; 29%), whereas for other articles (<100 citations) it was Fake peer review (487/2,979; 16%).

Attention and citations to the original article vs. its retraction notice

Overall, for the 3,097 original articles and their retraction notices, the AAS of each original article did not differ substantially from that of its retraction notice (Median difference, 0; IQR, -1.0–1.0; P-value, 0.54). However, popular original articles received substantially higher media and social media attention than their retraction notice (Median difference, 30; IQR, 14–91; P-value < 10−16) (Fig 4). 109/168 (65%) popular articles did not have a popular retraction notice and 10/168 (6%) had a retraction notice that received no attention at all (AAS = 0). Overall, the original article received more attention than its retraction notice on 1,056 occasions and the retraction notice more than the original article on 1,016 occasions (P-value, 0.39). For popular articles, the numbers were 156 versus 12 (P-value < 10−16).
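The comparison of how often the original out-scored its retraction notice (1,056 vs. 1,016 occasions, ties excluded) is a sign test, i.e. a binomial test with p = 0.5. A two-sided exact binomial p-value can be sketched as below; this is an illustrative reimplementation (in log space to avoid overflow at n ≈ 2,000), not the study's actual code, and follows the common "sum all outcomes no more likely than the observed one" convention.

```python
from math import exp, lgamma, log

def binom_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: sum P(X = i) over all outcomes i
    that are no more likely than the observed k (computed in log space)."""
    logpmf = [
        lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
        + i * log(p) + (n - i) * log(1 - p)
        for i in range(n + 1)
    ]
    obs = logpmf[k]
    # Small tolerance so the symmetric counterpart of k is not lost to rounding.
    return min(1.0, sum(exp(lp) for lp in logpmf if lp <= obs + 1e-9))

# Example from the text: original article out-scored its retraction on
# 1,056 occasions vs. 1,016 the other way round (ties dropped).
p_value = binom_two_sided(1056, 1056 + 1016)
print(round(p_value, 2))  # consistent with the reported P-value of 0.39
```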
Fig 4

Change in attention by popularity.

All points in green represent an original article that received more media and social media attention than its retraction notice and all points in red represent the opposite; points in grey represent no difference between the two. The large point and solid line in grey represent the median and its interquartile range. The difference is rather balanced for 2,923 articles that are not popular—the extreme negative values at -395 came from a single retraction in Tumor Biol, which retracted 103 unique original articles. In 168 popular articles, the difference is skewed to the right such that most popular articles did not have an equally popular retraction.

However, the above results do not take into account attention received by the original article because of its retraction. Within our sample of 3,097 articles, 279 were retracted within a year of our Altmetric data retrieval. For these articles we could use data provided by Altmetric on cumulative attention received over the past 1 month, 3 months, 6 months and 1 year. Over the previous year, the 279 articles gained most attention in the months following their retraction (Fig 5A), in contrast to articles retracted more than a year earlier, which did not experience appreciable attention gain during the previous year (Fig 5B, S2 Fig). Of the 279 articles retracted within a year of our data collection, 179 (64%) received at least some attention before or after retraction; 87 (49%) received most of their attention before retraction and 88 (49%) after retraction (4 received similar attention before and after retraction).
Fig 5

Altmetric Attention Score (AAS) over time.

(A) Cumulative AAS for 279 articles retracted within a year of Altmetric data retrieval (from June 6, 2017 to June 6, 2018). The horizontal axis denotes how many months before the day of Altmetric data retrieval the article was retracted and the vertical axis the amount of AAS gained before and after retraction. It illustrates that, over the past year, most gains in AAS occurred after retraction. (B) Even though the median gain is zero in both the 279 recently and the 2,812 not recently retracted articles, proportionally many more recently retracted articles experienced a gain, as denoted by the large interquartile range and the mean (grey dot denotes the mean and the range the bootstrapped 95% CI).

Considering only the 279 articles for which we could retrieve data on changes in AAS over time, the effects observed when considering total AAS were attenuated (Table 2, S3 Fig). However, the median attention received by popular articles was still markedly higher than that of their retraction (Median difference, 31; IQR, 21–82; Median original-to-retraction ratio, 4.1; IQR of original-to-retraction ratio, 2.8–16.2). Considering the 179 articles with non-zero original attention, the attention received by the original article exceeded that of the retraction most of the time (121 vs. 42), even though the median difference was small (Median, 0.8). These numbers were only slightly attenuated in sensitivity analyses in which all of the attention received by the original over the last year was removed (instead of only removing attention received after publication of the retraction) (S4 Table), but more substantially attenuated when this attention was then added to the retraction notice (S5 Table).
Table 2

Pairwise comparison of original article vs. retraction notice with and without post-retraction AAS.

                          Overall                       Original ≥ 20 AAS            Original > 0 AAS
                          Total          Before         Total          Before        Total            Before
N                         279            275            20             15            179              124
Median difference (IQR)   0 (-0.3–1.5)   0 (-1.0–0.3)   31 (21–82)     27 (15–66)    0.8 (0–5)        0.5 (-1–3)
Median ratio (IQR)        1.4 (0.4–24.7) 0.3 (0.0–4.3)  4.1 (2.8–16.2) 3.04 (2.3–11.4) 2.5 (1.0–100.6) 2.5 (0.5–Inf)
Original > Retraction     121 (43%)      79 (28%)       20 (100%)      14 (93%)      121 (68%)        79 (64%)
Retraction > Original     82 (29%)       123 (44%)      0 (0%)         1 (7%)        42 (23%)         40 (32%)
Equal                     76 (27%)       77 (28%)       0 (0%)         0             16 (9%)          5 (4%)
P-value                   0.007          0.002          2 x 10−6       0.001         5 x 10−10        4 x 10−4

Total = total AAS of original to date; Before = AAS received by original before retraction; Median ratio = median of the ratio AAS of original article / AAS of retraction notice; Inf = Infinity. The p-value is from a Binomial test for the number of articles with greater original vs. greater retraction attention.

It could be that popular original articles attracted attention because of their retraction. As such, in a sensitivity analysis we examined the tweets associated with all 17/317 recently retracted articles that were popular even before retraction (317 rather than 279 because all original articles with a DOI were considered; note also that tweets constitute only part of the AAS, which is a multi-factorial metric). 14/17 received at least one tweet (Median, 41; IQR, 12–95), of which 8/14 were openly available on Altmetric. Of these 8, the median number of pre-retraction tweets was 17 (IQR, 4–45) and none was negative. Similarly, the median number of post-retraction tweets was 2 (IQR, 1–3): all tweets were negative for 5 articles, 1 article did not receive any tweets and, surprisingly, 2 articles exclusively received non-negative tweets [16, 17]. The first article was published in JAMA Pediatrics by Wansink et al. and concluded that branding school lunches can improve uptake of healthy food by school children [16]. This article received 4 non-negative tweets before retraction and 743 non-negative tweets after retraction; 742/743 were retweets of a sentence suggesting that stickers make children choose fruit over cookies. The second article was published in the International Journal of Neuropsychopharmacology and concluded that ketamine is efficacious as a rapid-onset antidepressant in the emergency department [17]. It received 24 non-negative tweets before retraction and 2 non-negative tweets after retraction, both praising ketamine.
In terms of citations, out of 3,008 records, the median difference in citations between the original and its retraction notice was 4 (IQR, 1–11; P-value < 10−16) and for 31 highly cited original articles (>100 citations) it was 138 (IQR, 123–172; P-value < 10−16). Overall, the original article received more citations than its retraction notice on 2,393 (80%) occasions and the retraction more than its original on only 122 (4%) occasions (for 493, the citations were equal).

Attention to retracted vs. matched unretracted articles

We first compared 572 retracted articles matched with 2,832 unretracted articles, creating 450 distinct groups (see Materials and Methods). The largest such group contained 48 articles (40 unretracted, 8 retracted) and the smallest contained 3 articles (2 unretracted, 1 retracted) (Median, 6 articles; IQR, 6–6). Within groups, the median retracted article received a higher AAS than its median control on 253 occasions, a lower AAS on 57 occasions and the same AAS on 140 occasions (Relative risk, 4.43; 95% CI, 3.31–6.04). The median difference between median retracted and unretracted articles within groups was 0.50 (95% CI, 0.25–0.76). We then restricted our analyses to articles retracted within a year of retrieving data from Altmetric (i.e. between June 6, 2017 and June 6, 2018). Out of 2,932 eligible articles with a unique PMID, 292 had been retracted within this time period. Of these, 55 had been matched with 387 unretracted articles, creating 47 distinct groups. The largest such group contained 31 articles and the smallest contained 4 (Median, 6 articles; IQR, 6–11). The large number of unretracted articles per group occurred because many groups originally contained more than one retracted article, each of which had been matched to 5 distinct unretracted articles. Within matched groups, the median difference between retracted and unretracted articles in terms of all-time AAS was 1.00 (IQR, 0.00–7.35; 95% CI, 0.25–2.04) (Fig 6). After excluding the last year, the median difference between the two became 0 (IQR, 0–3.87; 95% CI, 0.00–0.00), despite the strong right skew (Mean, 6.06; 95% CI, 2.03–6.95). Out of 47 groups, in terms of all-time AAS, the median retracted article had a higher AAS on 30 occasions, versus 8 for the median unretracted article (P-value, 0.0005). Excluding the last year, the retracted article had a higher AAS on 23 occasions, versus 8 for the unretracted article (Relative risk, 2.88; 95% CI, 1.24–7.40; P-value, 0.01).
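The within-group comparison described above (median retracted vs. median unretracted AAS per matched group, then counting how often each side wins) can be sketched as follows. The groups here are hypothetical stand-ins for the journal-issue-matched groups used in the study.

```python
from statistics import median

# Hypothetical matched groups: each retracted article's AAS alongside the
# AAS of unretracted control articles from the same journal issue.
groups = [
    {"retracted": [12.0], "controls": [0.0, 1.0, 0.25, 3.0, 0.0]},
    {"retracted": [0.0], "controls": [0.0, 5.0, 2.0, 0.0, 1.0]},
    {"retracted": [4.5, 7.0], "controls": [1.0, 0.0, 2.5, 0.5]},
]

higher = lower = equal = 0
diffs = []
for g in groups:
    # Compare the median retracted article to the median control per group.
    d = median(g["retracted"]) - median(g["controls"])
    diffs.append(d)
    if d > 0:
        higher += 1
    elif d < 0:
        lower += 1
    else:
        equal += 1

print(higher, lower, equal, median(diffs))
```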
Fig 6

Difference in attention within 47 matched groups of articles retracted within a year of Altmetric data extraction.

Each point represents the difference in AAS between the median retracted vs. the median unretracted article within matched groups. Green points represent groups in which the original article received more media and social media attention than matched unretracted articles and red points represent the opposite; points in grey represent no difference between the two. The “With last year” column represents all of the Altmetric attention received, whereas the “Without last year” column represents all Altmetric attention minus the last year. The large point and solid line in grey represent the median and its interquartile range. The median difference between retracted and matched unretracted articles is small (Median, 1) when including attention received over the last year and 0 otherwise. However, substantially more retracted articles received higher attention than matched unretracted articles, regardless of including attention from last year or not.


Discussion

In this literature-wide study of retraction, the number of retractions was found to vary substantially by country, journal, publisher and field of science. At least half of the retracted research articles studied received some amount of Altmetric attention, almost 5% were considered popular and almost 1% had received more than 100 citations. Popular articles tended to receive substantially more attention than their retraction, even after adjusting for attention received post-retraction, and were often retracted because of unreliable data/results. This was unlike most other articles, for which fake peer review was the most frequent reason for retraction. In comparison to matched articles, retracted articles were 1.2–7.4 times more likely to receive more Altmetric attention, even after adjusting for attention received post-retraction. Our results indicate that retracted articles do receive attention because of their original publication, but they also receive substantial attention because of their retraction. In fact, 100/175 (57%) retracted articles received most of their attention after retraction. However, this is not the case for the popular articles, which by the nature of being popular may also be the ones most likely to spread misinformation. These articles tend to receive 2.5 times the amount of attention received by their retraction after adjusting for attention received because of retraction. Publishers may be reluctant to publish a retraction notice, to make notices clear and informative, and to make all potential readers of a retracted article aware that it has been retracted [18].
Indeed, as described above, in one of our sensitivity analyses we surprisingly found that for two of the articles whose post-retraction tweets we examined, retraction was directly or indirectly associated with promotion of the initial misinformation (being followed by further tweets spreading the original results), rather than correction of the record. In a study of 88 articles by anaesthesiologist Dr. Boldt, which 18 journals had agreed to retract in 2011, 9/88 (10%) had yet to be retracted by 2013 [19]. Of the 79 retracted, only 15 (19%) were accompanied by an "adequate" retraction notice and only 48 (61%) were adequately marked as retracted. A similar study found that out of 235 studied retractions, 21 (9%) did not offer a detailed reason and 52 (22%) articles were available with no mention of retraction [20]. The problems are even worse for articles kept in central repositories or personal libraries [21, 22], despite clear guidelines from the Committee on Publication Ethics (COPE) [23] and recommendations from the National Library of Medicine [24]. All of these issues amount to a critical problem that substantially hinders the ability of science to self-correct [25, 26]. In the presence of impediments, such as unclear and inconspicuous retractions, this self-correcting process may become unnecessarily slow, inefficient and ineffective [27]. The good news is that we do have the technology required to substantially improve our ability to flag retractions [28]. PubMed has been flagging retractions for years and recently introduced a larger banner to help make retractions more apparent [29]. Similarly, the reference manager Zotero now automatically checks a user's database for retracted articles and issues a warning when the user clicks on such an article and when they try to cite it [30]. The effect of these initiatives on reducing misinformation remains to be studied.

Limitations

This study has a number of limitations. First, this was a retrospective study, so we could not access the longitudinal data required to control our analyses for the possible effects of retraction over time. Even though we tried to adjust for the estimated effect of retraction by studying a subsample of our data with a suitable time window from retraction, more granular, prospectively collected data would substantially help reduce the risk of bias. Second, this was a descriptive, exploratory analysis. Even though we tried to mitigate the bias inherent in exploratory analyses by presenting all of our analyses and avoiding a focus on p-values, a further pre-registered study would substantially reduce the risk of bias. Third, analyses of popular articles included a relatively small sample, especially when trying to adjust for the effects of retraction over the last year.

Conclusions

Allowing for these limitations, our analysis documents that most eventually retracted articles and their retraction notices receive media and social media attention. In fact, eventually retracted articles tend to receive more media and social media attention than very similar, matched unretracted articles. Even though an original article and its retraction tend to receive similar amounts of attention, popular articles receive substantially more attention than their retraction notice. Such popular articles are most commonly retracted due to unreliable results, errors or misconduct, unlike other articles, which are primarily retracted due to fake peer review or duplication. Worryingly, popular articles receive additional attention upon retraction, and this attention does not always reflect their retraction but may perpetuate the original misconception.

Materials and methods

This study uses a retrospective cohort design to investigate attention to the original articles vs. their retraction notice and a case-control design to investigate attention to retracted articles vs. matched articles that were not retracted. The report was compiled using the guiding principles of the STROBE statement for retrospective cohort studies [31].

Attention to the original article vs. its retraction notice

Data acquisition

This is a retrospective cohort study of the retracted literature found in the Retraction Watch Database, shared with us under a data use agreement on August 14, 2020 in compliance with its terms and conditions. The Retraction Watch Database is a repository of retracted articles and their retraction notices compiled by the Center for Scientific Integrity's Retraction Watch [32, 33] and represents the most comprehensive database of retracted literature that we know of. It was made available to the public in October 2018 [2], at which point it hosted more than 18,000 articles published from the 1970s to 2018. Wherever applicable, this manuscript describes the updated version of the database made available to us on August 14, 2020; however, all analyses utilizing Altmetric attention and citation data refer to the articles and retractions identified from a beta version of this database accessed on May 29, 2018.

We downloaded all data available on Altmetric for the retrieved articles with a PubMed ID (PMID) or DOI on June 6, 2018, using the Altmetric Details Page API [13] and the R package rvest [34]. Altmetric gathers media and social media mentions of the published literature (e.g. mentions on Twitter, Facebook or news media), which it compiles into a composite attention score, known as the Altmetric Attention Score (AAS) [35]. We retrieved citation data for all articles on the Retraction Watch Database with a PMID or DOI using the rcrossref package [36] in R. These counts are taken from Crossref, a not-for-profit association that interlinks and tracks citations between a variety of published research literature sources; at the time of writing, Crossref had 46,723,946 articles with references deposited [37]. Finally, we extracted the total number of peer-reviewed documents published in science and engineering per country between 2010–2015 from publication output table S5a-2 of the National Science Foundation [38].
We used the total number of documents that mention each country at least once in their affiliations (the “whole count”), rather than the fraction of affiliations attributed to each country (the “fractional count”).
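The two lookups described above use public endpoints: the Altmetric Details Page API returns a JSON record containing the composite `score` for a DOI (and HTTP 404 when an article has received no tracked attention), while the Crossref REST API reports the citation count in the `is-referenced-by-count` field. The study itself performed these steps in R (rvest, rcrossref); purely as an illustrative sketch, the same lookups could be expressed as:

```python
import json
from urllib.parse import quote

# Public endpoints (Altmetric Details Page API and Crossref REST API).
ALTMETRIC_URL = "https://api.altmetric.com/v1/doi/{}"
CROSSREF_URL = "https://api.crossref.org/works/{}"

def altmetric_request_url(doi: str) -> str:
    """URL of the Altmetric record for a DOI; a 404 response means
    the article has received no tracked attention."""
    return ALTMETRIC_URL.format(quote(doi, safe=""))

def crossref_request_url(doi: str) -> str:
    """URL of the Crossref work record for a DOI."""
    return CROSSREF_URL.format(quote(doi, safe=""))

def parse_aas(body: str) -> float:
    """Altmetric Attention Score (AAS) from an Altmetric response body."""
    return float(json.loads(body)["score"])

def parse_citation_count(body: str) -> int:
    """Citation count from a Crossref response body."""
    return int(json.loads(body)["message"]["is-referenced-by-count"])
```

The parsing functions take the raw response body, so they can be tested without network access.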

Eligibility criteria

All peer-reviewed research articles on the Retraction Watch Database that were originally published between 2010–2015 and retracted by May 29, 2018 were eligible for our study. The 2010–2015 time frame was chosen to allow sufficient time for most eventually retracted articles of this period to be retracted and for their retraction to receive most of the Altmetric attention it is likely to receive. We define research articles as any studies labelled by Retraction Watch as one of the following types of article: Research Article, Clinical Study, Meta-analysis or Letter. Preprints and dissertations were excluded because they are not peer-reviewed publications.
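The eligibility rule above is mechanical, so it can be expressed compactly. The sketch below is illustrative Python (the study used R), and the field names are hypothetical rather than the Retraction Watch export's actual schema:

```python
from datetime import date

# Article types counted as peer-reviewed research articles in this study.
RESEARCH_TYPES = {"Research Article", "Clinical Study", "Meta-analysis", "Letter"}

def is_eligible(record: dict, accessed: date = date(2018, 5, 29)) -> bool:
    """Eligible if of a research-article type, originally published
    2010-2015, and retracted by the database access date."""
    return (
        record["article_type"] in RESEARCH_TYPES
        and date(2010, 1, 1) <= record["published"] <= date(2015, 12, 31)
        and record["retracted"] <= accessed
    )
```

Preprints and dissertations are excluded implicitly, since their type labels are not in the research-article set.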

Characteristic variables

We retrieved and analyzed the following characteristic variables from the Retraction Watch Database: title, author names, journal, publisher, institute, country of affiliation, open access (yes or no), category (e.g. Physical sciences; determined by Retraction Watch), subcategory or subject (e.g. Geotechnical and Geological Engineering; determined by Retraction Watch), type of article (e.g. research article), type of notice (e.g. retraction or correction) and reason for retraction (e.g. plagiarism). The database also provided the following information for most original articles and related notices: PubMed ID (PMID), DOI and date of publication (for both the original and its retraction).

Outcome variables

The outcomes of interest were the total Altmetric Attention Score (AAS), the change in AAS over the 1 month, 3 months, 6 months and 1 year preceding our last data retrieval (the only time points made available by Altmetric) and the citation count. Altmetric indicates that articles with an AAS > 20 are thought to be doing “far better than their contemporaries” [39]; we call articles with an AAS > 20 in our sample “popular”.

Sensitivity analyses

Even though any attention received by the original article after its retraction was removed when comparing original articles to their retraction (for the subset of articles for which this information existed), original articles may start accumulating retraction-related attention before their formal retraction. We therefore investigated the sensitivity of the observed effects to (a) removing all attention gained by popular articles over the past year, rather than only the attention gained since their retraction, and (b) adding the attention removed from the original article to the attention received by the retraction notice. Likewise, some of the attention attributed to the original article may emanate from concerns about the article (i.e. negative attention). To address this possibility, we examined all tweets gathered by Altmetric for all popular articles retracted within a year of our Altmetric data download. We then counted how many tweets were posted before and after retraction, judged how many of the pre- and post-retraction tweets were negative (e.g. “this article has now been retracted”) or non-negative (e.g. “this article presents impressive results”) based on our overall impression, and copied the first negative and the first non-negative pre- and post-retraction tweet (always the first, to avoid bias).
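The two sensitivity adjustments (a) and (b) reduce to simple arithmetic on three quantities: the original article's total AAS, the AAS it gained over the past year, and the retraction notice's AAS. A minimal sketch (illustrative Python; the study's analysis was done in R):

```python
def sensitivity_adjust(original_total: float,
                       retraction_total: float,
                       original_last_year: float) -> tuple:
    """(a) strip the original article's last year of attention;
    (b) credit that stripped attention to the retraction notice."""
    stripped_original = original_total - original_last_year      # adjustment (a)
    boosted_retraction = retraction_total + original_last_year   # adjustment (b)
    return stripped_original, boosted_retraction
```

For example, an original article with a total AAS of 100, of which 30 accrued over the past year, against a retraction notice with an AAS of 10, is compared as 70 vs. 40 under the adjusted analysis.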

Attention to retracted vs. matched unretracted literature

In addition to the aforementioned retrospective cohort study, we designed a case-control study. We treated all eligible articles retrieved from the Retraction Watch Database with a PMID as cases and then, for a random sample of 572 of these articles, we automatically identified a maximum of 5 random unretracted full articles from the same issue and journal. We indicate a “maximum of 5” as we could not always find 5 unretracted articles in the same issue. In the case of very large journal issues, such as issues that refer to a whole year, we matched cases to controls published for the first time (on PubMed this is “Date—Create”) within the same issue and within 3 months of the publication of the case. We then extracted all information held by PubMed about the cases and matched controls using the package RISmed [40] in R. We also extracted all data from Altmetric and Crossref for the matched controls, as we had previously done for the cases. All identified matched controls were eligible for further analysis.
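The matching step above can be sketched as follows. This is illustrative Python only (the study used R and the RISmed package); `retracted` and `create_day` (a day offset derived from PubMed's "Date - Create") are hypothetical field names:

```python
import random

def sample_controls(case: dict, issue_articles: list,
                    k: int = 5, window_days: int = 90,
                    seed: int = 0) -> list:
    """Pick up to k random unretracted controls from the case's journal
    issue, restricted to articles first created on PubMed within
    ~3 months of the case (relevant for very large issues)."""
    pool = [a for a in issue_articles
            if not a["retracted"]
            and abs(a["create_day"] - case["create_day"]) <= window_days]
    return random.Random(seed).sample(pool, min(k, len(pool)))
```

Returning `min(k, len(pool))` controls reflects the "maximum of 5" rule: when fewer than five eligible articles exist in the issue, all of them are used.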

Variables

No characteristic variables other than those required to identify the matched groups were used in further analysis. The outcome variables for this study were the same as the ones for the retrospective cohort study described above.

Missing and duplicated data

As far as we could tell, none of the characteristics or outcomes of interest were missing from the Retraction Watch Database. All records with a DOI or PMID but no Altmetric record were assumed not to have received any attention, as indicated by Altmetric (personal communication). The 287 original articles and 380 retraction notices with no DOI represent less than 10% of all 4,603 eligible unique records, so they were excluded from the Altmetric analysis with no attempt to impute missing data. We could not retrieve a citation count from Crossref for 59/3,732 original articles with a DOI and 71/3,647 retraction notices with a DOI; these also represent less than 10% of all eligible records, so all records with missing citation counts were excluded from the citation analysis with no attempt to impute the missing data. The Retraction Watch Database had two records for 11 articles with a DOI, each referring to a different notice. For example, an article may be retracted and replaced, and then its replacement may also be retracted, leading to two separate retractions of an article with the same DOI. Even though it is possible to identify notices referring to the same original article by using the PMID or DOI, for 140/4,603 (3%) records the database did not offer either of these two external and common identifiers. As such, for articles with no PMID or DOI, we identified notices referring to the same original article by their title. Using this process we determined that none of the 140 articles was duplicated, and we confirmed this by visual inspection.
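The deduplication logic described above (PMID or DOI where available, falling back on the title otherwise) can be sketched as follows; this is illustrative Python with hypothetical field names, not the study's actual R code:

```python
def dedupe_key(record: dict) -> str:
    """Identifier used to detect multiple notices for one original
    article: prefer the PMID, then the DOI, then a normalized title."""
    return (record.get("pmid")
            or record.get("doi")
            or record["title"].strip().lower())

def group_notices(records: list) -> dict:
    """Group notices that refer to the same original article."""
    groups = {}
    for rec in records:
        groups.setdefault(dedupe_key(rec), []).append(rec)
    return groups
```

Any group with more than one record then flags a potential duplicate (e.g. a retract-and-replace followed by retraction of the replacement) for manual inspection.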

Statistical analysis

We produced descriptive statistics and visualizations for all exposure and outcome variables. Apart from dates, all exposure variables were categorical and are presented as counts and proportions. The outcome variables of interest (Altmetric Attention Score and citation counts) are heavily right-skewed, so we primarily report the median and interquartile range, apart from a few exceptions where we felt the mean was also informative, in which case we present both. All continuous data were visualized as boxplots and all discrete data as bar charts. We also report the relevant sample size and missingness whenever applicable. All paired (original article and retraction notice) or grouped (retracted article and matched unretracted article) data were described both as grouped and as ungrouped data, and both in absolute (median difference of original vs. retraction) and relative (median ratio of original vs. retraction) terms. Paired comparisons were done using the nonparametric Wilcoxon signed-rank test (for the comparison of the original article to its retraction notice), the binomial sign test (for the comparison of proportions) and the non-parametric bootstrap with percentile confidence intervals (for the comparison of cases to matched controls). Results are presented in terms of effect size, with the frequentist uncertainty in the effect size expressed as a p-value and 95% confidence interval (CI). To mitigate the potential bias inherent in exploratory analyses such as this one, this report and the attached code contain the entirety of our analyses, and all presented p-values and CIs were calculated and included after the completion of this report. All data processing and analysis were done in the programming language R [41].
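As an illustration of the percentile bootstrap used for the case-control comparison, here is a minimal sketch (standard-library Python for illustration; the actual analysis was performed in R):

```python
import random
import statistics

def bootstrap_median_diff_ci(cases: list, controls: list,
                             n_boot: int = 2000, alpha: float = 0.05,
                             seed: int = 42) -> tuple:
    """Percentile bootstrap CI for the median paired difference
    (case minus matched control): resample the paired differences
    with replacement, recompute the median each time, and take the
    alpha/2 and 1-alpha/2 quantiles of the resampled medians."""
    diffs = [c - m for c, m in zip(cases, controls)]
    rng = random.Random(seed)
    medians = sorted(statistics.median(rng.choices(diffs, k=len(diffs)))
                     for _ in range(n_boot))
    lo = medians[int((alpha / 2) * n_boot)]
    hi = medians[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Because the interval is built from quantiles of the resampled statistic rather than a normal approximation, it is robust to the heavy right skew of attention scores noted above.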

Data sharing

Data extracted from Altmetric, Crossref and all matched PubMed articles have been deposited on OSF (Open Science Framework) and may be accessed at https://www.doi.org/10.17605/OSF.IO/7T32U under a CC BY Attribution 4.0 license. The Retraction Watch Database is available from Retraction Watch; requests for these data should be sent to: team@retractionwatch.com.

Code sharing

The analytic code is available on GitHub under the GNU GPL-3 license and may be accessed at https://github.com/serghiou/retraction-misinformation. We have also made all of our analyses available as a Markdown document in the same GitHub repository. We have turned the code required to download all data of interest from PubMed into an R package called metareadr, available for download from https://github.com/serghiou/metareadr.

Distribution of original publication versus retraction notice.

The distribution of retracted literature is roughly uniform across 6-month periods, whereas the distribution of their retractions follows a bell-curve with a left skew. This skew reflects that more publications are retracted very early rather than very late. (TIF)

Altmetric attention before and after last year by recency of retraction.

For the 2,733 articles retracted more than a year before data collection, the distributions of AAS received before versus within the last year do not differ meaningfully. On the contrary, for the 275 articles retracted within the last year, there is a marked increase in total AAS when the last year is included, implying that a substantial proportion of their total attention was received after retraction. The distribution of total AAS in recently retracted articles versus the rest is not meaningfully different. The vertical axis is transformed to reflect log(AAS + 1). The grey dots are the mean and its bootstrapped 95% CI. (TIF)

Change in attention by popularity in 275 recent research articles.

All points in green represent an original article that received more media and social media attention than its retraction notice and all points in red represent the opposite; points in grey represent no difference between the two. The large point and solid line in grey represent the median and its interquartile range. The difference is rather balanced for 260 articles that are not popular. In 15 popular articles, the difference is skewed to the right such that most popular articles did not have an equally popular retraction. (TIF)

Descriptive statistics for all 11,807 articles on Retraction Watch for 2010–2015.

(HTML)

Descriptive statistics for 4,603 research articles on Retraction Watch for 2010–2015.

(HTML)

Top 20 reasons for retraction in 4,603 research articles.

Count = number retracted for specific reason; Percent = proportion retracted for specific reason. Note that the proportions add up to >100% because articles could be retracted for more than one reason. (DOCX)

Pairwise comparison of original vs. retraction notice with or without counting the last year of Altmetric attention to the original.

The “Total” columns compare the total AAS received by the original articles against the total AAS received by their retraction notice. The “Without last” columns compare the AAS received by the original articles without the last year versus the total AAS received by their retraction notice. The values in parentheses are the IQR for the median and the standard deviation for the mean. The p-value is from a Binomial test of articles with greater original vs. retraction attention. Not all articles for “Total” qualified for “Without last”. (DOCX)

Pairwise comparison of original article vs. retraction notice when considering total post-retraction AAS.

The “Total” columns compare the total AAS received by the original article against the total AAS received by their retraction notice. The “Pre-retraction” columns compare the AAS received by the original article before retraction versus the attention received by its retraction notice plus any post-retraction attention directed to the original article. The values in parentheses are the IQR for the median and the standard deviation for the mean. The p-value is from a Binomial test of articles with greater original vs. retraction attention. Not all articles for “Total” qualified for “Pre-retraction”. (DOCX)

24 Jul 2020. PONE-D-20-17634, “Media and social media attention to retracted articles according to Altmetric”, PLOS ONE. Decision letter from Academic Editor Nikolaos Pandis inviting a revised submission (due Sep 7, 2020), including journal requirements to confirm that the data collection method complied with the terms and conditions of the source websites, and to confirm that the authors of the tweets reproduced in Fig 6 consented (or that the relevant research ethics committee confirmed consent was not required).
Reviewers’ responses to the standard questions. Is the manuscript technically sound, and do the data support the conclusions? Reviewer #1: Partly; Reviewer #2: Yes. Has the statistical analysis been performed appropriately and rigorously? Both: Yes. Have the authors made all data underlying the findings fully available? Both: Yes. Is the manuscript presented in an intelligible fashion and written in standard English? Both: Yes.
(Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: Thank you for the opportunity to review this manuscript. Overall, this approach to the topic of retractions is very timely and innovative. However, as we detail below, the authors appear to have relied on a dataset that was clearly marked “beta” at the time they extracted the data (May 2018) and was not launched until October 2018. At the very least, that fact should be clearly referenced throughout the manuscript (and the supplementary information, publication of which violates the data use agreement scholars have agreed to in exchange for the data). But more to the point, that seems to throw into doubt the accuracy and reproducibility of the findings. We would suggest that the authors contact us -- the creators of the Retraction Watch Database -- for a complete download of the dataset, at which point they can repeat their analyses with confidence. Specific: Introduction Page 4: “Retraction refers to the formal withdrawal of a peer-reviewed publication, mostly due to …” the referenced citation does not have a stipulation for “peer-reviewed, and to be accurate the “mostly due to” should probably read “most often due to”. (The reference says: “This is the formal withdrawal of one or more papers by one or all of the authors. In most circumstances, retraction happens when new findings, or an inability by other groups to replicate results, spur the authors to withdraw a paper.”) Page 4: “These results align well with broader studies,” The three citations refer to specific cases or a medical specialty – which do not count as “broader studies.” Broader studies to cite might be: https://asistdl.onlinelibrary.wiley.com/doi/full/10.1002/pra2.2016.14505301055 (disclosure: one of the reviewers is a co-author), or https://asistdl.onlinelibrary.wiley.com/doi/10.1002/pra2.35. There are others. 
Page 5: “authors of guidelines and meta-analyses seem to very rarely update their work to remove retracted articles [10,11]” Citation 10 is a review (not research) article – it would be a secondary source at best, but it does not discuss guidelines or meta-analyses. In addition, while an author may request revisions, corrections or otherwise, a journal/publisher may refuse to allow them, so perhaps taking “authors” out of the equation and simply saying these are very rarely updated would be more accurate. Page 5: “We hereby integrated the Retraction Watch Database..” Minor point, but “hereby” seems needlessly formal and can be deleted. Results Page 6: “Out of 4,217 research articles, 3,972 (94.2%) received a retraction notice (Table 1; S2 Table). This needs some context, and we would refer the authors to question 11 of the Retraction Watch Database User Guide: https://retractionwatch.com/retraction-watch-database-user-guide/ Corrections and Expressions of Concern (EoCs) have been entered as a point of interest related to authors with retractions. Unlike with retractions, there is no intention to be comprehensive on other types of citations. Page 8: “It is unclear whether higher rates of retraction in certain journals/countries are due to less robust research practices versus more robust efforts to identify papers that should be retracted or confounded by other factors (e.g. type and impact of research done)” This comment seems more appropriate in a discussion section, and would be improved with some citations or evidence to support the speculation. Page 11: “The difference is rather balanced for 2,843 articles that are not popular - the extreme negative values around -400 came from a single retraction in Tumor Biol.” Was this the retraction notice that pulled more than 100 articles at one time? This would make the weighting of this notice very unequal to other “single article” notices. 
Page 14: “2 articles only received non-negative tweets [16,17] .” Citation 16 and 17 belong to the articles presumably receiving “non-negative tweets”. Perhaps the authors could change it to “2 articles only (Wansink et al. and Larkin et al.) only received non-negative tweets [16,17]” to make it clear that the citations do not belong to a substantiation of the tweet numbers. The authors identify the two in the subsequent paragraph – but perhaps best to clarify it in the first sentence. Page 14: “Green signifies a non-negative tweet and red signifies a negative tweet.” Using colors as the sole indicator is problematic in readability for colorblind readers, which one of the reviewers is. Could the authors combine the colors with some other type of indicator – such as bolded letter vs non-bolded, stars and squares for data points instead of simply colors? Discussion: Page 18: “At least half of retracted research articles received some amount of Altmetric attention.” Would suggest adding “studied” after “articles” to clarify the sample analyzed. Page 19: “Indeed, in one of our sensitivity analyses we surprisingly noticed that in two of the articles examined, retraction led to the promotion of the initial misinformation (by being followed by further tweets spreading the initial results), rather than correcting the record.” We would suggest adding an example to demonstrate this. Page 19: “PubMed has been flagging retractions for years and it recently introduced a larger banner to help make retractions more apparent [28].” The authors may want to note that the banner effect is only available when the journal/publisher sends the retraction information in an appropriate format so the articles are linked. Retractions are still showing up without the original article being flagged, based on the metadata and information provided by the journal/publisher. 
Page 20: “Finally, a number of inconsistencies were identified while working with the data extracted from the Retraction Watch Database (see Methods). Even though we tried to correct as many of these as we could, it is possible that more of these exist.” It is deeply problematic to include this comment without noting that the authors chose to scrape a database clearly marked “beta” at the time, and did not contact the (well-known) creators of the database to either request a full download -- which is made freely available to scholars subject to a simple data use agreement -- or query the inconsistencies. The “beta” designation was not removed until October 2018, when the database was officially launched: https://www.sciencemag.org/news/2018/10/what-massive-database-retracted-papers-reveals-about-science-publishing-s-death-penalty As noted in our general comments, we would strongly recommend that the authors now request a download and repeat their analyses. If they choose not to do so, the manuscript requires prominent notices that the authors chose to use data marked “beta” that was known to be incomplete at the time. Methods: Page 22: “It was made available to the public in October, 2018 [2] … We extracted all records available on this database as of May 29, 2018 that were originally published between 2010-2015.” This sentence is somewhat confusing. It suggests, as we have noted elsewhere, that the authors extracted all the data prior to the “beta” designation being removed. If so, how did they extract it? Or did they extract it after the database went public, in which case it is unclear how they would know what was available on May 29, 2018? Did they have permission to acquire the information, as Retraction Watch has data usage requirements? If the data were extracted prior to the public data, then they were not complete, in which case the data and thus the analyses/results are based on incomplete and potentially biased data without being marked that way. 
While no resource will ever be 100% comprehensive for a variety of reasons that make retraction notices very difficult to find (e.g., notice in print form only and yet to be discovered, foreign language issues, journals removing articles from Table of Contents without notices, etc.), the authors have exacerbated these issues without noting that limitation in their manuscript, instead referring without comment to inconsistencies. Also, cross-checking the database for retractions made up to May 29, 2018, using Article type: Research Article or Clinical Study or Letter, and restricting the dates of publication from 01/01/2010 to 12/31/2015, only 4108 entries are returned. Again, the data until this point were incomplete, and the authors extracted data before a “study” was completed and are using the data prematurely. Page 23: “A publication was considered a research article when labelled by Retraction Watch Database as any of: Research Article, Clinical Study, Clinical Study/Research Article, Letter or Letter/Research Article.” The number of retractions was compared with the number of publications per year, using the NSF counts (ref 36), but the NSF excludes letters in their counts: “The articles exclude editorials, errata, letters, and other material that do not present or discuss scientific data, theories, methods, apparatuses, or experiments. The articles also exclude working papers, which are not generally peer reviewed.” Did the authors confirm that the retractions associated with “Letters” would have been included in the NSF counts? Page 23: “In 37/3,972 research articles, the reported date for the original article was paradoxically later than the reported date for its retraction - we excluded these articles when reporting on the duration between publication of the original and its retraction.” Why did the authors not just attempt to locate the correct dates and then include the articles? 
Did they consider contacting the creators of the database to alert them to the pre-launch errors? Page 23: “Altmetric indicates that articles with an AAS > 20 are thought to roughly represent the top 5% of the literature” Citation? Page 26: “we automatically identified a maximum of 5 random unretracted full articles from the same issue and journal. We indicate a “maximum of 5” as we could not always find 5 unretracted articles in the same issue.” The concept of randomness is fine - except when the selections for control are so disparate. Perhaps the authors could separate journal issues into groups of comparable sizes, then choose random samples from each. Page 25: “we examined all tweets gathered by Altmetric for all popular articles retracted within a year since we downloaded our Altmetric data.” As Altmetric uses multiple sources (blogs, tweets, various main and social media sources), using tweets as an indication of the positive or negative measure of attention seems a bit narrow. Users of Twitter is a very biased group in itself (e.g. https://www.pewresearch.org/internet/2019/04/24/sizing-up-twitter-users/), may actually not be the widest used social media commenting platform, (https://blog.hootsuite.com/twitter-demographics/), and may not be appropriately comparable to the measure of attention an article is receiving across main media outlets or filed-related blogs. Did the authors cross-reference their Twitter findings against other platforms and genres? Page 27: “for 2,944/10,370 (14.5%) records the database did not offer any of the two and did not offer any other unique identifier” The database does not issue these numbers -- PubMed and CrossRef do, when publishers apply for them -- and would only have them if they were available. 
Perhaps change the language to “No PMID or DOI were available for 2,944/10,370 (14.5%) records in the database, which relies on these two external and common identifiers.”

Reviewer #2: This was an interesting work examining the Altmetric Attention Score for retracted papers both before and after retraction. The authors chose 2010-2015 works from RetractionWatch and gathered data from Crossref, PubMed and Altmetric.com to supplement information. My one small concern that the authors should address is that Altmetric.com only started tracking attention in October 2011. Thus, articles published before October 2011 could have misleading AAS values, and this needs to be addressed in the paper. The statistics and supplemental information seem appropriate, but I am not an expert on statistics and would defer to other peer reviewers with more knowledge in this area to determine whether other measures could or should have been used. Overall, I found the work to be well written and logical. The information visualizations seem appropriate and paint the picture the authors are telling. I would consider focusing a bit more on how positive/negative tweets were analyzed; was a sentiment analysis performed, and how was it performed? Or was it just based on the authors' impressions? More clarity here, please. This is of interest because it suggests that Twitter users were aware of the retraction and posted messages to help others become aware. Were these tweets warnings to others? Did they call out the spread of misinformation? There are plenty of articles performing sentiment analysis on tweets, and the authors should perform a quick review and specify what is applicable to tweets and what is not (e.g., http://ucrel.lancs.ac.uk/crs/attachments/UCRELCRS-2013-05-16-Thelwall-Slides.pdf).

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.
If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Ivan Oransky and Alison Abritis

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

23 Jan 2021 We sincerely thank the reviewers for their thorough and insightful feedback. All of our responses to their comments may be found within the "Response to reviewers" document. Submitted filename: response-to-reviewers.docx Click here for additional data file.

16 Feb 2021 PONE-D-20-17634R1 Media and social media attention to retracted articles according to Altmetric PLOS ONE Dear Dr. Ioannidis, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
Please submit your revised manuscript by Apr 02 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards, Nikolaos Pandis Academic Editor PLOS ONE

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments: Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: No ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. 
Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: We appreciate the authors' responsiveness to previous comments. We only have a few minor comments to add:

1. In Methods, the authors note: “we extracted the total number of peer-reviewed documents”, and in the following paragraph “peer-reviewed” was added to the statement: “All research articles on the Retraction Watch Database that were originally published between 2010-2015...” The authors further added: “Preprints and dissertations were not included because they are not peer-reviewed publications.” Did the authors confirm that all the articles in their sample were indeed from peer-reviewed journals, as there is no filtering requirement for peer-reviewed journals for inclusion in the database?

2. We previously commented about a paragraph from Page 5 in the prior manuscript: “Page 5: “authors of guidelines and meta-analyses seem to very rarely update their work to remove retracted articles [10,11]” Citation 10 is a review (not research) article – it would be a secondary source at best, but it does not discuss guidelines or meta-analyses. In addition, while an author may request revisions, corrections or otherwise, a journal/publisher may refuse to allow them, so perhaps taking “authors” out of the equation and simply saying these are very rarely updated would be more accurate.” There are no author comments regarding this, although the citation is now changed to “11,12.” Again, citation (now) 11 is a review article and does not discuss guidelines and meta-analyses in particular. And we reiterate that authors may have little control over the updating of their work.
So again – perhaps modifying the reference choice and changing the language to remove the perceived burden of updating being solely that of the authors would be more appropriate.

3. The data availability section for the manuscript acceptance says “yes - all data are fully available without restriction.” However, elsewhere the authors note (correctly) that data from the Retraction Watch Database are available from the Center For Scientific Integrity. We would suggest ensuring that these responses are consistent.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Ivan Oransky and Alison Abritis
28 Feb 2021 All responses have been included in the "Response to Reviewers" document. Submitted filename: response-to-reviewers.docx Click here for additional data file.

5 Apr 2021 Media and social media attention to retracted articles according to Altmetric PONE-D-20-17634R2 Dear Dr. Ioannidis, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards, Pablo Dorta-González, Ph.D. Academic Editor PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments: Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: No ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. 
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: We thank the authors for responding thoughtfully and comprehensively to our comments, and we recommend acceptance. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: Yes: Ivan Oransky and Alison Abritis 9 Mar 2021 PONE-D-20-17634R2 Media and social media attention to retracted articles according to Altmetric Dear Dr. Ioannidis: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. 
Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Nikolaos Pandis Academic Editor PLOS ONE
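Reviewer #1's methodological suggestion above (separate journal issues into groups of comparable sizes, then choose random samples from each, rather than taking up to 5 controls per issue regardless of issue size) could be sketched as follows. This is a minimal illustration only: the function names, the `issues` input structure, and the size bins are hypothetical, not the authors' actual pipeline.

```python
import random

def sample_controls(issue_articles, retracted_doi, k=5, seed=42):
    """Randomly pick up to k unretracted control DOIs from one journal issue.

    Mirrors the paper's "maximum of 5" rule: if an issue holds fewer
    than k unretracted articles, the control set is simply smaller.
    """
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    candidates = [doi for doi in issue_articles if doi != retracted_doi]
    return rng.sample(candidates, min(k, len(candidates)))

def stratify_by_size(issues, bins=((0, 10), (10, 30), (30, float("inf")))):
    """Group issues into strata of comparable size, per the reviewer's
    suggestion, so controls are drawn from comparably sized issues.

    issues: list of dicts like {"articles": [doi, ...]} (hypothetical).
    Returns {(lo, hi): [issue, ...]} keyed by the size bin.
    """
    strata = {b: [] for b in bins}
    for issue in issues:
        n = len(issue["articles"])
        for lo, hi in bins:
            if lo <= n < hi:
                strata[(lo, hi)].append(issue)
                break
    return strata
```

Sampling within each stratum (instead of across all issues at once) keeps the retracted article and its matched controls comparable not only in journal and issue but also in how many candidate controls were available.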
  18 in total

1.  Reporting of article retractions in bibliographic databases and online journals.

Authors:  Kath Wright; Catriona McDaid
Journal:  J Med Libr Assoc       Date:  2011-04

2.  Why Science Is Not Necessarily Self-Correcting.

Authors:  John P A Ioannidis
Journal:  Perspect Psychol Sci       Date:  2012-11

3.  Opinion: Medical misinformation in the era of Google: Computational approaches to a pervasive problem.

Authors:  Scott R Granter; David J Papke
Journal:  Proc Natl Acad Sci U S A       Date:  2018-06-19       Impact factor: 11.205

4.  Can branding improve school lunches?

Authors:  Brian Wansink; David R Just; Collin R Payne
Journal:  Arch Pediatr Adolesc Med       Date:  2012-10

5.  The persistence of error: a study of retracted articles on the Internet and in personal libraries.

Authors:  Philip M Davis
Journal:  J Med Libr Assoc       Date:  2012-07

6.  Opinion: Reproducibility failures are essential to scientific inquiry.

Authors:  A David Redish; Erich Kummerfeld; Rebecca Lea Morris; Alan C Love
Journal:  Proc Natl Acad Sci U S A       Date:  2018-05-15       Impact factor: 11.205

7.  Visibility of retractions: a cross-sectional one-year study.

Authors:  Evelyne Decullier; Laure Huot; Géraldine Samson; Hervé Maisonneuve
Journal:  BMC Res Notes       Date:  2013-06-19

8.  What studies of retractions tell us.

Authors:  Adam Marcus; Ivan Oransky
Journal:  J Microbiol Biol Educ       Date:  2014-12-15

9.  Fate of articles that warranted retraction due to ethical concerns: a descriptive cross-sectional study.

Authors:  Nadia Elia; Elizabeth Wager; Martin R Tramèr
Journal:  PLoS One       Date:  2014-01-22       Impact factor: 3.240

10.  Post retraction citations in context: a case study.

Authors:  Judit Bar-Ilan; Gali Halevi
Journal:  Scientometrics       Date:  2017-03-03       Impact factor: 3.238

  7 in total

Review 1.  Threats to scholarly research integrity arising from paper mills: a rapid scoping review.

Authors:  Iván Pérez-Neri; Carlos Pineda; Hugo Sandoval
Journal:  Clin Rheumatol       Date:  2022-05-06       Impact factor: 2.980

2.  Dynamics of cross-platform attention to retracted papers.

Authors:  Hao Peng; Daniel M Romero; Emőke-Ágnes Horvát
Journal:  Proc Natl Acad Sci U S A       Date:  2022-06-14       Impact factor: 12.779

3.  Continued Visibility of COVID-19 Article Removals.

Authors:  Christopher J Peterson; Caleb Anderson; Kenneth Nugent
Journal:  South Med J       Date:  2022-06       Impact factor: 0.810

4.  Improving the Reliability of Literature Reviews: Detection of Retracted Articles through Academic Search Engines.

Authors:  Elena Pastor-Ramón; Ivan Herrera-Peco; Oskia Agirre; María García-Puente; José María Morán
Journal:  Eur J Investig Health Psychol Educ       Date:  2022-05-04

5.  Bibliometric and Altmetric Analysis of Retracted Articles on COVID-19.

Authors:  Hiba Khan; Prakash Gupta; Olena Zimba; Latika Gupta
Journal:  J Korean Med Sci       Date:  2022-02-14       Impact factor: 2.153

6.  Beliefs and misperceptions about naloxone and overdose among U.S. laypersons: a cross-sectional study.

Authors:  Jon Agley; Yunyu Xiao; Lori Eldridge; Beth Meyerson; Lilian Golzarri-Arroyo
Journal:  BMC Public Health       Date:  2022-05-10       Impact factor: 4.135

7.  Identifying science in the news: An assessment of the precision and recall of Altmetric.com news mention data.

Authors:  Alice Fleerackers; Lise Nehring; Lauren A Maggio; Asura Enkhbayar; Laura Moorhead; Juan Pablo Alperin
Journal:  Scientometrics       Date:  2022-10-01       Impact factor: 3.801

