Literature DB >> 35657791

Who tweets climate change papers? Investigating publics of research through users' descriptions.

Rémi Toupin1, Florence Millerand2, Vincent Larivière3,4.   

Abstract

As social issues like climate change become increasingly salient, digital traces left by scholarly documents can be used to assess their reach outside of academia. Our research examines who shared climate change research papers on Twitter by looking at the expressions used in profile descriptions. We categorized users into eight categories (academia, communication, political, professional, personal, organization, bots and publishers), each associated with specific expressions. Results indicate how diverse publics may be represented in the communication of scholarly documents on Twitter. Supplementing our word detection analysis with qualitative assessments of the results, we highlight how the presence of single or multiple categorizations in textual Twitter descriptions provides evidence of the publics of research in specific contexts. Our results show more substantial communication by academics and organizations for papers published in 2016, whereas the general public participated comparatively more in 2015. Overall, there is significant participation of publics outside of academia in the communication of climate change research articles on Twitter, although the extent of this participation varies between individual papers. This means that papers circulate in specific communities, which need to be assessed to understand the reach of research on social media. Furthermore, the flexibility of our method provides means for research assessment that consider the contextuality and plurality of publics involved on Twitter.

Year:  2022        PMID: 35657791      PMCID: PMC9165795          DOI: 10.1371/journal.pone.0268999

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

In recent years, Twitter became a key platform for the dissemination of research [1]. As the traces left by scholarly documents in tweets may reflect communication beyond traditional citations and into the public sphere, they were heralded as potential indicators of the so-called "societal impact of research", along with other social media metrics [2, 3]. However, the strict focus on event counts (i.e., number of tweets, number of retweets) was confronted with a lack of theoretical grounding as to what these traces really measure. Scholars thus sought to investigate the contexts in which research circulates on Twitter, understood as the dimensions that give meaning to indicators [4]. Challenges remain in capturing these contextual elements, as digital scholarly communication studies need to move between the scales of individual documents and aggregated corpora, where contexts may shift [5]. Methodological frameworks also need to account for the fact that information provided on Twitter is user-generated and not directly organized for research purposes [6-8]. Meanwhile, discussions about issues like climate change, public health, and artificial intelligence have moved to social media, highlighting the political ramifications of research [9-12]. As such, studies about science communication, policy and evaluation increasingly aim to understand the reach of scholarly outputs in the public sphere. Our study focuses on the case of climate change as representative of environmental challenges. Specifically, climate change communication aims to foster environmental action by influencing decision making and translating new knowledge into everyday practices to limit our ecological footprint [13]. Climate change issues reflect other increasingly urgent matters, such as biodiversity loss, extreme weather events, massive migrations, and scarcer access to basic resources [11, 14].
As discussions about climate change become increasingly salient on Twitter, scholars and other actors in the public diffusion of research have moved to the platform to share relevant knowledge and engage with stakeholders more broadly [11, 15–17]. Because some social media platforms like Twitter make their data accessible to the scholarly community, it is possible to directly examine the resonance of climate change research in the public sphere.

Public conversations of climate change research on Twitter

Reflecting a large scope of topics and issues, a diversity of publics are concerned with climate change [16, 17]. On the one hand, an analysis tracking the release of the IPCC 5th assessment report found that the majority of Twitter engagement came from individual bloggers and concerned citizens, who provided framings alternative to those of decision-makers, journalists, and scientists [17]. On the other hand, scientists discussing climate change on Twitter engage mostly with other scientists, but have been seen to communicate their research to decision-makers, journalists and the general public as well [16]. Typically, scientific knowledge production and communication begins with scholars and research institutions from all disciplines, and syntheses are produced for policymakers [14]. Journalists, media outlets, scientists and other communication professionals then play a role in communicating and framing issues in the public sphere [18, 19]. Civil society, concerned citizens, health or environment professionals, as well as political organizations and advocates, also engage with climate change for personal, political or professional motivations [20, 21]. As all these actors contribute differently to discussions about climate change, the visibility of research documents on social media means that communicating related issues is no longer the prerogative of scientists and journalists alone [17]. The reach of climate change research may thus be modulated by the influence, background and motivations of those who share it on Twitter. Twitter plays a significant role in informational communication as well as political discussion and action, especially for issues like climate change [16, 17, 22–24]. Within these conversations, scholars discuss relevant research with colleagues, foster new collaborations, engage in political actions, share their work more broadly, or keep in touch with the latest news [25].
This increased participation by researchers is linked with a significant volume of scholarly documents being shared, with variations across disciplines, publication channels and cultural contexts [1, 26]. While patterns of scholarly communication on Twitter remain to be documented in detail, one key promise is that it has democratized access to research by allowing it to circulate more broadly and outside of academia. The digital traces left by scholarly documents have been heralded as potential indicators of the "societal impact of research", or altmetrics [27]. Previous research focused on the analysis of traces on Twitter because 1) data collection is easier than for most other platforms; 2) scholarly output is readily shareable through the inclusion of links in tweets; and 3) tweets are available to non-academic publics [28, 29]. However, it remains unclear what these traces reflect. On the one hand, there are multiple understandings of what is called the "societal impact of research" [3, 29–31]. On the other hand, as the communication of research on Twitter involves mediation processes and does not serve a single clear objective, Twitter scholarly metrics do not reflect a clear phenomenon [1]. The initial focus on counts has now shifted to more comprehensive studies of the contexts in which documents are shared and what they mean for the communication of research outside of academia [4, 32]. Our study aims to further describe these contexts by focusing on the publics of climate change research as understood through their Twitter profile descriptions. Climate change research topics range from the physical processes of climate change to its direct and indirect repercussions on communities and the environment, as well as means of mitigation, adaptation and communication to counter the ongoing process [33, 34].
Two events have been at the core of climate change research and policy discussions in recent years: the publication of the IPCC 5th Assessment Report (IPCC 5AR) in 2013 and the COP 21 leading to the Paris Agreement in 2015 [17, 35–37]. In one instance, non-elite actors (individual users who are not affiliated with a specific media outlet, nonprofit, or scientific organization, such as bloggers, activists, or the general public) were able to draw attention in discussing and framing the IPCC 5AR Working Group 1 contribution, as indicated by their presence (35%) among the hundred most mentioned users [17]. Scholars and research professionals contributed to both discussions, the IPCC 5AR and COP 21 events, highlighting their interest in different topics than the general public and indicating a shift in their public communication patterns. As such, scholars were seen to take a hybrid role of communicator and advocate, while mostly communicating with journalists and other scholars rather than directly with policymakers and the general public [16]. Conversations on Twitter build on a series of affordances, such as hashtags (#), mentions (@) or links to external documents, as well as metadata that allow for the characterization of every tweet (e.g., time of publication, number of likes) and user (profile description, picture, number of followers, etc.) [16, 38, 39]. Users usually engage with accounts they are familiar with or which post content relevant to them [40, 41]. Scholarly communication on Twitter relies on the possibility of adding links to external documents to make research outputs visible [39]. Authors and publishers may tweet links to their papers to promote them and eventually foster engagement that will benefit them, such as a higher number of citations, whereas scholars may tweet or retweet documents they find relevant [3].
Altogether, users who actively engage with a paper may do so after being prompted by other accounts, whether because a publication was relevant, funny or controversial [42, 43]. Influential users, such as communicators or celebrities, may engage their network more easily, while communities may form around specific documents or topics [4, 44].

Investigating and representing users in public communication of research on Twitter

On Twitter, the abundance of informational content has fostered engagement by political users, communication professionals and organizations, as well as a representation of actors of the knowledge economy [16, 45]. An account may represent an individual, a project, an organization, or a feed of content [46, 47]. The demographics of Twitter reflect those of the upper-middle class, mostly white young professionals, although these demographics change as we scale down to specific communities [48]. As for scholars, doctoral students and young researchers tend to be at the forefront, while some disciplines, mostly those dealing with social issues such as health sciences, economics or the social sciences, are more visible [3, 49]. Our study focuses on the analysis of words and expressions in Twitter user bios as a proxy for who tweets climate change research papers, since assessing who shares scholarly documents at a larger scale entails reducing identity to specific markers [28, 46]. We hypothesize that the expressions employed by users to describe themselves act as identity markers through which they engage with other users [50]. Building on previous work on the identification of users sharing scholarly documents on Twitter, we identified eight relevant categories of markers (academia, communication, political, professional, personal, organizations, publishers, and bots) to classify who tweets climate change research papers [5, 16, 28, 51]. Methods that capture who engages with scholarly documents on Twitter usually rely on automatic textual analysis of Twitter bios [16, 28, 51–53] or manual coding [41, 54]. Altmetric also identifies users in its database in four categories, distinguishing between researchers, science communicators, practitioners and the general public [55]. However, this approach has limitations, as it treats the "general public" as all users who do not match any of the first three categories.
Other approaches relied on characterizing the social networks through which documents flow [16, 32, 54, 56]. Usually employed in conjunction with textual analysis of Twitter profile descriptions, these methods aim to understand how discussions or communities build up around scholarly documents. More direct approaches rely on matching bibliometric information with Twitter data to capture the scholars involved on Twitter [57], as well as on the use of Twitter lists [51, 58]. Our method builds on these by investigating the categories of users sharing climate change research papers on Twitter through specific expressions in Twitter profile descriptions. As such, we account for the multiple identity markers used in a description to further assess the complexity with which someone may engage with research documents. We did not take into account the order in which these markers appear, as we wanted a general overview of how users present themselves without passing judgment on which identities are more important.

Purpose of the study

A profile description is often the primary information through which we assess someone else's identity on Twitter [28]. As such, it is useful for assessing their inclination toward specific topics. Twitter bios are also a widely used proxy in informetrics studies to determine who the users engaging in scholarly communication on Twitter are [16, 28, 51–53]. Our study therefore examines a categorization of users based on specific keywords in Twitter descriptions. Our main objective is to examine the extent to which research papers about climate change permeate outside of academia by identifying the specific categories of users sharing said papers. As such, we focus our analysis on general categories of "markers" for users who may have an interest in such research. We classify the descriptions of accounts that shared at least one link to a climate change research paper by linking them to the expressions collated for each category. Thus, we aim to assess the reach of scholarly documents across specific categories of users involved in the public communication of climate change research on Twitter: RQ: Who shares climate change research papers outside of academia? Methodologically, word detection highlights which expressions users provide in their Twitter descriptions. It contributes to our understanding of scholarly communication on social media by assessing how different identity markers may be used within single bios and the communities linked to these expressions. However, it elicits methodological discussions, as our approach does not aim to attach unique identity markers to accounts, but rather highlights the multiple ways through which users may express themselves on Twitter. Our paper addresses these considerations by investigating who shares climate change research articles through a word detection method applied to Twitter bios.
Thus, we aim to provide insights on how users present themselves to others in climate change research discussions, especially outside of academia.

Material and methods

Data collection and Twitter metrics

For this study, we built a dataset of research articles published in 2015 and 2016 (n = 4 730) indexed in Clarivate Analytics' Web of Science (WoS), accessed through the internal database of the Observatoire des sciences et des technologies (OST). We then collected tweets containing a link to these papers, as well as information about the users who published these tweets, by cross-referencing the information gathered from WoS with the database of Altmetric (a division of Digital Science tracking scholarly documents on social media) via the Digital Object Identifier (DOI). We accessed the Altmetric database through an October 2017 copy provided to the OST. We used data from the WoS database as it indexes a large number of research documents from several fields, along with extensive bibliometric information about these documents [59]. However, its attention is directed to a specific set of scientific literature published in English [60]. To select relevant papers about climate change research, we focused on those published in 2015 and 2016 that included the keywords "climate change", "global warming" or "IPCC" in the title and for which a DOI, a unique identifier referencing online documents, was provided. We chose these years because the Paris Agreement, approved on 12 December 2015, marks a critical juncture for science communication and climate change engagement [35, 37]. Since Altmetric information was provided through a data dump in October 2017, these were also the latest years for which we had full coverage of research articles at the time of data collection in September 2018. We focused on the title as it is direct metadata for assessing a paper's relevance [39]. It is also the information that appears most often in tweets sharing a link to a paper.
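The selection and linkage steps described above can be sketched in a few lines. The following is a minimal Python mock-up with invented records (the study itself queried the WoS and Altmetric databases in SQL; none of the DOIs, titles or IDs below are real):

```python
# Invented WoS-style records; the real data came from the OST copy of WoS.
wos = [
    {"doi": "10.1/a", "title": "Climate change and crop yields", "year": 2015},
    {"doi": "10.1/b", "title": "Ocean acidification trends", "year": 2016},
    {"doi": "10.1/c", "title": "IPCC scenarios revisited", "year": 2016},
]

# Keep papers with a DOI and a title matching the three search keywords.
keywords = ("climate change", "global warming", "ipcc")
corpus = {p["doi"]: p for p in wos
          if p.get("doi") and any(k in p["title"].lower() for k in keywords)}

# Invented Altmetric-style tweet records, referencing papers by DOI.
tweets = [
    {"doi": "10.1/a", "tweet_id": 101, "author_id": "u1"},
    {"doi": "10.1/a", "tweet_id": 102, "author_id": "u2"},
    {"doi": "10.1/b", "tweet_id": 103, "author_id": "u1"},
    {"doi": "10.1/c", "tweet_id": 104, "author_id": "u3"},
]

# Inner join on DOI: only tweets pointing at papers in the corpus are kept.
linked = [t for t in tweets if t["doi"] in corpus]
print(len(corpus), len(linked))
```

Keeping the paper set and the tweet set separate until this join is also what makes coverage-style metrics possible later: untweeted papers remain in the corpus denominator.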
This query does not retrieve all publications in climate change research; rather, we wanted to collect a set of papers directly related to climate change. As such, our aim is not to provide an extensive analysis of the field, as is done elsewhere [33, 61]. The collected paper data include the DOI, title, abstract, name of first author, journal of publication, NSF discipline and specialty, number of pages, number of references, number of authors, number of citations, number of tweets, number of accounts, time of first and last tweet, and tweetspan. Collected tweet metadata include the paper DOI, tweet author ID, tweet ID, tweet content, time of publication and retweet order. User information includes, at the time of the tweet, the author ID, author name, account description, account URL, geographic stamp, number of followers, number of papers tweeted, number of tweets, time of first and last tweet, and tweetspan. Queries to the WoS and Altmetric databases were made in SQL and we exported the results for further access. Data collection complied with the terms and conditions of WoS, Altmetric and Twitter through data-providing agreements with the OST. Following recommendations from previous studies, we computed several Twitter metrics to further describe the tweet activity in our dataset [1, 4]. Computed metrics include the number of papers tweeted, number of tweets, Twitter coverage (i.e., % of tweeted papers), Twitter density (i.e., number of tweets per paper) and intensity (i.e., number of tweets per tweeted paper), number of users, user density (i.e., number of users per paper) and intensity (i.e., number of users per tweeted paper), number of papers retweeted, retweet coverage (i.e., % of retweeted papers), share of retweets (i.e., % of tweets that are retweets), retweet density (i.e., number of retweets per paper), and retweet intensity (i.e., number of retweets per tweeted document).
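As a worked example of these definitions, the sketch below computes a few of the metrics on toy numbers (the figures are invented, not the study's data; the authors computed the real values in R with the tidyverse):

```python
from collections import defaultdict

# Illustrative tweet records: (doi, user, is_retweet).
tweets = [
    ("10.1/a", "u1", False), ("10.1/a", "u2", True),
    ("10.1/c", "u1", False), ("10.1/c", "u3", True), ("10.1/c", "u3", True),
]
n_papers = 4  # papers in the WoS set, tweeted or not

per_paper = defaultdict(list)
for doi, user, is_rt in tweets:
    per_paper[doi].append((user, is_rt))

n_tweeted = len(per_paper)                          # papers tweeted at least once
n_tweets = len(tweets)
n_retweets = sum(is_rt for _, _, is_rt in tweets)
n_retweeted = sum(any(rt for _, rt in recs) for recs in per_paper.values())

coverage = n_tweeted / n_papers     # Twitter coverage: share of papers tweeted
density = n_tweets / n_papers       # tweets per paper (all papers)
intensity = n_tweets / n_tweeted    # tweets per *tweeted* paper
rt_share = n_retweets / n_tweets    # share of tweets that are retweets
rt_coverage = n_retweeted / n_papers
print(coverage, density, intensity, rt_share, rt_coverage)
```

The user-based metrics (user density and intensity) follow the same pattern, with unique users per paper in place of tweet counts.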
These metrics further characterize our dataset by providing an assessment of Twitter engagement across scholarly and social media objects. We computed all metrics by loading our dataset into an R data frame and handling it with the tidyverse package as well as basic calculation functions [62]. Plots were created using the ggplot2 package [63]. Our dataset includes 2 376 papers published in 2015 and 2 354 in 2016. The papers were published in 1 062 journals, of which 46 published more than 20 papers. The journals Climatic Change (n = 178), PLOS ONE (n = 139), Global Change Biology (n = 97), Regional Environmental Change (n = 70), Scientific Reports (n = 63), Environmental Research Letters (n = 55), Journal of Climate (n = 54) and Nature Climate Change (n = 51) each published more than 50 articles. This illustrates how climate change research is a broad field encompassing various disciplines, with diverse publication venues, whether specialized or more general journals. We collected information from 41 108 tweets (among which 23 831 were retweets) sent by 21 844 unique accounts linking to 2 628 papers. The 56% Twitter coverage of our dataset is comparable to that of medical and health research, and the average of more than eight users sharing each tweeted paper indicates significant engagement with the papers gathered in our study [1]. Among tweeted papers, 1 961 were shared by at least two users, 667 by more than ten, 338 by more than twenty, 129 by more than fifty, and 47 by more than a hundred users. Also, 1 319 and 1 308 papers published in 2015 and 2016, respectively, were shared at least once, accounting for 21 985 and 19 349 tweets by 12 815 and 11 461 unique users. Tweeted papers were published in 646 journals, 16 of which published more than 20 papers.
Climatic Change, PLOS ONE and Global Change Biology published most tweeted papers, whereas Nature Climate Change, Science and PNAS account for the three journals publishing the most papers tweeted by more than a hundred users (Fig 1).
Fig 1

Distribution of climate change research tweeted papers in scientific journals.

Depicted in the above histogram are the ten journals that published the most tweeted papers in our dataset, and below are the ten journals that published the most papers tweeted by more than 100 users.


Textual analysis of Twitter profile descriptions

To understand how climate change research permeates outside of academia, we focused on the textual analysis of Twitter profile descriptions. Specifically, we looked at the expressions that indicate how users present themselves on Twitter [28]. As such, our analysis does not aim to provide an exact mapping of the scientists or journalists on Twitter [57, 64]. Rather, we understand expressions in Twitter descriptions as a proxy to investigate the potential reach of scholarly documents outside of academia. We examined Twitter descriptions using a dictionary of expressions for eight relevant categories of identity markers, built on previous research [5, 16, 28, 51] (Table 1). We built a first version of the codebook by manually coding a sample of a thousand descriptions, and then improved it through several iterations of the analysis: running the code, comparing with our manual coding, and modifying the codebook accordingly. We removed all accounts with no description (NULL), converted the text to lowercase, and removed numbers, URLs, emojis, stopwords and punctuation signs, except for the hash (#) and at (@) signs, from all remaining descriptions. Our input data frame featured one user per paper per row, and we removed duplicates as we filtered down our analysis. We assigned expressions to categories using the tidyverse packages, matching them with corresponding words in the descriptions [62]. Our method may assign more than one category to a description, and thus considers the possibility for someone to provide multiple identity markers. We completed our analysis by looking at the table of descriptions sorted into the different categories to assess the potential representations of users mobilizing specific words. We organized our specific observations by looking at the coverage of these categories in 10 highly tweeted papers.
Thus, we describe how well these categories may be investigated in studies about the communication of research on Twitter, specifically climate change research. We assigned a category to 70% of Twitter bios in this study. Unidentified descriptions include those not written in English or French (the languages in which we are sufficiently fluent) and those not specific enough to be matched to one of our categories.
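A stripped-down Python version of this matching procedure might look as follows (the actual analysis was done in R with the tidyverse; the toy codebook below holds only a handful of the real expressions, whose full lists are on figshare):

```python
import re

# Toy category dictionary - illustrative keywords only.
codebook = {
    "academia": {"researcher", "professor", "phd", "postdoc"},
    "personal": {"mother", "father", "yoga", "cat"},
    "communication": {"journalist", "writer", "podcast"},
    "organization": {"university", "institute", "association"},
}

def clean(bio: str) -> list[str]:
    """Lowercase, strip URLs, numbers and punctuation (keeping # and @),
    then tokenize. The full pipeline also drops emojis and stopwords."""
    bio = re.sub(r"https?://\S+", " ", bio.lower())
    bio = re.sub(r"[^a-z#@\s]", " ", bio)
    return bio.split()

def categorize(bio: str) -> set[str]:
    """Return every category whose keywords appear in the bio; a single
    description may therefore carry multiple identity markers."""
    tokens = set(clean(bio))
    return {cat for cat, words in codebook.items() if tokens & words}

print(categorize("Professor of ecology & mother of 2. https://example.org"))
```

Because categorize() returns a set, a bio such as the one above receives both the academia and personal markers, which is the multiple-assignment behaviour discussed in the text.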
Table 1

Categories and matching expressions used for textual analysis of Twitter descriptions.

Category | Examples of specific expressions | Example of Twitter description
Academia | researcher, professor, phd, biologist, postdoc | Post-doctoral coastal scientist / engineer @unisouthampton, UK. Researches #sealevelrise #impacts #adaptation #islands #deltas. Also likes #cows.
Personal | yoga, music, father, mother, cat | Curious, Mother of two, Retired.
Professional | physician, manager, engineer, strategist, veterinarian | Environmental attorney. Climate change terrifies me.
Political | advocate, policy, councillor, social justice, #standupforscience | Mayor of @CityKitchener. Community promoter of Kitchener & @WRAwesome-ness. Past Prez of @FCM_online. Treasurer of @uclg_org. Motto: Live ~ Love ~ Laugh
Communication | journalist, writer, author, podcast, youtuber | Journaliste, directrice de la rédaction de @Sante_Magazine. Mes tweets n'engagent que moi. Compte perso [Journalist, editor-in-chief of @Sante_Magazine. My tweets are my own. Personal account]
Organization | university, institute, media, association, research centre | Updates from AAAS, the American Association for the Advancement of Science. Open minds. Join us. http://tinyurl.com/JoinAAAS
Publishers | Wiley, Sage, Elsevier, issn, journal | Published by Oxford University Press, AoB PLANTS features peer-reviewed articles on all aspects of environmental and evolutionary plant biology.
Bots | bots, RSS, paper alerts, retweets from, daily updates | A Bots tweeting new research from the Canadian Government (NRC, AAFC, EC, DFO & NRCan). Not affiliated the Government of Canada
Unassigned | | An unknown particle in this Universe

The above table presents the categories used in our study, with a selection of five corresponding expressions and an example Twitter profile description for each. The complete list of expressions can be found at https://doi.org/10.6084/m9.figshare.8236598.v3 and https://github.com/toupinr/twitterprofiles/blob/master/code_publics/20210617_PrepPublicsPropre.R [65].


Results

General results

Our final dataset included 19 783 unique Twitter accounts. We assigned at least one category to 69.9% of the accounts (n = 13 821) by using our code to detect expressions in Twitter bios; 36.2% (n = 7 155) matched only one category and 33.7% (n = 6 666) matched multiple categories (Table 2). Academia is the largest category across all papers, with 5 545 users, representing 28% of our dataset. Personal assignations represent a quarter of the dataset. Publishers and bots are the least visible categories, both under 2%. However, accounts posting automated content do not usually identify as such. Therefore, the low representation of automated accounts is mostly an artefact of the method used in our study; their actual share is probably much higher [24]. The number of unique users was slightly higher in 2015 (n = 11 745) than in 2016 (n = 10 467). We also notice a higher uptake by the Personal, Professional, Political and Communication publics in 2015, potentially indicating higher engagement outside of academia. Academia, Organizations, Publishers and Bots are proportionally more represented in 2016. Academia and Organizations engaged with the most papers in our dataset, whereas users assigned to Political, Publishers and Bots engaged with the fewest. Bots and Academia have the lowest median number of followers, Communication and Publishers the highest. This may indicate that communicators and publishers tend to fulfill a role of sharing research with a larger network of people than other groups of users. These trends are similar for 2015 and 2016, although the numbers of followers of users tweeting the papers in our dataset were higher in 2015.
Table 2

Summary of results across all papers.

 | All papers | | | | 2015 | | | | 2016 | | |
Type of publics | Total n of users | % w unique assignations | N of papers | Median n of followers | Total n of users | % w unique assignations | N of papers | Median n of followers | Total n of users | % w unique assignations | N of papers | Median n of followers
Academia | 5 545 | 32.2% | 1 613 | 502 | 3 193 | 31.1% | 816 | 603 | 3 343 | 34.4% | 797 | 504
Personal | 4 939 | 32.0% | 1 460 | 655 | 3 071 | 32.2% | 760 | 757 | 2 387 | 29.6% | 700 | 636
Professional | 2 963 | 24.0% | 1 045 | 819 | 1 782 | 22.1% | 527 | 950 | 1 506 | 24.4% | 518 | 790
Political | 2 564 | 28.0% | 921 | 863 | 1 651 | 27.0% | 469 | 990 | 1 240 | 28.0% | 452 | 833
Communication | 2 237 | 26.2% | 1 020 | 1 128 | 1 393 | 25.9% | 505 | 1 245 | 1 140 | 25.9% | 515 | 1 111
Organization | 4 201 | 37.6% | 1 656 | 726 | 2 462 | 38.1% | 836 | 870 | 2 378 | 37.0% | 820 | 701
Publishers | 357 | 38.1% | 650 | 1 499 | 213 | 38.5% | 327 | 1 778 | 217 | 39.2% | 323 | 1 688
Bots | 101 | 61.4% | 466 | 440 | 61 | 65.6% | 202 | 482 | 68 | 38.8% | 264 | 592
Unassigned | 5 962 | – | 1 680 | 632 | 3 457 | – | 859 | 724 | 3 033 | – | 821 | 616
Total | 19 783 | 36.2% | – | – | 11 745 | 36.1% | – | – | 10 467 | 37.0% | – | –

The above table presents the absolute number of profiles assigned to each category (Total n of users), the % of profiles with only one assignation for each category (% w unique assignations), the number of papers tweeted by at least one user per category (N of papers), and the median number of followers of the users assigned to each category (Median n of followers). Results are presented for the whole dataset investigated in this study as well as split between years.

Looking at overlaps between categories, Academia/Personal and Academia/Organization are the most frequent in users' descriptions (Table 3). The large number of overlaps across all categories indicates that assigning users on the basis of unique identity markers significantly reduces the complex ways in which users present themselves to others on Twitter. As such, the complexity of identifying who shares research papers on Twitter may be best documented by looking at multiple categories and in the context of individual papers.
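The unique and overlapping counts of this kind can be derived directly from per-user category sets. A small Python sketch of that tabulation, with invented users (the study's counts were computed in R):

```python
from itertools import combinations
from collections import Counter

# Illustrative per-user category assignments; a bio can match several
# categories at once, so each value is a set.
users = {
    "u1": {"academia"},
    "u2": {"academia", "personal"},
    "u3": {"academia", "organization"},
    "u4": {"personal", "academia"},
    "u5": {"communication"},
}

unique = Counter()   # users matching exactly one category (diagonal cells)
overlap = Counter()  # users matching both categories of a pair
for cats in users.values():
    if len(cats) == 1:
        unique[next(iter(cats))] += 1
    for pair in combinations(sorted(cats), 2):
        overlap[pair] += 1

print(unique["academia"], overlap[("academia", "personal")])
```

Sorting each set before taking pairs ensures that ("academia", "personal") and ("personal", "academia") are counted as the same cell of the overlap matrix.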
Table 3

Number of Twitter bios with unique or multiple categories.

Type of publics | Bots | Publishers | Organization | Communication | Political | Professional | Personal | Academia
Academia | 34 | 8 | 1 528 | 725 | 656 | 870 | 1 628 | 1 786
Personal | 13 | 35 | 709 | 772 | 852 | 1 017 | 1 579 |
Professional | 4 | 27 | 674 | 478 | 554 | 710 | |
Political | 3 | 29 | 474 | 317 | 718 | | |
Communication | 12 | 55 | 318 | 586 | | | |
Organization | 6 | 110 | 1 578 | | | | |
Publishers | 8 | 136 | | | | | |
Bots | 62 | | | | | | |

The above table presents the absolute number of pairwise overlaps in individual Twitter profile descriptions per category. Cells on the diagonal (shown in blue in the original table) present the number of profiles assigned to only one category. Cells at the intersection of two categories present the number of Twitter profiles assigned to both categories.


Publics in highly tweeted papers

We looked at the five most tweeted papers per year to categorize the users sharing popular scholarly documents about climate change research on Twitter (Table 4). Most papers have between 65 and 75% of the users sharing them assigned to at least one category, except the article Oxygen isotope in archaeological bioapatites from India: Implications to climate change and decline of Bronze Age Harappan civilization, at 46.6%. The representation of users assigned to Academia is lower than the dataset mean of 28% for nine of the ten papers. Overall, variations in how categories are represented across articles highlight the different contexts in which individual papers are shared, thus providing a basis to assess how and why they get attention.
Table 4

General results of the word detection analysis.

Title | Journal | Year | N of users | Acad | Perso | Pro | Pol | Comm | Org | Pub | Bots | % unassigned
Total | | | 19 783 | 28.0% | 25.0% | 15.0% | 13.0% | 11.2% | 21.2% | 1.8% | 0.5% | 30.1%
Climate change in the Fertile Crescent and implications of the recent Syrian drought [66] | PNAS | 2015 | 1 760 | 13.9% | 34.7% | 12.2% | 19.1% | 14.8% | 10.5% | 0.7% | 0.2% | 36.1%
The geographical distribution of fossil fuels unused when limiting global warming to 2 degrees C [67] | Nature | 2015 | 1 265 | 23.1% | 27.7% | 19.4% | 23.8% | 13.0% | 20.2% | 0.6% | 0.3% | 25.1%
Accelerating extinction risk from climate change [68] | Science | 2015 | 749 | 20.3% | 31.5% | 12.7% | 18.4% | 12.3% | 13.5% | 0.1% | 0.3% | 34.8%
Health and climate change: policy responses to protect public health [69] | Lancet | 2015 | 481 | 26.2% | 30.4% | 23.1% | 19.8% | 11.4% | 27.4% | 1.2% | 0.2% | 21.8%
Climate change impacts on bumblebees converge across continents [70] | Science | 2015 | 337 | 27.0% | 30.0% | 17.5% | 11.0% | 11.3% | 24.0% | 1.2% | 0.0% | 27.9%
Analysis and valuation of the health and climate change cobenefits of dietary change [71] | PNAS | 2016 | 659 | 25.2% | 30.3% | 17.6% | 20.2% | 12.7% | 16.4% | 1.2% | 0.0% | 29.3%
Oxygen isotope in archaeological bioapatites from India: Implications to climate change and decline of Bronze Age Harappan civilization [72] | Scientific Reports | 2016 | 537 | 10.4% | 21.2% | 15.6% | 8.2% | 9.3% | 5.6% | 0.4% | 1.5% | 53.4%
Global and regional health effects of future food production under climate change: a modelling study [73] | Lancet | 2016 | 347 | 21.6% | 28.8% | 18.2% | 16.1% | 9.2% | 22.2% | 2.0% | 0.0% | 28.8%
Ecological networks are more sensitive to plant than to animal extinction under climate change [74] | Nature Communications | 2016 | 276 | 47.8% | 21.4% | 10.9% | 9.1% | 9.1% | 26.1% | 1.4% | 0.0% | 25.4%
Assessing the Performance of EU Nature Legislation in Protecting Target Bird Species in an Era of Climate Change [75] | Conservation Letters | 2016 | 238 | 26.1% | 32.4% | 18.9% | 13.0% | 7.6% | 22.3% | 0.8% | 0.0% | 26.5%

The above table presents a summary of the results of the word detection analysis on the whole dataset and the 5 most tweeted papers of 2015 and 2016. Columns ranging from Acad to Bots (Acad = Academia; Perso = Personal; Pro = Professional; Pol = Political; Comm = Communication; Org = Organization; Pub = Publishers) represent the percentage of Twitter bios assigned to each category relative to the number of users (N of users). The last column indicates the percentage of Twitter bios not assigned to any category.

The paper Ecological networks are more sensitive to plant than to animal extinction under climate change had 47.8% of the accounts sharing it assigned to Academia, indicating significant engagement by the research community. For some papers, the Twitter profiles of users assigned to Academia overlap substantially with other categories (S1 Table in S1 File). For example, for the paper Health and climate change: policy responses to protect public health, 27.8% of the Academia assignations overlapped with the Professional category. A manual validation of the results indicates that most assignations to Academia indeed represent scholars and researchers, with very few discrepancies. This highlights the potential of word detection to assess the representation of scholars in a dataset of papers shared on Twitter, at least in terms of precision. However, some scholars may not have been categorized as such, depending on the words used in their Twitter bios. Other methods are better suited to assess overall participation by scholars [3, 57, 58], whereas our method situates their participation relative to other groups of users. The proportion of users assigned to the Communication category is higher when more users shared a specific paper.
Some papers show important overlaps between Communication and Academia, such as Ecological networks are more sensitive to plant than to animal extinction under climate change (56%) and Assessing the Performance of EU Nature Legislation in Protecting Target Bird Species in an Era of Climate Change (44.4%) (S2 Table in S1 File). There are also large variations in the Communication and Political overlaps, ranging from 2.6% (Climate change impacts on bumblebees converge across continents) to 23.9% (Accelerating extinction risk from climate change), and in the Communication and Professional overlaps, ranging from 12.5% (Global and regional health effects of future food production under climate change: a modelling study) to 34.2% (Climate change impacts on bumblebees converge across continents). The Personal category has the most overlap with Communication in the most tweeted papers, with no paper below 32% and the highest at 52.5% (Accelerating extinction risk from climate change). This indicates that communicators may rely heavily on personal keywords and expressions to build their perceived identity on Twitter. Assignations to the Political category range from 8.2% (Oxygen isotope in archaeological bioapatites from India: Implications to climate change and decline of Bronze Age Harappan civilization) to 23.9% (The geographical distribution of fossil fuels unused when limiting global warming to 2 degrees C). Papers focusing on sensitive topics (such as The geographical distribution of fossil fuels unused when limiting global warming to 2 degrees C and Health and climate change: policy responses to protect public health) may engage more users with significant political motivations.
Two papers have an elevated overlap between Academia and Political (Ecological networks are more sensitive to plant than to animal extinction under climate change, 36%; Assessing the Performance of EU Nature Legislation in Protecting Target Bird Species in an Era of Climate Change, 32.3%) (S3 Table in S1 File). This may indicate that a significant share of users from the research community also embrace political action when it comes to climate change. Users assigned to the Professional category range from 10.9% (Ecological networks are more sensitive to plant than to animal extinction under climate change) to 23.1% (Health and climate change: policy responses to protect public health). The paper Oxygen isotope in archaeological bioapatites from India: Implications to climate change and decline of Bronze Age Harappan civilization has close to half (45.2%) of its Professional assignations not overlapping with any other category (S4 Table in S1 File). Two papers, Global and regional health effects of future food production under climate change: a modelling study and Assessing the Performance of EU Nature Legislation in Protecting Target Bird Species in an Era of Climate Change, have a low share of overlap with Communication, at 6.3% and 8.9% respectively, although one of them has a large share of overlap with the Personal category, at 57.8%. Overall, the Professional and Personal overlap is high across all the most tweeted papers, with the lowest at 29.8%. The largest share of Personal assignations belongs to the most tweeted paper, Climate change in the Fertile Crescent and implications of the recent Syrian drought, at 34.7%, whereas two papers (Oxygen isotope in archaeological bioapatites from India: Implications to climate change and decline of Bronze Age Harappan civilization at 21.2% and Ecological networks are more sensitive to plant than to animal extinction under climate change at 21.4%) have the lowest shares.
These two papers also represent both extremes of the range of unique assignations, at 47.4% and 18.6% respectively of accounts assigned to the Personal category (S5 Table in S1 File). The second largest share of unique assignations belongs to the paper Climate change in the Fertile Crescent and implications of the recent Syrian drought, at 42.9%. The paper Ecological networks are more sensitive to plant than to animal extinction under climate change shows, here again, a high overlap of Personal assignations with the Academia category, at 59.3%. Finally, the paper Assessing the Performance of EU Nature Legislation in Protecting Target Bird Species in an Era of Climate Change has a low overlap of Personal assignations with the Communication category (7.8%) but a high one with the Professional category (33.8%). These results highlight the significant use of personal identity markers across all publics. In this regard, users assigned only to the Personal category may represent what is commonly termed the lay public.
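The per-paper shares reported above (and in Table 4) can be derived from the category assignments of the users who tweeted each paper. The sketch below assumes hypothetical input data: a list of category sets, one per tweeting account, such as would be produced by the word detection step.

```python
def paper_shares(user_cats):
    """Given the category sets of all users who tweeted one paper, return the
    percentage of users per category and the percentage left unassigned.
    A user counts toward every category they are assigned to, so shares
    across categories may sum to more than 100%."""
    n = len(user_cats)
    counts = {}
    unassigned = 0
    for cats in user_cats:
        if not cats:
            unassigned += 1
        for cat in cats:
            counts[cat] = counts.get(cat, 0) + 1
    shares = {cat: round(100 * c / n, 1) for cat, c in counts.items()}
    shares["% unassigned"] = round(100 * unassigned / n, 1)
    return shares
```

For example, four accounts assigned to {Academia}, {Academia, Personal}, {} and {Personal} yield 50.0% Academia, 50.0% Personal and 25.0% unassigned, illustrating why the category columns of Table 4 are not mutually exclusive.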

Discussion

A key challenge in assessing who tweets scholarly documents through Twitter profile descriptions is defining the use and meaning of identity markers in the expressions employed by users. Analysis relying only on Twitter data is a complex endeavor, as we seldom know exactly who is behind an account. To circumvent some of these issues, our analysis categorized profiles into eight types of users sharing climate change research papers on Twitter. Specifically, we categorized expressions and keywords used in Twitter profile descriptions to assess how they represent identity markers, and thus the types of users sharing climate change research papers. As in other studies, the detection of words related to the academic world is precise in that it reflects potential individual users involved in research, although it does not distinguish how close their research interests are to the topic at hand [57, 76]. The overlap with other categories also shows how actors of research are not restricted to this role, whether through communicational (science communication), professional (administrative functions), political (policy making) or simply personal (being a parent, having pets, hobbies) activities [77]. Profiles categorized in Communication mostly encompass journalists and communication professionals, authors, and artists; overlaps with political and professional expressions represent users who may engage in political campaigning or policymaking. Political assignations highlight users who present themselves through social issues and activism, some through related professional work. Professional assignations highlight those whose work relates closely to climate change mitigation efforts, for example risk management, or to other specific professional activities, such as veterinarians or lawyers. Finally, Personal assignations indicate how users identify themselves through their hobbies or personal interests and relationships.
When it is the only categorization, it may indicate users who engage with research through pure curiosity and thus embody what we commonly refer to as the “general public” [46, 78]. Assignations to Organizations represent both organizational accounts (universities, departments, centers, governmental institutions, private companies, etc.) and individuals who employ expressions relating to these institutions [53]. Thus, the proportion of assignations to Organizations represents both these organizations through their specific accounts and individuals who use them as identity markers. However, the Organizations category relies on words that have meanings beyond clear groups or institutions, such as “society”. A word like society may express a delineated entity, such as the “Society for X Research”, or an abstract one, such as society as the realm of social interactions. Future research needs to take this into account, whether by adding to the dictionary, excluding problematic expressions, or refining interpretations, depending on the goals at hand. The relative frequencies of assignations within articles provide an overall assessment of the potential groups who shared research documents on Twitter. Focusing on a selection of highly tweeted papers in climate change research in 2015 and 2016, our results indicate that expressions relating to Academia and Personal identity markers are used to a large extent, whether through unique assignations or overlaps with other categories. The academic community is usually the largest group sharing research on Twitter, with a diversity of publics across individual papers. Meanwhile, Political assignations appear more present in papers discussing sensitive topics, such as fossil fuel consumption or the contribution of climate change to geopolitical conflicts.
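One way to refine an ambiguous marker like “society” is to match delineated phrases rather than the bare token. The pattern below is a hypothetical refinement, not part of the authors' actual codebook: it counts “society” as an Organization marker only when it appears inside a proper-noun phrase.

```python
import re

# Hypothetical rule: treat "society" as an Organization marker only in
# phrases such as "Royal Society" or "Society for/of <Name>", where
# capitalization suggests a delineated entity rather than the common noun.
ORG_SOCIETY = re.compile(r"\b(?:[A-Z][\w&-]*\s+)+Society\b|\bSociety\s+(?:of|for)\s+[A-Z]")

def society_as_org(bio):
    """Return True if a bio uses 'society' as a likely organizational marker."""
    return bool(ORG_SOCIETY.search(bio))
```

Under this rule, “Member of the Royal Society” counts as an organizational marker, whereas “interested in society and politics” does not; such phrase-level rules trade some recall for the precision the qualitative validation calls for.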
Knowing how various publics engage with individual papers may help detect trends in the public communication of research, in both general datasets and specific subsets. It may also help document how papers circulate within specific communities. For example, we see quite different patterns between the papers Ecological networks are more sensitive to plant than to animal extinction under climate change and Oxygen isotope in archaeological bioapatites from India: Implications to climate change and decline of Bronze Age Harappan civilization. The first has a significant share of users assigned to Academia, indicating substantial participation by the research community. The other has the lowest share of academic participation of the papers showcased in this study. These examples illustrate how estimates of the variety of users sharing individual papers provide a basis to further contextualize the repercussions of research dissemination on Twitter, especially for topics like climate change that resonate differently across distinct groups such as conservation specialists or concerned citizens. A qualitative survey of the results then gives meaning to these estimates and helps validate and refine each category. Our study has limits due to the mediated characteristics of Twitter data and to the epistemological and technical choices made before and during the design of the study. Word detection works as a proxy for the categorization of users but does not provide a direct assessment of “who” really participates in discussions about climate change research on Twitter [1, 28]. It is hardly possible to access the user behind an account through automated data analysis. We also rely on information chosen explicitly by the user and do not have access to all the choices made in the preparation of their Twitter profile description.
While this allows users to identify themselves to others in their own words and make their identities visible in ways they chose, our interpretations are based solely on the identity markers we have access to. We also chose to categorize users’ Twitter profiles by assigning categories to them, going back and forth between an automated method of expression detection and manual coding. These categories were chosen according to the literature to make sense of public research communication activities on Twitter [1, 3, 16, 28, 51]. We hypothesized that examining the words and expressions employed by users would allow us to investigate the extent to which various communities share climate change research on Twitter. Our analysis relies mainly on the presence or absence of specific expressions. This allows the flexibility to assess who tweets different sets of documents, as context is dynamic. However, choices need to be made explicit to support interpretations of how specific expressions are used in various contexts. Moreover, some categories may need to be revisited for further development or assessed in conjunction with other methods, such as distinguishing between individuals and organizations or detecting accounts posting automated content [46]. Despite these caveats, this study presents elements to assess the potential publics of climate change research on Twitter by taking context into account. We see how individual documents are shared beyond strict scholarly communities and within specific groups. Our results, while focusing on highly tweeted papers about climate change, indicate that academia is the main group involved, but that specific papers also reach a variety of publics, whether professional, political or personal, depending on the context in which they are shared.
The method we deployed is readily usable across large sets of documents, flexible in that words and categories may be modulated and refined according to research objectives, and provides key insights about ‘who’ tweets research on Twitter. It may also be used in conjunction with other methods to further describe these assessments and to add statistical or qualitative observations to what is observed. By focusing on users’ Twitter descriptions, we can work directly with the identity markers users choose on Twitter. Future research may refine the choices made in building this tool. Overall, it serves as a step for future work about who tweets research documents, in conjunction with other methods such as social network analysis [32]. It also provides new elements to contextualize the reach of scholarly documents on Twitter.

Conclusion

This study focused on the categorization of users sharing climate change research papers on Twitter using a word detection method based on profile descriptions. While our results do not provide a direct assessment of who tweets research, due to the characteristics of identity markers in Twitter profile descriptions, they provide insights about how documents may permeate outside of academia and into various communities. Focusing on a subset of highly tweeted papers about climate change, we see how different groups of users share research papers on Twitter. As such, we provide information about who tweets individual documents to further describe the specific contexts in which this research circulates. The framework we propose is flexible: we presented one set of categories and expressions, which may easily be changed, based on qualitative assessment, to assess different groups of users in other Twitter scholarly communication research. Moreover, moving between automated word analysis and qualitative assessment helps inform interpretations of who is represented through these groups. As such, it highlights how contextual observations will help better gauge the reach of research documents on social media.
PONE-D-21-17099
Who tweets climate change papers? Investigating publics of research through users’ descriptions
PLOS ONE Dear Dr. Toupin, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please take into consideration all Reviewers comments. However, I would like to emphasize few key issues 1. Clear language, clearly defined terms and claims as pointed out by R#1, R#2 and R#3. Please either use different wordings or generally accepted definisions (backed up by proper references). If they do not exist, please make sure that they are defined in the manuscript. 2. Please add missing references which would support the statements and assumptions in the manuscript, as well as provide a complete picture of previous research on the topic of the manuscript(R#1, R#2) 3. Please ensure that conclusions are justified by the results (R#1, R#2) 4. Please ensure that the paper has a clear research objective (R#2) 5. Please make sure that the data availability is in line with  PLOS Data policy (R#3) Please submit your revised manuscript by Nov 07 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Piotr Bródka Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. 
In your Methods section, please include additional information about your dataset and ensure that you have included a statement specifying whether the collection method complied with the terms and conditions for the websites from which you have collected data. 3. Thank you for stating the following in the Acknowledgments Section of your manuscript: “We would like to thank Stefanie Haustein and Juan Pablo Alperin from the ScholCommLab for their help and feedback regarding the analysis. We would also like to thank Matisse Dagenais and Sandrine Dagenais in helping build the codebook. This research was funded through a SSHRC Joseph-Armand Bombardier Canada Graduate Scholarship (767-2017-1329), the SSHRC Insight Grant Understanding the societal impact of research through social media, and received financial contribution from the CIRST.” We note that you have provided additional information within the Acknowledgements Section that is not currently declared in your Funding Statement. Please note that funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: “This study was funded by the Social Sciences and Humanities Research Council of Canada, the Fonds de recherche du Québec - Société et Culture, the Centre interuniversitaire de recherche sur la science et la technologie, the Université du Québec à Montréal and the Canada Research Chairs program. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.” Please include your amended statements within your cover letter; we will change the online submission form on your behalf. 4. 
Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Partly Reviewer #3: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: N/A ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: No ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. 
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: This is a very interesting manuscript that touches upon an important topic (climate change) and employs a viable method (iterative textual analysis) to analyse Twitter data. Yet, while I see the potential of this approach and would highly like to encourage to continue and further refine this, there are some issues I have encountered that should be addressed in order to further enhance the quality of the manuscript Introduction - Please elaborate on what you mean by “messy data from Twitter” (lines 53-55) - “As such, science communication and policy increasingly aim to understand the resonance of research in the public sphere.” (lines 58-59) – Really? Please provide more information on this and elaborate why you think this is the case. More specifically, policy has often been criticized for detaching itself from research and rather follow “popular opinion”. Public conversations of climate change research on Twitter - The cited articles (lines 71-82) seem to fit the argument. However, more information is needed on the specific studies, in order to better understand and contextualize their importance for this manuscript. - “While patterns of scholarly communication on Twitter remain to be documented in detail, it is considered to have democratized access to research.” (lines 91-92) – this is a very bold statement and should be supported by further references. 
More specifically, one could argue that OER and OpenAccess publications have contributed more to this suggested process than Twitter. - The authors refer to Haustein, as well as Diaz-Faes – this needs more information and elaboration as it appears to be key to the manuscript. - What exactly are “non-elite” actors? (line 121) Investigating and representing users in public communication of research on Twitter - I am missing any link or reference to topics such as “fake news”, “post facts”, etc. - “Our study focuses on the analysis of perceived users as a proxy of the publics of research communication on Twitter.” (line162-163) – ok, but is this common standard? Please provide a context that underlines that this is an agreed upon, or suggested approach. - “As such, we account for the multiple identity markers mobilized in individual description to further assess the complexity through which a user may engage with research documents.” (lines 185-187) – ok, but do you also consider the chronological order to markers? For example, a user could be mother, activist and researcher. This is complex. But do you account for mother first and then researcher? One could argue that the list is a meaningful choice by the person. Purpose of the study - “As such, it is useful to assess another user position on Twitter, especially when there is no known relationship.” (lines 191-192) – true. But how can this be done without social network analysis? Please elaborate. - “As such, our study analyzes perceived users through the mobilization of specific keywords in Twitter descriptions.” (lines 194-195) – this is very vague to me. At this stage of the manuscript, I am still not sure what “perceived users” and “mobilization of specific keywords” really means. Please elaborate and explain exactly what this is. 
Data collection and Twitter metrics - “They also were the latest years for which we had complete Twitter information at the time of data collection in September 2018.” (lines 234-236) – what does this really mean? Complete Twitter information can be quite a lot of things. Moreover, the authors indicated before that Twitter data is “messy”. So what does this really entail? - “It also frequently appears in tweets sharing a link to a paper, and so is highly visible to all users.” (lines 237-238) – highly visible is, I believe, an overstatement. If an individual researcher shares the link to her paper and uses a commonly used hashtag, her post will more than likely drown in the information overload. Please elaborate. - “we computed several Twitter metrics to further describe our dataset for tweet activity.” (lines 259-260) – which metrics did you compute? It becomes apparent later on, but I think it should be stated here. - Overall, the paper does not explain, to the best of my knowledge, how users’ profile data was collected. Please make sure that this is included and properly described. Textual analysis of Twitter profile descriptions - The first paragraph (lines 297-308) remains descriptive and does not include any references to previous research that has done similar work. Please rectify this. - “we then improved it through several iterations of the analysis.” (lines 314-315) – how exactly did you do this? Presenting our observations - Again this section remains rather descriptive and does not really strike a link to previous research and studies. Please provide more information on how the research fits into the larger picture. Discussion - “To circumvent some of these issues, our analysis focused on a method to assess perceived users as publics involved in climate change research Twitter communication.” (lines 486-487) – After reading the manuscript and argumentation, I am unfortunately not convinced that the authors can really make such a statement. 
The Tweets were selected based on DOI. This neglects a wide range of hashtags that are commonly used in this space, where the public gets their information and where researchers “need to tap-in to”, in order to gain recognition. I also wonder about the content of the Tweets – which has been neglected. How did people engage? Conclusion - “While our results do not provide a direct assessment of who tweet research due to the characteristics of Twitter data, it provides insights about how documents may permeate outside of academia and in specific groups.” (lines 600-602) – while I agree with the second part of the statement, I tend to disagree with the first part. Social Network Analyses, among others, has been proven as a valuable tool to analyze Twitter communication streams. Hence, the “messy nature” of Twitter data cannot explain why this manuscript has not provided applicable information. Overall, this is my main issue with the manuscript. The authors remain descriptive on a wide range of key issues that would justify the chosen method framework. Moreover, some more insights are not elaborated on and merely mentioned. Finally, some statements are made based on very shaky ground, particularly in view of previous, interdisciplinary research that has been done on Twitter communication. I would like to encourage the authors to carefully reconsider their argumentation and justification, in order to enhance the quality of the manuscript, which otherwise provides an interesting approach to the field. Reviewer #2: Overall, I think the paper is clever and uses some interesting methodology and data. However, I think that language might be the major issue here, since using words such as 'engagement' and 'mobilised' would imply a very different type of analysis. For example, answering Q1 would require an analysis of actual engagement: likes, retweets, comments, quotes etc. The authors state a more realistic and appropriate goal for the paper on pg. 
14: " Rather, we understand expressions in Twitter descriptions as a proxy to investigate the potential publics engaging with specific scholarly documents." But even here, the word "engaging" is problematic. Simply sharing/retweeting a research article is not necessarily indicative of engagement. Other problematic terms that are used in the manuscript are ‘resonance’ and 'publics of climate change’; these are undefined terms and their relevance is not argued for. ‘Distinct communicational contexts’ is another example of wording that seems vague or conflated. What is the meaning of ‘context’ in this manuscript? The main comment that thus arises is: What is actually the (main) research objective? Who (re)tweets papers? (user focused) or how documents may permeate a social media environment? Both current research questions are formulated too ambitiously if I take a critical look at the type of analyses and results. It is hard to understand how user characteristics translate into indications of ‘resonance’ (RQ1). RQ2 focuses on ‘implications’, but based on what results? This question seems more suitable for the Discussion. At best, the paper could make claims about assessing the types of users/publics that contribute to the spreading of climate change research or co-creating a -- research informed -- climate change narrative. In many ways, the manuscript seems to be more a methodological paper (i.e. presenting an innovative methodology) than actually presenting meaningful results. Other points that need attention: Besides the need for a stronger rationale for the study, the introduction seems to provide opportunities for a stronger structure. Descriptive parts and concluding remarks are mixed haphazardly. Some sections show overlap or repetition (e.g., scholars including hyperlinks). The structure of the Methods section is not very clear. Notably, information about the article dataset is scattered throughout this section, and its description is mixed with the Twitter dataset.
The structure of the Results section can be made clearer, as the focus shifts from papers to users and vice versa. The Results seem more top-down than suggested in the Introduction, because there are 8 predefined categories. Was there room for a more explorative examination of users? Otherwise I suggest building a stronger argument for these categories, and earlier in the manuscript. Conclusions can be based more explicitly on results. I find it difficult to see the merit of percentages of user categories related to article titles. It seems a big ‘leap’ to conclusions based on this kind of data. Reviewer #3: The manuscript presents a methodology developed to automatically detect certain keywords in Twitter profiles in order to categorize Twitter users into predefined categories. The results are promising and the authors write that their methods and system could be adapted to investigate Twitter users in general or some other specific subgroup, besides those connected to climate change research. It is not, however, clear whether the scripts have been made or will be made openly available for other researchers to test and use. The results are sound and discussed in detail. The authors also identify some caveats in their work and discuss those accordingly. The term "perceived users" needs a bit more explaining. The paper has some minor issues with the use of prepositions and singular/plural forms of words. Please check the following lines for such issues (and some minor typos): 55, 134, 152, 176, 184, 186, 190, 191, 201, 227, 237, 264, 278, 280, 285 (reference form), 287, 305, 359 (reference form), 380, 408, 416, 460, 502. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public.
Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: Yes: Bob C Mulder Reviewer #3: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 5 Apr 2022 Dear Editors and Reviewers, Please find below our replies to editors and referees’ comments (in red), as well as relevant line numbers associated with our modifications to the manuscript (for comments from the reviewers). Editor comments 1. Clear language, clearly defined terms and claims as pointed out by R#1, R#2 and R#3. Please either use different wordings or generally accepted definitions (backed up by proper references). If they do not exist, please make sure that they are defined in the manuscript. We clarified the language and definitions when applicable. 2. Please add missing references which would support the statements and assumptions in the manuscript, as well as provide a complete picture of previous research on the topic of the manuscript (R#1, R#2) We added references to support our arguments as recommended by the reviewers. 3.
Please ensure that conclusions are justified by the results (R#1, R#2) We revised the conclusions in line with the results. 4. Please ensure that the paper has a clear research objective (R#2) We clarified our objectives and research questions. 5. Please make sure that the data availability is in line with PLOS Data policy (R#3) We verified the availability of our data according to the PLOS Data policy. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf We have changed the paper to match PLOS ONE’s format, reformatting the title and section headings according to PLOS ONE’s style requirements. 2. In your Methods section, please include additional information about your dataset and ensure that you have included a statement specifying whether the collection method complied with the terms and conditions for the websites from which you have collected data. We added a statement confirming that our data collection protocol complied with the terms and conditions of all parties involved. 3. Thank you for stating the following in the Acknowledgments Section of your manuscript: “We would like to thank Stefanie Haustein and Juan Pablo Alperin from the ScholCommLab for their help and feedback regarding the analysis. We would also like to thank Matisse Dagenais and Sandrine Dagenais for helping build the codebook.
This research was funded through a SSHRC Joseph-Armand Bombardier Canada Graduate Scholarship (767-2017-1329), the SSHRC Insight Grant Understanding the societal impact of research through social media, and received financial contribution from the CIRST.” We note that you have provided additional information within the Acknowledgements Section that is not currently declared in your Funding Statement. Please note that funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: “This study was funded by the Social Sciences and Humanities Research Council of Canada, the Fonds de recherche du Québec - Société et Culture, the Centre interuniversitaire de recherche sur la science et la technologie, the Université du Québec à Montréal and the Canada Research Chairs program. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.” Please include your amended statements within your cover letter; we will change the online submission form on your behalf. We removed funding information from the Acknowledgements and added instructions for updating our Funding Statement to the cover letter. 4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. We added captions for the Supporting Information files and updated in-text citations according to PLOS ONE’s guidelines.
REVIEWERS' COMMENTS: Reviewer #1: - Please elaborate on what you mean by “messy data from Twitter” (lines 53-55) We clarified this part of the sentence; we meant that Twitter data is mostly generated by users and not readily organized for research purposes. - “As such, science communication and policy increasingly aim to understand the resonance of research in the public sphere.” (lines 58-59) – Really? Please provide more information on this and elaborate why you think this is the case. More specifically, policy has often been criticized for detaching itself from research and rather following “popular opinion”. We clarified this sentence. We meant that studies and policy documents about the communication and evaluation of research put an increasing focus on the impact of research in the public sphere for evaluation purposes. - The cited articles (lines 71-82) seem to fit the argument. However, more information is needed on the specific studies, in order to better understand and contextualize their importance for this manuscript. We added information about the cited articles in order to contextualize their contribution to the manuscript. - “While patterns of scholarly communication on Twitter remain to be documented in detail, it is considered to have democratized access to research.” (lines 91-92) – this is a very bold statement and should be supported by further references. More specifically, one could argue that OER and OpenAccess publications have contributed more to this suggested process than Twitter. We clarified this sentence. Mostly, we want to highlight that there are promises that sharing research on social media is democratizing research, independently of the degree to which this may or may not be the main factor. - The authors refer to Haustein, as well as Diaz-Faes – this needs more information and elaboration as it appears to be key to the manuscript.
We restructured the paragraph (lines 95-117) to put the focus on the content as a whole (why we should investigate the contexts of research communication on Twitter), and not strictly on Haustein and Diaz-Faes. - What exactly are “non-elite” actors? (line 121) We added a clarification about what “non-elite” actors are according to Newman’s study. Investigating and representing users in public communication of research on Twitter - I am missing any link or reference to topics such as “fake news”, “post facts”, etc. We did not include references to fake news or post facts as they were not directly related to the topic of the paper. While it is true that these are key issues for science communication on Twitter (and social media in general), this section aimed to provide an overview of the methods used so far to investigate who shares research on Twitter. - “Our study focuses on the analysis of perceived users as a proxy of the publics of research communication on Twitter.” (lines 162-163) – ok, but is this common standard? Please provide a context that underlines that this is an agreed upon, or suggested approach. We added some more information to this. The idea is that investigating who tweets research papers usually entails reducing identity to specific markers, such as whether a user is a scientist or not. - “As such, we account for the multiple identity markers mobilized in individual description to further assess the complexity through which a user may engage with research documents.” (lines 185-187) – ok, but do you also consider the chronological order of the markers? For example, a user could be mother, activist and researcher. This is complex. But do you account for mother first and then researcher? One could argue that the list is a meaningful choice by the person. We did not take the chronological order into account as we wanted to avoid presuming which identities are more important.
Basically, our method relies on asking questions such as “does this user present themselves through keywords related to academia or not?” Purpose of the study - “As such, it is useful to assess another user position on Twitter, especially when there is no known relationship.” (lines 191-192) – true. But how can this be done without social network analysis? Please elaborate. We replaced “position” with “orientation or interest” to highlight that Twitter bios may provide cues about the inclination toward specific topics, instead of a position from a network perspective. Some of our future work about this will focus on social network analysis. - “As such, our study analyzes perceived users through the mobilization of specific keywords in Twitter descriptions.” (lines 194-195) – this is very vague to me. At this stage of the manuscript, I am still not sure what “perceived users” and “mobilization of specific keywords” really mean. Please elaborate and explain exactly what this is. We removed mention of “perceived users” as we felt it did not improve the paper. We also clarified our objective. Data collection and Twitter metrics - “They also were the latest years for which we had complete Twitter information at the time of data collection in September 2018.” (lines 234-236) – what does this really mean? Complete Twitter information can be quite a lot of things. Moreover, the authors indicated before that Twitter data is “messy”. So what does this really entail? We clarified this statement. We meant that they were the most recent years for which we had altmetric information for all papers published in both years. - “It also frequently appears in tweets sharing a link to a paper, and so is highly visible to all users.” (lines 237-238) – highly visible is, I believe, an overstatement. If an individual researcher shares the link to her paper and uses a commonly used hashtag, her post will more than likely drown in the information overload. Please elaborate.
We clarified this sentence to highlight that it is usually the most visible information in tweets sharing research articles. - “we computed several Twitter metrics to further describe our dataset for tweet activity.” (lines 259-260) – which metrics did you compute? It becomes apparent later on, but I think it should be stated here. The metrics are listed in the following lines (lines 252-259 of the revised manuscript). - Overall, the paper does not explain, to the best of my knowledge, how users’ profile data was collected. Please make sure that this is included and properly described. We added information about this at line 220 of the revised manuscript. Textual analysis of Twitter profile descriptions - The first paragraph (lines 297-308) remains descriptive and does not include any references to previous research that has done similar work. Please rectify this. We cut the first part of the paragraph, which was redundant with elements mentioned in the introduction, and merged the remaining part with the second paragraph. - “we then improved it through several iterations of the analysis.” (lines 314-315) – how exactly did you do this? We added some elements about the iterative process of the analysis and how we built the codebook. Presenting our observations - Again this section remains rather descriptive and does not really strike a link to previous research and studies. Please provide more information on how the research fits into the larger picture. We removed this section as it did not add critical information to the manuscript. Discussion - “To circumvent some of these issues, our analysis focused on a method to assess perceived users as publics involved in climate change research Twitter communication.” (lines 486-487) – After reading the manuscript and argumentation, I am unfortunately not convinced that the authors can really make such a statement. The Tweets were selected based on DOI.
This neglects a wide range of hashtags that are commonly used in this space, where the public gets their information and where researchers “need to tap-in to”, in order to gain recognition. I also wonder about the content of the Tweets – which has been neglected. How did people engage? We rephrased this statement to clarify that we focus on a method to categorize who shares research articles on Twitter. Our units are the tweets that include a link to a climate change research paper, so we didn’t include the hashtags in our analysis as they weren’t the focus of our study. We also didn’t look at the content of the tweets for now, but we plan to in a subsequent study. Conclusion - “While our results do not provide a direct assessment of who tweet research due to the characteristics of Twitter data, it provides insights about how documents may permeate outside of academia and in specific groups.” (lines 600-602) – while I agree with the second part of the statement, I tend to disagree with the first part. Social Network Analysis, among other approaches, has proven to be a valuable tool to analyze Twitter communication streams. Hence, the “messy nature” of Twitter data cannot explain why this manuscript has not provided applicable information. Overall, this is my main issue with the manuscript. The authors remain descriptive on a wide range of key issues that would justify the chosen methodological framework. Moreover, some insights are merely mentioned and not elaborated on. Finally, some statements are made on very shaky ground, particularly in view of previous, interdisciplinary research that has been done on Twitter communication. I would like to encourage the authors to carefully reconsider their argumentation and justification, in order to enhance the quality of the manuscript, which otherwise provides an interesting approach to the field.
We added references from interdisciplinary research about scholarly communication on Twitter and reframed the paper to make our argument clearer. Our main objective is to provide a proxy and some new groundwork about ‘who’ tweets climate change research papers for future studies about the communication of science on Twitter in particular and digital media in general. Our approach can also be used in conjunction with other methods to improve what we know about the diffusion of research by providing a quick and flexible typology of users based on what they chose to put in their Twitter profile descriptions. This typology can then be used in Social Network Analysis to categorize the nodes that shared research papers on Twitter, for example. Reviewer #2: Overall, I think the paper is clever and uses some interesting methodology and data. However, I think that language might be the major issue here, since using words such as 'engagement' and 'mobilised' would imply a very different type of analysis. For example, answering Q1 would require an analysis of actual engagement: likes, retweets, comments, quotes etc. We rectified the language and clarified our study in regard to these considerations. The authors state a more realistic and appropriate goal for the paper on pg. 14: " Rather, we understand expressions in Twitter descriptions as a proxy to investigate the potential publics engaging with specific scholarly documents." But even here, the word "engaging" is problematic. Simply sharing/retweeting a research article is not necessarily indicative of engagement. Other problematic terms that are used in the manuscript are ‘resonance’ and 'publics of climate change’; these are undefined terms and their relevance is not argued for. ‘Distinct communicational contexts’ is another example of wording that seems vague or conflated. What is the meaning of ‘context’ in this manuscript? We revised wording throughout the manuscript.
We added a definition of “context” in regard to our study: Scholars thus looked to investigate the contexts, understood as the dimensions that give meaning to indicators, in which research circulate on Twitter (4). (p.3) The main comment that thus arises is: What is actually the (main) research objective? Who (re)tweets papers? (user focused) or how documents may permeate a social media environment? Both current research questions are formulated too ambitiously if I take a critical look at the type of analyses and results. It is hard to understand how user characteristics translate into indications of ‘resonance’ (RQ1). RQ2 focuses on ‘implications’, but based on what results? This question seems more suitable for the Discussion. We rephrased RQ1 as our main research question (RQ): How do climate change research papers, both individually and as a whole, get shared outside of academia? (p. 10) We also removed RQ2 as it was not central to the manuscript and added information about the implications and limitations of our method in the Discussion. At best, the paper could make claims about assessing the types of users/publics that contribute to the spreading of climate change research or co-creating a -- research informed -- climate change narrative. In many ways, the manuscript seems to be more a methodological paper (i.e. presenting an innovative methodology) than actually presenting meaningful results. We revised the manuscript according to a more precise goal: to assess the extent to which climate change research papers are being shared outside of academia. Other points that need attention: Besides the need for a stronger rationale for the study, the introduction seems to provide opportunities for a stronger structure. Descriptive parts and concluding remarks are mixed haphazardly. Some sections show overlap or repetition (e.g., scholars including hyperlinks). We revised the introduction and clarified its structure.
We also revised the manuscript to minimize overlap when applicable. The structure of the Methods section is not very clear. Notably, information about the article dataset is scattered throughout this section, and its description is mixed with the Twitter dataset. We revised the structure of the Methods section to make it clearer. The structure of the Results section can be made clearer, as the focus shifts from papers to users and vice versa. We revised the structure of the Results to make them clearer. We did, however, keep the overall structure: general results first, then results for highly tweeted papers for the categories Academia, Communication, Political, Professional, and then Personal. The Results seem more top-down than suggested in the Introduction, because there are 8 predefined categories. Was there room for a more explorative examination of users? Otherwise I suggest building a stronger argument for these categories, and earlier in the manuscript. We added information about the chosen categories in the literature review. Conclusions can be based more explicitly on results. I find it difficult to see the merit of percentages of user categories related to article titles. It seems a big ‘leap’ to conclusions based on this kind of data. We revised the conclusions according to the results. Reviewer #3: The manuscript presents a methodology developed to automatically detect certain keywords in Twitter profiles in order to categorize Twitter users into predefined categories. The results are promising and the authors write that their methods and system could be adapted to investigate Twitter users in general or some other specific subgroup, besides those connected to climate change research. It is not, however, clear whether the scripts have been made or will be made openly available for other researchers to test and use. We uploaded the scripts to GitHub for further improvement of the method and dictionaries. The results are sound and discussed in detail.
The authors also identify some caveats in their work and discuss those accordingly. The term "perceived users" needs a bit more explaining. We removed mentions of the term “perceived users” as it was confusing and not useful for the interpretation of the paper. We focused instead on the categories of users we can assess by examining Twitter profile descriptions. The paper has some minor issues with the use of prepositions and singular/plural forms of words. Please check the following lines for such issues (and some minor typos): 55, 134, 152, 176, 184, 186, 190, 191, 201, 227, 237, 264, 278, 280, 285 (reference form), 287, 305, 359 (reference form), 380, 408, 416, 460, 502. We corrected the wording issues and typos in the above-mentioned lines. Submitted filename: Responses to Reviewers.pdf 13 May 2022 Who tweets climate change papers? Investigating publics of research through users’ descriptions PONE-D-21-17099R1 Dear Dr. Toupin, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact.
If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Piotr Bródka Academic Editor PLOS ONE 23 May 2022 PONE-D-21-17099R1 Who tweets climate change papers? Investigating publics of research through users’ descriptions Dear Dr. Toupin: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Piotr Bródka Academic Editor PLOS ONE
Similar papers: 24 in total

1.  Climate change. Accelerating extinction risk from climate change.

Authors:  Mark C Urban
Journal:  Science       Date:  2015-05-01       Impact factor: 47.728

2.  Climate change in the Fertile Crescent and implications of the recent Syrian drought.

Authors:  Colin P Kelley; Shahrzad Mohtadi; Mark A Cane; Richard Seager; Yochanan Kushnir
Journal:  Proc Natl Acad Sci U S A       Date:  2015-03-02       Impact factor: 11.205

3.  Global and regional health effects of future food production under climate change: a modelling study.

Authors:  Marco Springmann; Daniel Mason-D'Croz; Sherman Robinson; Tara Garnett; H Charles J Godfray; Douglas Gollin; Mike Rayner; Paola Ballon; Peter Scarborough
Journal:  Lancet       Date:  2016-03-03       Impact factor: 79.321

4.  Analysis and valuation of the health and climate change cobenefits of dietary change.

Authors:  Marco Springmann; H Charles J Godfray; Mike Rayner; Peter Scarborough
Journal:  Proc Natl Acad Sci U S A       Date:  2016-03-21       Impact factor: 11.205

5.  Climate change on Twitter: topics, communities and conversations about the 2013 IPCC Working Group 1 report.

Authors:  Warren Pearce; Kim Holmberg; Iina Hellsten; Brigitte Nerlich
Journal:  PLoS One       Date:  2014-04-09       Impact factor: 3.240

6.  Ecological networks are more sensitive to plant than to animal extinction under climate change.

Authors:  Matthias Schleuning; Jochen Fründ; Oliver Schweiger; Erik Welk; Jörg Albrecht; Matthias Albrecht; Marion Beil; Gita Benadi; Nico Blüthgen; Helge Bruelheide; Katrin Böhning-Gaese; D Matthias Dehling; Carsten F Dormann; Nina Exeler; Nina Farwig; Alexander Harpke; Thomas Hickler; Anselm Kratochwil; Michael Kuhlmann; Ingolf Kühn; Denis Michez; Sonja Mudri-Stojnić; Michaela Plein; Pierre Rasmont; Angelika Schwabe; Josef Settele; Ante Vujić; Christiane N Weiner; Martin Wiemers; Christian Hof
Journal:  Nat Commun       Date:  2016-12-23       Impact factor: 14.919

7.  Scientific networks on Twitter: Analyzing scientists' interactions in the climate change debate.

Authors:  Stefanie Walter; Ines Lörcher; Michael Brüggemann
Journal:  Public Underst Sci       Date:  2019-04-26

8.  Oxygen isotope in archaeological bioapatites from India: Implications to climate change and decline of Bronze Age Harappan civilization.

Authors:  Anindya Sarkar; Arati Deshpande Mukherjee; M K Bera; B Das; Navin Juyal; P Morthekai; R D Deshpande; V S Shinde; L S Rao
Journal:  Sci Rep       Date:  2016-05-25       Impact factor: 4.379

9.  Frequency distribution of journalistic attention for scientific studies and scientific sources: An input-output analysis.

Authors:  Markus Lehmkuhl; Nikolai Promies
Journal:  PLoS One       Date:  2020-11-11       Impact factor: 3.240
