Travis G. Coan, Constantine Boussalis, John Cook, Mirjam O. Nanko.
Abstract
A growing body of scholarship investigates the role of misinformation in shaping the debate on climate change. Our research builds on and extends this literature by (1) developing and validating a comprehensive taxonomy of climate contrarianism, (2) conducting the largest content analysis to date on contrarian claims, (3) developing a computational model to accurately classify specific claims, and (4) drawing on an extensive corpus from conservative think-tank (CTT) websites and contrarian blogs to construct a detailed history of claims over the past 20 years. Our study finds that the claims made by CTTs and contrarian blogs have focused on attacking the integrity of climate science and scientists and, increasingly, have challenged climate policy and renewable energy. We further demonstrate the utility of our approach by exploring the influence of corporate and foundation funding on the production and dissemination of specific contrarian claims.
Year: 2021 PMID: 34785707 PMCID: PMC8595491 DOI: 10.1038/s41598-021-01714-4
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Taxonomy of claims made by contrarians. This figure displays the three layers of claim-making by climate change contrarian actors. The original version of this taxonomy with more detailed claim descriptions can be found in Supplementary Table S2.
Figure 2. Prevalence of super- and sub-claims by CTTs and contrarian blogs. (a) illustrates the share of claim-making paragraphs related to the sub-claims of our taxonomy by CTTs (circle) and blogs (hollow square). (b) and (c) display the share of 515,005 claim-making paragraphs devoted to the following super-claim categories: 1. Global warming is not happening (green hollow circle), 2. Humans are not causing global warming (yellow diamond), 3. Climate impacts are not bad (blue filled square), 4. Climate solutions won’t work (black circle), and 5. Climate movement/science is unreliable (orange hollow square). Note that estimates prior to 2007 in (c) are derived from a relatively small number of blogs.
Figure 3. Prevalence of selected contrarian sub-claims in CTT communication. This figure illustrates the temporal variation (quarterly) in the proportion of sub-claims found in CTT documents related to (a) “Climate policies are harmful”, “Clean energy won’t work”, and (b) “Climate movement is unreliable”, “Climate science is unreliable”. Highlighted periods in the time series include: (A) 2003 Climate Stewardship Act; (B, C) 2005 and 2007 Climate Stewardship and Innovation Acts; (D) Climate Security Act of 2007; (E) American Clean Energy and Security Act; (F) Clean Power Plan; (G-I) An Inconvenient Truth and Al Gore Nobel/IPCC Prize; (J) “Climategate”; and (K) Peter Gleick/Heartland Institute affair. Note that darker lines represent cubic splines used to aid interpretation.
Figure 4. CTT super-claim prevalence and funding from key donors. This figure includes scatterplots and linear regression results (see Supplementary Table S6 for the full results) showing the relationship between the share of CTT funding from “key” conservative donors and the prevalence of claims from the following categories: (a) “Climate movement/science is unreliable” [Category 5], (b) “Climate solutions won’t work” [Category 4], and (c) “Global warming is not happening”, “Human GHGs are not causing global warming” & “Climate impacts are not bad” [Categories 1–3]. Total funding in millions of US dollars over the period 2003-2010 is displayed in (d), along with the share of funding from DonorsTrust/DonorsCapital (red), key donors other than DonorsTrust/DonorsCapital (yellow), and other donors (blue).
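The bivariate relationships summarized in Figure 4 amount to simple linear regressions of a super-claim's prevalence on the share of funding from key donors. A minimal sketch of that kind of fit, using fabricated placeholder numbers (not the paper's data) and NumPy's least-squares `polyfit`:

```python
# Sketch of a bivariate OLS fit like those in Figure 4.
# The data points below are invented placeholders for illustration only.
import numpy as np

# Hypothetical per-CTT values: share of funding from key donors (x)
# and share of paragraphs making a given super-claim (y).
funding_share = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.65])
claim_share = np.array([0.10, 0.14, 0.18, 0.25, 0.33, 0.40])

# Degree-1 polyfit returns (slope, intercept) via least squares.
slope, intercept = np.polyfit(funding_share, claim_share, 1)
```

A positive `slope` in this setup would correspond to claim prevalence rising with the funding share, which is the kind of association the figure's regression lines display.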
Average annotator performance by class.
| Code | Claim label | Average coder accuracy |
|---|---|---|
| 0 | No claim | 0.50 |
| 1 | Global warming is not happening | 0.95 |
| 2 | Human greenhouse gases are not causing climate change | 0.96 |
| 3 | Climate impacts/global warming is beneficial/not bad | 0.97 |
| 4 | Climate solutions won’t work | 0.97 |
| 5 | Climate movement/science is unreliable | 0.86 |
Out-of-sample classification performance.
| Model | Precision (noisy validation) | Recall (noisy validation) | F1 (noisy validation) | Precision (noise-free test) | Recall (noise-free test) | F1 (noise-free test) |
|---|---|---|---|---|---|---|
| Logistic (unweighted) | 0.71 | 0.55 | 0.62 | 0.83 | 0.57 | 0.68 |
| Logistic (weighted) | 0.62 | 0.68 | 0.65 | 0.75 | 0.70 | 0.72 |
| SVM (unweighted) | 0.66 | 0.56 | 0.61 | 0.77 | 0.58 | 0.66 |
| SVM (weighted) | 0.60 | 0.68 | 0.64 | 0.74 | 0.70 | 0.72 |
| ULMFiT | 0.69 | 0.69 | 0.69 | 0.77 | 0.67 | 0.72 |
| ULMFiT (weighted) | 0.66 | 0.60 | 0.62 | 0.76 | 0.60 | 0.65 |
| ULMFiT (over sample) | 0.41 | 0.73 | 0.50 | 0.46 | 0.75 | 0.55 |
| ULMFiT (focal Loss) | 0.66 | 0.58 | 0.60 | 0.73 | 0.56 | 0.61 |
| ULMFiT-logistic | 0.71 | 0.70 | 0.70 | 0.77 | 0.72 | 0.75 |
| ULMFiT-SVM | 0.74 | 0.65 | 0.70 | 0.81 | 0.63 | 0.71 |
| RoBERTa | 0.75 | 0.77 | 0.76 | 0.82 | 0.75 | 0.77 |
| RoBERTa-logistic | 0.76 | 0.77 | 0.76 | 0.83 | 0.75 | 0.79 |
The table reports macro-averaged precision, recall, and F1 scores to compare model fit across “shallow” classifiers and “deep” transfer-learning architectures. Logistic (unweighted): logistic regression using TF-IDF-weighted features, optimized via grid search. Logistic (weighted): the same model with weighting for class imbalance. SVM (unweighted): a linear support vector machine using TF-IDF-weighted features, optimized via grid search. SVM (weighted): the same model with weighting for class imbalance. ULMFiT models: we start with a language model pre-trained on the Wiki-103 corpus. First, we fine-tuned this pre-trained model using our training set and a large, random sample of unannotated blog and CTT paragraphs; second, we trained the classification model using the training and validation sets described above. Given the observed class imbalance, we examined four variations of the ULMFiT architecture: a model that (1) ignores class imbalance; (2) oversamples each minibatch to adjust for class imbalance; (3) weights the loss function for class imbalance following the “balanced” procedure used in the scikit-learn library; or (4) uses a focal loss function. RoBERTa models: see discussion in Methods.
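As a concrete illustration of the “weighted” baselines described above, here is a minimal sketch (not the authors' exact pipeline; the toy paragraphs and labels are invented) of a TF-IDF logistic regression with scikit-learn's “balanced” class weighting and a small grid search:

```python
# Sketch of the "Logistic (weighted)" baseline: TF-IDF features,
# "balanced" class weighting, and grid search over regularization strength.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Invented toy paragraphs; labels follow the paper's coding scheme
# (0 = no claim, 5 = climate movement/science is unreliable).
texts = [
    "The weather report for tomorrow is sunny.",
    "Climate models are unreliable and their data cannot be trusted.",
    "Local news covered the town festival this weekend.",
    "Scientists are alarmist and the movement is corrupt.",
]
labels = [0, 5, 0, 5]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    # class_weight="balanced" reweights each class by
    # n_samples / (n_classes * n_samples_in_class).
    ("clf", LogisticRegression(class_weight="balanced", max_iter=1000)),
])

# Small grid search over the inverse regularization strength C.
grid = GridSearchCV(pipeline, {"clf__C": [0.1, 1.0, 10.0]}, cv=2)
grid.fit(texts, labels)
preds = grid.predict(texts)
```

The same class-weighting idea carries over to the ULMFiT variant (3) above, where the per-class weights enter the loss function rather than the estimator directly.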
Classification performance by class (claims and sub-claims).
| Code | Claim label | Precision | Recall | F1 |
|---|---|---|---|---|
| 0 | No claim | 0.90 | 0.95 | 0.93 |
| 1 | Global warming is not happening | 0.92 | 0.80 | 0.86 |
| 1.1 | Ice/permafrost/snow cover isn’t melting | 0.92 | 0.69 | 0.79 |
| 1.2 | We’re heading into an ice age/global cooling | 0.73 | 0.76 | 0.74 |
| 1.3 | Weather is cold/snowing | 0.88 | 0.73 | 0.80 |
| 1.4 | Climate hasn’t warmed/changed over the last (few) decade(s) | 0.84 | 0.67 | 0.74 |
| 1.6 | Sea level rise is exaggerated/not accelerating | 0.88 | 0.92 | 0.91 |
| 1.7 | Extreme weather isn’t increasing/has happened before/isn’t linked to climate change | 0.93 | 0.86 | 0.90 |
| 2 | Human greenhouse gases are not causing climate change | 0.82 | 0.88 | 0.85 |
| 2.1 | It’s natural cycles/variation | 0.82 | 0.86 | 0.84 |
| 2.3 | There’s no evidence for greenhouse effect/carbon dioxide driving climate change | 0.69 | 0.79 | 0.73 |
| 3 | Climate impacts/global warming is beneficial/not bad | 0.91 | 0.92 | 0.91 |
| 3.1 | Climate sensitivity is low/negative feedbacks reduce warming | 0.82 | 0.85 | 0.83 |
| 3.2 | Species/plants/reefs aren’t showing climate impacts/are benefiting from climate change | 0.81 | 0.90 | 0.85 |
| 3.3 | CO2 is beneficial/not a pollutant | 0.90 | 0.96 | 0.93 |
| 4 | Climate solutions won’t work | 0.86 | 0.64 | 0.74 |
| 4.1 | Climate policies (mitigation or adaptation) are harmful | 0.70 | 0.55 | 0.61 |
| 4.2 | Climate policies are ineffective/flawed | 0.88 | 0.44 | 0.59 |
| 4.4 | Clean energy technology/biofuels won’t work | 0.72 | 0.72 | 0.72 |
| 4.5 | People need energy (e.g. from fossil fuels/nuclear) | 0.78 | 0.50 | 0.61 |
| 5 | Climate movement/science is unreliable | 0.82 | 0.75 | 0.78 |
| 5.1 | Climate-related science is unreliable/uncertain/unsound (data, methods & models) | 0.77 | 0.80 | 0.77 |
| 5.2 | Climate movement is unreliable/alarmist/corrupt | 0.78 | 0.61 | 0.69 |
Performance measures are calculated by assessing the final RoBERTa-Logistic ensemble classifier using the “error-free” validation set (see Methods).
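The macro-averaged scores reported in these tables are the unweighted means of the per-class precision, recall, and F1. An illustrative computation on invented labels (not the paper's data) using scikit-learn:

```python
# Macro-averaged precision/recall/F1: the unweighted mean of the
# per-class scores, so rare classes count as much as frequent ones.
from sklearn.metrics import precision_recall_fscore_support

# Invented gold labels and predictions using the paper's claim codes.
y_true = [0, 0, 1, 1, 5, 5, 4, 4]
y_pred = [0, 1, 1, 1, 5, 0, 4, 4]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
print(round(precision, 2), round(recall, 2), round(f1, 2))  # → 0.79 0.75 0.74
```

Setting `average="macro"` mirrors the tables above; `average="weighted"` would instead favor the dominant "no claim" class, which is why macro averaging is the more demanding summary under class imbalance.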