
Science communication: challenges and dilemmas in the age of COVID-19.

Konstantina Antiochou

Abstract

A pandemic of misinformation is said to be spreading alongside the COVID-19 pandemic. The need to properly inform the public is stronger than ever in the fight against misinformation, but what 'properly' means in this context is quite a controversial issue. In what follows, I discuss the challenges we face in communicating COVID-19 health information to the public, with the aim of shedding light on some ethical and policy issues that emerge in science (communication) in times of crisis.
© 2021. Springer Nature Switzerland AG.

Keywords:  COVID-19; Science communication; Uncertainty

Year:  2021        PMID: 34241709      PMCID: PMC8267502          DOI: 10.1007/s40656-021-00444-0

Source DB:  PubMed          Journal:  Hist Philos Life Sci        ISSN: 0391-9714            Impact factor:   1.205


As the novel coronavirus disease (COVID-19) continues to spread around the world, we face the challenge of a rapid and far-reaching spread of information related to the virus that is often false or misleading. Science communication can play a key role in the fight against misinformation. However, the attempt to counter coronavirus misinformation by providing (access to) more accurate information is hampered by the uncertainties surrounding the coronavirus pandemic. We are facing an extremely ‘wicked problem’ (Rittel & Webber, 1973), which means that there is no definitive formulation or explanation of it. The need to properly inform the public about COVID-19 thus gives rise to two questions: How should we talk about it, that is, what language should we use for the transmission of scientific information? And what should we say? The first question mainly refers to the problem of ‘thick concepts’ and metaphors used in science communication when it comes to COVID-19 prevention and control guidelines. The language of COVID-19 information involves definitions and classifications that are based on current epidemiological data and subject to constant changes and updates. Although the discussion of clinical COVID-19 data is conducted primarily in English, many institutions, having realized that accessible language matters for access to accurate information, offer health information about COVID-19 in a variety of languages. The aim is not so much to make research data publicly accessible as to provide advice or recommendations on how to reduce the spread of COVID-19. And yet the language used for the transmission of this information is not merely descriptive or unequivocal. To give but one example, any reference to a ‘public health emergency’ or ‘pandemic’ is legitimized on the basis of evaluative judgments.
The WHO’s declaration that the global spread of coronavirus disease is a pandemic was meant to send a powerful signal to countries that urgent action was essential to combat the spread of the disease. It was intended to raise awareness. But it could also instill panic and fear in people. That is why the appropriateness of this declaration, as well as of the time at which it was made, has been the subject of considerable criticism. Moreover, scientists sometimes rely on metaphors to make sense of scientific explanations or abstract scientific concepts. But despite their utility, metaphors can also constrain scientific reasoning and increase public misunderstanding (Elliott, 2017). ‘Pandemics are extremely fat-tailed events’, says Nassim Taleb, which means that ‘the risk needs to be killed in the egg’ (Taleb, 2020). He clearly suggests that we should take all the precautionary measures needed to prevent the spread of coronavirus, when he notes that ‘waiting for the accidents before putting the seat belt on, or evidence of fire before buying insurance would make the perpetrator exit the gene pool’ (Taleb, 2020). However, the appropriateness of these analogies has been challenged (Ioannidis et al., 2020) and, given the public’s growing distrust in science, Taleb’s reference to ‘fortune-cookie evidentiary methods’ could exacerbate this tendency. Was he nevertheless justified in using this metaphor if his purpose was to communicate uncertainty?
The second problem has to do with the risk of error involved in decision making under conditions of uncertainty, what is now known as ‘inductive risk’ in the philosophy of science literature. Depending on the range of these decisions, we can distinguish different versions of this problem. The assumption is always that there is a gap between data and hypotheses, which allows non-epistemic values to enter scientific reasoning.
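Taleb’s ‘extremely fat-tailed’ characterization quoted above is a quantitative claim, and a small numerical sketch can show what is at stake. This example is mine, not the article’s; the choice of a normal versus a Pareto distribution, and the parameter values, are purely illustrative assumptions:

```python
# Illustrative sketch (not from the article): why "fat-tailed" matters for risk.
# A thin-tailed (normal) model deems an extreme event essentially impossible;
# a fat-tailed (Pareto) model keeps it live enough to justify precaution.
import math

def normal_tail(x, mu=0.0, sigma=1.0):
    """P(X > x) for a normal distribution, via the complementary error function."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

def pareto_tail(x, alpha=1.5, xm=1.0):
    """P(X > x) for a Pareto distribution with tail index alpha and scale xm."""
    return (xm / x) ** alpha if x >= xm else 1.0

# Ten "standard units" out: the normal tail is astronomically small,
# while the fat-tailed Pareto still assigns non-negligible probability.
print(f"normal tail at 10:  {normal_tail(10):.2e}")
print(f"pareto tail at 10:  {pareto_tail(10):.2e}")
```

The point of the comparison is Taleb’s: if the relevant distribution is fat-tailed, tail risk cannot be dismissed on the grounds that extreme events ‘almost never’ happen under familiar thin-tailed models.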
But while, in a traditional version of the inductive-risk argument (Rudner, 1953), the risk is limited to the final decision a scientist must make on whether or not to accept a hypothesis (the decision, that is, that the evidence is sufficiently strong to warrant the hypothesis’s acceptance), according to a more recent version (Douglas, 2009) inductive risk is present from the beginning and throughout the entire scientific process: in the choice of methodology, in the choice of the models used, in the characterization of evidence, and in the analysis or interpretation of data. So even a purely methodological decision, such as the choice of a level of statistical significance, involves values according to this argument: an appropriate balance between the two kinds of error (false positives and false negatives), and therefore a decision about which errors we should most avoid. In cases of public health emergency, such as the coronavirus pandemic, where research is conducted under time pressure and the need to move expeditiously is of vital importance, we also need to find the appropriate balance between the need to act and the desire for more reliable findings, which would nevertheless be time-consuming to obtain. The recent debate between Ioannidis and Taleb on COVID-19 forecasts (Ioannidis et al., 2020; Taleb, 2020) was quite revealing of the dilemmas posed in decision making and of the above-mentioned interplay of science and values in assessing and regulating different (types of) risks under conditions of uncertainty. It started from the question of whether forecasting for COVID-19 had failed. But it actually focused on whether to take (or refrain from taking) strict but costly measures to prevent and control the spread of the disease, which is a rather political issue. It involves trade-off decisions that go beyond the best available science (Scheufele et al., 2020), or even scientists’ authority, and consequently should not be left to scientists alone.
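The value-laden character of choosing a significance level, mentioned above, can be made concrete with a small simulation. This is my illustration, not the article’s; the one-sided z-test, effect size, and sample size are arbitrary assumptions:

```python
# Illustrative sketch (not from the article) of the inductive-risk point:
# tightening alpha lowers the false-positive rate but raises the
# false-negative rate, so the choice presupposes a judgment about
# which kind of error matters more.
import random
from statistics import NormalDist

def simulate_error_rates(alpha, n_trials=10_000, n=30, effect=0.5, seed=0):
    """Simulate one-sided z-tests; return (false-positive rate, false-negative rate)."""
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value
    fp = fn = 0
    for _ in range(n_trials):
        # Null true (no effect): rejecting here is a false positive.
        z_null = sum(rng.gauss(0, 1) for _ in range(n)) / n ** 0.5
        fp += z_null > threshold
        # Alternative true (real effect): failing to reject is a false negative.
        z_alt = sum(rng.gauss(effect, 1) for _ in range(n)) / n ** 0.5
        fn += z_alt <= threshold
    return fp / n_trials, fn / n_trials

for alpha in (0.05, 0.01):
    fp_rate, fn_rate = simulate_error_rates(alpha)
    print(f"alpha={alpha}: false positives ~{fp_rate:.3f}, false negatives ~{fn_rate:.3f}")
```

On Douglas’s version of the argument, the same kind of trade-off recurs at every methodological choice point, not only at the final decision to accept or reject a hypothesis.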
Such decisions could, and arguably should, be handled with informed public contributions, or at least with public consent. Hence, the question that arises is which research results can responsibly be communicated to the general public, when these results are controversial; when they are revised or updated almost daily; when they are often inaccurate or conflicting; and when they furthermore lead to non-accountable (and/or irresponsible) decisions, in the sense that the responsibility for these decisions can be shifted from the political level to healthcare workers and vice versa. A concern expressed here is that the presence of scientific discourse in the public sphere may create confusion and distrust (in both science and government) or undermine the consistency of messaging, if people see research data and recommendations being constantly revised or scientists failing to reach agreement on them. But it is also argued that, precisely because the scientific basis for decision making is indeterminate and scientific controversies may reflect ideological or political differences (Priest, 2018), all information should be made publicly available in the name of full transparency (Elliott, 2017), with the aim of involving in the decision-making process those affected by or interested in these decisions. One way out of this dilemma may be to communicate to the public not only the results but also the nature of scientific research and knowledge, and/or the impact these results may have on people’s goals or values (de Melo-Martin & Intemann, 2018; Elliott, 2017). It is prudent to recognize that absolute certainty is elusive and that scientific findings are open to revision in light of new evidence or a reinterpretation of prior evidence. One goal of science communication could therefore be to point out conflict and show that it results from the uncertainty surrounding scientific research. But therein lies an important challenge.
Can we tell the difference between a ‘reasonable disagreement’ that may arise in the context of a properly functioning science, on the one hand, and misinformation or the dissemination of false news, on the other? A transparency policy requires that research data and results be made openly accessible even when they are inconclusive or conflicting. But unless we can distinguish a legitimate disagreement among scientists from controversies arising from science denial or disinformation, transparency could cause more confusion. So what is a ‘reasonable’ scientific disagreement, and how should we communicate uncertainty?
References:  2 in total

1.  Forecasting for COVID-19 has failed.

Authors:  John P A Ioannidis; Sally Cripps; Martin A Tanner
Journal:  Int J Forecast       Date:  2020-08-25
Cited by:  1 in total

1.  Key topics in pandemic health risk communication: A qualitative study of expert opinions and knowledge.

Authors:  Siv Hilde Berg; Marie Therese Shortt; Jo Røislien; Daniel Adrian Lungu; Henriette Thune; Siri Wiig
Journal:  PLoS One       Date:  2022-09-30       Impact factor: 3.752


Coyote Bioscience (Beijing) Co., Ltd. © 2022-2023.