
A collaboratively-derived science-policy research agenda.

William J Sutherland1, Laura Bellingan, Jim R Bellingham, Jason J Blackstock, Robert M Bloomfield, Michael Bravo, Victoria M Cadman, David D Cleevely, Andy Clements, Anthony S Cohen, David R Cope, Arthur A Daemmrich, Cristina Devecchi, Laura Diaz Anadon, Simon Denegri, Robert Doubleday, Nicholas R Dusic, Robert J Evans, Wai Y Feng, H Charles J Godfray, Paul Harris, Sue E Hartley, Alison J Hester, John Holmes, Alan Hughes, Mike Hulme, Colin Irwin, Richard C Jennings, Gary S Kass, Peter Littlejohns, Theresa M Marteau, Glenn McKee, Erik P Millstone, William J Nuttall, Susan Owens, Miles M Parker, Sarah Pearson, Judith Petts, Richard Ploszek, Andrew S Pullin, Graeme Reid, Keith S Richards, John G Robinson, Louise Shaxson, Leonor Sierra, Beck G Smith, David J Spiegelhalter, Jack Stilgoe, Andy Stirling, Christopher P Tyler, David E Winickoff, Ron L Zimmern.   

Abstract

The need for policy makers to understand science and for scientists to understand policy processes is widely recognised. However, the science-policy relationship is sometimes difficult and occasionally dysfunctional; it is also increasingly visible, because it must deal with contentious issues, or itself becomes a matter of public controversy, or both. We suggest that identifying key unanswered questions on the relationship between science and policy will catalyse and focus research in this field. To identify these questions, a collaborative procedure was employed with 52 participants selected to cover a wide range of experience in both science and policy, including people from government, non-governmental organisations, academia and industry. These participants consulted with colleagues and submitted 239 questions. An initial round of voting was followed by a workshop in which 40 of the most important questions were identified by further discussion and voting. The resulting list includes questions about the effectiveness of science-based decision-making structures; the nature and legitimacy of expertise; the consequences of changes such as increasing transparency; choices among different sources of evidence; the implications of new means of characterising and representing uncertainties; and ways in which policy and political processes affect what counts as authoritative evidence. We expect this exercise to identify important theoretical questions and to help improve the mutual understanding and effectiveness of those working at the interface of science and policy.


Year: 2012    PMID: 22427809    PMCID: PMC3302883    DOI: 10.1371/journal.pone.0031824
Source DB: PubMed    Journal: PLoS One    ISSN: 1932-6203    Impact factor: 3.240


Introduction

The importance of understanding and using science for public policy-making has long been recognised [1], but recent years have seen a growing debate over how this is best achieved [2]–[4]. Still more recently, ‘evidence-based policy’ has become the desired norm in many fields (even if its meaning is still disputed), and this has led to a greater embedding of scientists, both natural and social, alongside other specialists in public policy-making processes. In many governments, scientists are engaged at a senior level. The US, for example, has the President's Council of Advisors on Science and Technology, while the UK has Chief Scientific Adviser posts in all government departments, in addition to a Government Chief Scientific Adviser with a place in some Cabinet Committees. In spite of their acknowledged importance, however, relations between science and policy are sometimes troubled [5], and periodically erupt into controversy. Prominent examples include the acrimonious debate over scientific understandings of climate change [6], further inflamed by the ‘Climategate’ email controversy, disputes over the use of genetically modified crops and foods in Europe, the failure to acknowledge the risk of possible BSE transmission to humans [7], and conflict over stem cell research, which is particularly acute in the United States. In 2009, the public sacking of the Chair of the UK Advisory Council on the Misuse of Drugs began a row not only about appropriate policy (in this case for drugs classification), but also about the proper place of independent scientific advice in the policy-making process. Such troubles are symptomatic of the complexity of science-policy interactions, and suggest that there is still much to understand about the nature of scientific authority and processes of policy formation and change [8]–[10]. Against this backdrop, this paper reports the results of an exercise that sought to identify the most important outstanding questions in this domain.
Precedents for attempts to identify ‘key questions’ go back to the learned civic societies of Enlightenment England and France. For example, the Royal Society for the Encouragement of Arts, Manufactures, and Commerce (founded 1754) and the French National Institute (1795–1983) identified specific policy-relevant questions for which they offered prizes to promote commercial and social applications of science [11]. Other examples include Hilbert's famous set of mathematical questions [12], Paul Erdős' posing of mathematical questions with cash prizes for those who solved them [13] and Steffen et al.'s [14] listing of questions in the environmental sciences. Contemporary ‘top down’ examples include the US National Research Council, in its assessment of strategic directions for the geographical sciences [15], and the International Council for Science, with its Grand Challenges in Global Sustainability Research [16]. We have adopted a rather different, bottom-up, approach, bringing together researchers, policy makers and practitioners with interests in relations between science and policy to identify priority, researchable questions in this field. The method is similar to that used in conservation biology [17]–[21] and agricultural science [22]. Previous exercises have been remarkably influential [23]. For example, two of the resulting papers [17], [22] were the most downloaded ever from their respective journals, and one [17] was explicitly cited as the basis for the priority research questions identified within the UK Marine Science Strategy [24]. Our aim has been to identify key questions which, if addressed through focused research and enquiry, might not only help resolve important theoretical challenges but might also improve the mutual understanding and effectiveness of those who work at the interface of science and policy.
The questions presented below were generated through a democratic, transparent and collaborative process similar to those used in previous exercises [23]. There are interesting differences in this case, however, because the existence of a pre-determined research and policy community is much less evident. Participants were therefore selected to cover a wide range of academic disciplines (including the biological, environmental, medical, physical, and social sciences) as well as governmental and non-governmental organisations, consultancies and industry. Initially, each participant was invited to produce a list of questions, consulting widely if they wished to do so (see the Materials and Methods section below). The 239 questions submitted at this first stage are presented in Material S1. A process of voting, deliberation and further voting (the final stages of which took place at a meeting of participants over two days) subsequently reduced the initial list to a final set of 40 questions. During this process the questions were also redrafted and grouped thematically. They are presented below, ordered by theme but not in rank order. The outcomes of an exercise such as this are inevitably influenced by the composition of the set of participants, as well as by the process. Clearly, therefore, the results are not ‘reproducible’ (in the sense that a re-run with different people could be expected to produce exactly the same set of questions). Nevertheless, if the exercise were to involve a similarly large and diverse group of participants, and were to be conducted, like this one, through several rounds of voting, deliberation and editing, we consider it highly likely that broadly similar general themes would emerge. This is, of course, an empirically testable proposition.

- How do different political cultures and institutions affect the acquisition and treatment of scientific evidence in policy formulation, implementation and evaluation?
- How do scientists and policy makers recognise and convey the limitations of scientific advice?
- At what stages during the development of policy does scientific evidence have the greatest impact on the decisions made?
- Under what conditions does scientific evidence legitimise political decisions?
- What roles have science and other forms of expertise played in international governance regimes, such as the World Trade Organisation?
- Are there conditions under which scientific evidence may help resolve value-laden conflict and, if so, what are those conditions?
- What factors affect the utility and legitimacy of formal decision support, assessment and evaluation tools, and their adoption (or otherwise) by policy makers?
- What influences the form and application of monitoring and evaluation practices in the development of policy informed by science?
- How do policy makers decide which questions they should ask their expert advisors, and when in the policy cycle they should be asked?
- What are the most effective mechanisms for identifying the evidence required to inform policy-making on new and emerging problems?
- How, and with what consequences, have the sources of scientific evidence and advice used by policy makers changed over recent decades?
- In what ways do different political cultures shape the frameworks through which evidence and advice are sourced?
- In what circumstances are policy problems likely to require the inclusion of experts with conflicting views?
- When is it considered appropriate to consult experts with conflicting views, and what mechanisms can ensure that this takes place?
- What factors influence whether different disciplines are included effectively when defining and addressing complex policy problems?
- What are the mechanisms by which budgetary pressures and societal constraints on policy-making influence the prioritisation and funding of research?
- What is the effectiveness of different techniques for anticipating future policy issues requiring science input?
- How are national science advisory systems constructed, and to what extent do different systems result in different outcomes?
- How and why does the role of scientific advice in policy-making differ among local, regional, national and international levels of governance?
- Which commissioning and operational arrangements lead to the most effective use of science in policy-making?
- Policy makers typically use networks of experts, formal and informal. How do the structure and composition of such networks influence the outcomes of decision-making?
- How do different ways of using and organising in-house scientific expertise affect the quality and use of scientific evidence and advice in policy-making?
- What are the consequences of different approaches to institutionalising, professionalising and building capacity in the exchange of knowledge between science and policy?
- How can the effectiveness of knowledge-brokering [5] be assessed?
- How is agreement reached on what counts as sufficient evidence to inform particular policy decisions?
- How is scientific evidence incorporated into representations of, and decision-making about, so-called “wicked” problems, which lack clear definition and cannot be solved definitively?
- Can distinctions be made in scientific advice between facts and values and, to the extent that this is possible, how effective are policy makers in distinguishing them and what factors influence their effectiveness?
- How can risks, and the associated uncertainties, complexities, ambiguities and ignorance, be effectively characterised and communicated?
- How do policy makers understand and respond to scientific uncertainties and expert disagreements?
- Do different approaches to building consensus, or illuminating lack of consensus, result in different consequences for policy and, if so, why?
- What factors (for example, openness, accountability, credibility) influence the degree to which the public accept as trustworthy an expert providing advice?
- What governance processes and enabling conditions are needed to ensure that policy-making is scientifically credible, while addressing a perceived societal preference for policy processes that are more democratic than technocratic?
- How might the attitudes and values of diverse publics relating to science and technology, and their governance, be incorporated effectively into debates about the use of evidence in policy-making?
- What has been the influence of scrutinising institutions, such as those of legislative bodies (e.g. Parliament, Congress, National Assembly or Bundestag), on the roles of science in policy-making?
- What are the implications for their effectiveness of opening up expert advisory processes to different forms of transparency?
- What are the implications for science-policy relations, and for the democratisation of science, of novel methods of engagement and dissemination (such as citizen science and new media technologies, including social media)?
- What factors shape the ways in which scientific advisors and policy makers make sense of their own and each other's roles in the policy process?
- How and why have the conceptual models of science-policy relations held by policy makers, scientists and other stakeholders changed over time, and with what consequences?
- How is guidance on the handling and communication of risk, uncertainty and ambiguity interpreted by policy makers, and what impact do their views have on the uptake and implementation of recommendations?
- What impact has research on the relationship between science and policy actually had on science policy?

Discussion

Although it may seem self-evident that policy should be informed by scientific understanding, and should therefore be evidence-based, this normative assumption is itself based on surprisingly weak evidence. Debates continue, for example, about what exactly constitutes good evidence, where and how such evidence should be sought, and at what stage in the policy process different forms of evidence might be more or less appropriate. That such debate persists reflects the fact that there are many open questions about the nature of science-policy interactions, as this exercise has revealed. In short, therefore, we need to ask not just how science can best inform policy, but also how policy and political processes affect what counts as authoritative evidence in the first place. Jasanoff's [2] seminal study of science advisers showed that the value of science in policy stemmed in part from its capacity for detailed engagement with practical policy problems. At the same time, the authority of science was seen to depend on maintaining its independence from politics through separation, in what has been referred to as ‘boundary-work’ [25]–[26]. Rhetorical commitments in the policy world to a clear distinction between facts and values were ever-present. Since then, however, experience in many different contexts, both national and issue-based, has brought about a much greater awareness of the processes of interconnection among science, politics, policy-making and publics [27]–[28]. As Bijker et al. [8] note, an appreciation of the limits of science as an impartial arbiter among policy options comes at exactly the moment when demands for scientific input to policy are increasing. This tension is reflected and articulated in many of the questions generated by the interdisciplinary exercise reported here.
The six broad themes around which the questions have been organised constitute a potential framework for formulating research priorities, if we seek to develop better understandings of how science-policy interactions occur, and of evidence-based policy in practice. Beginning with a set of questions that consider the formal role that science might be expected to play in policy-making, we move on to two sets of more empirical questions about the ways in which science is selected and evaluated within the policy process, and how advisory processes actually work as an established system of governance; both sets of questions bear on the issues of expertise and authority. The following two themes then consider some of the limits to scientific knowledge, specifically in relation to inherent uncertainty and pervasive interdisciplinarity, and the roles of democratic participation and accountability in science-policy interactions. Taken together, these first five themes suggest a maturing appreciation of complexity and mutual interdependence in these relations; of the value and ubiquity of science in contemporary policy-making; of the limits of ‘speaking truth to power’; and of the considerable effort that goes into the routine tasks of managing science policy. Perhaps most interestingly, the final theme opens up a series of questions about how reflection on, and better understanding of, the nature of science-policy relations might help to improve the ways in which scientific evidence and advice are commissioned, constructed and transmitted when developing forms of evidence-based policy. The exercise reported here may therefore be seen as a contribution to developing a broad research agenda for investigating this critical, complex and contested relationship, perhaps in ways that could enhance its capacity to bring the best available knowledge effectively to bear on twenty-first-century problems.

Materials and Methods

The methods used in this exercise are similar to those described in Sutherland et al. [23], based on the experience of a series of attempts to identify priority questions [17]–[19], [21]–[22], [29]. The 52 participants were selected to cover a wide range of approaches to science and policy across government, non-governmental organisations, academia and industry. All participants are authors of this paper; the address list indicates their affiliations. Each participant was permitted to consult widely among their own colleagues in obtaining an initial list of questions. We asked participants how many people they had actively consulted (for example, in workshops, meetings or email discussions, but not including those who were sent details and did not respond). From the responses we know that at least 83 additional people beyond the participants were involved in devising questions. In total, 239 questions were submitted. These questions were collated into twelve themes. They were then sent to all participants, who were asked to select around fifty that they considered to be the most important. Twenty-nine participants voted; eleven questions received no votes. Participants were also invited to suggest alternative wording. The final screening took place at a two-day workshop held in Cambridge in April 2011. On the first evening the process was discussed and potential misunderstandings and problems resolved. Prior to the meeting, all participants had been provided with the number of votes for each question and any suggested rephrasing. On the following day, the workshop was divided into three 105-minute sessions, each with four groups meeting in parallel – twelve discussion groups in total, one discussing each question theme. Each group was charged with reducing one of the twelve question themes to three priority questions plus a ranked list of three reserves.
A rapporteur (from outside the team) was assigned to each session to incorporate changes to questions and capture the shortlist of the emergent top six; participants observed the editing process (projected onto a screen) as it was being carried out. Each group had a different, pre-allocated chair (three of whom had previous experience of chairing sessions in similar exercises). A guidance note for chairs suggested that early decisions could be made to drop questions with zero or very few votes from the initial voting round; and also that groups of questions that clearly addressed similar issues could be identified. This process was designed to assist the group in identifying priorities, removing redundancy, and rewording questions to eliminate overlap and improve clarity. The group then voted on the remaining questions in order to select those considered the most important. Chairs also needed to maintain structure and direction in what were invariably vigorous and challenging deliberations. In a final plenary session chaired by WJS, the top 36 questions (three from each of the twelve groups) were presented as a printed list to each participant to identify overlaps, problem questions and potential clarifications. Editing was again projected onto a screen and so was visible to all. When disagreement could not be resolved by discussion, decisions about inclusion or exclusion of questions, and about specific wording, were made by majority voting. Seven questions were removed by this process. The 12 top-ranking second-level questions were examined and the top 6 of these selected by voting (each participant having 6 votes). They were then discussed further to resolve any overlaps. The next 12 secondary questions were examined along with the remaining top ranked questions and the final five questions selected with each participant having five votes. 
Selected questions were then clustered into six categories by placing related questions together, and edited by the entire group to produce the questions set out in this paper. During this process, after discussion and another round of voting, one question was removed and one short-listed question was added. As with previous exercises [23], most questions changed considerably from initial submission to final product. Forty-three participants made comments on or edits to the 64 successive versions of the paper that were circulated to all participants. We did not obtain ethics approval for this exercise, as it was agreed from the outset that all those participating in the voting and selection of questions were to become authors of the resulting paper. However, all submitted questions were treated anonymously, and an agreement was made to publish in an open-access journal, if possible, in order to facilitate general accessibility for those in policy communities.

Material S1. The questions submitted to this exercise. (DOCX)
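Computationally, each voting round described above reduces to simple vote aggregation: count how many participants selected each question, rank by count, and flag questions with no support as candidates for early removal. Purely as an illustrative sketch (the `tally_votes` helper, the question IDs and the ballots below are hypothetical, not data from the exercise), the initial round could be expressed as:

```python
from collections import Counter

def tally_votes(ballots, all_questions):
    """Aggregate participants' ballots (each a set of question IDs)
    into a ranked list, and flag questions that received no votes."""
    counts = Counter()
    for ballot in ballots:
        counts.update(ballot)  # one vote per question per participant
    # Stable sort: ties keep their original (submission) order
    ranked = sorted(all_questions, key=lambda q: counts[q], reverse=True)
    no_votes = [q for q in all_questions if counts[q] == 0]
    return ranked, no_votes

# Hypothetical example: three participants voting over five questions
questions = ["Q1", "Q2", "Q3", "Q4", "Q5"]
ballots = [{"Q1", "Q2"}, {"Q1", "Q3"}, {"Q1", "Q2", "Q4"}]
ranked, no_votes = tally_votes(ballots, questions)
# "Q1" (3 votes) ranks first; "Q5" received no votes
```

In the actual exercise, of course, the tallies informed face-to-face deliberation rather than applying a purely mechanical cut-off.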
References

1.  Environment and development. Earth system science for global sustainability: grand challenges.

Authors:  W V Reid; D Chen; L Goldfarb; H Hackmann; Y T Lee; K Mokhele; E Ostrom; K Raivio; J Rockström; H J Schellnhuber; A Whyte
Journal:  Science       Date:  2010-11-12       Impact factor: 47.728

2.  One hundred questions of importance to the conservation of global biological diversity.

Authors:  W J Sutherland; W M Adams; R B Aronson; R Aveling; T M Blackburn; S Broad; G Ceballos; I M Côté; R M Cowling; G A B Da Fonseca; E Dinerstein; P J Ferraro; E Fleishman; C Gascon; M Hunter; J Hutton; P Kareiva; A Kuria; D W Macdonald; K Mackinnon; F J Madgwick; M B Mascia; J McNeely; E J Milner-Gulland; S Moon; C G Morley; S Nelson; D Osborn; M Pai; E C M Parsons; L S Peck; H Possingham; S V Prior; A S Pullin; M R W Rands; J Ranganathan; K H Redford; J P Rodriguez; F Seymour; J Sobel; N S Sodhi; A Stott; K Vance-Borland; A R Watkinson
Journal:  Conserv Biol       Date:  2009-04-22       Impact factor: 6.560

3.  Generation of priority research questions to inform conservation policy and management at a national level.

Authors:  Murray A Rudd; Karen F Beazley; Steven J Cooke; Erica Fleishman; Daniel E Lane; Michael B Mascia; Robin Roth; Gary Tabor; Jiselle A Bakker; Teresa Bellefontaine; Dominique Berteaux; Bernard Cantin; Keith G Chaulk; Kathryn Cunningham; Rod Dobell; Eleanor Fast; Nadia Ferrara; C Scott Findlay; Lars K Hallstrom; Thomas Hammond; Luise Hermanutz; Jeffrey A Hutchings; Kathryn E Lindsay; Tim J Marta; Vivian M Nguyen; Greg Northey; Kent Prior; Saudiel Ramirez-Sanchez; Jake Rice; Darren J H Sleep; Nora D Szabo; Geneviève Trottier; Jean-Patrick Toussaint; Jean-Philippe Veilleux
Journal:  Conserv Biol       Date:  2010-12-22       Impact factor: 6.560
