Henrik Nygård, Floris M. van Beest, Lisa Bergqvist, Jacob Carstensen, Bo G. Gustafsson, Berit Hasler, Johanna Schumacher, Gerald Schernewski, Alexander Sokolov, Marianne Zandersen, Vivi Fleming.
Abstract
Decision-support tools (DSTs) synthesize complex information to assist environmental managers in the decision-making process. Here, we review DSTs applied in the Baltic Sea area, to investigate how well the ecosystem approach is reflected in them, how different environmental problems are covered, and how well the tools meet the needs of the end users. The DSTs were evaluated based on (i) a set of performance criteria, (ii) information on end user preferences, (iii) how end users had been involved in tool development, and (iv) what experiences developers/hosts had on the use of the tools. We found that DSTs frequently addressed management needs related to eutrophication, biodiversity loss, or contaminant pollution. The majority of the DSTs addressed human activities, their pressures, or environmental status changes, but they seldom provided solutions for a complete ecosystem approach. In general, the DSTs were scientifically documented and transparent, but confidence in the outputs was poorly communicated. End user preferences were, apart from the shortcomings in communicating uncertainty, well accounted for in the DSTs. Although end users were commonly consulted during the DST development phase, they were not usually part of the development team. Answers from developers/hosts indicate that DSTs are not applied to their full potential. Deeper involvement of end users in the development phase could potentially increase the value and impact of DSTs. As a way forward, we propose streamlining the outputs of specific DSTs, so that they can be combined into a holistic insight into the consequences of management actions and better serve the ecosystem approach.
Keywords: Baltic Sea; DAPSIWRM; Decision-making; Ecosystem approach; Marine management
Year: 2020 PMID: 32910293 PMCID: PMC7686007 DOI: 10.1007/s00267-020-01356-8
Source DB: PubMed Journal: Environ Manage ISSN: 0364-152X Impact factor: 3.266
DESTONY DST definition criteria (DC)
| # | Definition criteria (DC) |
|---|---|
| (1) | The tool is interactive in the sense that the end user is asked to provide input data or information and subsequently receives outputs related to that input. If the tool is based on a non-dynamic model that cannot show different outcomes, the tool is not considered interactive. |
| (2) | The tool is virtual in the sense that it can be accessed and operated through the internet. A tool is not virtual if it must be downloaded to your computer. |
| (3) | The purpose of the tool is to support decision-making in relation to degradation of the aquatic environment at the local, regional, national, or international management scale. A single indicator is not considered a tool. |
| (4) | The tool is primarily developed for use in the Baltic Sea or its drainage basin. If the tool was originally developed for other sea areas but has been adapted primarily to the Baltic Sea, the criterion applies. If the tool is restricted to national waters, it needs to cover any of the DESTONY participating countries (Finland, Sweden, Germany, Denmark). |
| (5) | The tool is applicable and accessible by the end user (whether a policy maker or an expert involved in management) without unreasonable effort. The criterion does not apply in cases of unreasonable effort, such as when the tool cannot be found or must be used by the host. |
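The screening these criteria imply (tools fulfilling at least four of the five DCs were treated as full DSTs in the evaluation) can be sketched as a simple filter. The tool names and true/false flags below are hypothetical placeholders, not data from the study.

```python
# Illustrative sketch (not the paper's code): screen candidate tools by how many
# of the five DESTONY definition criteria (DC1-DC5) each fulfils.

def count_fulfilled(dc_flags):
    """Number of definition criteria a tool fulfils; each flag is True/False."""
    return sum(bool(flag) for flag in dc_flags)

# Hypothetical candidates with flags for DC1..DC5.
candidates = {
    "ToolA": [True, True, True, True, False],    # fulfils 4 of 5
    "ToolB": [True, False, True, False, False],  # fulfils 2 of 5
}

# Keep tools fulfilling at least 4 of the 5 criteria.
shortlist = [name for name, flags in candidates.items()
             if count_fulfilled(flags) >= 4]
print(shortlist)  # → ['ToolA']
```
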
The performance criteria (PC) and scoring classes (1–5) used to evaluate the tools
| Definition of the performance criteria (PC) | Evaluation scale |
|---|---|
| PC1: Scientific documentation | |
| Has the DST been documented in a scientific publication, etc.? | 1 = no documentation found |
| 2 = earlier version in web but outdated | |
| 3 = earlier version in report or scientific paper but outdated | |
| 4 = updated documentation in web | |
| 5 = updated documentation in report and/or scientific paper | |
| PC2: Complexity of method | |
| How simple or complex is the method used for calculating the output? | 1 = no quantitative analysis is applied; method is qualitative |
| 2 = simple quantitative method or descriptive statistics, e.g., one-out-all-out, sum, average, median | |
| 3 = fairly simple quantitative statistics, e.g., weighted average, regression | |
| 4 = complex quantitative methods, e.g., multidimensional statistics | |
| 5 = very complex analysis, e.g., dynamic models or combinations of several tools and processes | |
| PC3: Transparency of the DST | |
| Is all processing described? Is the code public and the documentation understandable? Are underlying methods/calculations transparent to the user? | 1 = no description of processes |
| 2 = basic idea of the DST is explained but not in detail | |
| 3 = basic idea explained and some metaproducts can be viewed | |
| 4 = process is described in detail and all steps can be viewed | |
| 5 = process is described in such detail that it could be repeated, code/tool may be viewed, all steps can be obtained | |
| PC4: Management relevance to the Baltic Sea | |
| To what extent is the output related to making decisions on responses/measures? | 1 = not directly related to decision-making |
| 2 = output is related to questions that require decision-making | |
| 3 = output can be processed further to support decision-making | |
| 4 = output can easily be combined with other information to support decision-making | |
| 5 = output is directly supporting decision-making | |
| PC5: Spatial limitations | |
| Is the spatial scale of the tool restricted or can it be adapted according to management needs (e.g., applied on a local as well as national level)? | 1 = permanently fixed spatially |
| 2 = spatially fixed, and could be changed only through excessive reconfiguration of the tool | |
| 3 = spatially fixed, but could be extended through relatively simple adjustments to the tool | |
| 4 = there are a couple of alternative spatial options or some flexibility, but not unlimited possibilities | |
| 5 = no spatial restrictions, DST can be adapted according to management needs | |
| PC6: Temporal limitations | |
| Is the tool dynamic, i.e., describing changes over time? Does the output have a temporal dimension that can be expressed as years? | 1 = output has no temporal dimension |
| 2 = a temporal dimension can easily be achieved through repetition | |
| 3 = output shows results between two points in time | |
| 4 = output shows results between several points in time | |
| 5 = output extends over time, to the extent that it can express detailed changes resulting from management responses | |
| PC7: Confidence assessment of results/level of uncertainty | |
| Does the tool assess the uncertainty associated with the output, and does this assessment account for all or a subset of potential uncertainties? | 1 = no confidence expressed, or confidence expressed only for meta-products but not the end product. Uncertainty assessed only using alternative scenario modeling, sensitivity analyses, or expected outcomes of different scenarios |
| 2 = simple confidence criteria, e.g., qualitative expert judgment | |
| 3 = non-comprehensive confidence criteria, covers only one or two aspects (spatio-temporal, methodological, or confidence-of-classification) | |
| 4 = multifaceted confidence assessment partly relying on expert judgment, including spatio-temporal, methodological, and confidence-of-classification | |
| 5 = completely data-driven multifaceted confidence assessment, including spatio-temporal, methodological, and confidence-of-classification | |
| PC8: Data dependencies | |
| Does the tool work with missing values? Is it sensitive to changes in the type of input? Quantitative/qualitative data? | 1 = can use only one type of information (whether qualitative or quantitative), very sensitive to missing values |
| 2 = can handle only one type of information or strong restrictions to the format or type of input data, but can handle missing values | |
| 3 = flexible to different types of input data but with some restrictions, can deal with only qualitative or quantitative information, can handle missing values | |
| 4 = no restrictions to the type of input data, can handle missing values, but can deal with only qualitative or quantitative information | |
| 5 = input data can be qualitative/quantitative, is not sensitive to different types of input data, can handle missing values | |
| PC9: Testing and validation | |
| Has the DST been applied to different systems and tested independently? | 1 = no testing involved |
| 2 = has been tested/applied once | |
| 3 = has been tested/applied in several cases but in a limited number of systems | |
| 4 = has been tested/applied in several contexts | |
| 5 = has been applied to several cases in different types of contexts, and tested thoroughly | |
| PC10: Transferability | |
| How easily can the tool be adapted to other systems (e.g., North Sea, fresh water systems, etc.) by the end user? | 1 = not applicable to other systems |
| 2 = applying to other systems would require reconstruction | |
| 3 = applying to other systems would require considerable updates | |
| 4 = can be applied to other systems with minor adjustments | |
| 5 = can be directly applied to other systems | |
| PC11: Thematic broadness | |
| How generic is the DST? For example, which and how many policy issues (e.g., eutrophication, biodiversity, pollution, maritime activities etc.) does it address? | 1 = the DST is specific to an environmental policy issue (e.g., eutrophication) and highly specific to its aspects; it addresses e.g., a specific species, habitat, nutrient levels, etc. |
| 2 = the DST is specific to an environmental policy issue (e.g., eutrophication) but can address different aspects of it (e.g., indirect and direct effects) | |
| 3 = a single application of the tool can deal with different environmental policy issues, but only one at a time (e.g., it can be applied to biodiversity or eutrophication, but not simultaneously) | |
| 4 = the DST addresses two environmental policy issues at once (e.g., eutrophication and biodiversity) | |
| 5 = the DST is highly flexible and can address various environmental policy issues at once | |
| PC12: Broadness of components of the DPSIR/DAPSIWRM addressed | |
| How broadly does the tool handle the management chain of events, from drivers to pressures, state changes, impacts to environment, social impacts, and responses of society (e.g., components in the DPSIR/DAPSIWRM cycle?) How many components does it address? | 1 = very narrow use, restricted to one or few interactions |
| 2 = narrow, covers only one segment in the DAPSIWRM cycle, and inspects it narrowly | |
| 3 = covers only one segment in the DAPSIWRM cycle, but inspects it broadly | |
| 4 = covers two segments in the DAPSIWRM cycle | |
| 5 = very generic, covering three or more segments in the DAPSIWRM cycle broadly | |
| PC13: Suitability to components operationally applied in the Baltic Sea | |
| How well does the tool fit the approaches and methodology already agreed upon in the area? Are the existing operational input components (e.g., monitoring data, indicators) compatible with the tool when applied in the Baltic Sea, or must the input data be created/collected separately? Is the output directly suitable as input to, or for collaborative interpretation with, output from other operational tools? | 1 = tool is not compatible with operational input or output components |
| 2 = tool is not fully compatible with operational input and output components, but could be applied with some adjustments | |
| 3 = tool is not fully compatible with input components, but output can be applied operationally | |
| 4 = tool is fully compatible with input components applied in the Baltic Sea, but output requires further adjustments | |
| 5 = both input and output components are applied operationally in the Baltic Sea | |
| PC14: Ease of use/expertise required | |
| Is the tool generally applicable to non-expert users or restricted to experts? Is the DST easy to apply? Is there need for expertise in a specific field (e.g., marine ecology, economics, policy, etc.)? | 1 = can be applied only by dedicated experts throughout the process |
| 2 = tool is applied by experts, but less experienced users can interact during selected phases | |
| 3 = application of the tool requires participation in a special training course | |
| 4 = anyone can apply the tool after extensive reading of the manual | |
| 5 = the tool is easy to use, and no special expertise is required | |
| PC15: Time effort | |
| How much time is needed to apply the DST? i.e., How much time is needed from the choice of the tool for a specific problem to the output of concrete/usable results? | 1 = both preparation and application of the DST are time consuming (weeks or months) |
| 2 = preparation of the tool is rather quick (days), but application is time consuming (weeks or months) | |
| 3 = preparation of the application is time consuming (weeks or months), but the application is rather quick (days) | |
| 4 = both preparation and application of the DST are rather quick (days) | |
| 5 = the DST can be directly applied and provides immediate results (e.g., in (stakeholder) meetings) (within hours/one day) | |
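Each evaluated DST received a score of 1–5 on each performance criterion, and the results (Figs. 4–5) are summarized as median scores per group of tools. The aggregation can be sketched as below; the criterion names are taken from Table 2, but the scores themselves are invented for illustration.

```python
# Illustrative sketch (not the paper's code): aggregate per-tool scores (1-5)
# on performance criteria into median scores for a group of tools.
from statistics import median

# Hypothetical scores on two criteria for three tools in the same group.
scores = {
    "PC1": [5, 4, 4],  # scientific documentation
    "PC7": [1, 2, 1],  # confidence assessment / uncertainty
}

medians = {pc: median(vals) for pc, vals in scores.items()}
print(medians)  # → {'PC1': 4, 'PC7': 1}
```
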
Identified decision-support tools (DSTs) listed in alphabetic order
| Name | Category | Problem class | DAPSIWRM component (a) | Reference |
|---|---|---|---|---|
| | Model | Contaminants | P, S, IW | Oltmans et al. |
| BALTCOST | Model | Eutrophication | IW, RM | Hasler et al. |
| | Planning tool | Sea-area use | D, A, P, S, IW, RM | BONUS BASMATI Project |
| BALTSEM-POP | Model | Contaminants | P, S | Undeman et al. |
| | Assessment tool | Biodiversity and conservation | S | Nygård et al. |
| | Model | Noise | P, S | Fyhr and Nikolopoulos |
| BSII | Assessment tool | Cumulative effects | A, P, S | Korpinen et al. |
| BSPI | Assessment tool | Cumulative effects | A, P | Korpinen et al. |
| | Assessment tool | Nonindigenous species | A, P | Ruiz and Sethuraman |
| | Assessment tool | Contaminants | S | Andersen et al. |
| | Assessment tool | Cumulative effects | A, P, S | Stock |
| ERGOM-MOM | Model | Eutrophication | P, S | Neumann et al. |
| | Assessment tool | Eutrophication | S | HELCOM |
| FIT | Assessment tool | Fishery management | A, P, S | Eigaard et al. |
| GETM-GITM | Model | Hydrography | S | Burchard and Bolding |
| | Assessment tool | Eutrophication | S | Fleming-Lehtinen et al. |
| | Assessment tool | Impact evaluation | S, IW, RM | Karnauskaitė et al. |
| InVest | Model | Impact evaluation | S, IW | Sharp et al. |
| | Assessment tool | Biodiversity and conservation | P, S | Loh et al. |
| | Stakeholder tool | Fishery management | A, P, S | MareFrame project |
| | Assessment tool | Biodiversity and conservation | S | MARMONI project |
| | Planning tool | Sea-area use | A, P, S, IW, RM | |
| | Assessment tool | Impact evaluation | S, I | Inácio et al. |
| | Stakeholder tool | Eutrophication | A, P | Neset and Wilk |
| MIRADI | Stakeholder tool | Biodiversity and conservation | A, P, S, IW, RM | |
| MONERIS | Model | Eutrophication | P | Venohr et al. |
| | Assessment tool | Cumulative effects | A, P, S | Hansen |
| | Assessment tool | Biodiversity and conservation | S | Berg et al. |
| | Model | Eutrophication | A, P, S, IW, RM | Wulff et al. |
| | Model | Contaminants | P, S | Wania et al. |
| RAUMIS | Model | Impact evaluation | A, P, S, RM | Kreins et al. |
| Recreation Site Values | Model | Impact evaluation | A, S, IW | Czajkowski et al. |
| SAF | Stakeholder tool | Impact evaluation | D, A, P, S, IW, RM | Støttrup et al. |
| SOCOPSE | Planning tool | Contaminants | P, S, IW, RM | Baartmans et al. |
| | Stakeholder tool | Impact evaluation | IW, RM | Schumacher et al. |
| Symphony | Model | Sea-area use | A, P, S | Swedish Agency for Marine and Water Management |
| TargetEconN | Model | Eutrophication | IW, RM | Hasler et al. |
| | Planning tool | Sea-area use | A, P, S, IW | Menegon et al. |
| | Assessment tool | Eutrophication | S | Lindegarth et al. |
| | Model | Eutrophication | A, P, S | Huttunen et al. |
| | Assessment tool | Eutrophication | P, S | Aroviita et al. |
| | Model | Biodiversity and conservation | D, A, P, S, IW, RM | Moilanen et al. |
DSTs marked with bold font fulfilled 4 or 5 of the DST definition criteria (DC)
(a) D drivers, A human activities, P pressures, S state changes, IW impacts (on welfare), RM responses (management measures)
Fig. 1 The representation of DSTs addressing different problem topics in the different DAPSIWRM framework segments. The DAPSIWRM framework and the links between the segments are based on Elliott et al. (2017). The pie chart area is scaled according to the number of DSTs (also indicated by n), and numbers on the arrows indicate the number of DSTs linking the segments. A single DST can address several segments. D drivers, A human activities, P pressures, S state changes, IW impacts on welfare, RM responses (management measures)
Fig. 2 The distribution of tools among different environmental problem areas. The problem areas are ordered first according to how many DSTs fulfilled at least four of the DST definition criteria (dark gray bars) and secondly according to the number of tools fulfilling 1–3 of the definition criteria (light gray bars)
Fig. 3 Proportions of tool scorings according to the performance criteria. N = 26 (number of tools evaluated = 26). Full descriptions of the performance criteria (PC) and the scoring classes (1–5) are presented in Table 2
Fig. 4 Median scores of decision-support tools according to the segments of the DAPSIWRM framework they address. The length of the gray bars corresponds to the median score. Performance criteria (PC) and scoring classes (1–5) are presented in Table 2. Note that the same tool can address several segments. N = 26
Fig. 5 Median scores of decision-support tools according to the environmental problem area they address. The length of the gray bars corresponds to the median score. Performance criteria (PC) and scoring classes (1–5) are presented in Table 2. N = 26 (results shown only for problem areas with more than three DSTs)
Fig. 6 Proportions of answers from the end-user evaluations of the importance of the performance criteria. N = 108 (number of answers = 108). The full descriptions of the performance criteria (PC) are presented in Table 2
Fig. 7 Results from the questionnaire to DST developers/hosts on the involvement of end users in the development of the tools. N = 27 (27 DST developers/hosts responded to the questionnaire). The full questions are presented in Online Resource 1
The number of DSTs in which end users have been involved in the development of the DSTs for different problem topics
| | Eutrophication | Impact evaluation | Biodiversity and conservation | Contaminants | Cumulative effects | Sea-area use | Fishery management | Nonindigenous species | Noise | Hydrography |
|---|---|---|---|---|---|---|---|---|---|---|
| Through interviews etc. | 5 | 5 | 2 | 1 | 2 | 1 | 1 | 1 | 1 | 1 |
| Initiating development | 3 | 1 | 2 | 1 | 3 | 0 | 1 | 1 | 0 | 1 |
| Consultancy | 4 | 3 | 2 | 1 | 2 | 1 | 0 | 1 | 0 | 1 |
| In development team | 2 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| Maintaining and updating | 3 | 2 | 3 | 1 | 0 | 1 | 0 | 0 | 0 | 1 |
Note that we did not get answers from all developers/hosts; N = 27. See Online Resource 1 for full information on the questionnaire