| Literature DB >> 34307513 |
Marianne Sandberg1,2, Ayla Hesp3,4, Cécile Aenishaenslin5, Marion Bordier6, Houda Bennani7, Ursula Bergwerff8, Ilias Chantziaras9,10, Daniele De Meneghi11, Johanne Ellis-Iversen2, Maria-Eleni Filippizi12, Koen Mintiens13, Liza R Nielsen14, Madelaine Norström15, Laura Tomassone11, Gerdien van Schaik8,16, Lis Alban1,14.
Abstract
Regular evaluation of integrated surveillance for antimicrobial use (AMU) and resistance (AMR) in animals, humans, and the environment is needed to ensure system effectiveness, but the question is how. In this study, six different evaluation tools were assessed after being applied to AMU and AMR surveillance in eight countries: (1) ATLASS: the Assessment Tool for Laboratories and AMR Surveillance Systems developed by the Food and Agriculture Organization (FAO) of the United Nations, (2) ECoSur: Evaluation of Collaboration for Surveillance tool, (3) ISSEP: Integrated Surveillance System Evaluation Project, (4) NEOH: developed by the EU COST Action "Network for Evaluation of One Health," (5) PMP-AMR: The Progressive Management Pathway tool on AMR developed by the FAO, and (6) SURVTOOLS: developed in the FP7-EU project "RISKSUR." Each tool was scored using (i) 11 pre-defined functional aspects (e.g., workability concerning the need for data, time, and people); (ii) a strengths, weaknesses, opportunities, and threats (SWOT)-like approach of user experiences (e.g., things that I liked or that the tool covered well); and (iii) eight predefined content themes related to scope (e.g., development purpose and collaboration). PMP-AMR, ATLASS, ECoSur, and NEOH are evaluation tools that provide a scoring system to obtain semi-quantitative results, whereas ISSEP and SURVTOOLS will result in a plan for how to conduct evaluation(s). ISSEP, ECoSur, NEOH, and SURVTOOLS allow for in-depth analyses and therefore require more complex data, information, and specific training of evaluator(s). PMP-AMR, ATLASS, and ISSEP were developed specifically for AMR-related activities; only ISSEP included production of a direct measure for "integration" and "impact on decision making." NEOH and ISSEP were perceived as the best tools for evaluation of One Health (OH) aspects, and ECoSur as best for evaluation of the quality of collaboration. PMP-AMR and ATLASS seemed to be the most user-friendly tools, particularly designed for risk managers. ATLASS was the only tool focusing specifically on laboratory activities. Our experience is that adequate resources are needed to perform evaluation(s). In most cases, evaluation would require involvement of several assessors and/or stakeholders, taking from weeks to months to complete. This study can help direct future evaluators of integrated AMU and AMR surveillance toward the most adequate tool for their specific evaluation purpose.
Keywords: AMR; evaluation; integrated surveillance; one health; tools
Year: 2021 PMID: 34307513 PMCID: PMC8298032 DOI: 10.3389/fvets.2021.620998
Source DB: PubMed Journal: Front Vet Sci ISSN: 2297-1769
Overview of eight country-based case studies involving six different tools for evaluation of surveillance of antimicrobial use and resistance, 2019.
| Country | Evaluation tool(s) applied | Surveillance program/activity evaluated | Species/sectors included |
| Belgium | PMP-AMR and NEOH | Belgian AMR Surveillance Programme (as suggested in the Belgian National Action Plan) | Swine, veal calves, poultry (broilers/laying hens), and humans |
| Denmark | PMP-AMR, ATLASS, ECoSur, NEOH, and SURVTOOLS | Danish Integrated AMR Surveillance Programme (DANMAP)—selected parts | Pigs |
| Canada | ISSEP | Canadian Integrated Program for Antimicrobial Resistance Surveillance (CIPARS) | Humans, livestock, and food chain |
| Italy | NEOH, PMP-AMR, and SURVTOOLS | Italian ClassyFarm Surveillance Programme (data from the Piedmont region) | Pigs |
| Norway | PMP-AMR and NEOH | NORM-VET monitoring program for antimicrobial resistance in the veterinary and food production sectors (NORM-vet) | Broilers |
| The Netherlands | SURVTOOLS and NEOH | Monitoring of Antimicrobial Resistance and Antibiotic Usage in Animals in the Netherlands (MARAN) | Broilers, slaughter pigs, veal calves, and dairy cows |
| United Kingdom | ISSEP | Surveillance of AMU and AMR in the United Kingdom | Humans, livestock, and food chain |
| Vietnam | ECoSur | Surveillance of AMR in Vietnam | Humans, food products, and animals |
ATLASS, the Assessment Tool for Laboratories and AMR Surveillance Systems; ECoSur, Evaluation of Collaboration for Surveillance tool; ISSEP, Integrated Surveillance System Evaluation Project; NEOH, Network for Evaluation of One Health; PMP-AMR, The Progressive Management Pathway tool on AMR; AMU, antimicrobial use; AMR, antimicrobial resistance.
Description of the themes describing the scope of a tool in relation to surveillance, identified in the Co-Eval-AMR project and used for the additional assessment of the evaluation tools for surveillance programs/activities.
| Theme | Description |
| AMU and AMR | Questions that specifically address the case of AMR (occurrence, prevention, or response) or AMU (recording and management) |
| Collaboration | Questions on the framework of collaboration (organization of roles and responsibilities) and the object of collaboration (exchange of data, information, and knowledge and sharing of capacities). This category also covers questions about the inclusive participation of stakeholders (e.g., considering gender) |
| Resources | Questions quantitatively addressing human, physical, and financial resources. Questions on the training level of human resources are also considered in this category |
| Output and use of information | Questions on surveillance outputs that are provided to inform public and private stakeholders, their use to inform decision making, and the benefits from this use (expected, perceived, or measured) |
| Integration | Questions considering three levels of integration: |
| Governance | Questions related to the legislative framework as well as the steering and coordinating mechanisms for the surveillance system: legislation, steering, and criteria (limits and goals for reduction) |
| Adaptivity | Questions on any structural elements allowing for the surveillance system to adapt and evolve. This may include not only tools, plans, and agreements to evolve (e.g., continuous learning programs and external evaluation) but also the features of management and governance allowing for regular evaluation and adaptation of operations (e.g., frequency of meeting and regularity of progress reports) |
| Technical operations | Questions on technical features of surveillance operations (surveillance design, laboratory capacities, management of specimens, tests applied, data management, and analysis), their quality management (SOP, traceability), and the assessment of their performance (sensitivity and specificity) |
Governance was included as a separate theme in this study but is not a separate theme on the .
Result of the scoring of all six tools with respect to the 11 functional aspects, shown as a heat map (the number of times each tool was assessed is given in brackets). The scoring scale used was as follows: 1 = not covered (red), 2 = not well covered (orange), 3 = more or less covered (yellow), and 4 = well covered (green). The crude summary score in the last row is the sum of the 11 aspect scores for each tool (a brief computational sketch follows the table footnotes).
| User friendliness | 2 | 3 | 4 | 4 | 2 | 4 |
| Meets evaluation needs/requirements | 3 | 4 | 2 | 3 | 4 | 3 |
| Efficiency | 2 | 4 | 4 | 4 | 3 | 3 |
| Use of a step-wise approach to the evaluation | 3 | 2 | 4 | 4 | 3 | 2 |
| Overall appearance | 2 | 3 | 4 | 4 | 2 | 4 |
| Generation of actionable evaluation outputs | 2 | 4 | 4 | 4 | 3 | 2 |
| Allows evaluation of One Health aspects | 3 | 3 | 4 | 2 | 4 | 2 |
| Workability in terms of required data (1: very complex and 4: simple) | 2 | 3 | 1 | 4 | 2 | 3 |
| Workability in terms of people to include (1: many and 4: few) | 2 | 3 | 4 | 3 | 2 | 4 |
| Workability in terms of analysis to be done (1: difficult and 4: simple) | 2 | 4 | 4 | 4 | 3 | 3 |
| Time taken for application of tool (1: >2 months, 2: 1–2 months, 3: 1 week–1 month, and 4: <1 week) | 2 | 3 | 4 | 3 | 2 | 3 |
| Crude summary score | 25 | 36 | 39 | 39 | 30 | 33 |
Only scored by 11 of the 20 assessors.
Only scored by one of the two assessors of ISSEP. The scoring scale used was as follows: 1 = not covered (red), 2 = not well-covered (orange), 3 = more or less covered (yellow), and 4 = well-covered (green).
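The crude summary scores in this table (and in the eight-theme scoring table further below) are simply the column sums of the individual 1–4 scores, e.g., 2 + 3 + 2 + 3 + 2 + 2 + 3 + 2 + 2 + 2 + 2 = 25 for the first column above. The minimal Python sketch below illustrates that calculation; the aspect labels mirror the table, while the helper name `crude_summary_score` and the example values are illustrative placeholders rather than material from the paper.

```python
# Minimal sketch (not from the paper): recompute a tool's "crude summary score"
# as the sum of its 11 functional-aspect scores on the 1-4 scale
# (1 = not covered, 4 = well covered). Names and example values are illustrative.

ASPECTS = [
    "User friendliness",
    "Meets evaluation needs/requirements",
    "Efficiency",
    "Use of a step-wise approach to the evaluation",
    "Overall appearance",
    "Generation of actionable evaluation outputs",
    "Allows evaluation of One Health aspects",
    "Workability: required data",
    "Workability: people to include",
    "Workability: analysis to be done",
    "Time taken for application of tool",
]


def crude_summary_score(scores: dict) -> int:
    """Sum the 1-4 scores over the 11 functional aspects for one tool."""
    missing = [aspect for aspect in ASPECTS if aspect not in scores]
    if missing:
        raise ValueError(f"Missing aspect scores: {missing}")
    return sum(scores[aspect] for aspect in ASPECTS)


# Hypothetical example: a tool scored 2 on every aspect would total 22.
example_scores = {aspect: 2 for aspect in ASPECTS}
print(crude_summary_score(example_scores))  # 22
```

The same column-sum logic reproduces the crude summary scores of the eight-theme table later in this record (e.g., 4 + 4 + 2 + 4 + 4 + 3 + 2 + 2 = 25 for its first column).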
Results of synthesis of the underlying reasoning for the scoring according to the 11 functional aspects.
| User friendliness | Conceptual framework easy to follow. Evaluation(s) more complicated | Relatively easy to understand and could be improved with a web interface | Can be used without much preparation | Easy to understand and fill in without training | Complex without training, long/exhausting. Scoring OH attributes is relatively simple | Tool itself is easy to fill in, but more complex to conduct evaluations |
| Meets evaluation needs/requirements | Relationships of integrated surveillance activities/outputs described. No guidance on evaluation | Measurement of the level of collaboration, but not the overall added value of collaborating for surveillance activities | Predefined network is comprehensive, but measurement of smaller progressions not possible | Qualitative scoring system could be improved. Partially meeting needs for AMU and AMR evaluation(s) | Comprehensive, less intuitive to use for specific technical details/laboratory part | Epidemiological performance easiest to perform, other parts more difficult |
| Efficiency | Requires a lot of time to conduct evaluation(s) | Evaluation matrix easy to understand/apply. Validation meeting with stakeholder required | Questionable whether all data are really needed | Easy to fill in. Immediate generation of results. Suitable for administrators | Takes a long time to fill in tool. “Theory of change” (ToC) could be better integrated. Not a management tool | Takes some time to fill in the tool and longer time for evaluations |
| Use of a step-wise approach to the evaluation | The tool has five evaluation levels | Only possible to follow progress of collaboration if evaluation repeatedly done | Follows a step-wise approach with areas containing sub-categories reflecting the level of implementation and geography | Follows an inherent step-wise approach with four levels and logical progression. Level 1: planning of activity/locally; levels 2, 3, and 4: undertaking activities regionally/nationally | Stepwise approach to evaluation with the following steps: context description, initiative within context description, OH-ness, and ToC (outcome and impact). If evaluation of progress, repeated evaluations over time needed | Does not follow a step-wise approach. Order would be given by choice of evaluation question(s) and not by the tool itself |
| Overall appearance | The conceptual framework is well-presented | Well-structured, web platform needed | Useful for evaluation of AMU and AMR and residue surveillance at laboratory level | The general assessment part excellent, the sector specific less so. Nice layout, some parts could be improved | Extensive handbook. Excel tool is mostly understandable but too compressed in layout | Generates evaluation plan. Takes time to evaluate integrated surveillance. Objective results |
| Actionable evaluation outputs | No clearly defined actionable outputs | Generation of three graphical outputs of results: one for organizational attributes, one for organizational indexes, and one for functional attributes | Monitors progress and suggests next level | Actions can be agreed upon during assessment. Graphics could be improved. Gaps in sector evaluation | A web diagram makes it easy to identify gaps. Scoring is subjective: may lead to biased results | Not generated by tool. Evaluation could generate first-level actionable outputs (e.g., effect of designs). Other outputs on, e.g., awareness more difficult to obtain |
| Evaluation of OH aspects | Comprehensive | Existence of specific attributes measuring OH aspects, e.g., shared leadership | All sectors covered and measures integration | Not addressed in particular | Major strength of the system's approach and the tool | Can be used for all aspects. Layout does not support all components |
| Workability regarding required data (1: very complex and 4: simple) | Large amounts of data required | Dependent on the complexity of the surveillance system evaluated | Large amounts of data required | Apparently simple. Data are easily accessible | Requires effort/time to gather data. Some data complex to get (e.g., learning/system organization) | Relatively simple to get the data for filling in tool, but for some evaluation questions/objectives, it is complex to acquire the data |
| Workability regarding required people (1: many and 4: few) | Stakeholders from all sectors required | Meant to be applied by an evaluation team | Needs expertise from several areas | All stakeholders invited to evaluation meetings (2 days). One person can do evaluation, but then data capture needed (e.g., through interviews) | Interview of essential actors and stakeholders, but only one evaluator needed | Few people needed |
| Workability regarding analysis to be done (1: difficult and 4: simple) | No guidance on analysis provided | Easy identification of the criteria influencing the evaluation results to support formulation of recommendations | Automated analysis | Generated by the tool. Mostly yes/no answers to questions | Once tool is filled in, it provides support for analyses. Comparing ToC and scoring difficult | Dependent on the number and complexity of evaluation question(s) |
| Time (1: >2 months, 2: 1–2 months, 3: 1 week–1 month, and 4: <1 week) | Long time required for evaluation(s) | Dependent on the complexity of the surveillance system evaluated | If the assessor is experienced in surveillance or a detailed NAP report is available, takes relatively short time | Takes relatively short time | Filling in the Excel tool is relatively fast once you have the information ready. Defining the ToC and gathering data is time-consuming | Short time to fill in tool. Long time for some of the evaluation objectives/questions |
Synthesis of phrases provided in the SWOT analysis of six different evaluation tools used in eight country-based case studies.
| Like | Provision of a conceptual model for integrated surveillance of AMU and AMR | Comprehensive evaluation of collaboration; participatory evaluation; provision of clear guidance | Automated analyses; progress monitoring; easy to communicate results | Easy progress monitoring; participatory evaluation; evaluation of the implementation levels | Comprehensive and multi-faceted OH assessment; evaluation of implementation quality | Objectivity; comprehensive framework for different evaluation aspects |
| Difficulty | No provision of guidance to collect and analyze data | Evaluation of collaboration only | Unclear why such detailed data are needed | Subjectivity; crude scoring method | Cumbersome | Requirement of training for conducting evaluation; time-consuming for evaluation of complex aspects |
| Be aware of | Necessary combination with other tools depending on the evaluation question | Characterization and evaluation of integration regarding collaborative objectives and context | Not possible to measure minor progress of epidemiological performance | Complexity in terms of people to include; self-assessment tool; results not comparable across countries | Requirement of training for application; resource demanding | Provision of an evaluation plan only; not AMU and AMR specific |
| Not covering | Guidance for conducting evaluation | Surveillance performance | Environment and plant sectors specifically | One Health assessment; distinction between ongoing and incomplete activities; evaluation of quality of activities | Progress monitoring; surveillance performance | Laboratory aspects; One Health assessment |
Results of scoring of six tools for AMR surveillance evaluation according to eight themes describing the scope of the evaluation tools (the number of times each tool was assessed is given in brackets).
| AMU and AMR specific | 4 | 2 | 4 | 4 | 3 | 2 |
| Collaboration | 4 | 4 | 4 | 2 | 4 | 2 |
| Resources | 2 | 4 | 3 | 3 | 3 | 3 |
| Output and use of information | 4 | 3 | 3 | 3 | 3 | 2 |
| Integration | 4 | 4 | 3 | 2 | 4 | 2 |
| Governance | 3 | 2 | 4 | 4 | 1 | 2 |
| Adaptivity | 2 | 4 | 4 | 4 | 3 | 2 |
| Technical operations | 2 | 2 | 3 | 2 | 2 | 2 |
| Crude summary score | 25 | 25 | 28 | 24 | 23 | 17 |
Governance was included in this study by 9 of the 20 assessors (however, not a separate theme on the .
Synthesis of the underlying reasoning for the scoring according to the eight themes describing the scope of six AMR surveillance evaluation tools.
| AMU and AMR | Framework developed specifically for AMU and AMR | Not specific for AMU and AMR but can easily be applied to AMU and AMR | Designed for AMU and AMR and residues | Designed for AMU and AMR. Misses components besides farm animals | Not designed for this purpose but can be adapted (e.g., under “objectives of the initiatives”). Most, if not all, of these questions are expected to be included as part of elements 2 and 3 | Not developed for AMU and AMR |
| Collaboration | Allows evaluation of collaboration between the different organizations involved | Collaboration at the heart of the tool, e.g., across sectors/professions/disciplines | Between sectors, all actors, and all levels | Reporting, not data exchange. Participation of stakeholders/actors considered for institutions. Gender not considered. Promotes knowledge sharing | Collaboration included in all aspects (in element 1) | No particular guidance; difficult to understand how to evaluate the amount of collaboration |
| Resources | Questions not included, but data can be collected if economic analysis is part of evaluation | Financial aspects addressed in detail at different levels: planning, allocation, and use | Asks whether the budget is unlimited or limited | Only present in “governance” | Only covered in “planning” and “sharing” aspects of OH-ness evaluation. Focus on allocation: resources to achieve objectives of the initiative (human/physical/financial resources and training). The NEOH handbook includes a chapter on economic evaluation of OH | Generates a framework for economic evaluation. Epi-calculator available |
| Output and use of information | Allows evaluation of the outputs of integration and the impacts of integration on decision making and on health and economic outcomes | Allows conclusions about the appropriateness of collaborative activities for the expected collaborative outputs (e.g., improving the epidemiological performance). No quantification of impacts on the surveillance value or of costs | Intermediate-level outputs best addressed | Outputs evaluated (better than impacts), e.g., production of guidelines on prudent use of AM, data reporting to organizations. Not covered in “awareness” | Reveals gaps in OH and where the impact of the initiative being evaluated might be improved. Outcomes/impacts depend on the type of OH initiative and the boundaries of the contextual “system” and resulting ToC. Hence, the evaluator must take into account the appropriate parameters (data and disciplinary paradigms) | If a full evaluation is done, most of the aspects would be covered and impact/output might be possible to measure. Unclear how to measure intermediate outputs/impacts |
| Integration | Allows evaluating impacts of integration on decision making/health/economic outcomes | Assessment of the organization and functions of collaboration to achieve the desired level of integration, in coherence with the context | Addressed for many areas, not in depth | Questions on data reporting, adherence to international testing/data standards, level of knowledge, and shared decision making. Not across sectors | Integration measured on many levels, e.g., data integration in organizations at the national, regional, or international level, and systems interoperation between different sectors. International testing/data standards not included, unless included in the “initiative” being evaluated | Not included or advanced to evaluate |
| Governance | Partly considered when looking at the overall organization/management | Inclusion of many aspects: rationale and objective of collaboration, responsibilities of stakeholders, functionality of governance mechanisms, etc. | Addressed for many areas | Well covered; one main focus of the tool | Partially in the thinking and systemic organization of the OH-ness evaluation. The tool includes consideration of legislation and the National Action Plan, if the nation is identified as a dimension in the “system” | Not included, but some aspects might be covered if a process evaluation is conducted |
| Adaptivity | The tool does not cover this aspect | No monitoring of the progress of collaboration. Monitoring and evaluation of collaboration performance | Measures progress | Designed for measuring improvement | Can be assessed through repeated evaluations. If a dedicated process evaluation is done, the progress can be studied. Evaluator and framework design are “key” | Obtainable if evaluation is done twice (over time) to identify improvement |
| Technical operations | Includes questions on technical aspects, e.g., sampling/methodology | No evaluation of surveillance performance, even if evaluation of certain collaboration attributes is taken into account | Quality of epidemiological designs not covered | Includes questions on the targets of surveillance (e.g., pathogens). Low without ATLASS | Not among evaluation objectives. Includes a few questions probing for capacities/data handling. Could be part of the operations assessment of OH-ness, but the extent depends on the evaluator and the framework followed | Covers technical efficiency/performance; other laboratory aspects not guided/covered |