Literature DB >> 34898637

The impact of conducting preclinical systematic reviews on researchers and their research: A mixed method case study.

Julia M L Menon1,2, Merel Ritskes-Hoitinga1,3, Pandora Pound4, Erica van Oort5.   

Abstract

BACKGROUND: Systematic reviews (SRs) are cornerstones of evidence-based medicine and have contributed significantly to breakthroughs since the 1980s. However, preclinical SRs remain relatively rare despite their many advantages. Since 2011 the Dutch health funding organisation (ZonMw) has run a grant scheme dedicated to promoting the training, coaching and conduct of preclinical SRs. Our study focuses on this funding scheme to investigate the relevance, effects and benefits of conducting preclinical SRs on researchers and their research.
METHODS: We recruited researchers who attended funded preclinical SR workshops and who conducted, are still conducting, or prematurely stopped a SR with funded coaching. We gathered data using online questionnaires followed by semi-structured interviews. Both aimed to explore the impact of conducting a SR on researchers' subsequent work, attitudes, and views about their research field. Data analysis was performed using Excel and ATLAS.ti.
RESULTS: Conducting preclinical SRs had two distinct types of impact. First, the researchers acquired new skills and insights, leading to a change in mindset regarding the quality of animal research. This was mainly seen in the way participants planned, conducted and reported their subsequent animal studies, which were more transparent and of a higher quality than their previous work. Second, participants were eager to share their newly acquired knowledge within their laboratories and to advocate for change within their research teams and fields of interest. In particular, they emphasised the need for preclinical SRs and improved experimental design within preclinical research, promoting these through education and published opinion papers.
CONCLUSION: Being trained and coached in the conduct of preclinical SRs appears to be a contributing factor in many beneficial changes which will impact the quality of preclinical research in the long term. Our findings suggest that this ZonMw funding scheme is helpful in improving the quality and transparency of preclinical research. Similar funding schemes, preferably supported by a broader group of funders and financiers, should be encouraged in the future.

Entities:  

Mesh:

Year:  2021        PMID: 34898637      PMCID: PMC8668092          DOI: 10.1371/journal.pone.0260619

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Keeping up to date with the health/medical literature can be challenging due to the vast number of new articles published every year. Scholarly peer-reviewed journals produce over 3 million articles annually, with a 56% increase over the last decade [1,2]. Consequently, researchers, policymakers and healthcare providers require a way to systematically identify and evaluate literature on a specific topic. For 40 years, systematic reviews (SRs) have provided a powerful way of achieving this within clinical research. SRs follow clear and defined steps, and aim to provide a synthesis as well as a critical assessment of all the available relevant evidence [3]. Such syntheses enable researchers to identify research gaps and help future decision making in practice and policy. The methodology of SRs emerged from the evidence-based medicine paradigm of the 1980s [4,5]. Today, they are cornerstones of evidence-based medicine, with 30,000 SR protocols being registered as of 2017 and a 2014 estimate putting the number of published SRs at over one million [6,7]. However, despite their advantages, SRs are struggling to achieve similar status in the preclinical field (i.e., fundamental and applied animal studies, in vitro and ex vivo studies before clinical research) [8]. A lack of knowledge, skills, or awareness of their value may be to blame. The first preclinical SRs began to appear in the early 2000s, a decade or so after the clinical SR standards were established [9]. Despite the slow start, preclinical SRs have been shown to increase transparency, avoid unnecessary duplication and help identify and improve poor reporting and poor study design [8,10,11]. They can be used retrospectively to cast light on clinical trial data or prospectively to prepare for new clinical and preclinical studies, and they may be used to guide future (translational) research [8,10,11]. Interest in preclinical SRs is slowly growing within academia and amongst other stakeholders [12-16].
Since 2011, the Netherlands Organisation for Health Research and Development (ZonMw) has invested in education and coaching in preclinical SRs through their funding programme “More Knowledge with Fewer Animals” (Meer Kennis met Minder Dieren in Dutch or MKMD) [16]. The main goal of this funding programme is to promote and implement the adoption of animal-free research methods by funding new animal-free (human based) innovations and encouraging the use of already existing alternatives. The programme consists of several modules, each with a specific focus [17]. The Knowledge infrastructure module focuses on providing preclinical SR education and coaching via workshops and training, and on promoting open access publishing of (negative or neutral) results [17]. Since its formation, 22 SR workshops have been organised within the Netherlands (over the period 2013–2020) and many participants have subsequently enrolled to conduct a coached preclinical SR. While the benefits of clinical SRs are beyond doubt, the impact of conducting a preclinical SR on researchers and their research field remained unknown. Hence, to investigate these effects, we evaluated the impact of ZonMw funded preclinical SRs on researchers and (their) research.

Research questions

Our study aims to assess the impact of conducting preclinical SRs on researchers, their research and their field through the following questions: 1) What impact does conducting a preclinical SR have on a researcher in terms of planning, conducting, reporting and appraising their research projects? 2) What impact does conducting a preclinical SR have on a researcher’s views about research, their own research, and their field more generally? 3) What impact do preclinical SRs have on preclinical research in general (e.g., quality, reproducibility, transparency, accessibility)?

Theoretical framework

This project is a research impact case study aiming to evaluate the impact of an intervention (conducting preclinical SRs) on individuals (researchers), products (their research), and environment (their research field). Therefore, it relies on theories and knowledge of research impact assessment. An in-depth description of our theoretical framework can be found in the original ZonMw internal report made for this case study, available at: https://www.zonmw.nl/nl/actueel/nieuws/detail/item/systematisch-literatuuronderzoek-vervangt-vermindert-en-verfijnt-proefdieronderzoek/. Briefly, research impact assessments rely on the evaluation of research impacts, defined as “contributions that research makes to the economy, society, culture, national security, public policy or services, health, the environment, or quality of life, beyond contributions to academia” [18]. These evaluations tend to be conducted to address one or several of the 4As: Advocacy, Accountability, Analysis and learning, and Allocation [19,20]. Overall, research impacts will lead to demonstrable and beneficial changes in behaviours, beliefs, and practices [21]. Consequently, these evaluations usually assess all directions and categories of impacts in a structured and high-quality manner [22]. This may include the use of complex, time-intensive frameworks, including impact pathways to identify or foresee how research projects create (expected) impacts, for example the societal or environmental impact of a research project [23,24]. Since impact evaluations began to be conducted by universities in the 1970s and 1980s, a plethora of frameworks has been proposed, with different scopes, aims, and assets [23-25]. Each proposes a specific approach to assessing research knowledge and research quality, and to measuring impacts [26].
Well-known frameworks include the Canadian Academy of Health Sciences Preferred Framework, the National Institute of Health Research Dashboard, or the Excellence in Research for Australia [24,27,28]. Such large endeavours were considered out of scope for our current project. Therefore, we chose two smaller frameworks to structure our study, namely 1) the research impact framework and 2) the behaviour change wheel. The research impact framework was developed in 2007 by Kuruvilla et al. as a checklist to guide researchers in selecting and evaluating the impacts of their work and interventions [29]. It highlights four areas of impact: 1) research-related impacts, 2) policy impacts, 3) service impacts, and 4) societal impacts. We focused on the first area and designed our study using the seven categories it provides, which range from “providing data about a problem”, “study replication”, “innovation of new methods” to subtler impacts such as “becoming a member of a scientific society”. All seven categories are available in detail in S1 Appendix. The behaviour change wheel, created by Michie et al. in 2011, presents the idea that behavioural change occurs as a result of good pre-dispositions, interventions, and policies (which enable or support the particular intervention and are created [in part] by responsible authorities) [30] (See S2 Appendix). These good pre-dispositions are capability, opportunity, and motivation as supported by the COM-B system—a behavioural change theory [30,31]. Within the interventions covered by this framework, three types corresponded to our case study: education, training, and enablement. As a result, we hypothesise a pathway by which our intervention (conducting preclinical SRs) in a set context (supported by ZonMw’s workshops, coaching and funds) would create impacts (for researchers and their research) (Fig 1).
Fig 1

Visual representation of the current hypothesis on impacts.

Materials and methods

This study complies with the standard for reporting qualitative research and the consolidated criteria for reporting qualitative research [32,33]. Checklists are available in S3 and S4 Appendices.

Investigator characteristics and reflexivity

The primary investigator is a research assistant with a Master’s Degree in Science, Innovation and Management (Radboud University). She has experience in preclinical SRs and qualitative research via her Master’s Degree and two long-term Master’s internships. She had no relationship with any of the participants and had not received the training or coaching provided by the funding scheme under study, although she had attended one of the workshops. Participants were not aware of her personal goals or motivation for conducting this research project. Her interest in preclinical SRs and qualitative research led her to conduct this project. She was supported and/or supervised by a research fellow with a background in ethics and philosophy, a professor in Evidence-Based Laboratory Animal Science and a ZonMw programme manager, all of whom helped create a robust framework for the project.

Context

This study uses a mixed-method approach combining questionnaires and semi-structured interviews. We focused on researchers who had followed ZonMw’s workshops and who (had) received coaching to perform their own preclinical SR. This enabled us to get a snapshot of the preclinical SR field and fairly evaluate the experience of these researchers within a given time period.

Sampling strategy

We used purposive sampling to select our participants. The target population for the questionnaires comprised researchers who had participated in a ZonMw workshop and who had either started (currently conducting or prematurely stopped) or completed a preclinical SR with funded coaching. Those who had completed their SRs were approached to take part in an interview to evaluate the impact of the SR on their subsequent research.

Recruiting participants

Participants were recruited by e-mail and received a reminder two weeks after the first invitation (both the invitation and reminder are available in S5 Appendix). All e-mail addresses were obtained from the information given during coaching. Unfortunately, some e-mail addresses were inactive, and those participants could not be traced despite extensive efforts. Due to the nature of this project, we did not aim to achieve data saturation but rather to collect as much data as possible within the given timeframe.

Data collection methods

The questionnaires and semi-structured interviews were considered complementary: the questionnaires provided quantitative information about the impacts of preclinical SRs, while the interviews provided more in-depth data. Data collection took place over a period of 6 weeks (23/07/2020–04/09/2020), with the online questionnaires being available for one month (23/07/2020–23/08/2020). Data analysis took almost one month (24/08/2020–18/09/2020).

Online questionnaires

We designed and uploaded the questionnaire onto “Questionpro” (https://www.questionpro.com/), a free questionnaire platform. Our questionnaire consisted of both closed (dichotomous, multiple-choice or scaled) questions and open-ended questions. For rating questions, seven-point Likert scales were chosen (to avoid the bias created by using five-point Likert scales), with the extremities being “completely disagree” and “completely agree” [34]. The participants were divided into two groups: the “SR completed group” and the “SR started group”. The groups were kept separate by skip logic. For the former group, five categories of impact were addressed: 1) designing and planning experiments, 2) writing manuscripts, 3) appraising research, 4) skills gained as a result of conducting (steps of) the SR, and 5) experience with conducting (steps of) the SR (including, but not limited to, publishing experiences and the wish to perform further SRs). For the latter group, only points 3, 4 and 5 were evaluated. At the end of the questionnaires, researchers in the “SR completed” group were also invited to participate in a semi-structured interview on the same topic. The full questionnaire is available in S6 Appendix.
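The group routing described above can be sketched in code. This is purely illustrative: the section labels and function below are hypothetical, since the actual Questionpro skip-logic configuration is not published.

```python
# Hypothetical sketch of the questionnaire's skip logic: the "SR completed"
# group sees all five impact categories, the "SR started" group only
# categories 3, 4 and 5. Labels are invented for illustration.
COMPLETED_SECTIONS = [
    "designing_and_planning",   # category 1
    "writing_manuscripts",      # category 2
    "appraising_research",      # category 3
    "skills_gained",            # category 4
    "experience_with_sr",       # category 5
]
ONGOING_SECTIONS = COMPLETED_SECTIONS[2:]  # only categories 3, 4 and 5

def sections_for(sr_status: str) -> list[str]:
    """Route a respondent to the question sections for their group."""
    if sr_status == "completed":
        return COMPLETED_SECTIONS
    if sr_status == "ongoing":
        return ONGOING_SECTIONS
    raise ValueError(f"unknown SR status: {sr_status!r}")
```

For example, `sections_for("ongoing")` yields only the appraisal, skills and experience sections, mirroring the skip logic that kept the two groups separate.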

Semi-structured interviews

Researchers willing to participate in an interview were sent an informed consent form by e-mail and were given the opportunity to ask questions before signing (the form is available in S7 Appendix). They were assured that they could withdraw from the study at any point without any consequences. The informed consent form was based on the World Health Organisation informed consent form template for qualitative studies (https://www.who.int/ethics/review-committee/informed_consent/en/). The semi-structured interviews lasted about one hour and were conducted by teleconference using GoToMeeting. Only the participant and the researcher were present on the call. Interviews were structured using a list of open questions on the participant’s experience with their SR and the impacts they felt it had on their research, attitudes, and research field (the interview guide can be found in S8 Appendix). Field notes were written during and/or after the interviews and were anonymised.

Data processing & data analysis

Questionnaire data were exported from Questionpro to Excel. The “SR completed” and “SR ongoing” groups were analysed separately. Data analysis consisted of frequency counts for closed-ended questions (with calculations of medians and means), while open-ended questions were subject to content analysis. All analysis was conducted by one reviewer (JMLM). The interviews were video recorded via GoToMeeting, converted into mp3 files using VLC player, and subsequently transcribed verbatim using Express Scribe Transcription Software. All recordings were anonymised; random numbers were acquired from www.random.org. Transcripts were not returned to participants for comments and/or corrections because we did not want to bias their first answers (as this evaluation was performed for a funding agency). Thematic analysis of the transcripts was conducted by one reviewer (JMLM) in ATLAS.ti (Version 8.4.15.0), with some themes emerging from the data and some deriving from the research impact framework [29,35]. The coding tree and code organisation can be seen in S9 Appendix. Participants were not given the opportunity to provide feedback on either the questionnaire or the interview findings.
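The closed-question analysis (frequency counts plus medians and means) can be sketched as follows. The authors used Excel, so this Python version is an assumption for illustration only, and the sample responses are hypothetical.

```python
# Illustrative sketch of the closed-question analysis: frequency counts
# plus medians and means for 7-point Likert responses. This is not the
# authors' actual Excel workflow, just an equivalent computation.
from collections import Counter
from statistics import mean, median

def summarise_likert(responses: list[int]) -> dict:
    """Summarise 7-point Likert responses (1 = completely disagree,
    7 = completely agree)."""
    return {
        "frequencies": Counter(responses),  # frequency count per answer
        "median": median(responses),
        "mean": round(mean(responses), 2),
    }

# Hypothetical answers of seven respondents to one statement:
summary = summarise_likert([6, 5, 7, 4, 6, 5, 6])
# As in the paper's figures, a median of 5 or above indicates agreement.
```

The median is the headline statistic reported in the paper's figures; the frequency counts correspond to the per-answer breakdowns in the S11 Appendix.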

Ethical concerns pertaining to human subjects

According to Dutch law, research involving humans must be reviewed by the Central Committee on Research Involving Human Subjects or a Medical Research Ethics Committee if the study is subject to the Medical Research Involving Human Subjects Act [36]. Questionnaire research does not fall within this act and does not require ethical review, unless the questions are burdensome or intimate, or completing the questionnaire is time-consuming [37,38]. In our case, participants were not patients, children, or vulnerable persons, and the topics addressed did not relate to their health, traumatic events or sensitive matters [39]. Furthermore, the time required to answer was short (maximum 15 minutes). The topics addressed in both the questionnaires and the interview guide posed no risks to the participants, and in particular no risk of physical or mental harm. For these reasons, we did not seek approval from an Institutional Review Board. In addition, we took several measures to ensure anonymity, confidentiality, and privacy, and obtained informed consent, complying with both qualitative research standards and the data protection act of 2018: all data were anonymised and assigned a random identification code; as noted above, an informed consent form was signed by interviewees; interview recordings were deleted 16 weeks after the interview; any mention of the participants’ names, institutes, or any indicators that could threaten anonymity was omitted from the transcripts; and only the primary investigator (JMLM) had access to the unblinded data.

Techniques to enhance trustworthiness

Thorough piloting was performed for both the questionnaires and the semi-structured interviews by seven researchers knowledgeable about SRs and from a variety of backgrounds and career levels, namely research assistants (n = 2), a PhD student (n = 1), and professors (n = 4). The main investigator and (most of) the co-authors jointly took part in the planning and design of the questionnaires and interview guide. In addition, we performed methods triangulation by using both questionnaires and interviews to answer the same questions, which increases the trustworthiness of our findings.

Results

Response rate

Of 99 potential participants, we were able to contact 95. Sixty-one participants started our questionnaire, and 45 completed it (i.e., 16 drop-outs), giving a response rate of 47.4% and a completion rate of 73.8% (definitions of response and completion rate can be found here [40]). An overview of participants per phase is available in Fig 2.
Fig 2

Number of participants per phase.

Abbreviation: SRs: Systematic Reviews, NR: Not reported.

Of the 61 participants, 36 belonged to the “SR completed” group (i.e., they had published or submitted a manuscript of their preclinical SR), while 18 belonged to the “SR ongoing” group (i.e., still in the process of conducting their SR). Two participants did not complete their SRs due to time constraints, while the 5 remaining participants terminated the questionnaire before answering the question about the state of their review. The 61 participants were therefore divided as follows: SR completed (n = 36), SR ongoing (n = 18), SR stopped (n = 2), and NR (n = 5). Ten participants agreed to participate in interviews but only eight interviews were eventually conducted due to the unavailability of two researchers.
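The recruitment figures above can be cross-checked with a short script. All counts come directly from the text; the rate definitions (completers divided by contacted, and completers divided by starters) follow the cited reference [40].

```python
# Cross-check of the reported recruitment figures. All numbers are taken
# from the text; only the arithmetic is added here.
contacted = 95   # of 99 potential participants, 95 could be contacted
started   = 61   # participants who started the questionnaire
completed = 45   # participants who completed it (i.e., 16 drop-outs)

response_rate   = round(completed / contacted * 100, 1)  # → 47.4 per cent
completion_rate = round(completed / started * 100, 1)    # → 73.8 per cent

# Group split of the 61 starters, as reported:
assert 36 + 18 + 2 + 5 == started  # completed + ongoing + stopped + NR
```

Both computed rates match the 47.4% and 73.8% reported in the text.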

Questionnaires

An overview of the questions and the number of respondents for each question can be found in S10 Appendix. The (anonymised) results of the questionnaire are available in S13 Appendix.

Impact on planning, designing, and writing research projects

Within the SR completed group (n = 36), 14 participants went on to perform primary animal studies after completing their SRs, 5 performed preclinical research using alternatives to animals, 6 performed clinical studies, and 6 moved to meta-research or ceased research. The remaining five participants did not complete this part of the questionnaire. All participants who performed animal studies after their SRs answered questions about planning, designing, and reporting these studies. Most respondents agreed with the statements posed, as illustrated by the median of each question (Figs 3 and 4).
Fig 3

Median answers on planning and designing subsequent research after conducting a preclinical SR.

This graph shows medians per question of a 7-point Likert scale. On this scale, 1 corresponded to completely disagree and 7 to completely agree. Thus, medians with values of 5 or above indicate agreement with the statements. Number of participants (n = 11).

Fig 4

Median answers on writing subsequent research after conducting a preclinical SR.

This graph shows medians per question of a 7-point Likert scale. On this scale, 1 corresponded to completely disagree and 7 to completely agree. Thus, medians with values of 5 or above indicate agreement with the statements. Number of participants (n = 10).

We found that conducting preclinical SRs impacted the way participants planned their future studies, in that they made more use of planning guidelines, increased the time allocated for planning, and made greater use of power calculations to ensure statistical validity. It also helped them avoid unnecessary duplication. Moreover, conducting a preclinical SR impacted the way participants performed their subsequent animal study, for example in terms of the animal model and intervention chosen, and the use of blinding and randomisation. Similar results can be seen with regard to reporting: conducting preclinical SRs strongly impacted the quality of reporting, including, but not limited to, increasing the use of reporting guidelines and the appropriate reporting of animal characteristics, housing conditions and methods (e.g., blinding and randomisation). Altogether, the results suggest that conducting a preclinical SR improved the quality of participants’ subsequent animal studies through better and more thoughtful planning and conduct, as well as improved transparency as a result of better reporting of information and methods. (Detailed answers for each question can be seen in Fig 11.1 and 11.2 in S11 Appendix.)

The SR ongoing group (18 participants) answered less detailed questions about their future research. However, all felt that conducting their preclinical SR would influence the way they would conduct and report their next experiment. Some suggested it would impact the model they would choose, the design of future in vitro studies, or would highlight data gaps for clinical research. Some participants felt there would be an impact on their field too.
The following quotes illustrate these themes:

“Through the systematic review, we understand that the quality of currently published animal study reports is very poor, in particular, reporting randomisation and blinding. This reminds us to pay more attention to those aspects when designing, implementing and writing our own study.”—Respondent 75156050

“This study aims at investigating the best animal models that result in a similar outcome to human X. […] Based on the findings of this study, I’d use the most robust models that will result in the reduction of the use of animals. I’m also confident that people who read this article will employ a similar approach and modify the design of their studies, thus overall, I think this study will have a big impact in the field”–Respondent 76823076

Of the five people who performed research projects using non-animal methods, two affirmed that their SRs had positively influenced their choice of approach. The other three had already decided to use a non-animal approach prior to conducting their SRs. For the six who performed clinical research projects, three stated that the SR had impacted their clinical research; one with respect to the questions they chose to address, another in terms of translating preclinical to clinical research, and the third in terms of highlighting the need to investigate the role of a specific mutant phenotype.

Impact on research appraisal

All participants answered questions about appraising research, regardless of the stage of their SRs. These questions aimed to determine the effect of conducting a preclinical SR on the way they appraised research. Overall, the results were comparable across groups (Fig 5). Furthermore, few participants disagreed with the statements (data in Fig 11.3 in S11 Appendix). Our findings suggest that participants’ critical stance was sharpened by their experience with the preclinical SR, both for reading and for assessing papers or research projects, regardless of where that research came from.
Fig 5

Median answers on appraising research and critical stance after conducting (part of) a preclinical SR.

This graph shows medians per question of a 7-point Likert scale. On this scale, 1 corresponded to completely disagree and 7 to completely agree. Thus, medians with values of 5 or above indicate agreement with the statements. The total number of participants was 31 for the “Completed SR” group and 18 for the “Ongoing SR” group.


Skills

Participants from both groups stated that they gained new skills or improved existing skills in the process of conducting their preclinical SRs. Some of these skills are exclusive to research (e.g., meta-analysis skills, academic writing, critical appraisal), while others go beyond research (e.g., collaboration, negotiating with editors, interdisciplinary working). We identified and categorised four main types of skills: 1) research skills directly linked to the SR stages, 2) research skills for planning/conducting subsequent research, 3) critical appraisal, and 4) interpersonal skills (S12 Appendix).

Experience with their preclinical SRs

Reviews (and review steps) often took longer than expected (26/30 participants in the completed group and 8/15 in the ongoing group). For instance, researchers often found the number of studies to screen, extract data from, or analyse overwhelming, but also struggled to get their protocols, manuscripts, or revisions reviewed for publication. Some had to wait for second or third assessors/reviewers and experienced delays when asking authors for further information. Three participants (both groups combined) also mentioned the COVID-19 pandemic as a delaying factor. Even though many participants experienced extensive delays while conducting their reviews, most considered it likely that they would conduct another SR or would advise colleagues and peers to conduct a preclinical SR. In numbers, 34/46 participants mentioned that they would be likely to conduct another SR (18 “yes” and 16 “maybe”), while 45/46 would potentially recommend that colleagues do their own SRs (37 “yes” and 8 “maybe”). Eight participants from the completed SR group had already conducted a second SR after completing their first coached review, illustrating how useful they considered this. Additionally, in the SR completed group, 20/28 participants reported that they would like to receive similar coaching for their next review. Fifteen of these wanted a more personalised approach, with a focus on certain stages of the SR, while 5 wanted coaching for the whole process.

Interviews

The interviews afforded greater insight into the sort of impact that conducting a preclinical SR has on both researchers and their research, as well as how this happens. We identified that impacts occurred in two steps and at three levels (Fig 6).
Fig 6

Process and steps by which conducting preclinical SRs impacts researchers and research at different levels.

First, impacts occur at the level of the researchers: conducting the SRs contributes to an evolution in participants’ thinking, providing them with important insights and skills. This first step leads researchers to step back from their usual point of view and triggers a realisation of specific issues and shortcomings in their own work, research field, and science community. Second, once this new mind-set has developed, impacts on research can occur. Researchers actively modify their activities 1) relative to their own work (lab level), 2) in how they appraise their field and advocate for change (field level), and 3) by promoting change on a broader scale (science community level). Both the steps and the levels of impact are further explained and illustrated with empirical data in the upcoming sections.

Step one. Impacts on researchers: A change in mind-set

Interviewees were from various academic backgrounds and fields. However, they achieved similar insights, skills, and realisations as a result of conducting their preclinical SR. Even though most respondents were already aware of the reproducibility crisis, conducting their review brought them face to face with poorly designed and/or poorly reported studies. This triggered an understanding, not only of the poor quality of animal studies in their field, but also of their own past mistakes:

“I was quite shocked how poorly designed some studies are, while still being published in a high impact journal”–ZonMw72

“I had two medical students, and they helped me, also during the risk of bias, and they were like ‘how researchers can be so stupid, they are so stupid’, and I was like ‘well, I think I did this too’.”–ZonMw23

The realisation occurred for most participants during the risk of bias assessment and data extraction stage, as many mentioned they were quite ‘frustrated’ due to unclear or missing information. This eye-opener on quality was confirmed when participants were unable to interpret studies properly or when these poor-quality publications impaired their final analysis.

“We had a lot of data, the amount was more than enough but because the quality, I would say, of the studies is pretty low and sometimes you can’t find the right information, it’s harder to draw conclusions.”–ZonMw57

As a result, participants reflected on their past flaws and learned how to prevent such mistakes in the future. For some, these issues made them rethink their activities in the preclinical field, their reasons for using animals and how they performed experiments:

“There’s a lot of bias that could be introduced in a model and having done it myself, I really realised that it’s really pitiful of doing these experiments if you don’t do them in the proper way.”–ZonMw84

“It made me really more aware of why you [would] want to use animals and in what way. And even though in my own research I would want to do it in a good way, I saw that we also have flaws, and it made me more aware of what you’re actually doing when you’re doing animal research”–ZonMw57

Despite insufficient quality or lack of evidence, most participants were able to draw conclusions and gain insights. We organised these insights under the four following categories: 1) increased awareness of their own field, 2) identification of data gaps and topics for new primary studies, 3) discovery of heterogeneity and incoherencies, and 4) discrediting or confirming theories.

“You retrieve papers that you were not aware of even though you’re quite familiar with literature on the topic, you will always find new things”–ZonMw80

“I really have more, much more transparent ideas about what studies are missing, and need to be done”–ZonMw23

“What was striking for this model is that everybody in the field was using the model in a different way.”–ZonMw84

“We did encounter unexpected findings, which we couldn’t even explain why it was the case. And it really contradicts some basic theories of (field of interest), of how our intervention works.”–ZonMw91

Overall, participants were convinced about the value of preclinical SRs, and were positive about their experiences with the funding scheme, including the coaching. Moreover, many participants admitted that they had underestimated the difficulty of the SR process and were glad to have received proper guidance.

“I had underestimated really the efforts needed to make a strong and a good systematic review”–ZonMw72

“I really felt I needed this help”–ZonMw23

Step two. Impact on research (lab level)

Research impacts were identified within the participants’ teams and labs and were categorised as follows: 1) direct impacts on experiments (planning, conducting, reporting and appraising research; executing other types of research; switching fields; conducting meta-research), and 2) advocacy for better research within their teams, promoting networking and collaboration. The first category of impact was frequently mentioned in the questionnaires. Similar outcomes were reported in the interviews, namely that the planning, conduct and reporting of research projects improved after (or while) conducting the SR. The participants were unanimous about the change and progress they experienced thanks to their review:

“I was just much more mindful about the blinding, randomisation, the sources of bias. We put enormous amount of efforts into doing that properly”–ZonMw91

“A lot of details would be missing if I hadn’t done this (systematic review), cause now I know how much is missing in the studies”–ZonMw23

Aside from conducting new experimental research differently, participants’ SRs fuelled ideas for other types of studies in meta-research and non-animal approaches. For some, the SR was a game-changer that led them to switch fields. Some now pursue their careers in human research, some in meta-research (where they focus on SRs, meta-analyses, or the development of further tools) and some have distanced themselves from academia to work towards improving the system:

“Step by step, you can do a lot of research in humans that is also very helpful. I did a lot of animal experiments and yeah (pause). I’m not going to do that anymore; I’m not going back to animal experiments”–ZonMw84

“I realised then that open science is what I’m interested in. That’s also why I wanted to switch to a different working environment. Not doing research myself anymore that much but actually working on trying to improve the system”–ZonMw57

The second category of impact emphasises our participants’ enthusiasm about sharing their newly acquired knowledge with colleagues and peers. Some passed on their knowledge by teaching their new skills to students or by advising colleagues on how to improve their research, as the following quotes show:

“I had this little extra time to teach students how to do it, and it becomes part of their package as well. Because in reality there’s nothing about this type of science in education, and that’s such a shame. Experimental design, statistics, and so, it’s completely missed in education, and I feel that’s what I want to teach my students, how to do good science”–ZonMw91

“I also used this to give presentations and make my colleagues at the department aware of how they should perform animal experiments. So, I try to implement this guideline of ARRIVE also in our department.”–ZonMw84

However, such improvements at a team or individual level may require changes from higher up in organisations. Several interviewees mentioned how supervisors’ lack of familiarity with SRs could impede their execution and the transfer of skills and knowledge within a team. The fact that some professors and supervisors had realised the value of SRs by the time the review was completed does, however, provide hope.

“I have an assertive personality. […] But many other people that are not as assertive might just not do a systematic review in the first place because of how many criticisms there is from the more managerial position in the department. Meaning professors themselves don’t understand, then why should a PhD student do it?”–ZonMw91

Resistance to change remains in the lab, both towards the acceptance of preclinical SRs and towards other new methods that can improve research quality. Lab conventions, resistance from colleagues and supervisors, time pressure, funding and competition were the main obstacles experienced by the participants.

Step two. Impact on research (field level)

Our participants impacted their field with their preclinical SRs in two important ways: 1) by disseminating their findings and 2) by promoting the value of SRs for improving the quality of research, for example by writing an opinion paper. The usual way of disseminating results is via publications and conferences. Interviewees’ experiences highlighted that getting preclinical SRs published can be challenging. Some participants had difficulties finding journals that would accept their SRs or, once found, faced repeated rebuttals. Moreover, the opinions of peer reviewers were quite influential in the editorial process. Responses about peer reviewers’ input were mixed; some participants had a generally negative experience with peer reviewers who did not understand the SR process or the value of SRs, while others received beneficial input that improved their SRs:

“This is I think the biggest frustration of the whole process; that the journals and the reviewers really underestimate the significance, the work, that is in it. […] they said no ‘you have to do a letter to the editor, it’s not worth an original article’”–ZonMw23

This dissonance directly echoes the lack of acceptance, support and education regarding preclinical SRs mentioned earlier in “Step two. Impact on research (lab level)”. Nevertheless, once in the open, preclinical SRs can influence a field (see Box 1). Of course, publication does not ensure that peers will read or accept the SR, or implement the recommended changes. Some participants had frustrating experiences, discovering that the insights from their SRs appeared to have had no impact on their field to date.

Box 1: Example of field impacts

EXAMPLE 1 • A participant discovered that the mode of drug administration significantly impacted the results of an intervention, both in clinical and in preclinical experiments. With this issue identified, there could be a change in how this particular intervention is administered across the whole field.

“With regard to the patient’s studies, was that everybody just depleted these XX and saw what happened to XX, and then said, ‘oh XX are bad’ or ‘oh XX are good’. But our review made I hope, make people realise when it gets published that it totally depends on you administer your agent to deplete your XX. So, you can’t really say ‘oh they are good’ or ‘they are bad’ because it depends totally on how you do it.”–ZonMw23

EXAMPLE 2 • A participant’s SR triggered a whole series of new SRs in the same direction as their own; as a result of showing the value and insight a preclinical SR can provide, other researchers in their field began performing their own preclinical SRs.

“People really loved the paper and found it really informing to have a first good overview. And then, multiple research groups actually ‘copy pasted’ our paper, but put another cell type than the cell type we used […] for the research field it was good, because everybody started to systematically assess all the evidences for certain cell lines”–ZonMw15

To press for change, certain participants chose an innovative route: some wrote opinion papers or letters to reach peers and editors, while others took the matter a step further by creating tools (e.g., apps and websites) specific to their field.

“We wrote a paper in a XX journal for animal technicians, and describing all the differences that everybody does with the model. […] also concerning housing and all kinds of things that are important for animal studies. […] I also wrote a paper for the XX organisation, about the use of animals and the 10 sorts of pitfalls, mistakes about experimental models”–ZonMw84

Step two. Impact on research (science community level)

Impacts at a broader level were also identified, i.e., increased awareness and acceptance of preclinical SRs, which could potentially improve the overall quality of preclinical research. These impacts included 1) influencing stakeholders at a higher level and changing the status quo, 2) increasing training for researchers, and 3) improving and investing in the education of the next generation of researchers. Observing that their SR findings did not bring about changes in their field, certain participants would have liked to pursue the issues at a higher level, for example by involving ethical committees and funders. It was suggested that preclinical SRs could become a requirement prior to performing an animal study and that this could be directed/checked by animal ethical committees. Funders could also be involved by emphasising the value of grant applications that include systematic overviews. As noted above, questionnaire respondents were likely to recommend SRs to their colleagues and peers. This observation was confirmed in the interviews, with participants recommending that their peers or colleagues should receive proper training (via lectures, courses, and/or workshops) in the same way that they had. Participants made many suggestions about promoting preclinical SRs in their field, including advertising on websites, in universities, on social media and at conferences, and by providing financial support. It was suggested that more investment in education would be beneficial in terms of promoting the value and acceptability of SRs, since they do not appear to be sufficiently understood or employed in many fields. Finally, participants reflected on the younger generation of scientists, suggesting that preclinical SRs and related training would be beneficial for new researchers.

“I think training is most powerful, because it’s difficult to change the behaviour of the old researchers […]. But the young researchers you can definitely train, they can become knowledgeable in this field.”–ZonMw72

Discussion

Our findings have highlighted how conducting preclinical SRs impacted both research and researchers as a result of the ZonMw funding scheme, MKMD “Knowledge infrastructure” module. Both questionnaire and interview data indicated that conducting preclinical SRs provided researchers with awareness, skills, and insights that 1) were used and promoted in the planning, conducting and reporting of their subsequent projects, 2) influenced their critical stance when appraising research, and 3) were used and promoted in their research fields (e.g., teaching, dissemination of results, advocating for more transparency). Ultimately, these impacts could lead to more transparent, high-quality research. To our knowledge, this is the first time qualitative research has been employed to identify the research impacts of preclinical SRs. At the lab level, we saw that conducting preclinical SRs not only provided insights and skills but also contributed to long-lasting behavioural changes amongst our participants. Furthermore, most participants were willing to improve their work and advocate for change in their teams (via support and teaching). We might extrapolate that, if properly supported, coached, and empowered, a significant number of researchers might re-evaluate their research practices and follow the same path of reflection as our participants. This could lead to more teams wishing to improve knowledge transfer within their labs and among students, creating an SR feedback loop. Moreover, if universities invest in teaching about SRs and meta-analyses (e.g., in specific courses such as the legally required laboratory animal science courses for researchers wishing to perform animal studies (Function B course under EU Directive 2010/63/EU; Article 9 course in the Netherlands)), as suggested by our participants, this type of knowledge would become more universally accepted and might raise standards and expectations within the entire preclinical field.
However, knowledge transfer both within and between teams depends on several factors and antecedents, which can both promote and impede the dissemination of knowledge [41]. A legal framework and incentives from external stakeholders are needed to sustain this desire for improvement [30]. Given this context, incentives such as the MKMD funding scheme appear to provide appropriate support for initiating and sustaining improvements. Additionally, there are further external sources that could exert influence in this area. One of these is the ethical committee process. Both animal and human ethical committees could demand SRs prior to the conduct of new animal or human studies. Similar demands for transparency are happening with the pre-registration of preclinical studies, the value of which is increasingly acknowledged [42-44]. Other funding and regulatory bodies could further encourage the adoption of preclinical SRs by providing appropriate financial support or by promoting the value of preclinical SRs for translational purposes. Several organisations, such as the UK’s NC3Rs and NORECOPA, already support the use of preclinical SRs [45,46]. Lastly, journals could promote the adoption of preclinical SRs if more editors were familiar with their value; to date, we found only four journals that regularly accept protocols for preclinical SRs [47-50]. Returning to the Behaviour Change Wheel framework, with both legislation and intervention in place, only the predispositions need to be set to trigger change. But as highlighted in the interviews, several factors currently hinder the adoption and acceptance of preclinical SRs, including standard lab conventions, resistance from colleagues and supervisors, time pressure, lack of funding, competition, and lack of awareness of their value. There are international hurdles too, such as journals not yet being willing to accept preclinical SRs and a lack of familiarity with SRs among journal editors and peer reviewers.
Consequently, the emergence of preclinical SRs falls within a research culture paradox. On the one hand, high quality, transparent, and reproducible research is and should be the standard. Previous research has highlighted that (preclinical) SRs can play a useful role in this context, as they can provide helpful information and insights for policy decision making and for translation from preclinical to clinical studies [11,51,52]. On the other hand, researchers live in constant pursuit of delivering impactful results under great time pressure, leaving little space for conducting SRs, which are time-consuming [53]. However, as this case study illustrates, the time invested is beneficial in that preclinical SRs seem to contribute to a change in mindset and behaviour and to improved research quality. Further studies are warranted to fully understand what hinders or facilitates the conduct of preclinical SRs. For instance, a more in-depth assessment of impacts would provide greater insight and could address other areas of the research impact framework, e.g., policy and societal impacts. In this case study, we focussed primarily on our intervention, and it should be noted that the improvements observed cannot be attributed solely to preclinical SRs, as other complex factors are involved and may have influenced participants’ answers (e.g., personal growth and capacity, background, seniority). Further assessments may help to highlight or understand the complex factors contributing to the beneficial effects. Despite all these positive aspects, we should bear in mind that SRs rely on the studies they include and may therefore be of limited value. Like primary studies, they may also be constrained by poor reporting or poor conduct. Such limitations have been highlighted with respect to clinical systematic reviews, for example with protocols in PROSPERO not corresponding to the PRISMA-P guidelines, and with outcome discrepancies between the protocol and final publication [54,55].
Therefore, it is important that researchers continue to assess SR methodology and seek improvements [56-58]. On the preclinical side, Soliman et al. have provided new guidance for the appropriate conduct of SRs [59]. It is important to maintain a critical perspective and to use SRs only where appropriate, and where they can be conducted properly and in accordance with the guidelines.

Strengths and limitations of the study

The major strength of this study lies in its mixed methodology, using both questionnaires and interviews to identify research impacts. This triangulation of data increases the validity of our findings. The choice of design and methodology was carefully considered by researchers from different academic disciplines, helping to ensure the study’s robustness. Despite a relatively limited target population, we collected sufficient data using both questionnaires and interviews. The impacts we identified mapped onto the Research Impact Framework for the categories “type of knowledge/problem”, “research methods”, “publication and papers”, “translatability potential”, “research network”, and “communications”, showing strong coherence between our findings and the Research Impact Framework. Our findings provide insight into how preclinical SRs have created impacts as a result of the ZonMw MKMD programme. Despite these strengths, we identified limitations in the methodology. First, our sample consisted mainly of Dutch researchers and was, of course, limited to researchers who were awarded a ZonMw grant within the knowledge infrastructure module. Consequently, our findings (enthusiasm and drive for improvements) may be related to the cultural context. The Netherlands is well known for its innovation-driven enterprise, so we cannot be sure that similar findings would emerge in a different context or with a foreign funding agency. Second, the study design we used limits the generalisation of our findings. Our findings are reported impacts and are thus subject to the perceptions of the participants, in contrast to measurable, tangible impacts. Furthermore, our data collection took place during the COVID-19 pandemic; the fact that (some of the) participants conducted their SR during the pandemic may have affected their experience and influenced their responses.
On the one hand, researchers might have had more time or opportunities to work on meta-research during the pandemic. On the other hand, the pandemic created delays in conducting the SRs and could have affected the researchers personally, including their capacity and opportunity for professional development. Moreover, our study design includes neither a control group nor a pre-post intervention assessment, which limits the certainty with which the observed changes can be attributed to our intervention. An example of this uncertainty can be observed in Fig 5, where both groups answered similarly regardless of whether they had completed their review. This highlights again that the impacts and benefits found in our study cannot be attributed solely to the conduct of a preclinical SR but may also be influenced by the workshop. We could imagine that similar education, i.e., workshops with hands-on practice, could produce similar results without a preclinical SR actually being conducted. Taken together, the intervention seems promising, but our findings need to be interpreted within the narrow scope of the case study. Lastly, because this study had to be conducted over the summer, it is possible that, despite our best efforts to obtain a high response rate, the summer holidays reduced the availability of participants. Furthermore, the time scale for this study was rather short (3 months), which limited the time we could spend collecting and analysing our data. Finally, the data collection, coding and analysis of the transcripts were all performed by one researcher. The participation of one or more additional investigators would have strengthened the trustworthiness of our findings and potentially provided other interpretations. Investigator triangulation should be advocated whenever possible.
Nevertheless, to our knowledge, this is the first case study on such a topic, and in the future, when preclinical SRs become more established, it would be interesting to repeat this study on a broader scale, for example within European institutes and/or the Ensuring Value in Research Funder Forum. In addition, a pre-post test study could be performed to evaluate changes in skills and behaviour before and after the intervention. A wider population would also allow analysis by demographic group, e.g., comparing senior and junior researchers, or comparing research fields.

Conclusion

This case study has provided important insights into the impacts of training, coaching and conducting preclinical SRs on both research and researchers. Not only did SRs impact research at the lab level, they also led to changes in researchers’ views and critical abilities, and spurred efforts to advocate for improvement in their fields. Our project highlights the necessity and importance of supporting preclinical researchers to perform SRs, and demonstrates the impact of this on the quality and transparency of research, as well as on researchers’ awareness and motivation to change the status quo. Our findings suggest that support such as that provided here by the Dutch funding agency ZonMw is relevant and should be encouraged on an international scale, to improve the quality and translation of preclinical research.

The Research Impact Framework.

(PDF)

The Behaviour Change Wheel.

(PDF)

Standards for Reporting Qualitative Research (SRQR) checklist.

(PDF)

Consolidated criteria for reporting qualitative studies (COREQ): 32-item checklist.

(PDF)

Invitation to complete the online questionnaire and reminder.

(PDF)

Full questionnaire.

(PDF)

Informed consent form.

(PDF)

Interview guide.

(DOCX)

Code tree and harmonisation.

(PDF)

Organisation of the questionnaire and number of respondents per question.

(PDF)

Questionnaire results for questions on planning, designing and reporting animal experiments, and appraising research after conducting a preclinical SR.

(PDF)

Skills learnt and improved by performing a preclinical systematic review.

(PDF)

Questionnaire results.

(XLSX)
Lines 235-236: Please provide a breakdown of the number of inputs per background/career level, i.e. ‘…Thorough piloting was performed for both questionnaires and semi-structured interviews by seven researchers knowledgeable in SRs with varying backgrounds and career levels, namely research assistant (n=x), PhD student (n=y) and professor (n=z).’ I don’t think it’s necessary to identify a person at a professorial level as ‘associate’ (or not). Results Lines 243-244: It is very possible that response and completion rates may be defined differently by different sources. A leading survey software defines response rate as ‘the number of people who completed the survey divided by the total sample’ (i.e. 45/95=47.4%), and completion rate as ‘the number of surveys filled out and submitted divided by the number of surveys started’ (i.e. 45/61=73.8%). Please revisit these concepts and calculations, and provide a reference for the definitions used to avoid confusion among readers. Line 248: Please revise the start of the sentence ‘And the 7 remaining…’. It is recommended that this sentence be combined with the previous sentence. Line 265-266: I am not convinced that reporting an ‘average median’ is wise; it is also a meaningless value in terms of agreement without information on the statements asked and the direction of the question. I wouldn’t risk readers skimming over this without referring to the figures, where the median breakdown per question provides much richer information. Line 317: Steer clear of statements such as ‘the results are quite positive’ as this indicates inherent bias for a result. IMPORTANT Lines 317-321: As stated before, it is very difficult to know whether the equivalence in the response of SR completers and non-completers can be ascribed to the SR, given the lack of a control comparator. 
It is possible that these insights may have evolved in respondents in a similar field, and with comparable education and training, in the absence of conducting a preclinical SR. This evolution might even be suggested by the identical medians in the two groups, with further SR experience (in completers) leading to no increase in critical appraisal compared to non-completers. It is accepted that this uncertainty is part of the case study design, but please make this clear to the reader. IMPORTANT Line 338: It would be interesting to know whether respondents participating in the scheme during 2020 had a different experience to those pre-pandemic in terms of time taken/delays with their SRs. If these data are available this might be an interesting aspect to include, as the time required to conduct an SR often presents a considerable barrier to their initiation and completion. Lines 470 and 484: Please steer clear from informal language such as ‘no easy business’ and ‘budge one bit’. Furthermore, ‘(yet)’ [line 484] again demonstrates an inherent bias by the author for a certain outcome, and should be avoided at all costs. Discussion IMPORTANT General: As stated before, it is very difficult to know whether the reported experiences and skills of respondents can be ascribed to conducting the preclinical SR, given the lack of a control comparator or association with explanatory variables. It is possible that the reported insights would have developed in respondents in a similar field and with comparable education and training. The authors do address the subjective nature of their findings, but could suggest next steps in this important work. As a starting point, it might be useful to suggest (and consider) a pre-post test with a new intake of researchers, measuring their attitudes and skills at baseline and again following the completion of their SR. 
An alternative, but a weaker test of causality, would be to measure explanatory variables and test associations with reported outcomes in future work. General: It was surprising not to see any exploration of the effect of the pandemic on the findings presented in this paper, given both the global impact of the event as well as several mentions of COVID-19 in the questionnaire responses – specifically related to delays in SR completion. It might be useful for the authors to include a paragraph on how this context may have shaped their findings, both in terms of practicalities for SRs as well as responses; particularly due to shifts in mental health and outlook for many during this time. IMPORTANT Lines 568-569: Statements attributing changes in mindset and behaviour to the conduct of preclinical SRs without acknowledging the complex myriad of circumstances that may contribute to these are problematic. Please review this statement (and others like it) to accurately reflect that preclinical SRs *appear to contribute* to a shift in mindset and behaviour, though comparative data, and more work to identify other explanatory factors, are needed. Conflicts of interest The authors declare their association with the intervention provider upfront, and no serious conflicts were identified. I would like to caution the authors, however, around the language they use and how it unintentionally reflects a preferred direction of their findings. With thanks again for the opportunity, and best wishes. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. 
Reviewer #1: No Reviewer #2: Yes: Dr Amanda S Brand [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. Submitted filename: PONE-D-21-16684_reviewer.pdf 10 Sep 2021 Dear Editors, We would like to thank the editor and the two reviewers for their comments and suggestions and the opportunity to revise our submitted manuscript. We have addressed all issues that were raised and provide detailed answers below. The comments were very insightful and have helped to improve our manuscript. We genuinely hope that our revisions will be satisfactory and that our manuscript will now be suitable for publication. Thank you again for considering our manuscript for publication in PLoS One. Looking forward to hearing from you again, Sincerely, On behalf of all co-authors, Julia Menon Editor’s comments: 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. Authors’ response: We made a thorough check and made the necessary changes to comply with PLOS ONE’s style requirements. 
We genuinely apologise that several mistakes passed through and thank the editor for their keen eye and recommendation to modify our manuscript accordingly. Here is a detailed list of what was modified:
- Page 1: Affiliations are numbered, the postal address of the corresponding author was removed, and we added initials to the corresponding author.
- Line 130: we changed the title of Fig 1 to bold.
- All headings and subheadings were verified and now comply with sentence case.
- Appendix captions were remodelled for appendices S2, S11, and S12. The appendix files were all renamed as “SX_Appendix.pdf”, for example “S1_Appendix.pdf”.
- Appendices are now formatted as level 1 headings and their captions were verified for compliance. Appendices were changed from .docx to .pdf format. Therefore, we also aim to reupload all appendices for consistency.
2. We note that you received funding from a commercial source: [Name of Company] Please provide an amended Competing Interests Statement that explicitly states this commercial funder, along with any other relevant declarations relating to employment, consultancy, patents, products in development, marketed products, etc. Within this Competing Interests Statement, please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. Please include your amended Competing Interests Statement within your cover letter. We will change the online submission form on your behalf. Authors’ response: We think a slight confusion may have occurred as ZonMw is not a commercial source. 
It is a Public Benefit Organisation, which provides and receives only public money. To clarify, we still modified the competing interest statement. Therefore, please find below the new Competing Interest Statement: “I have read the journal's policy and the authors of this manuscript have the following competing interests: Julia Menon declares that she worked for the Department “Health Evidence” within the Radboudumc, in the same team as the coaches who provided training and support for the “knowledge infrastructure” module. Since February 2021, she has a paid position via ZonMw. Merel Ritskes-Hoitinga, who supervised this study, is the head of the SYstematic Review Center for Laboratory (animal) Experimentation (SyRCLE) team. Pandora Pound declared no competing interest. Erica van Oort, who supervised this study, is program manager for ZonMw and is in charge of the “knowledge infrastructure” module, as part of the ZonMw MKMD programme. ZonMw funded this project, as declared in our financial statement, as well as the employment of Erica van Oort. ZonMw is a Public Benefit Organisation, which is registered with the Chamber of Commerce The Hague under number 27365263, tax number: 0028.76.528. This does not alter our adherence to PLOS ONE policies on sharing data and materials. However, we all sincerely declare that we did our utmost to remain impartial when conducting, analysing and supervising this study.” 3. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please delete it from any other section. Authors’ response: It seems our ethics statement only appears in the Methods section (in the subheading “Ethical concerns pertaining to human subjects”). 4. Please include a copy of Table 1 which you refer to in your text on page 33, and Table 12 on page 34 Authors’ response: Table 1 and Table 12 are provided in Appendix S1 and S12. 
Consequently, we deleted their captions and only kept the captions of Appendix S1 and S12. For consistency reasons, we would rather have all supplementary materials be referred to as “Appendix” than have a mix of tables, figures, supplementary materials etc. in the captions. We also modified the mention of Table 12 to “S12 Appendix” in line 335. Reviewer #1 comments: 1. Line 1-2: Please revise it: "The impact of conducting preclinical systematic reviews on researchers and their research: a mixed method case study" will sound good. Authors’ response: We agree that this title is more straightforward. We modified it accordingly. 2. Line 15: Delete "a" Authors’ response: the “a” was removed accordingly, and an “s” was added to “cornerstone”. Thank you for noticing this typo. 3. Line 46: Delete "-" Authors’ response: It was deleted accordingly. 4. Line 49: Remove "a" Authors’ response: the “a” was removed accordingly and an “s” was added to “cornerstone”. Thank you for noticing this typo. 5. Line 50: Include also the number of SRs being conducted. Authors’ response: Thank you for this great suggestion. We added the number of SRs being conducted as of 2014 (this is the most recent update we could find on that matter) and added the reference to the relevant article. The text now reads: “Today, they are cornerstones of evidence-based medicine, with 30,000 SR protocols being registered as of 2017 and a 2014 estimate putting the number of published SRs at over one million [6,7]”. 6. Line 70: Remove "preclinical" Authors’ response: we removed “preclinical” as you suggested. 7. Line 161: Replace "contacted" with "recruited" Authors’ response: we changed the sentence as suggested. 8. Line 222-224: Even though this study did not pose any risk to participants, ethical clearance was needed since it involved human beings. This study should have applied for ethical clearance. Authors’ response: we realise that our statement may have been unclear. 
So, we provide more information here and in the text of the manuscript. According to Dutch law, studies including humans can be subject to the Medical Research Involving Human Subjects Act (WMO) and must then be reviewed by the Central Committee on Research Involving Human Subjects (CCMO) or a Medical Research Ethics Committee (MREC). However, some studies in which humans are participating are not subject to the WMO, and hence do not require ethical review (see here for more information: https://english.ccmo.nl/investigators/legal-framework-for-medical-scientific-research/your-research-is-it-subject-to-the-wmo-or-not and https://english.ccmo.nl/investigators/additional-requirements-for-certain-types-of-research/non-wmo-research ). Questionnaire research does not fall under the WMO and hence does not require ethical review. The exception is if questions are, for instance, burdensome, intimate, or if the questionnaire takes a lot of time to fill in (https://english.ccmo.nl/investigators/additional-requirements-for-certain-types-of-research/other-types-of-research/questionnaire-research). Considering the nature of our questions and the short amount of time needed to fill in the questionnaire, our research does not fall within that description. For clarity purposes, we added the following reference and modifications to the text: “According to Dutch law, research involving humans must be reviewed by the Central Committee on Research Involving Human Subjects or a Medical Research Ethics Committee, if the study is subject to the Medical Research Involving Human Subjects Act [36]. Questionnaire research does not fall within this act and does not require ethical review, unless the questions are burdensome, intimate, or if completing the questionnaire is time-consuming [37, 38]. In our case, participants were not patients, children, or vulnerable persons, and the topics addressed did not relate to their health, traumatic events or sensitive matters [39]. 
Furthermore, the time required to answer was short (maximum 15 minutes). The topics addressed in both questionnaires and interview guides posed no risks to the participants, and in particular no risk of physical or mental harm. For these reasons, we did not seek approval from an Institutional Review Board.” 9. Line 242-248: Please add demographic characteristics of the participants; gender, age and their research levels (if possible). Authors’ response: We did not collect demographic characteristics systematically. Therefore, we cannot add them to the text. However, we agree that demographic characteristics could have influenced the results and discuss their potential influence in our discussion. We also suggest that future studies use demographic groups (e.g. young researchers vs senior researchers). Here is the text we added to our discussion: “In this case study, we focussed primarily on our intervention, and it should be noted that the improvements observed cannot be attributed only to preclinical SRs, as complex factors are also involved and may have influenced participants’ answers (e.g., personal growth and capacity, background, seniority). Further assessments may help to highlight or understand the complex factors contributing to the beneficial effects.” And also: “A wider population would also allow analysis by demographic group, e.g., the comparison of senior vs younger researchers, and comparisons between research fields.” 10. Line 242-248: Please rephrase the paragraph; it is confusing (not clear) in the current form, in terms of the numbers. Authors’ response: We reverified all numbers and identified that an error had been introduced, which may be the reason for the lack of clarity. It was not 7 but 5 participants who terminated the questionnaire. It was modified appropriately. Moreover, we provided an additional sentence to clarify the numbers in the text and hope it will be suited to your expectations. 
At the end of the explanation, we state: “Therefore, the 61 participants were divided as follows: SR completed (n=36), SR ongoing (n=18), SR stopped (n=2), and NR (n=5).” The whole paragraph was also modified for more clarity: “Of 99 potential participants, we were able to contact 95. Sixty-one participants started our questionnaire, and 45 completed it (i.e., 16 drop-outs), giving a response rate of 47.4% and a completion rate of 73.8% (definitions of response and completion rate can be found in [40]). An overview of participants per phase is available in Fig 2. Of the 61 participants, 36 belonged to the “SR completed” group (i.e., they had published or submitted a manuscript of their preclinical SR). In comparison, 18 participants belonged to the “SR ongoing” group (i.e., still in the process of conducting their SR). Two participants did not complete their SRs due to time constraints, while the 5 remaining participants terminated the questionnaire before answering the question about the state of their review. Therefore, the 61 participants were divided as follows: SR completed (n=36), SR ongoing (n=18), SR stopped (n=2), and NR (n=5). Ten participants agreed to participate in interviews but only eight interviews were eventually conducted due to the unavailability of two researchers.” 11. Line 260-262: What happened to the 11 participants in the SR completed group? 36 - 5 - 6 = 25. Please talk about the other missing 11 participants. Authors’ response: We clarified the numbers by adding the following sentences: “Within the SR completed group (n=36), 14 participants went on to perform primary animal studies after completion of their SRs, 5 performed preclinical research using alternatives to animals, 6 performed clinical studies, and 6 moved to meta-research or ceased research. The remaining five participants did not complete this part of the questionnaire.” Reviewer #2 comments: 1. 
Lines 17-18: It is not clear from this statement whether the focus of the ZonMw grant scheme is preclinical SRs specifically, or SRs in general. Authors’ response: Indeed, the sentence could be confusing. Thank you for spotting it. We added “preclinical” in front of “SRs” to specify that the ZonMw grant scheme focuses on preclinical SRs specifically. The sentence now reads: “Since 2011 the Dutch health funding organisation (ZonMw) has run a grant scheme dedicated to promoting the training, coaching and conduct of preclinical SRs.” 2. Lines 22-23: This sentence does not accurately reflect that recruited researchers could be those who had finished a preclinical SR as well as those still busy conducting a preclinical SR (and indeed, those who had started, but not finished their SR). Authors’ response: To bring more clarity, we rephrased as follows: “We recruited researchers who attended funded preclinical SR workshops and who conducted, are still conducting, or prematurely stopped a SR with funded coaching.” 3. Lines 36-39: Though the authors do couch their language here, a word of caution: in the absence of measurements in a control group which did not receive the funded interventions and conducted a preclinical SR; the authors are requested to exercise caution in attributing the reported changes to the interventions alone. Even though researchers may themselves attribute these changes to the intervention, there may be a complex interplay of factors, e.g. the growth that happens during the work done for a PhD, contributing to their changing views and skill sets. Authors’ response: We understand your point that this part should be less “causal”. To temper our conclusion statement, we modified the conclusion as follows: “Being trained and coached in the conduct of preclinical SRs appears to be a contributing factor to many beneficial changes which will impact the quality of preclinical research in the long-term. 
Our findings suggest that this ZonMw funding scheme is helpful in improving the quality and transparency of preclinical research.” 4. General: It would be useful if authors could explicitly define early on in the manuscript what is meant by ‘preclinical’, as the term is also sometimes used to refer to pre-hospital and emergency medicine contexts. Authors’ response: To clarify what we meant, we added to lines 56-57: “SRs are struggling to achieve similar status in the preclinical field (i.e., fundamental and applied animal studies, in vitro and ex vivo studies before clinical research)” 5. Line 41: Suggest changing to ‘Keeping up to date with health/medical literature…’ Authors’ response: Agreed, the sentence was modified as suggested. 6. Line 46: Please remove errant ‘-’ from the sentence ending on this line. Authors’ response: Thank you for noticing this mistake. It was deleted accordingly. 7. Line 56: Consider changing ‘throw light’ to ‘cast light’. Authors’ response: this sentence fragment was modified as suggested. 8. Line 74: This introductory line should be presented as such. In its present state it is confusing and appears as another question, instead of the overarching aim. I would suggest amending this sentence to ‘This work aims to assess the impact of conducting preclinical SRs on researchers, their research and their field through the following objectives:’ Authors’ response: Indeed, the sentence suggested provides a clearer introductory line. We modified it as suggested, with one small change (we replaced “This work” with “Our study”). 9. Lines 115-116: While it is acknowledged that these would be outside the scope of the current manuscript, which already describes a great deal of work, it would be interesting to assess the policy impacts and societal impacts (areas 2 and 4 in the manuscript) of the intervention in question using the modified Kuruvilla et al. 2007 checklist. 
Authors’ response: We agree that policy impacts and societal impacts would have been very interesting to assess. However, our entire set of questions in both the survey and the interviews was designed to match the research impact category of the Kuruvilla et al. 2007 checklist and the behavioural wheel of change. Assessing policy and societal impacts would require other questions and hence a whole new study, which is beyond the scope of this manuscript. We do recognise the relevance of the other areas and hence have added a sentence to the discussion in that regard. The sentence is: “Further studies are warranted to fully understand what hinders or facilitates the conduct of preclinical SRs. For instance, a more in-depth assessment of impacts would provide greater insight and could address other areas of the research impact framework, e.g., policy impacts, societal impacts”. 10. Line 142: It is not clear what is meant by ‘thrive’ in this context, but it is assumed to be an incorrect translation. Consider changing to ‘drive’ or ‘development’, depending on the original intent. Authors’ response: As the reviewer mentioned, it is an incorrect translation. After review, we modified ‘thrive’ to ‘motivation’. 11. Lines 157-158: This sentence contradicts methodology in following sections, which describes participants who had not finished their SRs also included in the sample. Please correct this factual inaccuracy, or elaborate on why non-completers were included post hoc – if that was the case. Authors’ response: Participants were recruited regardless of whether they had completed, started (and were still conducting), or started and stopped their systematic review. We can see how the second part can be confusing/unclear in lines 157-158. 
Therefore, we modified it as follows: “The target population for the questionnaires comprised researchers who had participated in a ZonMw workshop and who had either started (currently conducting or prematurely stopped) or completed a preclinical SR with funded coaching.” 12. Line 163: It may be useful to include here whether participants consented to be contacted by ZonMw for the purposes of intervention assessment when providing details, if this was indeed the case. Authors’ response: ZonMw was in contact with all researchers as their funder. By default, researchers agreed to communicate with ZonMw on their work and progress. Besides, they were given the opportunity to refuse participation or withdraw at any time. We are not sure if this information is required in the manuscript, as, “at worst”, researchers received two e-mails kindly inviting them to participate in a survey, without any pressure, which is not burdensome or intimate. 13. Line 177: It would be useful, though not essential, to state at the start of this section whether informed consent was sought to use the responses provided in questionnaires. Authors’ response: The introduction text of the questionnaires clearly stated the intentions of the questionnaire. Though it was not exactly stated as such, we have assumed that researchers agreed to their data being collected anonymously, otherwise they would not have agreed to fill out the questionnaire in the first place. 14. Line 186: As it is not possible for researchers still conducting their SRs to evaluate post-SR skills and experience, it is assumed that this should read ‘For the latter, only points 1, 2 and 3 were evaluated.’ Please correct if this is the case, or elaborate on how post-intervention experiences would be appraised before the end of the intervention – if the manuscript is correct in its current format. Authors’ response: We realise that this part does not convey the information we wanted. 
We meant “skills gained as a result of conducting (steps of) the SR”. Considering that systematic reviews follow structured steps, one researcher could learn new skills by doing some of the steps without actually completing the whole study, e.g., designing a comprehensive search. It is indeed points 3, 4, and 5 for the “ongoing group”. We added the following: “4) skills gained as a result of conducting (steps of) the SR, and 5) experience with conducting (steps of) the SR (including, but not limited to, publishing experiences and wishing to perform further SRs for the completed group)”. 15. IMPORTANT Lines 205-208: It is a pity that more demographic/explanatory variables were not collected on participants, as some measures of association could have been calculated. This would have provided some insights into differences between researchers and how this may, or may not, have shaped their responses – specifically given the outstanding uncertainty presented by the lack of a control group of questionnaire participants. It would be useful if the authors could include these considerations in their manuscript, perhaps as a next step in the evolution of understanding the role of SRs in preclinical work. Authors’ response: We understand your point of view and added this point to the discussion: “In this case study, we focussed primarily on our intervention, and it should be noted that the improvements observed cannot be attributed only to preclinical SRs, as complex factors are also involved and may have influenced participants’ answers (e.g., personal growth and capacity, background, seniority). Further assessments may help to highlight or understand the intricate factors contributing to the beneficial effects.” “In the future, when preclinical SRs become more established, it would be interesting to repeat this study on a broader scale […]. 
A wider population would also allow analysis by demographic group, e.g., the comparison of senior vs younger researchers, or comparisons between research fields.”

16. Lines 235-236: Please provide a breakdown of the number of inputs per background/career level, i.e. ‘…Thorough piloting was performed for both questionnaires and semi-structured interviews by seven researchers knowledgeable in SRs with varying backgrounds and career levels, namely research assistant (n=x), PhD student (n=y) and professor (n=z).’ I don’t think it’s necessary to identify a person at a professorial level as ‘associate’ (or not).

Authors’ response: We modified this statement accordingly: “Thorough piloting was performed for both questionnaires and semi-structured interviews by seven researchers knowledgeable about SRs and from a variety of backgrounds and career levels, namely research assistants (n=2), PhD student (n=1) and (associate) professors (n=4).”

17. Lines 243-244: It is very possible that response and completion rates may be defined differently by different sources. A leading survey software defines response rate as ‘the number of people who completed the survey divided by the total sample’ (i.e. 45/95=47.4%), and completion rate as ‘the number of surveys filled out and submitted divided by the number of surveys started’ (i.e. 45/61=73.8%). Please revisit these concepts and calculations, and provide a reference for the definitions used to avoid confusion among readers.

Authors’ response: For clarity, we decided to use the definitions you provided, with a clear reference. Figure 2 was also modified accordingly.

18. Line 248: Please revise the start of the sentence ‘And the 7 remaining…’. It is recommended that this sentence be combined with the previous sentence.

Authors’ response: We combined the two sentences as suggested.
It now reads: “Two participants did not complete their SRs due to time constraints, while the 7 remaining participants terminated the questionnaire before answering the question about the state of their review”. To note: we reverified all numbers and found that an error had been introduced. It was not 7 but 5 participants who terminated the questionnaire (the 5 participants were already indicated in Fig 2). This was corrected accordingly.

19. Lines 265-266: I am not convinced that reporting an ‘average median’ is wise; it is also a meaningless value in terms of agreement without information on the statements asked and the direction of the question. I wouldn’t risk readers skimming over this without referring to the figures, where the median breakdown per question provides much richer information.

Authors’ response: We see your point and have therefore removed the sentences mentioning the average median. For context, our intention with the average median was to give an overall value for each part while adding information not shown in the figure. We agree with the comment and would prefer readers to look directly, and in detail, at the figures.

20. Line 317: Steer clear of statements such as ‘the results are quite positive’ as this indicates inherent bias for a result.

Authors’ response: We deleted this part and reread the results thoroughly to remove similar statements.

21. IMPORTANT Lines 317-321: As stated before, it is very difficult to know whether the equivalence in the response of SR completers and non-completers can be ascribed to the SR, given the lack of a control comparator. It is possible that these insights may have evolved in respondents in a similar field, and with comparable education and training, in the absence of conducting a preclinical SR. This evolution might even be suggested by the identical medians in the two groups, with further SR experience (in completers) leading to no increase in critical appraisal compared to non-completers.
It is accepted that this uncertainty is part of the case study design, but please make this clear to the reader.

Authors’ response: We added a statement about this matter in the discussion: “Moreover, our study design includes neither a control group nor a pre-post intervention assessment, which limits the certainty one can have about the causality of our intervention. An example of this uncertainty can be observed in Fig 4, where both groups answered similarly regardless of whether they completed their review. This highlights again that the impacts and benefits found in our study cannot be attributed only to the conduct of a preclinical SR but may be influenced by the workshop. We could imagine that similar education, i.e., workshops with hands-on practice, could produce similar results without actually conducting a preclinical SR.” and “Taken together, the intervention seems promising, but our findings need to be interpreted within the narrow scope of the case study.”

22. IMPORTANT Line 338: It would be interesting to know whether respondents participating in the scheme during 2020 had a different experience to those pre-pandemic in terms of time taken/delays with their SRs. If these data are available this might be an interesting aspect to include, as the time required to conduct an SR often presents a considerable barrier to their initiation and completion.

Authors’ response: It would be an interesting comparison to make; however, we do not have the data to perform this analysis completely. Nevertheless, three participants (one from the completed group and two from the started group) mentioned delays in relation to the COVID-19 situation. To address delays, we added the following paragraph: “Reviews (and review steps) often took longer than expected (26/30 participants for the completed group and 8/15 participants for the ongoing group).
For instance, researchers often experienced the number of studies during screening, data extraction or data analysis as overwhelming, and also encountered difficulties getting their protocols, manuscripts or revisions reviewed for publication. Some had to wait for second or third assessors/reviewers and experienced delays when asking authors for further information. Also, three participants (both groups combined) mentioned the COVID-19 pandemic as a delaying factor.”

23. Lines 470 and 484: Please steer clear from informal language such as ‘no easy business’ and ‘budge one bit’. Furthermore, ‘(yet)’ [line 484] again demonstrates an inherent bias by the author for a certain outcome, and should be avoided at all costs.

Authors’ response: These instances were removed and modified as follows: “Interviewees’ experiences highlighted that getting preclinical SRs published can be challenging.” We also added: “discovering that the insights from their SRs appeared to have no impact on their field.” We checked the rest of the manuscript to remove informal language. Regarding the ‘(yet)’, we meant that the participants (at least those who were frustrated) wished for changes in their field, but that this change had not yet occurred. However, we see how it could be misleading and have removed it according to your suggestion.

24. IMPORTANT General: As stated before, it is very difficult to know whether the reported experiences and skills of respondents can be ascribed to conducting the preclinical SR, given the lack of a control comparator or association with explanatory variables. It is possible that the reported insights would have developed in respondents in a similar field and with comparable education and training. The authors do address the subjective nature of their findings, but could suggest next steps in this important work.
As a starting point, it might be useful to suggest (and consider) a pre-post test with a new intake of researchers, measuring their attitudes and skills at baseline and again following the completion of their SR. An alternative, but a weaker test of causality, would be to measure explanatory variables and test associations with reported outcomes in future work.

Authors’ response: We added several statements to clarify this aspect. Lines 596-600: “In this case study, we focussed primarily on our intervention, and it should be noted that the improvements observed cannot be attributed only to preclinical SRs, as complex factors are also involved (e.g., personal growth and capacity, background, seniority). Further assessments may help to highlight or understand the complex factors contributing to the beneficial effects.” Lines 647-648: “In addition, a pre-post test study could be performed to evaluate changes in skills and behaviour before and after the intervention.”

25. General: It was surprising not to see any exploration of the effect of the pandemic on the findings presented in this paper, given both the global impact of the event as well as several mentions of COVID-19 in the questionnaire responses – specifically related to delays in SR completion. It might be useful for the authors to include a paragraph on how this context may have shaped their findings, both in terms of practicalities for SRs as well as responses; particularly due to shifts in mental health and outlook for many during this time.

Authors’ response: It is indeed an important factor; we now mention it in the strengths and limitations with the following paragraph: “Second, our data collection took place during the COVID-19 pandemic, and the study design we used limits the generalisation of our findings. Our findings are reported impacts, and thus subject to the perceptions of the participants, in contrast to measurable, tangible impacts.
As mentioned in the results, the fact that (some of the) participants conducted their review during the COVID-19 pandemic may have affected their experience and hence influenced their answers. On the one hand, researchers might have had more time or opportunities to work on meta-research during the pandemic. On the other hand, the pandemic conditions seem to have created delays in conducting the SRs and could have affected the researchers personally, including their capacity and opportunity for professional development.”

26. IMPORTANT Lines 568-569: Statements attributing changes in mindset and behaviour to the conduct of preclinical SRs without acknowledging the complex myriad of circumstances that may contribute to these are problematic. Please review this statement (and others like it) to accurately reflect that preclinical SRs *appear to contribute* to a shift in mindset and behaviour, though comparative data, and more work to identify other explanatory factors, is needed.

Authors’ response: We have generally softened the language when discussing causality, for example saying “contribute to” or “seem to participate in” instead of “trigger” or “enable”. In addition, as mentioned in an earlier comment, we added the following sentence: “It should be noted that the improvements observed cannot be attributed only to preclinical SRs, as complex factors are also involved (e.g., personal growth and capacity, background, seniority). Further assessments may help to highlight or understand the complex factors contributing to the beneficial effects.”

27. Conflicts of interest: The authors declare their association with the intervention provider upfront, and no serious conflicts were identified. I would like to caution the authors, however, about the language they use and how it unintentionally reflects a preferred direction of their findings.
Authors’ response: We performed a thorough check of the manuscript to ensure that no preferred directions were expressed in the findings; where necessary, we rephrased or removed parts of the text. Positive language was not consciously included in the text; we acknowledge that our enthusiasm for systematic reviews may have unintentionally entered our writing. We thank you for this warning and have rewritten the text accordingly. To note: we ran checks for minor spelling, punctuation and grammatical errors as you suggested.

Submitted filename: Response to Reviewers.docx

3 Nov 2021

PONE-D-21-16684R1
The impact of conducting preclinical systematic reviews on researchers and their research: a mixed method case study
PLOS ONE

Dear Julia Menon,

Thank you for submitting your manuscript to PLOS ONE. Your responses to the previous comments have been well received. However, we request that you respond to two minor comments by reviewer 2. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by 18 Dec 2021. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. A rebuttal letter that responds to each point raised by the academic editor and reviewer(s).
You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. We look forward to receiving your revised manuscript.

Kind regards,
Eleanor Ochodo
Academic Editor
PLOS ONE

Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)

Reviewer #2: Thank you very much for your considered feedback and amendments to the manuscript - as I stated during the first round of review, this is a very important piece of work; I hope to see it in print (and cite it!).
I have no further *material* comments, but two further suggestions to consider, based on your feedback:

- Line 249: I think '(associate)' can be removed from the text entirely, i.e. "...namely research assistants (n=2), a PhD student (n=1) and professors (n=4)."
- Line 523: I take your point on the intention behind "...had not (yet) made..." in the previous version, and that this reflects the wishes of the participant rather than you as an author team. While I do think it's better to steer clear from any wording that could result in perceived bias, you could consider changing the sentence to "...discovering that the insights from their SRs appeared to have had no impact on their field to date." This is entirely up to you, as the revised version you provided is also fine, but perhaps this provides a bit more nuance.

Wishing you all the best,
Amanda

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Dr Amanda Salomé Brand

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free.
Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

11 Nov 2021

Reviewer’s comments

1. Line 249: I think '(associate)' can be removed from the text entirely, i.e. "...namely research assistants (n=2), a PhD student (n=1) and professors (n=4)."

Authors’ response: We agree that this term makes the sentence wordy and does not add to the context. We have removed it as suggested.

2. Line 523: I take your point on the intention behind "...had not (yet) made..." in the previous version, and that this reflects the wishes of the participant rather than you as an author team. While I do think it's better to steer clear from any wording that could result in perceived bias, you could consider changing the sentence to "...discovering that the insights from their SRs appeared to have had no impact on their field to date." This is entirely up to you, as the revised version you provided is also fine, but perhaps this provides a bit more nuance.

Authors’ response: Thank you for this suggestion! We feel it indeed brings more nuance and clarity to this part of the text. The sentence was modified as suggested.

Submitted filename: Reponse to reviewers.docx

15 Nov 2021

The impact of conducting preclinical systematic reviews on researchers and their research: a mixed method case study
PONE-D-21-16684R2

Dear Julia Menon,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments.
When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Eleanor Ochodo
Academic Editor
PLOS ONE

1 Dec 2021

PONE-D-21-16684R2
The impact of conducting preclinical systematic reviews on researchers and their research: a mixed method case study

Dear Dr. Menon:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.
Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Prof Eleanor Ochodo
Academic Editor
PLOS ONE
  24 in total

1.  A third of systematic reviews changed or did not specify the primary outcome: a PROSPERO register study.

Authors:  Andrea C Tricco; Elise Cogo; Matthew J Page; Julie Polisena; Alison Booth; Kerry Dwan; Heather MacDonald; Tammy J Clifford; Lesley A Stewart; Sharon E Straus; David Moher
Journal:  J Clin Epidemiol       Date:  2016-04-11       Impact factor: 6.437

2.  The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses.

Authors:  John P A Ioannidis
Journal:  Milbank Q       Date:  2016-09       Impact factor: 4.911

Review 3.  Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey.

Authors:  Lara A Kahale; Batoul Diab; Romina Brignardello-Petersen; Arnav Agarwal; Reem A Mustafa; Joey Kwong; Ignacio Neumann; Ling Li; Luciane Cruz Lopes; Matthias Briel; Jason W Busse; Alfonso Iorio; Per Olav Vandvik; Paul Elias Alexander; Gordon Guyatt; Elie A Akl
Journal:  J Clin Epidemiol       Date:  2018-03-02       Impact factor: 6.437

Review 4.  Nimodipine in animal model experiments of focal cerebral ischemia: a systematic review.

Authors:  J Horn; R J de Haan; M Vermeulen; P G Luiten; M Limburg
Journal:  Stroke       Date:  2001-10       Impact factor: 7.914

Review 5.  The behaviour change wheel: a new method for characterising and designing behaviour change interventions.

Authors:  Susan Michie; Maartje M van Stralen; Robert West
Journal:  Implement Sci       Date:  2011-04-23       Impact factor: 7.327

6.  Describing the impact of health research: a Research Impact Framework.

Authors:  Shyama Kuruvilla; Nicholas Mays; Andrew Pleasant; Gill Walt
Journal:  BMC Health Serv Res       Date:  2006-10-18       Impact factor: 2.655

7.  The impact of Cochrane Systematic Reviews: a mixed method evaluation of outputs from Cochrane Review Groups supported by the UK National Institute for Health Research.

Authors:  Frances Bunn; Daksha Trivedi; Phil Alderson; Laura Hamilton; Alice Martin; Steve Iliffe
Journal:  Syst Rev       Date:  2014-10-27

Review 8.  Facilitating healthcare decisions by assessing the certainty in the evidence from preclinical animal studies.

Authors:  Carlijn R Hooijmans; Rob B M de Vries; Merel Ritskes-Hoitinga; Maroeska M Rovers; Mariska M Leeflang; Joanna IntHout; Kimberley E Wever; Lotty Hooft; Hans de Beer; Ton Kuijpers; Malcolm R Macleod; Emily S Sena; Gerben Ter Riet; Rebecca L Morgan; Kristina A Thayer; Andrew A Rooney; Gordon H Guyatt; Holger J Schünemann; Miranda W Langendam
Journal:  PLoS One       Date:  2018-01-11       Impact factor: 3.240

9.  Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry.

Authors:  Rohit Borah; Andrew W Brown; Patrice L Capers; Kathryn A Kaiser
Journal:  BMJ Open       Date:  2017-02-27       Impact factor: 2.692

Review 10.  Engaging with research impact assessment for an environmental science case study.

Authors:  Kirstie A Fryirs; Gary J Brierley; Thom Dixon
Journal:  Nat Commun       Date:  2019-10-04       Impact factor: 14.919

  1 in total

1.  The Case for Modernizing Biomedical Research in Ireland through the Creation of an Irish 3Rs Centre.

Authors:  Viola Galligioni; Dania Movia; Daniel Ruiz-Pérez; José Manuel Sánchez-Morgado; Adriele Prina-Mello
Journal:  Animals (Basel)       Date:  2022-04-21       Impact factor: 3.231

