Tamarinde L. Haven, Martin R. Holst, Daniel Strech.
Abstract
Concerns about research waste have fueled debate about incentivizing individual researchers and research institutions to conduct responsible research. We showed stakeholders a proof-of-principle dashboard with quantitative metrics of responsible research practices at University Medical Centers (UMCs). Our research question was: What are stakeholders' views on a dashboard that displays the adoption of responsible research practices at the UMC level? We recruited stakeholders (UMC leadership, support staff, funders, and experts in responsible research) to participate in online interviews. We applied content analysis to understand what stakeholders considered the strengths, weaknesses, opportunities, and threats of the dashboard and its metrics. Twenty-eight international stakeholders participated in online interviews. Stakeholders considered the dashboard helpful in providing a baseline before designing interventions and appreciated its focus on concrete behaviors. The main weakness concerned the lack of an overall narrative justifying the choice of metrics. Stakeholders hoped the dashboard would be supplemented with other metrics in the future but feared that making the dashboard public might put UMCs in a bad light. Our findings furthermore suggest a need for discussion with stakeholders to develop an overarching framework for responsible research evaluation and to get research institutions on board.
Year: 2022 PMID: 35749396 PMCID: PMC9231768 DOI: 10.1371/journal.pone.0269492
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.752
SWOT definitions.
| Category | Definition |
|---|---|
| Strengths | Characteristics inherent to the current dashboard approach or the metrics included that are considered valuable for visualizing institutional performance in terms of responsible research |
| Weaknesses | Characteristics inherent to the current approach that could be considered disadvantages, or areas of the dashboard that need improvement and that lie within the internal environment, meaning the creators of the dashboard could take action to alleviate these concerns |
| Opportunities | Potential use cases of the dashboard that could increase its chances of successful uptake, as well as additions to the approach or the metrics that, when implemented, would improve the chances of stakeholder-wide uptake of visualizing institutional performance in terms of responsible research |
| Threats | Characteristics of the external environment that could undermine implementation of the dashboard approach, or of other measures, to visualize institutional performance in terms of responsible research |
Demographics of interview participants.
| Stakeholder group | # |
|---|---|
| UMC leadership | 4 |
| Support staff | 14 |
| Funders | 5 |
| Experts in responsible research | 5 |
| | 22 |
| | 6 |
| | 17 |
| | 10 |
| | 1 |
| Total | 28 |
^ One interview was conducted in German.
The authors have previously collaborated with 3 experts in responsible research.
Fig 1. Overview of SWOTs regarding institutional dashboards with metrics for responsible research.
Illustrative quotes per theme.
| SWOT category | # | Illustrative quotations |
|---|---|---|
| Strengths | | |
| Seeing where you stand | 1 | “You need to be a little tough, to be honest to yourself and see where you stand. It might be uncomfortable at the beginning when you think, ‘all right, we are the [UMC], we are the best, the nicest and the greatest.’” (Interviewee 19 – Support staff 3R) |
| | 2 | “You can’t change unless you know what your baseline is.” (Interviewee 4 – Responsible Research expert) |
| | 3 | “If we as a funder ask for these things, are we actually in line with the strategy of the institution we are planning to fund, or is this really something we would be imposing, or is this something we are behind? It’s really hard to tell how far the institutions are, because in applications, they will always tell you, ‘Everything is fine. We have this policy, blah blah blah.’ But to actually see the numbers really helps. And this, I find really interesting.” (Interviewee 13 – Funder) |
| Novel and relevant | 4 | “Some reviewers maybe don’t take the time for it or they look for the h-factor or the impact factor, sorry, by themselves even though you don’t want them to do. This is really a problem and this is why I think we don’t really do this at the moment but I think it could be very interesting to have alternative parameters, not only say look into the papers by yourself, but we have indicators like randomization, blinding, and power calculation for the reviewers.” (Interviewee 14 – Funder) |
| | 5 | “It’s a positive view. I’m feeling very positive about this because it demonstrates the value of openness and also the meaning of openness and scholarly communication. I think it makes researchers, but also administrators, aware that open access and open science practices matter and that techniques are available to monitoring the progress of these open access and open science practices.” (Interviewee 24 – Responsible research expert) |
| Clear presentation | 6 | “I like the simplicity of the dashboard, so that you have these indicators and that the context is given when you hover over these information or warning signs with limitations. I really, really like the limitations button so that people know how to interpret the data. So, there’s a nice balance between simplicity and context. I also like the fact that there is the percentage and the absolute number because often, yes, one of the two is given” (Interviewee 22 – Librarian and Open Science expert) |
| | 7 | “The idea of the dashboard obviously seems to be that you have a quick overview and I don’t have to go into many details there but that was my idea of a dashboard and in terms of that I think it’s quite helpful.” (Interviewee 20 – UMC leadership) |
| Weaknesses | | |
| Lack of an overall framework | 8 | “I don’t see a conceptual scheme behind it yet. So, what do you want to measure? You seem to jump immediately to what you can measure. That is one of the things I was missing, jumping immediately in the doable, and I didn’t see the analysis of what you ideally would want to do. […] That type of reasoning, I was missing. It was jumping to what can be automated, jumping to what is available, jumping to what was out, that not that many effort can be made graphs of.” (Interviewee 7 – Responsible research expert) |
| | 9 | “There is a critical point that came into my mind, and that is the question, who says which and who gives the standard? So, who says that these metrics are the right ones? And who says that the way they were calculated are correct? So, there must be a really good justification that these metrics are correct and they should be something like a general agreement.” (Interviewee 19 – Animal research expert) |
| Methods and conceptualization difficult to understand | 10 | “Then, I would also have, when I would be a dean of a UMC, I also should be able to defend the whole thing, to be realistic, because you’re running a shop of researchers. They immediately start criticizing the methodology behind the whole thing. So the methodology should be completely transparent. Well, it will never be excellent, but it should be acceptable, and good enough for purpose. Fit for purpose. I should be able to defend that fiercely, because I need to defend that, as a leader of such an institution.” (Interviewee 7 – Responsible research expert) |
| | 11 | “Then listed are three measures against the risks of bias, like randomization, blinding, sample size calculation. They are important measures. I agree, but I don’t think that are comprehensive enough to qualify as indicators of robustness, it needs more than that, I think.” (Interviewee 28 – Support staff; Animal research expert) |
| Possibly outdated | 12 | “You see the open access dashboards and you see it over time, you know, but 2018 is nice, but first of all, I would challenge you to make dashboards as accurate as possible because open access is moving so rapidly that 2018, yes well, if there is one big Springer deal in place after 2018 or Elsevier deal in place after that, or there’s no deal, then the numbers drop or improve so that’s my first point make it as accurate as possible.” (Interviewee 1 – Librarian and Open Access expert) |
| Opportunities | | |
| Initiating change | 13 | “If you’re looking at it like if you want to improve open science and robust research, then it’s a good thing because if you see the average and you see that your own institution is below average, then that gives you an indication that something should be done or it gives you an idea that maybe you should ask questions, why my institution is below average. And that’s good. Then you can start to talk to people and do something about it, try to find reasons for this” (Interviewee 12 – Librarian and Open Science expert) |
| | 14 | “So they become a regular, because they are so aggregated and so information-rich, they should be on that level and really a general part of the discussion on a regular theme, because this could also foster the discussion in this area.” (Interviewee 26 – Science management and strategy) |
| Benchmarking over time | 15 | “Because looking at what happens over time within a center, that to me is probably the most informative of this whole thing. I’m not so keen on ranking UMCs. I’m not so keen on doing races between institutions, that is rather trivial. What you want is their progress in the right direction. You want to be able to pick that up, and that you should do on center level and on indicator level.” (Interviewee 7 – Responsible Research expert) |
| | 16 | “Where I’m currently is like an absolute information that doesn’t give me much discussion points. So it would be helpful for me if I could decide, what to change over time, and maybe if I, as a board member, had made a decision two years ago and then I would like to see what changes.” (Interviewee 26 – Science management and strategy) |
| Internal usage only | 17 | “So for me, the strength really lay in considering this, for example, as an instrument of self-analysis. That is, if it remains at the level and is not linked to ‘I’ll show this at the next review and then my bar will be higher than that of the others’, but if it is a kind of internal process analysis or an integrated part of an internal process analysis.” (Interviewees 8 & 9 – Funder) |
| | 18 | “The question is, the moment you go public, we have the blaming issue and then the press comes in and then it’s hard to get better. So it’s not a question of ‘do we have a benchmark’? It’s a question of at least the first two or three years, can we keep the data in a protected space where people from different institutions can deal with it very open minded and discuss why they think the differences are there.” (Interviewee 23 – UMC leadership) |
| Tailoring the dashboard | 19 | “Some institutions may be particularly interested in certain things and others may be interested in very different things. But I think we need to get them to talk about other, a handful or three core practices that we could work on across the board.” (Interviewee 4 – Responsible Research expert) |
| | 20 | “I think not only in the medical sciences but especially there because it’s so it’s a very broad field. And I know I know it from the discussion about the impact factors that are very heterogeneous with regard to the disciplines. Whether we have a very, let’s say, visible discipline, like, I don’t know, cardiology or something, and then you have like the small disciplines, like ophthalmology. It’s very small disciplines and with I think very strong effects on the impact factors. So this could be interesting when looking at your robustness indicators to have a differentiation by the disciplines.” (Interviewee 14 – Funder) |
| Complementing the dashboard with other indicators | 21 | “An indicator that I missed actually was, whether there’s a pre-print or not. PubMed Central. We invested a lot the last two years to link pre-prints with the accepted version and the journal, and this could be really of added value. Also to demonstrate to researchers and decision makers that, pre-printing can matter and can lead to quality-assured publications.” (Interviewee 24 – Librarian and Open Science expert) |
| | 22 | “I think for transparency, you have also to be clear where are conflicts of interest?” (Interviewee 15 – Funder) |
| | 23 | “The problem is to not focus only on those things that are easily measured, like open data, number of publications, but also like the more soft ‘how’ things. And they can be expressed also. Like inclusiveness and diversity can be expressed extremely easily in this way. I tell you, that will be scary for <UMC>.” (Interviewee 10 – Responsible Research expert) |
| | 24 | “You could ask whether stakeholder or patient advocacy groups, for example, were involved in a trial design and preparation of the trial, and perhaps trial conduct or so. But as I said, the patient perspective on how patients who were in a given trial, how they viewed the overall conduct of the trial, I think is another aspect that might be valuable at some stage.” (Interviewee 27 – UMC leadership) |
| Communication of metrics’ performance | 25 | “[A]s a methodologist maybe, you need to show me what is the validity and the precision of the whole game. Validity in terms of sensitivity, specificity, predictive values, what have you, and precision in terms of test sample and confidence intervals, and whatever. There is no confidence interval in all these neat graphs, which is worrying, because you seem to suggest that there are differences, and I’m not convinced. You might only be looking at random fluctuation” (Interviewee 7 – Responsible Research expert) |
| | 26 | “I think the metrics that you have are fine, but I’m not sure if they are accurate. So it depends on the quality of the data you use to produce the dashboard. So I can’t tell what the quality of your data input is. You should make sure that the data you show are reliable” (Interviewee 12 – Responsible Research expert) |
| Threats | | |
| Putting institutions in a bad light | 27 | “[I]t’s not really important what others do, we try to achieve 100 percent, that’s not easy, but we would be working on this now. For us, it’s important to set goals from the beginning and to achieve those goals. Yes, it’s always the wrong way to get bad press and then you investigate, what’s the reason for that? And then you gain goals from the bad press. The better thing is to set from the beginning, what are our goals despite of all others and of the press and them to achieve this?” (Interviewee 16 – Clinical trials expert) |
| | 28 | “I think people worry that this information will be used to point fingers in a negative way. And I think we as a community must work very, very hard, collaboratively with the end users to improve the situation.” (Interviewee 4 – Responsible research expert) |
| Incorrect interpretation | 29 | “[T]his is a very tricky part because, once a civil servant of a university that I worked for said, ‘Please keep in mind that you can make a very nice report with an executive summary, but these people, these policymakers, they are like children. They read comic books. So they only look at your tables and graphs, and everything else, the text, is simply overseen, overlooked or forgotten.’ So you can put a lot of effort in writing down extensive sections with limitations, but not so many people will read these, which is, of course, stupid. I realize that and I’m aware of that. But what can you do about it?” (Interviewee 6 – Open Science expert) |
| | 30 | “I think I do, but in the conversations that we’ve had with university leadership, we noticed that a lot of people don’t. They just look at the figures and say, ‘oh, okay’, they take it at face value. And don’t really appreciate the limitations that are there.” (Interviewee 5 – Librarian) |
| Gaming metrics | 31 | “There we are again at the end with ‘Goodhart’s Law’. If at some point this becomes a measure that is perhaps linked to success in acquiring third-party funding, then everyone will end up writing in: ‘I share my data with…’ How reliable this is, is then the second level. But these numbers can increase rapidly if you only link them to an output link at the end. In this respect, it may seem objective now, but in the end it is no longer objective when you put it in front of the cart.” (Interviewees 8 & 9 – Funder) |
| | 32 | “I think that all the potential dangers of a dashboard are always that the metrics are going to be taken as a goal and perhaps also that it’s going to be seen more as a leader board than as a way to help people move forward.” (Interviewee 3 – Librarian) |