| Literature DB >> 35258720 |
Ulrike Felt, Florentine Frantz.
Abstract
Issues related to research integrity receive increasing attention in policy discourse and beyond, with most universities by now having introduced courses addressing issues of good scientific practice. While communicating expectations and regulations related to good scientific practice is essential, criticism has been raised that integrity courses do not sufficiently address discipline- and career-stage-specific dimensions and often do not open up spaces for in-depth engagement. In this article, we present the card-based engagement method RESPONSE_ABILITY, which aims at supporting researchers in developing their ability to respond to challenges of good scientific practice. The method acknowledges that what counts and what does not count as acceptable practice may not be as clear-cut as imagined and that research environments matter when it comes to integrity issues. Using four sets of cards as stimulus material, participants are invited to reflect individually and collectively on questions of research integrity from different perspectives. This approach is meant to train them to negotiate in which contexts certain practices can still be regarded as acceptable and where possible transgressions might begin. RESPONSE_ABILITY can be seen as fostering the creation of an integrity culture, as it invites a more reflexive engagement with ideals and realities of good practice and opens a space to address underlying value conflicts researchers may be confronted with. Concluding the article, we caution that addressing issues of integrity meaningfully requires striking a delicate balance between raising researchers' awareness of individual responsibilities and creating institutional environments that allow them to be response-able.
Keywords: Borderlands of good scientific practice; Card-based engagement method; Integrity training; Research integrity; Response-ability; Values in research
Year: 2022 PMID: 35258720 PMCID: PMC8904341 DOI: 10.1007/s11948-022-00365-6
Source DB: PubMed Journal: Sci Eng Ethics ISSN: 1353-3452 Impact factor: 3.777
Fig. 1 Discussion map and four sets of cards
| “I think this is a problem that is the most common but the least discussed in our field,” Miriam states as she explains why she ranked the transgression card ‘misrepresenting data’ first. She bemoans that there are so many things that researchers do not talk about when it comes to representing data. It is the “little differences” that worry her, the things that are difficult to standardize and regulate. Her observations trigger a lively debate. Frida continues this line of thought, specifically considering statistical tests, and argues that it sometimes seems arbitrary which statistical test somebody uses. She thinks that people often do not know any better and simply take what works best for them. But she also points out how unlikely it is that her supervisor will critically scrutinize how she performs her analysis: “If you have a polished graph […] and the data fit with what you would expect,” there is usually little discussion about how she did her statistics. Smiling, because Frida apparently describes a situation he recognizes, Alfonso pushes the argument further. A lack of systematic scrutiny is his major concern. He describes himself as trying to be overly cautious with regard to the tricks that expectations and hypotheses might play on one’s own judgment. One may take “a little step into misrepresenting the data” if one is convinced by a theory. An uncomfortable silence spreads in the room. What to do with this sudden open admission of potential bias? Laura breaks the silence, spinning Alfonso’s thoughts further, by saying that this is not an individual problem but rather a more fundamental question of how science gets done. In times when scientists need to be good storytellers and create coherent, publishable, mind-blowing stories to survive in academia, it is natural, she argues, to want to ‘find’ good results. “I mean people most of the time prepare their articles in such a way that they tell a story so that it gets published. And they don’t show the results that could hinder the article [from getting published].” |
| “Before we discuss the cards you selected in detail,” the moderator says, “I would ask you to guess which [research condition] cards were selected most frequently?” Several people answer, “Pressure and quality.” Some of the participants just point at that very card and nod as it is being raised. Throughout the debate so far, the feeling of being pressured in their work has repeatedly surfaced, as have reflections on the threats these perceived pressures may pose to the quality of the produced knowledge. Discussing this card, they collectively reflect on “how to get away from this model of publish, publish, publish, and rather focus more on quality”. Elisa shares a brief anecdote of a discussion she followed between two professors: “One professor […] said: ‘When we were young and doing our PhDs, we had much more time. We didn’t have so much pressure to publish […] For junior researchers it’s so much harder today.’ And the other professor, who was even younger, just said: ‘It will always be like that. The more you do, the better you are; the more you publish, the better you are.’” She strongly disagrees that doing good research is only about delivering large quantities of knowledge and expresses frustration about the pervasiveness of the capitalist understanding of the world. Paula agrees that her understanding of being a good scientist also does not align with how research is rewarded nowadays. She ties her frustration about the importance of papers to the temporal imaginations that come with them: “You have to do your PhD in three years, you have to publish three studies in three years, and you only get published when you have significant or positive results. I think this is the most hindering thing about doing good research because it really makes you work poorly to get published, but you don’t have the time to really think about the problems and discuss them, because you only have three years.” |
| “Is there anything you missed in our discussion about values in science?” asks the moderator once the participants have each described how they ranked the value cards on their board and the group has already collectively reflected on some of the similarities and differences in ordering rationales. Lorenz proposes a card about “Resilience. […] I think that’s a little bit missing here. I see a lot of facets of the PhD in there, but this is a little bit missing in my opinion.” He goes on to argue that for him it is vital to be able to deal with setbacks and recover from challenges or errors. His suggestion is met with criticism: “Is that really a value?”, Lucia asks. After all, she does not see it being valued—rather, it is required that individuals muster a certain mental strength if they want to succeed and survive in science. For Lorenz, mental health and seeing a “person rather than just a scientist” deserve attention in this discussion on research integrity, and he keeps on arguing for it. Alfonso agrees and sees the point Lorenz wants to make, but for him, the capacity to continue working despite the problems one encounters is captured by the value of commitment. Lorenz responds that “if you ramp up the commitment to one hundred, problems you have to deal with hit you harder. And if you go down with the commitment, you can deal with problems easier.” Not everyone agrees, but Lorenz continues to argue that it matters to think about the person doing the research and that none of the values on the cards are self-evident but depend on the person behind them. After an intense discussion about what it means that science is conducted by humans, Lorenz closes the debate by stating that his concern would probably have been satisfied by “adding two words on [the] fairness [card]: fair treatment of others”. |
| “It has happened to me before and it is something that pisses me off very much,” Lena starts, explaining why she chose the dilemma card on reviewer misconduct. For her, a reviewer telling authors to cite the reviewer’s own articles is a transgression of good scientific practice if those articles clearly do not match the argument of the paper. While she cautions that reviewers sometimes suggest useful papers she did not know, she is outspoken about pushing back against such reviewer fraud in her response letter to the editor: “I would say that I didn’t find that they were connected enough to the result of the paper to include them.” She notices her colleagues nodding as she looks around. “I also chose the same dilemma,” Kavaan continues shyly, “I experienced this as well and I think it is common practice to agree with everything that the referees ask for, especially their easy suggestions.” He describes his process of first addressing their easy comments, such as correcting typos and justifying formulas, before adding the suggested articles “without even actually reading the whole papers.” Yet he is outspoken about adding only a limited number of papers. Xenia interrupts their discussion. She stresses that she does not know whether or not she would add the references; it would depend on the journal she was aiming for. Her group recently wanted to publish in a high-impact journal, and they agreed to simply add a sentence to include the reviewers’ paper. “I am pretty sure we wouldn’t have done this, at least I wouldn’t have done this if it was a different journal. […] It is always a compromise.” After being asked by the moderator whether others had also considered choosing this dilemma, Gregory takes the opportunity to talk about the problems his group faced when one of their reviewers was a member of another school of thought. Once he has finished describing the complex odyssey of compromising in order to get the paper published, Mia shares an anecdote, partly as advice to Gregory, partly to expand on the review problem: “Sometimes acknowledgments are used to avoid certain reviewers, right? So, you just put people into the acknowledgments and then they will not get the paper to review.” Everyone laughs before the discussion returns to more serious reflections on reviewers and power abuse. |
| After an extensive round of discussing the research conditions, the participants are asked to come up with changes they want to see in science and to write them on the empty change cards. The cards are quickly filled with catchy titles and long lists of changes. Everybody is then asked to present their suggestions. Some of them overlap, such as a general desire to have “more time to really think about what and why are we doing science, how we communicate with others and how we can be open to the ideas of others and the critique of others.” But some participants have more concrete suggestions, such as Anna’s idea to re-think letters of recommendation not only as hierarchically top-down but also as bottom-up: “If it’s about leadership positions, why not [ask for] references from people who have worked below you, who have worked in your team?” She goes on to describe how this could not only help to combat power abuse and make it more visible, but also strengthen the value of supervising and leading a team by acknowledging it as something that can be accounted for. This idea that the incentive systems are misguiding researchers is also taken up by Eva: “I think most of the problems that we’re dealing here right now could be solved by just having a different incentive structure. So right now, what we do is we reward only things that we can’t control, which is data and results, right? If you have nice results, then you can publish them in Nature and you get a nice position.” She goes on to argue that incentivizing good practices, such as sharing data, also publishing negative data … would, in the long run, benefit science. |