
Evaluation in RCR Training-Are You Achieving What You Hope For?

Richard McGee

Abstract

This Perspective addresses the value of, and realistic approaches to, incorporating formal evaluation processes in Responsible Conduct of Research (RCR) training. It comes from the experiences of a career that has combined: leading research teams and directing Ph.D. and M.D./Ph.D. training; teaching RCR since it was first required by NIH; teaching evaluation methods to directors of RCR and research training programs; and serving as an external evaluator for RCR and research training programs. Approaches to evaluation are introduced, contrasting quantitative and qualitative evaluation methods, along with the differences between formative (process) and summative (outcome) evaluation. Practical and realistic approaches are presented, knowing that RCR programs seldom have the luxury of time and funding for extensive evaluation. Guidance is provided on how to make sure evaluation starts from and focuses on what the training is designed to achieve (in terms of knowledge, skills, attitudes, and behaviors) rather than just what activities are taking place or what information is being 'delivered.' Examples of evaluation questions that might be asked about RCR programs are provided, as well as approaches to answering them.

Year:  2014        PMID: 25574259      PMCID: PMC4278458          DOI: 10.1128/jmbe.v15i2.853

Source DB:  PubMed          Journal:  J Microbiol Biol Educ        ISSN: 1935-7877


INTRODUCTION

“Evaluation” can encompass a wide range of questions being asked and data collected to answer them. For RCR training, it can range from the mundane (taking attendance) to sophisticated assessment of attitudinal changes or even new strategies for behavioral changes. One big dividing line is between determining whether a course or the delivery of material is seen as effective or useful by participants and true assessment of learning and/or change. It can be relatively straightforward to evaluate effectiveness of delivery and obtain feedback for course or individual improvement—often referred to as formative or process evaluation. By contrast, learning itself is much more difficult to assess, and it becomes even harder when what you are teaching has many shades of gray and little black and white, like RCR. Welcome to RCR and the challenge of figuring out whether you are achieving what you hope for!

This Perspective provides a short introduction to a framework for approaching evaluation of RCR training. Few who become responsible for RCR training come from backgrounds with a heavy emphasis on evaluation and assessment, especially of teaching and learning. I inadvertently became involved with systematic evaluation of research training programs, in addition to leading them, at about the same time I became responsible for RCR training at a previous institution. The expertise we developed with evaluation methods, and seeing how much can be learned through them, naturally led us to apply evaluation to RCR training as well. Having the combined understanding of research training, evaluation, and RCR training led us to the role of teaching evaluation methods to faculty and others around the US, and to serving as external evaluators/consultants for a number of research training and RCR programs. This short article is designed to share some insights acquired through these various activities, as a starting point for those who do not think about evaluation on a daily basis.

APPROACHES TO EVALUATION

The simplest form of RCR evaluation would focus on program mechanics, delivery by presenters, perceived value of discussion cases for displaying principles, completion of required activities, and other relatively concrete criteria. Evaluation or feedback on individual sessions and presenters often takes a minimalist approach, in the hope of getting good response rates by not asking for too much on surveys. Most typical are online or paper surveys. The critical issue for obtaining useful information, however, is phrasing the questions precisely enough to return the data actually desired. For example, scaled (Likert) responses to the questions below would return vastly different data of equally variable utility:

How much did you like the session on Authorship?
How effective was the presenter on Authorship?
How effective was the session on Authorship?
How much new material did you learn in the session on Authorship?
How much did the session clarify questions you had about how Authorship should be determined?

These are all examples of quantitative evaluation measures, since numbers are obtained. Quantitative questions are used a lot because they give something concrete to look at very quickly and easily. However, they do not give you any insight into why the session (or presenter) was effective (or not), what was learned, or what clarifications actually were achieved. It also is very difficult to know whether small differences between questions, or on the same questions from year to year, are meaningful, or what absolute number should be considered good or bad. This can be improved by anchoring each number with a text ‘definition’ so that a person is actually calibrating with word choices rather than simply a number range.
An example of anchoring for “How much did you learn in the session on Authorship?” could be: 1 = nothing new; 2 = a few new tidbits; 3 = a modest amount of new information; 4 = a lot of new information; 5 = I thought I knew a lot, but I realized how little I knew before the class. A numerical scale is maintained, but the frequency of each response becomes more revealing than a numerical average. And if the critical measure is not averages but frequencies, a unique response such as #5 can be included.

By contrast, qualitative evaluation questions ask for text to provide different types of information. Again, these are particularly useful for formative evaluation, to learn how presented information, discussions, and readings are being received:

What made the session on Authorship particularly effective or ineffective for you?
Please provide 2 to 5 new things you learned about authorship from today’s session.
What questions did you have coming in about authorship that were clarified?
What questions about authorship do you still have and/or are you still unclear about?

Qualitative questions are much better at providing details on what is working, what is not, and what to change, but they require that people give a bit of time and attention to answering them. If a large group is involved, compiling and analyzing responses can be problematic, although with online surveys compiling is very simple. Both of these types of questions address relatively straightforward perceptions of effectiveness or concrete topics.

The other type of evaluation is referred to as summative or outcome evaluation. As the terms imply, this aspect of evaluation measures some accumulated effect or particular outcomes over time. With RCR it would typically not be done over a single course or workshop series but would gather standardized data over several years to look for a cumulative effect.
For example, one could contact participants in an RCR course a year later and ask for information on how their experience with or approaches to RCR issues had been affected by the course. Doing this over several years could provide information on the cumulative effect of RCR training across several cohorts of participants. This assumes, however, that no major change in RCR training took place. If changes did take place, it would be a way to look for the impacts of those changes.
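The earlier point that response frequencies reveal more than a numerical average can be sketched in a few lines of Python. The two cohorts of anchored Likert responses below are invented purely for illustration:

```python
from collections import Counter
from statistics import mean

# Anchors for "How much did you learn in the session on Authorship?"
# (wording follows the anchoring example above).
ANCHORS = {
    1: "nothing new",
    2: "a few new tidbits",
    3: "a modest amount of new information",
    4: "a lot of new information",
    5: "I thought I knew a lot, but I realized how little I knew",
}

def summarize(responses):
    """Return both the mean and the per-anchor frequency table."""
    counts = Counter(responses)
    freq = {anchor: counts.get(anchor, 0) for anchor in ANCHORS}
    return mean(responses), freq

# Two hypothetical cohorts with the SAME average but very different stories:
# cohort A uniformly reports modest learning; cohort B is split between
# "nothing new" and "I realized how little I knew".
avg_a, freq_a = summarize([3, 3, 3, 3, 3, 3])
avg_b, freq_b = summarize([1, 1, 5, 5, 5, 1])
```

Both cohorts average 3, yet the frequency tables show one group clustered at the middle anchor and the other polarized at the extremes, which is exactly the distinction an average alone would hide.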

GOING DEEPER TO ACTUAL LEARNING—FUNDAMENTAL PRINCIPLES OF TEACHING AND LEARNING

Because the topics to be covered in RCR are essentially specified by NIH and high-quality how-to resources for teaching RCR have been developed, most RCR courses tend to just cover the material and call it a day. There is significant benefit to be gained, however, from spending some time and effort to find out whether any learning is taking place and to determine the relative effectiveness of the individuals and approaches being used to promote learning. Note: the emphasis is not on what you think you are teaching, but rather on what you hope is learned; with a focus on teaching, you end up assessing what has been transmitted rather than what has been received and processed. Also, with RCR, learning is not necessarily associated with knowing the right answer but rather with understanding the nuances and context that can define different answers. Evaluating or assessing learning can be seen as extra work for not much benefit, especially by laboratory scientists, but an argument can be made that poorly taught courses in which no learning takes place can send a message that RCR is really not important—the last message we want to send.

To begin asking whether learning is taking place requires stepping back to the basics—consciously articulating what you hope is being learned. In theory, this is where the teaching of traditional academic courses starts, but, in those courses, the focus is largely on objective information and such things as research design. RCR courses do include some objective information, but much within research practices is not codified or even agreed upon universally. Thus, what is taught and hopefully learned is much more nuanced and variable than in academic subjects, which shifts the approach to assessing learning away from typical exams. Graded exams may still be found in RCR courses, but usually only in those that carry academic credit.
Types of learning that can take place are typically broken down into four categories, categories that apply to RCR training as well: knowledge, skills, attitudes, and behaviors. With RCR, beliefs or ethical behaviors are also included in some courses; some even include moral positions, but the small dose of RCR training usually given is not likely to impact something as fundamental as moral architectures. Within this framework, assessment (a more common term than evaluation with respect to learning) is approached by stepping back to analytically articulate what it is hoped each session or segment within the course will accomplish. If you are starting from scratch to define the content of a course, you will have a lot of latitude in what you might include. With the topics of RCR being largely specified, however, it becomes more a matter of taking each topic and starting from what you hope will be learned. The tendency is to start with what you decide to teach rather than consciously starting with what is to be learned, but it is essential to focus on learning. Since, most of the time, we start thinking about evaluation and assessment only after we have begun teaching certain content with a particular approach, I often suggest reverse engineering. By this, I mean start with the content of what you are teaching and ask the question: Why? What is it you hope participants will learn from this session and/or take away from it? These are often referred to as learning objectives, but in my experience, learning objectives can be very high-level and are not always specific enough to get down to the real, detailed learning goals of a session. Either way, the objectives or learning goals you identify must be sufficiently well defined to be measurable, or it simply will not be possible to assess whether anyone achieved them. Remember, they can be knowledge, skills, attitudes, or behaviors.
Assessing learning in RCR beyond objective information (policies, resources, what to do if you suspect misconduct, etc.) will generally be quite difficult with multiple-choice questions. Some examples of learning-focused assessment questions would include:

After today’s session, how have your views changed regarding criteria for authorship established by journals?
After today’s session, how has your understanding changed of the level of financial compensation at which disclosure of a potential conflict of interest is required?
After today’s session, have your views on non-financial conflict of interest changed? If so, how?

Trying to determine whether RCR training has any impact on behaviors is particularly difficult. Courses are usually relatively compact, and behaviors are very difficult to measure. One approach we have used is to ask about anticipated future behaviors. For example:

From the discussions today, how do you anticipate deciding on authorship when you have your own research group?
How has your thinking about how and when to discuss authorship with members of your research group changed as a result of today’s session?

One particular complexity with teaching and learning in RCR is the difference between something you hope will be learned that is new and easily accepted vs. something that requires unlearning something old in favor of the new and/or conflicts with what individuals have observed previously. In a qualitative study we conducted in an RCR course for biomedical Ph.D. students and postdoctoral fellows, our findings showed that it is quite easy to guide the thinking of individuals when what you are teaching is new and does not conflict with any prior experience or personal beliefs. When what you present does conflict, however, it is very difficult to convince people to change their thinking in a short session, and doing so requires very different teaching approaches (1).
This is not a new finding in teaching and learning in general, but it is particularly important in RCR training, as anyone taking an RCR course is coming into it with a substantial background of informal RCR training—what they have observed and been ‘taught’ through observing others doing research and the research mentoring process long before they enter an RCR course.

CONCLUSION

The goal of this Perspective has been to present evaluation as something that can be valuable and realistic. For those of us who are scientists, each RCR session or course is really an experiment, and evaluation is just gathering the data. By thinking carefully about what you are hoping to achieve, and what questions you would like to answer, it is not all that difficult to get meaningful information about how what is being offered is being received and processed. Collaborating with someone who does evaluation or survey work on a regular basis can be a big help, but it still is essential that you work with them to define the questions about the RCR training you want to ask. Evaluation experts can then help you design an approach to answer your questions. As noted, it is not easy to get deeply into whether or not teaching RCR will change behaviors, but it is at least possible to get feedback with which to revise and continually improve how we approach teaching RCR.
REFERENCES

1. McGee R, Almquist J, Keller JL, Jacobsen SJ. Teaching and learning responsible research conduct: influences of prior experiences on acceptance of new ideas. Account Res. 2008 Jan–Mar.

