
How to Assess Your CURE: A Practical Guide for Instructors of Course-Based Undergraduate Research Experiences.

Erin E. Shortlidge, Sara E. Brownell

Abstract

Integrating research experiences into undergraduate life sciences curricula in the form of course-based undergraduate research experiences (CUREs) can meet national calls for education reform by giving students the chance to "do science." In this article, we provide a step-by-step practical guide to help instructors assess their CUREs using best practices in assessment. We recommend that instructors first identify their anticipated CURE learning outcomes, then work to identify an assessment instrument that aligns to those learning outcomes and critically evaluate the results from their course assessment. To aid instructors in becoming aware of what instruments have been developed, we have also synthesized a table of "off-the-shelf" assessment instruments that instructors could use to assess their own CUREs. However, we acknowledge that each CURE is unique and instructors may expect specific learning outcomes that cannot be assessed using existing assessment instruments, so we recommend that instructors consider developing their own assessments that are tightly aligned to the context of their CURE.


Year:  2016        PMID: 28101266      PMCID: PMC5134943          DOI: 10.1128/jmbe.v17i3.1103

Source DB:  PubMed          Journal:  J Microbiol Biol Educ        ISSN: 1935-7877


INTRODUCTION

There have been national recommendations to integrate research experiences into the undergraduate biology curriculum (1, 49). While independent research in a faculty member’s lab is one way to meet this recommendation, course-based undergraduate research experiences (CUREs) are a way to scale the experience of doing research to a much broader population of students, thereby increasing the accessibility of these experiences (4). In contrast to traditional “cookbook” lab courses where students complete a pre-determined series of activities with a known answer, CUREs are lab courses where students work on novel scientific problems with unknown answers that are potentially publishable (3, 11). These research experiences have been shown to benefit both students and the instructors of the CUREs (45, 54, 55). As such, CUREs are growing in popularity as an alternative to the traditional laboratory course, particularly in biology lab classes (3, 10, 12, 36, 37, 51, 54, 62). The in-class research projects often last the duration of a semester or quarter, although sometimes they are implemented as shorter modules. CUREs vary in topics, techniques, and research questions—with the common thread being that the scientific questions addressed are novel, and perhaps most significantly, of interest to the scientific research community beyond the scope of the course. CUREs meet national calls for reforming biology education and may provide unique learning outcomes for students, as emphasized in Vision and Change: “learning science means learning to do science” (1). Studies have reported various student outcomes resulting from CUREs including: increased student self-confidence (6, 40), improved attitudes toward science (35, 36), ability to analyze and interpret data (10, 13), more sophisticated conceptions of what it means to think like a scientist (10), and increased content knowledge (34, 46, 62). 
Although there is general consensus that CUREs can have a positive impact on students, it is often unclear what specific aspect of a CURE leads to a measured outcome (17). Further, given the uniqueness of each individual CURE, instructors and evaluators of CUREs may struggle to identify how to effectively assess particular elements of their CURE. A combination of assessment techniques may yield the most holistic understanding of course outcomes, especially for CUREs that are already being implemented (3, 29). While an ideal assessment of outcomes from any experimental pedagogy would include an analysis of a matched comparison course, logistical constraints will likely prevent an instructor from executing a randomized controlled study, or even a quasi-experimental design comparing students in a CURE with students in a non-CURE course (but see 13, 31, 36 for examples). Moreover, finding an appropriate comparison group of students can be difficult; in particular, if students choose to take the CURE, there is a possible selection bias (13). However, one can account for known differences and similarities among students by using multiple linear regression models that control for student demographics and ability, such as incoming grade point average (GPA) (e.g., 59). Although these are all factors to take into consideration, our aim in this article is to provide instructors with fundamental guidelines on how to get started assessing their own CUREs, without necessarily needing a comparison group. We propose that instructors begin by identifying their intended learning outcomes for the CURE. Once those learning outcomes have been identified, instructors can align assessments with them. We suggest that instructors browse existing assessments that may align with their learning outcomes to see whether any present appropriate ways to evaluate their CURE. 
If not, instructors should consider designing their own assessment situated in the specific context of their CURE, which may require collaboration with an education researcher or group of researchers with expertise in assessment. Finally, instructors need to critically evaluate the results of the assessment, being cautious in their interpretation. Taken together, these steps will provide a “best practices” model of how to effectively assess CURE learning environments (Fig. 1).
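The regression strategy mentioned above (controlling for student characteristics such as incoming GPA when comparing CURE and non-CURE students) can be sketched in a few lines. The following is a minimal illustration using synthetic, invented data and NumPy's least-squares solver rather than any particular statistics package; the variable names, "true" coefficients, and sample size are all assumptions for the sake of the example.

```python
import numpy as np

# Synthetic (invented) data: post-test scores as a function of incoming GPA
# and CURE participation. In practice, real course records would be used.
rng = np.random.default_rng(0)
n = 200
gpa = rng.uniform(2.0, 4.0, n)               # incoming GPA (covariate)
cure = rng.integers(0, 2, n).astype(float)   # 1 = took the CURE, 0 = comparison
# Assumed "true" model used to generate the fake outcome data:
score = 50 + 8.0 * gpa + 5.0 * cure + rng.normal(0, 2.0, n)

# Multiple linear regression via ordinary least squares:
# design matrix has an intercept column, the GPA covariate, and the CURE indicator.
X = np.column_stack([np.ones(n), gpa, cure])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
intercept, gpa_coef, cure_effect = beta
print(f"Estimated CURE effect, holding GPA constant: {cure_effect:.2f} points")
```

An instructor would substitute real course records for the synthetic arrays; a dedicated statistics package would additionally report standard errors and p-values for the estimated CURE effect.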
FIGURE 1

Guide to assessing course-based undergraduate research experiences (CUREs).

Step 1: Identify learning outcomes

The first step in evaluating a CURE is to identify learning goals so that an instructor can assess how successful the CURE was at attaining those learning goals. There are a number of resources available to help instructors identify and establish learning goals (33) and design a course based on these learning goals, i.e., ‘backward design’ (22, 64). We use the terms “learning goals” and “learning outcomes” as discussed by Handelsman et al. in Scientific Teaching (2004): “Learning goals are useful as broad constructs, but without defined learning outcomes, goals can seem unattainable and untestable. The outcome is to the goal what the prediction is to the hypothesis” (33). Thus, the first question to ask is not, “How can I assess my CURE?”, but “What learning outcomes do I want to measure?” Learning outcomes can vary, ranging from technical skills (e.g., “students will be able to micropipette small amounts of substances accurately”), to content knowledge (e.g., “students will be able to explain the steps of the polymerase chain reaction”), to high-level analytical skills (e.g., “students will be able to adequately design a scientific experiment”). More general learning goals for a CURE may also include affective gains such as self-efficacy or improved attitude toward science (6, 36). These affective gains may be more difficult for traditionally trained biologists to measure, or to evaluate the results of, particularly if one is less familiar with the theoretical frameworks that define these constructs. Collaboration with experts in education to understand affective gains may be particularly appropriate if these are the anticipated outcomes for one’s CURE. The learning outcomes presented here illustrate the diversity of conceivable student outcomes from a CURE, but reflect only a fraction of those possible. 
Frameworks to identify learning outcomes from CUREs have been developed elsewhere (e.g., 11, 16) and could be used in conjunction with the present article as a starting point for CURE assessment. While it is possible for CUREs to lead to gains in various domains, instructors may want to focus the learning goals on those with potentially measurable learning outcomes that are either not feasible in a lecture course, or are best-suited for a lab course. For example, any biology course can cover content, but in addition to other possible gains afforded by the CURE format, a lab course focusing on novel data is uniquely positioned to teach students about the process of science or perhaps the importance of repeating experiments. However, if the CURE is a required course in a department or if CUREs are being taught parallel to traditional lab courses, there may be already-established departmental learning goals and specific outcomes that must be targeted by the course.

Step 2: Select an assessment aligned with your learning outcomes

Once the anticipated learning outcomes for the CURE have been identified, the next step is to find an assessment strategy that aligns with the learning goals and anticipated learning outcomes. For some of the more common learning outcomes from CUREs, assessment instruments may either already exist or have been developed specifically for CUREs. These are sometimes referred to as “off-the-shelf” assessments because they have previously been published and instructors could in theory grab one “off the shelf” and administer it in the CURE classroom. However, it is important to consider how well these assessment instruments measure an instructor’s specific intended learning outcomes. Tight alignment of the assessment instrument with the desired learning outcomes is essential to accurately interpret CURE results. We further encourage instructors to critically evaluate these instruments in terms of: administration (e.g., How much class time does it require to administer? Is it a pre-post-course comparison?), time required to score the assessment (e.g., multiple choice, which can be auto-graded vs. open-ended responses, which need to be evaluated with a rubric), and what validation has been conducted on the instrument and how appropriate it is for an instructor’s specific population of students (e.g., has the instrument been previously administered to high school students but not to undergraduates?).

Previously developed assessments

Table 1 outlines assessment instruments that instructors could potentially use to evaluate their CURE, ordered by their primary aims. The table includes details on the format of each assessment, ease of administration and grading, and the population(s) with which each instrument was developed. This list is not intended to be comprehensive, and there are likely other assessment instruments that can be used to assess CUREs, but we hope it is helpful for CURE instructors who are at the beginning stages of thinking about assessment.
TABLE 1

Assessment instrument table.

Assessment Instrument Name | Acronym | Primary Aim | Secondary Aim | Validation Population | Answer Type | How Administered | Number of Items / Time to Administer | Scoring | Citation
Blooming Biology Tool | BBT | Cognitive skills | Critical thinking | University of Washington | Open-ended | In class or online | NA | Moderate – rubric | (20)
California Critical Thinking Skills Test | CCTST a | Cognitive skills | Critical thinking | California State University Fullerton | Multiple choice | Online | 45 min | Easy | (28)
Critical Thinking Assessment Test | CAT a | Cognitive skills | Critical thinking | Multiple populations | Open-ended | In class | 15 open-ended items | Moderate – scoring guide | (59)
Study Process Questionnaire | R-SPQ-2F | Cognitive skills | Deep and surface learning | UG in Hong Kong | Likert-type | In class | 20 items | Easy | (6)
Networking Scale b | | Communication | Networking | University of Pittsburgh | Likert-type | Online | 5 items | Easy | (34)
Perceived Cohesion Scale | PCS | Community and collaboration | Sense of belonging | Multiple ages & populations | Likert-type | In class | 6 items | Easy | (7)
Torrance Tests of Creative Thinking | TTCT/ATTA | Creativity | Creativity | Multiple ages & populations | Open-ended | | 15 min | Moderate – scoring guide | (63)
Environmental Attitudes Inventory | EAI | Environmental awareness | Environmental attitudes | Multiple ages & populations | Likert-type | Online | 24 or 72 items | Easy | (50)
New Ecological Paradigm Scale | NEP | Environmental awareness | Environmental awareness | Multiple ages & populations | Likert-type | | 15 items | Easy | (26)
Grit Scale | Grit | Grit | Perseverance, self-control, passion | Multiple ages & populations | Likert-type | In class | 12 items | Easy | (25)
Views About Sciences Survey | VASS | Nature of science | Nature of science | Multiple ages & populations | Contrasting alternatives design (multiple choice) | In class | 30 items | Easy | (32)
Views on the Nature of Science | VNOS-C | Nature of science | Nature of science | Multiple ages & populations | Open-ended | In class | 45–60 min | Difficult – inter-rater & interviews | (44)
Project Ownership Survey | POS | Ownership | Project ownership | Multiple UG populations | Likert-type | Online | 16 items | Easy | (33)
Career Decision Making Survey – Self Authorship | CDMS-SA | Ownership | Self-authorship | Multiple populations | Likert-type | | 18 items | Easy | (19)
Laboratory Course Assessment Survey | LCAS | Perceptions of biology | Collaboration, discovery and relevance, iteration | Multiple UG populations | Likert-type | Online | 17 items | Easy | (18)
Colorado Learning Attitudes about Science Survey | CLASS-Bio | Personal gains in context of science | Attitudes about discipline-specific science/problem solving | University of British Columbia, University of Colorado Boulder | Likert-type | Online pre/post | 31 items | Easy | (54)
Classroom Undergraduate Research Experience | CURE | Personal gains in context of science | Attitudes about science | | Likert-type | Online pre/post | Adaptable, 15 min | Easy – done by author | (46)
Research on the Integrated Science Curriculum | RISC | Personal gains in context of science | Attitudes about science – interdisciplinary | | Likert-type | Online pre/post | Adaptable | Easy – done by author |
Science Motivation Questionnaire II | SMQII | Personal gains in context of science | Motivation to learn science | University of Georgia | Likert-type | In class or online | 25 items | Easy | (29)
Survey of Undergraduate Research Experiences | SURE | Personal gains in context of science | Personal gains (UREs) | Multiple UG populations | Likert-type | Online pre/post | 44 items | Easy – done by author | (45)
Undergraduate Student Self-Assessment Instrument | URSSA | Personal gains in context of science | Personal gains in context of science (UREs) | Multiple UG populations | Likert-type | Online post | Adaptable | Easy | (65)
Student Assessment of Learning Gains | SALG | Personal gains in context of science | Student perception of inquiry labs | Multiple UG populations | Likert-type | Online | 5 items | Easy | (55)
Molecular Biology Data Analysis Test | MBDAT | Process of science | Data analysis | Multiple UG populations | Multiple choice | In class and online pre/post | 20 items | | (52)
Results Analysis Concept Inventory | RACI | Process of science | Data analysis | University of British Columbia | Multiple choice | In class pre/post | 12 items | Easy |
Biological Experimental Design Concept Inventory | BEDCI | Process of science | Experimental design | University of British Columbia | Multiple choice | In class pre/post | 14 items, 18 min | Easy | (23)
Expanded Experimental Design Ability Test | E-EDAT | Process of science | Experimental design | University of Washington | Open-ended | In class and online pre/post | | Moderate – rubric | (15)
Experimental Design – First Year Undergraduate | | Process of science | Experimental design | University of British Columbia | Multiple choice | In class pre/post | 18 items | Easy |
Experimental Design Ability Test | EDAT | Process of science | Experimental design | Bowling Green State | Open-ended | In class pre/post | 10–12 min | Moderate – rubric | (58)
Rubric for Experimental Design | RED | Process of science | Experimental design | Midwestern Research University | Writing sample | Online pre/post | NA | Moderate – rubric | (21)
Test of Scientific Literacy Skills | TOSLS | Process of science | Scientific literacy | Multiple populations | Multiple choice | In class pre/post | 30 min | Easy | (30)
Classroom Test of Scientific Reasoning | CTSR | Process of science | Scientific reasoning | Multiple populations | Open-ended, multiple choice | | 13 items | Moderate | (43)
The Rubric for Science Writing | Rubric | Process of science | Scientific reasoning/science communication | University of South Carolina | Writing sample | Out of class | NA | Moderate – rubric | (62)
National Survey of Student Engagement | NSSE a | Student engagement | Student engagement | Multiple ages & populations | Likert-type | Online | 70 items | Easy | (41)

a Indicates that the instrument has a fee for use.

b Instrument is to be used in conjunction with the POS.

Blank cells indicate that the information was not specified.

NA = not applicable; UG = undergraduate; URE = undergraduate research experience.

Included in the table are primary references for each instrument, so instructors can find more information on how each instrument was developed as well as the efforts made by its developers to ensure that the instrument produces valid and reliable data (25, 47, 58). The American Educational Research Association has outlined a set of best-practice standards for educational and psychological measures (2); assessment instruments should ideally adhere to these standards and provide evidence in support of the validity and reliability of the resulting data. It is important to note that no assessment instrument is validated in general: it is only valid for the specific population on which it was tested. An assessment instrument developed with a high school population may not perform the same way with a college-level population, and even an instrument developed with students at a research-intensive institution may not perform the same way with a community college student population. There is no "one size fits all" in assessment, nor is there ever a perfect assessment instrument (5). Each instrument has pros and cons depending on one's specific intentions for using it, which is why it is critical that instructors judiciously evaluate the differences among instruments before choosing one.

If no existing assessment fits, then design your own

Existing assessment instruments may not be specific enough to align with an instructor’s anticipated learning outcomes. Instructors may want to measure learning outcomes that are specific to the CURE, and using an instrument that is not related to the specific context of the CURE may not be able to achieve that. We recommend that instructors consider working with education researchers to design their own assessments that are situated in the context of the CURE (e.g., 38), and/or use standard quizzes and exams as a measure of expected student CURE outcomes (e.g., 10). The choices instructors make will depend on the intention of their assessment efforts: is the intent to make a formative or summative assessment? What does the instructor intend to learn from and do with the measured outcome data? For example, do they wish to use the results to advance their own knowledge of the course success, for a research study, or a programmatic evaluation?

Step 3: Interpret the results of the assessment

Once instructors administer an assessment of their CURE, it is important to be careful in interpreting the results. In a CURE, students are often doing many different things, and it is difficult to attribute a learning gain to one particular aspect of the course (16). Further, a survey that asks students how well they think they can analyze data is measuring student perception of their ability to analyze data (e.g., 12), which could be different from their actual ability (e.g., 38) or the instructor's perception of that student's ability (e.g., 55). Thus, it is important that instructors not overgeneralize the results of their assessment, and that they are aware of the limitations of student self-reported gains (9, 40). Yet student perceptions are not always limitations: student self-report can be the best way to measure learning goals such as confidence, sense of belonging, and interest in pursuing research, where it is appropriate to document how a student feels (15). Further, the instructor may want to know what students think they are gaining from the course. For example, if an instructor's expected learning outcome is for students to learn to interpret scientific figures, they could answer the question using a multi-pronged approach, measuring student perception of ability paired with a measure of actual ability. To achieve this, an instructor could use or design an assessment that asks students to self-report on their perceived ability to interpret scientific graphs, then pair that self-report instrument with an assessment testing students' actual ability to interpret scientific graphs. Using this approach, an instructor could learn whether there is alignment between what the instructor thinks the students are learning, what the students think they are learning, and whether the students are actually learning the skill. 
Thus, the attributes and limitations of assessment instruments and strategies are dependent on both the learning outcomes one wants to measure and the conclusions one wants to draw from the data.
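For pre/post instruments such as several of those listed in Table 1, one widely used summary of improvement is the normalized gain: the fraction of the possible improvement (from a student's pretest score up to the maximum score) that the student actually realized. A minimal sketch, with invented scores rather than real data:

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake-style normalized gain: fraction of the possible improvement
    (from the pretest score up to the maximum) actually realized."""
    if pre >= max_score:
        return float("nan")  # no room to improve; gain is undefined
    return (post - pre) / (max_score - pre)

# Hypothetical pre/post scores for a small class (invented for illustration)
pre_scores = [40.0, 55.0, 70.0]
post_scores = [70.0, 64.0, 85.0]
gains = [normalized_gain(pre, post) for pre, post in zip(pre_scores, post_scores)]
avg_gain = sum(gains) / len(gains)
```

A class-average normalized gain lets an instructor compare improvement across students who started at different pretest levels, though, as noted above, it says nothing about which aspect of the CURE produced the change.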

Putting the steps in action: An example of alignment of goals and assessment

Here we present guiding questions for instructors to ask when approaching an assessment instrument. These steps meet minimum expectations for using best practices in evaluating CURE outcomes.

a) How is this instrument aligned with the learning goals of my CURE? Is it specifically aimed at measuring this particular outcome?

b) What populations has it been previously administered to? Does that student population reasonably match mine?

c) What is the time needed to use, administer, and analyze the results of the instrument? Is this feasible within my course timeline and personal availability?

Possible follow-up question:

d) Do I aim to use the assessment results outside of my own classroom and/or try to publish them? If no, then validity and reliability measures may be less critical for an interpretation of the results. If yes, what validity and reliability measures have been performed and reported for this instrument, and should I consider collaborating with an education researcher?

Assessing student understanding of experimental design

To help instructors determine how to assess their CUREs, we have identified one of the most commonly expected learning outcomes from CUREs: students will learn how to design scientific experiments. We conducted phone surveys with faculty members whom we had previously interviewed regarding their experiences developing and teaching their own CURE (55). We asked them to identify whether they thought students gained particular outcomes as a result of participating in their CURE (see Appendix 1 for details). Of the 35 surveys conducted, 86% of faculty participants reported that they perceived that students learned to design scientific experiments as a result of the CURE. Using the steps outlined in this essay, we provide an example of how to begin to assess this learning outcome by considering the pros and cons of different assessment instruments (all cited in Table 1). The instruments we discuss below have the explicit primary aim of evaluating student understanding of the "Process of Science" and the secondary aim of evaluating student understanding of "Experimental Design." Further, the instruments were developed using undergraduate students at large, public research universities. One of the first instruments developed to measure students' ability to design biology experiments was the Experimental Design Ability Test (EDAT) (56). The EDAT is a pre-post instrument, intended to be administered at the beginning and end of a course or module to evaluate gains in student ability. The EDAT consists of open-ended prompts asking students to design an experiment: the pretest prompt is focused on designing an investigation into the benefits of ginseng supplements, and the posttest prompt asks students to design an investigation into the impact of iron supplements on women's memory. Student written responses to both prompts are evaluated using a rubric. 
This assessment was developed using a nonmajors biology class and has since been adapted for a majors class; the revised instrument is the Expanded Experimental Design Ability Test (E-EDAT) (14). The E-EDAT has the advantage that its revised rubric gives a more detailed report of student understanding, as it allows for intermediate evaluation of student ability to design experiments. However, the open-ended format of both of these assessments means that grading student responses using the designated rubrics may be too time-consuming for many instructors. Additionally, the prompts of the EDATs are specific to human-focused medical scenarios, which may not reflect the type of experimental design that students are learning in their CURE. Another pre-post assessment instrument, the Rubric for Experimental Design (RED), is a way to measure changes in student conceptions about experimental design (20). The RED is a rubric that can be used to evaluate student writing samples on experimental design, but it is not associated with specific predetermined questions (20). Because many CUREs adopt a model in which students write a final paper in the form of a grant proposal or journal article, and the RED requires only that the instructor have a student writing sample in place, the RED may be appropriate for such courses. Yet, similar to the EDAT/E-EDAT, scoring this instrument is time-consuming, and the writing samples need to be coded by more than one rater to achieve inter-rater reliability, which may be a limitation for some instructors. However, instructors using the RED have the advantage of a rubric that targets five common areas where students traditionally struggle with experimental design, thus potentially helping an instructor disaggregate specific areas of student misconceptions and understanding of experimental design principles. 
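Where two raters score the same writing samples, as rubric-based instruments like the RED require, inter-rater agreement is commonly summarized with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch for two raters assigning categorical rubric scores; the scores below are invented for illustration, not drawn from any published instrument:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    rater_a and rater_b are equal-length sequences of categorical rubric
    scores, one entry per student writing sample.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of samples where the two raters agree exactly
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal score frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n**2
    if expected == 1.0:
        return 1.0  # both raters used a single category throughout
    return (observed - expected) / (1 - expected)
```

A kappa of 1.0 indicates perfect agreement, while values near 0 indicate agreement no better than chance, suggesting the raters should reconcile their interpretations of the rubric before scoring the full set of samples.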
A pre-post, multiple-choice concept inventory, the Biological Experimental Design Concept Inventory (BEDCI), was developed to test student ability to design experiments (21). The BEDCI has the advantage that it is easy to score since it can be automated and the instructor can quickly identify student gains on the test, but a disadvantage is that the BEDCI consists of a fixed set of questions. The specific context of each question could impact how students perform on the assessment, and the context of these questions may not overlap with the context of the CURE. Additionally, the BEDCI is to be presented as a PowerPoint during class, so instructors need to allocate in-class time for administration. These instruments may help an instructor to understand whether their students have achieved some level of experimental design ability, but the majority of these instruments are not specific to the context of any given CURE. Thus, there may be specific learning goals related to the experimental design context of the particular CURE that an instructor wants to probe. An additional and/or alternative approach is to design a test of experimental design ability using the specific context of the CURE. While we often use the term “experimental design” to include any aspect of designing an experiment in science, aspects of experimental design in a molecular biology CURE are different than aspects of experimental design in a field ecology CURE. Further, even if students can design an experiment in one context, this does not mean that they can design an appropriate experiment in another context, nor should they necessarily be expected to do so, particularly if understanding nuances of both experimental systems was not a predetermined learning goal. Instructors may miss important gains in their students’ abilities to design relevant experiments if they are using a generic experimental design assessment instrument (38). 
Perhaps students can design experiments in the specific context of their CURE (e.g., design an experiment to test the levels of protein in yeast cells in a molecular biology CURE versus design an experiment to identify abiotic factors influencing the presence of yeast in a flowering plant’s nectar in an ecology CURE), but they are unable to effectively design experiments in the converse scientific context. Even skilled scientists can have difficulty in designing an experiment in an area that is not in their specific domain of biological expertise. It may be important to test students using their specific CURE context in order to maximize the chance of seeing an outcome effect that can be credibly attributed to the CURE. It is unlikely that a previously developed assessment instrument will be directly aligned with expected outcomes from one’s CURE, so we encourage instructors to work with education researchers to develop situated assessments that are appropriate for each specific CURE context (e.g., 38).

CONCLUSION

As more CUREs are developed and implemented in biology lab courses across the country, instructors are becoming increasingly interested in assessing the impact of their CUREs. Although there is complexity in assessment, the aim of this paper is not to overwhelm instructors, but instead to offer a basic assessment strategy: identify anticipated CURE learning outcomes, select an assessment instrument that is aligned with the learning outcomes, and cautiously interpret the results of the assessment instrument. We also present a table of previously developed assessment instruments that could be of use to CURE instructors depending on their learning goals, student populations, and course context. While this is only the tip of the iceberg as far as how instructors can assess their CUREs, and we anticipate that many more assessment instruments will be developed in the coming years, we hope that this table can provide instructors with a starting point for considering how to assess their CUREs. We encourage instructors to be thoughtful and critical in their assessment of their CUREs as we continue to learn more about the impact of these curricula on students.

Appendix 1: Faculty perceptions of student gains from participation in CUREs

Journal:  CBE Life Sci Educ       Date:  2014       Impact factor: 3.325

9.  Measuring Networking as an Outcome Variable in Undergraduate Research Experiences.

Authors:  David I Hanauer; Graham Hatfull
Journal:  CBE Life Sci Educ       Date:  2015       Impact factor: 3.325

10.  The Undergraduate Research Student Self-Assessment (URSSA): Validation for Use in Program Evaluation.

Authors:  Timothy J Weston; Sandra L Laursen
Journal:  CBE Life Sci Educ       Date:  2015       Impact factor: 3.325

Cited by:  18 in total

1.  Fermentation revival in the classroom: investigating ancient human practices as CUREs for modern diseases.

Authors:  Jennifer K Lyles; Monika Oli
Journal:  FEMS Microbiol Lett       Date:  2020-11-23       Impact factor: 2.742

2.  Collaborating with Undergraduates To Contribute to Biochemistry Community Resources.

Authors:  Kathryn L Haas; Jennifer M Heemstra; Marnix H Medema; Louise K Charkoudian
Journal:  Biochemistry       Date:  2017-11-02       Impact factor: 3.162

3.  Define Your Goals Before You Design a CURE: A Call to Use Backward Design in Planning Course-Based Undergraduate Research Experiences.

Authors:  Katelyn M Cooper; Paula A G Soneral; Sara E Brownell
Journal:  J Microbiol Biol Educ       Date:  2017-05-26

4.  A Call to Develop Course-Based Undergraduate Research Experiences (CUREs) for Nonmajors Courses.

Authors:  Cissy J Ballen; Jessamina E Blum; Sara Brownell; Sadie Hebert; James Hewlett; Joanna R Klein; Erik A McDonald; Denise L Monti; Stephen C Nold; Krista E Slemmons; Paula A G Soneral; Sehoya Cotner
Journal:  CBE Life Sci Educ       Date:  2017       Impact factor: 3.325

5.  Fear of the CURE: A Beginner's Guide to Overcoming Barriers in Creating a Course-Based Undergraduate Research Experience.

Authors:  Brinda Govindan; Sarah Pickett; Blake Riggs
Journal:  J Microbiol Biol Educ       Date:  2020-05-29

6.  A standardized workflow for submitting data to the Minimum Information about a Biosynthetic Gene cluster (MIBiG) repository: prospects for research-based educational experiences.

Authors:  Samuel C Epstein; Louise K Charkoudian; Marnix H Medema
Journal:  Stand Genomic Sci       Date:  2018-07-11

7.  Development of a Tool to Assess Interrelated Experimental Design in Introductory Biology.

Authors:  Tess L Killpack; Sara M Fulmer
Journal:  J Microbiol Biol Educ       Date:  2018-10-31

8.  Adding Authenticity to Inquiry in a First-Year, Research-Based, Biology Laboratory Course.

Authors:  Jane L Indorf; Joanna Weremijewicz; David P Janos; Michael S Gaines
Journal:  CBE Life Sci Educ       Date:  2019-09       Impact factor: 3.325

9.  Multi-Institutional, Multidisciplinary Study of the Impact of Course-Based Research Experiences.

Authors:  Catherine M Mader; Christopher W Beck; Wendy H Grillo; Gail P Hollowell; Bettye S Hennington; Nancy L Staub; Veronique A Delesalle; Denise Lello; Robert B Merritt; Gerald D Griffin; Chastity Bradford; Jinghe Mao; Lawrence S Blumer; Sandra L White
Journal:  J Microbiol Biol Educ       Date:  2017-09-01

10.  How to Identify the Research Abilities That Instructors Anticipate Students Will Develop in a Biochemistry Course-Based Undergraduate Research Experience (CURE).

Authors:  Stefan Mark Irby; Nancy J Pelaez; Trevor R Anderson
Journal:  CBE Life Sci Educ       Date:  2018-06       Impact factor: 3.325
