Erika G. Offerdahl, Melody McConnell, Jeffrey Boyer.
Abstract
For decades, formative assessment has been identified as a high-impact instructional practice that positively affects student learning. Education reform documents such as Vision and Change: A Call to Action expressly identify frequent, ongoing formative assessment and feedback as a key instructional practice in student-centered learning environments. Historically, effect sizes between 0.4 and 0.7 have been reported for formative assessment experiments. However, more recent meta-analyses have reported much lower effect sizes. It is unclear whether the variability in reported effects is due to formative assessment as an instructional practice in and of itself, differences in how formative assessment was enacted across studies, or other mitigating factors. We propose that application of a fidelity of implementation (FOI) framework to define the critical components of formative assessment will increase the validity of future impact studies. In this Essay, we apply core principles from the FOI literature to hypothesize about the critical components of formative assessment as a high-impact instructional practice. In doing so, we begin the iterative process through which further research can develop valid and reliable measures of the FOI of formative assessment. Such measures are necessary to empirically determine when, how, and under what conditions formative assessment supports student learning.Entities:
Mesh:
Year: 2018 PMID: 30417756 PMCID: PMC6755885 DOI: 10.1187/cbe.18-02-0029
Source DB: PubMed Journal: CBE Life Sci Educ ISSN: 1931-7913 Impact factor: 3.325
FIGURE 1. Formative assessment as an iterative process through which instructors and students use evidence of student understanding to monitor and generate actionable feedback to support progress toward desired learning outcomes.
TABLE 1. Criteria for characterizing level of support for proposed critical components
| Level of support | Qualifier | Criterion |
|---|---|---|
| 4 | Strong support | Critical component is the focus of a research question in two or more studies using different research methods or in different contexts. |
| 3 | Moderate support | Critical component is the focus of a single study and/or of two or more studies with indirect evidence in support of the component. |
| 2 | Limited support | Critical component is the focus of studies with indirect evidence in support of the component. |
| 1 | Theoretical support established | No more than two studies provide theoretical support; no empirical studies exist. |
TABLE 2. Proposed critical components of formative assessment for which there is strong (empirical or theoretical) support as defined in Table 1
| Category | Critical component | Description |
|---|---|---|
| Structural | Learning objectives | Clear criteria for success are identified. |
| | Formative assessment prompts | Mechanisms for eliciting the range and extent of students’ understanding are employed. |
| | Evidence of student understanding | Range and extent of student understanding is made explicit to teacher and student. |
| | Feedback | A comparison of the learner’s current state with the criteria for success is used to generate timely, relevant, and actionable feedback. |
| | Skills for self-regulated learning | Students know how to identify personal strengths/weaknesses relevant to the instructional task and how to create and monitor a plan for completing a learning task. |
| | Personal pedagogical content knowledge (PCK) | Instructors possess discipline-specific and pedagogical knowledge for designing and reflecting on instruction of particular topics. |
| | Prior knowledge | Students’ prior knowledge is activated and interacts with how they learn new information. |
| Instructional | Reveal student understanding | The student/class willingly provides an appropriate response to the formative assessment prompt. |
| | Personal pedagogical content knowledge and skills (PCK&S) | The instructor uses particular discipline-specific knowledge and pedagogical skills to diagnose learning of a particular topic and provide feedback in a particular way to particular students. |
| | Diagnosis of in-progress learning | The instructor and/or student uses the formative assessment prompt and learning outcome to diagnose the learner’s current state. |
| | Generate feedback | The instructor and/or student generate(s) feedback about the learner’s current state. |
| | Recognize and respond to feedback | The student recognizes and acts on feedback to shape learning. |
TABLE 3. Nonexhaustive list of existing tools that could potentially be used and/or modified to measure aspects of FOI of formative assessment
| Measurement tools | Relevant critical component(s) | References |
|---|---|---|
| Bloom’s taxonomy tools | Learning outcomes, formative assessment prompts | |
| ESRU cycle^a | Formative assessment prompts, evidence of student understanding, feedback | |
| ICAP framework^b | Formative assessment prompts | |
| Project PRIME PCK rubric^c | Personal pedagogical content knowledge | Gardner and Gess-Newsome, 2011 |
| Practical Observation Rubric To Assess Active Learning (PORTAAL) | Learning outcomes, formative assessment prompts, feedback | |
| Concept inventories (e.g., EcoEvo-MAPS) | Prior knowledge | |
| Metacognitive Awareness Inventory | Skills for self-regulated learning | |
^a The ESRU cycle is characterized by an instructor Eliciting evidence of Student thinking, followed by the instructor Recognizing the response and Using it to support learning.
^b The ICAP framework uses student behaviors to characterize cognitive engagement activities as interactive, constructive, active, and/or passive.
^c Project PRIME (Promoting Reform through Instructional Materials that Educate) produced the PCK rubric with support from the National Science Foundation (DRL-0455846).