Abstract
We have created an inventory to characterize the teaching practices used in science and mathematics courses. This inventory can aid instructors and departments in reflecting on their teaching. It has been tested with several hundred university instructors and courses from mathematics and four science disciplines. Most instructors complete the inventory in 10 min or less, and the results allow meaningful comparisons of the teaching used for the different courses and instructors within a department and across different departments. We also show how the inventory results can be used to gauge the extent of use of research-based teaching practices, and we illustrate this with the inventory results for five departments. These results show the high degree of discrimination provided by the inventory, as well as its effectiveness in tracking the increase in the use of research-based teaching practices.
Year: 2014 PMID: 25185237 PMCID: PMC4152215 DOI: 10.1187/cbe.14-02-0023
Source DB: PubMed Journal: CBE Life Sci Educ ISSN: 1931-7913 Impact factor: 3.325
Teaching practices inventory categories

| Category | Description |
|---|---|
| I. | Course information provided (including learning goals or outcomes) |
| II. | Supporting materials provided |
| III. | In-class features and activities |
| IV. | Assignments |
| V. | Feedback and testing |
| VI. | Other (diagnostics, pre–post testing, new methods with measures, etc.) |
| VII. | Training and guidance of TAs |
| VIII. | Collaboration or sharing in teaching |
Abbreviated descriptions of the inventory items that receive points on the rubric, sorted according to general factors that support learning and teacher effectiveness, along with references on their impact^a
| Factor | Practice that supports learning | References on benefits |
|---|---|---|
| Knowledge organization | I. List of topics to be covered | Promising Practice No. 1: Learning Outcomes in |
| | I. List of topic-specific competencies ( | Promising Practice No. 4: Scenario-based Content Organization in |
| | I. List of competencies that are not topic related (critical thinking, problem solving) | |
| | II. Animations, video clips, simulations | |
| | II. Lecture notes or copy of class materials^1 (partial/skeletal or complete) | ^1 |
| | III. Time spent on the | ^2 |
| Long-term memory and reducing cognitive load | II. Worked examples^1 | ^1 |
| | III. Students read/view material on upcoming class and quizzed^2 | ^2 |
| Motivation | I. Affective goals—changing students’ attitudes and perceptions | Chapter 3 in |
| | II. Articles from scientific literature | |
| | III. Discussions on why material useful | |
| | V. Students explicitly encouraged to meet individually with you ( | |
| | VI. Students provided with opportunities to have some control over their learning | |
| Practice | II. Practice or previous years’ exams | Chapter 5 in |
| | III. Number of small-group discussions or problem solving | |
| | III. Demonstrations in which students first predict behavior^1 | ^1 |
| | III. Student presentations | |
| | III. Fraction of class time [not] lecturing | |
| | III. Number of PRS questions posed followed by student–student discussion | |
| | IV. Problem sets/homework assigned and contributing to course grade^2 | ^2 |
| | IV. Paper or project (involving some degree of student control)^3 ( | ^3 |
| Feedback | II. Student wikis or discussion board with significant contribution from instructor/TA | |
| | II. Solutions to homework assignments | |
| | III. Number of times pause to ask for questions | |
| | IV. Assignments with feedback and opportunity to redo work ( | |
| | IV. Students see marked assignments | |
| | IV. Students see assignment answer key and/or marking rubric | |
| | IV. Students see marked midterm exams | |
| | IV. Students see midterm answer keys | |
| | V. Number of midterm exams | |
| | V. Breakdown of course mark | |
| Metacognition | III. Reflective activity at end of class | |
| | VI. Opportunities for self-evaluation | Chapter 7 in |
| Group learning ( | IV. Encouragement for students to work collaboratively on their assignments | Promising Practice No. 2: Organize Students in Small Groups in |
| | IV. Explicit group assignments Also | |
| Connect with student prior knowledge and beliefs | VI. Assessment at beginning of course | |
| | VI. Use of pre–post survey of student interest and/or perceptions ( | |
| Feedback on effectiveness | V. Midterm course evaluation^1 | ^1 |
| | V. Repeated feedback from students^1 | |
| | VI. Use of instructor-independent pre–post test (e.g., concept inventory) | |
| | VI. Use of a consistent measure of learning that is repeated | |
| | VI. New teaching methods with measurements of impact on learning | |
| Gain relevant knowledge and skills | VII. TAs satisfy English-language criteria^1 | ^1 |
| | VII. TAs receive one-half day or more of training^2 | ^2 |
| | VII. Instructor–TA meetings on student learning and difficulties, etc.^2 | |
| | VIII. Used “departmental” course materials | |
| | VIII. Discussed how to teach the course with colleague(s)^3 | ^3 |
| | VIII. Read literature about teaching and learning relevant to this course ( | |
| | VIII. Sat in on colleague's class^3 | |
^aNote that the item descriptions are abbreviated to save space; the full version of the inventory in the Appendix should be consulted to understand fully what each item on the survey is asking. The classification is for the convenience of the reader rather than the result of any factor analysis. Many of the practices represented by a single inventory item contribute via several of the factors listed, and the factors themselves are not orthogonal. We list practices according to a somewhat arbitrary choice as to their single “most important” factor and the most relevant references, noting in italics some of the most important other factors by which that practice contributes. The references listed are not exhaustive and in most cases are reviews that contain many original references. This table does not include 14 commonly used teaching practices that are captured by the inventory to characterize the teaching methods used but are not given points in the scoring rubric, owing to insufficient evidence as to their impact on learning. Superscript numbers in column 2 refer to applicable references in column 3.
ETP scores^a
| Department | n | AVE (SD) | EWA | I | II | III | IV | V | VI | VII | VIII |
|---|---|---|---|---|---|---|---|---|---|---|---|
| D1 | 28 | 33.4 (9.4) | 39.3 | 3.9 | 4.2 | 7.8 | 3.2 | 7.5 | 2.3 | 1.6 | 2.9 |
| D2 | 31 | 32.6 (8.5) | 33.6 | 3.7 | 4.5 | 6.1 | 3.3 | 8.1 | 1.6 | 2.3 | 2.9 |
| D3 | 34 | 31.1 (8.9) | 33.8 | 4.4 | 3.9 | 6.6 | 3.5 | 5.9 | 2.1 | 1.7 | 3.1 |
| D4 | 31 | 31.1 (8.2) | 33.3 | 4.0 | 4.1 | 6.7 | 2.7 | 6.6 | 1.6 | 2.0 | 3.4 |
| D5 | 55 | 24.1 (6.5) | 25.2 | 2.7 | 3.1 | 4.0 | 2.1 | 8.3 | 0.7 | 1.1 | 2.1 |
| Maximum possible | | 67 | | 6 | 7 | 15 | 6 | 13 | 10 | 4 | 6 |
| Category SD | | | | 1.7 | 1.4 | 3.0 | 1.5 | 1.8 | 1.7 | 1.3 | 1.5 |

^aAverage and SD (AVE (SD)), enrollment-weighted average (EWA), and category averages, I through VIII, of ETP scores for one term of courses in five departments; n is the number of courses scored. “Enrollment-weighted average” is the weighted average calculated by weighting the score for each course by its respective enrollment.
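The enrollment-weighted average defined in the footnote is a standard weighted mean. A minimal sketch, using hypothetical course scores and enrollments (not data from the tables above):

```python
def enrollment_weighted_average(scores, enrollments):
    """Weighted mean of per-course ETP scores, weighting each course by its enrollment."""
    total_enrollment = sum(enrollments)
    return sum(s * e for s, e in zip(scores, enrollments)) / total_enrollment


# Hypothetical example: three courses; the large 300-student course dominates
scores = [30, 40, 20]
enrollments = [100, 300, 50]
ewa = enrollment_weighted_average(scores, enrollments)  # ~35.6, vs. unweighted mean 30.0
```

This illustrates why EWA can exceed AVE (as for D1 above) when higher-scoring courses tend to have larger enrollments.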
Figure 1. Histograms of the ETP scores for the courses in the five departments. Histogram bins are 5 wide (±2 around the central value). ETP scores are integers.
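The binning in the figure caption (5-wide bins spanning ±2 around a central value, with integer scores) can be sketched as follows; the assumption that bin centers fall on multiples of 5 is ours, not stated in the caption:

```python
def bin_center(score, width=5):
    """Map an integer ETP score to the center of its histogram bin.

    Assumes bins of the given width centered on multiples of that width,
    so a bin centered at 35 covers scores 33-37 (center +/- 2 for width 5).
    """
    return width * round(score / width)
```

For integer scores and width 5, `score / width` never lands exactly on .5, so Python's banker's rounding never comes into play.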
Comparison of the teaching practices inventory data for the 2006–2007 and 2012–2013 academic years^a
| | AVE (SD) | EWA | I | II | III | IV | V | VI | VII | VIII |
|---|---|---|---|---|---|---|---|---|---|---|
| D3 2006–2007 | 20.4 (6.2) | 19.2 | 2.3 | 3.4 | 2.9 | 2.5 | 6.0 | 0.7 | 0.8 | 2.0 |
| D3 2012–2013 | 27.3 (6.8) | 28.9 | 4.4 | 3.8 | 4.5 | 3.5 | 5.5 | 1.2 | 0.9 | 3.5 |
^aAverage and SD (AVE (SD)), enrollment-weighted average (EWA), and category averages for department D3. The scoring is lower than in the preceding ETP scores table, because it is based only on the subset of 40 scored questions common to both versions of the inventory. SEs for the category scores are 0.5 for category III and 0.3 for all the others.