
A comparison of preservice teacher perceptions of instructor video and text-based feedback.

Erik Kormos

Abstract

This experimental quantitative study investigated 48 preservice teachers from a Midwestern state and their perceptions of instructor feedback on a common assessment. Instructor feedback was provided on a program-wide lesson plan in traditional text-based format (n = 26) or digital video-based format (n = 22). Feedback competence was assessed using four categories: (1) student perceptions of instructor; (2) student perceptions of knowledge acquisition and learning; (3) student perceptions of personal involvement in the course; and (4) student perceptions of personal motivation. An independent samples t test found participants who received video-based feedback reported higher levels of perceived instructor effectiveness, skill development, intrinsic motivation, and preparedness to enter the profession. The findings extend the literature by demonstrating significant differences in perceived feedback effectiveness based on delivery type. Results underscore the continued need for more personalized feedback for preservice educators to meet course outcomes and improve teaching practices.
© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2022.


Keywords:  Educational technology; Lesson planning; Multimedia feedback; Online assessment; Teacher education

Year:  2022        PMID: 35966814      PMCID: PMC9362620          DOI: 10.1007/s43545-022-00413-9

Source DB:  PubMed          Journal:  SN Soc Sci        ISSN: 2662-9283


Introduction

Feedback for preservice teachers is an important and influential element in preparing future educators. The ability to provide high-quality and constructive feedback should be a foundational piece of any teacher preparation program. Feedback from instructors and peers can help teachers learn, grow, and improve as they become experts (Snead and Freiberg 2019). Feedback is an essential element of preparation and should be a continuous activity long after graduation. Traditionally, it may be delivered in a face-to-face setting after a supervisor observation during a practicum experience. However, with the onset of digital video, combined with the COVID-19 pandemic, this is not always a realistic possibility. A 2020 poll conducted by the National Parents Union found that 58% of American K-12 students were learning entirely online, with an additional 18% receiving a blend of face-to-face and remote instruction. In total, less than a quarter of U.S. public schools were fully in-person (Barnum 2020). Digital video-based feedback can offer a solution at the intersection of face-to-face and online instruction. This type of feedback allows sessions that are not time and location dependent while providing more flexibility for both the preservice educator and their instructors (Prilop et al. 2020). Although not driven solely by the pandemic, video-based feedback can provide multiple alternatives for preservice teachers both inside and outside of a traditional environment. The purpose of this study is to explore preservice teacher perceptions about the quality of instructor feedback provided on a common assessment. This lesson plan is a required element of the university curriculum and an artifact for accreditation. More specifically, the research aimed to explore student perspectives of text-based and video-based feedback on lesson plans.
The goal of this research was to better understand how to create effective feedback for preservice teachers as they deliver lesson plans to provide a more valuable learning experience that translates into the field. Unlike prior studies, students received feedback in two forms: recorded video or transcription of an audio recording to be delivered in text-based format. Prior research focused on typed or written feedback, audio, and video feedback (Christ et al. 2017; Grainger 2020; Snead and Freiberg 2019).

Literature review

Feedback in teacher education programs

The relationship between feedback and performance is one of the most impactful ways to promote achievement across a wide variety of situations (Hattie and Timperley 2007; Narciss 2013). It plays a vital role and provides information related to current performance in an effort to achieve continuous improvement and standards related to program requirements. Prior studies (Fukkink et al. 2011; McCarthy 2015; Pinter et al. 2015) revealed how feedback is essential to the improvement of performance and self-confidence. When it comes to preservice teachers, providing teaching practice is not the only requirement to become a skilled educator. Along with peer feedback, it is important for students receiving instructor evaluations of their own work to learn more from a former or current educator in the field (Christ et al. 2017). Hammerness et al. (2005) found teachers who actively sought feedback to develop teaching expertise had students with higher levels of academic achievement. Feedback is an integral part of preservice education and current teacher professional development. When done correctly, it can lead to enhanced self-evaluation by the preservice teacher and cause them to evaluate their pedagogical decisions with the intention of improving their teaching (Tripp and Rich 2012). Previous research (Allen et al. 2015; Fawzi and Alddabous 2019) explored the significant impact of instructor feedback on knowledge, practice, attitudes, and student performance. The researchers found that simply providing feedback does not guarantee improved performance in a practicum or full-time setting. In order for preservice teachers to elevate their competence and skill, high-quality and personalized feedback is required (Sailors and Price 2015). Prilop et al. (2020) defined feedback competence in teacher assessment as “the skill to convey critical assessments of a teacher's classroom practice to initiate reflection.” Feedback needs to be a criteria-based evaluation of a teaching lesson plan or performance that identifies strengths and weaknesses. When communicating with a student, instructor feedback needs to articulate this information to preservice teachers in a constructive manner (Grainger 2020). When performing evaluations, instructors can employ a balance of expert feedback and novice feedback. For expert-type feedback, it is important to utilize the criteria and provide situation-specific comments while using a first-person perspective (Henderson and Phillips 2015). However, novices may prefer to receive feedback that contains reflective questions, general examples, and specific suggestions for improvement (Prins et al. 2006).

Characteristics of effective instructor feedback

Education courses are essential to the development of student teachers and incorporation of a program’s curriculum with hands-on experience in a classroom. The purpose of instructor feedback is to fill gaps between teacher education theory and practice in an easy-to-understand, meaningful, and relevant manner (Fawzi and Alddabous 2019). Since these courses play such an important role, it is essential for preservice teachers to receive feedback to become professionally qualified in not just content knowledge, but other aspects such as pedagogy and the implementation of educational technology (Hung 2016). When executed correctly, it can provide a platform to raise teacher awareness, promote differentiation, and foster positive change in teaching practices (Agudo and de Dios 2017). Numerous studies (Akerson et al. 2017) found high-quality feedback related to lesson planning played a fundamental role in helping preservice teachers grow professionally and develop an understanding of classroom teaching. The efficacy of feedback is driven by three aspects: content, function, and presentation (Narciss 2013). Further, Hattie and Timperley (2007) asserted feedback needs to answer three essential questions: Where am I going?; How am I going?; Where to next? Despite prior research, no clear consensus exists related to a proper way to accurately measure feedback. Rather, abstract classifications are often applied to measure feedback quality and allow for flexibility in a variety of specific circumstances (Jones et al. 2018). This versatility creates an environment where generic knowledge can be measured and applied in multiple situations without the constraints of domain or task-specific criteria (Gielen and De Wever 2015). Since lesson planning and teaching situations do not always present clear solutions, an assessment of validity and reliability would be difficult to utilize. 
When evaluating preservice teachers, instructors should do so in terms of content, licensure area, and/or style criteria within the framework of a specific assignment or observation (Crichton and Valdera Gil 2015). Feedback quality can be measured in a number of ways within the teacher education curriculum. Sluijsmans et al. (2002) identified five characteristics of effective feedback to improve preservice teacher preparedness. First, instructor comments should be appropriate for the specific context and scope of the assignment. For example, an instructor needs to be able to critique a lesson plan based on defined criteria from either the course or program (Ellis and Loughland 2017). Next, the instructor needs to be capable of explaining their judgments and identifying specific examples in their assessment (Gielen and De Wever 2015). Third, constructive suggestions should be included and presented from the perspective of a tutoring component. This is important to articulate additional information besides evaluation of aspects such as pedagogy, mistakes, and how to proceed or adapt teaching strategies (Narciss 2013). Next, feedback should contain thought-provoking questions which aim to increase active engagement with an assignment (Nicol and Macfarlane-Dick 2006). Lastly, effective feedback provides positive and negative comments. Both types can provide valuable information to enhance future teaching situations (Jons 2019). Also, high-quality feedback should be spoken or written in the first person and follow a clear organizational style (Prins et al. 2006). Regardless of whether it is provided in a multimedia or text-based format, it should be constructive, supportive, and incorporate theory and practice (Agudo and de Dios 2017).

Text-based feedback

In general, instructor feedback can be delivered in multiple formats such as text, audio, and video for synchronous and asynchronous learning. With the rise of online instruction, explicit corrections via synchronous text-based instruction proved effective in learning development (Yilmaz 2012). Prior research provided evidence that text-based communication offers advantages compared to audio and video options. Rassaei (2019) found that while instructors can deliver effective feedback across multiple modalities, new developments in video sharing platforms such as Zoom and Skype allow for audio transcriptions of recordings. This ability can provide further detail and allow learners to scan for main points rather than listening to an entire audio or video file (Snead and Freiberg 2019). Despite its benefits, prior research found distinct differences between the potential of text-based feedback and that of other media. For example, while a learner can use spelling and morphological cues from text-based feedback, it often lacks the human presence and social aspect of teacher–student communication (Sato and Loewen 2018). Ice et al. (2007) argued text-based communication lacks nuance and is quickly being overshadowed by the social element of computer-mediated interaction; this lack of nuance furthers a loss of meaning and increases learner dissatisfaction.

Video-based feedback

Teacher education programs provide an opportunity for video-based feedback on lesson planning and field experiences. Tripp and Rich (2012) indicated instructor feedback in these situations becomes more focused and provides greater context when video is incorporated. Participants described video feedback as more specific, with suggestions relevant to the assignment. Feedback on lesson plans and teaching without video can be abstract and leave students with a lack of clarity in terms of what they did right and wrong (García and Pons-Seguí 2020). As opposed to text-based feedback, video tended to be more dialogic and helped students understand the entire context of instructor comments. Hollingsworth and Clarke (2017) found this relationship to be present in video-based feedback for in-service teachers. However, there is a lack of literature related to preservice teacher lesson planning before they enter the classroom. Instructors, cooperating teachers, and supervisors play significant roles and offer unique perspectives in the development of preservice teachers. Participants in prior research stated they trusted a supervisor's opinion more than their own (Baecher et al. 2018; Vertemara and Flushman 2017). However, the use of video as a feedback tool can impact the experience novice teachers receive. Preservice teachers indicated face-to-face feedback often lacked structure, but through the usage of video support, it became more specific and in-depth (Perkoski 2017). Preservice teachers in the classroom can benefit from video-based feedback in a variety of ways. They can watch recordings of their own teaching, as well as their peers, which allows them to discuss their teaching practice not only with each other but with experienced teachers and supervisors (Snead and Freiberg 2019). Previous research (Hollingsworth and Clarke 2017; Tripp and Rich 2012) found digital video assessment allowed for more accurate and probing reviews of teaching instances.
Video feedback also provides an opportunity for preservice teachers to identify specific areas for improvement. During a practicum, they can watch the video of their teaching lesson, pause and fast forward, and identify specific teaching situations to improve or reflect upon (Ellis and Loughland 2017). Whether it is for teachers preparing their first lesson or those conducting a practicum, video-based feedback can be an effective mentoring resource.

Theoretical framework

A variety of theoretical frameworks for examining technology integration have become prevalent at various times, each with its own specific purpose or intended context for utilization. Notable frameworks include the RAT Model and SAMR Model (Hanover Research 2013). First introduced by Hughes (2005), the acronym RAT stands for replacement, amplification, and transformation, the three levels of technology implementation described in this framework. Hew and Brush (2007) described the model as a channel to promote an understanding of technology-supported pedagogy. At the replacement level, a digital technology substitutes for another tool, such as the highlighting of text function in Google Docs or Microsoft Word in lieu of circling on a printed worksheet (Hughes et al. 2006). Although the specific assignment remained unchanged, the tool or medium for demonstration of understanding being employed is different. The amplification step occurs when digital technology serves as a way to enhance productivity and efficiency, such as students utilizing Google Docs to provide written feedback to peers in the comments instead of handwritten feedback on the margins of the paper where students reply to each other in real time (Castro 2018). Although the task remains fundamentally as it was prior to technology being implemented, the capabilities provide greater benefit for teaching and learning than if technology was omitted from the learning process (Kimmons et al. 2015). Lastly, at the transformation level, instructional technology was used to modify teaching practices, student learning, and the way feedback is presented (Mulder 2017). Browser extensions, such as Mote for Google Chrome, allow students to leave audio feedback as well in Google Docs and Slides. While the RAT framework may be helpful to describe classroom technology integration, it has yet to find widespread adoption with only a few publications referencing the framework. 
One possibility is that multiple models were developed during the same time, such as the SAMR Model, which proved more popular among researchers and practitioners (Green 2014). After evaluation of both models, it was apparent the SAMR model was a better fit to analyze video and written instructor feedback. The SAMR model was first presented by Puentedura (2006, 2014) and served as the theoretical framework for this study. Although similar to the RAT model, SAMR features four levels of technology usage rather than three: substitution, augmentation, modification, and redefinition. The SAMR model examines technology usage as a substitute for another tool to facilitate augmentation, task modification, and task completion roles. The following descriptions of the four levels were adapted from Puentedura (2006, 2014). During the substitution step, one technology replaces another, such as infographic software replacing a traditional poster board-based poster. Although the task assigned to students fundamentally remains the same, the tools are different and now allow for creating in a permanent, digital form. For augmentation, the instructional technology allows for functional improvements such as spellcheck, grammar check, and the ability to copy and paste in an infographic tool such as Canva or Piktochart, which is not possible using poster board alone. Although these features may simplify the workload, they do not fundamentally change the assignment itself. In the third step, the employment of modification technologies allows for the redesign of the task or feedback provided to students (McGinnis 2019). For example, students can interact and leave audio or video replies to an instructor over a particular assignment, such as a lesson plan or teaching observation. Furthermore, video technologies such as Zoom and Microsoft Teams allow for follow-up meetings to occur regardless of place and time, adding flexibility and transforming traditional feedback.
Lastly, at the redefinition level, the task assigned has now been completely transformed and no longer resembles the original task itself (Pfaffe 2017). For instance, by utilizing video feedback, instructors can illustrate on and share a screen, leave timestamps for students to review their work, as well as provide an archived recording for preservice teachers to review at a later date. In doing so, work assigned to preservice teachers can be transformed with the adaptation and classification of technology applications to produce effective teaching candidates.

Methodology

This experimental study compared preservice teacher perceptions of instructor video and text-based feedback. Participants’ experience with both assessment types occurred upon submission of a common assessment within the teacher preparation program. The study considered whether changes are seen in perceptions of feedback type for each of the following subscales: (1) student perceptions of instructor; (2) student perceptions of knowledge acquisition and learning; (3) student perceptions of personal involvement in the course; and (4) student perceptions of personal motivation. The study examines the following research questions: RQ1: What is the relationship between feedback type and student perceptions of their instructor? RQ2: What is the relationship between feedback type and students’ perceived skill acquisition? RQ3: Are there any significant differences in student perceptions of course focus between feedback types? RQ4: Are there any significant differences in student perceptions of course motivation between feedback types?

Participants

The participants were teacher candidates at a small, private, rural university in the Midwest region of the United States enrolled in a 3-credit educational technology course during a 15-week semester. The course (EDCI 232) is required for all education majors and totaled approximately 60 students. Each course section was predominantly online with a weekly face-to-face session. All sections utilized Blackboard as the learning management system. Blackboard was chosen for its grading efficiency as well as its video-enabled communication tools for providing instructor feedback on student submissions. Before commencing data collection, approval for this study was obtained from the local institutional review board. Non-probability convenience sampling was used: all of the approximately 60 teacher candidates enrolled for practicum experiences in the spring 2020 semester were invited to take part in the study. Forty-eight individuals provided informed consent and responded to the post-feedback survey. Characteristics of the participants are provided in Table 1.
Table 1

Participant characteristics

Variable                        Total (N = 48)
Age in years
  Mean ± SD                     23.06 ± 4.15
  Min–Max                       18–31
Gender
  Female                        29
  Male                          19
Preservice training program
  Primary                       27
  Middle grades                 11
  Secondary                     3
  Multi-age                     7

Data analysis

Data were transferred to Google Sheets directly from the Google Form, downloaded as an Excel document, and uploaded to the Statistical Package for the Social Sciences (SPSS) for analysis. Descriptive statistics, including the mean and standard deviation, were computed for each question item. For inferential statistics, an independent samples t test examined the relationship between type of instructor feedback and student perceptions. After descriptive statistics and frequency tables were generated for demographic variables, Cronbach’s alpha was calculated for each subscale included in the instrument to determine the internal consistency of items and gauge their reliability. The results show the instrument reached an acceptable level of reliability for all elements: student perceptions of instructor (α = 0.77), student perceptions of knowledge acquisition and learning (α = 0.71), student perceptions of personal involvement in the course (α = 0.88), and student perceptions of personal motivation (α = 0.92).
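The analysis pipeline above (subscale reliability via Cronbach’s alpha, then an independent samples t test on group scores) can be sketched in Python. This is an illustration only: the study used SPSS, and the data below are simulated Likert responses with hypothetical group means, not the study’s data; `cronbach_alpha` is a hand-rolled helper based on the standard formula.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of composite scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Simulated 1-5 Likert responses for a 5-item subscale: text group (n = 26)
# and video group (n = 22), with hypothetical group means
text = np.clip(rng.normal(3.8, 1.0, size=(26, 5)).round(), 1, 5)
video = np.clip(rng.normal(4.7, 0.5, size=(22, 5)).round(), 1, 5)

alpha = cronbach_alpha(np.vstack([text, video]))
# Independent samples t test on the composite (mean) subscale scores
t, p = stats.ttest_ind(video.mean(axis=1), text.mean(axis=1))
```

With real survey data, each subscale’s item columns would replace the simulated matrices, and alpha would be computed per subscale as described above.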

Instrument

A self-inventory questionnaire examined preservice teacher perceptions of instructor feedback on a common curriculum core assessment. Participants were university students enrolled in a teacher education program at a medium-sized, private Midwestern university. Respondents were enrolled in courses that comprised the core curriculum of the program taught by the same instructor. Data collection occurred at the completion of the courses after students received instructor feedback on their lesson plan. Feedback was based on the College of Education’s lesson plan rubric. The researcher used Google Forms as the platform to collect student responses. Participants were randomly assigned into two groups. Group A received feedback using text-based communication in the university’s learning management system (LMS). Group B received an instructor-narrated video recording with audio within the LMS. Following the assessment, the instructor sent an email to inform students their lesson plan feedback was available. The email contained a hyperlink to the survey and provided respondents with an opportunity to earn three bonus points on the assignment if they participated. The survey remained open for 14 days and students were sent a reminder email at the end of the first week. In total, the survey was administered to preservice educator courses and included statements from previous studies and instruments (Perkoski 2017). The investigation focused on four research objectives: (1) preservice teacher perceptions of instructor; (2) preservice teacher perceptions of knowledge acquisition and learning; (3) preservice teacher perceptions of personal involvement in the course; (4) preservice teacher perceptions of personal motivation. The first research objective included statements related to instructor approachability, expertise, participation, and attitude toward students. Objective two focused on the understanding, clarity of feedback, knowledge level, and recall. 
The third objective examined comfort level and participation. Lastly, objective four examined motivation for the subject topic, education courses, and teaching as a career. The survey utilized a five-point Likert scale after each item that ranged from 1 (Strongly Disagree) to 5 (Strongly Agree). In an analysis of Likert data types, Boone and Boone (2012) found Likert scales are effective in measuring character and personality traits based on the measurement of attitudinal scales to form a composite score. The final section consisted of demographic questions including gender, age, licensure program, expected grade, time spent on the course per week, and technology proficiency. Because survey research traditionally involves data collection from a large population, its primary purpose is to acquire characteristics of a large sample of individuals in a short time frame (Ponto 2015). The final question asked for participant email addresses in order to qualify for the extra credit opportunity. All responses remained confidential and were deleted after six months.

Results

For the first objective, an independent samples t test revealed significant differences between groups for all statements (Table 2). Students who received text-based feedback (M = 3.79, SD = 1.25) reported significantly lower scores related to instructor approachability than those in the video-based group (M = 4.86, SD = 0.41), t(48) = 3.63, p < 0.001, d = 0.86. Students who received video-based feedback were significantly more likely to feel closer to their instructor (M = 4.69, SD = 0.48), t(48) = 3.79, p = 0.001, d = 0.83, and to believe the instructor cared about their work (M = 4.86, SD = 0.37), t(48) = 3.80, p < 0.005, d = 0.96. In addition, respondents with video-based feedback (M = 4.82, SD = 0.32), t(48) = 2.28, p < 0.05, d = 0.27, indicated higher levels of perceived instructor knowledge than the text-based group (M = 4.71, SD = 0.47). Lastly, those in the video group (M = 4.76, SD = 0.38), t(48) = 3.12, p < 0.01, d = 0.42, were more likely to feel the instructor was involved in the course than text-based students (M = 4.57, SD = 0.51). These results suggest video feedback does have an effect on preservice teachers' perception of the instructor and their relationship.
Table 2

T test results for student perception of instructor

Statement                                                                  N    Text-based M (SD)    Video w/ audio M (SD)    t test
The feedback made the instructor seem more approachable                    48   3.79 (1.25)          4.86 (0.41)              3.63*
The feedback made me feel closer to the instructor                         48   3.43 (1.16)          4.69 (0.48)              3.79**
The feedback made the instructor seem knowledgeable                        48   4.71 (0.47)          4.82 (0.32)              2.28*
The feedback made me think the instructor was more involved in the course  48   4.57 (0.51)          4.76 (0.38)              3.12**
The feedback made me think the instructor cared about my work              48   4.36 (0.63)          4.86 (0.37)              3.80*

*p < 0.05; **p < 0.01

1 = Strongly Disagree, 2 = Disagree, 3 = Neither Agree nor Disagree, 4 = Agree, 5 = Strongly Agree
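The effect sizes reported alongside each t statistic can be recomputed from the group summary statistics in the tables. The sketch below computes a pooled-SD Cohen's d for the approachability item in Table 2; note that variants such as Hedges' g apply a small-sample correction and yield somewhat different values, which may account for differences from the reported d.

```python
import math

def pooled_cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d using the pooled standard deviation of two independent groups."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

# Approachability item, Table 2: text-based (n = 26) vs. video (n = 22)
d = pooled_cohens_d(3.79, 1.25, 26, 4.86, 0.41, 22)  # ≈ 1.11 with this variant
```

The same helper applies to any row of Tables 2 through 5 by substituting that row's means and standard deviations.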

The second objective measured feedback type and student perception of skill acquisition. Results from an independent samples t test revealed significant differences for five of the six statements (Table 3). The 22 participants who received video-based feedback (M = 4.88, SD = 0.34), compared to the 26 who received text-based feedback (M = 4.36, SD = 0.75), reported significantly better perceptions of clarity and context, t(48) = 2.39, p < 0.05, d = 0.89. A significant difference was also revealed related to the clarity of expectations: the video group (M = 4.84, SD = 0.40) reported significantly higher opinions than the text group (M = 4.29, SD = 0.83), t(48) = 2.17, p < 0.05, d = 0.84. There was a significant difference in the scores for the explanation of grades, as the video group reported higher levels of agreement (M = 4.69, SD = 0.48) than those with written feedback (M = 3.71, SD = 1.20), t(48) = 2.83, p < 0.01, d = 1.07.
Table 3

t-test results for student perception of skill acquisition

Statement                                                                       N    Text-based M (SD)    Video w/ audio M (SD)    t test
The feedback made clearer the context that the instructor was trying to convey  48   4.36 (0.75)          4.88 (0.34)              2.39*
The feedback increased the clarity of the instructor’s expectations             48   4.29 (0.83)          4.84 (0.40)              2.17*
The feedback increased my knowledge of the subject matter                       48   3.92 (0.76)          4.44 (0.81)              2.03
The feedback explained my grade for the lesson plan                             48   3.71 (1.20)          4.69 (0.48)              2.83*
The feedback told me what I needed to do to improve my performance              48   4.14 (1.10)          4.73 (0.31)              2.92*
The feedback will be easy to recall                                             48   4.36 (0.84)          4.88 (0.34)              2.15*

*p < 0.05; **p < 0.01

1 = Strongly Disagree, 2 = Disagree, 3 = Neither Agree nor Disagree, 4 = Agree, 5 = Strongly Agree

Further, respondents indicated a significant difference related to feedback letting the preservice teacher know what was needed to improve performance. Video recipients (M = 4.73, SD = 0.31) reported higher scores than those who received written feedback (M = 4.14, SD = 1.10), t(48) = 2.92, p < 0.02, d = 0.92, while preservice teachers with written feedback (M = 4.36, SD = 0.84) were significantly less likely to recall their feedback than their peers (M = 4.88, SD = 0.34), t(48) = 2.15, p < 0.05, d = 0.81. No significant difference was found related to an increase in knowledge of the subject matter, t(48) = 2.03, p = 0.93. Taken together, the responses revealed the use of video played a vital role in preservice teachers' understanding of expectations, identification of specific areas of improvement, context, and recollection of instructor feedback. The third objective investigated perceptions of course focus and feedback. An independent samples t test comparing the mean scores of the groups uncovered significant differences between the two variables (Table 4). The video group (M = 4.81, SD = 0.40) scored much higher on the statement related to their perception of personal involvement than the text-based group (M = 3.50, SD = 0.94), t(48) = 4.85, p < 0.001, d = 1.81. The independent samples t test also found a significant difference in comfort level in the course: participants assessed via video (M = 4.77, SD = 0.41) reported higher scores than those in the text-based group (M = 3.57, SD = 0.89), t(48) = 2.94, p < 0.03, d = 1.05. Data analysis revealed no significant effect on understanding of the subject, t(48) = 2.03, p = 0.52, or relevance to the purpose of the assignment, t(48) = 1.88, p = 0.64.
These results indicated individuals in the video group experienced more intrinsic motivation than did individuals in the text group. Specifically, they reported higher levels of participation and comfort level in the course.
Table 4

t-test results for student perception of course focus

Statement                                                   N    Text-based M (SD)    Video w/ audio M (SD)    t test
The feedback made me feel more involved in the course       48   3.50 (0.94)          4.81 (0.40)              4.85**
The feedback increased my level of understanding of the subject  48   4.00 (1.04)     4.63 (0.62)              2.03
The feedback was relevant to the purpose of the assignment  48   4.57 (0.51)          4.82 (0.34)              1.88
The feedback made the course more comfortable               48   3.57 (0.89)          4.77 (0.41)              2.94**

*p < 0.05; **p < 0.01

1 = Strongly Disagree, 2 = Disagree, 3 = Neither Agree nor Disagree, 4 = Agree, 5 = Strongly Agree

The fourth objective measured perceptions of student motivation. An independent samples t test found multiple significant differences between groups. Students in the video group (M = 4.82, SD = 0.34) reported significantly higher positive motivation for becoming a teacher due to the feedback than those who received a text-based evaluation (M = 3.86, SD = 1.29), t(48) = 2.86, p < 0.02, d = 1.08. Preservice teachers with video feedback (M = 4.75, SD = 0.48) also reported higher levels of motivation for education courses than the text-based group (M = 3.76, SD = 1.02), t(48) = 2.46, p < 0.03, d = 1.24. Additionally, results suggested a significant difference in confidence in the real-life application of their lesson plan between groups, t(48) = 2.20, p < 0.04, d = 0.79. The independent samples t test revealed no significant effect for motivation for the subject matter, t(48) = 1.74, p = 0.63, or reduced anxiety about becoming a teacher, t(48) = 1.42, p = 0.17 (Table 5).
Table 5

t-test results for student perception of motivation

Item | N | Text-based M (SD) | Video w/ Audio M (SD) | t test
The feedback has positively affected my motivation to become a teacher | 48 | 3.86 (1.29) | 4.82 (0.34) | 2.86*
The feedback has positively affected my motivation in education courses | 48 | 3.76 (1.02) | 4.75 (0.48) | 2.46*
The feedback has positively affected my motivation for the subject matter | 48 | 3.92 (0.76) | 4.44 (0.81) | 1.74
The feedback reduced my anxiety about becoming a teacher | 48 | 3.71 (0.99) | 4.19 (0.83) | 1.42
The feedback increased my confidence in the real-life application of my lesson plan | 48 | 3.86 (1.23) | 4.63 (0.62) | 2.20*

*p < 0.05; **p < 0.01

1 = Strongly Disagree, 2 = Disagree, 3 = Neither Agree nor Disagree, 4 = Agree, 5 = Strongly Agree

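The t statistics and Cohen's d values reported in Tables 3 through 5 follow the standard pooled-variance formulas for an independent samples comparison. As an illustration only, a minimal Python sketch (the Likert ratings below are hypothetical, not the study's data):

```python
import math
import statistics

def independent_t_and_d(a, b):
    """Student's t (pooled variance) and Cohen's d for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances (ddof = 1)
    pooled_var = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    standard_error = math.sqrt(pooled_var * (1 / na + 1 / nb))
    t = (ma - mb) / standard_error          # t with na + nb - 2 degrees of freedom
    d = (ma - mb) / math.sqrt(pooled_var)   # pooled-SD Cohen's d
    return t, d

# Hypothetical 5-point Likert ratings for two feedback groups
video = [5, 5, 4, 5, 5, 4, 5, 5]
text = [4, 3, 4, 3, 5, 4, 3, 4]
t, d = independent_t_and_d(video, text)
print(f"t({len(video) + len(text) - 2}) = {t:.2f}, d = {d:.2f}")  # t(14) = 3.35, d = 1.67
```

With unequal group sizes and variances, as in this study (N = 26 vs. N = 22), Welch's unequal-variance t test is a common alternative to the pooled form sketched here.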

Discussion

This quantitative study sought to investigate preservice teacher perceptions of video-based and written instructor feedback on lesson plans. Students who received video feedback were significantly more likely to find instructor assessments effective in nearly every category. For RQ1, the video group reported significantly more positive perceptions of their instructor: they felt closer to their instructor, viewed them as more approachable, and believed the instructor cared about their work. Since preservice teachers may not be able to interact with instructors or supervisors in person, video can provide an interpersonal connection and allows communication that is not dependent on time or geography (Fukkink et al. 2011). The findings align with prior research that found written feedback may be interpreted in an abstract fashion and, compared to video feedback, may not clarify what a student did correctly or incorrectly (García and Pons-Seguí 2020; Prilop et al. 2020). Respondents in the video group were also more likely to view their instructor as knowledgeable and involved in the course, elements that add credibility and expertise to assignment feedback. Consistent with prior research (Perkoski 2017; Prins et al. 2006; Snead and Freiberg 2019), the findings indicate preservice teachers are novices whose development is furthered through reflective questions and specific suggestions for improvement, grounded in their own lessons, from an expert in the field.

The second research question sought to reveal any significant relationships between the variables and perceived skill acquisition. Responses indicated multiple significant relationships between groups and skill acquisition. Similar to previous studies (Christ et al. 2017; Henderson and Phillips 2015), participants with video feedback were significantly more likely to understand the context and clarity of the comments.
Even further, they stated video feedback did a better job of explaining the rationale for their grade on the lesson plan. This suggests video feedback can provide specific commentary on all elements of teaching, not just content delivery, which is vital to bridging the gap between theory taught in the university curriculum and real-world experience in a K-12 classroom (Fawzi and Alddabous 2019).

RQ3 revealed valuable data on which feedback type respondents perceived as most effective at keeping them focused on the course and material. One element of successful feedback is the delivery of the input needed to improve preservice teacher performance. The text-based group was less confident they had received the input needed to improve their performance. However, results differed from prior studies by Tripp and Rich (2012) and Vertemara and Flushman (2017), which found participants were significantly less likely to remember feedback at a later time. Effective feedback should provoke the recipient to evaluate their teaching practice with the aim of continuous improvement (Sailors and Price 2015). Although feedback alone does not guarantee development, it should be high quality and personalized to increase the likelihood of progress. Scores related to student perception of intrinsic course focus differed significantly between groups. Students in the video group reported higher levels of involvement and comfort in the course, similar to prior studies of preservice teachers (Grainger 2020; Jons 2019). One reason may be the capability of video to create an interpersonal connection; a well-developed relationship can facilitate clearly communicated, constructive feedback to preservice teachers (Grainger 2020). Interestingly, student perceptions of whether the feedback increased understanding of the subject or was relevant to the assignment did not differ by type.
The final research question asked about perceptions of student motivation and feedback type. Interestingly, feedback type did not play a significant role in preservice educator anxiety about becoming a teacher. However, it did appear to play a considerable role in students' perception of intrinsic motivation in their chosen career. Those in the video group were significantly more likely to indicate feedback positively affected their motivation to become a teacher. Further, they reported higher levels of confidence in the real-life application of their lesson plan. Video makes this possible by allowing more probing analysis from experienced supervisors; compared to non-video feedback, this allows preservice teachers to identify specific elements of their teaching to improve and reflect upon (Ellis and Loughland 2017).

These findings may be of value for instructors, supervisors, and administrators within teacher preparation programs. As all levels of teacher education incorporate an increasing number of online and multi-locality options, essential assessments should be designed to meet the specific needs and flexibility of today's students. This is especially true as more than 75% of American K-12 schools operated at least partially online due to the pandemic (Barnum 2020). Even once the pandemic is over, districts may be hesitant to put practicum students back into their classrooms, which also limits the opportunities for instructors or supervisors to view teaching demonstrations in person. These factors heighten the need for teacher educators to observe remote lessons and record impactful video feedback, which allows preservice teachers to watch a recording of their own teaching with instructor comments time stamped for specific reflection (Hollingsworth and Clarke 2017). This manuscript sought to extend the relevant literature to better understand how feedback types are perceived by preservice teachers.
When students submit lesson plans, meaningful input from instructors and supervisors is a critical piece of the learning and development process. Whether delivered via multimedia or the written word, the message should support theory and practice with constructive and supportive feedback (Agudo and de Dios 2017). Lastly, the results indicate video feedback is an effective tool that can increase student motivation, strengthen their desire to become a teacher, and build a closer relationship with their instructors.

The study provides recommendations for relevant educational stakeholders and policymakers. As the findings aligned with prior research in yielding positive student perceptions of multiple feedback types, a combination of audio and text-based feedback may be perceived by students as most effective. These findings are consistent with previous research advocating for curriculum designed to meet the needs of a student population with diverse learning styles. As many students receive instructor feedback in multimedia formats, preservice teacher faculty and training institutions have a collective responsibility to provide feedback that not only represents this diversity of learning styles but also differentiates between methods of delivery.

Analysis and interpretation of the results provided multiple opportunities for continued research and practice. A quantitative analysis of online and face-to-face practicum experiences could help generate discussion regarding gaps in assessment structure and validity. An additional potential investigation is a study of student attitudes toward self-assessments and peer evaluations after watching video recordings of teaching demonstrations. From a qualitative perspective, focus group interviews with preservice teachers from both groups may provide more insight.
Participants can explain in detail the benefits received from both assessment types and which felt more personalized specifically to them.

Limitations

Research participants were limited to preservice education students enrolled at a medium-sized private Midwestern university. As a result, those from different parts of the state, region, and country did not participate. A second limitation was that current teachers enrolled in graduate programs were not asked to participate in the study. A third limitation was the exclusion of students enrolled in courses not taught by the researcher. The final limitation was the exclusion of students enrolled in the teacher education program who take courses through a partnership with a local community college.