Literature DB >> 32005229

Micro-feedback skills workshop impacts perceptions and practices of doctoral faculty.

Najma Baseer1, James Degnan2, Mandy Moffat3, Usman Mahboob4,5.   

Abstract

BACKGROUND: Doctoral supervision is a distinct form of supervision with clearly defined responsibilities. One of these is the delivery of effective face-to-face feedback to allow supervisees to improve their performance. Unfortunately, doctoral supervisors, especially in the health sciences, are often not trained in supervisory skills and therefore practice mostly on a trial-and-error basis. A lack of understanding of the feedback process leads to incongruence in how supervisors and supervisees perceive feedback. However, standardized training practices such as microteaching can allow supervisors to acquire effective feedback practices. In this study we employed a schematic adaptation of microteaching, termed micro-feedback, in a workshop to develop the feedback skills of doctoral supervisors, and assessed the overall effectiveness of this training using the Kirkpatrick evaluation framework.
METHODOLOGY: This was a quasi-experimental study with a repeated-measures and a two-group separate-sample pre-post-test design. A micro-feedback skills workshop was organized to enhance the feedback skills of doctoral supervisors using the microteaching technique. The first two levels of the Kirkpatrick evaluation model were used to determine the workshop's effectiveness. An informal Objective Structured Teaching Exercise (OSTE) was used to assess the feedback skills of the supervisors, both before and after the workshop. A questionnaire was developed to compare pre- and post-workshop perceptions of the supervisors (n = 17) and their corresponding supervisees (n = 34) regarding the ongoing feedback practice.
RESULTS: Despite their hectic schedules, most doctoral supervisors (17 of 24, 71%) were willing to undertake faculty development training. Participants indicated a high level of satisfaction with the workshop. A learning gain of 56% was observed on pre-post OSTE scores. Prior to the workshop, perceptions of how feedback should be given to supervisees differed significantly between supervisors and supervisees, with an effect size of r = 0.30. After the workshop there was a negligible difference in perceptions between supervisors and supervisees (r = .001). Interestingly, supervisors shifted their perceptions toward those originally held by the supervisees.
CONCLUSIONS: These findings suggest that well-designed and properly assessed structured programs such as micro-feedback workshops can improve how doctoral supervisors provide feedback to their supervisees and align supervisors' perceptions of that feedback with those of their supervisees.

Keywords:  Doctoral supervisors; Microteaching; Objective structured teaching exercise (OSTE); Postgraduate; Workshop

Year:  2020        PMID: 32005229      PMCID: PMC6995071          DOI: 10.1186/s12909-019-1921-3

Source DB:  PubMed          Journal:  BMC Med Educ        ISSN: 1472-6920            Impact factor:   2.463


Background

Supervision is the key formal pedagogical method in which the supervisor plays a pivotal role in helping supervisees achieve their learning goals and develop the required professional competence [1, 2]. To supervise effectively, doctoral supervisors must employ certain distinct skills, in particular providing timely, high-quality face-to-face feedback [3-7]. Effective feedback must be constructive, motivational, comprehensible, and delivered in a timely manner [8]. Feedback given to supervisees not only influences the research process but also deepens supervisees’ understanding of the skills needed to become an effective medical educator [9, 10].

Evidence shows that faculty and students often perceive ongoing feedback practices differently [11]. Supervisors deem supervisees responsible for comprehending and effectively implementing the feedback provided, whereas supervisees are often not content with the quality of feedback they receive and at times consider it inexplicit or confusing [3, 5, 12]. Moreover, medical and allied sciences doctoral supervisors are often not trained in didactic skills, which inhibits them from developing effective supervisory skills and leads them to imitate their own supervisors on a trial-and-error basis [13, 14]. At the doctoral level, supervisor-supervisee interactions are mostly based on extended, systematic face-to-face conversations or feedback sessions, yet doctoral supervisors, from their own postgraduate experience, are mostly accustomed to written feedback. Face-to-face supervisory meetings are often unstructured and vary tremendously in frequency and timing. Despite the intricacy of the doctoral supervisor-supervisee relationship, no formal training is available to prepare doctoral faculty for such interactive sessions. Multiple feedback models have been developed to facilitate effective feedback practices.
The Pendleton feedback model is useful for the feedback process in doctoral supervision interactions and can additionally support inexperienced supervisors in providing specific feedback in a supportive manner [15]. Currently used in many healthcare settings, this model facilitates a two-way interaction between the supervisor and the supervisee, allowing and supporting supervisees to carry out their own self-assessment. Various factors influence differences between how supervisors and supervisees perceive the effectiveness of feedback; a lack of training and peer support for supervisors is one [16]. Hence, supervisory training is essential for enhancing the professional development of supervisors. A paradigm shift in the way medical education is delivered has prompted many faculty development programs to increase the effectiveness of doctoral supervision [17-21]. Some of these programs have employed the method of “microteaching” to develop new supervisory skills and to improve on old ones [22-25]. The term “micro” signifies a more precise and in-depth observation during which special emphasis is given to an explicit pedagogical skill, such as effective face-to-face feedback. Thus, using the analogy and principles of microteaching, a similar schematic approach of micro-feedback skills can be used to inculcate effective feedback skills among doctoral faculty. Self-reported perceived improvement in skills acquisition is often unreliable; hence, the actual skill level acquired by supervisors needs to be evaluated more robustly [26].
Direct measures, such as the objective structured teaching exercise (OSTE), can help indicate both a baseline level of a skill and any change that has resulted from a training program [27-29]. Such direct measures allow the evaluation of a faculty-training program to go beyond measuring simply ‘Reaction’ to include a more robust measurement of ‘Learning’ of the skill [30-33]. Keeping in view the acquisition metaphor of learning, especially in the context of faculty development [34], this study aimed to assess whether training in a micro-feedback skills workshop leads to an improvement in the observed feedback skills of individual doctoral supervisors. Training activities took place in a simulated environment using audiovisual aids and scenarios based on giving immediate feedback to the supervisees [35, 36]. We gauged the effectiveness of the workshop using the first two levels of the Kirkpatrick evaluation model, and we assessed the feedback skills of the doctoral supervisors through micro-training sessions and OSTEs. Lastly, we compared supervisors’ and their corresponding supervisees’ perceptions of the ongoing feedback practices.

Methods

Study design & setting

This was a quasi-experimental study with two parts: a repeated-measures design was used to measure differences in participants’ perceptions of feedback before and after the workshop, and a two-group separate-sample pre-post design was used to evaluate the feedback skills of the doctoral supervisors using an OSTE [37, 38]. The workshop participants, doctoral supervisors drawn from eight different doctoral programs in basic and allied health sciences, were randomly assigned to two groups. One group, comprising half of the participants, took part in pre-testing to evaluate their skill level using the OSTE, while the other group participated in post-workshop testing, 8 weeks after the workshop (Fig. 1). The setting for this study was Khyber Medical University (KMU), Peshawar.
Fig. 1

Study Design and data collection procedure. A flowchart showing study design and data collection procedure


Participants

This study targeted doctoral faculty supervisors in the constituent institutes of Khyber Medical University. Twenty-four were invited to participate and 17 consented in writing to take part, three of whom took part in the pilot phase. Participants were then asked to identify two postgraduate supervisees each, who were in turn invited to participate in the study. Thirty-four supervisees gave written consent to take part, six of whom took part in the pilot phase.

Ethical approval

Ethical approval was obtained from the Graduate Studies Committee, Advanced Studies and Research Board (AS&RB) and the Ethical Committee of Khyber Medical University.

Pilot testing of the data collection tools

All the instruments and OSTE stations were pilot tested for face validity, content validity, and reliability. The supervisor perception questionnaire was validated using the Content Validity Index (CVI) [39, 40], incorporating data from seven experienced educationists, each with more than 10 years of teaching and doctoral supervision experience. Similarly, the content was face-validated and reliability was computed using data from three doctoral supervisors and their six corresponding supervisees. OSTE scenarios were pilot tested using standardized students. Two assessors independently established the content validity and inter-rater reliability of the checklist, which was designed on the principles of Pendleton’s model of effective feedback [15]. The marking rubric for the OSTE consisted of two different types of rating scales: a standardized task-specific stepwise marking checklist and a global rating scale [41].
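The CVI computation described above can be sketched as follows. The 4-point relevance scale and the “a rating of 3 or 4 counts as relevant” cut-off are the conventional approach behind the cited index [39, 40] and are assumptions here, since the exact rating scale is not reported in the text; the ratings shown are illustrative, not study data.

```python
# Sketch of a Content Validity Index (CVI) computation, assuming the
# conventional approach: each expert rates an item's relevance on a
# 4-point scale, and ratings of 3 or 4 count as "relevant".

def item_cvi(ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi(all_item_ratings):
    """Scale-level CVI (averaging method): mean of the item-level CVIs."""
    return sum(item_cvi(r) for r in all_item_ratings) / len(all_item_ratings)

# Illustrative ratings from seven experts for three items (not study data):
ratings = [[4, 4, 3, 2, 4, 3, 4], [4, 3, 4, 4, 4, 4, 3], [3, 4, 2, 4, 4, 3, 4]]
average_cvi = scale_cvi(ratings)
```

A scale-level CVI of 0.80 or higher is commonly taken as acceptable, consistent with the average CVI of 0.87 reported in the Results.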

Perception questionnaires

Just before and 8 weeks after the workshop, all participants in the main study completed self-administered 5-point Likert questionnaires (Additional file 1: Annex I and II). The questionnaire was designed following the principles of instrument development [42] and after a thorough literature search [4, 5, 11, 43, 44]. The items in the questionnaires were structured in accordance with Pendleton’s model of effective feedback to assess the supervisors’ and supervisees’ perceptions of ongoing face-to-face feedback practices [15, 44]. The questionnaire was pilot tested as described above. Changes in participants’ attitudes were analyzed using an approach suggested by Mahmud Zamalia [45]: on a scale of 1 to 5, a mean score of 2.5 or less was defined as a negative response, a score of 3.5 or above as a positive response, and scores between 2.5 and 3.5 as neutral.
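The response-classification rule above can be expressed as a small helper (a minimal sketch; the function name is ours, not the study’s):

```python
def classify_likert_mean(mean_score):
    """Classify a 1-5 Likert item mean using the thresholds described above:
    <= 2.5 is negative, >= 3.5 is positive, anything in between is neutral."""
    if mean_score <= 2.5:
        return "negative"
    if mean_score >= 3.5:
        return "positive"
    return "neutral"
```

For example, the pre-workshop mean of 2.64 reported later for item 6 falls in the neutral band, while its post-workshop mean of 4.00 classifies as positive.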

Micro-feedback skills workshop

The workshop was designed keeping in view the acquisition metaphor of learning, especially in the context of faculty development [34]. The workshop was interactive in nature and lasted 8 h. It consisted of four sessions: an introductory session, a behavioral remodeling session, a micro-feedback skills session, and a workshop feedback session. Microteaching was used as an information transfer and training method [22, 23]. The introductory session covered the workshop objectives, the principles of effective feedback, and the elements of microteaching. During the behavioral remodeling session, the participants watched exemplary enactment videos based on the principles of Pendleton’s model of effective feedback and deliberated on them. During the microteaching session, each workshop participant provided feedback to trained simulated students [46] based on four pre-determined doctoral one-to-one feedback scenarios and, in return, received feedback on their own performance from both the workshop facilitator and the rest of the participants on a microteaching checklist (Additional file 1: Annex III). At the end of the workshop, all the participants were asked to complete a workshop feedback form (Additional file 1: Annex IV) and a pre-post self-evaluation form [47] (Additional file 1: Annex V), both of which corresponded to level I of the Kirkpatrick program evaluation model.

Objective structured teaching exercise

Before the workshop, each supervisor in the pre-test group participated in an informal OSTE in which the reviewer used a standardized rubric to evaluate the participant’s feedback skills. This 30-min informal exercise was conducted in each supervisor’s office to accommodate their hectic schedules [28] and corresponded to level II of Kirkpatrick’s program evaluation model. The post-test OSTE took place 8 weeks after the initial exercise, using the same scenarios and checklists as the pre-test [48, 49].

Data analysis

The data were analyzed using IBM SPSS Statistics version 23. Higher scores on the questionnaire’s Likert scale indicated a more positive rating by the respondents. In addition, effect sizes were computed to measure the difference between the supervisors’ and supervisees’ perceptions of the feedback practices. Because of the relatively small sample size, the assumption of underlying normality within the data was evaluated using the Shapiro-Wilk test [50]. For normally distributed data, independent-sample and paired t-tests were used to compare pre- to post-workshop scores. For non-normally distributed data, the Wilcoxon matched-pair signed-rank test was used for the paired analysis of the matched groups and the Mann-Whitney U test was used to compare the two independent groups. Since multiple comparisons were made, Bonferroni adjustments to the probability levels required for statistical significance were applied to correct for chance differences: .002 (.05/33) for pre- and post-questionnaire item means, .002 (.05/28) for OSTE items, and 0.004 (0.05/14) for participants’ pair-wise analyses.
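The decision scheme above (Shapiro-Wilk to select the test, then a Bonferroni-adjusted threshold) can be sketched as follows. The study used SPSS; this is an illustrative re-implementation with SciPy, and the sample ratings are invented, not study data.

```python
import numpy as np
from scipy import stats

def compare_pre_post(pre, post, alpha=0.05, n_comparisons=33):
    """Paired pre/post comparison: Shapiro-Wilk on the paired differences
    selects between a paired t-test (normality not rejected) and the
    Wilcoxon matched-pair signed-rank test; significance is judged against
    a Bonferroni-adjusted threshold, e.g. .05/33 ~ .002 for 33 items."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    _, p_normal = stats.shapiro(post - pre)
    if p_normal > alpha:  # normality not rejected -> parametric test
        name, (stat, p) = "paired t-test", stats.ttest_rel(pre, post)
    else:
        name, (stat, p) = "Wilcoxon signed-rank", stats.wilcoxon(pre, post)
    return name, stat, p, p <= alpha / n_comparisons

# Invented 5-point Likert ratings for one item, before and after training:
name, stat, p, significant = compare_pre_post(
    [3, 3, 2, 3, 4, 3, 2, 3, 3, 2, 4, 3, 3, 2],
    [4, 4, 3, 5, 4, 4, 3, 4, 5, 3, 4, 4, 4, 3])
```

The same helper, with `n_comparisons=14`, mirrors the 0.004 threshold used for the participant pair-wise analyses.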

Results

Pilot testing

After expert validation, the questionnaire was reduced to 33 items and an average Content Validity Index (CVI) of 0.87 was obtained [39, 40]. All 33 items of the supervisor questionnaire and 32 items of the supervisee questionnaire were rated as relevant and fully understandable, with inter-item reliability indices of Cronbach’s α = 0.79 and α = 0.89, respectively. Similarly, for the OSTE checklists, coefficient alpha values of 0.79 for the task-based checklist and 0.86 for the global rating scale suggested that the checklists were highly reliable.

Demographics of the participants

A total of 17 of the 24 invited doctoral supervisors (71%; 16 male, 1 female) took part in the study. Of these 17, three took part in the pilot phase while the remaining 14 were included in the main study. Similarly, of the 34 corresponding postgraduate supervisees (14 male, 20 female), six took part in the pilot whereas the remaining 28 participated in the main study (Table 1).
Table 1

Participants’ demographics in terms of percentages

Columns: Age (20–35 / 36–40 / > 40); Gender (Male / Female); Qualification — Doctoral (Basic / Allied) and Basic (Medicine / Dentistry / Allied); Doctoral supervision experience in years (1–3 / 3–5 / > 5); Attended feedback workshop (Yes / No)

Supervisors (n = 17): 42.7, 42.9, 14.3, 92.9, 7.1, 85.7, 14.3, 57.1, 14.3, 28.6, 7.1, 92.9

Supervisees (n = 34): 67.9, 25, 7.1, 42.9, 57.1, 53.6, 14.3, 32.1

Evaluation at Kirkpatrick level I

Workshop feedback form

The participants rated the workshop highly, with mean ratings of 4 or higher on a 5-point scale for all but one of the 22 items of the workshop feedback proforma (Table 2). The lowest rating (mean = 3.92, SD = .61) was for the item asking whether the time allotted for the training was sufficient. The highest rating (4.93 ± .26) was for the item asking whether the instructor was well prepared. In addition, participants were asked to complete two open-ended questions about the strengths and weaknesses of the workshop. Comments included: “Pendleton’s steps were quite good”, “Proper way of giving feedback to students”, “Well-organized skill enhancement workshop”, “Students performed well and near to real life experiences”, “Scenarios could have been more diverse”, “More scenarios for no repetition” and “Four hours were not enough.”
Table 2

Item means of workshop feedback proforma

Workshop feedback proforma items | Mean ± SD
1. I was well informed about the objectives of this workshop. | 4.12 ± .77
2. This workshop lived up to my expectations. | 4.43 ± .51
3. The content is relevant to my needs. | 4.79 ± .42
4. The content was organized and easy to follow. | 4.64 ± .49
5. The workshop objectives were clear to me. | 4.21 ± .69
6. The workshop activities stimulated my learning. | 4.35 ± .63
7. The activities in this workshop gave me sufficient practice and feedback. | 4.07 ± .73
8. The difficulty level of this workshop was appropriate. | 4.00 ± .78
9. The pace of this workshop was appropriate. | 4.14 ± .66
10. The method of instruction was appropriate. | 4.50 ± .75
11. The meeting room and facilities were adequate. | 4.42 ± .75
12. Workshop had a sense of direction. | 4.42 ± .51
13. The workshop was a good way for me to learn this content. | 4.57 ± .51
14. The time allotted for the training was sufficient. a | 3.92 ± .61
15. The instructor was well prepared. b | 4.93 ± .26
16. The instructor was helpful. | 4.85 ± .36
17. Participation and interactions were encouraged. | 4.71 ± .46
18. Objectives stated were met. | 4.42 ± .51
19. I will be able to use what I learned in this workshop. | 4.50 ± .51
20. Overall I will rate the content valuable. | 4.42 ± .51
21. I will recommend this workshop to others. | 4.57 ± .51
22. I would be interested in attending a follow-up, more advanced workshop on this same subject. | 4.85 ± .36

aLowest scoring item

bHighest scoring item


Retrospective pre-post self-evaluation form

Because the underlying data were non-normally distributed, the Wilcoxon matched-pair signed-rank test was used to assess respondents’ changes in ratings over time. The data showed significant changes in the pre- to post-workshop perceptions of all doctoral supervisors (Bonferroni-corrected significance: p ≤ 0.004) (Table 3).
Table 3

Comparative analysis of pre- and post-workshop self-evaluation form

Participants | Wilcoxon matched-pair signed-rank test (Z value a) | Asymp. Sig. (2-tailed)**
Participant 1 | −3.051 b | .002
Participant 2 | −3.434 b | .001
Participant 3 | −3.873 b | .000
Participant 4 | −3.051 b | .002
Participant 5 | −3.115 b | .002
Participant 6 | −3.145 b | .002
Participant 7 | −3.207 b | .001
Participant 8 | −2.810 b | .002
Participant 9 | −3.068 b | .002
Participant 10 | −3.162 b | .002
Participant 11 | −3.508 b | .000
Participant 12 | −3.443 b | .001
Participant 13 | −3.332 b | .001
Participant 14 | −3.376 b | .001

**Responses showing significant changes (p-value ≤ 0.004)

aStatistic value for the test

bBased on negative ranks, assigned when the post-test score is higher than the pre-test score and hence their difference gives a negative value


Evaluation at Kirkpatrick level II

Perception questionnaires

The reliability coefficient (α = 0.90) for the pre-workshop perception questionnaire indicated high internal consistency [43]. The following analyses were performed on the data obtained from supervisors and supervisees on the perception questionnaires: (i) pre- and post-workshop supervisor perception questionnaires; (ii) pre- and post-workshop supervisee perception questionnaires; (iii) comparison between supervisor and supervisee perceptions both before and after the workshop.

Supervisors’ perception questionnaire

Most responses to both the pre- and post-workshop questionnaires showed a positive trend, with all post-workshop items rated at 3.50 or higher. The highest level of agreement on both the pre- and post-workshop questionnaires was found for item no. 1, with mean values of 4.50 ± 0.52 and 4.64 ± 0.50, respectively. Only item no. 6 was rated lower than 3.0 on the pre-test, with a mean value of 2.64 ± 1.34; the post-workshop response to this item showed a significant positive shift, with a mean value of 4.00 ± 0.78. All other items on the supervisor post-test questionnaire were rated in the positive range (Table 4).
Table 4

Pre and post workshop supervisor and supervisees ratings of feedback practice items

Questionnaire items (mean values on a 5-point Likert scale) | Supervisor pre | Supervisor post | p-value | Supervisee pre | Supervisee post | p-value
The supervisor…
 1. Is available for a planned meeting with supervisees | 4.5 ± .52 | 4.6 ± .50 | .435 | 4.4 ± .73 | 4.5 ± .51 | .523
 2. Selects an appropriate time and place to give feedback | 3.8 ± .97 | 4.1 ± .66 | .111 | 4.6 ± .69 | 4.8 ± .74 | .588
 3. Informs supervisee if there is a delay in the feedback session | 3.9 ± 1.0 | 3.9 ± .70 | .165 | 4.6 ± .63 | 4.6 ± .62 | .839
 4. Gives self, enough time to prepare the feedback | 3.4 ± .84 | 3.8 ± .89 | .068 | –** | –** | –**
 5. For feedback on written material, reads the draft prior to the feedback session | 4.1 ± .73 | 4.5 ± .52 | .082 | 4.4 ± .79 | 4.4 ± .74 | .861
 6. Instructs supervisee to document the proceedings of the feedback session | 2.6 ± 1.3 | 4.0 ± .78 | .007 | 2.7 ± 1.18 | 3.7 ± .76 | .001*
 7. Gives supervisee an opportunity to discuss feedback face to face | 4.1 ± .53 | 4.4 ± .65 | .002* | 4.6 ± .74 | 4.6 ± .73 | .857
 8. Prepared to handle any complicated situation whilst providing the feedback | 4.1 ± .66 | 4.4 ± .50 | .027 | 3.7 ± .91 | 3.5 ± .96 | .693
 9. Keeps emotions in check during the feedback session | 3.8 ± .43 | 4.1 ± .61 | .165 | 3.4 ± 1.10 | 3.4 ± .99 | .901
 10. Keeps his/her voice in control during the feedback session | 4.1 ± .73 | 4.3 ± .61 | .001* | 4.6 ± .74 | 4.6 ± .73 | .846
 11. Tries to make eye contact with supervisee during the feedback session | 4.3 ± .61 | 4.4 ± .65 | .002* | 4.6 ± .69 | 4.6 ± .69 | .857
 12. Keeps the feedback process pertinent to the relevant content | 3.9 ± .86 | 4.3 ± .61 | .110 | 4.5 ± .79 | 4.5 ± .79 | .861
 13. Is capable of making supervisees understand his/her expectations during the feedback session | 3.9 ± .73 | 4.1 ± .36 | .336 | 4.4 ± .87 | 4.5 ± .58 | .599
 14. Provides specific information about supervisee’s performance | 4.0 ± .68 | 4.0 ± .55 | 1.00 | 4.5 ± .64 | 4.5 ± .64 | .832
 15. Explains the impact of supervisee’s actions on their professional development | 3.6 ± 1.0 | 4.4 ± .65 | .001* | 4.5 ± .79 | 4.5 ± .79 | .846
 16. Discusses solutions to problems faced by supervisees during the research | 4.1 ± .73 | 4.4 ± .63 | .054 | 4.3 ± .85 | 4.7 ± .44 | .548
 17. Guides supervisees if they are not performing effectively | 3.7 ± .83 | 4.5 ± .65 | .001* | 4.3 ± .80 | 4.2 ± .99 | .857
 18. Helps supervisees acknowledge that a problem exists | 4.1 ± .62 | 4.1 ± .54 | .336 | 4.3 ± 1.02 | 4.4 ± .99 | .802
 19. Gives constructive feedback on specific areas to improve upon | 4.7 ± .50 | 4.1 ± .53 | .583 | 4.3 ± .91 | 4.4 ± .91 | .894
 20. Carefully listens to the responses of the supervisee | 4.2 ± .70 | 4.4 ± .76 | .139 | 3.6 ± .96 | 3.7 ± .95 | .704
 21. Encourages supervisees during the feedback | 4.0 ± .68 | 4.1 ± .66 | .068 | 4.6 ± .79 | 4.6 ± .79 | .865
 22. Encourages supervisees to probe for more details | 3.9 ± .62 | 4.4 ± .93 | .104 | 4.5 ± .80 | 4.7 ± .74 | .839
 23. Encourages supervisees to take credit for their success | 3.7 ± .61 | 4.3 ± .73 | .007 | 4.2 ± .99 | 4.4 ± .39 | .643
 24. Acknowledges the efforts of his/her supervisees | 3.9 ± .62 | 4.4 ± .65 | .033 | 4.5 ± .29 | 4.5 ± .79 | .832
 25. Tries to understand feedback from the supervisee’s viewpoint | 3.4 ± .85 | 4.4 ± .74 | .007 | 4.6 ± .57 | 4.6 ± .57 | .832
 26. Tries to incorporate the preferred communication style of the supervisee | 3.4 ± 1.0 | 3.6 ± .74 | .234 | 3.4 ± 1.14 | 3.5 ± 1.07 | .000*
 27. Attempts to turn every feedback session into a useful encounter | 3.4 ± 1.0 | 3.9 ± .66 | .007 | 4.3 ± .89 | 4.7 ± .82 | .754
 28. Accepts the responsibility for his/her role in achieving supervisee’s educational goals | 3.9 ± .86 | 4.3 ± .73 | .054 | 4.6 ± .88 | 4.6 ± .21 | .876
 29. Ensures that the feedback session should be a dialogue, not a monologue | 4.1 ± .54 | 4.4 ± .65 | .028 | 3.1 ± 1.17 | 3.9 ± 1.09 | .826
 30. Finishes the feedback session with an action plan for future | 3.9 ± .95 | 4.4 ± .76 | .026 | 4.3 ± .85 | 4.6 ± .23 | .765
 31. Remembers to appreciate the supervisees after they receive the feedback | 3.6 ± .63 | 4.1 ± .77 | .014 | 3.1 ± 1.04 | 3.3 ± 1.01 | .626
 32. Asks supervisees if they have understood the feedback given | 3.4 ± .84 | 4.3 ± .73 | .013 | 4.5 ± .88 | 4.8 ± .92 | .921
 33. Follows up on his/her previous feedback in the subsequent meeting | 3.9 ± .10 | 4.0 ± .92 | .500 | 3.1 ± 1.18 | 3.1 ± 1.07 | .899

*Responses showing significant shift (p-value ≤ 0.002)

**Item was omitted for the supervisee questionnaire as it was only relevant to the supervisors and could not be answered by the supervisees

Paired t-test analysis showed that the supervisor responses to 5 of 33 items (item nos. 7, 10, 11, 15 and 17) changed significantly after the micro-feedback workshop (p ≤ 0.002) (Table 4). Based on the supervisors’ responses, Wilcoxon pair-wise analyses indicated a significant shift in the perceptions of more than 50% (8 of 14) of the doctoral supervisors (Table 5, Section 1).
Table 5

Wilcoxon matched-pair signed-rank test to compare pre- and post test data

Section 1: Comparison of pre- and post-workshop perceptions of the supervisors
 Participant 1 | −4.47 a | .000**
 Participant 2 | −4.44 a | .000**
 Participant 3 | −0.30 a | .763
 Participant 4 | −3.27 a | .001**
 Participant 5 | −0.95 a | .343
 Participant 6 | −1.80 a | .042
 Participant 7 | −2.83 a | .005
 Participant 8 | −2.91 a | .004**
 Participant 9 | −2.47 a | .014
 Participant 10 | −3.67 a | .000**
 Participant 11 | −4.17 a | .000**
 Participant 12 | −3.66 a | .000**
 Participant 13 | −4.05 a | .000**
 Participant 14 | −0.62 a | .536
Section 2: Comparison of pre- and post-workshop perceptions of supervisees regarding the feedback practices of their corresponding supervisors
 Supervisees of Supervisor 1 | −0.18 a | .857
 Supervisees of Supervisor 2 | −2.84 a | .002**
 Supervisees of Supervisor 3 | −1.07 a | .284
 Supervisees of Supervisor 4 | −1.92 b | .055
 Supervisees of Supervisor 5 | −0.88 a | .377
 Supervisees of Supervisor 6 | −0.06 b | .950
 Supervisees of Supervisor 7 | −2.18 a | .029
 Supervisees of Supervisor 8 | −0.15 b | .882
 Supervisees of Supervisor 9 | −0.68 a | .497
 Supervisees of Supervisor 10 | −1.98 b | .048
 Supervisees of Supervisor 11 | −2.08 a | .037
 Supervisees of Supervisor 12 | −1.21 b | .226
 Supervisees of Supervisor 13 | −0.69 b | .494
 Supervisees of Supervisor 14 | −1.31 b | .192

aBased on positive ranks assigned when the pre-test score is higher than the post-test score and their difference gives a positive value

bBased on negative ranks, assigned when the post-test score is higher than the pre-test score and hence their difference gives a negative value

**Responses showing significant changes (p-value ≤ 0.004)


Supervisee perception questionnaire

None of the items on the pre- and post-workshop questionnaires was rated negatively (i.e., an average score less than 2.50). The most positive responses to the pre-workshop questionnaire were observed for items no. 2, 3, 7, 10, 11, 21 and 25 (Table 4), each with an average rating of 4.6, while the most positive response on the post-workshop questionnaire was for item no. 2. Pre-post ratings for items 6 and 26 showed significant changes (p ≤ 0.002) (Table 4). To determine the change in the supervisees’ perceptions of the feedback practices of their respective supervisors, the average of the responses obtained from each supervisor’s two corresponding supervisees was computed, and the data were compared using the Wilcoxon matched-pair signed-rank test. The responses obtained from the supervisees of only one supervisor displayed a significant change in perceptions of feedback practices (Table 5, Section 2).

Comparison between the perceptions of supervisors and supervisees

The perceptions of supervisors and supervisees towards the ongoing feedback practices differed significantly prior to the workshop: a Mann-Whitney U test comparing the two groups’ overall pre-test scores indicated strong differences in perceptions (p < 0.004). However, the comparison of post-test data indicated no significant difference (p = 0.49) between how the supervisors and supervisees perceived feedback practices. The effect size associated with the difference between supervisors’ and supervisees’ average questionnaire scores dropped from r = 0.29 before the workshop to r < .001 after the workshop.

OSTE sessions were conducted with the supervisors of both the pre-test and post-test groups. The OSTE data consisted of checklist-based item scores (performance scored out of 20 marks) and a global rating scale (5-point scale from 1 to 5), both of which were rated remotely by two raters (Table 6). Shapiro-Wilk tests confirmed that the pre- and post-workshop checklist data did not violate normality assumptions (p = .12 and p = .09, respectively). An independent t-test was performed to compare the pre-test and post-test groups. The OSTE checklist and global rating score (GRS) post-test gains were significant (t = −5.98, p < .001 for the checklist and t = −4.56, p < .001 for the GRS), with effect sizes of r = .51 for the checklist gain and r = .60 for the GRS gain. These score gains indicate that the training program had been effective. Overall learning gains as a result of the workshop were estimated using a procedure recommended by Barwood et al. [51]; based on the checklist, learning gains of 57% were estimated for the participants in the program.
Table 6

OSTE data for both pre-test and post-test groups, showing scores for checklist-based items and global rating scale

Descriptive statistics for OSTE data
OSTE scores | Minimum score | Maximum score | Mean | Std. Dev.
Checklist score for pre-test group (total score = 20) | 6.63 | 18 | 13.40 | 2.81
Checklist score for post-test group (total score = 20) | 13.36 | 20 | 17.16 | 1.75
GRS for pre-test group (total score = 5) | 1.00 | 5 | 3.06 | 1.20
GRS for post-test group (total score = 5) | 2.50 | 5 | 4.29 | .75

* Sum of individual checklist scores of all the participants of the post-test group
** Sum of individual checklist scores of all the participants of the pre-test group
*** Sum of all the OSTE checklist scores, i.e. 20 × 28 = 560
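The gain formula itself is not reproduced in the text, but the table footnotes (summed checklist scores against a 560-mark maximum) suggest a normalized class-average gain. The following is an assumed reading of the Barwood et al. procedure [51], together with the Z-based effect size r = Z/√N commonly paired with rank-based tests; treat both as illustrative, not the authors’ exact computation.

```python
import math

def class_learning_gain(pre_sum, post_sum, max_sum):
    """Assumed reading of the Barwood et al. class-average gain [51]:
    the fraction of the available improvement actually achieved,
    computed from summed checklist scores (footnotes give max = 560)."""
    return (post_sum - pre_sum) / (max_sum - pre_sum)

def effect_size_r(z, n):
    """Effect size r = |Z| / sqrt(N) for a rank-based test statistic."""
    return abs(z) / math.sqrt(n)

# Table 6 means scaled to the 28 checklists implied by the 560-mark maximum:
gain = class_learning_gain(13.40 * 28, 17.16 * 28, 20 * 28)  # ~0.57
```

Under these assumptions the Table 6 checklist means reproduce the 57% learning gain reported above, which supports this reading of the formula.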

Discussion

This study was carried out to determine whether a micro-feedback skills workshop could improve the feedback skills of doctoral supervisors. The findings suggest significant improvement not only in the supervisors’ perceptions of feedback (Kirkpatrick Level 1) but also in their observed feedback skills as measured by an OSTE (Kirkpatrick Level 2). Of the 17 supervisors, only one had attended a previous workshop on feedback skills, and more generally, very few had participated in workshops designed to improve supervisory skills prior to this study.

Participants of the study

The participants of this study were a very specific and exclusive group of academicians in the field of basic and allied medical sciences. Although they constituted a relatively small sample, their impact on their supervisees and on ongoing basic medical research is substantial. Before training, supervisors exhibited high variability in their supervisory practices, and there was a marked difference between what they reported doing and what their supervisees reported. For example, most supervisors believed that they met informally with their supervisees on a daily basis, yet supervisees reported that such daily meetings were rare. The second group of participants consisted of the postgraduate research supervisees of the participating doctoral faculty. Handley et al. suggested that an objective assessment of feedback quality is a cumbersome task [43]; of all the available tools and resources, students are among the best evaluators of the effectiveness of feedback practices. It was therefore essential to incorporate the supervisees’ perceptions of ongoing feedback practices, since they constituted the major stakeholders in this context. The students varied in their demographics and educational background, and they were selected for participation based on their supervisors’ recommendations. Whether non-recommended supervisees would deviate further from their supervisors’ perceptions remains an open question; for this study, we chose to invite those more likely to take part. Moreover, engaging supervisees in workshop activities is important not only for developing their pedagogic literacy but also for understanding the long-term impact of the feedback provided [43].

The immediate impact of micro-feedback skills workshop

In this study, Levels I and II of the Kirkpatrick program evaluation model (“Reaction” and “Learning”) were fully implemented, since these two levels capture individual impact, while Levels III and IV capture institutional effects [32]. The evaluation of the micro-feedback workshop indicated that faculty perceptions of the importance of feedback (“Reaction”) and their understanding of how to deliver feedback (“Learning”) had improved. Moreover, the evidence suggested that some behavioral modification did take place and that the supervisors and supervisees became much more likely to agree on the importance of faculty feedback training. The immediate evaluation process (Level I) also highlighted some useful learning points for future workshops. The first is that participants wanted more such training, which is consistent with other similar studies [46, 52, 53]. There were also a number of suggestions asking for more variation in the microteaching scenarios used; however, this would require more logistical support and human resources [54]. Participants reported that the videotaped interactions and the use of standardized students particularly facilitated their learning, which is similar to participant responses in the Gelula and Yudkowsky study [46].

The intermediate impact of micro-feedback skills workshop

The intermediate impact of the workshop was gauged using the outcomes of the perception questionnaires and the enhancement in feedback skills (learning gain) of the workshop participants. The questionnaires completed by the supervisees reflected their evaluation of their supervisors’ feedback skills, while those completed by the supervisors were a form of self-assessment [11]. The supervisor questionnaire consisted of 33 items and the supervisee questionnaire of 32 items; except for Item 4 of the supervisor questionnaire, every supervisor item also appeared on the supervisee questionnaire. The omitted item was relevant only to the supervisor and could not be answered by the supervisees. The eight-week gap between the pre- and post-assessments lessened the likelihood that participants remembered the choices they had selected in the pre-workshop questionnaire. Supervisors rated their perceptions of their own training-skill abilities highly before the workshop (the average self-rating exceeded 3.50 on 28 of the 33 items) and even higher after it (the average self-rating exceeded 3.50 on all 33 items). These results indicate that supervisors are quite confident about the feedback they provide. One area of concern relates to supervisors instructing supervisees to document the proceedings of feedback sessions: both supervisees and supervisors rated this item the lowest on the perception questionnaires. This low rating is important because poor recall may contribute both to the inability of some supervisees to follow their supervisors’ instructions and to the difficulty supervisors face in judging whether the feedback was properly understood. The micro-feedback workshop appears to have resulted in a significant change in the way supervisors perceive their own feedback practices.
Supervisors’ post-workshop item means exceeded pre-workshop item means on every item of the supervisors’ questionnaire, and the gains in supervisors’ ratings reached statistically significant levels for 15 of the 32 items. Whether this change persists beyond the short term remains to be explored. Slightly different trends were observed in the data obtained from the supervisee questionnaire. In general, supervisees rated their perceptions of how they received feedback higher than their supervisors rated how they gave it: supervisees rated their supervisors’ feedback skills higher than the supervisors’ corresponding self-ratings on 24 of 32 pre-workshop items and on 21 of 32 post-workshop items. Supervisees’ post-workshop averages exceeded pre-workshop averages on 29 of the 32 items; however, none of the pre- to post-workshop differences in supervisee item means reached statistically significant levels. Significant differences were also observed between the pre-workshop perceptions of supervisors and supervisees, whereas no significant differences were found between the two groups after the workshop. The higher post-workshop self-ratings among the supervisors may reflect genuine improvement in their feedback knowledge and skills, but could also reflect an attempt to live up to the expectations of their supervisees. The results further imply that, in the supervisees’ opinion, the feedback practices of their supervisors were satisfactory even before the workshop. The supervisees’ inability to gauge a change in their supervisors’ skills may also be attributed to a lack of pedagogic literacy, as supervisees are mostly not involved in faculty development and training programs [43]. A similar finding emerged from a longitudinal study carried out at the University of Alberta, in which students’ ratings of their faculty’s feedback practices remained consistent over the period of the study [55].
Sidhu et al. [56] compared training programs at different universities and found that they did change supervisees’ perceptions; their study found high supervisee expectations and highlighted the supervisees’ need for quality feedback from their supervisors. Handley et al. [43] made similar observations: postgraduate students perceived that their faculty lacked interest in the timely delivery of quality feedback, while the faculty emphasized the quantity of feedback rather than its quality. These conflicting observations can be attributed not only to contextual differences between the study settings but also to the lack of structured faculty development programs and the incoherence of feedback practices at the postgraduate level. The informal OSTE sessions correspond to Level II (“Learning”) of the Kirkpatrick model of program evaluation and were used to assess the intermediate outcome of the micro-feedback skills workshop by comparing the pre-test and post-test OSTE data. Usually, a pre- and post-test exercise is performed on the same group. In this study, however, a relatively less common separate-sample pre- and post-test design was used [38, 57] on different but comparable groups of doctoral supervisors. This was crucial because the available sample was quite small, making a separate control group infeasible. Also, since the study was skill-based and of short duration, there was a probability of recall bias and a testing effect. These issues were effectively dealt with by the separate-sample pretest-posttest design [37, 38, 57], which requires a single set of data per participant and allows generalizability by randomly assigning participants to different observation times [37, 38]. The participants of the study were highly committed faculty members, and assembling all of them for a traditional multi-station OSTE would have been not only logistically challenging but also overwhelming for the participants.
Hence, the OSTE sessions were conducted in a relatively informal but highly explicit manner, within the office settings of individual supervisors [28]. Unlike in conventional objective structured clinical examinations (OSCEs), the participants of these OSTE sessions were not formally debriefed about their performances. Although not ideal, this was acceptable because the OSTE used in this study was a faculty development instrument rather than an assessment tool, and during the workshop the expert facilitator provided comprehensive feedback about each participant’s performance after every microteaching session. Both a checklist summary score and a global rating score were used to rate the OSTE performance of the participants. The literature suggests that global rating scale scores are more reliable than item-based checklist scores, owing to the more holistic nature of the global rating scale [58]; combining both measures in this study therefore yielded a more comprehensive assessment of participants’ skills. The pre-post workshop learning gains on both the global rating and checklist scales were highly significant (a learning gain of more than 56%), and the results indicate that the micro-teaching workshops significantly enhanced the feedback skills of the doctoral supervisors.

Limitations

This study was not without limitations. Since sampling was through a census and the participants volunteered for the workshop, the possibility of self-selection bias cannot be eliminated; intrinsic motivation among the participants may therefore have influenced the overall level of satisfaction with the workshop [59]. The sample size was small owing to the exceptional nature of the participants; nevertheless, the results indicated statistically significant changes in their perceptions and practices. The OSTE sessions were videotaped and reviewed remotely by only two reviewers because of the limited number of available experts. Furthermore, the reviewers were not blinded and therefore knew whether participants belonged to the pre-test or post-test group; however, OSTE scores were highly consistent across the two reviewers. The micro-feedback skills workshop was a one-time activity and lacked subsequent reinforcement, which is often desirable for long-term learning [59, 60]. Evaluation of the overall impact on the institutions affected by this study (“Results”) will take more time and is outside the scope of the current study. Nonetheless, based on the results of the first two Kirkpatrick evaluation levels, it appears that, with context-appropriate modifications and a reinforcing element, micro-feedback skills workshops can enhance the feedback skills of postgraduate research supervisors.

Conclusion

This study assessed the extent to which a micro-feedback skills workshop can influence the feedback practices and perceptions of doctoral supervisors. The workshop was designed to enable supervisors to provide effective feedback to supervisees during training. By assessing the perceptions of supervisors and supervisees pre- and post-workshop, we were able to observe their perceptions of feedback move into alignment with one another. Hence, this study demonstrated that videotaped microteaching and OSTE sessions can be used to enhance supervisory skills. The approach not only provides a more realistic supervisory training experience but also assists in modifying supervisory behaviors and practices. This study also offered a framework for supervisors to develop more effective feedback procedures for use during formal supervisor-supervisee meetings; high-quality feedback during such meetings can be very significant for the professional development of both the supervisor and the supervisee. The results suggest that faculty development workshops may enhance the knowledge and skills of doctoral medical education faculty as well as faculty involved in other areas of education. More detailed and comprehensive studies are required to establish the longer-term effects of micro-teaching training programs at individual and institutional levels.

Additional file 1: Annexes containing: Annex I. Supervisor questionnaire. Annex II. Supervisee questionnaire. Annex III. Microteaching checklist. Annex IV. Workshop feedback proforma. Annex V. Workshop pre-post self-evaluation form.