Literature DB >> 28178915

The relationship between the monitored performance of tutors and students at PBL tutorials and the marked hypotheses generated by students in a hybrid curriculum.

Jonas I Addae, Pradeep Sahu, Bidyadhar Sa.

Abstract

INTRODUCTION: There have been a number of published studies examining the link between the effectiveness of the problem-based learning (PBL) process and students' performance in examinations. In a hybrid PBL/lectures curriculum, the results of such studies are of limited use because of the difficulty in dissociating the knowledge gained at lectures from that gained through PBL-related activities. Hence, the objectives of this study were: (1) to develop an instrument to measure the performance of tutors and students at PBL tutorials, and (2) to explore the contribution of such performances to the marks attained by students from the hypotheses generated at PBL tutorials.
METHODS: A monitoring instrument for assessing the performances of non-expert tutors and students at tutorials was developed and validated using principal component analysis and reliability analysis. Also, a rubric was formulated to enable a content expert to assign marks to the quality of hypotheses generated.
RESULTS: The monitoring instrument was found to be valid and reliable. There was a significant correlation between the performance of tutors at tutorials and hypotheses marks. In contrast, there was no significant correlation between the performance of students and hypotheses marks.
DISCUSSION: The monitoring instrument is a useful tool for improving the PBL process, especially where the medical programme depends on non-expert PBL tutors. In addition to ensuring good PBL processes, it is important that students achieve the desired output at PBL tutorials by producing hypotheses that help them understand the basic sciences underlying the clinical cases. The latter is achieved by the use of an open-ended rubric by a subject expert to assign marks to the hypotheses, a method that also provides additional motivation to students to develop relevant and detailed hypotheses.

Keywords:  Problem-based learning; exploratory factor analysis; hypothesis quality; instrument; monitoring; non-expert tutors
Year:  2017        PMID: 28178915      PMCID: PMC5328341          DOI: 10.1080/10872981.2017.1270626

Source DB:  PubMed          Journal:  Med Educ Online        ISSN: 1087-2981


Introduction

Problem-based learning (PBL), which has been a major part of medical education for half a century, was promoted initially to improve students' application of knowledge in diagnosing and managing clinical problems [1]. PBL, when properly applied, has been found to be an effective approach in medical education because it promotes constructive, self-directed, collaborative, and contextual learning [2,3]. However, there has been some debate on the superior effectiveness of PBL in the learning of basic medical knowledge and clinical skills [4,5]. Additionally, cracks tend to develop and weaken the effectiveness of the PBL process and its outcome unless there is keen vigilance over the medical school's PBL programme [6]. How students learn contributes significantly to what they learn [7]; hence, the focus of PBL should be on both the delivery process (how students learn) and the content (what students learn). Assessment, monitoring, and programme evaluation all contribute to improving what and how students learn [7]. A number of publications have reported on the effectiveness of the PBL process by measuring students' satisfaction with the process in surveys [8,9], or the knowledge acquired by students as measured by examinations [10-12]. However, there is hardly any reported work that uses an independent PBL specialist to monitor the performance of tutors and students during PBL tutorial sessions and to examine how their performance affects the quantity and quality of hypotheses generated by the tutorial group.
The medical school at the University of the West Indies, Trinidad and Tobago, uses a hybrid system of PBL and lectures/laboratory practicals. The school follows the seven-step systematic approach to PBL developed at the University of Limburg, Maastricht [1]. The medical school uses non-expert tutors with MD (or equivalent) or PhD backgrounds who have some understanding of the content of the cases.
A PBL group, which meets once a week, comprises 11–13 students and the tutor. A PBL session lasts approximately three hours and comprises two phases: (1) a ‘problem-analysis phase’, during which students brainstorm a new problem, develop relevant hypotheses based on prior knowledge, and generate learning objectives for self-study; and (2) a ‘reporting phase’, during which students discuss the objectives generated from the previous problem and revise the previous hypotheses. In the first two years of our medical programme, students are expected to focus their learning on identifying key issues in the problem and providing detailed explanations for the issues identified; they are not expected to provide a diagnosis or a management plan for patients in the problem. The quantity and quality of relevant hypotheses generated by a tutorial group are marked by a content expert, and the results are given to the students before the next tutorial session. The medical school recognizes the three key elements that are essential to a successful PBL programme: (1) the problem used to stimulate learning, (2) the tutors as facilitators of learning, and (3) the group work (or teamwork) that ensures interaction amongst the students [13]. The current study focused on the latter two elements, with the following objectives: (1) to validate a monitoring instrument that measures the performance of tutors and students in a tutorial, i.e., how students learn; and (2) to determine the extent to which the performance of tutors and students in a tutorial influenced the hypotheses generated by students, i.e., what students learn.

Methods

Developing a monitoring instrument

The study was conducted in accordance with the guidelines of the university’s ethics committee. The first part of the study was to validate a suitable monitoring instrument and determine its reliability. The items on the monitoring instrument were determined by medical education experts using information from published articles on the expected roles of the tutor and students; the latter included the group leader, the record keeper, and the other students in the group [14-21]. Following feedback from experienced PBL tutors, the medical education experts settled on two constructs with 21 items: 11 items for evaluating the performance of the tutor and 10 items for evaluating the performance of students during PBL tutorial sessions. The two constructs and items in the instrument are listed in Table 1. To test the validity of the instrument further, it was given to 40 randomly chosen PBL tutors, who rated the extent to which each item was appropriate for the construct being examined. The participants were asked to evaluate the items using a 5-point Likert scale: 1: strongly disagree, 2: disagree, 3: undecided, 4: agree, 5: strongly agree. The data obtained were subjected to an exploratory factor analysis (principal component analysis) with oblique (direct oblimin) rotation; the latter was chosen on the assumption that the factors would not be independent. The internal consistency of the items in each construct was determined by computing Cronbach’s reliability coefficients.
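The two analyses named above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' analysis: the ratings matrix is synthetic (a shared latent trait plus noise, so the items are internally consistent by construction), and only the unrotated eigenvalue-extraction step of the principal component analysis is shown; the oblimin rotation itself is omitted.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Hypothetical ratings: 31 respondents x 10 items on a 5-point Likert
# scale (the study's real data are not reproduced here).
trait = rng.normal(3.5, 0.6, size=(31, 1))
ratings = np.clip(np.rint(trait + rng.normal(0, 0.5, size=(31, 10))), 1, 5)

# Eigenvalues of the inter-item correlation matrix: the extraction step
# of a principal component analysis, used for the scree-plot decision.
eigenvalues = np.sort(np.linalg.eigvalsh(np.corrcoef(ratings, rowvar=False)))[::-1]
print(round(cronbach_alpha(ratings), 2), round(eigenvalues[0], 2))
```

Because the synthetic items share one latent trait, the first eigenvalue dominates and alpha comes out high; with real data the same code reports how much of that structure is actually present.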
Table 1.

Summary of exploratory factor analysis of PBL monitoring instrument.

Items — Factor 1 (Performance of students) — Factor 2 (Performance of tutor)
Leader encourages all group members to participate in the discussion  0.85  0.06
Leader allows everyone to express his/her views  0.82  0.13
Students encourage cooperative behaviour in the group  0.80  0.43
Leader summarizes the views at appropriate periods during the discussion  0.72  0.20
All students get involved in group discussions  0.66  0.36
Leader emphasizes clarification of different issues  0.65  0.03
Students respond to feedback from tutor  0.63  0.22
Record keeper participates in group discussions  0.55  0.46
Students ask stimulating questions  0.55  0.44
Leader starts PBL session on time  0.50  −0.08
Tutor encourages recording of contributions during brainstorming  0.16  0.87
Tutor encourages noting down hypotheses and learning objectives properly  0.11  0.82
Tutor creates a supportive and comfortable learning environment  −0.04  0.75
Tutor guides students in formulating learning objectives  0.45  0.67
Tutor promotes critical thinking skills  0.25  0.67
Tutor encourages cooperative behaviour in the group  0.12  0.58
Tutor encourages all students, including less involved students, to take part in discussion  0.26  0.56
Tutor ensures students adhere to polite norms of spoken communication  0.45  0.56
Tutor intervenes when discussion goes off track  0.54  0.55
Tutor ensures group follows the steps of PBL  0.44  0.54

Item not included:
Tutor facilitates self-directed learning  −0.01  0.44

Eigenvalues  7.01  3.22
Contribution to variance (%)  33.37  15.35
Cronbach’s alpha  0.87  0.86

Factor loadings of 0.50 or more are in bold.


Monitoring performance of tutors & students and marking hypotheses generated by students

In the second part of the study, the instrument (with the same 5-point Likert scale as above) was used by an independent medical education specialist to monitor 23 PBL groups in years 1 and 2 (basic sciences) of the medical programme. The hypotheses generated by each PBL group for a particular clinical problem were marked by an independent content expert to generate a hypotheses mark for the group. The courses had five to eight clinical problems, depending on the length of the course. Table 2 shows the rubric used to determine the hypotheses mark obtained by a group of students for a PBL problem, with the maximum mark being 10. The average hypotheses mark for the group was the mean of all the marks obtained during the course. Pearson’s product-moment correlation analysis was used to determine how the performance of tutors and students influenced the average hypotheses mark for the group. All statistical analyses were performed using SPSS (version 22) software.
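The correlation analysis above can be sketched without SPSS. The sketch below uses hypothetical per-group values (the study monitored 23 groups; eight invented pairs are shown) and computes the Pearson coefficient together with the t value from which the one-tailed p-value is obtained.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    return np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]

def t_statistic(r, n):
    """t value for testing r against zero; the one-tailed p-value is the
    upper-tail probability of a t distribution with n - 2 degrees of
    freedom."""
    return r * np.sqrt((n - 2) / (1 - r ** 2))

# Hypothetical data: mean tutor-performance score (5-point scale) and
# mean hypotheses mark (out of 10) for eight PBL groups.
tutor_score = [4.0, 4.5, 3.9, 4.6, 4.2, 4.8, 4.1, 4.4]
hypo_mark = [7.8, 8.9, 7.5, 9.0, 8.2, 9.4, 8.0, 8.6]

r = pearson_r(tutor_score, hypo_mark)
print(round(r, 2), round(t_statistic(r, len(tutor_score)), 2))
```

With 23 real groups, an r of 0.44 gives t ≈ 2.25 on 21 degrees of freedom, which is how a one-tailed p of about 0.02 arises.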
Table 2.

Rubric used to mark the hypotheses generated by PBL groups.

A+ (10): All critical issues in the problem are identified. Detailed explanations are given for all critical issues, and issues not clearly stated in the problem are identified and properly explained.
A (8): All critical issues in the problem are identified. Detailed explanations are given for at least 75% of the issues identified.
B (7): Most of the critical issues in the problem are identified. Detailed explanations are given for at least 50% of the issues identified.
C (5): Less than 50% of the critical issues in the problem are identified. Detailed explanations are given for less than 50% of the issues identified.

Numbers in brackets are the marks associated with the letter grades.

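The translation from rubric grades to the average hypotheses mark used in the correlation analysis is straightforward; the sketch below shows it for a hypothetical six-problem course (the grade sequence is invented).

```python
# Marks associated with the rubric's letter grades (Table 2).
GRADE_MARKS = {"A+": 10, "A": 8, "B": 7, "C": 5}

def average_hypotheses_mark(grades):
    """Mean hypotheses mark for a PBL group across a course's problems
    (courses had five to eight clinical problems)."""
    marks = [GRADE_MARKS[g] for g in grades]
    return sum(marks) / len(marks)

# Hypothetical grades awarded to one group over a six-problem course.
print(average_hypotheses_mark(["A+", "A", "A", "B", "A+", "A"]))
```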

Results

Validation of monitoring instrument

Responses to the questionnaire were obtained from 31 tutors, giving a response rate of 78%. Table 1 shows the results of the principal component analysis of the 21 items with oblimin rotation. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.60, which was above the recommended acceptable limit of 0.5 [22]. The scree plot showed an inflexion that justified retention of the two factors. Table 1 shows the eigenvalues and the percentage of variance accounted for by each factor, together with the rotated factor loadings of all items, with values of 0.5 or greater in bold. Ten items clustered around factor 1, and another 10 around factor 2. One item had a factor loading of less than 0.5 and was therefore excluded from the determination of the reliability of the instrument and from part 2 of the study. A Cronbach’s alpha coefficient of 0.7 or more indicates a reliable scale [22,23]. The two factors, using the 10 items in each construct, had coefficients of approximately 0.87 and 0.86 (Table 1). None of the items in either factor had a corrected item-total correlation of less than 0.4 or increased the alpha coefficient for the construct when deleted. Hence, the reliability of the instrument was found to be acceptable, and the items with bold factor loadings in Table 1 were used for part 2 of the study.
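The Kaiser-Meyer-Olkin statistic reported above compares observed correlations with partial correlations: if items share common factors, the partial correlations shrink and KMO rises toward 1. A minimal sketch, using an invented response matrix of the same shape as the study's (31 respondents, 21 items), could look like this:

```python
import numpy as np

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy: ratio of summed
    squared observed correlations to summed squared observed plus
    partial correlations (off-diagonal terms only).  Values above ~0.5
    are conventionally taken as adequate for factor analysis."""
    corr = np.corrcoef(np.asarray(data, float), rowvar=False)
    inv = np.linalg.inv(corr)
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale              # partial correlations
    np.fill_diagonal(partial, 0.0)      # keep off-diagonal terms only
    np.fill_diagonal(corr, 0.0)
    r2, p2 = (corr ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + p2)

rng = np.random.default_rng(1)
# Hypothetical 31 x 21 response matrix sharing one common factor;
# illustrative only, not the study's data.
responses = rng.normal(size=(31, 1)) + rng.normal(size=(31, 21))
print(round(kmo(responses), 2))
```

With only 31 respondents for 21 items the partial correlations are noisy, which is consistent with the modest KMO of 0.60 observed in the study.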

Influence of performance of tutors and students in a tutorial on the hypotheses generated by students

In the second part of the study, the monitored performances of tutors and students in 23 PBL groups were compared to the hypotheses marks attained by the groups of students. The mean score for students’ performance on the 5-point Likert scale was 3.7 ± 0.4 (mean ± s.d.), whilst that for tutor performance was 4.3 ± 0.4. The quality of hypotheses generated by all student groups, marked out of a maximum of 10, was 8.3 ± 0.8 (mean ± s.d.). Pearson’s product-moment correlation analysis showed a significant correlation between the monitored tutor performance and the hypotheses mark attained by students (r = 0.44; p = 0.02, one-tailed). In contrast, the correlation coefficient between student performance and hypotheses mark was very low and not significant (r = 0.08; p = 0.35, one-tailed). Additionally, there was a significant correlation between the monitored tutor performance and students’ performance (r = 0.43, p = 0.02, one-tailed).

Discussion

There have been several published assessment methods for PBL, many of which assess the process whilst others assess the outcome, e.g., knowledge content [8-11,24]. In order to improve the effectiveness of PBL as a learning tool, it is necessary to assess both the process and the outcome. The instrument we developed for assessing the process initially had two constructs and 21 items. Following a factor analysis, one item was deleted because of low factor loading, thus the final instrument has 20 items, 10 of which clustered around students’ performance at PBL tutorials and the other 10 around tutors’ performance. The results of this part of the study indicate the instrument has both construct validity and internal consistency (reliability) and could be used to monitor the performances of tutors and students during PBL tutorials. Schmidt (1983) proposed that information processing theory in educational psychology formed the bedrock of PBL, i.e., activation of prior knowledge, encoding specificity, and elaboration of knowledge. Additionally, cooperative learning (rather than competitive learning) is essential for the success of PBL [2,25]. Cooperative learning is promoted when students have shared goals and rewards, and optimize their complementary roles to achieve them. Another key advantage of PBL is to promote the development of clinical reasoning skills during the early stages of medical training. Familiarity with clinical cases leads to the formation of clearly defined illness scripts that enable clinicians (especially specialists) to quickly diagnose a clinical case. However, when confronted with unfamiliar cases (especially complex ones), doctors tend to unravel the case by using hypotheses generation in the form of self-explanations [26,27]. Hypotheses generation involves constructing linkages amongst items in the case and with the underlying mechanisms/reasons from biological, psychological, social, ethical, and legal perspectives. 
We promoted collaborative learning and the development of clinical reasoning skills in our programme by introducing a system in which the written hypotheses from PBL groups for each problem are marked by a content expert each week and the marks given back to the group, thus ensuring immediate feedback to students on their output. The marks attained by the group formed part of each student’s continuous assessment. This approach not only encouraged students to work together but also gave the faculty a sense of what students had learnt and allowed possible gaps in knowledge across the groups to be addressed. Some studies have found a significant correlation between the score assigned to a student by a tutor and the student’s marks on traditional examinations, e.g., multiple-choice questions, whilst others have found no such correlation [10-12]. The design of the current study differs from the other published ones in that (1) we examined the two separate constructs that contribute to the PBL process, i.e., the performances of students and tutors; (2) we reduced bias in scoring students and tutors at PBL tutorials by using the same medical education expert to do the scoring; (3) the marks obtained were directly associated with the PBL process because they were based solely on the hypotheses generated by the group at PBL tutorials, and not on examination scores, which are affected by other forms of learning in a hybrid medical education curriculum; and (4) the marks were attained by the whole group and not by individual students. Our study has added another dimension to the debate through the finding that the performance of tutors correlated significantly with the quantity and quality of hypotheses generated by students. In contrast, the hypotheses marks achieved by students were not influenced by the variation in students’ performance at tutorials.
This suggests that the quantity and quality of hypotheses generated by students depend on factors other than the students’ performance in the PBL tutorial process. For example, the hypotheses generated would have reflected the depth of the students’ prior knowledge or of the knowledge gained during the self-study step of the PBL process. This line of reasoning is supported by the diverse online learning resources and technological tools currently available to medical students [28]. A number of studies have demonstrated that both expert and non-expert tutors influence the PBL process, and have highlighted the importance of having quality tutors [5,29,30]. In our programme, PBL tutors are trained to be effective facilitators by the medical school’s Centre for Medical Sciences Education; it was therefore reassuring that, in the current study, tutor performance correlated significantly with students’ performance during tutorials.

Conclusion

An instrument has been developed to monitor the performance of tutors and students at PBL tutorials. The 20-item instrument with two constructs was shown to have good validity and reliability. Whilst having a good PBL process is essential, it is also important that students achieve the desired output at PBL tutorials by producing hypotheses that are relevant to the clinical case being discussed. Hence, we have also provided a rubric for assessing the hypotheses generated by students during PBL tutorials. The results of this study emphasize the important role of the tutors in facilitating the performance of students at tutorials and in getting them to engage in discussions that produce excellent hypotheses. The tutors, albeit non-expert, were found to be capable of influencing not only how students learn but also what they learn.
References (25 in total)

1.  Profiles of effective tutors in problem-based learning: scaffolding student learning.

Authors:  W S De Grave; D H Dolmans; C P van der Vleuten
Journal:  Med Educ       Date:  1999-12       Impact factor: 6.251

2.  Effectiveness of problem-based learning curricula: theory, practice and paper darts.

Authors:  G R Norman; H G Schmidt
Journal:  Med Educ       Date:  2000-09       Impact factor: 6.251

3.  Problem-based learning: future challenges for educational practice and research.

Authors:  Diana H J M Dolmans; Willem De Grave; Ineke H A P Wolfhagen; Cees P M van der Vleuten
Journal:  Med Educ       Date:  2005-07       Impact factor: 6.251

4.  Problem-based learning: description, advantages, disadvantages, scenarios and facilitation.

Authors:  R W Jones
Journal:  Anaesth Intensive Care       Date:  2006-08       Impact factor: 1.669

5.  Problem-based learning: a strategic learning system design for the education of healthcare professionals in the 21st century.

Authors:  Matthew Choon-Eng Gwee
Journal:  Kaohsiung J Med Sci       Date:  2009-05       Impact factor: 2.744

6.  Cracks in problem-based learning: what is your action plan?

Authors:  Samy A Azer; Michelle McLean; Hirotaka Onishi; Masami Tagawa; Albert Scherpbier
Journal:  Med Teach       Date:  2013-08-23       Impact factor: 3.650

7.  Students' self-explanations while solving unfamiliar cases: the role of biomedical knowledge.

Authors:  Martine Chamberland; Sílvia Mamede; Christina St-Onge; Marc-Antoine Rivard; Jean Setrakian; Annie Lévesque; Luc Lanthier; Henk G Schmidt; Remy M J P Rikers
Journal:  Med Educ       Date:  2013-11       Impact factor: 6.251

8.  The influence of medical students' self-explanations on diagnostic performance.

Authors:  Martine Chamberland; Christina St-Onge; Jean Setrakian; Luc Lanthier; Linda Bergeron; Annick Bourget; Silvia Mamede; Henk Schmidt; Remy Rikers
Journal:  Med Educ       Date:  2011-07       Impact factor: 6.251

9.  Problem-based learning: rationale and description.

Authors:  H G Schmidt
Journal:  Med Educ       Date:  1983-01       Impact factor: 6.251

10.  Increased correlation coefficient between the written test score and tutors' performance test scores after training of tutors for assessment of medical students during problem-based learning course in Malaysia.

Authors:  Heethal Jaiprakash; Aung Ko Ko Min; Sarmishtha Ghosh
Journal:  Korean J Med Educ       Date:  2016-01-27
