Suzanne Schut1, Erik Driessen1. 1. Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, the Netherlands.
Human decision making is prone to bias, fallibility and irrationality.1 In high‐stakes accountability systems, such as health care and education, this can be challenging to deal with. The quality of patient care and of competency‐based education and assessment depends on the collaborative approach of multiple experts making numerous judgements while dealing with ill‐structured problems. Both contexts are characterised by challenging, high‐stakes work demands and by the crucial roles played by frontline professionals, both of which place substantial pressure on the quality of human decision making.2 How should we deal with the limitations in our abilities to make and improve these high‐stakes, complex decisions?

Heuristics offer some guidance and support when dealing with complex decision making in ill‐structured settings. They are described as mental shortcuts or interrelated sets of principles or guidelines that are used to guide the process of problem solving.3 By contrast with algorithms (i.e. step‐by‐step prescriptions for achieving particular goals that, when used properly, promise a guaranteed solution to the problem), heuristics are problem‐solving strategies that may lead to solutions. An example of a heuristic is analogical thinking, in which one limits the search for solutions to situations that are similar to the one at hand. In this issue of Medical Education, Feufel and Flach4 propose boosting the use of heuristics in medical education to improve the process of clinical decision making. We echo their call and think it is instructive to draw an analogy to the decision‐making process in the assessment context within medical education.
As we have alluded to earlier, we think there are relevant similarities between the expert judgements and the complexity and stakes of the decision‐making processes in the respective environments of clinicians and educators.

Feufel and Flach4 present two commonly used heuristics they have identified in clinical decision making in the emergency department: ‘Common Things’ (what is most common given the symptoms, medical history and current observations) and ‘Worst Cases’ (in which the differential diagnosis focuses on the potential consequences associated with the different symptoms rather than on the likelihood of a disease). In medical education, one of the most high‐stakes, summative decisions an educator is required to make concerns whether or not a learner has ‘succeeded’. This is a decision in which an educator needs to discriminate between those who know, understand, are on track, are competent and are entrusted, and those who are not. Indicators of learners’ performance levels or competency development may be more easily interpreted by teachers or supervisors with the use of these heuristics: Common Things (what is most common or likely given the indicators of behaviour and performance) and Worst Cases (alarming indicators of potential consequences related to the student's performance that focus the teacher's support on the provision of remediation or closer supervision, such as indications of lapses in professional behaviour). This could, according to Feufel and Flach,4 improve human decision making in clinical situations and may also work in the context of assessment in medical education.

Because there is significant overlap in the symptoms associated with both Common Things and Worst Cases, errors will occur. Feufel and Flach4 refer to these errors as false alarms and misses.
In the assessment literature, they are known as false positives and false negatives. Feufel and Flach4 argue that the primary cause of error in decision making is therefore not human weakness or irrationality, but clinical complexity. To improve the quality of the decision and to mitigate the likelihood of errors, the authors4 borrow a concept from signal detection theory: the so‐called ‘decision criterion’. This is the criterion used to decide whether to focus patient management on a Common Thing or a Worst Case, and it is a key parameter with respect to the quality of the decision. What counts as ‘good’ quality or a ‘satisfactory’ criterion will depend on domain‐specific values and the potential consequences of an error. In summative assessment, the responsibility and power to make that decision (or to ‘set the decision criterion’) are, in most cases, within the exclusive domain of the assessor, institute or regulatory body – a practice that is rarely challenged. It reflects the underlying norms and values we hold about assessment practices and who we consider best placed to make those high‐stakes decisions. Does this practice still reflect the domain‐specific values in medical education and assessment? Let's consider the values and practices that currently apply in assessment in medical education.

Given its emphasis on self‐regulated learning and its aspiration to deliver lifelong learners to the health care system, medical education clearly seems to value the role and responsibility of the learner. Are these values mirrored in our assessment practices and, more specifically, in the setting of the decision criterion? The pivotal question here is: who should set the ‘satisfactory’ decision criterion for high‐stakes decisions in assessment within competency‐based medical education?
Isn't the unilateral setting of the decision criterion by the assessor at odds with the objectives of modern education and assessment models, which aim to determine who is a reflective, self‐regulating and competent professional? If we want learners to make difficult choices in an informed and reflective way, as in the complex clinical decision‐making process presented by Feufel and Flach,4 we need to reconsider learners’ roles and involvement in setting the decision criterion. Involving the learner in setting the decision criterion used to determine competence or entrustability might benefit the process tremendously, not only by improving the decision‐making process itself, but also by increasing the acceptability of the decision and, more importantly, the meaningfulness of assessment practices in general. In a recent study, learners argued that opportunities to influence and control the process of decision making in assessment stimulate a sense of agency and facilitate a shared responsibility for their learning and assessment experience. Moreover, learner agency in assessment creates the potential to use assessment as an opportunity for self‐regulated learning.5

We are not arguing that learners should set their own assessment criteria entirely, and neither are we ignorant of the challenges and tensions of self‐assessment practices. We do think it is time to transform the common assessment model – in which the role of the student is passive – into one in which both assessor and student contribute to the decision‐making process.
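The signal‐detection trade‐off discussed above can be made concrete with a small simulation (a sketch using invented, illustrative numbers, not data from the article or from Feufel and Flach4): when two groups' performance scores overlap, any choice of decision criterion trades false positives (passing a struggling learner) against false negatives (failing a competent one), rather than eliminating either.

```python
import random

random.seed(0)

# Hypothetical performance scores for two groups of learners (invented numbers
# for illustration only): competent learners score higher on average, but the
# distributions overlap, so any pass/fail cut-off produces some errors.
competent = [random.gauss(70, 10) for _ in range(1000)]
struggling = [random.gauss(55, 10) for _ in range(1000)]

def error_rates(criterion):
    """Classify 'pass' as score >= criterion; return (false_positives, false_negatives).

    A false positive (false alarm) passes a struggling learner;
    a false negative (miss) fails a competent one.
    """
    false_positives = sum(s >= criterion for s in struggling) / len(struggling)
    false_negatives = sum(s < criterion for s in competent) / len(competent)
    return false_positives, false_negatives

# Raising the criterion lowers false positives but raises false negatives.
for criterion in (50, 60, 70):
    fp, fn = error_rates(criterion)
    print(f"criterion={criterion}: false positives={fp:.2f}, false negatives={fn:.2f}")
```

Where the cut‐off should sit is exactly the value judgement the commentary raises: it depends on which error is deemed more costly and, therefore, on who is entitled to make that call.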
Authors: Suzanne Schut, Sylvia Heeneman, Beth Bierer, Erik Driessen, Jan van Tartwijk, Cees van der Vleuten. Journal: Medical Education. Published: 6 April 2020.