The power of subjectivity in competency-based assessment.

M Gopalakrishnan, M K Garg.

Abstract

Year:  2021        PMID: 33533743      PMCID: PMC8098865          DOI: 10.4103/jpgm.JPGM_1251_20

Source DB:  PubMed          Journal:  J Postgrad Med        ISSN: 0022-3859            Impact factor:   1.476

We congratulate Virk A et al. on their recent review, “The power of subjectivity in competency-based assessment,” as it has reopened the timely question of subjectivity in assessments in the context of the recent competency-based medical education (CBME) curriculum implementation in India.[1] We agree with the authors that assessing clinical competence is more than mere stacking of numbers, and that the assessment scenario must be holistic, more than just the sum of its parts. We also agree with the authors that the assumption that the reliability of the checklist-mediated approach, especially in Objective Structured Clinical Examinations (OSCEs), stems from its “objectivity” is misplaced; that assumption has turned out to be a “red herring,” and the “black box” actually at work is wider sampling, i.e., multiple examiners and multiple assessment stations.[2]

There are several points in the review which need further examination. The authors have argued that objective assessments are “norm-referenced,” while subjective assessments are “criterion-referenced.” Both these claims appear to be misleading. Norm and criterion referencing are useful and applicable in different situations.[3] For example, we routinely use a classical “objective” assessment like a multiple choice question (MCQ) test with a “pass mark” (say 40%), which makes it criterion-referenced. When the same MCQ test is interpreted as a student doing better than 60% of all the test takers, it becomes norm-referenced. A norm-referenced interpretation may be useful in situations like entrance exams, while a criterion-referenced interpretation is useful for assessing students' mastery of subjects. Neither has anything to do with the assessment being subjective or objective. The authors have also stated that objective assessment is comparable to a cross-sectional study and subjective assessment to a longitudinal study. This is also misleading, in the sense that both types of assessments can have data points over time or at one point, making them longitudinal or cross-sectional. For example, the results of three MCQ tests or OSCEs over three months constitute a longitudinal objective assessment. Also, the claim that all standard setting is purely expert opinion seems far-fetched, as several well-established frameworks exist for the purpose.[4]

We believe that the argument here must not be objective versus subjective assessment, but how both can complement each other in creating valuable, richer data about Indian Medical Graduates, fulfilling the dual objectives of feedback for learning and making defensible high-stakes pass–fail judgments. Hence, we wish to reframe the dichotomy through the lens of programmatic assessment. Programmatic assessment is a set of broad principles incorporating various aspects which the authors have touched upon in their article. These include collecting a continuum of rich, low-stakes data points (each representing a “pixel” in the whole picture) involving qualitative feedback (including subjective assessments) and mentorship for the learner, resulting in trustworthy decision making.[5] The principles of programmatic assessment align well with CBME. The real challenge is to figure out the implementation strategies, especially at the undergraduate level, and to see how concepts like workplace-based assessments and portfolios can be modified and adapted to our needs and settings.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
References (5 in total)

1. Downing SM. Item response theory: applications of modern test theory in medical education. Med Educ. 2003 Aug.

2. Ricketts C. A plea for the proper use of criterion-referenced tests in medical assessment. Med Educ. 2009 Dec.

3. Van der Vleuten CPM, Schuwirth LWT, Driessen EW, Govaerts MJB, Heeneman S. Twelve tips for programmatic assessment. Med Teach. 2014.

4. Virk A, Joshi A, Mahajan R, Singh T. The power of subjectivity in competency-based assessment. J Postgrad Med. 2020 Oct-Dec.

5. Wilkinson TJ, Tweed MJ. Deconstructing programmatic assessment. Adv Med Educ Pract. 2018.