Shaista Saiyad, Purvi Bhagat, Amrit Virk, Rajiv Mahajan, Tejinder Singh.
Abstract
Assessment is a process that includes ascertaining improvement in students' performance over time, motivating students to study, evaluating teaching methods, and ranking student capabilities. It is an important component of the educational process and strongly influences student learning. Although we have embarked on a new curricular model, assessment has remained largely ignored despite being the hallmark of competency-based education. In the earlier stages, assessment was considered akin to "measurement," on the belief that competence is "generic, fixed and transferable across content," can be measured quantitatively, and can be expressed as a single score. Objective assessment was the norm, and subjective tools were considered unreliable and biased. It was soon realized that "competence is specific and nontransferable," mandating the use of multiple assessment tools across multiple content areas with multiple assessors. A paradigm change through "programmatic assessment" occurred only with the understanding that competence is "dynamic, incremental and contextual." Here, information about students' competence and progress is gathered continually over time, analyzed and, when needed, supplemented with purposefully collected additional information, using a carefully selected combination of tools and assessor expertise, leading to an authentic, observation-driven, institutional assessment system. In any performance assessment, the assessor remains an important part of the process, making assessor training indispensable. In this paper, we look at the changing paradigms in our understanding of clinical competence and the corresponding global changes in assessment, and then make a case for adopting the prevailing trends in the assessment of clinical competence.
Keywords: Assessment; assessor; competency-based medical education; faculty development program; measurement; programmatic assessment
Year: 2021 PMID: 34912682 PMCID: PMC8633695 DOI: 10.4103/ijabmr.IJABMR_334_21
Source DB: PubMed Journal: Int J Appl Basic Med Res ISSN: 2229-516X
Various definitions of assessment highlighting the role it plays in teaching and learning
| Definition |
|---|
| Assessment refers to the processes employed to make judgments about the achievements of students over a course of study |
| The process of measuring an individual's progress and accomplishments against defined standards and criteria, which often includes an attempt at measurement |
| Assessment is any formal or purported action to obtain information about the competence and performance of a student |
| Assessment in medical education addresses complex competencies and thus requires quantitative and qualitative information from different sources as well as professional judgment |
Figure 1: Changing perspectives about clinical competence
Evolution of assessment in medical education
| Salient features | Key points | Limitations | Implications | Role of assessor |
|---|---|---|---|---|
| **The measurement phase (1970s)** | | | | |
| Learner competence purely a quantitative measure; can be expressed as a (single) score | Objectivity important for reliability | Numerical results from structured, standardized tests insufficient to determine student competence | Measurement still useful for lower levels of Miller's pyramid and for selection tests | Objectification and standardization to compensate for "noise" in assessment |
| Competence is generic and transferable | Stimulus format more important than response format | Difficult to measure interdependent traits | "Does" part not assessable by pure numbers | Assessor training to design standardized tools |
| Aimed at minimizing the role of human judgment to reduce unreliability | Validity can be built in by quality assurance in item and test development | Student performance did not generalize well across content | Can produce high inter-rater agreement | |
| Structuring and standardization to increase reliability | Mathematical models can predict competence and behavior | Objectivity did not ensure reliability/generalizability | Validity and reliability "built into" the tool | |
| **The judgement phase** | | | | |
| Competence is not transferable | Bias is inherent to expert subjective judgment | Inter-rater agreement may be low | Expert judgment is indispensable to assessing complex skills | Use appropriate and adequate sampling of assessors to dilute bias |
| Assessment of performance is a judgment and decision-making process | Manipulating numbers does not manipulate reality | Resource and time intensive | Devise strategies to alleviate bias | Feedback based on direct observation |
| Asserted the role of human judgment in the assessment process | Validity and reliability not inherent to the tool | May not be appropriate for selection-type tests | Create meaningful relationship between teacher and learner through feedback dialogues and follow-up | Assessor training important to pick up dyscompetence |
| Assessment in real, authentic settings inclusive of aspects such as critical thinking, professionalism, reflection and self-regulation | Qualitative information has more value than mere scores, more so for complex domain-independent skills | Learner has to be part of assessment process | Capacity building of assessors in use of assessment tools | Validity and reliability depend on the way a tool is used |
| Feedback is an indispensable component of assessment | Systems and structures may be unsupportive | | | |
| Stakeholders' buy-in needed | | | | |
| **The programmatic assessment phase** | | | | |
| Signifies an approach in which information about the learner's competence and progress is collected continually over time, analyzed and, as and when needed, supplemented with purposefully collected additional assessment information | No single method can assess all levels of Miller's pyramid | Need of expert judgment to make meaningful conclusions from the information gathered using different assessment instruments | Optimizes decision making by gathering rich and abundant information from multiple assessment times, methods and sources | Multiple assessors in multiple settings/contexts |
| Whole-task assessment | Need of examiners with sufficient assessment literacy and expertise | Assessment used purposively to achieve the desired learning impact on learners | Assessor must be domain expert | |
| Delinking of assessment and decision making | Systems and structures may be unsupportive | Different examiners with different thought processes add richness to assessment | Feedback and mentoring | |
| Concept of utility of assessment with meaningful compromises and aggregation | Stakeholders' buy-in needed | Validity and reliability of the "program" rather than of the tool | | |
Figure 2: Futuristic approach for an authentic competency assessment system