| Literature DB >> 33966099 |
Marcio M Gomes1,2,3, David Driman4, Yoon Soo Park5, Timothy J Wood6, Rachel Yudkowsky5, Nancy L Dudek7,8,9.
Abstract
Competency-based medical education (CBME) is being implemented worldwide. In CBME, residency training is designed around the competencies required for unsupervised practice and uses entrustable professional activities (EPAs) as workplace "units of assessment". Well-designed workplace-based assessment (WBA) tools are required to document trainees' competence in authentic clinical environments. In this study, we developed a WBA instrument to assess residents' performance of intra-operative pathology consultations and conducted a validity investigation. The entrustment-aligned pathology assessment instrument for intra-operative consultations (EPA-IC) was developed through a national iterative consultation and was used by clinical supervisors to assess residents' performance in an anatomical pathology program. Psychometric analyses and focus groups were conducted to explore the sources of evidence described by modern validity theory: content, response process, internal structure, relations to other variables, and consequences of assessment. The content was considered appropriate, the assessment was feasible and acceptable to residents and supervisors, and it had a positive educational impact by improving the performance of intra-operative consultations and the feedback given to learners. The results had low reliability, which appeared to be related to assessment biases, and supervisors were reluctant to fully entrust trainees because of cultural issues. With CBME implementation, new workplace-based assessment tools are needed in pathology. In this study, we showcased the development of the first instrument for assessing residents' performance of a prototypical entrustable professional activity in pathology using modern education principles and validity theory.
Keywords: Assessment; Competency-based medical education; Entrustable professional activity; Intra-operative consultations; Validity; Workplace-based assessment
Year: 2021 PMID: 33966099 PMCID: PMC8516791 DOI: 10.1007/s00428-021-03113-6
Source DB: PubMed Journal: Virchows Arch ISSN: 0945-6317 Impact factor: 4.064
Fig. 1 Entrustment-aligned pathology assessment instrument for intra-operative consultations (EPA-IC)
Types of rater biasa
| Type of rater bias | Description |
|---|---|
| Halo effect | A single overall impression of the performance drives the scores awarded across all items of the rating scale, rather than each item reflecting the specific behavior it targets |
| Extreme response bias | The respondents may mark the extreme anchors rather than those in between, which can be due to other biases (see below) |
| Leniency-stringency bias | Some raters tend to be more lenient, while others are more stringent, which is usually related to personality traits |
| Incompetence bias | The tendency of raters to assign high ratings because they lack confidence or competence in rating the behavior. Raters who are not proficient in the tasks being rated do not want to penalize the person being rated for their own shortcomings |
| Buddy bias | The degree of acquaintance between supervisor and trainee might increase ratings because of social aspects |
| Back-scratching bias | A faculty member gives high ratings to residents on the assumption that the resident will be less likely to give them a low rating (fear of retribution) |
aAdapted from Berk RA [35]
Descriptive statistics for the entrustment-based pathology assessment of intraoperative consultations
| Item | Mean rating | SD | Min | Max | Item-total correlation |
|---|---|---|---|---|---|
| Pre-procedure plan | 4.78 | 0.58 | 2 | 5 | 0.71 |
| Case preparation | 4.75 | 0.80 | 1 | 5 | 0.72 |
| Surgery-pathology handover | 4.77 | 0.68 | 1 | 5 | 0.78 |
| Technical performance | 4.58 | 0.88 | 1 | 5 | 0.72 |
| Diagnostic interpretation | 4.41 | 0.98 | 1 | 5 | 0.77 |
| Post-procedure plan | 4.71 | 0.63 | 2 | 5 | 0.78 |
| Efficiency and flow | 4.84 | 0.50 | 2 | 5 | 0.77 |
| Communication/collaboration | 4.89 | 0.36 | 3 | 5 | 0.69 |
Results of G-study: variance components of the different factors
| Facet | Variance | % Variance | Variance associated with differences |
|---|---|---|---|
| pa | .032 | 5 | Between residents |
| f:p | .281 | 48 | Between forms any given resident received |
| i | .026 | 5 | Between items |
| pi | .003 | 0 | Residents getting different ratings on the items |
| fi:p | .243 | 42 | Due to the interaction of all 3 factors plus overall error |
ap resident, f forms, i items
G (overall) = [var(p) + var(pi)/ni] / [var(p) + var(pi)/ni + var(f:p)/nf + var(fi:p)/(nf × ni)] = .41
G (internal consistency) = [var(p) + var(f:p)] / [var(p) + var(f:p) + var(pi)/ni + var(fi:p)/ni] = .91
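As a check on the two formulas, the G coefficients can be recomputed from the variance components in the table above. A minimal sketch, assuming ni = 8 (the eight instrument items listed earlier); the average number of forms per resident, nf, is not reported here, so the overall G is left as a function of nf rather than asserted:

```python
# Variance components taken from the G-study table above.
var_p = 0.032    # p: residents
var_fp = 0.281   # f:p, forms nested in residents
var_pi = 0.003   # pi: resident x item interaction
var_fip = 0.243  # fi:p: form x item interaction plus overall error

n_i = 8  # assumption: the 8 items of the instrument

# Internal consistency of a single form (persons plus forms as the
# object of measurement), per the second formula above.
g_internal = (var_p + var_fp) / (var_p + var_fp + var_pi / n_i + var_fip / n_i)

def g_overall(n_f):
    """Overall G for resident-level decisions, averaging over n_f forms
    (first formula above); n_f is not reported in this excerpt."""
    universe = var_p + var_pi / n_i
    return universe / (universe + var_fp / n_f + var_fip / (n_f * n_i))

print(round(g_internal, 2))  # 0.91, matching the reported coefficient
```

Note that g_overall increases with nf: more forms per resident average out the large form-to-form variance (48% of total), which is what drives the low overall reliability of .41.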
Overall performance according to PGME year of training
| PGYa | Mean | SD | n |
|---|---|---|---|
| 2 | 4.46 | 0.70 | 35 |
| 3 | 4.96 | 0.09 | 17 |
| 4 | 4.99 | 0.04 | 9 |
| 5 | 4.90 | 0.27 | 12 |
| Total | 4.71 | 0.55 | 73 |
a Post-graduate year of training
Ratings of resident ability to safely perform intraoperative consultations independently according to post-graduate year of training
| Safe to perform independently | PGY 2 | PGY 3 | PGY 4 | PGY 5 | Total |
|---|---|---|---|---|---|
| No | 16 | 1 | 0 | 0 | 17 |
| Yes | 19 | 16 | 9 | 12 | 56 |
| Total | 35 | 17 | 9 | 12 | 73 |
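The counts above can be summarized as entrustment rates per training year; a quick arithmetic check using only the figures in the table:

```python
# "Safe to perform independently" counts by post-graduate year,
# copied from the table above.
yes = {2: 19, 3: 16, 4: 9, 5: 12}
no = {2: 16, 3: 1, 4: 0, 5: 0}

rates = {pgy: yes[pgy] / (yes[pgy] + no[pgy]) for pgy in yes}
overall = sum(yes.values()) / (sum(yes.values()) + sum(no.values()))

for pgy, r in sorted(rates.items()):
    print(f"PGY-{pgy}: {r:.0%} entrusted")
print(f"Overall: {overall:.0%}")  # 77% (56 of 73 assessments)
```

The rate rises from 54% in PGY-2 to 100% by PGY-4, consistent with the abstract's observation that supervisors withheld full entrustment mainly from junior trainees.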
Threats to validity in assessment
| Threat | Description |
|---|---|
| Construct-irrelevant variance | The variation in scores is due to something unrelated to the construct intended to be measured. For instance, if raters are considering the resident's year of training when judging their performance, it could alter the score in a way unrelated to their ability to perform intra-operative consultations |
| Construct underrepresentation | Only part of the construct intended to be measured is actually being measured. For instance, if the ability to communicate results to surgeons is not assessed, the score would not capture all the aspects related to the ability to perform intra-operative consultations |