Hayley Croft, Conor Gilligan, Rohan Rasiah, Tracy Levett-Jones, Jennifer Schneider.
Abstract
An increasing emphasis on health professional competency in recent times has been matched by an increased prevalence of competency-based education models. Assessments can generate information on competence, and authentic, practice-based assessment methods are critical. Assessment reform has emerged as an academic response to the demands of the pharmacy profession and the need to equip graduates with the knowledge, skills and attributes required to face the challenges of the modern workforce. The objective of this review was to identify and appraise the range of assessment methods used in entry-level pharmacy education and examine current trends in health professional assessment. The initial search located 2854 articles. After screening, 36 sources were included in the review: 13 primary research studies, 12 non-experimental pharmacy research papers, and 11 standards and guidelines from the grey literature. Primary research studies were critically appraised using the Medical Education Research Study Quality Instrument (MERSQI). This review identified three areas in pharmacy practice assessment which provide opportunities for expansion and improvement of assessment approaches: (1) integrated approaches to performance assessment; (2) simulation-based assessment approaches; and (3) collection of validity evidence to support assessment decisions. Competency-based assessment shows great potential for expanded use in pharmacy, but further research and development are needed to ensure its appropriate and effective use.
Keywords: Entrustable Professional Activities (EPAs); Objective Structured Clinical Examination (OSCE); assessment; competency-based education; health professionals; pharmacist; pharmacy; pharmacy student; simulation
Year: 2019 PMID: 31216731 PMCID: PMC6630227 DOI: 10.3390/pharmacy7020067
Source DB: PubMed Journal: Pharmacy (Basel) ISSN: 2226-4787
Reasons for change in focus on practice-based assessments in health professional education.
- A move towards outcome-based education models, including competency-based approaches
- Increased quality assurance (QA) of tertiary education, evidenced directly through student performance
- Emphasis on human factors implicated in medical error and patient safety
- Increasing government and community expectations and pressure on universities for ‘work-ready’ graduates
- Increased accreditation requirements for programmes
- Integration of professional competency standards into education programs
- Increasing employer expectations
Description of assessments used in pharmacy education.
| Assessment Process | Description | Integration of Competencies | Authenticity | Validity and Reliability |
|---|---|---|---|---|
| Multiple-Choice Questions (MCQs), including Extended Matching Questions (EMQs) and computer-adaptive tests (CATs) | The traditional MCQ consists of a question (stem) followed by several (typically 4–5) possible answer options; may also use a true/false format. | Primarily assess knowledge in specific subject areas; higher cognitive processes (e.g., interpretation, knowledge application) may be assessed with well-constructed clinical scenarios. | Lack assessment authenticity. | High levels of reliability. |
| Written examination, including modified essay questions (MEQs) | A traditional written examination usually requires candidates to respond to a variety of questions in short-answer, long-answer or (mini/modified) essay style, with open-ended responses in written form. Questions may elicit specific knowledge or facts, or incorporate the theory of clinical skills and communication. | Primarily assess knowledge in specific subject areas; higher cognitive processes (e.g., interpretation, knowledge application) may be assessed with well-constructed clinical scenarios. | Generally lack authenticity, as students are not able to demonstrate performance. Questions that assess application of knowledge in real-world scenarios are more authentic than those that focus on a student’s ability to reproduce information. | Lower reliability than MCQs, since responses are open ended; wider sampling and more directed questioning generally increase reliability. Validity varies with the content and construction of questions and the number of answer items. |
| Viva voce (“viva”)/traditional oral examination | An oral (rather than written) examination conducted face-to-face with examiner(s). | As well as clinical knowledge, the viva may be useful for assessing characteristics that are difficult to assess via other techniques, such as professionalism, clinical reasoning, ethics, communication skills and problem solving. | Lack assessment authenticity due to the hypothetical nature of questioning. | Viva examinations are often unstructured; wide variation may occur in questioning between candidates and between assessors, so they are prone to errors of variability. |
| Simulated patient encounters/“role plays” and practical examinations | Examination of practice-based skills through demonstration of the task, e.g., patient counselling or a pharmaceutical compounding examination. | Tend to focus on one area of practice, e.g., preparation of a compounded product, counselling, or medication history taking. | Has the potential to be authentic. Authenticity is increased when psychological fidelity is high and decision-making closely simulates the real context of the skill. | Validity and reliability have been established for some tools, including the communication and counselling skills of pharmacists (CCSP) tool. |
| Objective Structured Clinical Examination (OSCE) | The OSCE objectively tests multiple skill sets in a controlled environment. Candidates move through a series of time-limited stations for the assessment of professional tasks in a simulated environment using standardised marking rubrics. | Encourages students to practise skills more holistically. | Although OSCEs often use trained simulated patients in a simulated environment, authenticity has been questioned, as scenarios may not reflect the reality of clinical practice. | Validity and reliability should be established for individual assessments; increasing the number of stations can improve both. |
| Workplace Based Assessments (WPBAs) | Includes tools such as case-based discussion (CBD). | Integration is dependent on the assessment. | Authenticity is high because assessment of competence and performance takes place during normal work activities; the advantages of authenticity rely on the appropriate use of tools and the engagement of both learner and assessor. | Validity and reliability should be established for individual assessment tools. Content validity is often limited, as the assessment covers a student’s management of one specific case at one point in time. Construct validity is high because the tools assess actual practice in the workplace. Reliability often depends on the assessor’s training and experience, and may be improved by using standardised, validated assessment tools, e.g., the pharmacy mini-PAT. |
| Portfolio | A collection of longitudinal evidence of professional development, including performance evaluation samples, action plans, self-reflection, evidence of continuing professional development (CPD), presentations, documentation of critical incidents, and evidence of research and quality improvement projects. | Integration depends on the sources of evidence in the portfolio; there is opportunity to capture evidence from a range of settings that shows amalgamation of competencies, but the content requirements of the portfolio may need to be clearly defined to ensure this. | Authenticity is high, as samples of evidence come directly from the workplace; may be used as a repository for completed WPBAs. | A valid method for assessing competence; however, threats to validity exist, as contents may vary considerably and are self-reported. Evidence shows a wide range of reliability scores. |
| Entrustable Professional Activities (EPAs) | EPAs serve both as a link between competencies and professional responsibilities in practice, and as a mechanism to decide the level of supervision required for a student. | High level of integration, as EPAs require multiple competencies to be applied in an integrative fashion. | Authenticity is high because assessment of competence and performance takes place while performing units of professional practice that reflect the daily work of the practitioner. | Few studies report on the psychometric properties of EPAs; those that do report moderately strong inter-rater reliability. |
Figure 1. PRISMA Flow Diagram for the literature review search [32].
Mapping of assessment approaches to Miller’s Pyramid.
| Assessment Type | Knows (Knowledge) | Knows How (Competence) | Shows How (Performance) | Does (Action) | Is (Identity) |
|---|---|---|---|---|---|
| Multiple-Choice Questions (MCQ) | Yes | Partially | No | No | No |
| Extended Matching Questions (EMQ) | Yes | Partially | No | No | No |
| Written Examination | Yes | Yes | No | No | No |
| Computer-Adaptive Testing (CAT) | Yes | Partially | No | No | No |
| Viva Voce/Oral Exams | Yes | Yes | Partially | No | No |
| Simulated patient encounters/practical examination | Yes | Yes | Partially | No | No |
| Objective Structured Clinical Examination (OSCE) | Yes | Yes | Yes | No | No |
| Workplace Based Assessments (WPBA) | Yes | Yes | Yes | Yes | Yes |
| Portfolio | Yes | Yes | Yes | Yes | Yes |
| Entrustable Professional Activities (EPA) | Yes | Yes | Yes | Yes | Yes |
Description of the included experimental research studies in pharmacy assessment (n = 13) and quality appraisal using the Medical Education Research Study Quality Instrument (MERSQI) [53].
| Citation/Location/Quality | Study Participants/Assessment Approach | Study Aims/Methods | Outcomes and Key Findings | Limitations | Reports on Integration of Competencies (Y/N) | Includes Simulation/Reports on Authenticity (Y/N) | Measures Validity/Reliability of Assessment Tool (Y/N) |
|---|---|---|---|---|---|---|---|
| Santos, S. and Manuel, J. (2017) | Fourth-year undergraduate BPharm students (n = 14) and assessors (n = 6). | Aim: Describe and evaluate the design and implementation of an authentic assessment in an undergraduate pharmacy course. | Authenticity in assessment is subjective for each student. | Single site. | N | Y | N |
| Hirsch, A. and Parihar, H. (2014) | Fourth-year undergraduate PharmD students (n = 73). | Aim: To create a capstone course that provides a comprehensive and integrated review of the pharmacy curriculum. | 95% of students passed the capstone course. | Details about individual assessments are poorly described. | N | Y | N |
| Mackellar, A. et al. (2007) | Pharmacy academics across three universities (n = 38). | Aim: To identify valid and reliable criteria by which patients can assess the communication skills of pharmacy students. | Seven criteria were identified as important measures of pharmacy students’ communication skills and rated as face valid and reliable. | Limited statistical power due to modest sample size. | N | Y | Y |
| Kadi, A. et al. (2005) | First-year undergraduate pharmacy students (n = 38). | Aim: To evaluate the accuracy of pharmacy students’ compounding skills. | Errors ranged from 25% to >200% of the label amount. | Only 54% of students participated. | N | Y | N |
| Salinitri, F. et al. (2012) | Third-year undergraduate pharmacy students (n = 54). | Aim: Compare pharmacy students’ performance on an OSCE with their performance on a written examination for the assessment of problem-based learning. | OSCE performance did not correlate with written examination scores. | Single site. | Y | Y | N |
| Sturpe, D. (2009) | PharmD faculty members (n = 88). | Aim: Describe current OSCE practices (awareness of, interest in, current practice and barriers) in Doctor of Pharmacy (PharmD) programs in the United States. | 37% of program responses reported using OSCEs; 63% reported not using OSCEs, but half of these were considering incorporating them into their curriculum. | Descriptive statistics were used to analyse interview transcripts. | N | N | N |
| Rastegarpanah, M. et al. (2019) | Third- and fourth-year undergraduate pharmacy students (n = 12) and faculty experts (n = 7). | Aim: Design and validate a tool to assess pharmacy students’ performance in developing effective communication and consultation skills. | High inter-rater reliability between expert raters and simulated patient (SP) ratings. | Small sample size limits generalisability; no control group. | N | Y | Y |
| Kirton, S.B. and Kravitz, L. (2011) | Recent graduates of an undergraduate pharmacy program now completing their “preregistration” year (n = 39). | Aim: Investigate the correlation between performance in OSCEs and traditional pharmacy practice examinations at the same level. | Year 3 OSCE results showed moderate correlation with Year 3 written examination results. | Data from Year 2 OSCEs were incomplete and therefore omitted from the analysis. | N | N | N |
| Kimberlin, C. (2006) | Faculty members primarily responsible for communication skills instruction (n = 47). | Aim: Describe current practices in the assessment of patient communication skills in US colleges of pharmacy. | Content analyses revealed considerable variety in the skills assessed and in the formatting and weighting of different skills. | Modest (56%) response rate. | N | Y | N |
| Aojula, H. et al. (2006) | First-year Master of Pharmacy (MPharm) students. | Aim: Explore computer-based approaches for summative assessments with emphasis on development time, academic rigour, security and organisation. | Discrepancy between hand-marking and computer-based marking was <1%; this was improved by embedding a spellcheck tool. | Single site. | N | N | N |
| Kelley, K. et al. (2008) | Fourth-year undergraduate PharmD students (n = 109). | Aim: To develop an assessment tool that would (1) help students review therapeutic decision-making and improve confidence in their skills; (2) provide pharmacy practice residents with the opportunity to lead small-group discussions; and (3) provide program-level assessment data. | No significant difference between pre- and post-test self-reported confidence levels. | Single site. | N | N | N |
| Hanna, L-A. et al. (2017) | Fourth-year Master of Pharmacy (MPharm) students (n = 118). | Aim: Establish pharmacy students’ views on assessment and an integrated five-year degree. | Most respondents considered that formative assessment improved academic performance. | Research students were excluded from the survey. | N | N | N |
| Benedict, N. et al. (2017) | First-year (P1, n = 111) and third-year (P3, n = 108) undergraduate PharmD students and first-year postgraduate pharmacy residents (PGY1, n = 25). | Aim: To design an assessment of practice readiness using blended-simulation progress testing. | Patterns of results were consistent with expectations that scores would improve with advancing training levels. | Survey not administered to PGY1 students; low survey response rate (50%) among P3 students. | N | Y | Y |