J Searle1. 1. Department of Obstetrics, Gynaecology and Reproductive Medicine, Flinders University of South Australia, Australia.
Abstract
CONTEXT: The responsibility to determine who is competent to practise medicine, and at what standard, is great. Whilst a period remains available for potential remediation, the examinations at the completion of year three of the four-year Graduate Entry Medical Programme (GEMP) at Flinders University of South Australia (FUSA) are high stakes and contain the majority of the final summative assessment for the certification of students as doctors. The medical school has therefore recently examined its methods of certification, the clinical practice standards sought in its programme, and how to determine these standards.
DESIGN: For all assessments a standard was documented, and methods were employed to set these standards using specific measures of performance. A modification of the Angoff method was applied to the written examination, and the Rothman method, using two criteria, was used to determine competency in the objective structured clinical examination (OSCE). These methods were used for the first time in 1998. Both used trained 'experts' as standard setters, and both used the notion of the 'borderline candidate' to determine the passing standard. This paper describes these two criterion-referenced standard-setting procedures as used in this school, together with related examination performance.
CONCLUSIONS: Whilst the use of standard-setting procedures goes part way towards defining and measuring competence, it is time consuming and requires significant examiner training and acceptance. Using a fixed 50% cut score to determine who is and is not competent is simpler, but it is neither transparent, fair nor defensible.
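The two standard-setting calculations mentioned in the abstract can be illustrated numerically. The sketch below is a hypothetical interpretation, not the authors' actual procedure or data: it assumes the common form of the modified Angoff method (judges estimate, per item, the probability that a borderline candidate answers correctly, and the pass mark is the sum of the judge-averaged item probabilities) and a borderline-group reading of the OSCE standard (the station pass mark is the mean checklist score of candidates given a 'borderline' global rating). The function names and example figures are invented for illustration.

```python
def angoff_pass_mark(judge_estimates):
    """Modified Angoff sketch (assumed form): judge_estimates[j][i] is
    judge j's estimated probability that a borderline candidate answers
    item i correctly. The pass mark is the sum, over items, of the
    probabilities averaged across judges."""
    n_judges = len(judge_estimates)
    n_items = len(judge_estimates[0])
    item_means = [
        sum(judge[i] for judge in judge_estimates) / n_judges
        for i in range(n_items)
    ]
    return sum(item_means)


def borderline_group_standard(checklist_scores, global_ratings,
                              borderline_label="borderline"):
    """Borderline-group sketch (assumed form): the station pass mark is
    the mean checklist score of candidates whose global rating was
    'borderline'. Returns None if no candidate was rated borderline."""
    scores = [
        score for score, rating in zip(checklist_scores, global_ratings)
        if rating == borderline_label
    ]
    return sum(scores) / len(scores) if scores else None


# Illustrative (invented) data: two judges rating a two-item paper,
# and three OSCE candidates with checklist scores and global ratings.
paper_cut = angoff_pass_mark([[0.6, 0.8], [0.4, 0.6]])   # (0.5 + 0.7) = 1.2 of 2 marks
osce_cut = borderline_group_standard(
    [10, 12, 8], ["pass", "borderline", "borderline"]
)  # mean of 12 and 8 = 10.0
```

Both calculations are criterion-referenced: the cut score is derived from expert judgements about the borderline candidate rather than from a fixed percentage or the cohort's score distribution.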