OBJECTIVE: In medical education, the focus has shifted from acquiring knowledge to developing competencies. Effectively monitoring performance in practice throughout the entire training period requires a new approach to assessment. This study aimed to evaluate an instrument that monitors the development of competencies during postgraduate general practice training: the Competency Assessment List (Compass). METHODS: The distribution of scores, reliability, validity, responsiveness, and feasibility of the Compass were evaluated. RESULTS: Compass scores ranged from 1 to 9 on a 10-point scale, with excellent internal consistency (.89 to .94). Most trainees' ratings improved over the course of training. Medium to large effect sizes (.31-1.41) were demonstrated when mean scores of three consecutive periods were compared. Content validity of the Compass was supported by the results of a qualitative study using the RAND modified Delphi method. The feasibility of the Compass was demonstrated. CONCLUSION: The Compass is a competency-based instrument that reliably and validly shows trainees' progress towards the standard of performance. PRACTICE IMPLICATIONS: The programmatic approach of the Compass could be applied in other specialties, provided the instrument is tailored to the specific needs of that specialty.
Authors: Marnix P D Westein; Harry de Vries; Annemieke Floor; Andries S Koster; Henk Buurma Journal: Am J Pharm Educ Date: 2019-08 Impact factor: 2.047
Authors: Elisabeth Flum; Roar Maagaard; Maciek Godycki-Cwirko; Nigel Scarborough; Nynke Scherpbier; Thomas Ledig; Marco Roos; Jost Steinhäuser Journal: GMS Z Med Ausbild Date: 2015-05-13
Authors: R van der Gulden; S Heeneman; A W M Kramer; R F J M Laan; N D Scherpbier-de Haan; B P A Thoonen Journal: BMC Med Educ Date: 2020-06-26 Impact factor: 2.463
Authors: Pieter C Barnhoorn; Vera Nierkens; Marianne C Mak-van der Vossen; Mattijs E Numans; Walther N K A van Mook; Anneke W M Kramer Journal: BMC Fam Pract Date: 2021-12-20 Impact factor: 2.497