Clarence D Kreiter, George Bergus. Department of Family Medicine, University of Iowa, Iowa City, Iowa 52246, USA. clarence-kreiter@uiowa.edu
Abstract
CONTEXT: The development of a valid and reliable measure of clinical reasoning ability is a prerequisite to advancing our understanding of clinically relevant cognitive processes and to improving clinical education. A record of problem-solving performances within standardised and computerised patient simulations is often implicitly assumed to reflect clinical reasoning skills. However, the validity of this measurement method for assessing clinical reasoning is open to question.

OBJECTIVES: Explicitly defining the intended clinical reasoning construct should help researchers critically evaluate current performance score interpretations. Although case-specific measurement outcomes (i.e. low correlations between cases) have led medical educators to endorse performance-based assessments of problem solving as a method of measuring clinical reasoning, the matter of low across-case generalisation is a reliability issue with validity implications and does not necessarily support a performance-based approach. Given this, it is important to critically examine whether our current performance-based testing efforts are correctly focused. Designing a valid educational assessment of clinical reasoning requires a coherent argument, represented as a chain of inferences, supporting a clinical reasoning interpretation.

DISCUSSION: Suggestions are offered for assessing how well an examinee's existing knowledge organisation accommodates the integration of new patient information, and for focusing assessments on an examinee's understanding of how new patient information changes case-related probabilities and base rates.