Comparing Kirkpatrick's original and new model with CIPP evaluation model.

Roghayeh Gandomkar

Abstract

Year:  2018        PMID: 29607338      PMCID: PMC5856911     

Source DB:  PubMed          Journal:  J Adv Med Educ Prof        ISSN: 2322-2220


Dear Editor,

In a young field like educational program evaluation, it is inevitable that conceptual frameworks such as the Kirkpatrick model are revised over time and with greater knowledge. The New World Kirkpatrick Model (NWKM) is the new version of the Kirkpatrick model; it is more receptive to context and process, and hence probably much closer to the context-input-process-product (CIPP) model (1). The aim of this letter is to explore the similarities and differences among three well-known evaluation models: the original and new versions of the Kirkpatrick model and the CIPP model.

The original version of the Kirkpatrick model is an outcome-focused model that evaluates the outcomes of an educational program (for instance, in the field of medical education) at four successive levels: reaction, learning, transfer, and impact (2). The model is rooted in a reductionist approach, which suggests that a program's success or lack of success can be explained simply by reducing the program to its elements (i.e., its outcomes) and examining them (3). Yet Kirkpatrick's original model fails to provide evaluators with insight into the underlying mechanisms that inhibit or facilitate the achievement of program outcomes (4).

In response to this shortcoming, the new version of the Kirkpatrick model added elements that recognize the complexities of the educational program context (5). The most notable changes occurred at Level 3, which now includes processes that enable or hinder the application of learned knowledge and skills. Three interfering factors may influence the outcomes at this level: the required drivers that reinforce, monitor, encourage, and reward learners to apply what is learned during training; on-the-job learning that happens outside the formal program; and learners' motivation and commitment to improve their performance on the job.

Learners' confidence and commitment were added to Level 2, and learners' engagement and subject relevance to Level 1, to broaden the scope of evaluation at these two levels (5).

Although the NWKM appears to better embrace the complexity of educational programs, some investigators may contend that it has become similar to the CIPP evaluation model. I suggest that there remain some fundamental differences between them. The CIPP model stems from complexity theory, which regards the educational program as an open system with emergent, dynamic interactions among its component parts and the surrounding environment. As a result, CIPP pays explicit and implicit attention to the program context, both by treating context evaluation as a separate component among four complementary sets of evaluation studies and by identifying contextual factors within the other components of the model through a variety of qualitative methods (6). The NWKM, on the other hand, is limited to measuring some confounding factors, such as learner characteristics or organizational factors, that bear on program outcome achievement (1).

Kirkpatrick, like many traditional program evaluation models, focuses on proving something about a program (i.e., outcome achievement); thus, it is usually conducted at the end of the program. CIPP, in contrast, prioritizes program improvement, providing useful information to decision makers during all phases of program development, even while the program is still being developed (7). The NWKM has broadened the scope of the traditional model by adding some process measures that enable evaluators to interpret outcome evaluation results, but still with the aim of proving an educational program. Overall, notwithstanding some improvement, the NWKM retains theoretical differences from the CIPP model that result in different methodological and practical preferences.

However, it would not be surprising to witness more convergence among these evaluation models as knowledge and experience grow in the future.
References:  4 in total

1.  Program evaluation models and related theories: AMEE guide no. 67.

Authors:  Ann W Frye; Paul A Hemmer
Journal:  Med Teach       Date:  2012       Impact factor: 3.650

2.  Going beyond Kirkpatrick in evaluating a clinician scientist program: it's not "if it works" but "how it works".

Authors:  Kathryn Parker; Gwen Burrows; Heather Nash; Norman D Rosenblum
Journal:  Acad Med       Date:  2011-11       Impact factor: 6.893

3.  Has the new Kirkpatrick generation built a better hammer for our evaluation toolbox?

Authors:  Katherine A Moreau
Journal:  Med Teach       Date:  2017-06-26       Impact factor: 3.650

4.  Undergraduate medical education programme renewal: a longitudinal context, input, process and product evaluation study.

Authors:  Azim Mirzazadeh; Roghayeh Gandomkar; Sara Mortaz Hejri; Gholamreza Hassanzadeh; Hamid Emadi Koochak; Abolfazl Golestani; Ali Jafarian; Mohammad Jalili; Fatemeh Nayeri; Narges Saleh; Farhad Shahi; Seyed Hasan Emami Razavi
Journal:  Perspect Med Educ       Date:  2016-02
Cited by:  3 in total

1.  Comparing the effectiveness of training course formats for point-of-care ultrasound in the third trimester of pregnancy.

Authors:  Susan Campbell Westerway
Journal:  Australas J Ultrasound Med       Date:  2019-01-10

2.  A Developing Nation's Experience in Using Simulation-Based Training as a Preparation Tool for the Coronavirus Disease 2019 Outbreak.

Authors:  P S Loh; Sook-Hui Chaw; Ina I Shariffuddin; Ching-Choe Ng; Carolyn C Yim; Noorjahan Haneem Md Hashim
Journal:  Anesth Analg       Date:  2021-01       Impact factor: 6.627

3.  Learning benefits of live surgery and semi-live surgery in urology-informing the debate with results from the International Meeting of Reconstructive Urology (IMORU) VIII.

Authors:  Roland Dahlem; Christian P Meyer; Victor M Schuettfort; Tim A Ludwig; Phillip Marks; Malte W Vetterlein; Valentin Maurer; Constantin Fuehner; Florian Janisch; Armin Soave; Michael Rink; Silke Riechardt; Oliver Engel; Margit Fisch
Journal:  World J Urol       Date:  2020-11-02       Impact factor: 4.226

