Automated essay scoring and the future of educational assessment in medical education.

Mark J Gierl, Syed Latifi, Hollis Lai, André-Philippe Boulais, André De Champlain.

Abstract

CONTEXT: Constructed-response tasks, which range from short-answer tests to essay questions, are included in assessments of medical knowledge because they allow educators to measure students' ability to think, reason, solve complex problems, communicate and collaborate through their use of writing. However, constructed-response tasks are also costly to administer and challenging to score because they rely on human raters. One alternative to the manual scoring process is to integrate computer technology with writing assessment. The process of scoring written responses using computer programs is known as 'automated essay scoring' (AES).
METHODS: An AES system uses a computer program that builds a scoring model by extracting linguistic features from responses to a constructed-response prompt that have been pre-scored by human raters and then, using machine learning algorithms, maps the linguistic features to the human scores so that the computer can be used to classify (i.e. score or grade) the responses of a new group of students. The accuracy of the score classification can be evaluated using different measures of agreement.
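
A minimal Python sketch of this pipeline follows, assuming a handful of simple surface features and a generic scikit-learn classifier; the abstract does not prescribe a particular feature set or learning algorithm, and the essays and scores below are invented placeholders. Operational AES systems extract far richer linguistic features (syntax, discourse, content similarity) than the surface statistics shown here.

    # Sketch only: feature set, model choice, and all data are illustrative
    # assumptions, not the method reported in the paper.
    import re
    from sklearn.ensemble import RandomForestClassifier

    def extract_features(essay: str) -> list[float]:
        """Map one essay to a few illustrative linguistic features."""
        words = re.findall(r"[A-Za-z']+", essay)
        sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
        n_words = max(len(words), 1)
        return [
            float(len(words)),                          # essay length
            sum(len(w) for w in words) / n_words,       # mean word length
            len(words) / max(len(sentences), 1),        # words per sentence
            len({w.lower() for w in words}) / n_words,  # type-token ratio
        ]

    # Responses pre-scored by human raters form the training data.
    scored_essays = [
        "The differential diagnosis should be broad, then narrowed by history.",
        "Patient sick. Give drug.",
    ]
    human_scores = [5, 2]  # placeholder rubric scores

    # Map the linguistic features to the human scores.
    model = RandomForestClassifier(random_state=0)
    model.fit([extract_features(e) for e in scored_essays], human_scores)

    # The fitted model then classifies (scores) responses from new students.
    new_scores = model.predict([extract_features(
        "A structured history and examination guide the initial work-up.")])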
RESULTS: Automated essay scoring provides a method for scoring constructed-response tests that complements the current use of selected-response testing in medical education. The method can serve medical educators by providing the summative scores required for high-stakes testing. It can also serve medical students by providing them with detailed feedback as part of a formative assessment process.
CONCLUSIONS: Automated essay scoring systems yield scores that consistently agree with those of human raters at a level as high as, if not higher than, the level of agreement among human raters themselves. The system offers medical educators many benefits for scoring constructed-response tasks, such as improving the consistency of scoring, reducing the time required for scoring and reporting, minimising the costs of scoring, and providing students with immediate feedback on constructed-response tasks.
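
One widely used agreement measure for such machine-human comparisons is quadratic weighted kappa, which penalises distant disagreements more than near-misses; the abstract does not name a specific statistic, so this choice and the scores below are illustrative placeholders.

    # Sketch only: rater and machine scores are invented for illustration.
    from sklearn.metrics import cohen_kappa_score

    rater_1 = [2, 3, 4, 4, 5, 3, 2, 4]
    rater_2 = [2, 3, 3, 4, 5, 3, 3, 4]
    machine = [2, 3, 4, 4, 5, 3, 3, 4]

    # Machine-human agreement can be compared against the human-human baseline.
    print(cohen_kappa_score(rater_1, rater_2, weights="quadratic"))  # human-human
    print(cohen_kappa_score(machine, rater_1, weights="quadratic"))  # machine-human
    print(cohen_kappa_score(machine, rater_2, weights="quadratic"))  # machine-human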
© 2014 John Wiley & Sons Ltd.

Year:  2014        PMID: 25200016     DOI: 10.1111/medu.12517

Source DB:  PubMed          Journal:  Med Educ        ISSN: 0308-0110            Impact factor:   6.251


Related records: 2 in total

1.  Detection of Residents With Progress Issues Using a Keyword-Specific Algorithm.

Authors:  Gaby Tremblay; Pierre-Hugues Carmichael; Jean Maziade; Mireille Grégoire
Journal:  J Grad Med Educ       Date:  2019-12

2.  Patients don't come with multiple choice options: essay-based assessment in UME.

Authors:  Jeffrey B Bird; Doreen M Olvet; Joanne M Willey; Judith Brenner
Journal:  Med Educ Online       Date:  2019-12
