Adrian M. Casillas, Stephen G. Clyman, Yihua V. Fan, Ronald H. Stevens.
Abstract
This study applied an unsupervised neural network modeling process to test data from the National Board of Medical Examiners (NBME) Computer-based Clinical Scenarios (CCS) to identify new performance categories and to validate this process as a scoring technique. The classifications produced by the neural network were consistent with the NBME model in that highly rated NBME performances (ratings of 7 or 8) clustered together on the neural network output grid. Very low-rated performances appeared to share few common features and were accordingly classified at isolated nodes. This clustering was reproducible across three separately trained networks, with greater than 80% agreement in two of the three. However, the neural network also contained performance clusters in which NBME-based ratings ranged from 1 (worst) to 8 (best); here, agreement between networks was less than 60%. Through visualization of the search strategies (search path mapping), the neural network clustering was found to be sensitive to quantitative and qualitative features of test selection, such as excessive ordering of irrelevant tests, and in some instances reflected broader behavioral classifications. The neural network model also detected a disparity between NBME ratings and an independent human rating system: disagreement among raters was mirrored by a lack of neural network performance clustering. Agreement between the rating systems, however, correlated with neural network clustering for 92% of the highly rated performances.
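The abstract describes performances clustering at nodes on a neural network "output grid", which is characteristic of a self-organizing map (SOM). The paper's exact architecture and parameters are not given in this record, so the following is only an illustrative sketch of SOM-style unsupervised clustering; the grid size, learning rate, neighborhood width, and the toy "performance vectors" are all assumptions, not the authors' method.

```python
# Minimal self-organizing map (SOM) sketch. The abstract does not specify the
# network; a SOM is assumed because performances are said to cluster at nodes
# on an output grid. All parameters below are illustrative, not from the paper.
import numpy as np

def train_som(data, grid_shape=(5, 5), epochs=50, lr=0.5, sigma=1.5, seed=0):
    """Train a SOM; returns a weight grid of shape (rows, cols, n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    n_features = data.shape[1]
    weights = rng.random((rows, cols, n_features))
    # Grid coordinates, used by the Gaussian neighborhood function.
    coords = np.stack(
        np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1
    )
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs  # linearly shrink lr and neighborhood
        for x in data:
            # Best-matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (rows, cols))
            # Nodes near the BMU on the grid are pulled toward x as well.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist**2) / (2 * (sigma * decay + 1e-9) ** 2))
            weights += (lr * decay) * h[..., None] * (x - weights)
    return weights

def map_to_node(weights, x):
    """Return the grid node (row, col) where an input vector lands."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Toy demo with synthetic data: two well-separated groups of hypothetical
# "performance vectors" should land on different grid nodes after training.
rng = np.random.default_rng(1)
group_a = rng.normal(0.0, 0.05, size=(20, 4))
group_b = rng.normal(1.0, 0.05, size=(20, 4))
som = train_som(np.vstack([group_a, group_b]))
node_a = map_to_node(som, group_a.mean(axis=0))
node_b = map_to_node(som, group_b.mean(axis=0))
```

In this sketch, "agreement between networks" could be checked by retraining with different seeds and comparing which performances land on the same cluster, loosely analogous to the reproducibility comparison the abstract reports.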
Year: 2000 PMID: 12386474 DOI: 10.1023/A:1009802528071
Source DB: PubMed Journal: Adv Health Sci Educ Theory Pract ISSN: 1382-4996 Impact factor: 3.853