Sara Kim
Abstract
A wide range of e-learning modalities is widely integrated into medical education. However, some of the key questions related to the role of e-learning remain unanswered, such as (1) what is an effective approach to integrating technology into pre-clinical vs. clinical training?; (2) what evidence exists regarding the type and format of e-learning technology suitable for medical specialties and clinical settings?; (3) which design features are known to be effective in designing on-line patient simulation cases, tutorials, or clinical exams?; and (4) what guidelines exist for determining an appropriate blend of instructional strategies, including on-line learning, face-to-face instruction, and performance-based skill practices? Based on the existing literature and a variety of e-learning examples of synchronous learning tools and simulation technology, this paper addresses the following three questions: (1) what is the current trend of e-learning in medical education?; (2) what do we know about the effective use of e-learning?; and (3) what is the role of e-learning in facilitating newly emerging competency-based training? As e-learning continues to be widely integrated in training future physicians, it is critical that our efforts in conducting evaluative studies target specific e-learning features that can best mediate intended learning goals and objectives. Without an evolving knowledge base on how best to design e-learning applications, the gap between what we know about technology use and how we deploy e-learning in training settings will continue to widen.
Keywords: Computer-Assisted Instruction; Education, Medical; Learning
Year: 2006 PMID: 19223995 PMCID: PMC2631188 DOI: 10.3352/jeehp.2006.3.3
Source DB: PubMed Journal: J Educ Eval Health Prof ISSN: 1975-5937
Fig. 1 Comparison of percentages of 125 US medical schools reporting the use of educational software programs in the basic sciences curriculum in 1998 and 2002.
Fig. 2 Comparison of percentages of 125 US medical schools reporting the use of educational software programs in clinical clerkships in 1998 and 2002.
Fig. 3 Kirkpatrick's model of summative evaluation.
Number and Percentage of Students with Correct Scores on Qualitative Comments and Multiple-Choice Questions
Summary of reviews of computer- and Web-based educational systems
| Author/Year | Focus of Review | Number of Studies | Period Covered | Discipline/Specialty | Main Results & Implications |
|---|---|---|---|---|---|
| Adler & Johnson, 2000 | Provide a general overview of the literature on computer-aided instruction (CAI) in medical education. | 1,071 | 1966-1998 | Medicine, Education | Most studies report demonstration projects without evaluative data. More CAI-to-CAI comparative studies are needed, rather than CAI-to-non-CAI studies. Economic analyses of applications and technologies are needed. A greater knowledge base is needed for understanding how to integrate CAI into a larger medical curriculum and how to evaluate CAI to understand its effectiveness in different learning environments involving different students. |
| Chumley-Jones, Dobbie and Alford, 2002 | Identify aspects of Web-based learning that have been studied. Describe evaluation strategies used in the reviewed studies. | 76 | 1966-2002 | Medicine, Dental, Nursing | The majority of studies were descriptive in nature with no evaluative data. Descriptive studies tended to report learners' satisfaction with learning tools. Among studies reporting data, the use of pre- and post-knowledge tests in a multiple-choice question format was the most prevalent method. Only one study described direct and indirect costs associated with Web-based vs. text-based learning. Areas of unique contribution of Web-based learning to the training of health professionals need to be more clearly defined. |
| Letterie, 2003 | To assess the quality of evidence for implementing computer-assisted instruction. | 210 | 1988-2000 | Medicine | Most studies were descriptive in nature. Studies positively endorsed featured technology without measures of effectiveness. The most widely used assessment measures included pre- and post-tests of knowledge. Few studies compared computer-assisted instruction with different learning modalities. |
| Lau and Bates, 2004 | To examine the types and content of e-learning technology in undergraduate medical education. | 50 | 1997-2002 | Medicine (undergraduate) | The majority of studies were descriptive in nature. The lack of study design makes it difficult to judge the quality of descriptive reports. The majority of evaluation measures included user satisfaction, actual usage, subjective feedback, and student performance. |
| Curran and Fleet, 2005 | To examine the nature and characteristics of Web-based continuing medical education evaluative outcomes. | 86 | 1966-2003 | Medicine (continuing medical education) | The majority of evaluative research is based on participant satisfaction data. There is a lack of systematic evidence suggesting that Web-based CME enhances clinical practice performance or patient/health outcomes. |
| Issenberg, McGaghie, Petrusa, Gordon, and Scalese, 2005 | Review and synthesize existing evidence in the literature on the features and uses of high-fidelity medical simulations that lead to effective learning. | 109 | 1966-2003 | Medicine | Among the target features, 47% of the reviewed articles reported that feedback is the most important feature of simulation-based medical education; 39% identified repetitive practice as a key feature involving the use of high-fidelity simulations; 25% cited curriculum integration of simulation as an essential feature; and 14% highlighted the range of task difficulty as an important variable. Fewer than 10% of the reviewed studies cited the following features as important factors for simulations: multiple learning strategies, capture of clinical variation, controlled environment, individualized learning, defined outcomes, and simulator validity correlated with learning. |