Sandra Barteit, Dorota Guzek, Albrecht Jahn, Till Bärnighausen, Margarida Mendes Jorge, Florian Neuhann.
Abstract
In low- and middle-income countries (LMICs), e-learning for medical education may alleviate the burden of severe health worker shortages and deliver affordable access to high quality medical education. However, diverse challenges in infrastructure and adoption are encountered when implementing e-learning within medical education in particular. Understanding what constitutes successful e-learning is an important first step for determining its effectiveness. The objective of this study was to systematically review e-learning interventions for medical education in LMICs, focusing on their evaluation and assessment methods. Nine databases were searched for publications from January 2007 to June 2017. We included 52 studies with a total of 12,294 participants. Most e-learning interventions were pilot studies (73%), which mainly employed summative assessments of study participants (83%) and evaluated the e-learning intervention with questionnaires (45%). Study designs, evaluation and assessment methods showed considerable variation, as did the study quality, evaluation periods, outcome and effectiveness measures. Included studies mainly utilized subjective measures and custom-built evaluation frameworks, which resulted in both low comparability and poor validity. The majority of studies self-concluded that they had had an effective e-learning intervention, thus indicating potential benefits of e-learning for LMICs. However, MERSQI and NOS ratings revealed the low quality of the studies' evidence for comparability, evaluation instrument validity, study outcomes and participant blinding. Many e-learning interventions were small-scale and conducted as short-termed pilots. More rigorous evaluation methods for e-learning implementations in LMICs are needed to understand the strengths and shortcomings of e-learning for medical education in low-resource contexts. 
Valid and reliable evaluations are the foundation to guide and improve e-learning interventions, increase their sustainability, alleviate health care worker shortages and improve the quality of medical care in LMICs.
Keywords: Country-specific developments; Evaluation methodologies; Evaluation of CAL systems; Medical education; Post-secondary education; e-learning
Year: 2020 PMID: 32565611 PMCID: PMC7291921 DOI: 10.1016/j.compedu.2019.103726
Source DB: PubMed Journal: Comput Educ ISSN: 0360-1315 Impact factor: 8.538
Inclusion criteria according to the PICOS framework.
| Parameter | Description |
|---|---|
| Population | Health care professionals, comprising medical students and medical doctors, taking part in an educational endeavor in LMICs. |
| Intervention | Studies that have evaluated asynchronous e-learning for medical education. |
| Comparison | Studies with or without a comparison group (comparator) were eligible. |
| Outcome | Studies must include (i) evaluation methods used for asynchronous e-learning for medical education and (ii) outcome measures within their e-learning intervention. |
| Studies | All study types published in English between 2007 and 2018 in peer-reviewed journals or conference papers. Opinion papers, commentaries, editorial notes, systematic reviews and meta-analyses were excluded. |
Fig. 1. PRISMA flowchart (Moher et al., 2009) showing the study selection process.
Overview of terminology definitions.
| Term | Definition |
|---|---|
| assessment | The systematic collection of individual learner achievements, also of groups, with regard to knowledge, ability and advancement, usually measured with marks or percentages |
| asynchronous | Content that is delivered at a different time than the student receives it, and thus may be accessed while the student is offline |
| blended learning | E-learning supplemented by face-to-face training |
| content type | Category of media used for an e-learning intervention (e.g., PowerPoint slides, videos, audio) |
| delivery method | E-learning delivery techniques and characteristics (e.g., via mobile devices, computer lab in school) |
| diagnostic assessment | Pre-assessment to determine user knowledge and skills prior to the intervention |
| distance learning | Provision of access to learning for those who are geographically separated from the instructor |
| document review | Evaluation method using audit and analysis of design documents or documents produced by learners as they use an e-learning environment |
| e-learning | The use of computer technology to deliver training, including technology-supported learning either online, offline or both |
| evaluation | The measurement of educational processes, content and adequacy for interventions, activities, programs and curricula |
| evaluation timing | Point in time when the e-learning evaluation is completed (prior to, during, or after the intervention) |
| formative assessment | The continuous, often informal communication between teacher and learner(s) to provide feedback on strengths, weaknesses and potential improvements; generally more descriptive and qualitative |
| interactive content | E-learning content that makes use of multimedia, such as text, graphics, audio and/or video, and provides options for navigation, interaction and communication |
| podcasts | “A video and/or audio file made available in digital format for download over the Internet” |
| static content | E-learning content with limited interactivity and content engagement for learners |
| summative assessment | The end-point of an assessment, often formal, whose character is more numeric and quantitative |
| units of measurement | The manner or scale in which the intervention outcome is measured, such as a Likert-scale |
Study characteristics of the 52 included studies describing an e-learning program evaluation implemented in a low- and middle-income country (Jan 2007–June 2017).
| Study Characteristics | Number of Participants, n (%) | Number of Interventions, n (%) |
|---|---|---|
| Total | 12,294 (100%) | 52 (100%) |
| Study design | | |
| Cross-sectional or not controlled | 7,172 (58%) | 27 (52%) |
| Experimental (1 group, self-controlled) | 2,388 (19%) | 7 (13%) |
| Experimental (2 or 3 groups, intervention and control) | 2,734 (23%) | 18 (35%) |
| Randomization | | |
| Randomized | 938 (8%) | 11 (22%) |
| Non-randomized | 11,356 (92%) | 41 (78%) |
| Outcome measures | | |
| Objective | 519 (4%) | 11 (21%) |
| Subjective | 2,779 (23%) | 19 (37%) |
| Mix of subjective and objective | 8,996 (73%) | 22 (42%) |
| Study methods | | |
| Qualitative | 3,106 (25%) | 4 (8%) |
| Quantitative | 8,844 (72%) | 39 (75%) |
| Mixed methods | 344 (3%) | 9 (17%) |
| Country income level | | |
| Low-income country | N/A | 1 (2%) |
| Lower-middle-income country | 4,200 (34%) | 17 (33%) |
| Upper-middle-income country | 7,890 (64%) | 33 (63%) |
| All of the above combined | 204 (2%) | 1 (2%) |
| Region | | |
| Africa | 6,498 (51%) | 12 (22%) |
| Asia | 1,726 (14%) | 24 (44%) |
| Europe | 204 (2%) | 3 (6%) |
| South America | 4,322 (33%) | 15 (28%) |
| Study population | | |
| Medical students (undergraduate) | 9,311 (72%) | 34 (61%) |
| Medical students and Residents | 71 (1%) | 1 (2%) |
| Medical students, Residents and Physicians | 798 (6%) | 2 (4%) |
| Residents (postgraduate) | 2,185 (17%) | 7 (13%) |
| Residents and Physicians | 310 (2%) | 5 (9%) |
| Physicians (work-experienced) | 276 (2%) | 3 (5%) |
Characteristics described in more than one study are indicated. The number of participants and number of e-learning interventions are depicted in absolute numbers and the percentage is based on the absolute numbers of participants and studies for each individual characteristic.
N/A: number of participants not explicitly stated in the study.
More than one option possible for each study.
Features of the e-learning system, and characteristics of assessment and evaluation methods, of the 52 included studies on e-learning program evaluation in LMICs (Jan 2007–June 2017).
| Characteristic | Number of Participants, n (%) | Number of Interventions, n (%) |
|---|---|---|
| Implementation stage | | |
| pilot | 4,690 (34%) | 38 (72%) |
| implemented | 9,611 (66%) | 16 (28%) |
| Delivery mode | | |
| online | 9,458 (77%) | 38 (73%) |
| offline | 300 (2%) | 4 (8%) |
| online and offline | 2,536 (20%) | 10 (20%) |
| Device type | | |
| mobile-based | 334 (3%) | 5 (9%) |
| computer-based | 9,301 (76%) | 39 (75%) |
| mobile- and computer-based | 2,236 (18%) | 3 (6%) |
| unspecified | 423 (3%) | 5 (10%) |
| Evaluation method | | |
| questionnaire | 12,140 (44.6%) | 46 (47%) |
| knowledge testing | 8,732 (32%) | 35 (35%) |
| document review | 43 (0.2%) | 1 (1%) |
| interviews | 110 (0.4%) | 3 (3%) |
| focus groups | 228 (0.8%) | 6 (6%) |
| system log data | 6,001 (22%) | 8 (8%) |
| Assessment type | | |
| Diagnostic | 2,187 (14%) | 30 (35%) |
| Formative | 2,569 (17%) | 8 (9%) |
| Summative | 10,339 (69%) | 48 (56%) |
| Number of method types used | | |
| 1 method type | 1,133 (9%) | 19 (37%) |
| 2 method types | 4,495 (37%) | 12 (23%) |
| 3 method types | 5,933 (48%) | 16 (31%) |
| 4 method types | 438 (4%) | 4 (8%) |
| 5 method types | 295 (2%) | 1 (2%) |
| Study quality scores | | |
| MERSQI ≥ 10.625 | 2,025 (17%) | 26 (50%) |
| MERSQI < 10.625 | 10,269 (83%) | 26 (50%) |
| NOS-E ≥ 2.5 | 6,550 (53%) | 33 (64%) |
| NOS-E < 2.5 | 5,774 (47%) | 19 (36%) |
| NOS ≥ 5 | 5,408 (44%) | 32 (62%) |
| NOS < 5 | 6,886 (56%) | 20 (38%) |
| Most frequent outcome measures | | |
| 1. knowledge | 6,618 | 31 (16%) |
| 2. satisfaction | 6,613 | 20 (10%) |
| 3. usage | 8,680 | 17 (9%) |
| 4. perceptions | 7,681 | 14 (7%) |
| 5. attitude | 963 | 10 (5%) |
| 6. opinion | 2,127 | 10 (5%) |
| 7. skills | 335 | 9 (5%) |
| 8. usefulness | 2,348 | 8 (4%) |
| 9. demographics | 2,513 | 7 (4%) |
| 10. ease of use | 333 | 4 (2%) |
| 11. usability | 156 | 4 (2%) |
| 12. acceptance | 289 | 4 (2%) |
| 13. experience | 330 | 3 (2%) |
| 14. self-perceived confidence | 48 | 3 (2%) |
| 15. learning relevance | 243 | 3 (2%) |
| Completion rate | | |
| 100% | 665 (5%) | 14 (27%) |
| 80–99% | 2,809 (23%) | 16 (31%) |
| 50–79% | 2,672 (22%) | 7 (13%) |
| below 50% | 3,806 (31%) | 5 (10%) |
| NA | 2,342 (19%) | 10 (19%) |
| Self-concluded effectiveness | | |
| Yes | 10,349 (84%) | 45 (87%) |
| No | 1,184 (10%) | 5 (10%) |
| NA | 761 (6%) | 2 (4%) |
Characteristics described in more than one study are indicated. The number of participants and the number of e-learning interventions are depicted in absolute numbers and the percentage is based on the absolute numbers of participants and studies for each individual characteristic.
More than one option possible for each study.
Participant numbers separately counted for each option.
Scores between raters averaged.
Measurement outcomes only included if study n ≥ 3, for a detailed list see Appendix I.
N/A: number of participants not explicitly stated in study.
Fig. 2. Study quality according to the NOS, NOS-E and MERSQI quality assessment tools for the various evaluation methods of e-learning interventions in the included studies.
Outcome measures categorized by the evaluation and assessment methods.
| Methods | Outcome Measure | Number of e-learning interventions (n) |
|---|---|---|
| document review | content fit | 1 |
| focus group | needs | 2 |
| | utility | 1 |
| | opinion | 2 |
| | usage | 1 |
| | usefulness | 1 |
| | learning relevance | 1 |
| interview | learning relevance | 2 |
| | interaction | 1 |
| | attitude | 1 |
| | perspective | 1 |
| | improvements | 1 |
| system log data | usage | 6 |
| | technical problems | 1 |
| | demographics | 1 |
| | participation | 2 |
| | access date | 1 |
| | activity | 1 |
| | performance | 1 |
| questionnaire | access date | 1 |
| | access speed | 1 |
| | access to technology | 4 |
| | activity | 1 |
| | attitude | 6 |
| | cognitive load | 1 |
| | computer literacy | 1 |
| | content fit | 4 |
| | demographics | 5 |
| | ease of questions | 1 |
| | ease of use | 5 |
| | educational environment | 1 |
| | effectiveness | 3 |
| | efficacy | 1 |
| | efficiency | 1 |
| | expectations | 1 |
| | experience | 3 |
| | improvements | 1 |
| | infrastructure | 1 |
| | interaction | 3 |
| | learning approaches | 1 |
| | learning relevance | 2 |
| | motivation | 3 |
| | opinion | 7 |
| | participation | 2 |
| | perceived benefits | 1 |
| | perceived learning gains | 1 |
| | perceptions | 10 |
| | performance | 2 |
| | playability | 1 |
| | preferences | 1 |
| | quality of learning materials | 1 |
| | recommendable | 1 |
| | satisfaction | 11 |
| | self-evaluation | 1 |
| | self-regulation | 1 |
| | service quality | 1 |
| | suitability | 1 |
| | teaching quality | 1 |
| | teaching strategies | 1 |
| | technical problems | 1 |
| | technology acceptance | 1 |
| | time spent to perform task | 1 |
| | usability | 3 |
| | usage | 9 |
| | usefulness | 3 |
| | user-friendliness | 1 |
| pre-post-testing | skills | 10 |
| | knowledge | 29 |
| system log data | knowledge | 1 |
| questionnaire | affected behavior | 1 |
| | self-assessment | 1 |
| | self-perceived skills | 1 |
| | self-reported behavior | 1 |
Fig. 3. Outcome measures of included studies according to the variables of Attwell's evaluation framework.