
Intervention Descriptions in Medical Education: What Can Be Improved? A Systematic Review and Checklist.

Jennita G Meinema, Nienke Buwalda, Faridi S van Etten-Jamaludin, Mechteld R M Visser, Nynke van Dijk.

Abstract

PURPOSE: Many medical education studies focus on the effectiveness of educational interventions. However, these studies often lack clear, thorough descriptions of interventions that would make the interventions replicable. This systematic review aimed to identify gaps and limitations in the descriptions of educational interventions, using a comprehensive checklist.
METHOD: Based on the literature, the authors developed a checklist of 17 criteria for thorough descriptions of educational interventions in medical education. They searched the Ovid MEDLINE, Embase, and ERIC databases for eligible English-language studies published January 2014-March 2016 that evaluated the effects of educational interventions during classroom teaching in postgraduate medical education. Subsequently, they used this checklist to systematically review the included studies. Descriptions were scored 0 (no information), 1 (unclear/partial information), or 2 (detailed description) for each of the 16 scorable criteria (possible range 0-32).
RESULTS: Among the 105 included studies, the criteria most frequently reported in detail were learning needs (78.1%), content/subject (77.1%), and educational strategies (79.0%). The criteria least frequently reported in detail were incentives (9.5%), environment (5.7%), and planned and unplanned changes (12.4%). No article described all criteria. The mean score was 15.9 (SD 4.1), with a range from 8 (5 studies) to 25 (1 study). The majority (76.2%) of articles scored 11-20.
CONCLUSIONS: Descriptions were frequently missing key information and lacked uniformity. The results suggest a need for a common standard. The authors encourage others to validate, complement, and use their checklist, which could lead to more complete, comparable, and replicable descriptions of educational interventions.


Year:  2019        PMID: 30157087      PMCID: PMC6365274          DOI: 10.1097/ACM.0000000000002428

Source DB:  PubMed          Journal:  Acad Med        ISSN: 1040-2446            Impact factor:   6.893


The core business of medical education is to educate trainees so that they become competent professionals. To optimize this process, many educational interventions are developed and studied for their effectiveness. Such studies can be considered complex because their outcomes may be influenced by many variables, such as teacher quality, the setting, and trainees’ previous experiences and levels of commitment. Moreover, the number of interacting components within intervention studies is generally high.[1-3] It is of great importance, therefore, that all relevant components be described explicitly[4] so that educational interventions can be replicated, study outcomes can be understood and interpreted, and reliable conclusions can be drawn regarding the effectiveness of (aspects of) interventions. Furthermore, clear descriptions of educational interventions help medical educators translate successful interventions to local settings.[4-6]

Studies published on educational interventions in the field of medical education often lack thorough descriptions, however.[7-11] The main shortcomings that have been identified include incomplete information and a lack of uniformity in the descriptions of both the research methods and the educational interventions themselves.[9,12,13] In a review focusing on how studies of educational interventions for evidence-based practice (EBP) describe these interventions, Phillips et al[13] demonstrated that there were many inconsistencies in the descriptions. Because educational interventions affect medical education programs, which in turn affect health care, improving completeness and uniformity in the descriptions of educational interventions is important.[14] To improve such descriptions, it would be helpful to know what problems currently exist. In this review, we therefore systematically analyzed published descriptions of educational interventions to identify potential gaps and limitations.

To identify the limitations of published intervention descriptions, we first needed to identify criteria for thorough descriptions. While several checklists had already been designed to report on educational interventions,[5,9,10,14,15] none seemed fully appropriate for describing educational interventions in medical education because their scope was either content specific[15] or too broad and also included the description of the study methodology.[5,9,10,14] Therefore, based on the educational intervention literature and the aforementioned existing checklists, we established a new checklist of key items that authors should cover to describe interventions in medical education. Using this checklist, we performed a systematic review of medical education intervention studies, with a focus on interventions in postgraduate medical education.

Method

Development of the checklist

We began by developing a list of criteria to assess the descriptions of educational interventions in medical education in January 2015. This checklist, and hence this review, covered the description of the intervention only and did not deal with study design.

Our list was based on the Guideline for Reporting Evidence-Based Practice Educational Interventions and Teaching (GREET) checklist developed by Phillips et al.[15] We reformulated the GREET item that was specific to EBP (content/subject) to be applicable to educational interventions in general. We merged two items (planned changes, unplanned changes) and supplemented four items (theory, delivery, schedule, instructors). We incorporated four items (learning objectives, educational strategies, materials, environment) directly into our list. We did not include attendance because the scope of this item was covered by two others (delivery [from GREET] and participants [a new item, described below]).

On the basis of the literature, we identified additional items that were not covered by GREET. According to Morrison et al,[14] Reznich and Anderson,[5] and Windish et al,[12] a first step when developing an intervention should be the assessment of learning needs of future participants. In addition, as discussed by Olson[6] and by Windish et al,[12] knowing the rationale behind the design of an educational intervention provides insight into the main decisions, and participants and context can influence the design. Therefore, we added the following criteria to our checklist: learning needs,[5,6,12,14] intervention development process,[6,9] context and settings,[5,6,9] and participants.[10] We also added items on evaluating the effect of and satisfaction with the intervention: assessment and satisfaction.[2,5,6,9,10]

The final list of 17 items was completed in November 2015 and discussed during a workshop that month at a Dutch medical education conference.[16] The workshop participants agreed on the importance of all criteria.

Subsequently, in January 2016, a pilot survey was conducted, using a subset of 15 educational intervention papers included in our systematic review (described below). Two of the authors (J.M. and N.B.) individually scored each intervention description on all items and discussed their scores. All criteria except assessment proved possible to score. The studies included in the pilot showed that it was often unclear whether the assessment performed was for the study, for the intervention (or part of the intervention), or for both. Assessment was therefore not scored in the current review, although we included this item in the checklist given its overall importance. Furthermore, scoring was difficult on the materials and planned/unplanned changes criteria because it was not always clear which materials were used or whether the intervention was changed during the study. J.M. and N.B. ultimately reached agreement on how to score all criteria. Based on the discussions, a comprehensive description of each item and its scoring was developed.

Our final checklist distinguishes three stages—preparation, intervention, and evaluation—and includes 17 items (Table 1). Each item, except assessment, is scored 0, 1, or 2, depending on the completeness of the description: A score of 2 corresponds to a detailed description, a score of 1 corresponds to an unclear or partial description, and a score of 0 is given when information is lacking entirely.
Some criteria contain multiple elements; for example, a complete or detailed description of the context and settings item for a score of 2 includes the educational institute, time of the year, year of the curriculum, and course area/specialty. If one of these elements is missing from the description, the article scores a 1 on this criterion. The scoring guidelines for each item are provided in the last column of Table 1.
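To make these scoring rules concrete, the following is a minimal sketch in Python. It is illustrative only, not the authors' actual instrument; the criterion names are paraphrased from Table 1.

```python
# Illustrative sketch of the review's scoring rules (not the authors' tool).

SCORABLE_CRITERIA = [
    # Preparation
    "learning needs", "intervention development process",
    # Intervention
    "learning objectives", "educational strategies", "theory",
    "content/subject", "participants", "context and settings",
    "schedule", "materials", "incentives", "instructors",
    "delivery", "environment",
    # Evaluation (assessment is on the checklist but was not scored)
    "planned/unplanned changes", "satisfaction",
]

def criterion_score(elements_reported: list[bool]) -> int:
    """Score one criterion: 2 if every required element is described,
    1 if some but not all are (unclear/partial), 0 if none are."""
    if all(elements_reported):
        return 2
    return 1 if any(elements_reported) else 0

def total_score(scores: dict[str, int]) -> int:
    """Sum the 16 per-criterion scores into a 0-32 article total."""
    assert set(scores) == set(SCORABLE_CRITERIA), "rate every criterion"
    assert all(v in (0, 1, 2) for v in scores.values()), "scores are 0, 1, or 2"
    return sum(scores.values())

# Context and settings has four elements: institute, time of year,
# curriculum year, course area/specialty. Two described -> score 1.
print(criterion_score([True, True, False, False]))     # -> 1
# An article with partial information on every criterion totals 16.
print(total_score({c: 1 for c in SCORABLE_CRITERIA}))  # -> 16
```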
Table 1

Checklist for Thorough Descriptions of Educational Interventions in Medical Education, With Criteria and Scoring

Search methods

The MEDLINE (Ovid), Embase (Ovid), and ERIC (Ovid) databases were searched on March 22, 2016, for eligible English-language studies published since January 1, 2014. (The detailed search strategy used for each database is available in Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A590.) To narrow the search, the scope was set to studies evaluating the effects of educational interventions during formal teaching (classroom teaching) in postgraduate medical education.

Inclusion and exclusion criteria

We included studies of educational interventions concerning postgraduate trainees (i.e., residents, interns) irrespective of specialty and year of education when residents accounted for at least 50% of the participants and the intervention and/or control session took place during classroom teaching. Studies concerning physical examination or technical/clinical skills interventions (e.g., palpation instruction) were excluded, as were studies concerning interventions “on the job” and in bedside teaching, because these settings are less controlled than classroom settings and were therefore expected to be more difficult to describe. In addition, articles were excluded if the full text was unavailable in English or if they did not include an evaluation or study the effects of the intervention (i.e., reviews, abstracts, editorials, protocols, innovation reports, conference proceedings). Qualitative studies were excluded because they are generally exploratory and serve a different purpose than quantitative studies, which generally focus on effectiveness.

Study selection

N.B. and J.M. reviewed the titles and abstracts of records identified through the database searches against the inclusion and exclusion criteria. Their independent analysis of a subset (the first 200) of the publications yielded a 97% agreement rate. On the basis of this high level of agreement, the remaining publications were reviewed and selected by either J.M. or N.B. If the title and abstract met the inclusion criteria, or if they did not provide sufficient clarification to determine whether the study met the criteria, the full text was reviewed for eligibility.

The final set of included articles was scored on the 16 scorable items of our checklist (Table 1), and data were collected on general study characteristics (journal, study design, country of origin, specialty of study participants). After the pilot survey (described above), the remaining articles were divided between J.M. and N.B. for scoring. Articles that referred to supplementary material (in an appendix, online, or in a prior publication) were scored on the basis of all information available, including the supplementary material when it could be obtained. J.M. and N.B. discussed scores frequently to maintain uniformity in scoring practice. Narrative details backing the scores were collected and are available from the corresponding author upon request. As noted above, the included studies were not assessed on the methodology used because the aim of this systematic review was to analyze descriptions of the interventions rather than methodology.

Analysis

All supporting quotations and other narrative data, associated scores, and general study characteristics were documented using Microsoft Excel 2011 (Microsoft Corp., Redmond, Washington). Descriptive statistics of study scores and characteristics were calculated using SPSS version 19.[17] The analysis was performed from April 2016 to February 2017.
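As an illustration only (the review used Microsoft Excel and SPSS; the rows below are invented), an equivalent tabulation of the extraction data in Python/pandas might look like this:

```python
import pandas as pd

# Hypothetical extraction table mirroring the review's Excel workbook:
# one row per included article, with general characteristics and the
# checklist total (all values invented for illustration).
df = pd.DataFrame({
    "pmid":        [11111111, 22222222, 33333333, 44444444],
    "country":     ["USA", "Netherlands", "USA", "Canada"],
    "design":      ["pre/post one group", "RCT", "pre/post one group", "cohort"],
    "total_score": [14, 21, 8, 17],
})

# Descriptive statistics of checklist scores and study characteristics.
print(df["total_score"].describe())               # mean, SD, min, max, quartiles
print(df["design"].value_counts(normalize=True))  # proportion per study design
```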

Results

The electronic database searches identified 1,826 records (Figure 1). The initial screening of titles and abstracts yielded 122 articles. After review of the full texts, 102 articles were included. Screening the reference lists of these 102 articles for additional eligible studies yielded 3 articles, resulting in a total of 105 articles selected for inclusion in the review.[18-122]
Figure 1

Flow diagram, from database searches to final included studies, for a systematic review of educational intervention descriptions in postgraduate medical education in studies published January 2014–March 2016. Inclusion criteria consisted of (1) educational intervention, (2) residents at least 50% of participants, and (3) classroom teaching (e.g., didactic lecture, small-group discussion). The authors excluded articles that did not have the full text available (in English) for review, as well as editorials, reviews, abstracts, conference proceedings, and qualitative studies. Some studies were excluded on the basis of multiple exclusion criteria; this figure lists each excluded study only once, at the most relevant applicable exclusion criterion.

Throughout the screening process, articles were excluded for the following reasons, in order of incidence:

- No effect study or no evaluation of the intervention (n = 1,063; 58.2%)
- No classroom teaching (n = 331; 18.1%)
- No resident participants or < 50% resident participants (n = 206; 11.3%)
- Only title and abstract available (e.g., congress or poster abstracts) (n = 74; 4.1%)
- Qualitative study (n = 50; 2.7%)

Details on individual articles and the reasons for exclusion are available from the corresponding author upon request.

Characteristics of included studies

The 105 included articles were published in 74 journals, the most prevalent of which were the Journal of Graduate Medical Education (8.6%; n = 9) and Academic Psychiatry (6.7%; n = 7). The studies were conducted in 17 countries, most often in the United States (61.0%; n = 64). The interventions covered 17 specialties; the most frequent were internal medicine (14.3%, n = 15), family medicine (12.4%, n = 13), emergency medicine (10.5%, n = 11), and anesthesiology (10.5%, n = 11). Some studies did not report the specialty of the participants clearly (13.3%, n = 14), and some studies included trainees from multiple specialties (5.7%, n = 6). Almost half of the studies evaluated the intervention by using a pre- and posttest design for one group (48.6%; n = 51). The study characteristics are summarized in Supplemental Digital Appendix 2 at http://links.lww.com/ACADMED/A590.

Assessment of educational intervention descriptions: Scores on the checklist

An overview of the scores for each item, stage, and article is provided in Supplemental Digital Appendix 3 at http://links.lww.com/ACADMED/A591. The distribution of scores for each criterion for the included articles is presented in Table 2. Good practice examples for each item are provided in Supplemental Digital Appendix 4 at http://links.lww.com/ACADMED/A590.
Table 2

Distribution of Scores on Checklist for Thorough Descriptions of Educational Interventions in Medical Education, for Studies Included in a Systematic Review of Educational Intervention Descriptions in Postgraduate Medical Education

Each description of an educational intervention could score between 0 and 32 points based on the criteria checklist (Table 1). None of the articles described all of the criteria in detail. The lowest score of the articles we analyzed was 8 (5 studies[29,58,61,68,101]), and the highest score was 25 (1 study[50]). The mean score for the 105 included articles was 15.9 (SD 4.1). The distribution of scores was as follows: 12 articles (11.4%) scored 8–10, 80 articles (76.2%) scored 11–20, and 13 articles (12.4%) scored 21–25.

Twenty-four articles referred to additional material (in an appendix, online, or in a prior publication). We could not find the additional material for 5 articles (mean score: 13.8 [SD 3.1]).[18,19,44,48,83] For the remaining 19 articles,[22,23,25,35,39,41,50,51,74,78,81,91-93,98,104,110,116,119] scoring included the supplementary material. The mean score of these 19 articles was 18.3 (SD 3.7; range 13–25), significantly higher than the mean score of 15.5 (SD 4.0) for the 81 articles without additional information (P = .007; 95% CI on the observed difference = 0.8–4.8).
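As a hedged sketch of how such a comparison can be reproduced (using simulated scores with the reported group sizes, means, and SDs, not the review's actual data), here is an independent-samples t test with a 95% CI on the mean difference in Python:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated stand-ins for the reported groups: 19 articles scored with
# supplementary material (mean 18.3, SD 3.7) and 81 without (mean 15.5, SD 4.0).
with_suppl = rng.normal(18.3, 3.7, 19)
without_suppl = rng.normal(15.5, 4.0, 81)

# Classic (equal-variance) independent-samples t test, as in the review.
t_stat, p_val = stats.ttest_ind(with_suppl, without_suppl)

# 95% CI on the mean difference, built from the pooled standard error.
n1, n2 = len(with_suppl), len(without_suppl)
diff = with_suppl.mean() - without_suppl.mean()
sp2 = ((n1 - 1) * with_suppl.var(ddof=1)
       + (n2 - 1) * without_suppl.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
half_width = stats.t.ppf(0.975, n1 + n2 - 2) * se
print(f"t = {t_stat:.2f}, p = {p_val:.4f}, diff = {diff:.1f} "
      f"(95% CI {diff - half_width:.1f} to {diff + half_width:.1f})")
```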

Preparation (2 criteria).

Learning needs were reported in all articles, with 78.1% (n = 82) describing them in detail (score 2). However, 21.9% (n = 23) failed to describe why the intervention was designed specifically for the participants (score 1). The intervention development process was described in detail by 28.6% (n = 30) of the studies (score 2), as in the following example from Chee et al[33]: We designed a teaching skills curriculum for the HMS [Harvard Medical School] Residency Training Program in Ophthalmology following Kern’s 6-step approach to curriculum development. These steps include problem identification, needs assessment of targeted learners, establishment of goals and objectives, design and implementation of educational strategies, and program evaluation.

Intervention (12 criteria).

Which educational strategy was applied during the intervention was reported by 79.0% (n = 83) of the studies (score 2). However, 52.4% (n = 55) failed to mention the educational theory behind the strategy (score 0). An illustrative example of a study that did support the applied strategy with theoretical substantiation is Daly et al[36]: Prior studies have found that narrow-band imaging (NBI) can be taught to physicians by in-person training and web-based programs.

Learning objectives were reported in detail by 19.0% (n = 20) of the studies (score 2), but 43.8% (n = 46) did not report any objectives (of the intervention) at all (score 0). The content/subject of the intervention was described clearly in 77.1% (n = 81; score 2).

The number of participants was reported in all studies except one.[60] However, details such as the participants’ level of knowledge or demographic information were missing in 62.9% (n = 66) of the studies (score 1). Some information on the context and settings (i.e., the institution, time of the year, year of the curriculum, and course area/specialty) was reported by almost all studies (98.1%, n = 103). However, 65.7% (n = 69) did not describe all of these elements (score 1).

A complete description of the schedule (i.e., number of sessions, frequency, timing and duration, and program schedule) was given in 25.7% (n = 27) of the studies (score 2); however, 67.6% (n = 71) did not describe all of these elements (score 1), and 6.7% (n = 7) reported no information on the schedule at all (score 0).

With regard to course materials, 24.8% (n = 26) of the studies did not describe any materials (score 0), and detailed information was missing from another 37.1% (n = 39; score 1). The remaining 38.1% (n = 40) described their materials in more detail (score 2), as in this example from Haspel et al[54]: PowerPoint lecture. Participants were given access to the activities at the workshop electronically (initially Google Docs and later Google Forms). We have also made all teaching materials and an instructor handbook available online at no cost and are planning train-the-trainer sessions to facilitate broader dissemination for local implementation.

Whether the participants received any incentives was not described in 74.3% (n = 78) of the studies (score 0). Whether the intervention was mandatory was mentioned in 16.2% (n = 17; score 1). Only 9.5% (n = 10) of the studies reported whether participants received some other kind of incentive (score 2), as in this example from Woodworth et al[117]: The only reward that subjects received for study participation was a DVD containing the educational video and interactive simulation, which was given to them at the conclusion of the study.

Information about the instructors/teachers was reported in detail in 18.1% (n = 19) of the studies (score 2), but 28.6% (n = 30) did not report any information about the instructors at all (score 0). In more than half of the studies (53.3%, n = 56), information was missing on the instructors’ backgrounds or whether they received any specific training for the intervention (score 1).

Information on delivery (i.e., group size and the ratio of learners to instructors) was described in detail in 22.9% (n = 24) of the studies (score 2), but not at all in 42.9% (n = 45; score 0).

The environment was not described in 84.8% (n = 89) of the studies (score 0). Only 5.7% (n = 6) of the studies reported where the intervention took place (score 2), as in this example from Sawatsky et al[97]: Lectures are given by the same faculty member twice at two separate locations, a university-based hospital and a VA [Veterans Affairs] hospital. Both locations are set up with tables in rows, with several chairs at each table.

Evaluation (2 criteria).

Planned/unplanned changes, or whether the intervention required some kind of adaptation before or during the intervention, were not reported by 82.9% (n = 87) of the studies (score 0). Furthermore, more than half of the studies (53.3%, n = 56) did not describe the participants’ level of satisfaction with, or their experience of, the intervention (score 0).

Discussion

Thorough and uniform descriptions of educational interventions are helpful for medical educators when comparing interventions and translating good practices into their own local programs. We therefore developed a checklist for thorough descriptions of educational interventions in medical education. Using this list, we performed a systematic evaluation of the 105 studies included in our review of educational interventions in postgraduate medical education. Our results indicate that descriptions of educational interventions can be improved.

The only criterion covered by all articles included in our review was the learning needs that prompted the educational intervention, with 78.1% of the studies providing detailed descriptions. Other criteria that were frequently described in detail were the content/subject (77.1%) and educational strategies (79.0%) of the intervention. This finding is in line with Phillips and colleagues’[13] review of the literature describing educational interventions for EBP. Participants, context and settings, schedule, and instructors/teachers were frequently reported, but many studies failed to provide a complete description for these criteria. On the other hand, our evaluation also indicated that other criteria on our checklist were generally not reported; more than 75% of studies did not describe theory, incentives, environment, or planned and unplanned changes. With respect to the incentives and planned and unplanned changes criteria, it could be that some interventions had no incentives or changes. Such an absence, however, is itself information that would be of interest to readers. Using a checklist like ours would aid in reporting all relevant information for a thorough description.

Our results show that there is no uniformity in descriptions of educational interventions during classroom teaching for residents. Although not specific to educational interventions, a lack of uniformity in descriptions of medical education research has been reported in the literature.[5,7,9,12] One reason why certain criteria on our list are not covered by many studies might be that authors feel they must limit the description of their intervention when their article is restricted by a word limit. As a solution, authors could describe the educational intervention in detail in a separate (design) paper,[123] or they could submit additional material.[6] In our review, articles that referred to additional material scored better than average (independent t test, mean difference = 2.8; P = .007; 95% CI = 0.8–4.8). However, this additional information should be easy to find, which was not always the case. On the basis of our review, we recommend describing the educational intervention in a table or figure. This would ensure that medical educators and researchers can find all of the needed information easily and help them more swiftly adapt the intervention for their local sites.

Other reasons why certain criteria are not covered may be that authors are either unaware of all of the criteria (because of the lack of a standard) or disagree regarding their importance. Because the criteria we included in our checklist are based on the literature,[5,6,9,10,15] and because they were considered to be important during discussions at the Dutch medical education conference,[16] it seems more likely that authors are unaware of them. We hope that this review will enhance awareness of the importance of a description that reports on all relevant criteria.
Our results showed that more than half of the studies failed to describe the theory supporting the educational strategies. In addition, the learning objectives were reported in detail by less than one-fifth of the included studies. Without these key criteria, it is unclear how one can measure whether an intervention was successful and why the study results were obtained.

As we noted above, we decided not to score one criterion, assessment, because our pilot survey showed that it is often not possible to determine whether an included assessment was developed to study the effectiveness of an intervention, was an integral part of the intervention, or both. However, to assess the effectiveness of an intervention, it is of paramount importance to know whether the intervention includes a formative or summative assessment.[9] We therefore recommend including a clear description of the assessment and its purpose in educational intervention studies.

We did not evaluate the quality of the interventions performed; we evaluated only the descriptions of the interventions. The quality of the description of an intervention does not necessarily correlate with the quality of the intervention itself. For instance, many studies lacked a description of the intervention development process; however, the interventions may still have been well thought out and set up systematically. Furthermore, a complete description of an intervention does not necessarily mean the intervention was successful.

A limitation of this systematic review is that the sample of studies was limited to reports on educational interventions for residents and classroom teaching. The type of intervention may have some influence on the level and pattern of description. Classroom teaching is a relatively regulated type of intervention that is probably more easily described than many other types of educational interventions. However, there are educational interventions that are even more regulated (e.g., e-learning) or that would require special criteria (e.g., skills training). Future research could include other settings to validate our criteria list. In addition, research could explore the level of agreement on the importance of the selected criteria so that a common standard can be developed. Our checklist provides a departure point for such a standard that, to the best of our knowledge, previously did not exist.

In conclusion, this systematic review confirms that there is room for improvement in descriptions of educational interventions during classroom teaching in postgraduate medical education. We found that key information on interventions was frequently missing and that there was a lack of uniformity in descriptions across studies. This suggests a need for a common standard for describing educational interventions. This review informs authors and readers about the essential criteria for describing educational interventions in medical education. We encourage other researchers to validate, complement, and use our criteria list, as this should lead to more complete, comparable, and replicable descriptions of interventions.

Acknowledgments:

The authors are sincerely grateful to Suzanne van Rhijn for her assistance with the literature search.
References (115 in total; first 10 shown)

1.  Evidence-based education: development of an instrument to critically appraise reports of educational interventions.

Authors:  J M Morrison; F Sullivan; E Murray; B Jolly
Journal:  Med Educ       Date:  1999-12       Impact factor: 6.251

2.  Developing a Communication Curriculum and Workshop for an Internal Medicine Residency Program.

Authors:  Sherine Salib; Elizabeth M Glowacki; Lindsay A Chilek; Michael Mackert
Journal:  South Med J       Date:  2015-06       Impact factor: 0.954

3.  Using Audience Response System technology and PRITE questions to improve psychiatric residents' medical knowledge.

Authors:  Amanda Hettinger; Joyce Spurgeon; Rif El-Mallakh; Barbara Fitzgerald
Journal:  Acad Psychiatry       Date:  2014-02-22

4.  Development and implementation of a formalized geriatric surgery curriculum for general surgery residents.

Authors:  Andrew S Barbas; John C Haney; Brandon V Henry; Mitchell T Heflin; Sandhya A Lagoo
Journal:  Gerontol Geriatr Educ       Date:  2014-03-10

5.  Transitioning from a noon conference to an academic half-day curriculum model: effect on medical knowledge acquisition and learning satisfaction.

Authors:  Duc Ha; Michael Faulx; Carlos Isada; Michael Kattan; Changhong Yu; Jeff Olender; Craig Nielsen; Andrei Brateanu
Journal:  J Grad Med Educ       Date:  2014-03

6.  Team-based learning in a pathology residency training program.

Authors:  Tamar C Brandler; Jordan Laser; Alex K Williamson; James Louie; Michael J Esposito
Journal:  Am J Clin Pathol       Date:  2014-07       Impact factor: 2.493

7.  Teaching concepts of transesophageal echocardiography via Web-based modules.

Authors:  John D Mitchell; Feroze Mahmood; Vanessa Wong; Ruma Bose; David A Nicolai; Angela Wang; Philip E Hess; Robina Matyal
Journal:  J Cardiothorac Vasc Anesth       Date:  2014-11-14       Impact factor: 2.628

8.  Using Comprehensive Video-Module Instruction as an Alternative Approach for Teaching IUD Insertion.

Authors:  Juan Antonio Garcia-Rodriguez; Tyrone Donnon
Journal:  Fam Med       Date:  2016-01       Impact factor: 1.756

9.  Residents' knowledge of quality improvement: the impact of using a group project curriculum.

Authors:  Katherine Duello; Irene Louh; Hope Greig; Nancy Dawson
Journal:  Postgrad Med J       Date:  2015-08-07       Impact factor: 2.401

10.  Educational Intervention in Primary Care Residents' Knowledge and Performance of Hepatitis B Vaccination in Patients with Diabetes Mellitus.

Authors:  Saowanee Ngamruengphong; Jennifer L Horsley-Silva; Stephanie L Hines; Surakit Pungpapong; Tushar C Patel; Andrew P Keaveny
Journal:  South Med J       Date:  2015-09       Impact factor: 0.954

Related articles (6 in total)

1.  Is checklist an effective tool for teaching research students? A survey-based study.

Authors:  Julia Wang; Gladson Vaghela; Abdelrahman M Makram; Dhir Gala; Nguyen Khoi Quan; Nguyen Tran Minh Duc; Atsuko Imoto; Kazuhiko Moji; Nguyen Tien Huy
Journal:  BMC Med Educ       Date:  2022-07-20       Impact factor: 3.263

2.  Advanced Pediatric Emergency Airway Management: A Multimodality Curriculum Addressing a Rare but Critical Procedure.

Authors:  Michael P Goldman; Ambika Bhatnagar; Joshua Nagler; Marc A Auerbach
Journal:  MedEdPORTAL       Date:  2020-09-04

3.  Developing an interprofessional transition course to improve team-based HIV care for sub-Saharan Africa.

Authors:  E Kiguli-Malwadde; J Z Budak; E Chilemba; F Semitala; D Von Zinkernagel; M Mosepele; H Conradie; J Khanyola; C Haruruvizhe; S Martin; A Kazembe; M De Villiers; M J A Reid
Journal:  BMC Med Educ       Date:  2020-12-09       Impact factor: 2.463

4.  The World Stroke Academy: A World Stroke Organization global pathway to improve knowledge in stroke care.

Authors:  Gustavo Saposnik; Laura Ceci Galanos; Rodrigo Guerrero; Florencia Casagrande; Emili Adhamidhis; Meah Ming Yang Gao; Maria Fredin Grupper; Anita Arsovska
Journal:  Int J Stroke       Date:  2022-03-10       Impact factor: 6.948

5.  Implicit Bias Recognition and Management: Tailored Instruction for Faculty.

Authors:  Natalia Rodriguez; Emily Kintzer; Julie List; Monica Lypson; Joseph H Grochowalski; Paul R Marantz; Cristina M Gonzalez
Journal:  J Natl Med Assoc       Date:  2021-06-16       Impact factor: 2.739

6.  Impact of guided self-study on learning success in undergraduate physiotherapy students in Switzerland - a feasibility study of a higher education intervention.

Authors:  Slavko Rogan; Jan Taeymans; Stefan Zuber; Evert Zinzen
Journal:  BMC Med Educ       Date:  2021-06-29       Impact factor: 2.463

