Literature DB >> 33529356

Training non-intensivist doctors to work with COVID-19 patients in intensive care units.

Morten Engberg1,2, Jan Bonde3, Sigurdur T Sigurdsson3,4, Kirsten Møller4, Leizl J Nayahangan1, Marianne Berntsen4, Camilla T Eschen5, Nicolai Haase3, Søren Bache3, Lars Konge1,2, Lene Russell1,3.   

Abstract

BACKGROUND: Due to an expected surge of COVID-19 patients in need of mechanical ventilation, the intensive care capacity was doubled at Rigshospitalet, Copenhagen, in March 2020. This resulted in an urgent need for doctors with competence in working with critically ill COVID-19 patients. A training course and a theoretical test for non-intensivist doctors were developed. The aims of this study were to gather validity evidence for the theoretical test and explore the effects of the course.
METHODS: The 1-day course comprised theoretical sessions and hands-on training in ventilator use, hemodynamic monitoring, vascular access, and use of personal protective equipment. Validity evidence was gathered for the test by comparing answers from novices and experts in intensive care. Doctors who participated in the course completed the test before (pretest), after (posttest), and again within 8 weeks following the course (retention test).
RESULTS: Fifty-four non-intensivist doctors from 15 different specialties with a wide range in clinical experience level completed the course. The test consisted of 23 questions and demonstrated a credible pass-fail standard at 16 points. Mean pretest score was 11.9 (SD 3.0), mean posttest score 20.6 (1.8), and mean retention test score 17.4 (2.2). All doctors passed the posttest.
CONCLUSION: Non-intensivist doctors, irrespective of experience level, can acquire relevant knowledge for working in the ICU through a focused 1-day evidence-based course. This knowledge was largely retained, as shown by a multiple-choice test supported by validity evidence. The test is available in the appendix and online.
© 2021 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

Entities:  

Keywords:  COVID-19; ICU; assessment; curriculum; education; pandemic response; skills preparation; test; training; viral outbreak

Mesh:

Year:  2021        PMID: 33529356      PMCID: PMC8013477          DOI: 10.1111/aas.13789

Source DB:  PubMed          Journal:  Acta Anaesthesiol Scand        ISSN: 0001-5172            Impact factor:   2.274


Many intensive care units have been overwhelmed by the COVID‐19 pandemic, sometimes requiring unconventional measures to provide the necessary medical expertise to manage the heavy patient load. This study describes how doctors without intensive care backgrounds were trained to assist intensive care specialists in caring for critically ill COVID‐19 patients. The results suggest that, after a 1‐day evidence‐based course in caring for critically ill COVID‐19 patients, doctors could acquire much of the knowledge relevant to working in teams in intensive care units.

INTRODUCTION

In March 2020, a rapidly growing number of patients infected with Severe Acute Respiratory Syndrome coronavirus 2 (SARS‐CoV‐2), together with concerning reports from Northern Italy of a surge of patients with COVID‐19 requiring mechanical ventilation, prompted most Danish intensive care units (ICUs) to increase their capacity. At Rigshospitalet (Copenhagen University Hospital), we opened an entirely new ICU with 60 beds dedicated to the treatment of COVID‐19 patients. The new unit, named COVITA, brought the total at Rigshospitalet to 120 ICU beds, thereby doubling the capacity. As a consequence of this sudden expansion, existing ICU medical staff resources were insufficient, resulting in an urgent need for doctors trained to work in COVITA. The widespread cancellation of elective surgery and outpatient appointments due to the pandemic meant that doctors from a wide range of specialties and experience levels were available. However, because the knowledge and clinical skills necessary in the ICU differ from those needed in other specialties, we undertook the task of quickly organizing a course to train non‐intensivist doctors to care for COVITA patients. The framework for the course was based on an extensive educational needs assessment study among doctors and nurses in Wuhan, who were working with COVID‐19 patients at the peak of the epidemic in early 2020. The aims of the needs assessment, which we performed in collaboration with doctors at Sun Yat‐sen University, Guangzhou, China, were to identify the theoretical and practical aspects necessary to develop a comprehensive training curriculum on COVID‐19 management, including treatment, prevention of spread, and protection of staff (Hou X, Hu W, Russell L, Kuang M, Konge L, Nayahangan LJ. Educational needs in the COVID‐19 pandemic: A Delphi study among doctors and nurses in Wuhan, China, UNPUBLISHED, submitted to BMJ Open, September 2020).
Based on the results from this collaboration, we developed a 1‐day ICU training course for non‐intensivist doctors, which comprised both theoretical and hands‐on sessions. To ensure that the course aims were met and that doctors had the required knowledge after the course, objective evaluation of the course effects with a test was crucial. Importantly, such a test should be validated to ensure that it measured the intended competence. The aims of this study were to develop and assess the validity of a theoretical test of knowledge in intensive care for COVID‐19 patients and to explore the short‐ and long‐term effects of a fast‐track course specifically developed to train experienced non‐intensivist doctors in intensive care. We hypothesized that doctors with clinical experience from other hospital specialist areas would be ready to assist in the ICU after a focused 1‐day course and that the effects of such a course, given in the context of an ongoing pandemic, would be long‐lasting.

METHODS

Study design

This study consisted of three phases: (A) Development and validation of the test; (B) development of the course; (C) testing and long‐term follow‐up.

A: Development and validation of the test

The test was developed in March 2020 by a group with expertise in intensive care medicine (LR, KM, SS) and medical education research (ME, LK). A balanced number of multiple‐choice questions (MCQ) on five topics were developed following general best principles for the construction and phrasing of MCQs. The topics were basic theory of intensive care medicine, mechanical ventilation, use of personal protective equipment, insertion and use of central venous lines, and invasive hemodynamic monitoring. Unanimous consensus on 25 questions was reached after three iterations. Each question had one best answer and three wrong answers (distractors). The correct answers were defined based on the local application of the international guidelines for the management of critically ill adults with COVID‐19 and best practice in intensive care. We investigated the validity of the MCQ test using the contemporary framework developed by Messick. The test was administered to two groups: (a) doctors currently working in an ICU who were either consultants in intensive care or had at least 2 years of postgraduate clinical ICU experience ("experts"); (b) Danish medical students in their last year of medical school ("novices"). The novices were invited through a social media forum for final‐year medical students in Denmark. Due to restrictions on unnecessary meeting activity, an online version of the MCQ test (FlexiQuiz; nextSpark Pty Ltd., Melbourne, Australia) was used. Qualifying experts at four ICUs were invited in person and completed a printed version of the test at their convenience.

B: Development of the course

We organized nine 1‐day fast‐track courses to train non‐intensivist doctors in intensive care for COVID‐19 patients. Doctors with a valid medical license were eligible to join; doctors with more clinical experience were prioritized. The curriculum was developed by senior consultants in intensive care medicine with extensive teaching experience. The overall course content (developed by LR) was based on the results of the previously mentioned needs assessment and aimed to prepare non‐intensivist doctors both theoretically and practically to treat COVID‐19 patients in the ICU (Table 1). The course program consisted of two theoretical sessions and four hands‐on simulation‐based sessions (Table 1). The material for the theoretical sessions was prepared by a group of intensive care physicians (SS, NH, SB) led by a professor in neurointensive care (KM). The hands‐on sessions were (a) Mechanical ventilation: participants operated and changed settings on a ventilator (Oxylog 3000, Dräger, Germany) connected to a manikin lung, based on different scenarios (eg, hypoxemia or high inspiratory pressures); (b) Hemodynamic monitoring: participants were introduced to invasive blood pressure monitoring and vasopressor treatment using a manikin arm with an arterial line setup and an automatic infusion pump (Perfusor Space, Braun, Germany); (c) Vascular access: participants practiced placing a central line in the jugular vein on a manikin (Gen II Ultrasound Central Line Training Model, Blue Phantom, CAE Healthcare, Canada), and catheter handling and potential complications were discussed; and (d) Personal protective equipment: training in safe donning and doffing of personal protective equipment. To limit the potential spread of infection, the number of participants was limited to six in the theoretical sessions and two in the hands‐on sessions.
TABLE 1

Content of the one‐day fast‐track course

Timing                  Duration            Type of activity      Content
One-day course          15 minutes          Pre-test
                        60 minutes          Theoretical session   Basic intensive care
                        90 minutes          Simulation            Mechanical ventilation: use and settings based on different scenarios
                        60 minutes          Theoretical session   Treating COVID-19 ICU patients
                        90 minutes          Simulation            Haemodynamic monitoring: use of arterial cannula, use and interpretation of invasive blood pressure monitoring devices, vasopressor treatment, use of automatic infusion pumps
                        90 minutes          Simulation            Vascular access: placement, use and potential complications of central venous catheters
                        30 minutes          Workshop              Donning and doffing of personal protective equipment
                        15 minutes          Post-test
6-8 weeks post-course   Maximum 23 minutes  Retention test

C: Testing and long‐term follow‐up

Course participants took the test immediately before (“pretest”) and immediately after (“posttest”) the sessions of the day. Six weeks after the course, all participants received email invitations and subsequent reminders to retake the test within 2 weeks as a follow‐up (“retention test”) (Table 1). The pre‐ and posttests were completed under supervision, in printed format, in the classroom. The follow‐up tests were completed unsupervised, at the participants' discretion, using the online version of the MCQ test.

Statistical analysis

Test validation: Item analysis was performed on the 25 multiple‐choice questions. Questions with a point‐biserial item discrimination index <0.1 (ie, very low correlation with total scores) were discarded. The remaining questions were classified into four item levels according to their difficulty index, calculated as the proportion of all examinees (ICU doctors and medical students) who answered the question correctly: Level I (best item statistics: middle range of difficulty, typically with high discrimination) 0.45‐0.75; Level II (easy) 0.76‐0.91; Level III (difficult) 0.25‐0.44; Level IV (extremely difficult or easy) <0.24 or >0.91. Level I questions are preferred, and Level IV questions would be discarded, as recommended. The mean test scores of the two groups were compared using an independent samples t‐test to check the test's discriminatory ability, and the variances of the two groups' scores were compared using Levene's test. A pass/fail standard was defined using the contrasting groups method.
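The item statistics and pass/fail computations described above can be sketched in Python. This is an illustrative sketch only, not the authors' actual analysis (which was done in SPSS); the cutoff here is computed as the intersection of two normal densities fitted to the group scores, one common implementation of the contrasting groups method, and assumes the two groups have unequal standard deviations.

```python
import math
import statistics as st

def item_analysis(responses):
    """responses: one row per examinee, one 0/1 entry per question.
    Returns (difficulty, point-biserial discrimination) per question."""
    totals = [sum(row) for row in responses]
    stats = []
    for j in range(len(responses[0])):
        item = [row[j] for row in responses]
        difficulty = sum(item) / len(item)  # proportion answering correctly
        stats.append((difficulty, _pearson(item, totals)))
    return stats

def _pearson(x, y):
    # point-biserial = Pearson correlation of a 0/1 item with total scores
    mx, my = st.fmean(x), st.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))

def contrasting_groups_cutoff(m_nov, s_nov, m_exp, s_exp):
    """Pass/fail score where two fitted normal densities intersect.
    Solves the quadratic obtained by equating the two log-densities;
    assumes s_nov != s_exp."""
    a = 1 / (2 * s_nov ** 2) - 1 / (2 * s_exp ** 2)
    b = m_exp / s_exp ** 2 - m_nov / s_nov ** 2
    c = (m_nov ** 2 / (2 * s_nov ** 2) - m_exp ** 2 / (2 * s_exp ** 2)
         + math.log(s_nov / s_exp))
    disc = math.sqrt(b ** 2 - 4 * a * c)
    r1 = (-b + disc) / (2 * a)
    r2 = (-b - disc) / (2 * a)
    # keep the root lying between the two group means
    return r1 if min(m_nov, m_exp) < r1 < max(m_nov, m_exp) else r2
```

With the group statistics reported in this article (novices: mean 9.5, SD 3.2; experts: mean 19.6, SD 1.8), the intersection falls just below 16 points, consistent with the published pass/fail standard.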

Analysis of course data

Test scores between groups were compared using independent samples t‐tests and score changes using one‐sample t‐tests. The effect of pretest scores on posttest scores was estimated in a univariate linear regression model. All analyses were performed using IBM SPSS Statistics version 25.0, and P‐values <.05 were considered significant. We used GraphPad Prism 6.00 (GraphPad Software, USA) to create the graphs.
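As a worked detail, the Results' interpretation of the regression slope ("one additional posttest point for every 6.3 additional pretest points") follows directly from the slope itself: 1/0.16 = 6.25, approximately 6.3. A minimal sketch of the univariate least‐squares slope (illustrative only; the authors' analysis used SPSS):

```python
def ols_slope(x, y):
    """Least-squares slope (beta) of y regressed on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# A slope of beta posttest points per pretest point means one extra
# posttest point per 1/beta pretest points, e.g. 1/0.16 = 6.25 (about 6.3).
```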

Ethical considerations

Participation in this study (by completion of the tests) was voluntary and independent of participation in the course. All participants were informed of the study purpose and gave their informed and written consent to participate. All test results were anonymized for data handling and analysis. The Capital Region of Denmark's ethical committee confirmed that this study was exempt from ethical review (reference number H‐20030438).

RESULTS

A: Validation of the MCQ test

For the validation of the test, 37 experienced intensivists (“experts”) were invited to participate; all completed the test. One hundred and thirty‐five final‐year medical students from the four medical schools in Denmark had completed the test when enrolment was closed. To balance the data for the statistical analysis, only the first 74 consecutive responses were included. Item analysis based on all answers (n = 111) revealed two questions with an item discrimination index <0.1, which were discarded. Difficulty indices were calculated for the remaining 23 questions; none was found too easy or too difficult (ie, Level IV) (Table 2). The final test therefore consisted of 23 questions.
TABLE 2

Development of the MCQ; Item analysis

Item level and categorisation           Difficulty index (a)   Number of questions   Example
I    Best item statistics (b)           0.45-0.75              10                    Question 1: Which of the following organ system checklists is most appropriate when assessing the intensive care patient?
II   Easy (c)                           0.76-0.91              6                     Question 2: In which order should you remove ("doff") personal protective equipment?
III  Difficult (d)                      0.25-0.44              7                     Question 4: Which vein should be the inexperienced doctor's choice for placing a central venous line?
IV   Extremely difficult or easy (e)    <0.24 or >0.91         0                     (none)

(a) Proportion of all examinees who answered the item/question correctly. For a detailed description, please refer to the Statistical analysis paragraph in the Methods section.

(b) A middle range of difficulty, typically with high discrimination.

(c) Questions which many participants answered correctly.

(d) Questions which few participants answered correctly.

(e) Questions which almost all or almost none of the participants answered correctly.

Comparison of the experts’ and novices’ scores (maximum 23 points) showed that the experts scored better than the novices (mean 19.6 (SD 1.8) versus mean 9.5 (SD 3.2); P < .001) (Figure 1), demonstrating a strong relation to experience, with a lower variance in scores among the experts (P = .003). A credible pass/fail standard was established at 16 points (Figure 1). Only two novices achieved this score (3% false positives), whereas one expert failed (3% false negatives).
FIGURE 1

Establishing a pass/fail standard using the contrasting groups' method. Comparison of the experts’ and novices’ scores in the final test (maximum 23 points) showed that the experts scored significantly better than the novices (mean 19.6 (SD 1.8) vs. mean 9.5 (SD 3.2); P < .001), demonstrating a strong relation to experience. A credible pass/fail standard was established at 16 points. Only two novices achieved this score (3% false positives), whereas one expert failed (3% false negatives). [Colour figure can be viewed at wileyonlinelibrary.com]

The final multiple‐choice test of 23 questions is provided in Appendix A and is available online at https://www.flexiquiz.com/SC/N/COVID-19MCQ.

B: Training of non‐intensivist doctors

Participants: In the recruitment process for COVITA, 90 doctors without ICU experience signed up to help. Information and requests to volunteer were primarily distributed through the heads of departments, but many contacted us directly. Fifty‐four doctors were enrolled in the fast‐track course, and all gave their consent to participate in this study. They represented 15 different specialties with a wide range in clinical experience level (Table 3). Thirty doctors (56%) were qualified specialists, and 24 (44%) were doctors in postgraduate training or research positions.
TABLE 3

Baseline data for Non‐ICU Physicians

                                           N (%)
Non-ICU physicians                         54 (100)
Male/female                                28/26 (52/48)
With postgraduate ICU experience (a)       5 (9)
Specialties
  Anaesthesia                              3 (6)
  Surgical (b)                             9 (17)
  Medical (c)                              5 (9)
  Neurology                                16 (30)
  Gynecology                               3 (6)
  Paediatrics                              7 (13)
  Laboratory and specialty functions (d)   6 (11)
  Others (e)                               5 (9)

(a) Five doctors (9%) had one year of ICU experience or less. Another five doctors indicated more than one year of experience as visiting consulting doctors in the ICU (one paediatrician, one neurosurgeon, one cardiologist and two neurologists).

(b) Surgical specialties: Ear‐nose‐throat: 2; Neurosurgery: 3; Orthopaedic surgery: 3; Vascular surgery: 2.

(c) Medical specialties: Cardiology: 1; Endocrinology: 3; Haematology: 1.

(d) Laboratory and specialty functions: Genetics: 1; Clinical physiology: 2; Neurophysiology: 3.

(e) Others: PhD students: 3; Unknown: 2.


C: Testing and long‐term follow‐up

Results of pre‐ and posttest: All 54 doctors completed the pre‐ and posttest. The mean pretest score was 11.9 (SD 3.0), which was higher than the mean score of the final‐year medical students (P < .001; 95% CI 1.3‐3.5) but below the previously established pass–fail score of 16 (P < .001); 48/54 (88%) scored below 16 points. On the posttest, all doctors (54/54) scored 16 points or above and thereby passed the test (mean 20.6, SD 1.8). This mean score was marginally higher than that of the ICU specialists (95% CI 0.2‐1.8; P = .012). Posttest scores showed a weak positive correlation with pretest scores, corresponding to an average of one additional point in the posttest for every 6.3 additional points in the pretest (beta 0.16, 95% CI 0.00‐0.31, P = .047) (Figure 3).
FIGURE 3

Individual results of pre‐, post‐ and retention‐tests. Post‐test scores were positively correlated with pre‐test scores, but the effect size was small: beta = 0.16 (P < .05), corresponding to an average of 1 additional point in the post‐test for every 6.3 additional points in the pre‐test.

Retention of knowledge

Forty‐three of the 54 doctors (80%) completed the same test after 6‐8 weeks (retention test) (Figures 2 and 3), of whom 34/43 (79%) scored at least 16 points, thereby meeting the pass–fail standard (mean 17.4, SD 2.2). Overall, the mean decrease in score was 3.1 points (SD 1.9), corresponding to a score decline of 15% (95% CI 12%‐18%). At follow‐up, 22/43 (51%) of the doctors had completed at least one shift at COVITA, but their retention test results did not differ from those of the doctors who had not (95% CI −1.0 to 1.2; P = .90).
FIGURE 2

Test results for all participants. Each dot represents a test result by one participant. The solid black lines are the mean and standard deviation for each group. Validity evidence for the multiple‐choice test (MCQ) was assessed by distribution of the test to intensive care unit (ICU) specialists and final‐year medical students. The test was then distributed immediately before (pre‐test) and immediately after (post‐test) the course and again six to eight weeks after the course (retention‐test).


DISCUSSION

ICU treatment of COVID‐19 patients requires specific knowledge and skills normally acquired over many years. The framework for this study was a 1‐day course designed specifically to introduce non‐intensivist doctors to the care of critically ill COVID‐19 patients, thereby allowing them to assist intensivists in COVITA. The results of this study suggest that non‐intensivist doctors, irrespective of experience level, may not have the required knowledge a priori. However, a focused 1‐day course increased their basic knowledge above the desired level, as determined in the validity study of the test. Our findings indicate that focused educational efforts in crisis situations should be prioritized; at our institution, time and resources were well spent on nine 1‐day fast‐track courses for 54 doctors.

As a concrete, directly implementable outcome, this study developed and gathered validity evidence for an MCQ test on intensive care of patients with COVID‐19. Our data do not support the use of pretest scores as a basis for prioritizing which doctors to recruit, since posttest scores were only marginally higher for participants with higher pretest scores.

We chose the MCQ format for assessment because it is easily scalable and requires little time and few resources. The test can be completed in approximately 10 minutes, and by using an online version, the test score is readily available. It is therefore easily repeatable and could be used for repetition and re‐certification purposes at minimal cost. A weakness of the MCQ format is the rigid dichotomous scoring, which makes the test very easy to score but does not allow for nuanced or elaborate answers. Most importantly, we should keep in mind that the MCQ format tests only specific knowledge and not clinical experience, intuition, leadership, or procedural skills; important qualifications which are likely to correlate with seniority.
We tested knowledge retention 6‐8 weeks after the course, and a high proportion of the doctors (80%) participated in the retention test. Traditionally, decline in knowledge is described by the Ebbinghaus retention curve, with the steepest decline in the short period immediately after learning. Given that the decline would be expected to have flattened before 6 weeks, our observed score decline of 15% is low. The use of tests on the course day could contribute to this, as could the doctors’ anticipation of clinical duty, which may have motivated self‐directed repetition, for example using the guidelines distributed on the course day. Furthermore, by studying retention at 6‐8 weeks, we have undoubtedly enhanced its duration further, since the repeated test itself functions as a formalized spaced repetition of the curriculum. In fact, spaced repetition tests may be used in a structured manner to maintain knowledge, thereby ensuring preparedness also for the next epidemic wave.

When discussing education in relation to a viral outbreak, it is worth remembering that there have been several major viral epidemics during the last 20 years, including severe acute respiratory syndrome (SARS) in 2003, swine flu (H1N1 influenza virus infection) in 2009‐2010, Middle East respiratory syndrome (MERS) in 2012, and the Ebola virus outbreak in 2014‐2016. To gather information about previous educational efforts, we recently performed a systematic review on training during viral epidemics (Nayahangan LJ, Konge L, Russell L, Andersen SAW. Training and education of healthcare workers during viral epidemics: A systematic review, UNPUBLISHED, submitted to BMJ Open, August 2020). Despite the abundance of publications on viral epidemics, we could identify only 46 studies on educational interventions, with a wide range of content, strategies, and evaluation.
Predictably, the studies consistently reported positive effects of any training intervention, typically evaluated by learner satisfaction and self‐assessed learning outcome. However, since physicians’ self‐evaluated competence does not correlate with their actual skills, objective outcome assessment is critical when evaluating educational efforts. Use of assessment motivates learners and supports long‐term knowledge retention. This study demonstrates that it is possible to develop a curriculum and a test supported by validity evidence despite a time‐limited situation. In general, and most certainly during a pandemic, requirements for training should be based on educational needs and local conditions. Simulation‐based training, which has well‐documented positive effects, is an efficient way of providing training while protecting trainees and patients from unnecessary harm. Stress‐free training in the use of personal protective equipment is especially beneficial and was suggested by the majority of respondents in our needs assessment from Wuhan. However, simulation‐based training is resource demanding, and if local training programs are not in place, e‐learning programs could be a cost‐effective option, although e‐learning is most efficient when combined with other educational modalities. The WHO Health Emergencies Programme has launched free online training resources for the care of COVID‐19 patients. More recently, the European Society of Intensive Care Medicine (ESICM) has introduced its COVID‐19 Skills Preparation Course (C19 Space), funded by the European Union; this program consists of online learning and local on‐site training, though recruitment of local trainers in the different European countries is still ongoing. When planning a curriculum, it is important to consider local factors so that the training matches the demands that participants will encounter in real‐life situations.
In COVITA, the work rotation was based on teams of doctors with different specialist backgrounds, each team consisting of an ICU specialist, an anesthetist, and a doctor with no previous ICU skills (ie, the doctors who participated in this course). This course was therefore designed specifically for those non‐ICU doctors, to complement the real‐life situations handled by the team. In Denmark, as in other Nordic countries, ICU doctors typically have a background as anesthesiologists. One significant, unanticipated benefit worth emphasizing is that having highly qualified doctors from other specialties working in the ICU created a collaborative and mutually educational environment.

Finally, the COVID‐19 pandemic has highlighted that, even in well‐developed health‐care systems like ours, education programs for new infection threats are often not in place. Previous outbreaks also required urgent preparations, including management of critically ill patients and correct use of personal protective equipment; however, as those epidemics thankfully faded out, focus went elsewhere. As a result, when the COVID‐19 pandemic struck, our hospital, like many others, had to develop training programs from scratch. The lesson learned is that education and training for crises should not be undertaken only during an ongoing viral epidemic but also during “peacetime”, in order to be well prepared for the next outbreak. The second wave of infections reinforces the need to reflect on the key lessons from the initial wave and to ensure that relevant training curricula are in place for future infectious threats.

Limitations

Importantly, this study did not measure actual clinical performance. Although 54 doctors completed the course and tests, the sample size is insufficient to explore differences between specialties. There is a risk of recruitment bias, since the novice group (part A) and the course participants volunteered in response to advertisements, whereas the expert group (part A) was invited personally. Most course participants were doctors at Rigshospitalet, Copenhagen, and the external validity of these findings has not been explored. The novice group consisted of final‐year medical students, not qualified doctors; they had, however, completed all mandatory courses in intensive care medicine. Their recruitment was done through social media groups, making it impossible to check the accuracy of the information provided; however, manual validation of the dataset did not reveal obvious “false” entries. The test was not administered in exactly the same way for all groups, owing to clinical duties and the risk of infection: the novice group tests (part A) and the retention tests (part C) were administered using the online version, whereas the experts’ tests (part A) and the course participants’ pre‐ and posttests were completed in printed format. To minimize the risk of participants consulting references when taking the online test, an automatic timer of 60 seconds was set for each question. Similarly, the supervised printed tests were all completed in less than 15 minutes. In conclusion, we have no reason to suspect that the data are systematically biased by “cheating.”

CONCLUSION

In this study, we developed a focused 1‐day evidence‐based course for non‐intensivist doctors in caring for critically ill COVID‐19 patients. Using a newly developed test supported by validity evidence, we found that doctors acquired relevant knowledge to work in the ICU and that knowledge was largely retained.

CONFLICT OF INTEREST

None.
