Literature DB >> 35489796

COVID-19 survey burden for health care workers: literature review and audit.

S N Gnanapragasam1, A Hodson2, L E Smith3, N Greenberg3, G J Rubin3, S Wessely3.   

Abstract

OBJECTIVES: Concerns have been raised about the quantity and quality of research conducted during the COVID-19 pandemic, particularly research related to the mental health and wellbeing of health care workers (HCWs). We aimed to understand the volume, source, methodological rigour and degree of overlap of COVID-19 studies conducted among HCWs in the United Kingdom (UK).
STUDY DESIGN: Mixed-methods approach: literature review and audit.
METHODS: First, a literature review of published research studies and second, an audit of studies HCWs have been invited to complete. For the literature review, we searched Medline, PsycINFO and Nexis, webpages of three medical organisations (Royal Society of Medicine, Royal College of Nursing and British Medical Association), and the YouGov website. For the audit, a non-random purposive sample of six HCWs from different London NHS Trusts reviewed email, WhatsApp and SMS messages they received for study invitations.
RESULTS: The literature review identified 27 studies; the audit identified 70 study invitations. Studies identified by the literature review were largely of poor methodological rigour: only eight studies (30%) provided a response rate, one study (4%) reported having ethical approval, and one study (4%) reported funding details. There was substantial overlap in the topics measured. In the audit, volunteers received a median of 12 invitations. The largest number of study invitations was for national surveys (n = 23), followed by local surveys (n = 16) and research surveys (n = 8).
CONCLUSION: HCWs have been asked to complete numerous surveys that frequently have methodological shortcomings and overlapping aims. Many studies do not follow good scientific practice and generate questionable, non-generalisable results.
Copyright © 2021 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

Keywords:  COVID-19; Health care workers; Research burden; Research quality; Survey fatigue

Year:  2021        PMID: 35489796      PMCID: PMC8148427          DOI: 10.1016/j.puhe.2021.05.006

Source DB:  PubMed          Journal:  Public Health        ISSN: 0033-3506            Impact factor:   4.984


Introduction

Since the start of the COVID-19 pandemic, more than 352,956 related studies have been published in the world’s medical and scientific literature. During the early period, from late January to mid-April 2020, the median estimated time ‘from receipt to acceptance’ was just six days (p.666). Many have praised the rapid communication of early findings, citing the urgency with which decisions needed to be made. This is particularly notable in areas where a strong evidence base was lacking and decisions were based on expert-considered best practice. Others have drawn attention to the poor quality of much of this research, which in normal times would be more rigorously moderated by ethics boards, external funders and non-expedited peer review. This is especially pertinent since approximately a quarter of all COVID-19 research has been presented via preprint servers that bypass traditional peer review. As such, clinicians and decision-makers are potentially reliant on ‘shaky data’ and ‘bad science’.4, 5, 6 Indeed, concerns about the generalisability of research conducted among health care workers (HCWs) were raised in the early stages of the pandemic. Further, the speed at which research has been conducted, and whether it was peer reviewed at all, means that corners may have been cut and errors may have gone unnoticed. Not surprisingly, there have already been highly publicised cases of papers having to be withdrawn because of major errors. As such, it has been suggested that funders or ethics committees could act as gatekeepers to prevent duplicative or poor research. In addition to the vast number of studies published, there are also many other studies, often based on survey data, that have not yet made it into the scientific literature and perhaps never will. There is a danger that sought-after populations may be overwhelmed with such requests. One population of particular interest is HCWs.
While it is important to understand the impact of COVID-19 on this occupational group, multiple participation requests and the notion of being ‘overburdened’ may lead to research fatigue among HCWs. Despite these concerns, the nature of repeated research requests to individual members of staff has yet to be quantified. In this paper, we investigated: (1) the volume and source of research studies conducted with HCWs in the United Kingdom (UK); (2) the methodological rigour of these studies (whether they reported ethical approval, funding information and response rates); and (3) the degree to which research studies or surveys overlapped in their aims.

Methods

The study was conducted in two parts: first, a literature review of published research studies conducted with HCWs since the start of the COVID-19 outbreak in the UK; and second, an audit to assess the volume and type of studies HCWs have been invited to complete since the outbreak. Studies identified in both stages were categorised using the following definitions. Local evaluation or audit: employer surveys and NHS Trust-level service evaluations and audits. National evaluation or audit: surveys run by professional bodies, think tanks or media outlets, and national service evaluations and audits. Research: studies and projects that attempted to derive new knowledge to answer a research question; this included surveys that self-identified as research projects, even where the methodological quality was unclear. Unspecified: insufficient information to assign the study to any of the categories above. For the literature review, we searched for studies conducted with HCWs in the UK – UK-based workers in the health or social care sector (e.g. doctors, nurses, allied health professionals, care home workers) working in either the National Health Service (NHS) or privately – including currently non-practising members of the Royal Colleges. First, we searched Medline and PsycINFO using the terms ‘(Coronavirus or COVID-19) and (doctor∗ or nurse∗ or surgeon or ICU work∗ or ITU work∗ or health personnel or health care worker or clinical care∗ or care home work∗)’. All searches were limited to 2020 and were conducted between 11/06/20 and 28/07/20. Search terms were tailored for each database. Second, we searched the websites of three medical organisations – the Royal College of Nursing (RCN), the British Medical Association (BMA) and the Royal Society of Medicine (RSM) – using ‘Coronavirus’, ‘COVID-19’ and ‘COVID’ as search terms. The first 20 pages of results were searched.
To identify studies that may have been run through individual market research companies, we also ran an additional search of the YouGov website, using the search terms ‘coronavirus and health care worker’. To identify studies reported in the mass media, we searched Nexis® using terms relating to COVID-19 and HCWs, plus additional terms specific to surveys and/or polls. The final search term was ‘(coronavirus or COVID-19) and (doctor∗ or nurse∗ or surgeon or ICU work∗ or ITU work∗ or health personnel or health care worker or clinical care∗ or care home work∗) and (survey or poll∗)’. Search output was limited to articles written in English and published in the UK since January 2020. Any survey mentioned in a newspaper article identified through Nexis® was then searched for through Google, using the terms used in the article; the first 20 pages of results were searched. We also used Google to search for additional mentions of studies. Seven searches were conducted in total, using the phrase ‘coronavirus study with’, followed by doctor, nurse, surgeon, ICU/ITU workers, health personnel, health care worker and care home worker; the first 20 pages of results were searched. Studies were eligible for inclusion if they presented novel data gathered since January 2020 in the UK among HCWs. We extracted data relating to study design, sample size, topic investigated and study type (e.g. local, national, research and unspecified). With respect to the quality appraisal of included studies, we recorded whether they (i) reported ethical approval, (ii) reported funding information and (iii) reported a response rate. For the audit, volunteers (n = 6) were recruited using word-of-mouth invitations. This was a non-random purposive sample of junior doctors (core trainees) employed within health care settings in London, UK.
Recruited volunteers were from six different NHS Trusts and four specialities (anaesthesia/intensive care, n = 1; psychiatry, n = 2; acute/general medicine, n = 2; general practice, n = 1). Volunteers were asked to identify studies, surveys and questionnaires that they were invited to participate in between 01/01/2020 and 10/07/2020 that related to COVID-19 and their role as HCWs (e.g. wellbeing, mental health, change in service provision, redeployment or attitudes/preferences/behaviours). Volunteers reviewed their email (work and personal), SMS and WhatsApp messages. They were asked to use the search function (where available) across these platforms with the terms ‘survey’, ‘questionnaire’, ‘research’, ‘wellbeing’, ‘docs.google’, ‘redcap’, ‘qualtrics’ and ‘tinyurl’. Volunteers were also asked to recall any relevant studies, surveys or questionnaires that they had been asked to complete via word-of-mouth, social media and other clinical/non-clinical interactions. On identifying surveys, participants were asked to forward relevant links or details to the research team. From each study forwarded, we extracted the topics investigated, the study type (e.g. local, national, research and unspecified), whether consent was sought, the presence of ethical approval, the nature of the sampling strategy and the dissemination of results (e.g. any preliminary results published and, if so, where). Where information was not clear or available, the study leads were emailed. Research surveys were further sub-classified to reflect the nature of the sampling strategy. Defined sample frame: studies with a defined and known sample frame, which were therefore able to report response rates and address the related issues of bias; for example, a survey of members of a group/institution in which all members of the group, or a probability sample of the group, were invited to participate. No explicit sample frame: studies without an explicit sample frame, for which a response rate could not be determined and the related risk of bias could not be considered; for example, an online survey open to all HCWs that recruited participants via social media and/or word-of-mouth. No ethical approval under the NHS Health Research Authority was required for this study, given that no personal data were gathered from participants.

Results

The literature review identified 3115 citations. After screening, 27 citations were included (see Fig. 1 for further details). Included studies were conducted across a range of HCWs, including nurses, doctors and allied health care professionals from a variety of medical disciplines (see Table 1 for sample details).
Fig. 1

Search strategy for literature review. ∗This number includes two British Medical Association tracker surveys: one involving five separate waves of surveys and another with two separate waves, split for GPs and hospital doctors, respectively.

Table 1

Details of sample as specified in studies.

Sample | Number of studies included in (%)∗
Specific medical specialities (e.g. psychiatry, surgery and anaesthesia) | 10 (37)
All doctors (including GPs, hospital doctors and junior doctors) | 5 (19)
Nurses and/or midwives | 5 (19)
All health care professionals | 4 (15)
Black, Asian and Minority Ethnic (BAME) staff across the NHS | 2 (7)
Social care workers | 1 (4)
Less than a third of studies (n = 8, 30%) provided a response rate.11, 12, 13, 14, 15, 16, 17, 18 One study reported ethical approval, and one other reported that it did not require it; all others (n = 25 studies, 93%) gave no information concerning ethical approval. One study (4%) reported details of funding. Seventeen studies (63%) did not report a response rate, details of ethical approval or exemption, or funding information (see Table 2).
Table 2

Quality appraisal and topic(s) measured.

Citation or first author | Response rate reported | Ethical approval or exemption reported | Funding information reported | Type of study | Topic(s) investigated
Antunes [19] | NR | Yes | Yes | Research | Prescribing
British Medical Association [20] | NR | NR | NR | National | PPE, Personal experience of COVID, Changes to work
British Medical Association [21] | NR | NR | NR | National | Physical or mental health impact, Personal experience of COVID, Changes to work, Official virus response, PPE, Experience of harassment or discrimination in the workplace, Non-COVID care, NHS or Trust preparation, Home environment, Future response to COVID
British Medical Association [22] | NR | NR | NR | National | NHS or Trust preparation, PPE, Physical or mental health impact
British Medical Association [11] | Yes | NR | NR | National | COVID-specific work, NHS or Trust preparation
Channel 4 [23] | NR | NR | NR | National | NHS or Trust preparation, PPE, Changes to work
Channel 4 [24] | NR | NR | NR | National | NHS or Trust preparation, PPE, Changes to work
Iqbal [12] | Yes | Yes | NR | Research | NHS or Trust preparation, PPE, COVID-specific work, Official virus response, Personal experience of COVID
ITV [25] | NR | NR | NR | National | Changes to work, PPE, BAME-specific issues
Rimmer [26] | NR | NR | NR | Research | PPE, NHS or Trust preparation, Changes to work
Royal College of Anaesthetists [27] | NR | NR | NR | National | Personal experience of COVID, Changes to work, Physical or mental health impact
Royal College of Nursing [28] | NR | NR | NR | National | PPE, NHS or Trust preparation
Royal College of Nursing [29] | NR | NR | NR | National | PPE, NHS or Trust preparation
Royal College of Nursing [30] | NR | NR | NR | National | Personal experience of COVID
Royal College of Nursing Research Society [31] | NR | NR | NR | National | Personal experience of COVID
Royal College of Nursing Research Society [32] | NR | NR | NR | National | Personal experience of COVID, Home environment
Royal College of Psychiatrists [13] | Yes | NR | NR | National | PPE, Personal experience of COVID, Changes to work
Royal College of Psychiatrists [14] | Yes | NR | NR | National | Changes to work
Royal College of Psychiatrists [15] | Yes | NR | NR | National | PPE, Personal experience of COVID, Changes to work
Royal College of Psychiatrists [16] | Yes | NR | NR | National | BAME-specific issues, Physical or mental health impact
Royal College of Psychiatrists [17] | Yes | NR | NR | National | PPE, Personal experience of COVID, Future response to COVID, Changes to work
Royal College of Psychiatrists [18] | Yes | NR | NR | National | PPE, Personal experience of COVID, Future response to COVID, Changes to work
Royal College of Surgeons of England [33] | NR | NR | NR | National | PPE, Personal experience of COVID
Royal College of Surgeons of England [34] | NR | NR | NR | National | PPE, Personal experience of COVID
Royal College of Surgeons of England [35] | NR | NR | NR | National | Changes to work, PPE, Non-COVID care, NHS or Trust preparation
YouGov [36] | NR | NR | NR | National | COVID-specific work, Non-COVID care
YouGov [37] | NR | NR | NR | National | COVID-specific work, Non-COVID care

BAME, Black, Asian and Minority Ethnic; NR, not reported; PPE, personal protective equipment.

All 27 of the included studies covered the issue of personal protective equipment (PPE), considering factors such as access to and availability of appropriate PPE (see Table 3). Most surveys (n = 21, 78%) assessed the personal experience of COVID-19 among HCWs, including their experience of symptoms, access to COVID-19 testing, self-isolation and shielding. Twenty studies (74%) explored the preparation of the NHS or specific Trusts, including COVID-19 guidance, COVID-specific information provided and safe work environments. Nineteen studies (70%) assessed changes to work patterns, including redeployment and workload. Studies had substantial overlap in the topics measured.
Table 3

Topics investigated by studies.

Topic | Number of studies included in (%)
PPE (e.g. availability, personal purchase) | 27 (100)
Personal experience of COVID (e.g. self-isolation, testing, symptoms, shielding) | 21 (78)
NHS or Trust preparation (e.g. COVID guidance, COVID-specific information, safe work environment) | 20 (74)
Changes to work (e.g. redeployment, workload, schedule, finances, move to frontline) | 19 (70)
Physical or mental health impact (e.g. work-related burnout, anxiety, exhaustion) | 12 (44)
Non-COVID care | 10 (37)
Official virus response | 8 (30)
Home environment | 6 (22)
Future response to COVID (e.g. second wave) | 5 (19)
COVID-specific work (e.g. care, training) | 3 (11)
Experience of harassment or discrimination in the workplace | 3 (11)
BAME-specific issues (e.g. disproportionate impact of COVID) | 2 (7)
Prescribing | 1 (4)

BAME, Black, Asian and Minority Ethnic; PPE, personal protective equipment.

In the audit, volunteers reported receiving 70 survey invitations in total, relating to 50 unique studies, with each volunteer receiving between four and 18 invitations (median 12). The largest number of study invitations was for national surveys (n = 23 unique invitations), followed by local (n = 16 unique invitations) and research surveys (n = 8 unique invitations; see Table 4). Three surveys could not be classified due to insufficient information.
Table 4

Overall survey characteristics and domains.

Type of study | Cumulative – n (% of total) | Unique – n (% of total) | Range of survey invites received by each volunteer
Research | 11 (16) | 8 (16) | 0–4
Local (a) | 16 (23) | 16 (32) | 1–6
National (b) | 40 (57) | 23 (46) | 3–10
Unspecified (c) | 3 (4) | 3 (6) | 0–2
All surveys | 70 (100) | 50 (100) | 4–18

n = number of survey invitations.
(a) Includes employer surveys, NHS Trust-level service evaluations and audits.
(b) Includes professional bodies, think tanks, media outlets, national service evaluations and national audits.
(c) Unclear if service evaluation or research; limited or no identifiable participant information, consent process or contact details for the team conducting the survey.

Of the research studies, only one had a defined sample frame (12.5%; see Table 5). Seventy-five per cent of the research studies had ethical approval, and 87.5% sought research consent.
Table 5

Characteristics of research studies.

Sampling strategy | Unique studies n (% of total) | Consent (a) n (% of category) | Ethical approval n (% of category)
Defined sample frame | 1 (13) | 1 (100) | 1 (100)
No explicit sample frame | 6 (75) | 5 (83) | 5 (83)
Unknown sample frame | 1 (13) | 1 (100) | 0 (0)
All surveys | 8 (100) | 7 (88) | 6 (75)

(a) Participant consent was sought directly for the research study, for example, through ticking a consent box in an online survey.

As of 31/07/2020, approximately a third of all studies identified in the audit (n = 18, 36%) had published their results. Of the research studies, only one had shared its findings, as an online comment article in a national newspaper. None of the others (n = 7, 88%) had published their results to date (whether on their own website, in the media, as a pre-print or in an academic journal). The majority of national surveys had published results on their own websites (n = 17, 74% of all unique national surveys), including all of those conducted by a Royal College. A number of national studies (n = 9, 36% of all unique national surveys) shared their results in multiple places, for example, on their own website and in the media. None of the local surveys had publicly disseminated findings on their own website, in the media or in academic portals, and it was not possible to reliably establish whether survey results had been disseminated locally in other ways (e.g. shared in departmental or senior management meetings, or on the NHS Trust intranet).

Discussion

Concerns have been raised about the quantity and quality of research being conducted during the COVID-19 pandemic. Our study found that a large number of studies have been conducted with HCWs since January 2020 relating to the COVID-19 outbreak. These had overlapping aims – for example, all surveys identified in the literature review covered the issue of PPE – and were generally of low methodological rigour. The methodological rigour of studies was assessed using the proxies of whether they reported ethical approval, funding information and a response rate. Our findings indicate that the majority of studies conducted on UK HCWs during the COVID-19 pandemic bypassed ethics committees and funding gatekeepers. Indeed, most studies identified in both the literature review and the audit did not report details of funding or ethical approval. Furthermore, very few studies have been published in the peer-reviewed literature. Peer review is an important step in scientific publication, ensuring that conclusions drawn from results are reasonable and that findings have not been overstated. Most studies included in the literature review failed to report a response rate or were based on a quota sample (n = 21, 78%). Results from these studies therefore cannot be generalised, as it is unclear whether survey participants are representative of the wider HCW population. For example, HCWs who have experienced shortages of PPE may be more inclined to take part in a study investigating PPE, and those from lower income or ethnic minority groups may be less likely to take part. The omission of peer review in much of the dissemination of results means that some reporting has been subject to dramatisation and may not have given a full, accurate depiction of the true picture, e.g. ‘Coronavirus is whipping up a mental health storm for NHS workers’.
While many studies conducted by local and national groups had a defined sampling frame (i.e. their members) and would therefore be able to calculate a response rate, it is unclear whether assessments of the risk of bias were undertaken during analysis and contextualised in the dissemination of results. The large number of study invitations received by HCWs may contribute to ‘survey fatigue’, and it seems that willing participants have spent a great deal of time completing studies of varied quality. This problem is particularly pertinent in busy, restricted samples, such as HCWs. Even in the pre-pandemic period, recruiting HCWs into methodologically robust studies was challenging; this has been exacerbated during the pandemic. Our volunteers reported a median of 12 invitations in a six-month period. While we cannot comment on the impact of receiving a large number of survey requests on response or completion rates, receiving many repeated invitations may leave participants feeling exploited. For individual participants, it is difficult to distinguish studies of better quality, and while some are generous with their time, others will reasonably complete the first invitation and then no more, regardless of source or quality. It is therefore important to ensure that HCWs receive requests for methodologically robust studies that will further understanding, especially of previously under-researched topics. We found that many medical associations have run multiple surveys of their members, likely reflecting the desire to monitor changes during the evolving pandemic. The potential burden of receiving multiple survey invitations is compounded by the fact that many HCWs are members of both a medical Royal College and the British Medical Association and will therefore receive invitations through both.
Topics investigated by the studies identified in the literature review and audit overlapped substantially in their aims and study populations, indicating limited cross-organisation collaboration or partnership. In future, greater thought should be given to some degree of co-ordination, as everyone, including the sponsoring organisations, would benefit from fewer surveys with higher response rates. Smaller, more localised studies or service evaluations, for example, investigating the needs of HCWs within specific departments and trusts, may also have merit. Further thought is needed on how best to co-ordinate efforts across local, national and research institutions, given that priorities are likely to diverge. Based on our findings, a gate-keeping strategy aimed at ethics committees or funders would fail to screen most of the surveys we identified. As such, the open science movement may offer guiding principles for remedial efforts, for example, registration of study protocols in advance. NHS Trusts themselves may benefit from a registry of studies and surveys intended to be sent to their staff via work-based email mailing lists. This would provide a better opportunity to monitor the volume, responses and overall quality of study invitations that staff receive through their workplace, and may mitigate survey fatigue. There are several methodological limitations to consider in our study. With regard to the literature review, first, our approach to searching grey and internet literature means that some studies are likely to have been missed. Indeed, the individual Google searches, conducted to identify standalone studies mentioned in newspaper articles, highlighted surveys conducted through many of the Royal Colleges, and additional such surveys have been identified since the completion of this search.
Second, we applied different search terms in each separate search of the literature review, resulting in potential inconsistencies in outcomes. Our rationale was to choose search terms appropriate for each distinct platform, and because some platforms returned numbers of citations that were infeasible to screen by hand (e.g. a Nexis search gave 78,000+ citations). In the audit, first, volunteers were recruited as a non-probability purposive sample. Given this sampling strategy and the number of volunteers (n = 6), the audit is not generalisable to the experience of all doctors in London, nor the UK more widely; there are likely to be variations by training grade, speciality, proximity to academic centres and geographical location. Second, the audit only explored the experience of doctors, and the results may therefore not capture the experience of other clinical staff or the wider health workforce. However, the nature of these limitations means that our estimates almost certainly under- rather than over-estimate the size of the problem we set out to highlight. In conclusion, HCWs during the pandemic have been asked to complete numerous surveys that often have methodological shortcomings. As well as potentially generating questionable, non-generalisable results, this can also result in survey fatigue among HCWs. While these surveys will undoubtedly have been formulated with the best of intentions, and indeed reflect the commendable speed at which clinical and academic communities sought to address the challenges posed by this novel virus, some of the most methodologically questionable studies have garnered wide prominence in the national media. As such, they may well have influenced policymakers and decisions, consciously or unconsciously, as well as helping to create a somewhat inaccurate public narrative.
The high degree of overlap in the topics investigated suggests a pressing need for co-ordination of studies so as to reduce the research burden on this already busy population. Ensuring that studies follow good scientific practice – reporting methodological details and funding information, seeking and reporting details of ethical approval or exemption, and undergoing peer review – before being widely disseminated may help to reduce overlap in study topics and ensure that the research that is conducted is methodologically sound. Our results suggest that there is a danger we will fail to generate sufficiently meaningful data to learn the lessons needed to protect the wellbeing of staff during this pandemic, and during the next crisis, whenever that might be. They also underline the need for more in-depth analyses of survey burden and its possible impact on response rates.

Author statements

Ethical approval

No ethical approval under the NHS Health Research Authority was required for this study, given that no personal data was gathered from participants.

Funding

None declared.

Competing interests

All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: NG runs a psychological health consultancy that provides resilience training for a wide range of organisations, including a few NHS teams. This study was funded by the National Institute for Health Research Health Protection Research Unit (NIHR HPRU) in Emergency Preparedness and Response, a partnership between Public Health England, King’s College London and the University of East Anglia. The views expressed are those of the authors and not necessarily those of the NIHR, Public Health England or the Department of Health and Social Care.

Transparency statement

The lead authors∗ affirm that this manuscript is an honest, accurate, and transparent account of the study being reported and that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

Contributorship statements

The study was conceptualised by SW, NG and JR. The literature review screening and data extraction were done by AH and checked by LS. The audit data extraction was done by SG and checked by AH. The original draft was led by AH and SG. The manuscript was reviewed and edited by AH, SG, LS, NG, JR, SW prior to submission. AH and SG are the guarantors. Further, we are grateful to the six HCWs who volunteered to review their email/messages for HCW study invitations.
References (12 in total)

1.  Pandemic publishing poses a new COVID-19 challenge.

Authors:  Adam Palayew; Ole Norgaard; Kelly Safreed-Harmon; Tue Helms Andersen; Lauge Neimann Rasmussen; Jeffrey V Lazarus
Journal:  Nat Hum Behav       Date:  2020-07

2.  Will the pandemic permanently alter scientific publishing?

Authors:  Ewen Callaway
Journal:  Nature       Date:  2020-06       Impact factor: 49.962

3.  Research on covid-19 is suffering "imperfect incentives at every stage".

Authors:  Stephen Armstrong
Journal:  BMJ       Date:  2020-05-28

4.  Waste in covid-19 research.

Authors:  Paul P Glasziou; Sharon Sanders; Tammy Hoffmann
Journal:  BMJ       Date:  2020-05-12

5.  Research fatigue in COVID-19 pandemic and post-disaster research: Causes, consequences and recommendations.

Authors:  Sonny S Patel; Rebecca K Webster; Neil Greenberg; Dale Weston; Samantha K Brooks
Journal:  Disaster Prev Manag       Date:  2020-06-22

6.  Says who? The significance of sampling in mental health surveys during COVID-19.

Authors:  Matthias Pierce; Sally McManus; Curtis Jessop; Ann John; Matthew Hotopf; Tamsin Ford; Stephani Hatch; Simon Wessely; Kathryn M Abel
Journal:  Lancet Psychiatry       Date:  2020-06-02       Impact factor: 27.083

7.  Retraction-Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis.

Authors:  Mandeep R Mehra; Frank Ruschitzka; Amit N Patel
Journal:  Lancet       Date:  2020-06-05       Impact factor: 79.321

8.  Anticipatory prescribing in community end-of-life care in the UK and Ireland during the COVID-19 pandemic: online survey.

Authors:  Bárbara Antunes; Ben Bowers; Isaac Winterburn; Michael P Kelly; Robert Brodrick; Kristian Pollock; Megha Majumder; Anna Spathis; Iain Lawrie; Rob George; Richella Ryan; Stephen Barclay
Journal:  BMJ Support Palliat Care       Date:  2020-06-16       Impact factor: 3.568

9.  Provision of obstetrics and gynaecology services during the COVID-19 pandemic: a survey of junior doctors in the UK National Health Service.

Authors:  M P Rimmer; B H Al Wattar
Journal:  BJOG       Date:  2020-05-27       Impact factor: 7.331

10.  Mixed signals about the mental health of the NHS workforce.

Authors:  Danielle Lamb; Neil Greenberg; Sharon A M Stevelink; Simon Wessely
Journal:  Lancet Psychiatry       Date:  2020-09-03       Impact factor: 77.056
