Carol Bennett, Sara Khangura, Jamie C Brehaut, Ian D Graham, David Moher, Beth K Potter, Jeremy M Grimshaw.
Abstract
BACKGROUND: Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research.
METHODS AND …
Year: 2011 PMID: 21829330 PMCID: PMC3149080 DOI: 10.1371/journal.pmed.1001069
Source DB: PubMed Journal: PLoS Med ISSN: 1549-1277 Impact factor: 11.069
Instructions to authors—Examples of relevant text per category.
| Category | Examples from Instructions to Authors |
| --- | --- |
| | “Regular articles include but are not limited to clinical trials, interventional studies, cohort studies, case-control studies, epidemiologic assessments, and surveys.” |
| | “If appropriate, include how many participants were assessed out of those enrolled, e.g. what was the response rate for a survey.” |
| | “All randomized controlled trials should include the results of intention-to-treat analysis, and all surveys should include response rates.” |
| | “The results should include: … the number of patients/hips in the updated series who were examined, the number who responded to questionnaires, and the number with available radiographs…” |
| | “Survey Research. Manuscripts reporting survey data, such as studies involving patients, clinicians, the public, or others, should report data collected as recently as possible, ideally within the past 2 years [ref]. Survey studies should have sufficient response rates (generally at least 60%) and appropriate characterization of nonresponders to ensure that nonresponse bias does not threaten the validity of the findings. For most surveys, such as those conducted by telephone, personal interviews (e.g., drawn from a sample of households), mail, e-mail, or via the Web, authors are encouraged to report the survey outcome rates by using standard definitions and metrics, such as those proposed by the American Association for Public Opinion Research [ref].” |
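The AAPOR metrics mentioned in the excerpt above define survey outcome rates in terms of final sample dispositions. As a hedged illustration only (the function names and the sample disposition counts below are invented for this sketch, not taken from the paper), AAPOR's Response Rate 1 and Response Rate 2 can be computed like this:

```python
# Sketch of AAPOR Standard Definitions outcome rates.
# Disposition codes: I = complete interviews, P = partial interviews,
# R = refusals/break-offs, NC = non-contacts, O = other eligible non-interviews,
# UH = unknown if household/occupied, UO = unknown, other.

def aapor_rr1(I, P, R, NC, O, UH=0, UO=0):
    """Response Rate 1: complete interviews divided by all eligible
    cases plus cases of unknown eligibility."""
    return I / ((I + P) + (R + NC + O) + (UH + UO))

def aapor_rr2(I, P, R, NC, O, UH=0, UO=0):
    """Response Rate 2: like RR1, but partial interviews also count
    in the numerator."""
    return (I + P) / ((I + P) + (R + NC + O) + (UH + UO))

# Illustrative (invented) disposition for a mailed survey of 1,000 contacts
rr1 = aapor_rr1(I=520, P=40, R=180, NC=200, O=10, UH=50)
rr2 = aapor_rr2(I=520, P=40, R=180, NC=200, O=10, UH=50)
print(f"RR1 = {rr1:.1%}, RR2 = {rr2:.1%}")  # prints "RR1 = 52.0%, RR2 = 56.0%"
```

Reporting which rate was used, and the disposition counts behind it, is exactly the kind of detail the checklist items below (“Response rate stated”, “How response rate was calculated”) ask for.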
Figure 1: Flow diagram of records and reports—Guidelines for survey research and evidence on the quality of reporting of surveys.
Checklist items for reporting survey research.
| Reporting Item | Kelley | Burns | Draugalis | AAPOR |
| --- | --- | --- | --- | --- |
| Justification of research method | x | x | x | |
| Background literature review | x | x | | |
| Explicit research question | x | x | x | |
| Clear study objectives | x | x | x | |
| Description of methods used for data analysis | x | x | x | |
| Method of questionnaire administration | x | x | | |
| Location of data collection | x | x | x | |
| Dates of data collection | x | | | |
| Number and types of contact | x | x | x | |
| Methods sufficiently described for replication | x | x | | |
| Evidence of reliability | x | | | |
| Evidence of validity | x | | | |
| Methods for verifying data entry | x | | | |
| Use of a codebook | x | x | | |
| Sample size calculation | x | x | | |
| Representativeness | x | x | x | x |
| Method of sample selection | x | x | x | |
| Description of population and sample frame | x | | | |
| Description of the research tool | x | x | x | x |
| Description - development of research tool | x | x | | |
| Instrument pretesting | x | x | | |
| Instrument reliability and validity | x | x | x | |
| Scoring methods | x | x | | |
| Results of research presented | x | x | | |
| Results address objectives | x | x | | |
| Clear description - results based on part sample | x | | | |
| Generalisability | x | x | | |
| Response rate stated | x | x | x | |
| How response rate was calculated | x | x | | |
| Discussion of nonresponse bias | x | | | |
| All respondents accounted for | x | x | | |
| Interpret and discuss findings | x | x | | |
| Conclusions and recommendations | x | x | | |
| Limitations | x | x | | |
| Consent | x | x | | |
| Sponsorship | x | | | |
| Research ethics approval | x | | | |
| Evidence of ethical treatment of human subjects | x | | | |
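One way to read the checklist above is by how many of the four guidance sources (Kelley, Burns, Draugalis, AAPOR) endorse each item. A minimal Python sketch, reproducing only a few rows for illustration, with counts taken from the number of “x” marks per row:

```python
# Endorsement counts (out of 4 guidance sources) for selected checklist
# items, transcribed from a few rows of the table above.
endorsements = {
    "Representativeness": 4,
    "Description of the research tool": 4,
    "Response rate stated": 3,
    "Instrument reliability and validity": 3,
    "Dates of data collection": 1,
    "Discussion of nonresponse bias": 1,
}

# Items every source agrees on (dicts preserve insertion order in Python 3.7+)
universal = [item for item, n in endorsements.items() if n == 4]
print("Endorsed by all four sources:", universal)
```

Only two of the sampled items are endorsed by all four sources, which mirrors the paper's observation that existing guidance for survey reporting is inconsistent.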
Systematic review—evidence on the quality of reporting of survey research.
| Reporting Criteria | Reference | Journals Reviewed | Number of Surveys | Results |
| --- | --- | --- | --- | --- |
| Response rates | Badger 2005 | 3 nursing journals, 2002 | 270 | 49% did not report a response rate or provide sufficient sample disposition to calculate one |
| | Smith 2002 | 8 journals: political science, sociology, survey research, 1998–2001 | 571 | 60% did not provide a response rate; lower for survey research (54%) than sociology (59%) or political science (73%) |
| | Asch 1997 | 111 medical journals, 1991 | 321 (178 articles) | 30% did not report a response rate or provide sufficient sample disposition to calculate one |
| | Johnson 2003 | 9 social science and 9 health science journals, 2000–2003 | 95 | 5% did not report a response rate. Quality of response rate reporting varied by mode of administration, with mail surveys providing a more complete sample disposition |
| | Cummings 2002 | Physician surveys, 1986–1995 | 257 | 5% did not report a response rate; of those that did, a further 3% did not provide the number of individuals in the sample or the number responding |
| Non-response analysis | Werner 2007 | 9 management journals, 2000–2004 | 705 | 31% reported non-response analyses |
| | Asch 1997 | 111 medical journals, 1991 | 321 (178 articles) | 26% reported non-response analyses |
| | Cummings 2001 | Physician surveys, 1986–1995 | 257 | 18% reported non-response analyses |
| Survey instrument | Schilling 2006 | 3 general medicine journals, 2000–2003 | 93 | 8% provided access to the questionnaire. When corresponding authors were contacted, 46% failed to provide the questionnaire despite repeated contact |
| | Rosen 2006 | 4 epidemiological journals, 2005 | 71 | 85% did not provide access to the complete questionnaire. 13% did not indicate the type of questionnaire (i.e., interviewer- or self-administered); of those indicating the type, 10% did not report the mode of administration |
Figure 2: Identification process for article selection—Review of published reports of self-reported surveys.
Items reported by 117 included articles.
| Criteria | Category | Number (%) |
| --- | --- | --- |
| Design of study stated | Both title and abstract | 90 (77) |
| | Either title or abstract | 23 (20) |
| | Not stated | 4 (3) |
| Background provided | Yes | 117 (100) |
| Purpose/aim of paper explicitly stated | Yes | 100 (85) |
| | No | 17 (15) |
| Description of the questionnaire | Questionnaire provided | 16 (14) |
| | Core questions provided | 25 (21) |
| | One complete question provided | 36 (31) |
| | Questions not provided | 40 (34) |
| Existing tool, psychometric properties presented | Yes | 12 (10) |
| | No | 40 (34) |
| | Not applicable | 65 (56) |
| Existing tool, references to original work provided | Yes | 50 (43) |
| | No | 2 (2) |
| | Not applicable | 65 (56) |
| New tool, procedures to develop and pre-test provided | Yes | 20 (17) |
| | No | 91 (78) |
| | Not applicable | 6 (5) |
| New tool, reliability and validity reported | Both | 3 (3) |
| | Reliability only | 11 (9) |
| | Validity only | 8 (7) |
| | Neither | 88 (75) |
| | Not applicable | 7 (6) |
| Description of the scoring procedures provided | Yes | 38 (32) |
| | No | 63 (54) |
| | Not applicable | 16 (14) |
| Description of survey population and sample frame | Both | 4 (3) |
| | Survey population | 43 (37) |
| | Sample frame | 63 (54) |
| | Neither | 7 (6) |
| Description of representativeness of the sample | Yes | 13 (11) |
| | No | 104 (89) |
| Sample size calculation or rationale/justification presented | Yes | 7 (6) |
| | No | 110 (94) |
| Mode of administration | | 67 (57) |
| | In-person self-administered | 13 (11) |
| | Mixed-mode | 14 (12) |
| | Not explicitly stated | 23 (20) |
| Information on the type and number of contacts provided | Type and number | 61 (52) |
| | Type only | 15 (13) |
| | No information | 41 (35) |
| Information on financial incentives provided | Yes | 27 (23) |
| | No | 90 (77) |
| Description of who approached potential participants | Yes | 15 (13) |
| | No | 102 (87) |
| Method of data analysis described | Adequate | 50 (43) |
| | Inadequate | 55 (47) |
| | No description | 12 (10) |
| Method for analysis of nonresponse error provided | Yes | 15 (13) |
| | No | 102 (87) |
| Method for calculating response rate provided | Yes | 5 (4) |
| | No | 112 (96) |
| Definitions for complete versus partial completions provided | Yes | 5 (4) |
| | No | 112 (96) |
| Methods for handling item missing data provided | Yes | 13 (11) |
| | No | 104 (89) |
| Response rate reported | Yes, defined | 89 (76) |
| | Yes, not defined | 20 (17) |
| | Partial information | 6 (5) |
| | No information | 2 (2) |
| All respondents accounted for | Yes | 15 (13) |
| | No | 102 (87) |
| Information on how non-respondents differ from respondents provided | Yes | 33 (28) |
| | Issue addressed | 4 (3) |
| | No information | 80 (68) |
| Results clearly presented | Yes – complete | 42 (36) |
| | Yes – partial | 39 (33) |
| | No | 36 (31) |
| Results address objectives | Yes | 114 (97) |
| | No | 3 (3) |
| Results summarized referencing study objectives | Yes | 117 (100) |
| Strengths of the study stated | Yes | 27 (23) |
| | No | 90 (77) |
| Limitations of the study stated | Yes | 110 (94) |
| | No | 7 (6) |
| Generalisability of results discussed | Yes | 47 (40) |
| | No | 70 (60) |
| Study funding reported | Yes | 86 (74) |
| | No | 31 (27) |
| Research Ethics Board (REB) review reported | Yes | 69 (59) |
| | Reported REB exempt | 8 (7) |
| | No | 40 (34) |
| Subject consent procedures reported | Yes | 27 (23) |
| | Reported waiver of informed consent | 2 (2) |
| | No | 88 (75) |