Naomi Schalken, Charlotte Rietbergen.
Abstract
Objective: The goal of this systematic review was to examine the reporting quality of the method sections of quantitative systematic reviews and meta-analyses published from 2009 to 2016 in the field of industrial and organizational psychology, using the Meta-Analysis Reporting Standards (MARS), and to update previous research such as the studies of Aytug et al. (2012) and Dieckmann et al. (2009).
Keywords: MARS; industrial and organizational psychology; replicability; reporting quality; systematic review; transparency
Year: 2017 PMID: 28878704 PMCID: PMC5572251 DOI: 10.3389/fpsyg.2017.01395
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
Summary of previous studies on methodological decisions and/or reporting quality.
| # | Authors | Topic | Method | Main findings |
| 1 | Wanous et al. | The influence of judgment calls on the results of meta-analyses in the field of I/O psychology | The consequences of judgment calls on the final results of four pairs of meta-analyses were analyzed | Judgment calls made during the research process resulted in differences in results in the pairs of meta-analyses. These differences were mainly caused by judgment calls in the definition of inclusion criteria, in the data extraction, and in the search and selection of studies. The authors' interpretation could also play a role in differences in the final results |
| 2 | Ada et al. | The influence of methodological decisions on the final conclusions of meta-analyses about the business value of information technology | Several hypotheses were tested while varying meta-analytic decisions such as the inclusion of studies, the exclusion of outliers, and the selection of the statistical meta-analysis method | Meta-analytic choices influenced the results of meta-analyses, especially the choice of inclusion and exclusion criteria and, in some cases, the choice of statistical method. Hypotheses with a theoretical foundation were more robust to different decisions made in the meta-analytic process |
| 3 | Geyskens et al. | The current state of meta-analytic research in the field of management and the influence of decisions on the final conclusions of meta-analyses | 69 meta-analyses published between 1984 and 2007 in 14 management journals were evaluated on the researcher decisions made. In addition, four meta-analyses were performed to investigate the influence of researcher decisions | Decisions regarding the statistical methods in meta-analyses could have substantial influence on the final results of meta-analyses. Important information about choices in statistical methods was omitted in a substantial part of the studied meta-analyses |
| 4 | Aguinis et al. | The effects of meta-analytic choices and judgment calls on effect sizes and substantive conclusions of meta-analyses in the field of I/O psychology and management | The content of 196 meta-analyses from 1982 to 2009, including their effect sizes, was analyzed for different methodological choices and judgment calls | Methodological choices and judgment calls had little impact on the final derived effect sizes and substantive conclusions in the meta-analyses |
| 5 | Nieminen et al. | The influence of researcher decisions on the processes and findings of meta-analyses about telecommuting | The influence of researcher decisions was studied in three telecommuting meta-analyses | No direct influence of researcher decisions was found on the findings of the meta-analyses, but some differences existed in the prior decisions of meta-analyses, such as in the inclusion criteria and in the selection of studies and moderator variables |
| 6 | Aytug et al. | Evaluating the reporting transparency and completeness of meta-analyses in the field of I/O psychology in a systematic review | Meta-analyses from 1995 to 2008 were retrospectively assessed with items of reporting guidelines such as QUOROM and MARS | Only half of the included meta-analyses reported enough information for replication or the assessment of validity. The reporting of the literature search, the inclusion/exclusion criteria, and statistical information was often incomplete. Information about limitations of the review, possible bias in primary studies, and the amount of heterogeneity was often lacking |
| 7 | Kepes et al. | Reviewing the MARS guideline and discussing best practices in meta-analytic reviews in the organizational sciences | Discussion of several MARS items and the implementation of best practices in meta-analyses in the organizational sciences | Meta-analytic reviews did not comply with recommendations in guidelines such as MARS, and still had problems with the accuracy, transparency, and replicability of their results |
| 8 | Ahn et al. | Reviewing the methodological quality of meta-analyses in the field of education | 56 educational meta-analyses from the 2000s were reviewed on different methodological aspects | Meta-analytic practices were adequate in the problem formulation and data collection stages. Improvements could be made in the data evaluation and data analysis stages |
| 9 | Dieckmann et al. | Evaluating practice and reporting at each stage of the meta-analytic process | A random sample of meta-analyses from psychology and related fields from 1994 to 2004 was assessed | Improvements could be made in the discussion of publication bias, in the coding procedures and reliability, in dispersion measures, and in the discussion of limitations. Inclusion criteria, the list of primary reports, and literature search methods were often fully reported |
| 10 | Fehrmann and Thomas | Evaluating the reporting of computer searches in systematic reviews | Systematic reviews from PsycINFO and the Cochrane Library were evaluated with the Computer Search Report Checklist | The majority of the reviews reported whether more than one computer source was used and reported the alternate search terms used. Techniques like truncation searching, the use of a controlled vocabulary, and search tools for articles that cited a study of interest were underreported |
Figure 1. Flow chart of the literature search and selection process of the review.
Characteristics of systematic reviews and meta-analyses in the review (n = 120).
| Characteristic | n (%) |
| Journal of Applied Psychology | 43 (35.8) |
| Personnel Psychology | 11 (9.2) |
| Journal of Organizational Behavior | 12 (10.0) |
| Journal of Vocational Behavior | 19 (15.8) |
| Journal of Occupational Health Psychology | 5 (4.2) |
| Work and Stress | 7 (5.8) |
| Organizational Behavior and Human Decision Processes | 5 (4.2) |
| European Journal of Work and Organizational Psychology | 5 (4.2) |
| Journal of Business and Psychology | 6 (5.0) |
| Journal of Occupational and Organizational Psychology | 7 (5.8) |
| Yes | 48 (40.0) |
| No | 72 (60.0) |
| 2009 | 14 (11.7) |
| 2010 | 16 (13.3) |
| 2011 | 22 (18.3) |
| 2012 | 14 (11.7) |
| 2013 | 16 (13.3) |
| 2014 | 13 (10.8) |
| 2015 | 18 (15.0) |
| 2016 | 7 (5.8) |
| Systematic review | 3 (2.5) |
| Meta-analysis (with and without a systematic review) | 117 (97.5) |
The literature search was conducted through April 2016.
Compliance with the coding guideline for reporting quality based on MARS (American Psychological Association Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008) (n = 120). Cell values are n (%).
| # | Category | MARS item | Reported | Partially reported | Not reported |
| 1 | Inclusion and exclusion criteria | Operational characteristics of independent (predictor) and dependent (outcome) variable(s) | 50 (41.7) | 32 (26.7) | 38 (31.7) |
| 2 | | Eligible participant populations | 35 (29.2) | | 85 (70.8) |
| 3 | | Eligible research design features (e.g., random assignment only, minimal sample size) | 35 (29.2) | | 85 (70.8) |
| 4 | | Time period in which studies needed to be conducted/published | 7 (5.8) | 1 (0.8) | 112 (93.3) |
| 5 | | Geographical and/or cultural restrictions | 2 (1.7) | | 118 (98.3) |
| 6 | Moderator and mediator analyses | Definition of all coding categories used to test moderators or mediators of the relation(s) of interest | 52 (43.3) | 18 (15.0) | 50 (41.7) |
| 7 | Search strategies | Reference and citation databases searched | 117 (97.5) | 3 (2.5) | |
| 8 | | Keywords used to enter databases and registries | 105 (87.5) | 4 (3.3) | 11 (9.2) |
| 9 | | Time period covered by the search | 28 (23.3) | 55 (45.8) | 37 (30.8) |
| | | Other efforts to retrieve all available studies: | | | |
| 10 | | • Listservs queried | 29 (24.2) | | 91 (75.8) |
| 11 | | • Contacts made with authors | 47 (39.2) | | 73 (60.8) |
| 12 | | • Reference lists of reports examined | 80 (66.7) | | 40 (33.3) |
| 13 | | Method of addressing reports in languages other than English | 21 (17.5) | | 99 (82.5) |
| | | Process for determining study eligibility: | | | |
| 14 | | • Aspects of reports were examined (i.e., title, abstract, and/or full text) | 16 (13.3) | 2 (1.7) | 102 (85.0) |
| 15 | | • Number and qualifications of relevance judges | 1 (0.8) | 7 (5.8) | 112 (93.3) |
| 16 | | • Indication of agreement, how disagreements were resolved | 1 (0.8) | 3 (2.5) | 116 (96.7) |
| 17 | | Treatment of unpublished studies | 74 (61.7) | | 46 (38.3) |
| 18 | Coding procedures | Number and qualifications of coders (e.g., level of expertise in the area, training) | 12 (10.0) | 72 (60.0) | 36 (30.0) |
| 19 | | Inter-coder reliability or agreement | 80 (66.7) | | 40 (33.3) |
| 20 | | Whether each report was coded by more than one coder and if so, how disagreements were resolved | 80 (66.7) | 2 (1.7) | 38 (31.7) |
| 21 | | Assessment of study quality | 2 (1.7) | | 118 (98.3) |
| 22 | | How missing data were handled | 109 (90.8) | 11 (9.2) | |