Annette M O'Connor, Sarah C Totton, Jonah N Cullen, Mahmood Ramezani, Vijay Kalivarapu, Chaohui Yuan, Stephen B Gilbert.
Abstract
Systematic reviews are increasingly using data from preclinical animal experiments in evidence networks. Further, there are ever-increasing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also inform prioritization of automation efforts that allow the identification of the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research in order to inform unique aspects of assessment of bias and error in preclinical research. Using 100 preclinical experiments each related to brain trauma and toxicology, we assessed design elements described by the investigators. We evaluated Methods and Materials sections of reports for descriptions of the following design elements: 1) use of comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of factors. Many investigators reported using design elements that suggested the potential for unit-of-analysis errors, i.e., descriptions of repeated measurements of the outcome (94/200) and descriptions of potential for pseudo-replication (99/200). Use of complex factor arrangements was common, with 112 experiments using some form of factorial design (complete, incomplete or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means understanding bias and error in preclinical experimental design might require greater expertise than simple parallel designs. 
Similarly, use of complex factor arrangements creates novel challenges for accurate automation of data extraction and bias and error assessment in preclinical experiments.
Year: 2018 PMID: 29953471 PMCID: PMC6023607 DOI: 10.1371/journal.pone.0199441
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Design element groups and annotation options for use with an AFLEX (automatic functional language recognition/EXtraction) interface for annotating portable document format files.
| Design element | Element option | Comments |
|---|---|---|
| Control group | None | There is only one group in the study and this group received the intervention. This group may serve as its own control, i.e., the outcome is assessed prior to and following application of the intervention(s). |
| Control group | Concurrent | The design has two or more comparison groups that occur at the same time. |
| Control group | Historic | The design has at least one comparison group that completed the study before the other comparison group(s) entered the study. |
| Unit of concern | Group | The factors are applied at the level of the group, such as cage or other housing. |
| Unit of concern | Individual | The factors are applied at the level of the individual. |
| Unit of concern | Nested | There are two or more hierarchical levels of the factors (e.g., one factor applied to the pregnant mother and a second factor applied to the pups). |
| Arrangement of the factors | Parallel | Two or more experimental groups are followed over time. Interaction between factors is not studied. |
| Arrangement of the factors | Cross-over | At least two experimental groups are in the study, and the groups swap interventions. |
| Arrangement of the factors | Complete factorial | At least two factors are studied, all applied at the same level, and all possible combinations of these factors are present in the design. |
| Arrangement of the factors | Incomplete factorial | At least two factors are studied, all applied at the same level, but not all possible combinations of these factors are present in the design. |
| Arrangement of the factors | Split-plot | Factors are investigated at two or more hierarchical levels in the study, i.e., one or more factors are nested within another factor (e.g., whole mouse, two or more tissues within the mouse). |
| Allocation | Random | Refers to the use of a random allocation method. |
| Allocation | Systematic | Refers to the use of alternation methods. |
| Allocation | Minimization | Minimization includes matching on known confounders based on previously enrolled animals. |
| Allocation | Haphazard | A method that is none of the above, such as allocating the next intervention to the next mouse caught. Rarely is the word "haphazard" used; however, a described method might appear haphazard. |
| Concealment | Blinded intervention allocation | The investigators indicated whether the allocation sequence was concealed prior to enrolment. |
| Concealment | Blinded outcome assessment | The investigators indicated whether the outcome assessor(s) was/were blinded to the intervention groups. |
| Independence | Pseudo-replication | Pseudo-replication is considered multiple measures of an outcome designed to capture random experimental noise, e.g., multiple pups within a litter when the dam had been allocated to treatment, or multiple tissue sections within an animal. |
| Independence | Repeated measures | Repeated measures refers to multiple measurements of an outcome when a factor is varied. The multiple outcome measurements are spread across a factor of potential interest, such as time or decibels. |
| Investigator-identified study design | Provided | The study design, as identified by the study investigator(s) in the Title, Abstract, Keywords, Objectives, and/or Methods sections of the article. |
| Nature of the factors | All could be randomized | The investigators examined only factors that could be randomized (e.g., drugs, exercise treatments, diets, etc.). |
Frequency of description of design elements and design element options in the two preclinical datasets evaluated (CAMARADES (brain trauma/stroke) and toxicology).
| Design element | Element option | CAMARADES | Toxicology |
|---|---|---|---|
| Control group | None | 0 | 0 |
| | Concurrent | 98 | 92 |
| | Historic | 0 | 0 |
| | Unclear | 2 | 8 |
| Unit of concern | Group | 0 | 12 |
| | Individual | 92 | 37 |
| | Nested | 0 | 17 |
| | Unclear | 8 | 34 |
| Arrangement of the factors | Complete factorial | 27 | 42 |
| | Cross-over | 0 | 0 |
| | Incomplete factorial | 12 | 11 |
| | Parallel | 58 | 24 |
| | Split-plot | 0 | 20 |
| | Unclear | 3 | 3 |
| Allocation | Haphazard | 0 | 0 |
| | Minimization | 0 | 0 |
| | Random | 79 | 62 |
| | Systematic | 0 | 0 |
| | Unclear | 21 | 38 |
| Concealment (a) | Intervention allocation | 12 | 0 |
| | NDD | 88 | 100 |
| Concealment (b) | Outcome assessment | 60 | 12 |
| | NDD | 40 | 88 |
| Independence (a) | Pseudo-replication | 44 | 50 |
| | NDD | 56 | 50 |
| Independence (b) | Repeated measures | 40 | 59 |
| | NDD | 60 | 41 |
| Investigator-identified study design | Provided | 0 | 7 |
| | NDD | 100 | 93 |
| Nature of the factors | All could be randomized | 94 | 69 |
| | Some could be randomized | 5 | 30 |
| | None could be randomized | 0 | 0 |
| | Unclear | 1 | 1 |
* NDD = no discernible description: neither reviewer/reader was able to find text that described this element.