Jeppe Bennekou Schroll, Emma Maund, Peter C Gøtzsche.
Abstract
BACKGROUND: Misclassification of adverse events in clinical trials can sometimes have serious consequences. Therefore, each of the many steps involved, from a patient's adverse experience to presentation in tables in publications, should be as standardised as possible, minimising the scope for interpretation. Adverse events are categorised by a predefined dictionary, e.g. MedDRA, which is updated biannually with many new categories. The objective of this paper is to study interobserver variation and other challenges of coding.
Year: 2012 PMID: 22911755 PMCID: PMC3401103 DOI: 10.1371/journal.pone.0041174
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Figure 1. The MedDRA 5-level hierarchy demonstrated by using ‘common cold’ as an example.
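The five MedDRA levels named in Figure 1 (SOC, HLGT, HLT, PT, LLT) can be sketched as a simple mapping from level to term. The level names are standard MedDRA; the specific term placements for ‘common cold’ below are illustrative assumptions and may differ from the current MedDRA release.

```python
# Illustrative sketch of the MedDRA 5-level hierarchy as a level -> term mapping.
# The term placements are assumptions for illustration, not taken from a
# specific MedDRA release.
common_cold_path = {
    "SOC": "Infections and infestations",         # System Organ Class
    "HLGT": "Infections - pathogen unspecified",  # High Level Group Term
    "HLT": "Upper respiratory tract infections",  # High Level Term
    "PT": "Nasopharyngitis",                      # Preferred Term
    "LLT": "Common cold",                         # Lowest Level Term
}

def format_path(path):
    """Render the five levels from most general (SOC) to most specific (LLT)."""
    order = ["SOC", "HLGT", "HLT", "PT", "LLT"]
    return " > ".join(f"{level}: {path[level]}" for level in order)

print(format_path(common_cold_path))
```

The fixed ordering from SOC down to LLT mirrors how a verbatim description is coded to an LLT and then rolled up for tabulation, which is the grouping behaviour the studies in the table below examine.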
Figure 2. Flow chart of the process of identifying studies.
Description of included studies.
| Author/Year | Aim | Study design | Main findings |
| --- | --- | --- | --- |
| Brown 1996 | To determine MedDRA's adequacy in representing medical terms used in UK data sheets | A product from each of the main drug classes in the British National Formulary was scrutinised for medical terms, which were then coded using MedDRA. Matches were classed for accuracy | Identical or acceptable matches for 90% of the side effects |
| Brown 1997 | To compare MedDRA with COSTART for specificity of coding clinical trial data and for the effects of coding on the analysis and presentation of safety data from the trial | Verbatim descriptions of adverse events from a phase II trial were coded using MedDRA and COSTART and assessed for accuracy. The incidence of adverse events using the different dictionaries was compared | Using MedDRA resulted in more exact matches than using COSTART (90% vs 62%). With MedDRA, 267 codes were used; with COSTART, only 169. The two terminologies gave different breakdowns of adverse events |
| Brown 2002 | To explore the numerical and conceptual relationships between WHO-ART and MedDRA and their ability to detect signals | A sample of approximately one sixth of all WHO-ART preferred terms was taken. MedDRA was searched for each of these terms to find the best match | 315 WHO-ART terms were identified and were matched with 943 MedDRA preferred terms |
| Brown 2004 | To identify common adverse events in clinical trials by looking at product labeling and comparing this to MedDRA terms | Adverse events from 10 randomly selected drugs in the Physician's Desk Reference were compared with MedDRA terms | Some terms in the product labels were associated with hundreds of MedDRA terms, e.g. “infection” (several hundred) and “pain” (168 items) |
| Fescharek 2004 | To investigate MedDRA's impact on retrieval strategies, analysis and presentation of coded data | Comparison of trial data coded in WHO-ART with the same data recoded in MedDRA | In WHO-ART, 214 different terms were used, whereas in MedDRA 312 different terms were used. They were grouped quite differently |
| Journot 2008 | To use the MedDRA hierarchy for data analysis by redefining the hierarchy to fit trial objectives | The authors developed a new general 5-step strategy to select a SOC (system organ class) for an adverse event as trial primary SOC, consistent with trial-specific objectives. This was applied to clinical trial data and compared to the original MedDRA hierarchy | Altogether, 23% of MedDRA primary SOCs were modified |
| Nilsson 2001 | To analyse the impact of defining “treatment emergent adverse events” | Since only treatment emergent adverse events are reported in trials, the authors identified in how many ways this could be defined and the consequences for test data | At least 26 different strategies for censoring adverse events exist. Depending on the chosen strategy, the same data resulted in 2 to 7 adverse events |
| Toneatti 2005 | To assess the feasibility of coding with MedDRA. To develop an approach for MedDRA implementation within an institutional research unit that contributes to efficient, concise and reproducible event coding | 1) Two blinded coders used MedDRA to code 260 verbatim descriptions of adverse events from a clinical trial and reported difficulties in coding. Variability between the two coders was measured and accuracy was determined by a medical coding committee. 2) MedDRA 6.1 was applied to both the list of frequent adverse events and a trial coded with MedDRA 5.0 | 1) 32 adverse events (12%) were coded differently by the two coders; 13% of the adverse events were assessed to be “non-accurate”. 2) When changing to a new MedDRA version, 38 (9%) adverse events changed |
| White 1998 | To obtain a preliminary assessment of the impact of MedDRA on the frequency of expedited adverse event reports based on current (non-MedDRA) labeling | Verbatim adverse event reports (surveillance) for two different marketed drugs were coded with WHO-ART and MedDRA and it was determined whether the code was mentioned in the product label. A rating scale was used to quantify the differences | Twenty-seven terms (13%) had some syntactic differences, although these were not considered medically significant. Thirty-two terms (16%) were rated as medically significantly different but did not affect the label. Ten terms (5%) were rated as both medically different and resulted in a labeling discrepancy |
| Zhao-Wong 2006 | To obtain more user input on issues related to the feasibility study and MedDRA terminology in general | A survey of MedDRA users performed by the MSSO, the organization maintaining MedDRA | 12 responses were received out of 29 invited. The majority of MedDRA users relied on primary paths for both reporting and analysis. The usage of secondary links was limited |
| MedDRA Term Selection 2011 | To aid medical coders in choosing codes consistently | Not a study but a manual | Describes many situations where there might be doubt on how to code a reported adverse event and suggests a solution |
| MedDRA Data Retrieval 2011 | To aid investigators in presenting adverse events | Not a study but a manual | Describes how adverse events can be presented by the hierarchy and how to use standard and custom searches to lump related adverse events together |