Literature DB >> 36048863

Variation in detected adverse events using trigger tools: A systematic review and meta-analysis.

Luisa C Eggenschwiler1, Anne W S Rutjes2, Sarah N Musy1, Dietmar Ausserhofer1,3, Natascha M Nielen1, René Schwendimann1,4, Maria Unbeck5,6, Michael Simon1.   

Abstract

BACKGROUND: Adverse event (AE) detection is a major patient safety priority. However, despite extensive research on AEs, reported incidence rates vary widely.
OBJECTIVE: This study aimed: (1) to synthesize available evidence on AE incidence in acute care inpatient settings using Trigger Tool methodology; and (2) to explore whether study characteristics and study quality explain variations in reported AE incidence.
DESIGN: Systematic review and meta-analysis.
METHODS: To identify relevant studies, we queried PubMed, EMBASE, CINAHL, Cochrane Library and three journals in the patient safety field (last update search 25.05.2022). Eligible publications fulfilled the following criteria: adult inpatient samples; acute care hospital settings; Trigger Tool methodology; focus on specialty of internal medicine, surgery or oncology; published in English, French, German, Italian or Spanish. Systematic reviews and studies addressing adverse drug events or exclusively deceased patients were excluded. Risk of bias was assessed using an adapted version of the Quality Assessment Tool for Diagnostic Accuracy Studies 2. Our main outcome of interest was AEs per 100 admissions. We assessed nine study characteristics plus study quality as potential sources of variation using random regression models. We received no funding and did not register this review.
RESULTS: Screening 6,685 publications yielded 54 eligible studies covering 194,470 admissions. The cumulative AE incidence was 30.0 per 100 admissions (95% CI 23.9-37.5; I2 = 99.7%), and between-study heterogeneity was high, with a prediction interval of 5.4-164.7. Overall, studies' risk of bias and applicability-related concerns were rated as low. Eight of the nine methodological study characteristics, such as patient age and type of hospital, explained some of the variation in reported AE rates, as did study quality.
CONCLUSION: AE estimates from studies using trigger tool methodology vary widely, and explaining this variation is seriously hampered by low standards of reporting, e.g., of the timeframe of AE detection. Specific reporting guidelines for studies using retrospective medical record review methodology are needed to strengthen the current evidence base and to help explain between-study variation.


Year:  2022        PMID: 36048863      PMCID: PMC9436152          DOI: 10.1371/journal.pone.0273800

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

For the last two decades, patient safety has been a key issue for health care systems globally [1]. One major driver of patient harm in acute care hospitals is adverse events (AEs): "unintended physical injury resulting from or contributed to by medical care that requires additional monitoring, treatment or hospitalization, or that results in death" [2]. Reported AE rates vary between 7% and 40% [3], increasing health care costs by roughly 10,000 Euros per index admission [4]. Considering that approximately 40% of admissions can be associated with AEs, the consequences, both for health care service costs and for patient suffering, are likely underestimated [4, 5]. While some AEs are hardly avoidable, others are: studies have indicated that 6%–83% of AEs are deemed preventable [6, 7]. Retrospective medical record reviews are commonly used to collect patient safety data such as AEs. Medical record review methodology uses available data [8], was found to identify more AEs than other methods [9, 10], can be repeated over time, and can target specific AE types or the overall AE rate [11]. Of the several medical record review methods, the most used are the Harvard Medical Practice Study (HMPS) methodology [12], with subsequent modifications [13], and the Global Trigger Tool (GTT) [2]. The GTT, popularised by the Institute for Healthcare Improvement (IHI) in the US, was primarily designed as a measurement tool in clinical practice to estimate and track AE rates over time, extending beyond traditional incident reports and aiming to measure the effect of safety interventions [14, 15]. The GTT includes a two-step medical record review process. In the first step, knowledgeable hospital staff (mainly nurses) conduct primary reviews to identify potential AEs using predefined triggers as outlined in the GTT guidance.
In the second step, physicians verify the primary reviews and confirm the consensus. A "trigger" (or clue) is either a specific term or an event in a medical record that could indicate the occurrence of an AE, e.g., readmission within 30 days or a pressure ulcer [2]. The method's main methodological advantage is that it is an open, inductive process, sensitive to detecting various types of AEs [2]. GTT-based studies typically report inter-rater reliability coefficients indicating satisfactory reliability (kappa 0.34 to 0.89; mean 0.65) [16]. The GTT's triggers are grouped into six modules (e.g., Care Module, Medication Module). Some researchers use all six [17, 18], while most use only those relevant to their setting [19, 20]. Yet others either create additional modules (e.g., an Oncology Module [21, 22]) or develop modified versions tailored specifically to their patient and care settings [3, 23]. Where such versions diverge too far from the original GTT to be labelled as GTT, they are still considered trigger tools (TTs). When using the GTT outside of the USA, even where translation is unnecessary, triggers need to be adapted to reflect local norms (e.g., blood level limits), and medication labels need to be adjusted as appropriate [24, 25]. Although the GTT was developed as a manual method, with the rise of electronic health records the process can be semi- or fully automated [26]. Recent systematic reviews focussing on AEs detected via GTT or TT showed high detection rate variability [3, 6, 26]. Some of this variability may reflect differences in the studies' methodological features: adaptations in triggers, review processes or patient record selection protocols might influence detection rates, thereby impacting the comparability of detected AEs. Such differences in medical record review methodology have not yet been systematically addressed.
Therefore, this study has two aims: (1) to synthesize the evidence identified by TT methodology regarding AE incidence in acute care inpatient settings; and (2) to explore whether between-study variation in the incidence of AEs can be explained by study characteristics and study quality.

Methods

Design

This systematic review and meta-analysis adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline [27, 28].

Search strategy and information sources

Our search strategy was developed and validated using methods suggested by Hausner et al. [29, 30]. This involves generating a test set, developing and validating a search strategy, and documenting the strategy in a standardized way [30]. The medical subject headings (MeSH) and keywords for titles and abstracts in our search string were: (trigger[tiab] OR triggers[tiab]) AND (chart[tiab] OR charts[tiab] OR identif*[tiab] OR record[tiab] OR records[tiab]) AND (adverse[tiab] OR medical error[mh]). We used this to query four electronic databases: PubMed, EMBASE, CINAHL and the Cochrane Library. In addition, we hand-searched the top three journals publishing on the GTT/TT (BMJ Quality & Safety; Journal of Patient Safety; International Journal for Quality in Health Care) and screened all authors' personal libraries. In all searches, publication dates were unrestricted. The detailed search strategy used for this review and further explanations of the chosen journals are published in Musy et al. [26]. The index search was conducted in November 2015, with five update searches in April 2016, July 2017, January 2020, September 2020, and, most recently, on 25 May 2022.

Eligibility criteria

We included publications fulfilling six criteria: (1) publication in English, French, German, Italian or Spanish; (2) adult inpatient samples; (3) acute care hospital settings; (4) medical record review performed manually via GTT or other TT methods; (5) specialties in internal medicine, surgery (including orthopaedics), oncology, or any combination of these (mixed); and (6) outcome data relevant to our study, e.g., number of detected AEs. Systematic reviews and studies addressing only adverse drug events or exclusively deceased patients were excluded.

Study selection and data extraction

Titles and abstracts were screened independently by two researchers: in a first round for any information on GTT or TT, and in a second round against the eligibility criteria. After screening the titles and abstracts, two researchers individually assessed the full-text articles for eligibility. To ensure high-quality data entry, data were extracted by one researcher and verified by a second. Information on study characteristics (e.g., number of admissions, setting, patient demographics) and patient outcomes (incidence, preventability) was collected in an online data collection instrument (airtable.com). Where studies by authors of this report were considered, a pair of researchers without direct involvement in the primary study was chosen to abstract and appraise the study. Differences between researchers were then discussed in the research group to reach consensus. Our main outcome of interest was AEs per 100 admissions ((number of AEs / number of admissions) * 100). In addition, we included three secondary outcomes: AEs per 1,000 inpatient days ((number of AEs / number of inpatient days) * 1,000), the percentage of admissions with one or more AEs (number of admissions with ≥1 AE / number of admissions) and the percentage of preventable AEs (number of preventable AEs / number of AEs). We included nine TT methodology characteristics in our statistical analysis to assess their potential influence on AE detection rates. We categorized these under four headings: setting (type of hospital, type of specialty), patient characteristics (age, length of stay), design (AE definition, timeframe of AE detection, commission/omission) and reviewer (training, experience). Definitions of our variables, our categorisations of the selected characteristics and our rationale for each chosen variable and its categorisation are available in Table 1.
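As a minimal illustration, the four outcome measures defined above can be computed directly from raw counts. This sketch is not part of the review's analysis pipeline; all numbers in the example are hypothetical.

```python
def ae_outcome_measures(n_aes, n_admissions, n_patient_days,
                        n_admissions_with_ae, n_preventable_aes):
    """Compute the review's primary and secondary outcome measures
    from raw counts, as defined in the Methods section."""
    return {
        "aes_per_100_admissions": n_aes / n_admissions * 100,
        "aes_per_1000_patient_days": n_aes / n_patient_days * 1000,
        "pct_admissions_with_ae": n_admissions_with_ae / n_admissions * 100,
        "pct_preventable_aes": n_preventable_aes / n_aes * 100,
    }

# Hypothetical study: 72 AEs in 240 admissions (1,200 patient days),
# 58 admissions with >=1 AE, 45 AEs judged preventable.
rates = ae_outcome_measures(72, 240, 1200, 58, 45)
print(rates["aes_per_100_admissions"])  # 30.0
```

Note that AEs per 100 admissions can exceed 100 (as in some included studies), since one admission may involve several AEs, whereas the percentage of admissions with ≥1 AE cannot.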
Table 1

Study characteristics for stratified analysis.

Variable | Definition | Categorisation | Rationale
Setting
    Hospital | Type of hospital | Academic hospital | We reasoned that academic hospitals tend to receive more severely ill or complex patients at higher risk of experiencing AEs when compared to other hospital types [31].
Non-academic hospital
Mixed
Not reported
    Specialty | Type of unit | Internal medicine | We expected the AE incidence to vary by type of specialty. We combined surgical and orthopaedical units as an important fraction of admitted orthopaedical patients was expected to undergo surgical interventions. Mixed = a combination of the three categories mentioned above or combined with other specialties [3, 32, 33].
Surgery and orthopaedics
Oncology
Mixed
Not reported
    Patient characteristics
    Age | Mean or median age of patients at admission | > 70 years | Multi-morbidity and polypharmacy are expected to occur more often in elderly patients. We anticipated patients with multimorbid conditions or polypharmacy to be at higher risk for AEs [31, 33, 34].
≤ 70 years
Not reported
    Length of stay (LOS) | Mean or median length of hospital stay | LOS > 5 days | Patients with longer LOS are at higher risk of experiencing AEs. As the average LOS in the US and many European countries ranges between 4 and 6 days, we chose a cut-off at five days [23, 35, 36].
LOS ≤ 5 days
Not reported
Design
    AE definition | IHI AE definition: “unintended physical injury resulting from or contributed to by medical care that requires additional monitoring, treatment or hospitalisation, or that results in death” [2] | IHI like | We expected that differences in the AE definition between studies lead to variation in estimates of AE incidence [33, 37].
“Narrower” than IHI GTT
“Wider” than IHI GTT
Not reported
    Timeframe of AE detection | Definition of the time period in which AEs were detected | Hospital stay plus time after discharge | The frequency of AEs varies depending on the timeframe and setting considered, i.e., before and after index admission [38]. If a study reported AEs only during hospitalisation, it was categorized into the category “hospital stay plus time before admission”.
Hospital stay plus time before admission
Hospital stay plus time before and after admission
Not reported
    Commission and omission | Evaluation of commission or omission of care | Inclusion of commission only | The IHI GTT focuses on AEs related to commission (doing the wrong thing); however, in recent years authors have included omissions (failing to do the right thing). Including omissions in medical record reviews may lead to more AEs detected [3].
Inclusion of commission and omission
Not reported
Reviewer
    Training | The reviewer’s training before starting data collection | Training plus pilot phase | We reasoned that trained and/or experienced reviewers were less likely to miss AEs than untrained or inexperienced reviewers [37, 39, 40].
Training only
No training
Not reported
    Experience | The reviewer’s experience in applying the GTT method or a similar medical record review method | GTT or medical record review experience
No experience
Not reported

AE, Adverse event; GTT, Global Trigger Tool; IHI, Institute for Healthcare Improvement; LOS, length of stay


Quality assessment

To assess the risk of bias and applicability-related concerns for each included study, we developed and piloted a quality assessment tool (QAT) (see S1 File). It was inspired by the Quality Assessment Tool for Diagnostic Accuracy Studies 2 (QUADAS-2) and by the QAT developed by Musy et al. [41]. While assessing our included studies, we used both QUADAS-2 dimensions: risk of bias and applicability-related concerns [41]. We assessed five domains: 1) patient selection; 2) rater or reviewer; 3) trigger tool method; 4) outcomes; and 5) flow and timing. Following the QUADAS-2 structure, each domain included standardised signalling questions to help researchers rate each of the two dimensions. Possible classifications were low, high, or unclear. For each study, a QAT was completed by one researcher and reviewed by a second. To reach consensus, differences were discussed between the two and, if necessary, within the research group.

Statistical analysis

To analyse and plot our results, we used R version 4.1.3 on Linux [42] with the meta [43] and metafor [44] packages. We determined the number of AEs per 100 admissions and the number of AEs per 1,000 patient days from the reported data. If the number of AEs was not explicitly reported, we calculated it from the reported estimate of AEs per 100 admissions and the number of patient admissions. Similarly, the number of patient days could be calculated from the total number of AEs and the reported rate of AEs per 1,000 patient days. For studies published by this study’s co-authors, or in some cases by their research colleagues, we asked the authors for additional information where samples overlapped, to avoid double counting of admissions and AEs [34, 45, 46]. Pooled estimates for AEs per 100 admissions and AEs per 1,000 patient days were derived using a random-effects Poisson regression approach with the R metarate function [43, 44]. With the R metaprop function, a random-effects logistic regression model was used to obtain summary estimates and confidence intervals (derived by the Wilson method) for the outcomes expressed as percentage of admissions with ≥1 AE and percentage of preventable AEs [43].
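The Wilson method mentioned above is a standard score-based confidence interval for a proportion. The authors computed it via R's metaprop; purely as an illustration of the formula for a single (hypothetical) study, not of the full random-effects pooling, it can be sketched in Python:

```python
import math

def wilson_ci(events, n, z=1.959964):
    """Wilson score interval for a proportion events/n
    (z = 1.96 gives an approximate 95% interval)."""
    p = events / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Hypothetical single study: 62 of 240 admissions had >=1 AE.
lo, hi = wilson_ci(62, 240)
```

Unlike the naive Wald interval, the Wilson interval never extends below 0 or above 1, which matters for outcomes near the extremes such as very low or very high preventability percentages.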

Subgroup analysis

Heterogeneity was explored by stratified analyses of the main outcome measure, i.e., the number of AEs per 100 admissions, to evaluate the influence of the nine study characteristics: type of hospital, type of specialty, patient age, length of stay, AE definition, timeframe of AE detection, commission and omission, reviewer training, and reviewer experience. In addition, we analysed the five elements relating to risk of bias and the three relating to applicability-related concerns. P-values were derived from the likelihood ratio test for model fit (p < 0.05 was considered significant). Furthermore, between-study heterogeneity was evaluated visually and by calculating prediction intervals [47, 48]. To assess the risk of publication bias related to small study size, we created a funnel plot regressing the logit of the AEs per 100 admissions on the standard error, assessed the symmetry of the distribution, and performed the Egger test [49].

Results

The index and update searches produced 9,780 records. Removing duplicates left 6,685 unique entries. The detailed screening process left 54 eligible studies, published in 72 publications [5, 9, 10, 14, 15, 17–22, 24, 34, 37–40, 45, 46, 50–102]. Fig 1 depicts the complete review procedure.
Fig 1

Flow diagram of literature search and included studies.

From [27] (GTT, Global Trigger Tool, TT, Trigger Tool).


Study characteristics

The 54 included studies were all published between 2009 and 2022. Their study periods ranged from one month to six years (Table 2). They were conducted in 26 countries, most in Europe (34 studies, 63%), followed by the US (12 studies, 22%) and other regions (8 studies, 15%).
Table 2

Characteristics of the 54 included studies.

Sorted by continent; within continent alphabetically by country code, and within the country by year.

Study | Country | Study period (months) | Sample size (records) | Patient age | Length of stay | Clinical specialty | Type of hospital | Timeframe of AE detection
Europe
    Hoffmann 2018 [86]AUT12239≤70 years> 5 daysSURGAcademicNR
    Grossmann 2019 [19]CHE12240≤70 years> 5 daysMEDAcademicStay + Before
    Gerber 2020 [21]CHE1.5224≤70 years≤ 5 daysONCOMixedStay + After + Before
    Nowak 2022 [100]CHE12150>70 years> 5 daysMEDAcademicStay + After + Before
    Lipczak 2011 [69, 88]DNK6572NRNRONCONRNR
    von Plessen 2012 [40]DNK18NR≤70 yearsNRMIXNRNR
    Mattson 2014 [22, 68]DNK12240NRNRONCOAcademicNR
    Bjorn 2017 [52]DNK6120NRNRMIXAcademicNR
    Brösterhaus 2020 [82]DEU280NR> 5 daysSURGAcademicNR
    Suarez 2014 [63, 91]ESP721,440NRNRMIXNon-acadNR
    Guzman Ruiz 2015 [64, 67]ESP12291>70 years> 5 daysMEDNon-acadNR
    Perez Zapata 2015 [53, 66]ESP12350≤70 yearsNRSURGAcademicNR
    Toribio-Vicente 2018 [94]ESP12233NRNRMIXAcademicNR
    Kaibel 2020 [97]ESP12251≤70 years≤ 5 daysSURGAcademicStay + After
    Menendez-Fraga 2021 [98]ESP12240>70 years> 5 daysMEDAcademicStay + After
    Perez Zapata 2022 [101]ESP91132≤70 years> 5 daysSURGMixedStay + After
    Mayor 2017 [56]GBR364,833≤70 yearsNRMIXMixedNR
    Mortaro 2017 [60]ITA66513≤70 yearsNRMIXNon-acadNR
    Cihangir 2013 [70]NLD12129NRNRONCONRNR
    Deilkas 2015 [24, 81, 92]NOR3429,865NRNRMIXMixedNR
    Farup 2015 [80]NOR24272≤70 years> 5 daysMEDNon-acadNR
    Mevik 2016 [57, 58]NOR121,680≤70 years> 5 daysMIXAcademicStay + After + Before
    Haukland 2017 [54, 85]NOR48812≤70 years> 5 daysONCONon-acadNR
    Deilkas 2017 [61]NOR1210,986NRNRMIXMixedNR
    Pierdevara 2020 [102]PRT9176>70 years> 5 daysMIXMixedNR
    Schildmeijer 2012 [72]SWE850≤70 years≤ 5 daysMIXNRNR
    Unbeck 2013 [37]SWE12350≤70 years≤ 5 daysSURGAcademicStay + After + Before
    Rutberg 2014 [73]SWE48960≤70 years> 5 daysMIXAcademicStay + After + Before
    Nilsson 2016 [46]SWE123,301≤70 years> 5 daysSURGMixedNR
    Rutberg 2016 [34]SWE244,994>70 years> 5 daysSURGMixedStay + After + Before
    Deilkas 2017 [61]SWE1219,141NRNRMIXMixedNR
    Nilsson 2018 [45, 84]SWE4856,447≤70 years> 5 daysMIXMixedNR
    Hommel 2020 [20, 89, 90]SWE361,998>70 years> 5 daysSURGMixedStay + After
    Kelly-Pettersson 2020 [96]SWE24163>70 years> 5 daysSURGAcademicStay + After
    Kurutkan 2015 [18]TUR12229≤70 years≤ 5 daysMIXAcademicNR
North America
    Griffin 2008 [83]USA12854NRNRSURGNRNR
    Naessens 2010 [9, 14]USA251,138NRNRMIXAcademicNR
    Landrigan 2010 [39, 77]USA722,341≤70 yearsNRNRMixedNR
    Classen 2011 [10]USA1795≤70 years≤ 5 daysNRMixedNR
    Garrett 2013 [5, 79]USA3617,295≤70 years≤ 5 daysMIXMixedNR
    O’Leary 2013 [74]USA12250≤70 years> 5 daysMEDAcademicNR
    Kennerly 2014 [15, 50, 78]USA609,017NRNRMIXNon-acadStay + After + Before
    Mull 2015 [76]USA4273≤70 years> 5 daysMIXNon-acadNR
    Croft 2016 [38, 59]USA11296≤70 years≤ 5 daysMIXAcademicStay + After + Before
    Lipitz-Snyderman 2017 [55]USA12400≤70 yearsNRONCOAcademicNR
    Zadvinskis 2018 [95]USA1317≤70 years≤ 5 daysMIXAcademicNR
    Sekijima 2020 [93]USA4300≤70 years> 5 daysMEDAcademicNR
Other
    Moraes 2021 [99]BRA1220≤70 years> 5 daysMIXAcademicStay + After
    Xu 2020 [62]CHN12240≤70 years> 5 daysMIXAcademicStay + After
    Hu 2019 [87]CHN12480>70 years> 5 daysMIXAcademicNR
    Wilson 2012 [71]*EGY121,358*≤70 yearsNRNRMixedNR
JOR3,769
KEN1,938
MAR984
ZAF931
SDN3,977
RUN930
YEM1,661
    Najjar 2013 [75]ISR4640≤70 years≤ 5 daysMIXMixedNR
    Hwang 2014 [17]KOR6629≤70 years> 5 daysNRAcademicNR
    Asavaroengchai 2009 [51]THA1576≤70 years≤ 5 daysMIXAcademicNR
    Müller 2016 [65]ZAF8160≤70 years> 5 daysMEDAcademicStay + Before

NR, not reported; MED, internal medicine; MIX, mixed; ONCO, oncology; SURG, surgery/orthopaedics; Academic, academic hospital; Non-acad, non-academic hospital; Stay + After, hospital stay plus time after discharge; Stay + Before, hospital stay plus time before admission; Stay + After + Before, hospital stay plus time before and after admission; *After coding these countries A–H, the study's authors linked each number directly to a letter but failed to link each letter to a particular country; it is therefore impossible to reconcile these numbers with the countries listed.

Four studies (7%) did not report their clinical specialties [10, 17, 71, 77]. Of those remaining, almost half (24 studies, 44%) involved mixed specialties. One study included no information on the number of included records [40]. The numbers of included records ranged from 50 to 56,447. Overall, we included 194,470 index admissions in our report. Table 3 illustrates the AE rates' key characteristics. In seven studies, we could not retrieve the main outcome measure of AEs per 100 admissions [14, 24, 40, 55, 70, 80, 94]; for the remaining 47, rates ranged from 2.5 to 140 per 100 admissions. Per 1,000 patient days, the 36 studies (67%) with sufficient data yielded rates ranging from 12.4 to 139.6. In the 48 studies whose data allowed us to calculate percentages of admissions with one or more AEs, these ranged from 7% to 69%. AE preventability percentages, reported by 37 studies (69%), ranged from 7% to 93%; however, four of these studies provided no relevant raw data [21, 45, 55, 56].
Table 3

Main characteristics of adverse events (AE) rates.

Study | AEs per 100 admissions | AEs per 1,000 patient days | % of admissions with ≥ 1 AE | % of preventable AEs out of all AEs
Wilson 2012 [71], Country B | 2.5 | NR | NR | 83.9
Wilson 2012 [71], Country F | 5.5 | NR | NR | 84.4
Wilson 2012 [71], Country A | 6.0 | NR | NR | 72.8
Hwang, 2014 [17] | 7.8 | 12.4 | 7.2 | 61.2
Wilson 2012 [71], Country E | 8.2 | NR | NR | 55.3
Wilson 2012 [71], Country G | 8.3 | NR | NR | 85.7
Mayor, 2017 [56] | 8.9 | NR | 8.0 | AEs detected by TT not reported separately
Najjar, 2013 [75] | 14.2 | NR | 14.2 | 59.3
Nilsson, 2018 [45, 84]$ | 14.4 | 20.2 | 11.4 | Included sample not reported separately
Wilson 2012 [71], Country C | 14.5 | NR | NR | 76.9
Wilson 2012 [71], Country D | 14.8 | NR | NR | 85.6
Deilkas, 2017 [61] (NOR) | 15.2 | NR | 13.0 | NR
Griffin, 2008 [83] | 16.2 | NR | 14.6 | NR
Deilkas, 2017 [61] (SWE) | 16.8 | NR | 14.4 | NR
Wilson 2012 [71], Country H | 18.4 | NR | NR | 93.1
Rutberg, 2016 [34]$ | 19.0 | 27.0 | 14.7 | 73.4
Nilsson, 2016 [46]$ | 19.9 | 29.6 | 15.4 | 62.5
Zadvinskis, 2018 [95] | 21.1 | 68.9 | NR | NR
Mattson, 2014 [22, 68] | 23.3 | 37.4 | 20.8 | NR
Landrigan, 2010 [39, 77] | 25.1 | 56.5 | 18.1 | 61.9
Mevik, 2016 [57, 58] | 26.6 | 39.3 | 20.7 | NR
Rutberg, 2014 [73]$ | 28.2 | 33.2 | 20.5 | 71.2
Xu, 2020 [62] | 29.2 | 32.1 | 22.5 | NR
Kurutkan, 2015 [18] | 29.3 | 80.72 | 17.0 | 64.2
Suarez, 2014 [63, 91] | 29.4 | 24.5 | 23.3 | 65.8
Schildmeijer, 2012 [72] | 30.0 | 45.3 | 20.0 | 60.0
Mortaro, 2017 [60]* | 30.4 | 31.9 | 21.6 | NR
Haukland, 2017 [54, 85] | 31.2 | 37.1 | 24.3 | NR
O’Leary, 2013 [74] | 34.4 | NR | 21.6 | 7.0
Brösterhaus, 2020 [82]* | 36.2 | 31.6 | 27.5 | NR
Müller, 2016 [65] | 36.9 | 25.8 | 24.4 | 47.5
Garrett 2013 [5, 79] | 38.0 | 85.0 | 26.0 | NR
Kennerly 2014 [15, 50, 78] | 38.0 | 61.3 | 32.1 | 18.0
Unbeck, 2013 [37]$ | 39.1 | 74.1 | 28.0 | 80.3
Mull, 2015 [76] | 39.9 | 52.4 | 21.6 | NR
Asavaroengchai, 2009 [51] | 41.0 | 52.9 | 24.0 | 55.9
Classen, 2011 [10] | 44.5 | NR | NR | NR
Lipczak, 2011 [69, 88] | 45.5 | NR | NR | NR
Perez Zapata, 2015 [53, 66] | 46.0 | NR | 31.7 | 54.7
Sekijima, 2020 [93]* | 46.3 | 73.7 | 28.3 | NR
Guzman Ruiz, 2015 [64, 67] | 51.2 | 63.0 | 35.4 | 32.2
Perez Zapata, 2022 [101] | 52.9 | NR | 31.5 | 34
Menendez-Fraga, 2021 [98] | 57.1 | 49.8 | 44.6 | 49.6
Hoffmann, 2018 [86]* | 61.9 | 31.5 | 33.5 | NR
Kelly-Pettersson, 2020 [96]$ | 62.6 | 104.2 | 38.0 | 60.8
Nowak, 2022 [100] | 72.0 | 90.6 | 42.7 | 54.6
Gerber, 2020 [21] | 75.4 | 106.6 | 42.0 | Included sample not reported separately
Kaibel, 2020 [97] | 76.1 | NR | 45.8 | 92.1
Pierdevara, 2020 [102] | 80.7 | 42.1 | NR | NR
Bjorn, 2017 [52] | 81.7 | 139.6 | 44.2 | NR
Moraes, 2021 [99] | 90.5 | 76.1 | 40.9 | NR
Hommel, 2020 [20, 89, 90]$ | 105.9 | 93.2 | 58.6 | 75.9
Croft, 2016 [38, 59] | 114.2 | NR | NR | 50.0
Hu, 2019 [87] | 127 | 22.4 | 68.5 | 50.8
Grossmann, 2019 [19] | 140 | 95.7 | 60.0 | 29.2
Cihangir, 2013 [70]* | NR | NR | 36.4 | NR
Deilkas, 2015 [24, 81, 92]* | NR | NR | 15.1 | NR
Farup, 2015 [80]* | NR | NR | 14.0 | NR
Lipitz-Snyderman, 2017 [55] | NR | NR | 36.0 | AEs detected by TT not reported separately
Naessens, 2010 [9, 14] | NR | NR | 27.0 | NR
Toribio-Vicente, 2018 [94]* | NR | NR | 20.2 | NR
von Plessen, 2012 [40] | NR | 59.8 | 25# | NR

NR, not reported; TT, Trigger Tool.

* Pooled estimate.

• Mean estimate.

‡ Calculated total number of AEs.

$ Additional outcome data included.

# Original data reported.

Our quality assessment results (Fig 2) indicate that risk of bias was rated as low for most domains (range: 48%–93%). However, the “patient selection” and “reviewer” domains received high ratings in 15% and 13% of cases respectively, considerably more than the other domains (range: 2%–6%). In two domains, risk of bias was largely unclear: “reviewer” and “trigger tool method” received this rating in 39% and 30% of cases respectively.
Fig 2

Quality assessment of all included studies.

Assessments are presented in risk of bias and applicability-related concerns. (TT method, Trigger Tool method).

Overall, applicability-related concerns were predominantly low (range across domains: 65%–87%). High ratings were most prevalent (17%) in the “patient selection” domain; unclear ratings were most common (28%) for “reviewer”. Quality assessment results at study level are provided in S1 Table.

Summary estimates from meta-analyses

The forest plot in Fig 3 presents AEs per 100 admissions, ordered by sample size. Forty-five single-country samples contributed, as well as ten samples from two multi-country studies [61, 71]. The summary estimate was 30.0 AEs per 100 admissions (95% CI 23.9–37.5). Visual inspection of the forest plot indicated a high level of between-study heterogeneity, which was confirmed by an I2 of 99.7% (95% CI 99.7–99.7). The prediction interval ranged from 5.4 to 164.7 AEs per 100 admissions. Four studies had exceptionally high detection rates [19, 20, 38, 87]. At the other end, seven study samples reported fewer than ten AEs per 100 admissions [17, 56, 71].
Fig 3

Forest plot of adverse events per 100 admissions.

Ordered by sample size [5, 10, 15, 17–22, 34, 37–39, 45, 46, 50–54, 56–69, 71–79, 82–91, 93, 95–102]. In Wilson et al. 2012, countries were not further specified. (AEs, Adverse events; * pooled estimate; • mean estimate; ‡ calculated total number of AEs).

S1–S3 Figs present additional forest plots for the three secondary outcomes: AEs per 1,000 patient days (n = 36 studies), percentage of admissions with AEs (n = 48 studies), and percentage of preventable AEs (n = 33 studies). Our meta-analysis showed a summary estimate of 48.3 AEs per 1,000 patient days (95% CI 40.4–57.8) with a high level of between-study heterogeneity (prediction interval 15.9–147.0). Twenty-six percent of admissions experienced one or more AEs (95% CI 22.0–29.5, prediction interval 7.8–58.3). Within the studies that rated preventability, 62.6% of AEs were classified as preventable (95% CI 54.0–70.5, prediction interval 16.8–93.3). Here too, visual inspection indicated high between-study heterogeneity. Funnel plot exploration did not suggest evidence of publication bias or other biases related to small study size (P from Egger test = 0.3, S4 Fig).

Effect of study characteristics

Eight of nine analysed study characteristics explained part of the heterogeneity between studies (Fig 4).
Fig 4

Forest plot with stratified analysis of the nine selected study characteristics.

(AE, adverse event; CI, confidence interval; GTT, Global Trigger Tool; IHI, Institute for Healthcare Improvement; N Studies, number of studies).

For the type of hospital, academic medical centres (n = 25, 45%) detected more AEs per 100 admissions than non-academic hospitals (47.1, 95% CI 36.6–60.5 vs. n = 6, 11%; 35.8, 95% CI 30.8–41.7); but because the summary estimate for mixed hospital types (n = 21, 38%; 17.0, 95% CI 11.7–24.8) is lower than for either academic or non-academic hospitals, this association is likely confounded by a third feature. For type of clinical specialty, the significant differences within categories were driven by the not-reported category (n = 11, 20%), which had fewer AEs per 100 admissions than the others (10.6, 95% CI 6.8–16.7). Internal medicine (n = 7, 13%) had the highest number of AEs per 100 admissions (56.4, 95% CI 40.5–78.5), followed by surgery/orthopaedics (n = 11, 20%; 41.7, 95% CI 29.5–59.0). Oncology (n = 4, 7%) had numbers similar to the mixed category (40.0, 95% CI 26.2–61.3 vs. 33.5, 95% CI 25.0–44.8). Older patients (mean > 70 years; n = 8, 15%) had a higher incidence of AEs than younger ones (mean ≤ 70 years; n = 38, 69%), although only eight studies specifically investigated older patients (63.7, 95% CI 43.6–93.0 vs. 25.9, 95% CI 19.6–34.2). As with type of clinical specialty, for length of stay the not-reported category (n = 20, 36%) had a driving effect, with a mean of 16.7 AEs per 100 admissions (95% CI 11.6–23.9). Longer stays (mean > 5 days; n = 24, 44%) had slightly higher AE rates than shorter ones (≤ 5 days; n = 11, 20%) (42.9, 95% CI 32.7–56.4 vs. 40.8, 95% CI 29.0–57.3). Almost all studies reported an IHI-like definition of AEs (n = 45, 82%).
Compared with studies reporting an IHI-like definition (29.0, 95% CI 22.4–37.5), the five (9%) that did not report such a definition had lower AE rates (22.6, 95% CI 13.9–36.8). The remaining five (9%) studies, which applied a wider than IHI AE definition, reported clearly higher AE rates (55.3, 95% CI 42.1–72.7). For two characteristics, timeframe of AE detection and commission and omission, the studies failed to report the information in 69% and 82% of cases, respectively, seriously hampering the analyses. Studies that employed a pilot phase as part of the reviewer training (n = 14, 25%) might have had slightly higher detection rates than those with training only (n = 31, 56%) (respectively 36.8, 95% CI 26.3–51.5 and 24.9, 95% CI 18.0–34.4). Reviewers with no experience in medical record review (n = 11, 20%) detected fewer AEs than those with experience (n = 16, 29%) (respectively 12.4, 95% CI 7.3–21.2 and 40.9, 95% CI 30.6–54.4). Half of all studies (n = 28, 51%) did not report whether their reviewers had experience in medical record review; in those cases, the reported AE rates were comparable to those of experienced reviewers (35.8, 95% CI 27.5–46.5).

Effect of risk of bias

Our quality assessment explained some of the variation in AE detection rates (S5 Fig). In eight studies (15%), patient selection was rated as high risk of bias because they included a slightly different patient population than defined in the inclusion criteria. These studies had higher AE rates than studies with a low risk of bias (respectively 61.2 vs. 32.5 AEs per 100 admissions). Studies in which the risk of bias for the trigger tool methodology, the outcome category or the flow and timing was rated as high or unclear detected considerably lower AE rates than those with a low risk of bias. Similarly, regarding the trigger tool methodology's applicability-related concerns, ratings of unclear corresponded to lower AE rates than ratings of low (respectively 10.7 vs. 38.7 AEs per 100 admissions).

Discussion

The aim of this systematic review and meta-analysis was to synthesize AE detection rates obtained with TT methodology in acute care inpatient settings, to explore variations in AE rates and to assess study quality. Reporting of study characteristics varied widely, and non-reporting of characteristics ranged from 5% to 82%. The summary estimate for AEs per 100 admissions was 30 (95% CI 23.9–37.5). This corresponds to an AE rate of 48 per 1,000 patient days, which translates into 48 AEs among 200 patients with a length of stay of 5 days. Twenty-six percent of patients experienced at least one AE related to their hospital stay, and 63% of all AEs were deemed preventable. Eight out of nine study characteristics explained variation in reported AE results. Studies conducted in academic medical centres, or with older populations, reported higher AE rates than those in non-academic centres or with younger adult populations. For several risk of bias categories (e.g., outcome, flow and timing), a higher risk of bias in a study indicated lower AE rates, which points to an underestimation of AE detection rates in low-quality studies. Analysing 17 studies in general inpatients, Hibbert et al. [3] reported AE rates of 8–51 per 100 admissions, a far smaller range than we detected (2.5–140). Our larger range of AEs could result from our larger study sample (n = 54). Further, their rates of admissions with AEs ranged from 7% to 40%, with a cluster of nine studies falling between 20% and 29% [3]. We found a wider range (7%–69%), but our average (26%) is close to that of Hibbert et al. [3]. Schwendimann et al.'s scoping review [32] of multicentre studies reported a median of 10% of admissions with AEs, which is less than half what we found, but congruent with Zanetti et al.'s integrative review, which reported between 5% and 11% [7]. Both of those reviews, especially Schwendimann et al.'s, concentrated solely on studies applying the HMPS methodology, not TT methodology [7, 32].
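The patient-day arithmetic above can be sketched in a few lines (an illustrative snippet; the function name `expected_aes` is ours, not from the review):

```python
def expected_aes(rate_per_1000_patient_days: float,
                 n_patients: int,
                 mean_los_days: float) -> float:
    """Number of AEs implied by a person-time incidence rate for a cohort."""
    patient_days = n_patients * mean_los_days  # total exposure time
    return rate_per_1000_patient_days * patient_days / 1000

# 200 patients x 5 days = 1,000 patient days, so a rate of
# 48 AEs per 1,000 patient days implies 48 AEs in this cohort.
print(expected_aes(48, 200, 5))  # -> 48.0
```

Expressing results per patient day in this way is what makes rates comparable across cohorts with different lengths of stay.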
One possible reason for the lower rates could be that TT methodology requires the research team to include all identified AEs (if present, several AEs per patient, not only the most severe, as in HMPS) [2, 12]. Interestingly, Panagioti et al.'s meta-analysis [6] found that half of their sample's AEs were preventable, whereas our meta-analysis indicated an overall preventability of 61%. For an academic hospital with 32,000 annual admissions, a preventable percentage of 61 would mean that roughly 5,000 AEs could be prevented annually, provided that effective prevention strategies could be implemented. The confidence intervals reported by Panagioti et al. and our 95% CI largely overlap despite the differences in selection criteria for inclusion. They included every study that explored AEs' preventability, and many of those used the HMPS methodology, i.e., targeting more severe AEs [6]. Our meta-analysis explained part of the broad variation in AE detection via the selected study characteristics. One unanticipated finding was that, for many of these characteristics, essential details (e.g., length of stay) were not provided. For those, the not reported group had a dominant influence on AE detection rates. Although four study characteristics (type of specialty, length of stay, timeframe of AE detection, and commission and omission) showed differences between subgroups, because the differences were driven by the not reported category, these characteristics only slightly explain the variation between AE detection rates. For all four characteristics, eight countries from which Wilson et al. [71] drew their samples fell within the not reported category, which might explain some of this result. Compared to other categories, academic hospitals [34], higher patient age [75] and experienced reviewers [39] all corresponded with more AEs per 100 admissions. Supporting Sharek et al. [39], we found that experienced reviewers were less likely to miss AEs than inexperienced reviewers.
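The hospital example above can be reproduced under one plausible reading (our assumption: the calculation combines the pooled proportion of admissions with at least one AE, 26%, with the 61% preventability estimate; the variable names are ours):

```python
admissions = 32_000      # annual admissions of the example academic hospital
prop_with_ae = 0.26      # pooled share of admissions with >= 1 AE
prop_preventable = 0.61  # pooled preventability estimate

preventable_aes = admissions * prop_with_ae * prop_preventable
print(round(preventable_aes))  # -> 5075, i.e., roughly 5,000 per year
```

Using the pooled AE rate of 30 per 100 admissions instead of the admission-level proportion would give a somewhat higher figure; either reading lands in the "roughly 5,000" range quoted in the text.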
These results support many published medical record review studies [23, 31–33]. Nevertheless, the findings need to be interpreted with some caution. Regarding type of specialty, the estimates for internal medicine and for surgery including orthopaedics both involve wide confidence intervals (respectively 95% CI 40.5–78.5 and 95% CI 29.5–59.0); therefore, their higher numbers of AEs per 100 admissions (respectively 56.4 and 41.7) are to be questioned, as numerous publications have found that surgical patients typically experience more AEs during their hospital stay than medical patients [6, 37, 103]. Addressing the overall quality of the included studies, we rated both their risk of bias and applicability-related concerns as low. This finding is supported by two earlier systematic reviews. First, Klein et al.'s [104] assessment of 24 of our 66 included publications indicated reasonable overall quality; second, also using a study sample that overlapped somewhat with ours, Panagioti et al. [6] rated the risk of bias of all the overlapping studies as low. Nevertheless, regarding adherence to TT methodology, including data completeness and usability, our meta-analysis clearly showed that our overall study sample's reporting quality was inadequate. Our QAT explained part of the AE detection rate's high variability: where the risk of bias was rated as high or unclear for "outcome", "trigger tool method" and "flow and timing", AE rates were lower than where it was rated as low. This suggests that insufficient reporting resulted in lower estimates, i.e., the actual AEs per 100 admissions are likely higher than reported here. Although patterns of publication bias in the field of single-arm studies measuring the incidence of AEs are not well understood, we performed a funnel plot analysis to evaluate any association between small study size and the magnitude of the estimates of AEs per 100 admissions.
Whenever an uncontrolled study evaluates the effects and safety of a therapeutic intervention, publication bias may still be expected, in that higher AE estimates may be less likely to be published. If this type of publication bias is associated with small study size, funnel plot exploration may detect it. However, the studies included in our review belong to health services and delivery research, and we did not anticipate finding obvious signs of publication bias [105]; this expectation was eventually confirmed. The vast majority of studies did not report the occurrence of AEs per patient days. Rather than considering this as potential selective reporting bias, we reason that the field is insufficiently aware of the advantage of person-time incidence rates over incidence proportions: the former facilitate comparison across studies.

Strengths and limitations

Our systematic review was based on an exhaustive search strategy, so it is unlikely that we missed studies that would have changed our findings. During the search process, we included two studies that were not identified by our search strategy; each lacked one of the core components, such as "adverse" [40] or "record" [86]. We did not systematically search the grey literature, which may have left some studies unidentified. In the absence of a suitable risk of bias tool for the type of studies included, we adapted an existing QAT to simultaneously address the included studies' risk of bias and applicability-related concerns. We conducted stratified analyses not only to evaluate the effects of study characteristics but also to evaluate the effects of QAT domains. Our systematic review included a considerably higher number of studies than previous reviews and consequently a proportionately higher number of index admissions. However, we also acknowledge further limitations. One was the exclusion of psychiatric, rehabilitation, emergency department and intensive care settings. We set this criterion to maximize comparability across study settings. Similarly, by excluding studies focussed only on adverse drug events, we avoided skewing AE rates based on single-event results. Despite their benefits, both decisions reduced the final sample size. Also, although we consider the identification and labelling of adverse events vital, we chose not to address either the types of AEs or their severity. Furthermore, we did not analyse the influence of reported conflicts of interest or funding in the included studies, which could further explain some of the variation. For the future, we also acknowledge that registration of the review protocol in an open access repository is necessary. Still, the most important limitation is the high level of unreported information, which hampered a full appreciation of the findings.
The data did not allow us to run multivariable models in a meaningful manner, so all findings from univariable analyses need to be interpreted with caution, as we cannot exclude that some of the observed associations, such as the effect of type of hospital, are confounded. For future studies on AEs via retrospective medical record review, irrespective of the detection methods used, the certainty of the evidence base would benefit from the standard use of a dedicated reporting guideline. Such a guideline is currently lacking for the type of studies included here.

Conclusion

Based on our analyses of 54 studies using TT methodology, we found an overall incidence of 30.0 AEs per 100 admissions, affecting 26% of patients. Of these AEs, we estimated that 63% were preventable, indicating a high potential to improve patient safety. However, incomplete reporting and high levels of statistical heterogeneity limit these estimates' reliability. Of the nine TT study characteristics evaluated, our analyses indicate that eight explained part of the wide variation in AE incidence estimates. For four of those (type of specialty, length of stay, timeframe of AE detection, and commission and omission), most of the variation was driven by the not reported category. For two characteristics (timeframe of AE detection, commission and omission), studies failed to report the methodological information in 69% and 82% of cases, respectively. To enhance comparability, and because the reporting of TT studies clearly needs improvement, we recommend the development and implementation of a reporting checklist, accompanied by a guidance document, specifically for studies using retrospective medical record review methods for AE detection.

PRISMA 2020 checklist.

(DOCX)

Quality assessment tool template.

(PDF)

Assessments of risk of bias and applicability-related concerns.

(PDF)

Forest plot of AEs per 1000 patient days.

* = pooled estimate, • = mean estimate, ‡ = calculated total number of AEs, ~ = calculated total number of patient days [5, 15, 17–22, 34, 37, 39, 40, 45, 46, 50–52, 54, 57, 58, 60, 62–65, 67, 68, 72, 73, 76–79, 82, 84–87, 89–91, 93, 95, 96, 98–100, 102]. (TIF)

Forest plot percentage of admissions with at least one adverse event (AE).

CI, confidence interval; * = pooled estimate, • = mean estimate, + = calculated total number of admissions with ≥ 1 AE [5, 9, 14, 15, 17–22, 24, 34, 37, 39, 45, 46, 50–58, 60–68, 70, 72–87, 89–94, 96–101]. (TIF)

Forest plot percentage of preventable adverse events (AEs).

CI, confidence interval; * = pooled estimate, • = mean estimate, ¢ = calculated number of preventable AEs [15, 17–20, 34, 37–39, 46, 50, 51, 53, 59, 63–67, 71–75, 77, 78, 87, 89–91, 96–98, 100, 101]. (TIF)

Funnel plot for AEs per 100 admissions [5, 10, 15, 17–22, 34, 37–39, 45, 46, 50–54, 56–69, 71–79, 82–91, 93, 95–102].

(TIF)

Forest plot with stratified analysis of the risk of bias and applicability-related concerns.

AE, adverse events; N studies, number of studies; CI, confidence interval [5, 10, 15, 17–22, 34, 37–39, 45, 46, 50–54, 56–69, 71–79, 82–91, 93, 95–102]. (TIF)

13 May 2022
PONE-D-21-40420
Variation in Detected Adverse Events using Trigger Tools: A Systematic Review and Meta-Analysis
PLOS ONE. Dear Dr. Simon, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Jun 26 2022.
Kind regards, Mojtaba Vaismoradi, Academic Editor, PLOS ONE.
Reviewers' comments: 1. Is the manuscript technically sound, and do the data support the conclusions? Reviewer #1: Partly. Reviewer #2: Yes. 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: I Don't Know. Reviewer #2: Yes. 3. Have the authors made all data underlying the findings in their manuscript fully available? Reviewer #1: Yes. Reviewer #2: Yes. 4. Is the manuscript presented in an intelligible fashion and written in standard English? Reviewer #1: No. Reviewer #2: Yes. 5. Review Comments to the Author.
Reviewer #1: This is a systematic review and meta-analysis of 48 studies investigating the use of Trigger Tools for the assessment of adverse events in medical record review and estimating the rate of adverse events per 100 admissions and several subgroups based on patient characteristics. The abstract does not adhere to PRISMA 2020 abstract, the method section does not adhere to PRISMA 2020 and the results section is difficult to follow as many results and analyses are reported. Furthermore, the last date of search is more than 12 months ago. To increase the readability and transparency of the reporting, PRISMA 2020 should be followed and the results section revised. Please see specific comments below. Major: 1. The overall message of the study is difficult to follow; you report many results and subgroups(?) and these are not specified in the method section. Could you rearrange the results section with subheadings or omit some of the analyses to guide the reader? 2. The date of search is difficult to find, and it seems that the date of the last search is >1.5–2 years ago. The search should be updated. 3. You state in the method section that PRISMA 2020 was identified but several items and the flow diagram from PRISMA 2020 are missing. I have listed some below in minor revisions but I recommend that you upload a PRISMA 2020 checklist stating where each item can be located. Minor: 1. Acute care or acute-care? Please uniform. 2. Incidence or prevalence? 3. Abstract: please add the eligibility criteria on language and exclusion criteria that you describe in the method section. 4. Abstract: Please provide the date last searched (PRISMA 2020 for abstracts checklist, Item 4: https://www.equator-network.org/reporting-guidelines/prisma-abstracts/) 5.
Abstract: Please describe methods to assess risk of bias (PRISMA 2020 for abstracts checklist, Item 5: https://www.equator-network.org/reporting-guidelines/prisma-abstracts/) 6. Abstract: Please report I^2. 7. Abstract: I do not understand the results, could you simplify? Several terms have not been introduced: e.g. applicability-related concerns, commission and omission, reviewers' level of experience, the evidence on the remainder. 8. Abstract: Please provide details on registration and funding (PRISMA 2020 for abstracts checklist, Items 11+12: https://www.equator-network.org/reporting-guidelines/prisma-abstracts/) 9. Your REF 2 is the Trigger Tool – would ICH GCP not be better suited? 10. Consider using the term from the cited reference [8] "medical record review" rather than "record review" throughout your article. 11. Introduction: Please revise sentence and commas for: "Record review uses available data [8], was found to identify more AEs when compared with many other methods [9, 10], can be repeated over time and can target specific AE types, or the overall AE rate [11]." 12. Introduction: Please correct: "A "trigger" (or clue) consists either of specific wording or an event in a medical record that could indicate the occurrence of an AE, e.g., readmissions within 30 days or pressure ulcers [2]." 13. Methods: Please revise: "Design Systematic review and meta-analyses [27]." so that it reflects that you reported according to PRISMA 2020 [27]. 14. Methods: Should the subheading "data sources" rather be "information sources" (PRISMA 2020, item 6)? 15. Methods: Your specific search strategy is difficult to follow: a. You write that "The medical subject headings (MeSH) and keywords for titles and abstracts" – was your search limited to title and abstract or were all fields searched (PRISMA 2020, item 7)? b. Was "medical error" combined with AND or put in quotation marks? c. Which of your search terms were MeSH terms?
How were these translated from PubMed to the other databases? d. Please provide the full (and specific) search strategy for each database as recommended by PRISMA 2020, item 7. e. You first state that "Our search strategy was developed and validated using methods suggested by Hausner et al. [28, 29]. This involves generating a test set, developing and validating a search strategy and documenting the strategy using a standardized approach [29]" but later that "The detailed search strategy used for this review is that described by Musy et al. [26]." – did you or reference [26] develop the search strategy and apply the methods? 16. Methods: Please provide the date of last search (PRISMA 2020, item 6); if the date of last search is >12 months ago I recommend that you update the search. 17. Methods: From your flow diagram, it seems that you have a restriction on the search date (2015 and onwards); please report this (PRISMA 2020, item 6). 18. Methods: Were titles and abstracts screened by one researcher or two researchers independently? Please specify in the manuscript. 19. Please add details on the data collection process, PRISMA 2020, item 9. 20. Please add information on PRISMA 2020, item 10b. 21. Methods: Why did you have to invent a new bias assessment tool? 22. Methods: How was heterogeneity measured, and which cut-offs did you use? 23. Methods: Your approach "Because R's metaprop function does not accept proportions exceeding 100%, we adapted results of four studies where the number of AEs exceeded the number of patient admissions. To reduce oversized values to less than 100 AEs per 100 admissions, we reduced the number of AEs detected to one less than the number of admissions (e.g., for a patient group of 240 with 336 AEs, we entered 239 AEs)." – can you provide a reference for this? 24. Results: Dates should be reported in methods. 25. Results: Please help me understand your flow diagram – the layout of PRISMA 2020 has not been used.
Did you use automatic tools for the screening and exclusion of the 4531 non-trigger tool records? Please specify in the method section if you did and only screened 406 titles/abstracts independently. Only full-text exclusion reasons must be explained in detail. 26. Results: Please correct: "The reviewed studies were all published between 2009 and 2020" to "included". 27. Results: Please uniform: "Overall, we included 192,316 index admissions in our report" – in the abstract these are described as patients; which is more correct? 28. Results: Which types of studies were included in the review? Cohort studies, RCTs? This is not described in the method section or Table 1. 29. Results: There are a lot of results reported in this section – and a lot of analyses. The section is difficult to follow and not all subgroups are evident from the method section. Could you omit some analyses or add some aiding subheadings? 30. Please add PRISMA 2020, item 22. 31. Discussion: Please provide a key findings paragraph at the beginning of the discussion section with the key findings of your study without references to other studies. 32. Discussion: Please expand your limitations section. 33. Did you analyse conflict of interest and funding of the included studies and account for that in the analyses? Reviewer #2: TITLE The title is clear with enough detail for the reader to know what to expect. RELEVANCE AND ORIGINALITY Adverse events are an ongoing occurrence in the health landscape and the mechanism of identifying and reporting adverse events is not consistent across or between countries. This review is relevant as it provides an argument (using a recognised high quality and rigorous approach, i.e., a systematic review and meta-analysis) for the need to address this inconsistency with clearer reporting guidance. AUTHENTICITY AND REFERENCING The manuscript appears to be the work of the authors with appropriate attribution to the work of others both in text and in the reference list.
ABSTRACT/INTRODUCTION The abstract is comprehensive and an accurate reflection of the manuscript. The introduction is brief yet provides enough information from the literature to support the need for the review. In addition, key terms (e.g., ‘global trigger tool’, ‘trigger’) are explained and operationalised for the review. The introduction leads logically to the gap in the literature and the aims of the study. AIMS Dual aims are clear. METHODOLOGY The methodology is well described and replicable, apart from a few queries: • One evidence source searched, ‘all authors’ personal libraries’ (line 117), is not defined or described. Are the authors referring to self-authored publications or simply publications amassed in personal collections? If the former, then these papers would presumably be indexed in one of the other databases searched. If the latter, then it renders the search not replicable. Removal of ‘personal libraries’ or an explanation for its inclusion might address any concerns raised by its inclusion. • Similarly, an explanation for the choice of the three journals hand searched would allay any concerns of bias in the search strategy. Re eligibility criteria – point #3 is “…acute care (including elective admissions) hospital settings…” (line 122). I would think elective admissions are inherently a part of an acute care cohort. Did the authors mean ‘emergency admissions’? In either case, this could be clarified. Table 1 is particularly helpful. RESULTS Results are comprehensive and well organised. TABLES AND FIGURES Tables and figures are well presented and do not replicate information in text. DISCUSSION/CONCLUSION The discussion is supported by the findings and the findings are situated within the current body of evidence on the topic. Recommendations for future practice related to adverse events and future research reporting on adverse events, albeit very brief (e.g., one sentence each), logically derive from the findings and discussion.
OTHER COMMENTS Use of a reporting guideline is not evident but is a conventional expectation. The authors might consider adding reference to this in some way. WRITING STYLE The writing style is academically sound and easy to read. SCHOLARLY APPROACH The authors have used a scholarly approach that begins with a clearly stated premise so that compelling arguments can be presented and supported with up-to-date literature, including empirical research evidence. Providing more critique of the studies cited in the introduction and in the discussion would elevate this further. OVERALL COMMENTS My comments have been provided in the spirit of collegiality to hopefully assist the authors in further preparing their manuscript for publication. I commend the authors on this high-quality report of their systematic review and meta-analysis. Do you want your identity to be public for this peer review? Reviewer #1: Yes: Siv Fonnes. Reviewer #2: Yes: Sonya R Osborne.
Submitted filename: PONE-D-21-40420-adverse events SR-reviewed.pdf. 24 Jun 2022. We appreciate the opportunity to address the very helpful reviewers' comments and revise our manuscript. Below, please find item-by-item responses to the Reviewers' comments, which are included verbatim. All page and paragraph numbers refer to locations in the revised manuscript. Submitted filename: Response to Reviewers.docx. 16 Aug 2022. Variation in Detected Adverse Events using Trigger Tools: A Systematic Review and Meta-Analysis. PONE-D-21-40420R1. Dear Dr. Simon, We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.
If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Prof. Mojtaba Vaismoradi, PhD, MScN, BScN
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository.
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: Thank you for your comprehensive work on revising and improving your systematic review and meta-analysis. The reporting according to PRISMA 2020 has improved the readability and transparency of the manuscript. The revision of the results section and the key findings paragraph in the discussion section has made the message and the results of your study easier to understand and follow. Congratulations on your comprehensive and hard work.

Reviewer #2: The authors have addressed all of my comments in the revision. I acknowledge the data has been updated in light of an updated search.

**********

23 Aug 2022

PONE-D-21-40420R1
Variation in detected adverse events using trigger tools: A systematic review and meta-analysis

Dear Dr. Simon:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.
If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Professor Mojtaba Vaismoradi
Academic Editor
PLOS ONE
