
Reliability and validity of multicentre surveillance of surgical site infections after colorectal surgery.

Janneke D M Verberk1,2,3, Stephanie M van Rooden4, David J Hetem5, Herman F Wunderink6, Anne L M Vlek7, Corianne Meijer8, Eva A H van Ravensbergen9, Elisabeth G W Huijskens10, Saara J Vainio11, Marc J M Bonten6,4, Maaike S M van Mourik6.   

Abstract

BACKGROUND: Surveillance is the cornerstone of surgical site infection (SSI) prevention programs. Valid data collection and awareness of vulnerability to inter-rater variation are crucial for correct interpretation and use of surveillance data. The aim of this study was to investigate the reliability and validity of SSI surveillance after colorectal surgery in the Netherlands.
METHODS: In this multicentre prospective observational study, seven Dutch hospitals performed SSI surveillance after colorectal surgeries performed in 2018 and/or 2019. Alongside the surveillance, a local case assessment was performed to calculate the overall percentage agreement between raters within hospitals. Additionally, two case-vignette assessments were performed to estimate intra-rater and inter-rater reliability by calculating weighted Cohen's Kappa and Fleiss' Kappa coefficients. To estimate validity, the answers to the two case-vignette questionnaires were compared with those of an external medical panel.
RESULTS: 1111 colorectal surgeries were included in this study, with an overall SSI incidence of 8.8% (n = 98). From the local case assessment, the overall percent agreement between raters within a hospital was estimated to be good (mean 95%, range 90-100%). Cohen's Kappa estimates for the intra-rater reliability of case-vignette review varied from 0.73 to 1.00, indicating substantial to perfect agreement. The inter-rater reliability within hospitals showed more variation, with Kappa estimates ranging between 0.61 and 0.94. In total, 87.9% of the answers given by the raters were in accordance with the medical panel.
CONCLUSIONS: This study showed that raters were consistent in their SSI-ascertainment (good reliability), but improvements can be made regarding the accuracy (moderate validity). Accuracy of surveillance may be improved by providing regular training, adapting definitions to reduce subjectivity, and by supporting surveillance through automation.
© 2022. The Author(s).

Keywords:  Colorectal surgery; Epidemiology; Infection prevention; Inter-rater reliability; Surgical site infection; Surveillance

Year:  2022        PMID: 35063009      PMCID: PMC8780777          DOI: 10.1186/s13756-022-01050-w

Source DB:  PubMed          Journal:  Antimicrob Resist Infect Control        ISSN: 2047-2994            Impact factor:   4.887


Introduction

Surgical site infections (SSI) are among the most common healthcare-associated infections (HAI) [1], and are associated with substantial morbidity and mortality, increased length of hospital stay and costs [2-6]. The highest SSI incidences are reported after colorectal surgeries, possibly due to the risk of (intra-operative) bacterial contamination and post-operative complications [7-9]. Worldwide, incidence rates range from 5 to 30% and are affected by several risk factors, including the type of surgery, age, sex, underlying health status, diabetes mellitus, blood transfusion, ostomy creation, prophylactic antibiotic use [10-12] and the definition used to identify SSIs [4, 13]. Surveillance is an important component of prevention initiatives, and most surveillance programs include colorectal surgeries [14]. Large variability in SSI rates between centres remains, even after correction for factors that increase the risk of SSI. Previous studies reported significant variability in surveillance methodology and in inter-rater agreement, introducing uncertainty as to whether observed differences in colorectal SSI rates reflect real differences in hospital performance [15-21]. Comparing SSI rates between hospitals requires strict adherence to standardized surveillance protocols. Furthermore, case definitions should be unambiguous to avoid subjective interpretation. To reduce subjectivity, the Dutch national surveillance network (PREZIES) has modified the case definition on two criteria compared with the definitions set out by the US Centers for Disease Control and Prevention and the European Centre for Disease Prevention and Control ((E)CDC) [22-25]. First, a diagnosis of SSI made solely by a surgeon or attending physician is not part of the Dutch definitions.
Second, in case of anastomotic leakage or bowel perforation, a deep or organ-space SSI can only be scored based on purulent drainage from the deep incision, or when an abscess or other evidence of infection involving the deep soft tissues is found on direct examination; a positive culture obtained from the (deep) tissue does not qualify in case of anastomotic leakage. Moreover, to increase standardization, the Dutch surveillance only includes primary resections of the large bowel and rectum, in contrast to the (E)CDC, which also allows biopsy procedures, incisions, colostomies and secondary resections. Awareness of how correctly the definition is applied and of vulnerability to inter-rater variation is crucial for correct interpretation and use of surveillance data. The aim of this study was to investigate the reliability and validity of SSI surveillance after colorectal surgery using the Dutch (PREZIES) SSI definitions and protocol. Secondary aims were to report the accuracy of determining anastomotic leakage and to provide insight into SSI incidence and epidemiology in the Netherlands.

Methods

Study design

In this multicentre prospective observational study, seven Dutch hospitals (academic (tertiary referral university hospital), n = 2; teaching, n = 3; general, n = 2) collected surveillance data on the occurrence of SSI after colorectal surgeries performed in 2018 and/or 2019, according to the Dutch PREZIES surveillance protocol [23, 25, 26]. Three hospitals had no prior experience with SSI surveillance after colorectal surgeries; four hospitals had already performed this surveillance for more than five years as part of their quality program. Participation in SSI surveillance after colorectal surgery is voluntary, hence not all hospitals include it in their surveillance programme. In addition to the surveillance itself, intra- and inter-rater reliability and validity were determined by two case-vignette assessments and a local case assessment. Reliability refers to the consistency and reproducibility of SSI-ascertainment and was determined by three agreement measures: (1) intra-rater reliability, reflecting agreement within a single rater over time; (2) inter-rater reliability, the agreement between two raters within one hospital; and (3) overall inter-rater reliability between all 14 raters of the seven hospitals [27, 28]. Validity refers to how accurately the surveillance definition is applied and was determined by the correctness of ascertainment compared with a medical panel, as described in detail below. The Medical Ethical Committee of the University Medical Centre Utrecht approved this study and waived the requirement of informed consent (reference number 19–493/C). All data were processed in accordance with the General Data Protection Regulation. Hospitals were randomly assigned the letters A–G for reporting of the results.

SSI surveillance after colorectal surgery

All hospitals included all primary resections of the large bowel and rectum performed in 2018 and/or 2019 in patients above the age of 1 year. Per hospital, two raters, mostly infection control practitioners (ICPs), manually and retrospectively reviewed the electronic medical records for all included procedures and classified each into three categories: (1) no SSI, (2) superficial SSI or (3) deep or organ-space SSI within a follow-up period of 30 days post-surgery. SSIs were registered in each hospital's own surveillance registration system. All identified SSIs and questionable cases were validated and discussed with each facility's medical microbiologist or surgeon after completion of the assessments described below.

Case-vignette assessment

Case-vignettes were used to assess validity and intra- and inter-rater reliability. Four medical doctors developed standardised case-vignettes in Dutch, based on 20 patients selected from a previous study [29]. Each vignette described demographics, medical history, type of surgical procedure and the postoperative course. An external medical panel of seven experts in the field of colorectal surgery and surveillance classified the case-vignettes as superficial SSI, deep SSI or no SSI according to the Dutch SSI definition, and indicated presence or absence of anastomotic leakage. Their conclusion was considered the reference standard. Each rater who performed surveillance completed the case-vignettes individually through an online questionnaire. Three months later, the same vignettes were judged once more by the same raters, presented in a different random order.

Local case assessment

The reliability of surveillance data also depends on the ability to find the information necessary for case-ascertainment in the medical records. As this is not measured by the case-vignettes, we additionally performed a local case assessment: within each hospital, 25 consecutive colorectal surgeries included in surveillance were scored independently by the two raters, on separate digital personal forms. After sending the completed forms to the research team, raters discussed the results and entered the final decision into their hospital’s surveillance registration system.
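The overall percent agreement used in the local case assessment is a simple statistic. As an illustrative sketch (the study's analyses were done in R, so this Python version and the example ratings are assumptions, not the authors' code):

```python
# Illustrative sketch, not the study's code: overall percent agreement
# between two raters classifying the same set of surgeries.
def percent_agreement(rater1, rater2):
    """Percentage of cases on which both raters gave the same classification."""
    if len(rater1) != len(rater2) or not rater1:
        raise ValueError("ratings must be non-empty and equal in length")
    agree = sum(a == b for a, b in zip(rater1, rater2))
    return 100.0 * agree / len(rater1)

# Hypothetical classifications for 25 consecutive surgeries
# (0 = no SSI, 1 = superficial SSI, 2 = deep/organ-space SSI).
r1 = [0] * 20 + [1, 1, 2, 2, 0]
r2 = [0] * 20 + [1, 2, 2, 2, 0]
print(percent_agreement(r1, r2))  # one disagreement in 25 cases -> 96.0
```

Note that percent agreement, unlike Kappa, does not correct for chance agreement, which is why the case-vignette assessments additionally used Kappa statistics.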

Training

Before starting the surveillance activities, a training session was organized to ensure the quality of the data collection and to practice SSI case-ascertainment. In addition, before starting the reliability assessments, each ICP had to complete at least 20 inclusions for surveillance to ensure familiarity with the surveillance procedure. The research team was available to assist with any questions.

Statistical analyses

Descriptive statistics were generated to describe the surveillance period, number of inclusions and epidemiology. The number of SSIs per hospital was reported and displayed in funnel plots. The primary outcomes of this study were the reliability and validity of the surveillance. From the case-vignette assessments, the intra-rater and inter-rater reliability were analysed by calculating a weighted Cohen's Kappa coefficient (κ). The scale used to interpret the κ estimates was: ≤ 0, no agreement; 0.01–0.20, slight agreement; 0.21–0.40, fair agreement; 0.41–0.60, moderate agreement; 0.61–0.80, substantial agreement; 0.81–1.00, almost perfect agreement [27]. For the inter-rater reliability within a hospital, we used the second questionnaire round of the case-vignettes to account for a possible learning curve over time. The overall inter-rater reliability among all 14 raters was estimated using a weighted Fleiss' Kappa. For all Kappa estimates, 95% confidence intervals were obtained using bootstrapping (1000 repetitions). Inter-rater reliability was also measured from the local case assessment, from which the overall percentage agreement was calculated per hospital. Validity was determined by comparing the answers to the two case-vignette questionnaires with those of the medical panel. The same comparison was performed to investigate the accuracy of determining anastomotic leakage. Analyses were performed with R version 3.6.1 (R Foundation for Statistical Computing, Vienna, Austria) [30], using the irr package [31] for inter-rater reliability and the boot package [32] for bootstrapping.
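The core computation, weighted Cohen's Kappa with a bootstrap confidence interval, can be sketched as follows. This is an illustrative translation, not the study's code: the study used R's irr and boot packages, and the linear weighting scheme and example ratings below are assumptions.

```python
import random

# Ordinal categories are coded 0 = no SSI, 1 = superficial SSI, 2 = deep SSI.

def weighted_kappa(r1, r2, k=3):
    """Linearly weighted Cohen's kappa for two raters over codes 0..k-1."""
    n = len(r1)
    obs = [[0.0] * k for _ in range(k)]          # observed joint proportions
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    p1 = [sum(row) for row in obs]               # rater-1 marginals
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater-2 marginals
    w = lambda i, j: abs(i - j)                  # linear disagreement weight
    num = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w(i, j) * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 if den == 0 else 1.0 - num / den  # den == 0: both raters constant

def bootstrap_ci(r1, r2, reps=1000, seed=1):
    """95% percentile confidence interval by resampling cases with replacement."""
    rng = random.Random(seed)
    n = len(r1)
    stats = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(weighted_kappa([r1[i] for i in idx], [r2[i] for i in idx]))
    stats.sort()
    return stats[int(0.025 * reps)], stats[int(0.975 * reps)]

# Hypothetical ratings of 20 case-vignettes by one rater at two time points.
round1 = [0, 0, 1, 2, 0, 1, 2, 2, 0, 1, 0, 2, 1, 0, 0, 2, 1, 0, 2, 1]
round2 = [0, 0, 1, 2, 0, 1, 2, 1, 0, 1, 0, 2, 1, 0, 0, 2, 2, 0, 2, 1]
kappa = weighted_kappa(round1, round2)
lo, hi = bootstrap_ci(round1, round2)
print(f"kappa = {kappa:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

With only 20 vignettes the bootstrap intervals are wide, which matches the broad confidence intervals reported in Table 3.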

Results

Epidemiology

1111 colorectal surgeries were included in the surveillance, the majority right-sided hemicolectomies (n = 445, 40.1%). The overall incidence of SSI was 8.8% (n = 98); 46.9% were superficial SSIs (n = 46) and 53.1% deep SSIs (n = 52). Anastomotic leakage was present in 23 deep SSIs (44.2%). Table 1 provides an overview of the cumulative incidence of SSIs per hospital and Fig. 1 displays the incidence of SSIs taking into account the number of surgical procedures. SSIs were observed more frequently in open than in laparoscopic procedures, with the highest SSI incidence in open sigmoid colectomies (19.4%), followed by open left hemicolectomies, open right hemicolectomies and open low anterior resections (17.5%, 11.0% and 9.6%, respectively). Other risk factors are shown in Table 2.
Table 1

Overview of colorectal surgeries and number of SSIs per participating hospital

Hospital    | Type of hospital | Surveillance period | Colorectal surgeries (n) | Superficial SSI (n, %) | Deep SSI (n, %) | Total SSIs (n, %)
Hospital A  | General          | 2019                | 221  | 1 (0.5%)   | 9 (4.1%)   | 10 (4.5%)
Hospital B  | Teaching         | 2019                | 205  | 10 (4.9%)  | 7 (3.4%)   | 17 (8.3%)
Hospital C  | General          | 2019                | 148  | 4 (2.7%)   | 3 (2.0%)   | 7 (4.7%)
Hospital D  | Academic         | 2018–2019           | 84   | 4 (4.8%)   | 8 (9.5%)   | 12 (14.3%)
Hospital E* | Teaching         | 2019a               | 144  | 3 (2.1%)   | 9 (6.3%)   | 12 (8.3%)
Hospital F* | Teaching         | 2019a               | 142  | 12 (8.5%)  | 11 (7.7%)  | 23 (16.2%)
Hospital G* | Academic         | 2018–2019a          | 167  | 12 (7.2%)  | 5 (3.0%)   | 17 (10.2%)
Total       |                  |                     | 1111 | 46 (4.1%)  | 52 (4.7%)  | 98 (8.8%)

SSI, surgical site infection; n, number

*Hospitals that started surveillance for the purpose of this study

a: January–June 2019

Fig. 1

Overview of SSI incidence per hospital accounting for the number of surgical procedures. The black dotted line shows the mean incidence rate, the grey curved lines are the corresponding 95% confidence interval. a Overview of all SSIs per hospital. b Overview of superficial SSIs per hospital. c Overview of deep SSIs per hospital

Table 2

Baseline characteristics and risk factors of patients who underwent a primary colorectal surgery

Characteristic | No SSI (n = 1013) | Superficial SSI (n = 46) | Deep SSI (n = 52)
Sex (n, %)
  Male | 506 (50.0) | 29 (63.0) | 31 (59.6)
  Female | 507 (50.0) | 17 (37.0) | 21 (40.4)
Age in years (mean, SD) | 65.7 (13.7) | 61.8 (15.0) | 63.2 (15.4)
Pre-operative risk factors
  BMI (mean, SD) | 26.1 (4.6) | 27.0 (4.8) | 27.6 (7.0)
    Missing (n, %) | 29 (2.9) | 2 (4.3) | 2 (3.8)
  ASA grade (n, %)
    Grade I | 94 (9.3) | 5 (10.9) | 3 (5.8)
    Grade II | 542 (53.5) | 20 (43.5) | 24 (46.2)
    Grade III | 289 (28.5) | 12 (26.1) | 17 (32.7)
    Grade IV | 43 (4.2) | 5 (10.9) | 2 (3.8)
    Grade V | 7 (0.7) | - | -
    Missing (n, %) | 38 (3.8) | 4 (8.6) | 6 (11.5)
Procedure-related risk factors
  Type of surgery (n, %)
    Right hemicolectomy, closed procedure | 285 (28.1) | 9 (19.6) | 6 (11.5)
    Right hemicolectomy, open procedure | 129 (12.7) | 6 (13.0) | 10 (19.3)
    Left hemicolectomy, closed procedure | 72 (7.1) | 1 (2.2) | 5 (9.6)
    Left hemicolectomy, open procedure | 33 (3.3) | 3 (6.5) | 4 (7.7)
    Sigmoid colectomy, closed procedure | 171 (16.9) | 2 (4.3) | 5 (9.6)
    Sigmoid colectomy, open procedure | 108 (10.7) | 17 (37.0) | 9 (17.3)
    Low anterior colectomy, closed procedure | 168 (16.6) | 4 (8.7) | 12 (23.1)
    Low anterior colectomy, open procedure | 47 (4.6) | 4 (8.7) | 1 (1.9)
  Surgical approach (n, %)
    Closed | 696 (68.7) | 16 (34.8) | 28 (53.8)
    Open | 317 (31.3) | 30 (65.2) | 24 (46.2)
  Duration of surgery in minutes (median, IQR)a | 132 (68) | 143 (64) | 137 (56)
    Missing (n, %) | 11 (1.1) | - | -
  Emergency (n, %)b
    Yes | 124 (18.8) | 13 (48.1) | 12 (40.0)
    No | 528 (80.1) | 14 (51.9) | 18 (60.0)
    Missing (n, %) | 7 (1.1) | - | -
  Wound class (n, %)c
    Clean-contaminated (class 2) | 724 (81.0) | 20 (58.8) | 26 (63.4)
    Contaminated (class 3) | 104 (11.6) | 2 (5.9) | 7 (17.1)
    Dirty-infected (class 4) | 65 (7.3) | 11 (32.4) | 8 (19.5)
    Missing (n, %) | 1 (0.1) | 1 (2.9) | -
  Malignancy (n, %)
    Yes | 695 (68.6) | 24 (52.2) | 33 (63.5)
    No | 243 (24.0) | 20 (43.5) | 16 (30.8)
    Missing (n, %) | 75 (7.4) | 2 (4.3) | 3 (5.8)
  Stoma (n, %)
    Yes | 233 (23.0) | 28 (60.9) | 22 (42.3)
    No | 780 (77.0) | 18 (39.1) | 30 (57.7)
Post-operative risk factors
  30-day mortality (n, %)d
    Yes | 28 (3.8) | 1 (3.2) | 4 (10.5)
    No | 703 (96.2) | 30 (96.8) | 34 (89.5)
  ICU admission (n, %)e
    Yes | 162 (24.6) | 11 (40.7) | 16 (53.3)
    No | 497 (75.4) | 16 (59.3) | 14 (46.7)
Microbiology
  Microorganism (n, %)
    No microorganism identified or no culture taken | NA | 28 (60.9) | 15 (28.8)
    Positive culturef | NA | 18 (39.1) | 37 (71.2)
      Escherichia coli | | 6 (25.0) | 20 (31.3)
      Enterococcus faecalis | | 2 (8.3) | 7 (10.9)
      Enterococcus faecium | | 3 (12.5) | 6 (9.3)
      Pseudomonas aeruginosa | | 5 (20.8) | 6 (9.3)
      Klebsiella pneumoniae | | 1 (4.2) | 4 (6.3)
      Staphylococcus aureus | | 2 (8.3) | 0 (0.0)
      Other | | 5 (20.9) | 21 (32.9)

SSI, surgical site infection; n, number; SD, standard deviation; BMI, body mass index; ASA, American Society of Anaesthesiologists Physical Status; IQR, Interquartile range; ICU, Intensive Care Unit; NA, not applicable

aNot available for hospital F

bNot available for hospital D, E and G, so percentage was calculated without these hospitals

cNot available for hospital F, so percentage was calculated without this hospital

dNot available for hospital E and G, so percentage was calculated excluding these hospitals

eNot available for hospital D, E and G, so percentage was calculated excluding these hospitals

fPercentage was calculated relative to the total number of cultured microorganisms


Reliability and validity

All 14 raters completed both rounds of the online case-vignette questionnaire. Two had less than one year of experience with HAI surveillance, six had 2–5 years, five had 6–15 years and one had more than 25 years. The Cohen's Kappa estimates for agreement within a rater (intra-rater reliability), calculated from the case-vignette assessment, varied from 0.73 to 1.00, indicating substantial to perfect agreement (Table 3). The inter-rater reliability within hospitals showed more variation, with the lowest estimate in hospital A (κ = 0.61, 95%-CI 0.23–0.83) and the highest in hospital C (κ = 0.94, 95%-CI 0.75–1.00). The overall inter-rater agreement of all 14 raters in the second round of case-vignettes was 0.72 (95%-CI 0.59–0.83). From the local case assessment, the overall percent agreement between raters within a hospital was almost perfect (mean = 95%, range 90–100%). Regarding the accuracy of determining SSIs, 87.9% (range 70–95%) of the answers given by the raters were in accordance with the medical panel: three raters had SSI rates similar to the medical panel, five underestimated the number of SSIs, four had higher SSI rates because of incorrect ascertainment, and two overestimated SSIs in the first round but underestimated them in the second. Presence of anastomotic leakage was accurately scored in the vignettes where it was present, but was misclassified in cases where it was absent (Table 3).
Table 3

Intra-rater and inter-rater reliability and accuracy measured in two questionnaire rounds of 20 case-vignettes each

Hospital | Rater | Experience (years) | Intra-rater reliability (κ, 95%-CI) | Inter-rater reliability per hospital (κ, 95%-CI)# | Accuracy (%, round 1/round 2) | Anastomotic leakage present, n = 4 (%, round 1/round 2) | Anastomotic leakage absent, n = 16 (%, round 1/round 2)
Hospital A  | Rater 1 | 4–5   | 0.78 (0.46–1.00) | 0.61 (0.23–0.83) | 95/85 | 75/100  | 93/87
Hospital A  | Rater 2 | 2–3   | 0.95 (0.74–1.00) |                  | 85/80 | 100/75  | 93/93
Hospital B  | Rater 1 | 11–15 | 0.83 (0.49–0.99) | 0.72 (0.42–1.00) | 80/85 | 75/100  | 93/93
Hospital B  | Rater 2 | 6–10  | 0.73 (0.44–1.00) |                  | 95/90 | 100/100 | 93/93
Hospital C  | Rater 1 | 11–15 | 1.00 (1.00–1.00) | 0.94 (0.75–1.00) | 90/90 | 75/75   | 93/93
Hospital C  | Rater 2 | 11–15 | 0.94 (0.76–1.00) |                  | 90/95 | 75/75   | 93/93
Hospital D  | Rater 1 | 0–1   | 0.75 (0.47–1.00) | 0.69 (0.36–0.92) | 90/85 | 100/100 | 93/87
Hospital D  | Rater 2 | 4–5   | 0.89 (0.72–1.00) |                  | 90/95 | 100/100 | 93/87
Hospital E* | Rater 1 | 2–3   | 0.89 (0.59–1.00) | 0.65 (0.38–0.92) | 80/80 | 100/100 | 93/93
Hospital E* | Rater 2 | 4–5   | 0.73 (0.46–1.00) |                  | 85/70 | 100/100 | 93/81
Hospital F* | Rater 1 | 2–3   | 0.79 (0.57–1.00) | 0.69 (0.34–0.92) | 90/90 | 100/100 | 87/81
Hospital F* | Rater 2 | 11–15 | 0.89 (0.59–1.00) |                  | 90/90 | 100/100 | 87/87
Hospital G* | Rater 1 | 0–1   | 0.79 (0.55–1.00) | 0.84 (0.61–1.00) | 90/90 | 100/100 | 87/93
Hospital G* | Rater 2 | > 25  | 0.94 (0.75–1.00) |                  | 95/90 | 100/100 | 93/93

κ, Cohen’s Kappa coefficient; 95% CI, 95% confidence interval; n, number

*Hospitals that started surveillance for the purpose of this study

#Inter-rater reliability was calculated from the second round questionnaire case vignettes


Discussion

In this study we observed good reliability of SSI surveillance after colorectal surgeries in seven Dutch hospitals. Based on the case-vignette assessment, the intra-rater reliability was substantial to perfect (κ = 0.73–1.00) and the inter-rater agreement within hospitals was substantial, but varied between hospitals (κ = 0.61–0.94). The local case assessment showed 95% agreement within hospitals. Although individual raters were consistent in their scoring, validity was moderate: in 12.1% of cases (range 5–30%) the case-ascertainment did not match the conclusions of the medical panel. The SSI rate determined by surveillance would therefore be under- or overestimated. To the best of our knowledge, only one other study has assessed inter-rater reliability explicitly for SSI after colorectal surgeries. Hedrick et al. [18] concluded from their results that SSIs could not be reliably assigned and reproduced: they demonstrated large variation in SSI incidence between raters with only modest inter-rater reliability (κ = 0.64). They therefore advocated alternative definitions such as the ASEPSIS score [33]. In the present study, similar estimates for inter-rater reliability were found in two of seven hospitals (κ = 0.61 in hospital A and κ = 0.65 in hospital E); for the other five hospitals we found estimates above 0.69. The higher reliability estimates found in the present study may be explained by several factors. First, the definitions and methods used in the Netherlands aim to be more objective: a previous study showed that the surgeon's diagnosis, not included in the Dutch definition, leads to biased results [34, 35]. Another factor that may influence reliability is the raters' years of surveillance experience and their ability to find the information needed for case-ascertainment in the electronic health records [36]. Table 3 suggests that more experienced raters produce more consistent results.

However, the design of this study did not allow investigation of such causal relationships. The reliability estimates of this study show that SSIs after colorectal surgery are an appropriate measure for surveillance: the same result can be achieved consistently, making it reproducible and suitable for monitoring trends and detecting changes in SSI rates within a hospital. At this moment, however, using SSI incidence as a quality measure for benchmarking may be hampered for three reasons. First, we found that on average 12.1% of patients in the case-vignettes were misclassified: one rater misclassified 6 of 20 vignettes while another had only one misclassification. This leads to unreliable comparisons of SSI rates, although in practice difficult cases may be discussed in a team, improving accuracy. As superficial SSIs rely on more subjective criteria, focusing on deep SSI may improve accuracy and comparability. Additionally, we observed that anastomotic leakage was assigned too often when it was actually absent. This may lead to underestimation, as such cases can no longer be scored by a positive culture under the Dutch definition (as explained in the introduction). Second, Kao et al. [16] and Lawson et al. [15] investigated whether SSI surveillance after colorectal surgeries can differentiate high from low quality performance (i.e. the statistical reliability of SSIs). Both concluded that the measure can only be used as a hospital quality measure when an adequate number of cases has been reported, which can be challenging for some hospitals, as shown in Table 1. Third, another challenge in using SSI rates for interhospital comparisons is the lack of an adequate method for risk adjustment. To obtain valid SSI comparisons, one must correct for differences in the surveillance population and their risk factors; however, to date no method has been proven generalizable and appropriate [12, 37].

The points raised above show that the overall SSI incidence of 8.8% in this study is difficult to compare with others. Overall, the SSI incidence was lower than in other studies but in line with numbers previously reported to the Dutch national surveillance network [13, 38, 39]. When SSIs after colorectal surgery are used for monitoring and perhaps benchmarking, continuous training of raters is required to ensure correct use and alignment of surveillance definitions and methodology. Reliability and validity of surveillance may be improved by automation, which can help support case-finding [40-42]. Furthermore, hospitals should perform a sufficient number of colorectal surgeries to generate representative estimates of performance. Without appropriate case-mix correction, comparisons should be made with caution, preferably between similar types of hospitals with comparable patient groups.

Strengths and limitations

This study was performed in multiple Dutch centres, including different types of hospitals. The 14 raters were trained according to standardized methods to minimize differences between hospitals possibly caused by differing years of surveillance experience. A limitation is that this design was not suitable for explaining which factors enhance SSI-ascertainment or would improve reliability and validity estimates. Second, we aimed to compute Cohen's Kappa coefficients from the local case assessment as well, but there was too little variation in outcomes and too few cases to allow this calculation.

Conclusion

Awareness of the validity of surveillance and of vulnerability to inter-rater variation is crucial for correct interpretation and use of surveillance data. This study showed that raters were consistent in their SSI-ascertainment, but improvements can be made in accuracy. SSI surveillance results for colorectal surgery are therefore reproducible and suitable for monitoring trends, but not necessarily correct, and thus less adequate for benchmarking. Based on prior literature, accuracy of surveillance may be improved by providing regular training, adapting definitions to reduce subjectivity, and supporting case-finding through automation.
References (showing 10 of 34)

1.  The impact of surgical-site infections in the 1990s: attributable mortality, excess length of hospitalization, and extra costs.

Authors:  K B Kirkland; J P Briggs; S L Trivette; W E Wilkinson; D J Sexton
Journal:  Infect Control Hosp Epidemiol       Date:  1999-11       Impact factor: 3.254

2.  Risk factors for surgical site infection after elective resection of the colon and rectum: a single-center prospective study of 2,809 consecutive patients.

Authors:  R Tang; H H Chen; Y L Wang; C R Changchien; J S Chen; K C Hsu; J M Chiang; J Y Wang
Journal:  Ann Surg       Date:  2001-08       Impact factor: 12.969

3.  Effect of surgeon's diagnosis on surgical wound infection rates.

Authors:  G Taylor; M McKenzie; T Kirkland; R Wiens
Journal:  Am J Infect Control       Date:  1990-10       Impact factor: 2.918

4.  Failure of Colorectal Surgical Site Infection Predictive Models Applied to an Independent Dataset: Do They Add Value or Just Confusion?

Authors:  John R Bergquist; Cornelius A Thiels; David A Etzioni; Elizabeth B Habermann; Robert R Cima
Journal:  J Am Coll Surg       Date:  2016-01-14       Impact factor: 6.113

5.  An evaluation of surgical site infection surveillance methods for colon surgery and hysterectomy in Colorado hospitals.

Authors:  Sara M Reese; Bryan C Knepper; Connie S Price; Heather L Young
Journal:  Infect Control Hosp Epidemiol       Date:  2015-03       Impact factor: 3.254

6.  Post-discharge surgical site infections after uncomplicated elective colorectal surgery: impact and risk factors. The experience of the VINCat Program.

Authors:  E Limón; E Shaw; J M Badia; M Piriz; R Escofet; F Gudiol; M Pujol
Journal:  J Hosp Infect       Date:  2013-12-01       Impact factor: 3.926

7.  Preventing surgical-site infections after colorectal surgery.

Authors:  Mao Hagihara; Mieko Suwa; Yumi Ito; Yuki Muramatsu; Yukiko Kato; Yuka Yamagishi; Hiroshige Mikamo
Journal:  J Infect Chemother       Date:  2011-09-09       Impact factor: 2.211

8.  Burden of surgical site infections in the Netherlands: cost analyses and disability-adjusted life years.

Authors:  M B G Koek; T I I van der Kooi; F C A Stigter; P T de Boer; B de Gier; T E M Hopmans; S C de Greeff
Journal:  J Hosp Infect       Date:  2019-07-19       Impact factor: 3.926

9.  Can we define surgical site infection accurately in colorectal surgery?

Authors:  Traci L Hedrick; Robert G Sawyer; Sara A Hennessy; Florence E Turrentine; Charles M Friel
Journal:  Surg Infect (Larchmt)       Date:  2014-05-08       Impact factor: 2.150

10.  Surgical wound infection as a performance indicator: agreement of common definitions of wound infection in 4773 patients.

Authors:  A P R Wilson; C Gibbons; B C Reeves; B Hodgson; M Liu; D Plummer; Z H Krukowski; J Bruce; J Wilson; A Pearson
Journal:  BMJ       Date:  2004-09-14
