
Implementation of Failure Mode and Effects Analysis to the specimens flow in a population-based colorectal cancer screening programme using immunochemical faecal occult blood tests: a quality improvement project in the Milan colorectal cancer screening programme.

Silvia Deandrea1,2, Enrica Tidone2, Aldo Bellini2, Luigi Bisanti2, Nico Gerardo Leonardo2, Anna Rita Silvestri2, Dario Consonni3.   

Abstract

BACKGROUND: A multidisciplinary working group applied the Healthcare Failure Mode and Effects Analysis (HFMEA) approach to the flow of kits and specimens for the first-level test of a colorectal cancer screening programme using immunochemical faecal occult blood tests.
METHODS: HFMEA comprised four steps: (1) identification and mapping of the process steps (subprocesses); (2) analysis of failure modes and calculation of the risk priority numbers (RPNs); (3) identification of corrective actions; and (4) follow-up and evaluation of corrective actions.
RESULTS: The team identified 9 main failure modes, 12 effects and 34 associated causes. RPN scores ranged from 2 to 96. Failure modes within the first five positions in the ranking list ordered by RPN concerned: 'degraded haemoglobin in the specimen', 'mixed-up kits' and 'anonymous specimen'. All of these could lead to false-negative results and/or subjects with positive tests not being recalled for assessment. The team planned corrective actions for those failure modes. As a result, the follow-up of corrective actions showed a significant decrease in the proportion of anonymous kits from 11.6 to 4.8 per 1000 (relative reduction of 59%). The HFMEA exercise led to a reduction in: missed positive tests; missed cancer and high-risk adenomas; complaints about the communication of test results to a person who never did the test; and false-negative results due either to haemoglobin degradation or an expired sampling tube.
CONCLUSIONS: HFMEA is a useful tool for reducing errors in colorectal cancer screening programmes using faecal occult blood tests and is characterised by a straightforward interpretation of results and ease of communication to healthcare managers and decision makers.

Keywords:  failure modes and effects analysis (fmea); healthcare quality improvement; quality improvement

Year:  2018        PMID: 29610774      PMCID: PMC5878255          DOI: 10.1136/bmjoq-2017-000299

Source DB:  PubMed          Journal:  BMJ Open Qual        ISSN: 2399-6641


Introduction

The Failure Mode and Effects Analysis (FMEA)1 2 is a technique for the proactive analysis of failure modes and their causes and effects, aimed at eliminating the possibility of unacceptable hazards and minimising the impact of unavoidable risks. The Healthcare FMEA (HFMEA)3 was introduced in 2001 by the US Department of Veterans Affairs National Center for Patient Safety, in response to the new proactive risk assessment requirement from The Joint Commission.4 Since then, several publications have reported improvements in the quality of services after an HFMEA in different healthcare areas, such as radiotherapy,5 nephrology,6 chemotherapy,7 surgery8 and medical laboratories,9 but not in population-based screening programmes for colorectal cancer. A population-based cancer screening programme10 implies that the target population is actively invited at each screening round to take the screening test and that any test-positive subjects are then referred for second-level assessment. The result is a very complex process, in which the plurality of care providers and actors, plus the significant number of citizens receiving the service each day, may produce a high likelihood of errors and mistakes. However, although the recording of adverse events in screening programmes is a routine monitoring activity, the literature on the proactive analysis of errors in screening is sparse. Federici et al11 performed an HFMEA in 12 mammography screening programmes in the Italian region of Lazio, covering several screening procedures (eg, invitation and screening mammography). The colorectal cancer screening programme organised by Milan’s local health authority (now the Health Protection Agency, Metropolitan Area of Milan)12 has been monitoring errors and incidents since it began its activities in 2005.
Every 2 years, around 400 000 citizens are invited to take part in the screening using an immunochemical faecal occult blood test (FIT), and around 75 000 tests are analysed by the laboratory each year. Around 2200 colonoscopies are performed in associated endoscopy centres on FIT-positive subjects referred by the screening programme. Although the number of adverse events after colonoscopy has always been below the recommended standard, and only one major event was reported in 2011, the programme’s management team was concerned by the number of errors related to the flow of FIT kits, bearing in mind that no acceptable or desirable standards for this measure have been set. In 2011, for instance, the number of anonymous (ie, not linked to a specific person) specimens was over 1000, and the Screening Communication Centre received 305 complaints (63% of the total) relating to lost kits and specimens, mix-ups and so on. Therefore, while the safety of the assessment process seemed to be under control, there was apparently room for improvement in the first-level test procedure. The decision to implement the HFMEA methodology was motivated by three characteristics of the process: (1) vulnerability to errors, demonstrated by data from the programme’s quality system and the monitoring of indicators; (2) complexity, as many actors are involved (approximately 700 people with different responsibilities within the process), employed by different entities (local health authority, pharmacies, wholesalers and so on) with different missions and priorities; and (3) high dependence on the human factor, as each person involved in the process must pay strict attention to performing their tasks and, in many cases, specific training and skills are required (eg, knowledge of the screening programme software).
The programme managers therefore decided that a method such as HFMEA, characterised by a multidisciplinary approach with a systemic focus on errors and their causes, would be a useful tool for improvement. The possibility of obtaining a list of failure modes graded according to the magnitude of risk, and hence easier to prioritise, was also considered an advantage of this methodology. The HFMEA results were used for a quality improvement project, the results of which are also presented in this paper. The project is reported according to the Standards for QUality Improvement Reporting Excellence (SQUIRE) checklist.13

Methods

Organisation of a colorectal screening programme

The HFMEA methodology was used in Milan’s population-based colorectal cancer screening programme, in which eligible people between the ages of 50 years and 69 years are invited to be screened every 2 years for colorectal cancer using FIT. Subjects receive an invitation letter to participate in the programme and to acquire a FIT kit at their local pharmacy. The pharmacist provides the kit, together with an informed consent form, associating the person’s unique identifier (ID) with their kit by means of a barcode scanner. FIT kits are regularly delivered to every pharmacy by the same wholesalers that are in charge of the supply of medication for sale. Once the test has been completed in the privacy of the person’s own home, the kit must be returned to the pharmacy which then sends it, via the wholesaler, to the laboratory associated with the screening programme (Milan’s public health laboratory). Returned sampling tubes are stored in refrigerated containers and tested within 1 week of collection. Analyses follow a completely automated procedure using the equipment provided by the manufacturers (OC-Sensor, Eiken (Tokyo, Japan) and NS-Plus, Alfresa Pharma (Japan)), depending on the company providing the tests at that time. During the study period (2011–2016), the threshold for positivity was set at 100 ng/mL haemoglobin for both tests. People are informed of a negative-test result by post. Positive results are communicated personally by a programme healthcare operator in a phone call, and positive subjects are referred for a colonoscopy. Participation in the programme is voluntary, and there is no cost for completing the test (first level) or the colonoscopy (second level). Subjects with negative FIT results are invited for a repeat screening after 2 years and to visit their general practitioner for any bowel complaints occurring in the interval between screenings.

Rolling out HFMEA

The HFMEA applied to the process was developed in five consecutive stages, following the methodology proposed by the Department of Veterans Affairs National Center for Patient Safety.3 This exercise took place in the second half of 2011.

Choice of process

The process on which this analysis focuses extends from the purchase of the FIT kit from the supplier to communication of the test result back to the user.

Establishment of a multidisciplinary team to conduct the analysis

The HFMEA team involved subject-matter experts in the process, that is, people who work for the programme on a daily basis, as well as people from outside the programme. Before starting the exercise, the experts carefully reviewed the programme’s risk management indicators, as well as feedback collected from programme users and other people involved in the process (eg, pharmacists and wholesale companies). This enabled them to provide a contribution informed by the relevant quantitative and qualitative information on the programme’s performance. The internal members of the team were:
- a health visitor and coordinator of the screening communication centre (ET)
- the person responsible for the screening programme’s IT system (NGL)
- a medical doctor trained in HFMEA, acting as facilitator and leader (SD)
- the director of the programme, as a member of the team with decision-making capacity in the process (LB, then ARS).
The team member who was not an expert in the process was a medical doctor working as an epidemiologist in another healthcare facility (DC).

Identification and mapping of the process steps

The team’s first activity involved breaking down the process into its subprocesses in terms of time and responsibility and presenting the output of this exercise in a flow chart. For reasons of presentation and improved readability, a simplified version of the HFMEA flow chart is shown in figure 1.
Figure 1

Flow chart of the selected processes and failure modes identified.

Analysis of failure modes

During a brainstorming session, the team identified ‘ways of error or failure’ (failure modes) for each of the subprocesses, namely all of the omissions or mistakes that could lead to failure. Following DeRosier et al,3 failure modes were operationally defined as ‘the different ways that a particular sub-process can fail to accomplish its intended purpose’. The team identified the potential causes and effects for each failure mode and arranged them in a worksheet, with each cause–failure–effect relationship shown in an individual record.

Calculation of the risk priority number (RPN)

For each failure mode, the team considered:
- the severity of its consequences (S)
- the frequency (or probability) of occurrence (P)
- the possibility of it being detected and intercepted before it occurs (D).
To each failure mode, the team assigned a numerical score proportional to its severity, probability and detectability. For severity and detectability, they adopted a four-level scale.3 The team rated error frequency according to the scale proposed by Federici et al,11 based on the expected occurrence out of the total number of screening tests carried out, as this was considered more appropriate within the context of a population-based programme. For most of the failure modes, quantitative estimates of occurrence were retrieved from the programme’s IT system; in the other cases, the experts provided an opinion based on their knowledge of the process, and any disagreements were resolved by consensus. Estimates of severity were also mapped using the classification adopted for a screening programme by Federici et al,11 translating their 10-point scale into a 4-point scale. The severity of consequences not covered by Federici et al, because they are specific to colorectal cancer screening, was estimated by consensus. The rating scales used are summarised in table 1.
While the scores for the seriousness of the effects and the probability of occurrence are directly proportional to severity and likelihood (minimum: low severity and probability; maximum: high severity and probability), the detectability score runs in the opposite direction: higher scores are assigned when the error is more difficult to identify. The RPN for each record in the worksheet (cause–failure–effect) was obtained by multiplying the three values (RPN = S × P × D).
Table 1

Rating scales used to compute the risk priority number

Severity
1 | Minor event | No consequences; delay in execution of the test
2 | Moderate event | Less effective communication of a positive result; request for the test to be repeated
3 | Major event | Failure in the communication of a negative result; subjects who did not take the test receive a result communication (damage to trust in the programme); lack of informed consent
4 | Catastrophic event | False negative; failure in the communication of a positive result
Detection
1 | Certain | The error can certainly be detected and corrected
2 | High | High possibility of the error being detected and corrected
3 | Medium | Moderate possibility of the error being detected and corrected
4 | Remote | No or only a remote possibility of the error being detected and corrected
Occurrence
1 | Remote | <1/10 000
2 | Very low | Between 1/10 000 and 1/1000
3 | Low | Between 1/1000 and 5/1000
4 | Moderate | Between 5/1000 and 1/100
5 | High | Between 1/100 and 5/100
6 | Very high | >5/100
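To make the scoring concrete, the following is a minimal Python sketch of this step. The record labels and the (severity, occurrence, detection) triples are illustrative shorthand for rows of table 2, and the prioritisation threshold (RPN greater than half of the highest RPN) is the one the team describes under corrective actions:

```python
def rpn(severity, occurrence, detection):
    """Risk priority number as defined in the text: RPN = S x P x D."""
    return severity * occurrence * detection

# Illustrative (severity, occurrence, detection) triples from table 2;
# the dictionary keys are shorthand labels, not the paper's wording.
records = {
    "degraded specimen / late delivery to pharmacy": (4, 4, 4),
    "mixed-up kits / manual code entry": (3, 5, 4),
    "anonymous specimen / kit given without software": (4, 4, 3),
}

scores = {name: rpn(*spd) for name, spd in records.items()}

# Tied RPNs share a rank position, as in the paper's ranking list.
distinct = sorted(set(scores.values()), reverse=True)
ranks = {name: distinct.index(s) + 1 for name, s in scores.items()}

# The team prioritised records whose RPN exceeded half of the highest RPN.
threshold = max(scores.values()) / 2
priority = [name for name, s in scores.items() if s > threshold]
```

With these three rows the scores are 64, 60 and 48, so all three exceed the 32-point threshold and would be carried forward for corrective action.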
Identification of corrective actions

Given the large number of failure modes identified, the team decided to prioritise the first five ranks (RPN greater than half of the highest RPN) and worked out a possible solution (corrective action) for each, with the objective of reducing or eliminating the failure mode and its effects. Each action was reassessed by applying the same failure analysis and recalculating the RPN, to highlight possible new ‘ways of error’ resulting from the redesign of the organisational processes. For each corrective action, the team assigned measurable outcomes and the professional profiles responsible for implementation and monitoring. The statistical analyses for anonymous kits reported in this paper were planned at this stage of the project.

Follow-up of corrective actions

For each corrective action, the quarterly trend of the quantitative indicator was described for the periods before and after implementation of the actions, extending the follow-up to 2 years after completion of the last action. As the project did not involve human subjects, authorisation from the local ethical committee was deemed unnecessary.

Statistical analyses

Data were analysed from March 2011 onwards, because the IT system in place before that date was different and we could not guarantee the comparability of the information extracted. We compared the proportions of anonymous kits before and after the intervention by calculating the prevalence ratio, the prevalence difference and their 95% CIs. Then, given the complex pattern of the proportions of anonymous kits before the intervention, we fitted a polynomial logistic regression containing the covariate time (in trimesters; linear, quadratic, cubic and quartic components), intervention (0 before, 1 after) and an interaction term between time (linear component) and intervention, centred on the trimester when the intervention began.14 As a sensitivity analysis, we also fitted a simpler linear logistic model including time (in trimesters; linear component only), intervention and their interaction. Finally, we evaluated the trend in the proportion of anonymous kits after the intervention, using a simple linear logistic regression model including time only (in trimesters). The number of advanced lesions (cancers and advanced adenomas) missed because of anonymous kits was estimated from the programme’s known positive predictive value (PPV) for FIT, adjusted for the PPV time trend. We estimated the difference in the number of missed advanced lesions (per 100 000; postintervention vs preintervention) by calculating the risk difference (RD) and its 95% CI. These analyses were performed with Stata V.14. We also assessed the impact of the intervention on the number of lost specimens by means of a run chart, interpreted according to the rules set out by Perla et al.15 This analysis was performed with the Excel tool made available on the Institute for Healthcare Improvement website.16
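The before/after comparison of anonymous kits reduces to standard two-proportion calculations. A minimal Python sketch (assuming conventional Wald intervals, with the ratio's CI computed on the log scale; the function names are ours, not Stata's) reproduces the estimates reported in table 5 from the raw counts:

```python
from math import exp, log, sqrt

def prevalence_ratio(a, n1, b, n2, z=1.96):
    """Ratio of a/n1 to b/n2, with a Wald 95% CI on the log scale."""
    pr = (a / n1) / (b / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    return pr, exp(log(pr) - z * se), exp(log(pr) + z * se)

def prevalence_difference(a, n1, b, n2, z=1.96):
    """Difference a/n1 - b/n2, with a Wald 95% CI."""
    p1, p2 = a / n1, b / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, d - z * se, d + z * se

# Anonymous kits: 2987/257 130 before the intervention, 840/175 947 after
pr, pr_lo, pr_hi = prevalence_ratio(840, 175947, 2987, 257130)
d, d_lo, d_hi = prevalence_difference(840, 175947, 2987, 257130)
# pr is about 0.41 (a 59% relative reduction); d about -6.8 per 1000 kits
```

Rounded to the precision used in the paper, these calls give a prevalence ratio of 0.41 (95% CI 0.38 to 0.44) and a prevalence difference of −6.8 per 1000 kits (95% CI −7.4 to −6.3).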

Results

Implementation of the HFMEA methodology

During brainstorming, based on the process flow chart, the team identified nine failure modes: (1) a kit is associated with the wrong ID code (‘mixed-up kits’); (2) the laboratory is provided with a specimen viable for analysis but without an ID identifier (‘anonymous specimen’); (3) a kit with a specimen returned to the pharmacy is never received by the laboratory (‘lost specimen’); (4) the specimen analysed by the laboratory has a haemoglobin concentration lower than that which could have been detected if the specimen had been preserved within the recommended time and temperature (‘degraded specimen’); (5) the user receives an expired sampling tube (‘expired sampling tube’); (6) the laboratory is provided with a specimen viable for analysis but without the informed consent signed by the user (‘specimen without consent’); (7) the user cannot receive the kit from the pharmacy because kits are sold out (‘kit out of stock’); (8) the laboratory cannot analyse the specimen because the material is not suitable for processing (eg, the tube is too full or too empty) or the sampling tube is dirty (‘inadequate specimen’); and (9) a user cannot be reached to communicate the results of their test (‘user not reachable’). These failure modes are also mapped in figure 1. There were 12 effects of the failures identified and 34 relevant causes (22 single and 12 associated with more than one failure). The RPN scores ranged from 2 to 96 (mean 32, median 11), and the cause–failure–effect records were ranked according to their RPN score, with rank positions from 1 to 18. The failure modes in the first five positions of the ranking list, ordered by RPN (between 64 and 36), concern ‘degraded specimen’, ‘mixed-up kits’ and ‘anonymous specimen’ (table 2).
Table 2

Worksheet with risk priority numbers (RPNs): first five ranks

Failure mode | Effect | Severity | Possible causes | Occurrence | Detection | RPN | Rank
Degraded specimen | False negative | 4 | Time between sampling and processing too long (>6 days) because of late delivery to the pharmacy | 4 | 4 | 64 | 1
Degraded specimen | False negative | 4 | Time between sampling and processing too long (>6 days) because of late delivery from pharmacy to laboratory | 4 | 4 | 64 | 1
Degraded specimen | False negative | 4 | Time between sampling and processing too long (>6 days) because of delay in the laboratory | 4 | 4 | 64 | 1
Mixed-up kits | Negative test result never communicated to the user | 3 | Wrong code because of manual entry by the pharmacist | 5 | 4 | 60 | 2
Degraded specimen | False negative | 4 | Inadequate environmental temperature at user’s home | 3 | 4 | 48 | 3
Degraded specimen | False negative | 4 | Inadequate environmental temperature during transport | 3 | 4 | 48 | 3
Degraded specimen | False negative | 4 | Inadequate environmental temperature in the pharmacy | 3 | 4 | 48 | 3
Degraded specimen | False negative | 4 | Inadequate environmental temperature in the wholesaler’s vehicle | 3 | 4 | 48 | 3
Anonymous specimen | Positive test result never communicated to the user | 4 | The pharmacist provides the kit without using the programme’s management software | 4 | 3 | 48 | 3
Anonymous specimen | Negative test result never communicated to the user | 3 | Wrong code because of manual entry by the pharmacist | 5 | 3 | 45 | 4
Mixed-up kits | A person who has not done the test receives communication of a negative result by letter | 3 | Wrong code because of manual entry by the pharmacist | 4 | 3 | 36 | 5
Mixed-up kits | A person who has not done the test receives communication of a positive result by phone call | 3 | Wrong code because of manual entry by the pharmacist | 4 | 3 | 36 | 5
Anonymous specimen | Negative test result never communicated to the user | 3 | The pharmacist provides the kit without using the programme’s management software | 4 | 3 | 36 | 5
Degraded specimen | False negative | 4 | Inadequate environmental temperature in the laboratory | 3 | 3 | 36 | 5
The failures ‘degraded specimen’ and ‘anonymous specimen’ gave rise to the consequences that the team considered most serious, that is, a false-negative result and a genuinely positive test whose result is never communicated to the user. These consequences would affect the programme’s detection rate, which is a proxy for the programme’s final outcome (reduction in cause-specific colorectal cancer mortality). A ‘degraded specimen’ may result from two different causes: (A) environmental temperatures inadequate for preservation of the specimen (ie, at the user’s home, in the pharmacy, during transport in the wholesaler’s vehicle or in the laboratory); and/or (B) too long a period between sampling and the laboratory’s quantitative analysis. Both phenomena may reduce the concentration of haemoglobin in the sample,17 so that quantitative results that would otherwise be above the cut-off fall below it (false-negative result). The lack of a procedure linking the sampling tube code to the user’s ID, either when the kit is delivered to the pharmacy or when it is collected (‘anonymous specimen’), or loss of the specimen after it is collected from the pharmacy or while in the wholesaler’s vehicle (‘lost specimen’), may also result in a failure to refer positive subjects for assessment. Lastly, lower in the ranking are the RPNs related to minor/moderate events (inadequate sample leading to repetition of the test) and/or less frequent causes (kits mixed up between spouses, change in the laboratory equipment and so on) (see online supplementary appendix).

Corrective actions

For the first five ranks, the team planned corrective actions that reduced the RPN for all the failure modes considered (table 3 and table 4). The corrective actions involved a modification of the IT system, improvement of communication with users and modifications of procedures within the pharmacies and the laboratory.
Table 3

New Risk Priority Numbers (RPNs) and improvement programmes

Project name | Potential causes | Failure mode | Effects | Corrective actions | New occurrence | New detection | RPN change | Indicator
Traceability of the kits | Time between sampling and processing too long because of late delivery to the pharmacy | Degraded specimen | False negative | Amendment of instructions to users, with a request to register the date of sampling on the sampling tube | 2 | 1 | 64 → 8 | Percentage of FITs tested in the laboratory within six calendar days of sampling (target value: 100%)
Traceability of the kits | Time between sampling and processing too long because of late delivery from pharmacy to laboratory | Degraded specimen | False negative | Control of the sampling date by the pharmacist; registration of the specimen’s code by the pharmacist | 2 | 1 | 64 → 8 | (as above)
Traceability of the kits | Time between sampling and processing too long because of delay in the laboratory | Degraded specimen | False negative | Automatic detection by the software of the difference between the date of delivery to a pharmacy and reading in the laboratory | 2 | 1 | 64 → 8 | (as above)
Improving practices in the pharmacy | Wrong code because of manual entry by the pharmacist | Mixed-up kits | Negative test result never communicated to the user | Double-checking of the identity of the specimen’s owner by verbal request; double-typing of the code by the pharmacist; registration of the specimen’s code by the pharmacist; self-certification of barcode-reading software by the pharmacy | 2 | 1 | 60 → 6 | Percentage of anonymous specimens (target value: <1.0%)
Improving practices in the pharmacy | Wrong code because of manual entry by the pharmacist | Mixed-up kits | A person who has not done the test receives communication of a positive result | (as above) | 2 | 1 | 36 → 6 | Percentage of complaints about communication of a test result to a person who never took the test (target value: 0%)
Improving practices in the pharmacy | Wrong code because of manual entry by the pharmacist | Mixed-up kits | A person who has not done the test receives communication of a negative result | (as above) | 2 | 1 | 36 → 6 | Percentage of complaints about communication of a test result to a person who never took the test (target value: 0%)
Improving practices in the pharmacy | Wrong code because of manual entry by the pharmacist | Anonymous specimen | Negative test result never communicated to the user | (as above) | 2 | 1 | 45 → 6 | Percentage of anonymous specimens (target value: <1.0%)
Improving practices in the pharmacy | The pharmacist provides the kit without using the programme’s management software | Anonymous specimen | Positive test result never communicated to the user | (as above) | 2 | 1 | 48 → 8 | Percentage of anonymous specimens (target value: <1.0%)
Improving practices in the pharmacy | Inadequate environmental temperature in the pharmacy | Degraded specimen | False negative | Training for pharmacists on monitoring the storage temperature in the pharmacy | 1 | 4 | 48 → 16 | Percentage of FITs tested in the laboratory within six calendar days of sampling (target value: 100%)
Improving practices in the pharmacy | Lack of control of stocks by the pharmacist | Expired sampling tube | User dissatisfied | Training for pharmacists on monitoring expiry dates | 1 | 2 | 12 → 8 | Percentage of sampling tubes that have not expired (target value: 100%)
Improving information to users | Inadequate environmental temperature at user’s home | Degraded specimen | False negative | Amendment of the instruction leaflet for users, including: correct sampling quantity, how to clean the tube, reporting of personal ID, collection modalities, delivery time after collection and preservation | 1 | 4 | 48 → 16 | Leaflet amended
Improving laboratory practice | Inadequate environmental temperature in the laboratory | Degraded specimen | False negative | Sharing and updating common procedures with the laboratory | 2 | 2 | 36 → 16 | Regular meetings organised with the laboratory staff

FIT, faecal immunochemical test.

Table 4

Monitoring of anonymous specimens, lost positive tests and inadequate tests before and after the change in the kit return procedure

Year | Quarter | Total tests | Anonymous, n (%) | Positive, n | Positive lost, n (% of positive) | Advanced lesions missed (estimated) | Inadequate tests, n | Tested within 6 days, n (%)
2011 | 2 | 13 499 | 314 (2.3) | 725 | 17 (2.3) | — | 140 | —
2011 | 3 | 23 407 | 244 (1.0) | 1196 | 7 (0.6) | — | 226 | —
2011 | 4 | 21 099 | 300 (1.4) | 1101 | 8 (0.7) | — | 206 | —
2011 | Total (from 1 March) | 58 005 | 858 (1.5) | 3022 | 32 (1.1) | 10 | 572 | Not available
2012 | 1 | 16 807 | 200 (1.2) | 827 | 3 (0.4) | — | 190 | —
2012 | 2 | 14 932 | 139 (0.9) | 759 | 4 (0.5) | — | 172 | —
2012 | 3 | 21 952 | 73 (0.3) | 1068 | 2 (0.2) | — | 458 | —
2012 | 4 | 19 061 | 148 (0.8) | 826 | 13 (1.6) | — | 552 | —
2012 | Total | 72 752 | 538 (0.7) | 3480 | 22 (0.6) | 3 | 1372 | Not available
2013 | 1 | 19 861 | 278 (1.4) | 825 | 18 (2.2) | — | 169 | —
2013 | 2 | 16 308 | 302 (1.9) | 671 | 17 (2.5) | — | 48 | —
2013 | 3 | 25 068 | 262 (1.0) | 1012 | 12 (1.2) | — | 199 | —
2013 | 4 | 18 194 | 266 (1.5) | 986 | 11 (1.1) | — | 118 | —
2013 | Total | 79 431 | 1050 (1.3) | 3444 | 58 (1.7) | 8 | 534 | Not available
2014 | 1 | 16 259 | 116 (0.7) | 769 | 9 (1.2) | — | 14 | —
2014 | 2 | 17 108 | 134 (0.8) | 881 | 5 (0.6) | — | 46 | —
2014 | 3 | 13 575 | 211 (1.6) | 674 | 8 (1.2) | — | 19 | —
2014 | 4 | 23 611 | 175 (0.7) | 1120 | 9 (0.8) | — | 720 | —
2014 | Total | 70 553 | 605 (0.9) | 3444 | 31 (0.9) | 3 | 799 | Not available
2015 | 1 | 19 259 | 115 (0.6) | 890 | 6 (0.7) | — | 795 | 18 526 (96.2)
2015 | 2 | 19 082 | 99 (0.5) | 919 | 5 (0.5) | — | 990 | 18 134 (95.0)
2015 | 3 | 17 377 | 80 (0.5) | 770 | 0 (0.0) | — | 975 | 16 607 (95.6)
2015 | 4 | 23 671 | 125 (0.5) | 1110 | 4 (0.4) | — | 1379 | 23 273 (98.3)
2015 | Total | 79 389 | 404 (0.5) | 3689 | 15 (0.4) | 2 | 4139 | 76 540 (96.4)
2016 | 1 | 20 721 | 104 (0.5) | 986 | 7 (0.7) | — | 825 | 20 157 (97.3)
2016 | 2 | 17 691 | 68 (0.4) | 793 | 5 (0.6) | — | 593 | 17 160 (97.0)
2016 | 3 | 14 996 | 34 (0.2) | 660 | 2 (0.3) | — | 643 | 14 611 (97.4)
2016 | 4 | 19 538 | 56 (0.3) | 906 | 1 (0.1) | — | 858 | 18 869 (96.6)
2016 | Total | 72 946 | 262 (0.4) | 3345 | 15 (0.5) | 2 | 2919 | 70 797 (97.1)
The action with the greatest impact on reducing the RPN concerned the traceability of the kits/specimens and involved implementation of a complete IT-based tracking system for them, from delivery to the user through to analysis in the laboratory. The addition of a checkpoint when the specimen is returned to the pharmacist reduces both the frequency of the ‘degraded specimen’ error (by making it possible to calculate automatically the time elapsed between specimen collection and laboratory processing) and the ‘mixed-up kits’ and ‘anonymous specimen’ errors, because there is an additional control step in the link between the user’s ID and the sampling tube code when the kit is returned to the pharmacy. However, while planning this action, the team noted that its implementation would introduce a new failure mode whenever the time between sampling and analysis turned out to be longer than 6 days, either at the pharmacy (‘specimen not accepted’) or at the laboratory (‘inadequate specimen’). The team therefore estimated the RPN of this new failure mode to assess the appropriateness of the corrective action, allocating a probability of between 1/100 and 5/100 to the occurrence of a ‘specimen not accepted’ and an ‘inadequate specimen’. As the new failure mode ranked lower than eighth position by RPN, the action was implemented. With regard to the expected increase in ‘specimen not accepted’ and ‘inadequate specimen’, the team suggested monitoring this phenomenon alongside the reduction in anonymous tests. The local health authority therefore purchased the new functionality for the programme management software, enabling full electronic traceability of kits/specimens.
The new procedure linking the specimen to the return path was introduced in October 2014 (quarter 3 of 2014 in table 4). The aim of improving the information available to users by updating the kit’s leaflet was principally to reduce the likelihood of a ‘degraded specimen’ resulting from inadequate temperatures at home, by giving users better information about storage standards for specimens. The new information kit also included clearer information on other aspects, such as the correct sampling of faeces, which could in turn have beneficial effects on another failure mode, ‘inadequate specimen’. The intervention for pharmacists was multifactorial and was delivered in the form of training aimed at harmonising the procedures that lead to the ‘degraded specimen’, ‘anonymous specimen’, ‘mixed-up kits’ and ‘expired sampling tube’ failure modes, as well as on-site visits to assess implementation of the correct procedures. The training focused mostly on implementing a consistent procedure for the identification of samples and users (including use of the new IT system) and on more careful participation of pharmacists in the programme, with greater attention to the storage of samples (temperature, time and so on) and the control of stocks. The intervention for the laboratory consisted of a revision of the procedures aimed at guaranteeing prompt analysis of the sample. Within 2 years of completing the HFMEA, all of the actions had been finalised, and a continuous process to assess results through the quarterly measurement of indicators had been put in place.

Follow-up, monitoring and evaluation of corrective actions

The frequency of the ‘anonymous specimen’ failure was assessed before the action was implemented, and for a further 2 years afterwards, in order to evaluate the effectiveness of the intervention (table 4 and table 5) and whether the improvement was sustained over time. Direct monitoring was not possible for the ‘mixed-up kits’ and ‘degraded specimen’ errors: ‘mixed-up kits’ cannot be distinguished from the total number of ‘anonymous specimens’, and a ‘degraded specimen’ can only be assessed by monitoring false-negative results, which may also have other causes. False-negative tests, in particular, can only be assessed through the analysis of interval cancers.
Table 5

Comparison of proportions of anonymous specimens before and after the intervention

                                               Before intervention   After intervention
Total kits (n)                                 257 130               175 947
Anonymous kits (n)                             2987                  840
Anonymous kits per 1000 kits (95% CI)          11.6 (11.2 to 12.0)   4.8 (4.5 to 5.1)
Prevalence ratio (95% CI)                      Reference             0.41 (0.38 to 0.44)
Prevalence difference per 1000 kits (95% CI)   Reference             −6.8 (−7.4 to −6.3)
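The prevalence ratio and prevalence difference in table 5 can be reproduced directly from the raw counts. The sketch below uses standard large-sample (Wald) intervals on the log-ratio and risk-difference scales; this is an assumption about the method, made only to illustrate the arithmetic, not necessarily the exact procedure used in the analysis.

```python
import math

# Counts from table 5
n_before, anon_before = 257_130, 2987
n_after, anon_after = 175_947, 840

p_before = anon_before / n_before   # ~11.6 per 1000
p_after = anon_after / n_after      # ~4.8 per 1000

# Prevalence ratio with a 95% CI on the log scale (Wald)
pr = p_after / p_before
se_log_pr = math.sqrt(1/anon_after - 1/n_after + 1/anon_before - 1/n_before)
pr_lo = math.exp(math.log(pr) - 1.96 * se_log_pr)
pr_hi = math.exp(math.log(pr) + 1.96 * se_log_pr)

# Prevalence difference with a 95% CI (Wald)
pd = p_after - p_before
se_pd = math.sqrt(p_before * (1 - p_before) / n_before
                  + p_after * (1 - p_after) / n_after)
pd_lo, pd_hi = pd - 1.96 * se_pd, pd + 1.96 * se_pd

print(f"PR {pr:.2f} (95% CI {pr_lo:.2f} to {pr_hi:.2f})")
# → PR 0.41 (95% CI 0.38 to 0.44)
print(f"PD per 1000: {pd*1000:.1f} ({pd_lo*1000:.1f} to {pd_hi*1000:.1f})")
# → PD per 1000: -6.8 (-7.4 to -6.3)
```

Both intervals match table 5, which suggests the published estimates are consistent with these conventional formulas.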
In the period 2011–2016, the screening programme handled between 70 000 and 80 000 tests per year (table 4). Anonymous kits represented 1.5% of the total tests in 2011 (data available from quarter 2 only), 0.7% in 2012, 1.3% in 2013 and 0.7% in 2014, with wide variability across quarters, ranging from 0.3% in the third quarter of 2013 to 2.3% in the second quarter of 2011. Since the third quarter of 2014, when the new procedure for kit traceability was implemented, the percentage of anonymous specimens has not exceeded 0.6%. It fell from 1.2% in the preintervention period to 0.5% after the intervention, corresponding to a relative reduction of 59% (prevalence ratio of 0.41) and an absolute reduction (prevalence difference) of 0.7% (table 5). The result even exceeded the target that the team had set for this indicator: <1% (table 4). The proportion of anonymous kits fell sharply from the first to the second quarter of the preintervention period, before becoming more or less stable (although with considerable variability from one quarter to another) until the 13th quarter (figure 2: solid line). From the 14th quarter (the first after the intervention), the proportion of anonymous kits dropped (P<0.0001) and then continued in a gradual linear decline (figure 2: solid line). The intervention effect was confirmed (P<0.0001) after fitting a simple linear logistic regression (figure 2: dashed line). Following the intervention, we calculated a relative linear decrease of 10% per quarter in the proportion of anonymous kits, that is, a prevalence ratio of 0.90 per quarter (95% CI 0.87 to 0.92; P<0.0001). We also estimated a reduction in the number of advanced lesions missed, with a risk difference (RD) of −5.3 per 100 000 between the preintervention and postintervention periods (95% CI −10.1 to −0.6 per 100 000).
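The segmented (‘interrupted time series’) logistic regression described above can be sketched in miniature with an iteratively reweighted least squares (IRLS) fit on grouped binomial data. All of the quarterly numbers below are synthetic, constructed only to mimic the reported design (13 preintervention quarters, a step drop at the intervention, then a 10% relative decline per quarter); none of them are the study data, and the model estimates odds ratios, which for an outcome this rare approximate the reported prevalence ratios.

```python
import numpy as np

def fit_logistic_irls(X, successes, totals, n_iter=25):
    """Grouped-binomial logistic regression via IRLS (Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        w = totals * p * (1.0 - p)                  # IRLS weights
        z = eta + (successes - totals * p) / w      # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

# Synthetic design: 22 quarters, intervention before the 14th (index 13)
quarters = np.arange(22)
post = (quarters >= 13).astype(float)               # step term
post_time = np.where(post == 1.0, quarters - 13.0, 0.0)  # post-slope term
X = np.column_stack([np.ones(22), post, post_time])

# Generating parameters: baseline ~11.6/1000, step OR 0.55, slope OR 0.90
true_beta = np.array([np.log(0.0116 / (1 - 0.0116)),
                      np.log(0.55), np.log(0.90)])
p_true = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
totals = np.full(22, 20_000.0)                      # ~80 000 tests/year
successes = totals * p_true                         # noise-free expected counts

beta = fit_logistic_irls(X, successes, totals)
print(round(float(np.exp(beta[2])), 3))             # per-quarter odds ratio
```

With noise-free expected counts the fit recovers the generating parameters exactly, so the exponentiated slope returns the 0.90-per-quarter decline that was built into the data.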
Figure 2

Trend in the proportions of anonymous specimens (per 1000) before and after intervention.

When the process was also assessed with a run chart (figure 3), the detection of a shift after the intervention confirmed the results of the statistical analysis. The run chart also showed a trend in the postintervention period (interrupted only by the last observation), suggesting that the change was not only sustained over time but also dynamic, with further improvement during the postintervention period.
Figure 3

Run chart of the lost specimens process.

The number of tests not accepted because of an excessive interval between sampling and processing (‘specimen not accepted’, and ‘inadequate specimen’ when the reason stated by the laboratory was an excessive time interval) represented 3.6% of the total in 2015 and 2.9% in 2016. The team considered this percentage acceptable given that, in the past, the same tests would have been processed despite the risk of false-negative results. However, a new action designed to reduce the number of refused tests by improving the information given to users is in place. The trend in this percentage appears to be improving, after a peak possibly caused by the introduction of the checkpoint at the pharmacy. Furthermore, the target of 100% of tests analysed by the laboratory within 6 days of sampling has been met. Complaints about the communication of a test result to a person who did not take the test were received less than once a year after the improvement actions were implemented.
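A run-chart ‘shift’ signal of the kind read off figure 3 is conventionally defined as six or more consecutive points on the same side of the median, with points exactly on the median neither extending nor breaking a run. The minimal sketch below uses purely illustrative quarterly values, and taking the median from a preintervention baseline window is an assumption about how the chart’s centre line was set.

```python
from statistics import median

def detect_shifts(values, baseline_n, run_length=6):
    """Flag runs of >= run_length consecutive points on one side of the
    baseline median -- the usual run-chart 'shift' signal. Points equal
    to the median are skipped: they neither extend nor break a run."""
    m = median(values[:baseline_n])
    shifts, run, side = [], [], 0
    for i, v in enumerate(values):
        if v == m:
            continue                        # point on the median: ignore
        s = 1 if v > m else -1
        if s == side:
            run.append(i)                   # run continues on same side
        else:
            if len(run) >= run_length:
                shifts.append(run)          # completed run long enough
            run, side = [i], s              # start a new run
    if len(run) >= run_length:
        shifts.append(run)                  # run still open at series end
    return m, shifts

# Illustrative series (anonymous kits per 1000): 8 baseline quarters
# fluctuating around ~11.6, then persistently lower values afterwards.
series = [12.1, 11.2, 11.9, 10.8, 11.6, 12.4, 10.9, 11.5,
          5.1, 4.9, 4.6, 4.4, 4.3, 4.1, 3.9]
m, shifts = detect_shifts(series, baseline_n=8)
print(len(shifts))  # prints 1: one sustained run below the baseline median
```

On this toy series the rule flags a single sustained run below the baseline median spanning the intervention, mirroring the shift the authors report detecting on figure 3.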

Discussion

The full HFMEA cycle (process analysis, HFMEA exercise, corrective actions, monitoring of results) resulted in: (A) a significant reduction in the proportion of anonymous specimens from 1.2% to 0.5% (relative reduction of 59%); (B) a reduction (although not yet fully quantifiable) in false-negative results due to haemoglobin degradation or an expired sampling tube; (C) fewer complaints about the communication of a test result to a person who did not take the test; (D) better compliance with the instructions for taking the test, thanks to improved information leaflets for users; and (E) more effective communication with the pharmacies and the laboratory, thanks to improved procedures. These results are consistent with the project’s initial aim of reducing errors in the kit/specimen flow. The improvement projects implemented could enhance the programme’s performance by reducing the number of lesions missed as a result of positive samples being lost, degraded or inadequately preserved. They would also bring under control errors that may have a major impact on citizens’ trust in the programme (eg, communicating the result of a test to a person who did not take it).

Strengths and limitations based on the study’s design

No other improvement projects with an impact on either the first-level test or the specimen flow were implemented in the period 2011–2015, so confounding bias is unlikely.18 In November 2012, after a new tender, the FIT brand changed from NS-Plus to OC-Sensor. This change affected the percentage of positive tests because of the different characteristics of the test,19 but there were no differences in the sampling device or instruction sheet that could have changed the number of anonymous specimens. It is therefore reasonable to assume that the differences observed in the number of anonymous specimens are entirely attributable to the changes in the tracking procedure prompted by the HFMEA exercise. The number of anonymous specimens is recorded automatically by the IT system without human intervention, so we can assume that the results collected in this way are not biased, particularly by expectations of improvement following implementation of the new strategy (detection bias).18 As the whole screened population was included in the analysis and follow-up was continuous, the study should not be affected by incomplete follow-up (attrition bias) or by a lack of representativeness of the sample (selection bias).18 Our study shares the limitations of the HFMEA method itself, namely the low external validity and reproducibility of results obtained in a given context,20 the subjectivity of judgements21 and the diversity of scales used to calculate the RPN.21 In particular, the choice of a different occurrence scale, although required by the specificity of the context, may hamper comparability with other HFMEA exercises conducted using the DeRosier method.
This study considered the risks relating to the specimen’s route without taking into account the effect of other programme procedures on the population, such as the selection of the test type and cut-off,21 as well as the different characteristics of the various tests, such as varying sensitivity to high ambient temperatures.17 22 Although these issues fall outside the scope of this study, they have to be taken into account in a comprehensive assessment of the risks of a screening programme based on FITs.

Study findings in the context of current research

Population-based screening programmes have a long tradition of evaluation, and a range of indicators is currently in use for monitoring, performance evaluation and impact assessment. Most reports of adverse events from colorectal cancer screening programmes relate to the endoscopy test, for example, bleeding following polypectomy and large bowel perforations.23 To our knowledge, systematic monitoring of, and studies on, errors related to the FIT or guaiac test are still scarce, and such errors are most likely managed within the daily quality management of a screening programme (eg, laboratory non-conformities) rather than being systematically reported, as happens for other aspects of the screening test such as diagnostic accuracy. So far, our study is a unique example of a quality improvement project that used the HFMEA methodology in a mass screening programme and showed a statistically significant improvement in performance as a result. This experience was useful for deriving a scale for the occurrence of errors that is meaningful for a health intervention with a large target population. Unfortunately, other findings are not comparable with ours: mammography screening,11 for example, is based on an imaging technique performed on women who attend a healthcare facility, whereas a colorectal cancer screening test involves self-sampling at home and returning the sample to a laboratory for analysis. Similarly, HFMEA experiences such as that of Flegar-Meštrić et al9 focus on internal laboratory steps and do not involve the transportation of specimens from the producer to the user and from the user to the laboratory.

HFMEA on FITs in the colorectal cancer screening quality assurance scenario

The changes implemented allowed the standards set out in the first edition of the European Guidelines for Quality Assurance in Colorectal Cancer Screening and Diagnosis24 to be met, in particular Recommendation 4.9 on user identification and Recommendation 4.18 on quality assurance of laboratory performance, which explicitly includes uptake, undelivered mail/samples, time from collection to analysis, and lost and spoiled kits. The implementation of the HFMEA methodology also meets the requirements of ISO standards specific to testing activities (eg, ISO 15189 for laboratories)25 and of healthcare accreditation systems such as The Joint Commission.26

The impact and generalisability of the study’s findings

Currently, colorectal cancer screening is recommended worldwide as an effective public health tool for cancer prevention. In the European Union, colorectal cancer screening programmes have been implemented nationally or regionally in 20 Member States, with a total of 4 302 916 faecal occult blood tests performed in 2015.27 Our results may not be entirely generalisable to programmes that use a different procedure for sending out and collecting kits (eg, by post) or that use the guaiac faecal occult blood test. However, as some features, such as the need for full traceability and temperature control, are common to the different test modalities, and the types of incident detected may be the same, our study could still provide useful indications for programmes based on faecal occult blood testing, as each programme should guarantee that the management of users’ specimens is as safe and effective as possible.

Implications for costs and sustainability

As a multidisciplinary analysis of the process highlighted the weaknesses of the specimen path, a comprehensive improvement plan was set up that took into account the priorities and the actions likely to have the greatest impact on quality. In a context of scarce resources within healthcare systems, particularly in the quality field, this proved to be a very suitable method for improving quality and for obtaining adequate funding (owing to its clear impact on outcomes). The results, along with the expected effect of the corrective actions, could also be reported in a way that policymakers and healthcare managers outside the process could understand. The corrective actions were endorsed by all of the stakeholders at an acceptable cost. In fact, all of the corrective actions were implemented without additional costs, with the exception of the integration of the linkage procedure into the software. That corresponded to just 4.7% of the total amount spent on the screening programme (largely overestimated, as the total does not include personnel and laboratory costs) [data not shown].

Conclusions

In conclusion, the HFMEA methodology reported in this paper has enabled Milan’s screening programme to significantly reduce the number of specimens lost, with a resulting increase in the programme’s effectiveness, risk reduction and user satisfaction. New applications of the HFMEA methodology in screening programmes, together with further technical development, could constitute new challenges for the future and offer an affordable tool for the overall improvement of health interventions, with only positive consequences for the population concerned.
References (10 of 17 shown)

1. DeRosier J, Stalhandske E, Bagian JP, Nudell T. Using health care Failure Mode and Effect Analysis: the VA National Center for Patient Safety's prospective risk analysis system. Jt Comm J Qual Improv, 2002.
2. Grazzini G, Ventura L, Zappa M, et al. Influence of seasonal variations in ambient temperatures on performance of immunochemical faecal occult blood test for colorectal cancer screening: observational study from the Florence district. Gut, 2010.
3. Federici A, Consolante CA, Barca A, et al. [Risk management in a regional screening program for breast cancer in the region of Lazio, Italy]. Ann Ig, 2006.
4. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ, 2008.
5. McElroy LM, Khorzad R, Nannicelli AP, et al. Failure mode and effects analysis: a comparison of two common risk prioritisation methods. BMJ Qual Saf, 2015.
6. Giardina M, Cantone MC, Tomarchio E, Veronese I. A Review of Healthcare Failure Mode and Effects Analysis (HFMEA) in Radiotherapy. Health Phys, 2016.
7. Flegar-Meštrić Z, Perkov S, Radeljak A, et al. Risk analysis of the preanalytical process based on quality indicators data. Clin Chem Lab Med, 2017.
8. De Girolamo G, Goldoni CA, Corradini R, et al. Ambient temperature and FIT performance in the Emilia-Romagna colorectal cancer screening programme. J Med Screen, 2016.
9. Lopez Bernal J, Cummins S, Gasparrini A. Interrupted time series regression for the evaluation of public health interventions: a tutorial. Int J Epidemiol, 2017.
10. Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf, 2015.
