
Use of Middleware Data to Dissect and Optimize Hematology Autoverification.

Rachel D Starks1, Anna E Merrill1, Scott R Davis1, Dena R Voss1, Pamela J Goldsmith1, Bonnie S Brown1, Jeff Kulhavy1, Matthew D Krasowski1.   

Abstract

BACKGROUND: Hematology analysis comprises some of the highest volume tests run in clinical laboratories. Autoverification of hematology results using computer-based rules reduces turnaround time for many specimens, while strategically targeting specimen review by technologist or pathologist.
METHODS: Autoverification rules had been developed over a decade at an 800-bed tertiary/quaternary care academic medical center central laboratory serving both adult and pediatric populations. In the process of migrating to newer hematology instruments, we analyzed the rates of the autoverification rules/flags most commonly associated with triggering manual review. We were particularly interested in rules that on their own often led to manual review in the absence of other flags. Prior to the study, autoverification rates were 87.8% (out of 16,073 orders) for complete blood count (CBC) if ordered as a panel and 85.8% (out of 1,940 orders) for CBC components ordered individually (not as the panel).
RESULTS: Detailed analysis of rules/flags that frequently triggered indicated that the immature granulocyte (IG) flag (an instrument parameter) and rules that reflexed platelet by impedance method (PLT-I) to platelet by fluorescent method (PLT-F) represented the two biggest opportunities to increase autoverification. The IG flag threshold had previously been validated at 2%, a setting that resulted in this flag alone preventing autoverification in 6.0% of all samples. The IG flag threshold was raised to 5% after detailed chart review; this was also the instrument vendor's default recommendation for the newer hematology analyzers. Analysis also supported switching to PLT-F for all platelet analysis. Autoverification rates increased to 93.5% (out of 91,692 orders) for CBC as a panel and 89.8% (out of 11,982 orders) for individual components after changes in rules and laboratory practice.
CONCLUSIONS: Detailed analysis of autoverification of hematology testing at an academic medical center clinical laboratory that had been using a set of autoverification rules for over a decade revealed opportunities to optimize the parameters. The data analysis was challenging and time-consuming, highlighting opportunities for improvement in software tools that allow for more rapid and routine evaluation of autoverification parameters.
Copyright © 2021 Journal of Pathology Informatics.

Keywords:  Algorithms; clinical laboratory information system; hematology; informatics; middleware

Year:  2021        PMID: 34221635      PMCID: PMC8240550          DOI: 10.4103/jpi.jpi_89_20

Source DB:  PubMed          Journal:  J Pathol Inform


INTRODUCTION

Autoverification, the use of computer-based rules employed in the laboratory information system (LIS) and/or middleware software to determine release of laboratory test results, is now a routine practice in core clinical laboratories.[1,2,3,4] The use of well-designed autoverification rules improves both quality and efficiency.[1,2,4] Autoverification rules have been described in detail for clinical chemistry, blood gas, and coagulation analysis, often achieving autoverification rates of >90%.[5,6,7,8,9,10,11,12] In contrast, published studies regarding the application of autoverification in hematopathology are more limited.[13,14] Zhao et al. describe the implementation of autoverification rules in hematology analysis in a multicenter setting with 76%–85% autoverification rates.[14] The necessity of manual review of peripheral blood smears precludes achieving the high autoverification rates seen in clinical chemistry. On the other hand, high rates of manual review may place a strain on limited laboratory resources and delay turnaround time without adding clinical value. In 2005, the International Consensus Group for Hematology (ICGH) issued guidelines to establish a uniform set of criteria for manual review of automated hematology testing.[15,16,17,18] The proposed criteria for manual review include quantitative and qualitative parameters. Pratumvinit et al. optimized the ICGH guidelines to significantly reduce their review rates and increase autoverification.[18] The basic qualitative criteria used for manual review are well-established; however, the specific quantitative cutoffs to trigger manual review are largely set by the individual laboratory, with some recommendations for individual parameters provided by instrument vendors or published literature.[7,15,16,19,20,21] Individual laboratories ideally should optimize their own set of rules to maintain both quality and efficiency within their own context of instrumentation, staffing, and patient population.
However, data analysis on specific flags and their clinical impact may be quite challenging to assess. In this study, we evaluated autoverification rules at an 800-bed tertiary/quaternary academic medical center core clinical laboratory for a complete blood count (CBC) with white blood cell (WBC) count differential (Diff) and the “a la carte” ordering of individual CBC components. The laboratory had developed and validated autoverification protocols over a decade. Feedback from laboratory staff suggested that some rules were resulting in manual review without clear clinical benefit. We therefore sought opportunities for improvement by assessing the flags that most frequently held specimens for manual review. Our analysis also illustrates some of the data-analytic challenges associated with evaluating hematology autoverification.

METHODS

Institutional details

The present study was performed at an approximately 800-bed tertiary/quaternary care academic medical center. The medical center services included pediatric and adult inpatient units, multiple intensive care units (ICUs), a level I trauma capable emergency treatment center, and outpatient services. Pediatric and adult hematology/oncology services include both inpatient and outpatient populations. For the purpose of this study, patients 18 years and older were classified as adults, with pediatric patients <18-years old. The data in the study were collected as part of a retrospective study approved by the university Institutional Review Board (protocol #201801719) covering the time period from January 1, 2018, to July 31, 2018. This study was carried out in accordance with the Code of Ethics of the World Medical Association (declaration of Helsinki).

Data extraction and analysis

The electronic health record (EHR) throughout the retrospective study period was Epic (Epic Systems, Inc., Madison, Wisconsin, USA), which has been in place since May 2009. The middleware software was Data Innovations (DI) Instrument Manager (DI, Burlington, Vermont, USA) version 8.14; autoverification rules are predominantly within the DI middleware.[5,22] The laboratory information system is Epic Beaker Clinical Pathology.[23] Data were extracted from DI using Microsoft Open Database Connectivity (Microsoft Corporation, Redmond, Washington, USA) and analyzed using Microsoft Excel. Instrument flag data were retrieved from the analyzer and required extensive data cleanup and manual review to assure integrity. One major challenge is that the error messages concatenate on one another in a variety of combinations. Additional File 1 shows an example of the data, de-identified to remove identifying data fields related to accession number, dates/times, and personnel performing the testing. The flag fields are not transmitted to the laboratory information system (Epic Beaker Clinical Pathology),[23] nor are the operator identification numbers that specify who reviewed, released, and rejected results. These fields would be needed to calculate percent autoverification in the laboratory information system if that were a goal.
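The concatenation problem can be illustrated with a short sketch. The flag names, the greedy longest-match strategy, and the raw strings below are illustrative assumptions for this example, not the actual DI export format or the laboratory's cleanup procedure:

```python
# Hypothetical sketch: splitting a concatenated instrument flag message
# into individual flags using a known flag vocabulary. The vocabulary and
# input strings are made up for illustration.

KNOWN_FLAGS = ["IG Present", "Left Shift", "WBC Abn Scattergram", "PLT Clumps"]

def split_flags(raw: str) -> list[str]:
    """Greedily peel known flag names (longest first) off the front of the string."""
    flags = []
    rest = raw
    while rest:
        for name in sorted(KNOWN_FLAGS, key=len, reverse=True):
            if rest.startswith(name):
                flags.append(name)
                rest = rest[len(name):]
                break
        else:
            # Unrecognized prefix: route this record to manual data cleanup
            raise ValueError(f"Unparseable flag string remainder: {rest!r}")
    return flags

print(split_flags("IG PresentLeft Shift"))  # ['IG Present', 'Left Shift']
```

In practice the vocabulary would be built from the instrument and middleware rule lists, and unparseable remainders reviewed by hand.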

Instrument flags

In our laboratory, instrument flags are generated either by the automated hematology instrument manufacturer (Sysmex America) or by our own laboratory-validated rules built in middleware (summarized in Table 1, which indicates the origin of each rule). These flags are either global (i.e., applied to every sample) or patient-specific (e.g., a patient known to have previous samples that required special handling or analysis). When a sample triggers a flag, several outcomes are possible: (1) automatically release the CBC component results but hold the WBC Diff for manual review, (2) hold both the CBC and WBC Diff for manual review, or (3) release all results to the LIS/EHR without manual review (assuming no other flags intervene). For example, the flag for the presence of immature granulocytes (IG) above a set percentage will hold only the WBC Diff and release the CBC, while the thrombocytopenia flag will hold both the CBC and WBC Diff for manual review. IGs on manual review include metamyelocytes, myelocytes, and promyelocytes. Critical value flags, in the absence of other flags, do not preclude autoverification; notification of the clinical services for critical values is by telephone per protocol.
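The three possible dispositions can be sketched as a small rule-evaluation function. The flag-to-action mapping below is a tiny illustrative subset of Table 1, and the "most restrictive action wins" precedence is an assumption made for this sketch:

```python
# Illustrative sketch of flag disposition; not the laboratory's full rule set.
HOLD_BOTH = "hold CBC and WBC Diff"
HOLD_DIFF = "release CBC, hold WBC Diff"
RELEASE = "autoverify all results"

# Small subset of Table 1; critical-value flags alone do not block autoverification.
FLAG_ACTIONS = {
    "IG present": HOLD_DIFF,
    "Thrombocytopenia": HOLD_BOTH,
    "WBC critical": RELEASE,
}

def disposition(triggered: set[str]) -> str:
    """Most restrictive action wins: hold-both > hold-diff > release.
    Unknown flags default conservatively to holding everything."""
    actions = {FLAG_ACTIONS.get(f, HOLD_BOTH) for f in triggered}
    if HOLD_BOTH in actions:
        return HOLD_BOTH
    if HOLD_DIFF in actions:
        return HOLD_DIFF
    return RELEASE

print(disposition({"WBC critical"}))                # autoverify all results
print(disposition({"IG present", "WBC critical"}))  # release CBC, hold WBC Diff
```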
Table 1

Flags for manual review of complete blood cell count and white blood cell count differential tests

Flag type | Flag | Hold for review | Parameters
DI rule | Age <3 days | Only WBC differential
DI rule | Fetal specimen | Both CBC and WBC differential
Sysmex | Function error | Both CBC and WBC differential
DI rule | Sample collection time >24 h | Both CBC and WBC differential
Sysmex | WBC abnormal scattergram | Only WBC differential
DI rule | WBC linearity: Dilute X7 | Both CBC and WBC differential
DI rule | WBC >100.0 | Both CBC and WBC differential
DI rule | WBC >30 | Only WBC differential
DI rule | WBC critical | No, unless held for another flag | 1.0 low, 50.0 high
DI rule | WBC has a nonspecific error flag | Both CBC and WBC differential
DI rule | Leukopenia | Only WBC differential
Sysmex | Neutropenia | Only WBC differential | <10%
Sysmex | IG present | Only WBC differential | >5%
Sysmex | Immature granulocytes | Only WBC differential
Sysmex | Left shift | Only WBC differential
Sysmex | Abnormal lymphocytes/blasts | Only WBC differential
Sysmex | Atypical lymphocytes | Only WBC differential
DI rule | Lymphocytosis | Only WBC differential | >11,500 (0–1 month); >17,500 (1–3 months); >14,000 (3–6 months); >11,000 (6–12 months); >10,000 (1–2 years); >8500 (2–5 years); >7000 (5–18 years); >5000 (>18 years)
DI rule | Monocytosis | Only WBC differential | >20%
DI rule | Eosinophilia | Only WBC differential | >50%
Specific patient | Review smear for Sezary cells | Both CBC and WBC differential
Specific patient | Circulating lymphoma cells | Both CBC and WBC differential
Sysmex | RBC abnormal distribution | Both CBC and WBC differential
DI rule | RBC linearity: Dilute X7 | Both CBC and WBC differential
Sysmex | Reticulocytes abnormal scattergram: Dilute X3 | Both CBC and WBC differential
Sysmex | RBC agglutination | Both CBC and WBC differential
DI rule | Hb clinically significant | No, unless held for another flag
DI rule | Hb critical | No, unless held for another flag | 6.0 low, 22.0 high
DI rule | Hb delta failure | Both CBC and WBC differential | 25%
DI rule | Turbidity/Hb interference: Dilute X7 | Both CBC and WBC differential | If MCHC >38 g/dL
DI rule | HCT critical | No, unless held for another flag | 65 (0–1 month old), 55 (>1 month old)
DI rule | HCT linearity: Dilute X7 | Both CBC and WBC differential
DI rule | MCV delta failure | Both CBC and WBC differential | 7%
DI rule | MCV high | Only WBC differential | 105
DI rule | MCV low: Scan slide | No, unless held for another flag
Sysmex | Dimorphic population | No, unless held for another flag
Sysmex | Fragments | No, unless held for another flag
Sysmex | Absurd MCHC low | Both CBC and WBC differential
DI rule | High MCHC and RDW | No, unless held for another flag
DI rule | MCHC <30 and MCV >100 | Both CBC and WBC differential
DI rule | RDW high | No, unless held for another flag
DI rule | RDW-SD high | No, unless held for another flag | 60
DI rule | NRBC# linearity: Dilute X7 | Both CBC and WBC differential
Specific patient | Verify NRBC count at scope | Both CBC and WBC differential
Sysmex | PLT abnormal scattergram | Both CBC and WBC differential
Sysmex | PLT abnormal distribution | Both CBC and WBC differential
DI rule | PLT critical | No, unless held for another flag | 10 low, 1000 high
DI rule | PLT delta failure | Both CBC and WBC differential | 50%
DI rule | PLT increase delta | Both CBC and WBC differential
DI rule | Thrombocytopenia: Scan PLT | Both CBC and WBC differential
Sysmex | PLT clumps | Both CBC and WBC differential
DI rule | Previously clumped PLT result | Both CBC and WBC differential
DI rule | Burn unit high PLT | Both CBC and WBC differential
Specific patient | PLT satellitism | Both CBC and WBC differential
DI rule | Probable cold agglutinin | Both CBC and WBC differential
Specific patient | Mild cold agglutinin | Both CBC and WBC differential

CBC: Complete blood cell count, WBC: White blood cell count, IG: Immature granulocytes, RBC: Red blood cell count, Hb: Hemoglobin, HCT: Hematocrit, MCV: Mean cell volume, MCHC: Mean cell hemoglobin concentration, RDW: Red cell distribution width, RDW-SD: RDW-standard deviation, NRBC: Nucleated red blood cell count, PLT: Platelet


Automated analyzers

Automated hematology testing was performed by a Sysmex XN-9000 hematology analyzer with a fully automated hematology slide preparation and staining system (Sysmex America, Inc., Lincolnshire, Illinois, USA). This instrument performs platelet (PLT) enumeration either by disruption of electrical current (PLT-I, impedance) or by a flow cytometric method using a fluorescent oxazine dye (PLT-F). Briefly, for the PLT-F method, the dye binds to platelet organelles, the sample is irradiated by a laser beam, and the corresponding forward-scattered light and side-scattered fluorescence are plotted.[24] The PLT-F method better distinguishes between platelets and fragmented red blood cells.[24,25,26] During the timeframe of the present study, PLT-F used higher-cost reagents than PLT-I (approximately 50% more at the onset of the project).

RESULTS

Volume of testing and frequency of flags

Over a 6-month period, a total of 132,432 specimens had CBC with or without WBC Diff or an a la carte order for individual CBC components (PLT, hemoglobin, and hematocrit). Manual review by a technologist was performed on 10,314 of those specimens (7.8%). During this period, a total of 53,396 instrument flags were triggered (note that an individual specimen may trigger up to 15 flags), with 80.3% of samples not associated with any flag. Overall, 9.7% of specimens triggered a single flag, 5.0% triggered two flags, and <1% of samples triggered 5 or more flags [Figure 1a].
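Statistics of this kind can be derived from per-specimen flag counts once the middleware data are cleaned. A minimal sketch, using fabricated counts rather than study data:

```python
from collections import Counter

# Illustrative sketch: one flag count per specimen -> distribution of
# flag counts and percent of unflagged specimens. Data are fabricated.
flag_counts = [0, 0, 0, 0, 0, 0, 0, 0, 1, 2]

dist = Counter(flag_counts)
total = len(flag_counts)
pct_unflagged = 100 * dist[0] / total
pct_one_flag = 100 * dist[1] / total

print(f"{pct_unflagged:.1f}% of specimens had no flags")  # 80.0%
print(f"{pct_one_flag:.1f}% triggered a single flag")     # 10.0%
```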
Figure 1

The number of samples during a 6-month period without an associated flag (80.3%) or with one to four flags are shown in (a). The distribution of samples by patient care area for adult and pediatric patients is shown in (b). Heme/Onc: Hematology/Oncology, ICU: Intensive care unit, ED: Emergency department, OR: Operating room

Pediatric ICUs (including both neonatal and pediatric units) had the highest percentage of flagged samples, with one or more flags on 52.5% of specimens [Figure 1b]. Adult and pediatric non-ICU inpatient units had at least one flag on 29.6% and 28.4% of samples, respectively. Adult hematology/oncology services, which include both an inpatient bone marrow transplant unit and outpatient clinics, had a 28.8% rate of samples with one or more flags. The rate of flagged samples was much lower in outpatient (excluding hematology/oncology), emergency department, and operating room locations, at approximately 10% or less in both adult and pediatric populations.

Frequently triggered flags

To analyze the patterns of flags that frequently triggered manual review for both WBC and PLT parameters, we began by reviewing WBC parameters. This was limited to a 30-day period of analysis due to the extensive nature of data cleanup and manual review for the middleware and instrument data. We looked at two outcomes: (1) flags that would release the CBC while holding the WBC Diff for manual review and (2) flags that would hold both the CBC and WBC Diff for manual review. In the first category of releasing the CBC and holding the WBC Diff for manual review, the IG present flag represented 9.6% of flags during a 30-day review period (20,576 samples and 1,980 flags) [Figure 2a]. The next most frequently triggered flag was the WBC abnormal scattergram at 5.3% (1,087 flags) followed by abnormal lymphocytes or blasts flag at 4.7% (962 flags) [Figure 2a]. These top three most frequently triggered flags are instrument flags, with the ≥2% IG cutoff specified by the laboratory (discussed in more detail below).
Figure 2

The most frequently triggered flags that resulted in manual review of WBC differential while automatically releasing the CBC during a 30-day period are shown in (a) with IG Present as the only flag triggered in 9.6% of samples. In (b), the six most frequently triggered flags that hold both the CBC and WBC differential for manual review are shown, with the most frequently triggered flag Thrombocytopenia, Rerun PLT-F, at 8.0%. IG: Immature granulocytes, Abn WBCs: Abnormal white blood cells, Abn Lymphs/Blasts: Abnormal lymphocytes or blasts, Lymphs: Lymphocytes, MCV: Mean corpuscular volume, PLT: Platelet, HGB: Hemoglobin

For platelets, the PLT-I method was the main methodology used to generate a platelet count, with PLT-F used in certain circumstances. Samples were run for PLT-F based on the following flags: (1) PLT-I <70 k/mm3 (“thrombocytopenia”), (2) 50% change in either direction within the last 7 days (“delta failure”), (3) pediatric inpatients and pediatric hematology/oncology clinic patients (due to known higher rate of red blood cell fragmentation and other specimen challenges), and/or (4) platelet abnormal distribution flag on the hematology analyzer. For 20,576 samples and 1,637 flags during the review period, we identified PLT-I <70 k/mm3 as accounting for 8.0% of flags that were holding both the CBC and WBC Diff to re-run for PLT-F [Figure 2b]. The next most frequently triggered flags to hold CBC and WBC Diff for manual review were PLT clumps (2.2%, 460 flags) and PLT delta failure (1.7%, 349 flags) [Figure 2b].

Most frequently triggered single flag

Next, we examined the samples during a 6-month period that had only a single flag. By far, the IG flag (intended to detect metamyelocytes, myelocytes, and promyelocytes) was the most frequently triggered single flag, representing 6.0% of flags (3,200 samples) [Figure 3a]. The left shift and the abnormal lymphocyte/blasts flags each represented 0.80% (425 flags apiece), while 0.37% of single flags (199 flags) were due to the WBC abnormal scattergram [Figure 3a]. All four flags are generated by instrument rules. The left shift flag primarily detects bands and metamyelocytes. In 1.1% of samples, the IG and left shift flags occurred together and were the only flags present (608 flags) [Figure 3a].
Figure 3

When a single flag for manual review was triggered, the four most frequent rules identified are shown including a potential overlap of parameters in IG Present and Left Shift in (a). Shown in (b) is the difference in manual review rates when the IG cutoff is changed from ≥2% (804 samples) to ≥5% (234 samples). IG: Immature granulocytes, Abn Lymphs or Blasts: Abnormal lymphocytes or blasts, WBC Abn Scatter: White blood cell abnormal scattergram


Optimization of immature granulocyte flag

The IG flag data prompted us to perform a more detailed review of the clinical utility of this flag. The IG flag had been set at ≥2% based on a validation study performed on an earlier generation of hematology analyzer used in the laboratory. The instrument vendor recommended a default trigger for the IG flag at 5%, while a range of 3–5% IG has been reported in the literature.[27,28,29] In order to assess the effect on our patient population if we changed the IG parameter to ≥5%, we performed detailed chart review on CBC samples that had only the IG rule triggered. In a 30-day period, 804 samples underwent manual review due solely to the IG flag with the rule set to trigger at ≥2%; of those reviewed, only 29.1% (234 samples) had an IG of ≥5% [Figure 3b]. Of the 570 samples with ≥2% but <5% IG, most came from inpatient units, with a breakdown of 412 inpatients (72.3%), 145 outpatients (25.4%), and 13 emergency department patients (2.3%). Within the 570 samples, manual chart review identified 4.7% of samples, from 27 unique patients, with promyelocytes (0.9–2.0%) and one with blasts (0.9%). All of these samples were from patients on inpatient or adult hematology/oncology services and were follow-up specimens from patients already worked up and being followed for hematologic issues. Fourteen of the patients identified with promyelocytes were positive for malignancy, six of whom were simultaneously receiving chemotherapy. Seventeen of the 27 patients identified with promyelocytes were receiving daily CBCs during an inpatient encounter. The data were then analyzed to see how the instrument IG estimate compared to the identification of metamyelocytes, myelocytes, and promyelocytes in these specimens by a technologist. Manual review of the 570 samples led to a lower %IG in 91.1% of samples and a higher %IG in only 8.6% of samples. Thus, the instrument IG flag appears to over-estimate %IG relative to manual slide review.
Extrapolating from the 1 month of data, samples with ≥2% but <5% IG comprise an estimated 6,840 samples per year. Given that chart review of this subset did not identify any case where the manual review led to the identification of promyelocytes or blasts that had not already been identified in previous laboratory studies, we made the decision to raise the IG threshold to 5% to match the manufacturer recommendation. Thereafter, the IG parameter, if present as the only flag, triggered manual review only if 5% or greater. The change in this threshold did not impact measurement of other flags.
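The extrapolation follows directly from the 30-day counts:

```python
# Worked version of the extrapolation above: 570 single-flag IG holds per
# 30-day month (>=2% but <5% IG) projected over a year, plus the fraction
# of holds that would remain at a >=5% cutoff.
monthly_ig_only_holds_2_to_5 = 570     # samples/month, from chart review
annual_estimate = monthly_ig_only_holds_2_to_5 * 12

holds_at_2pct = 804                    # single-flag IG holds at >=2% cutoff
holds_at_5pct = 234                    # of those, samples with IG >=5%
pct_still_held = 100 * holds_at_5pct / holds_at_2pct

print(annual_estimate)                 # 6840 samples/year
print(round(pct_still_held, 1))        # 29.1 (%)
```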

Decreased review and re-running of complete blood counts with change to PLT-F

Based on the data and support from the published literature, the laboratory made the decision to switch to the PLT-F method instead of the PLT-I method. Similar to the change in IG threshold, the switch to PLT-F method had highest impact on inpatient samples, with a breakdown of 59.2% inpatient (15.1% of which was ICUs), 31.0% outpatient, and 9.8% emergency department samples during the period of the study. The biggest impact on autoverification resulted from not needing to perform PLT-F for PLT-I <70 k/mm3.

Overall impact of changes

In combination with the above-mentioned change in IG threshold, autoverification rates increased. Figure 4 compares the autoverification rates before and after the changes in PLT-F and IG threshold. The percent increase in autoverification was 5.7% for CBC as a panel and 4.0% for individual CBC components. This translates to an estimated absolute reduction in manual review of 13,266 CBC panels and 1,248 CBC individual components per year. This has a substantial impact on turnaround time for individual samples, since the average turnaround time for a manual differential is about 90 min depending on staffing levels and competing workload. The average time to actually perform a manual differential depends on the complexity of pathologic findings and technologist experience but is typically 5–15 min. Using 10 min as an approximate average time for review, the reduction would translate to nearly a full-time equivalent position (approximately 2,400 h/year, or nearly 300 8-h shifts).
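The workload arithmetic can be checked directly from the study's estimates:

```python
# Worked version of the workload estimate: annual manual reviews avoided,
# at ~10 min of technologist time each, converted to hours and 8-h shifts.
reviews_avoided = 13_266 + 1_248          # CBC panels + individual components/year
hours_saved = reviews_avoided * 10 / 60   # minutes -> hours
shifts_saved = hours_saved / 8            # 8-hour shifts

print(reviews_avoided)       # 14514 reviews/year
print(round(hours_saved))    # 2419 h/year
print(round(shifts_saved))   # 302 shifts
```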
Figure 4

Comparison of platelet-related flags with the switch to universal use of platelet by fluorescent method (PLT-F) method shown in (a) and autoverification rates for complete blood count and individually ordered complete blood count-components in (b)


DISCUSSION

There is a growing body of literature related to the development and optimization of autoverification rules in hematopathology.[13,14] This complements investigations of autoverification for clinical chemistry, blood gas, and coagulation analysis.[5,6,7,8,9,10,11,12] Hematopathology presents particular challenges for autoverification in that rules are intended for a range of purposes, including review of abnormal cells that might be misidentified or missed by instruments (e.g., blasts, Sezary cells), detection of phenomena that can distort analysis (e.g., RBC agglutination and platelet clumping), and unusual changes in quantitative parameters (e.g., dramatic decrease or increase in hemoglobin/hematocrit).[13,14,30] Some of the flags are associated with phenomena that might be either a pre-analytical sample issue or a pathological process in the patient.[14,18,31,32,33,34] A primary challenge for autoverification in hematopathology is to balance efficiency and turnaround time while performing manual review for samples where the review is likely to provide clinical benefit.[4,14,31,33,35] This is especially a challenge for laboratories that analyze a high percentage of samples from patients with hematologic abnormalities, especially those who undergo repeated laboratory analysis over time. In the present study, we evaluated autoverification rules that had been developed over years in our core clinical laboratory. In this process, we were confronted with rules that had been adopted per manufacturer recommendation (especially instrument flags) and those that had been developed and validated over years into an autoverification rule set. We were particularly looking for rules and thresholds that might represent “low-hanging fruit”: rules generating a high frequency of flags but providing low clinical value. A central challenge identified in our study is the difficulty in extracting and analyzing specific data for autoverification.
Our laboratory uses middleware software for most of the autoverification rules. Data retrieval required running a third-party application every month to capture middleware data prior to off-site archival (after which extraction would be more difficult). As described in the methods, the data required extensive cleanup and formatting to be able to drill down to specific flags for patient specimens. Our analysis facilitated operational improvements. The two main changes implemented based on the autoverification analysis were to increase the IG flag cutoff requiring manual review from 2% to 5% and to switch to the PLT-F method for all PLT counts. Ironically, the default manufacturer recommendation for the IG flag of 5% was the choice that minimized unnecessary manual intervention, as we did not identify any clear clinical advantage in the lower threshold that had been set based on experience with an earlier generation of hematology analyzer. The autoverification analysis related to platelets demonstrated the improved efficiency and lower rerun rates with the PLT-F method, which can better distinguish between platelets and fragmented RBCs.[24,25,36,37,38] Given that our laboratory receives many pediatric samples, including from hematology/oncology patients, use of PLT-F minimized repeat analysis for specimens that often contain low sample volumes. The rule changes reported in the present study remain in place, and we are not aware of any clinical issues arising from these changes. A future direction would be the development of software that more easily enables analysis of autoverification rates and the impact of specific rules and flags. This may involve commercial vendor and/or home-grown software development. A data warehouse is one possibility; in the present study, such a warehouse would need to be able to access the DI database, or the DI database would need to be regularly duplicated to a different server.
To allow for reliable evaluation of auto-verification, the data warehouse would ideally have discrete data for specimen comments/flag and operator identification (which could indicate manual versus auto-verification). One practical challenge would be to avoid causing latency issues on the production server. Given limited resources and competing informatics projects, we have not yet pursued such a project. For laboratories seeking to further increase autoverification rates, even identifying one or two rules associated with a high rate of triggering manual review may allow for a significant increase in autoverification while maintaining high quality patient care.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
References:  38 in total (first 10 shown)

1.  Elimination of instrument-driven reflex manual differential leukocyte counts. Optimization of manual blood smear review criteria in a high-volume automated hematology laboratory.

Authors:  Kay L Lantis; R Jayne Harris; Gerald Davis; Nancy Renner; William G Finn
Journal:  Am J Clin Pathol       Date:  2003-05       Impact factor: 2.493

2.  ICSH recommendations for the standardization of nomenclature and grading of peripheral blood cell morphological features.

Authors:  L Palmer; C Briggs; S McFadden; G Zini; J Burthem; G Rozenberg; M Proytcheva; S J Machin
Journal:  Int J Lab Hematol       Date:  2015-03-02       Impact factor: 2.877

3.  Analytical performance of automated platelet counts and impact on platelet transfusion guidance in patients with acute leukemia.

Authors:  Chaicharoen Tantanate; Ladawan Khowawisetsut; Kasama Sukapirom; Kovit Pattanapanyasat
Journal:  Scand J Clin Lab Invest       Date:  2019-02-14       Impact factor: 1.713

4.  Validation rules for blood smear revision after automated hematological testing using Mindray CAL-8000.

Authors:  Sabrina Buoro; Tommaso Mecca; Michela Seghezzi; Barbara Manenti; Giovanna Azzarà; Cosimo Ottomano; Giuseppe Lippi
Journal:  J Clin Lab Anal       Date:  2016-10-06       Impact factor: 2.352

5.  Laboratory productivity and the rate of manual peripheral blood smear review: a College of American Pathologists Q-Probes study of 95,141 complete blood count determinations performed in 263 institutions.

Authors:  David A Novis; Molly Walsh; David Wilkinson; Mary St Louis; Jonathon Ben-Ezra
Journal:  Arch Pathol Lab Med       Date:  2006-05       Impact factor: 5.534

6.  Performance evaluation of platelet counting by novel fluorescent dye staining in the XN-series automated hematology analyzers.

Authors:  Yuzo Tanaka; Yumiko Tanaka; Kazumi Gondo; Yoshiko Maruki; Tamiaki Kondo; Satomi Asai; Hiromichi Matsushita; Hayato Miyachi
Journal:  J Clin Lab Anal       Date:  2014-03-19       Impact factor: 2.352

7.  Do the flags related to immature granulocytes reported by the Sysmex XE-5000 warrant a microscopic slide review?

Authors:  Heidi Eilertsen; Tor-Arne Hagve
Journal:  Am J Clin Pathol       Date:  2014-10       Impact factor: 2.493

8.  Autoverification process improvement by Six Sigma approach: Clinical chemistry & immunoassay.

Authors:  Edward W Randell; Garry Short; Natasha Lee; Allison Beresford; Margaret Spencer; Marina Kennell; Zoë Moores; David Parry
Journal:  Clin Biochem       Date:  2018-03-05       Impact factor: 3.281

9.  Accuracy of a New Platelet Count System (PLT-F) Depends on the Staining Property of Its Reagents.

Authors:  Atsushi Wada; Yuri Takagi; Mari Kono; Takashi Morikawa
Journal:  PLoS One       Date:  2015-10-23       Impact factor: 3.240

10.  Evaluation of criteria of manual blood smear review following automated complete blood counts in a large university hospital.

Authors:  Samuel Ricardo Comar; Mariester Malvezzi; Ricardo Pasquini
Journal:  Rev Bras Hematol Hemoter       Date:  2017-07-31
Cited by:  1 in total

1.  Customized middleware experience in a tertiary care hospital hematology laboratory.

Authors:  Kristine Roland; Jim Yakimec; Todd Markin; Geoffrey Chan; Monika Hudoba
Journal:  J Pathol Inform       Date:  2022-09-24
