Abstract
Digital pathology has gone through considerable technical advances during the past few years, and certain aspects of digital diagnostics have been widely and swiftly adopted in many centers, catalyzed by the COVID-19 pandemic. However, analysis of requirements, careful planning, and structured implementation should be considered in order to reap the full benefits of a digital workflow. The aim of this review is to provide a practical, concise and hands-on summary of issues relevant to implementing and developing digital diagnostics in the pathology laboratory. These include important initial considerations, possible approaches to overcome common challenges, potential diagnostic pitfalls, validation and regulatory issues, and an introduction to the emerging field of image analysis in routine diagnostics.
Keywords: artificial intelligence; digital pathology; image analysis; scanner acquisition; validation
Year: 2022 PMID: 35935788 PMCID: PMC9354827 DOI: 10.3389/fmed.2022.888896
Source DB: PubMed Journal: Front Med (Lausanne) ISSN: 2296-858X
Example performance test for scanners under consideration for purchase.
| Technical aspects | |
| File size | File sizes can vary considerably (up to over fourfold) among different scanners and will therefore have a major impact on the storage space needed. |
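Since file sizes can differ fourfold between scanners, a simple back-of-the-envelope calculation helps size the required storage. The slide volume and per-WSI sizes below are illustrative assumptions, not vendor specifications:

```python
# Rough storage estimate for digital pathology planning.
# All numbers are illustrative assumptions, not vendor specifications.

def daily_storage_gb(slides_per_day: int, avg_file_size_gb: float) -> float:
    """Storage needed per day for a given slide volume and average WSI size."""
    return slides_per_day * avg_file_size_gb

# Hypothetical scenario: 500 slides/day; scanner A produces ~0.5 GB per WSI,
# scanner B ~2 GB (a fourfold difference, as noted in the table).
need_a = daily_storage_gb(500, 0.5)   # 250 GB/day
need_b = daily_storage_gb(500, 2.0)   # 1000 GB/day
```

At these assumed sizes, the choice of scanner alone quadruples the annual storage budget.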
| Scan time at 40× magnification and for dayload | The scanning speed can be a bottleneck in the diagnostic workflow depending on the number of slides, the type of specimen (biopsies vs. resections) and the number of pathologists signing out cases digitally. In our experience, scan times may vary by up to a minute per slide among different scanners at the same magnification. |
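A per-slide difference of one minute compounds quickly over a day's load. A minimal sketch, with an assumed load of 500 slides and illustrative per-slide scan times:

```python
# Back-of-the-envelope check of whether a scanner can handle a day's slide
# load; slide counts and per-slide scan times are illustrative assumptions.

def hours_to_scan(slides: int, seconds_per_slide: float, scanners: int = 1) -> float:
    """Total wall-clock hours to scan a day's load, ignoring interruptions."""
    return slides * seconds_per_slide / scanners / 3600

# A one-minute-per-slide difference matters at volume:
fast = hours_to_scan(500, 60)    # ~8.3 h on a single scanner
slow = hours_to_scan(500, 120)   # ~16.7 h on a single scanner
```

In this assumed scenario the slower scanner can no longer finish overnight, which is exactly where interruptions (next row) hurt most.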
| Interruptions | The vulnerability to interruptions is one of the most important aspects of scanner performance and can have an especially profound impact on overnight scanning. Software, hardware and slide-related issues may all contribute to interruptions. |
| Rescan rate | After the quality control check, establish the proportion of slides that need to be rescanned (e.g., due to focus issues or missing tissue). |
| Focus | Tissue can be either entirely or only partially out of focus. Continuous autofocusing can result in different areas of the slide being out of focus, but in lower rates of WSI being completely out of focus compared with scanners using focus points. |
| Tissue identification | Scanners use their own algorithms to detect tissue on a slide and therefore variability in tissue detection may be seen among different scanners. For patient safety reasons it is crucial that all tissue is scanned and this must be ensured for each slide. Faintly stained tissue (e.g., myxoid substance or fat) can be missed and only partially scanned by some scanners. |
| Openness of file format | The scan file format should ideally be open, allowing an independent choice of image analysis tools, and convertible into other formats. |
| User friendliness | The laboratory staff should be involved in evaluating the usability of scanner software and hardware. |
| Loading of racks | There are several possibilities: manual loading of slides one by one, transfer of entire racks from staining machines in one step, and direct loading of racks from stainers. However, slides with wet mounting medium tend to stick to the rack and will not be scanned. |
| Continuous scanning | Some scanners stop scanning when opened; others continue the process even when opened for reloading, which can have a considerable effect on case distribution depending on the workflow. |
| LIS integration | Integration with the laboratory information system (LIS) is crucial, especially for work in the remote setting in terms of work lists and case management. Cooperation with LIS providers is important to ensure that the requirements of the institute are met. |
| IMS | The image management system (IMS) may be from a different provider than the scanner itself, but is essential for digital sign-out and should be considered during scanner testing. Pathologists should feel comfortable using the IMS, which should provide certain tools (measurement, area calculation, regions of interest, snapshot, etc.). |
| Pathologist evaluation | Pathologists should compare their impression of WSI in terms of scan quality and their level of diagnostic confidence. The case set should include potentially difficult cases (e.g., special stains containing microorganisms). Side-by-side comparisons of WSI from different scanners can be especially helpful in the decision-making process. |
FIGURE 1Layout of our histology laboratory according to Lean principles. The red arrows that represent the path taken by a specimen from acquisition to the MD are unidirectional (“LEAN biopsy/resection street”). Therefore, the Lean solution was to place scanners in the slide sorting area (red circle), which minimizes waste both for a hybrid solution (digital and conventional sign-out) and fully digital sign-out.
FIGURE 2 Scanned Giemsa stains from two different scanners (A,B) used in diagnostic routine at our institute in a case of H. pylori gastritis (both at 40×, images courtesy of Ursina Begré). Note the differences in color and brightness between scanners. In a survey of our MDs, scanner B provided a higher level of diagnostic confidence on Giemsa scans of gastric biopsies, but both scanners performed similarly at 40×. Therefore, Giemsa stains for gastric biopsies are scanned at the higher magnification by default.
Summary of good practice statements (GPS) of the College of American Pathologists (3).
| GPS 1: All pathology laboratories implementing digital pathology for diagnostic purposes should carry out their own validation studies. |
| GPS 2: Validation should be appropriate for and applicable to the intended clinical use and clinical setting of the particular application. Validation of WSI systems should involve specimen preparation types relevant to intended use. If a new application for WSI is desired and differs materially from the previously validated use, a separate validation for the new application should be performed. |
| GPS 3: Validation should closely simulate the real-world clinical environment in which the technology will be used. |
| GPS 4: Validation should encompass the entire WSI system. However, it is not necessary to validate each individual component (i.e., computer hardware, monitor, network, scanner) of the system or the individual steps of the digital imaging process. |
| GPS 5: Laboratories should have procedures in place to address changes to the digitized system that could impact clinical results. |
| GPS 6: Pathologists adequately trained to use the WSI system must be involved in the validation process. |
| GPS 7: The validation process should confirm all material on a glass slide to be scanned is included in the digital image. |
| GPS 8: Documentation should be maintained recording the method, measurements, and final approval of validation for the WSI system to be used in the laboratory. |
| GPS 9: Pathologists should review slides in a validation set in random order. This applies both to the review modality (glass slides or digital) and the order in which slides are reviewed within each modality. |
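The randomization demanded by GPS 9 can be scripted so that both the modality sequence and the within-modality slide order are shuffled independently. A minimal sketch; the function name and case IDs are hypothetical:

```python
# Sketch of GPS 9: review validation slides in random order, randomizing
# both the modality sequence and the slide order within each modality.
import random

def randomized_review_plan(case_ids, seed=None):
    """Return (modality, case_id) pairs: modalities in random order,
    and the slide order shuffled independently within each modality."""
    rng = random.Random(seed)
    modalities = ["glass", "digital"]
    rng.shuffle(modalities)                # random modality sequence
    plan = []
    for modality in modalities:
        order = list(case_ids)
        rng.shuffle(order)                 # independent shuffle per modality
        plan.extend((modality, cid) for cid in order)
    return plan

# Hypothetical validation set of five cases:
plan = randomized_review_plan([f"case-{i:02d}" for i in range(1, 6)], seed=42)
```

Fixing a seed makes the plan reproducible for the validation documentation required by GPS 8.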
Considerations for validation study design for image analysis tools.
| Ground truth definition | The algorithm output must be compared to a ground truth to establish precision and recall (precision is the proportion of positive identifications that were actually correct, i.e., a model with no false positives has a precision of 1.0; recall is the proportion of actual positives that were correctly identified, i.e., a model with no false negatives has a recall of 1.0). This can be done in several ways: (1) Manual annotation: the most exact method for comparison to algorithm output, but time-consuming. (2) Eyeballing (if applicable): the region of interest can be pre-set; replicates the real-life diagnostic setting. (3) Comparison with previously reported values (derived from the LIS): the least exact method, as the region of interest is not standardized, but the least time-consuming. |
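The precision and recall definitions above translate directly into code. The counts in the usage lines are illustrative:

```python
# Precision and recall exactly as defined above:
# precision = TP / (TP + FP), recall = TP / (TP + FN).

def precision(tp: int, fp: int) -> float:
    """Proportion of positive identifications that were actually correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Proportion of actual positives that were correctly identified."""
    return tp / (tp + fn)

# A model with no false positives has precision 1.0;
# one with no false negatives has recall 1.0.
assert precision(tp=50, fp=0) == 1.0
assert recall(tp=50, fn=0) == 1.0
p = precision(80, 20)   # 0.8
r = recall(80, 40)      # 80/120 ≈ 0.667
```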
| Case selection | There are no published guidelines on the number of cases that should be included, but the case mix should reflect the real-life setting in terms of morphological heterogeneity and complexity (e.g., different histological variants/subtypes). |
| Acceptable range of output values | Define the acceptable range of deviation from the ground truth. This may depend on clinically relevant cutoffs that determine therapy (e.g., PD-L1, Ki67). |
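Such an acceptance check can be tightened near a therapy-determining cutoff. A minimal sketch; the tolerances and the Ki67-style cutoff of 20% are illustrative assumptions, not validated thresholds:

```python
# Sketch: flag algorithm outputs whose deviation from the ground truth
# exceeds an acceptable range, with a stricter tolerance near a clinically
# relevant cutoff. All threshold values are illustrative assumptions.

def within_acceptable_range(algo_value: float, truth_value: float,
                            tolerance: float = 10.0, cutoff: float = 20.0,
                            cutoff_tolerance: float = 5.0) -> bool:
    """Accept if |algo - truth| <= tolerance, but apply the stricter
    cutoff_tolerance when the ground truth lies close to the cutoff."""
    near_cutoff = abs(truth_value - cutoff) <= cutoff_tolerance
    tol = cutoff_tolerance if near_cutoff else tolerance
    return abs(algo_value - truth_value) <= tol

# Far from the cutoff a 7-point deviation passes; near it, 6 points fail:
ok = within_acceptable_range(28.0, 35.0)    # True
bad = within_acceptable_range(25.0, 19.0)   # False
```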
| Possible confounding effects | If the algorithm is run on WSI from several scanners, check whether the scanner has an effect on the algorithm output. |
| Identify discrepant cases and analyze reasons for discrepancy | Output values outside the defined acceptable range are discrepant from the ground truth. Can systematic reasons be identified (for example, the threshold of color detection or falsely identified tumor cells)? Are the ground truth values really correct? In the case of substantial discrepancies, support from the provider may be warranted. |
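The last two rows can be combined into one review step: list discrepant cases and tally them per scanner to spot a possible scanner effect. A sketch with made-up case data and an assumed tolerance:

```python
# Sketch of a discrepancy review: flag cases outside the acceptable range
# and count discrepancies per scanner to reveal a possible confounding
# effect. Case data and the tolerance of 10 are illustrative assumptions.

def find_discrepancies(results, tolerance=10.0):
    """results: list of dicts with case_id, scanner, algo, truth.
    Returns (discrepant case IDs, per-scanner discrepancy counts)."""
    discrepant, per_scanner = [], {}
    for r in results:
        if abs(r["algo"] - r["truth"]) > tolerance:
            discrepant.append(r["case_id"])
            per_scanner[r["scanner"]] = per_scanner.get(r["scanner"], 0) + 1
    return discrepant, per_scanner

data = [
    {"case_id": "c1", "scanner": "A", "algo": 32.0, "truth": 30.0},
    {"case_id": "c2", "scanner": "B", "algo": 55.0, "truth": 40.0},
    {"case_id": "c3", "scanner": "B", "algo": 12.0, "truth": 25.0},
]
bad, by_scanner = find_discrepancies(data)
# bad == ["c2", "c3"]; both discrepancies cluster on scanner B
```

If the counts cluster on one scanner, as in this made-up example, the scanner itself is a candidate systematic cause worth raising with the provider.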