Peter Bankhead
Abstract
The potential to use quantitative image analysis and artificial intelligence is one of the driving forces behind digital pathology. However, despite novel image analysis methods for pathology being described across many publications, few become widely adopted and many are not applied in more than a single study. The explanation is often straightforward: software implementing the method is simply not available, or is too complex, incomplete, or dataset-dependent for others to use. The result is a disconnect between what seems already possible in digital pathology based upon the literature, and what actually is possible for anyone wishing to apply it using currently available software. This review begins by introducing the main approaches and techniques involved in analysing pathology images. I then examine the practical challenges inherent in taking algorithms beyond proof-of-concept, from both a user and developer perspective. I describe the need for a collaborative and multidisciplinary approach to developing and validating meaningful new algorithms, and argue that openness, implementation, and usability deserve more attention among digital pathology researchers. The review ends with a discussion about how digital pathology could benefit from interacting with and learning from the wider bioimage analysis community, particularly with regard to sharing data, software, and ideas.
Keywords: computational pathology; digital pathology; image analysis; image processing; open science; software
Year: 2022 PMID: 35481680 PMCID: PMC9324951 DOI: 10.1002/path.5921
Source DB: PubMed Journal: J Pathol ISSN: 0022-3417 Impact factor: 9.883
Figure 1. Nucleus segmentation using image processing, with and without deep learning. (A) Original H&E image. (B) Result of processing the image using colour deconvolution and image filtering to extract the haematoxylin information. (C) Segmented nuclei using a publicly available StarDist deep-learning model trained for fluorescence data [44]. By using the processed image as input rather than the original, the model can achieve reasonable nucleus segmentation performance despite not being trained for H&E images. (D) Result of QuPath's built-in cell detection using conventional image processing. The StarDist deep-learning approach results in more regular contours and handles densely packed regions better, although the total number of nuclei detected is similar (319 and 331, respectively).
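The colour deconvolution step in panel (B) is, in general terms, the Ruifrok–Johnston optical-density unmixing approach. Below is a minimal NumPy sketch under that assumption; the stain vectors are the commonly published standard H&E(+residual) values, not necessarily the ones used to produce the figure, and the function is an illustrative re-implementation rather than QuPath's actual code.

```python
import numpy as np

# Stain vectors (R, G, B optical-density components) from the standard
# Ruifrok & Johnston H&E(+residual) matrix; rows normalised to unit length.
# These are assumed defaults, not the figure's exact calibration.
STAINS = np.array([
    [0.65, 0.70, 0.29],   # haematoxylin
    [0.07, 0.99, 0.11],   # eosin
    [0.27, 0.57, 0.78],   # residual / DAB
])
STAINS = STAINS / np.linalg.norm(STAINS, axis=1, keepdims=True)

def colour_deconvolve(rgb):
    """Separate an 8-bit RGB image into per-stain concentration maps.

    Returns an array of the same shape whose channels correspond to the
    rows of STAINS (haematoxylin, eosin, residual).
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    # Convert transmitted light to optical density (clip to avoid log(0))
    od = -np.log10(np.clip(rgb, 1, 255) / 255.0)
    # Unmix: OD = concentrations @ STAINS, so invert the stain matrix
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAINS)
    return conc.reshape(rgb.shape)
```

The haematoxylin channel of the result (`conc[..., 0]`) is the kind of single-channel "nucleus signal" that can then be filtered and passed to a segmentation model, as described for panel (C).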
Figure 2. The impact of algorithm parameters and cutoff thresholds on analysis results. QuPath's ‘positive cell detection’ command is used to determine Ki67 labelling indices for the same field of view, acquired using two different scanners. This conventional image-processing algorithm uses multiple adjustable parameters, although here only the thresholds for nucleus detection and DAB positivity are varied. Horizontally adjacent images are from the same scanner, while vertically adjacent images are generated using the same thresholds. Detected nuclei are shown as red or blue, depending upon whether they are classified as positive or negative, respectively. Changing either threshold or the scanner can substantially change the results, although typically in predictable ways (e.g. a high detection threshold leads to negative nuclei being missed, inflating the labelling index; a high DAB threshold leads to positive nuclei being misclassified as negative, reducing the labelling index). By combining this knowledge with a careful evaluation of the markup images, a user can identify and address many errors by adjusting the algorithm parameters accordingly.
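The interplay of the two thresholds can be illustrated with a toy calculation. This is a deliberately simplified sketch with made-up per-nucleus optical densities; it is not QuPath's actual ‘positive cell detection’ implementation, which operates on pixels rather than precomputed per-nucleus values.

```python
import numpy as np

def labelling_index(haem_od, dab_od, detection_threshold, dab_threshold):
    """Toy labelling-index calculation.

    A nucleus is 'detected' when its mean haematoxylin optical density
    exceeds detection_threshold; a detected nucleus is 'positive' when
    its mean DAB optical density exceeds dab_threshold.

    Returns (n_positive, n_negative, labelling index in percent).
    """
    haem_od = np.asarray(haem_od, dtype=float)
    dab_od = np.asarray(dab_od, dtype=float)
    detected = haem_od > detection_threshold
    positive = detected & (dab_od > dab_threshold)
    negative = detected & ~positive
    n_pos, n_neg = int(positive.sum()), int(negative.sum())
    index = 100.0 * n_pos / max(n_pos + n_neg, 1)
    return n_pos, n_neg, index
```

With hypothetical data in which negative nuclei happen to be more weakly stained, raising the detection threshold drops those negatives and inflates the index, while raising the DAB threshold reclassifies positives as negatives and reduces it, mirroring the behaviour described in the caption.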