| Literature DB >> 35284186 |
Petru Manescu1, Michael Shaw1,2, Lydia Neary-Zajiczek1, Christopher Bendkowski1, Remy Claveau1, Muna Elmi1, Biobele J Brown3,4,5, Delmiro Fernandez-Reyes1,3,4,5.
Abstract
Automated digital high-magnification optical microscopy is key to accelerating biology research and improving pathology clinical pathways. High-magnification objectives with large numerical apertures are usually preferred for resolving the fine structural details of biological samples, but they have a very limited depth-of-field. Depending on the thickness of the sample, analysis of specimens typically requires the acquisition of multiple images at different focal planes for each field-of-view, followed by the fusion of these planes into an extended depth-of-field (EDoF) image. This translates into low scanning speeds, increased storage space, and processing times unsuitable for high-throughput clinical use. We introduce a novel content-aware multi-focus image fusion approach based on deep learning which effectively extends the depth-of-field of high-magnification objectives. We demonstrate the method on three examples, showing that highly accurate, detailed, extended depth-of-field images can be obtained at a lower axial sampling rate, using 2-fold fewer focal planes than normally required. Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
Year: 2022 PMID: 35284186 PMCID: PMC8884220 DOI: 10.1364/BOE.448280
Source DB: PubMed Journal: Biomed Opt Express ISSN: 2156-7085 Impact factor: 3.732
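The abstract describes fusing several focal planes of a z-stack into a single extended depth-of-field image. As context only, here is a minimal classical focus-stacking sketch in NumPy (per-pixel sharpest-plane selection via Laplacian energy); this is not the paper's wavelet-based or CNN method, and all function names are hypothetical:

```python
import numpy as np

def laplacian_energy(img):
    """Per-pixel sharpness: squared response of a 4-neighbour Laplacian."""
    p = np.pad(img.astype(float), 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return lap ** 2

def fuse_stack(stack):
    """Naive EDoF fusion: at each pixel, keep the value from the focal
    plane with the highest local sharpness. `stack` is (planes, H, W)."""
    sharp = np.stack([laplacian_energy(s) for s in stack])  # (planes, H, W)
    best = np.argmax(sharp, axis=0)                         # (H, W)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A per-pixel argmax like this tends to produce seam artifacts at focus boundaries, which is one motivation for smoother multi-scale (wavelet) or learned fusion such as the approach in this paper.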
Fig. 1. CAMI-Fusion. (a) Trade-offs between imaging speed, objective magnification, and depth of field in optical microscopy. CAMI-Fusion enlarges this design space. (b) Example of a blood sample imaged with low- and high-NA objectives, which require imaging a lower and higher number of focal planes, respectively. The sample thickness was estimated at 4-5 µm. Malaria parasites in their ring stage can only be clearly distinguished with high-NA objectives (100x), as stated in [10]. (c) Overview of the proposed image fusion pipeline. Ground-truth EDoF generation: high-resolution z-stacks with a 0.5 µm axial step size (small z-step) are acquired using a high-magnification objective (100x/1.4NA) and their corresponding extended depth-of-field images (yi) are computed using a wavelet-based approach. The focal planes corresponding to a larger z-step are selected (xi) and passed through a convolutional neural network trained to fuse and restore yi from xi. (d) Network architecture. CAMI-Fusion passes each focal plane through the encoder part of the network and keeps the maximum activations after the residual layers, before the decoder part of the network. There are 5 identical residual blocks (ResBlock) before the fusion operation. RGB patches of 256 × 256 pixels per focal plane are used during training.
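The fusion step in Fig. 1(d) takes an element-wise maximum over per-plane encoder activations, which makes the network independent of the number of input planes. A toy NumPy illustration of that idea, with a hand-rolled correlation standing in for the learned encoder (the filter, loop structure, and function names are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def toy_encoder(plane, filters):
    """Stand-in for the CNN encoder: correlate the plane with a few fixed
    filters (valid mode), giving a (n_filters, h, w) feature map."""
    kh, kw = filters.shape[1:]
    H, W = plane.shape
    out = np.empty((filters.shape[0], H - kh + 1, W - kw + 1))
    for f, filt in enumerate(filters):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[f, y, x] = np.sum(plane[y:y + kh, x:x + kw] * filt)
    return out

def fuse_features(planes, filters):
    """Element-wise max over per-plane feature maps, as in Fig. 1(d).
    Works for any number of input planes, in any order."""
    feats = np.stack([toy_encoder(p, filters) for p in planes])
    return feats.max(axis=0)
```

Because max is commutative and associative, the fused feature map is invariant to the ordering and count of focal planes, which is what lets the same trained network fuse 14, 7, or 3 planes.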
Fig. 2. Results on TBF malaria samples. (a) Malaria parasite (MP) detection accuracy, in terms of Average Precision (AP), of a RetinaNet object detector trained to detect MP in high-resolution EDoF image fields obtained by fusing 14 focal planes. RetinaNet was tested on 30 different image fields obtained by fusing 14, 7, and 3 planes, respectively, using the wavelet transform approach and CAMI-Fusion. (b) Image quality assessment for the three different axial step values. Box-dot plots (n = 30) show SSIM (higher is better) for the EDoF images obtained with the wavelet transform fusion approach and the images obtained with CAMI-Fusion. (c) Image fusion outputs for a Giemsa-stained thick blood film (TBF) used in malaria diagnosis. Shown are fusion results using the wavelet transform approach and CAMI-Fusion for 14, 7, and 3 focal planes, respectively. Malaria parasites are highlighted. (d-e) Additional image similarity comparisons. CORR: Pearson correlation coefficient [26]. HPSI: Haar wavelet-based perceptual similarity index [27]. A comparison of fusion models trained with the combined loss function versus a simple L1 loss can be found in Fig. S1 (see Supplemental document 1).
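The similarity measures reported in Figs. 2-4 compare each reduced-plane fusion result against the 14-plane ground-truth EDoF image. A minimal sketch of two of them, assuming NumPy: Pearson correlation and a simplified single-window SSIM (the figures use the standard local-window SSIM; function names here are hypothetical):

```python
import numpy as np

def pearson_corr(a, b):
    """Pearson correlation coefficient (CORR) between two images."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def global_ssim(a, b, data_range=1.0):
    """Simplified SSIM with means/variances taken over the whole image
    instead of local windows; constants follow the usual SSIM defaults."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return (((2 * mu_a * mu_b + c1) * (2 * cov + c2))
            / ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))
```

Both measures equal 1 for identical images and decrease as the fused output diverges from the ground truth, which is how "higher is better" is read in the box-dot plots.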
Fig. 3. Fusion of image stacks from peripheral blood smears. (a) Image quality assessment for a 1 µm axial step size compared to 0.5 µm. Box-dot plots (n = 14) show SSIM (higher is better) for the EDoF images obtained with the wavelet transform fusion approach and the images obtained with CAMI-Fusion. (c) Image fusion outputs for a Giemsa-stained peripheral blood smear (PBS) used in malaria diagnosis. Shown are fusion results using the wavelet transform approach and CAMI-Fusion for 7 (0.5 µm axial step size) and 3 (1 µm axial step size) focal planes, respectively. Red blood cells infected with malaria parasites are highlighted. (b)-(d) Additional image similarity comparisons. CORR: Pearson correlation coefficient [26]. HPSI: Haar wavelet-based perceptual similarity index [27].
Fig. 4. Fusion of image stacks from bone marrow aspirates. (a) Image quality assessment for a 1 µm axial step size compared to 0.5 µm. Box-dot plots (n = 8) show SSIM (higher is better) for the EDoF images obtained with the wavelet transform fusion approach and the images obtained with CAMI-Fusion. (c) Image fusion outputs for a Wright-stained bone marrow aspirate (BMA) used in AML diagnosis. Shown are fusion results using the wavelet transform approach and CAMI-Fusion for 7 and 3 focal planes, respectively. White blood cells are highlighted. (b)-(d) Additional image similarity comparisons. CORR: Pearson correlation coefficient [26]. HPSI: Haar wavelet-based perceptual similarity index [27].
Fig. 5. Z-stack acquisition and processing times. Processing times were measured on an Intel Core i9 3.1 GHz CPU with an NVIDIA GeForce RTX GPU with 12 GB of memory. *CAMI-Fusion makes use of GPU capabilities. **The wavelet-based EDoF [13] was implemented in MATLAB and does not use GPU capabilities.