| Literature DB >> 35805897 |
Yoonsuk Hyun, Doory Kim.
Abstract
Recent developments in super-resolution fluorescence microscopy (SRM) techniques have enabled nanoscale imaging that greatly facilitates our understanding of nanostructures. However, the performance of single-molecule localization microscopy (SMLM) is significantly restricted by the image analysis method, as the final super-resolution image is reconstructed from identified localizations through computational analysis. With recent advances in deep learning, many researchers have employed deep-learning-based algorithms to analyze SMLM image data. This review discusses recent developments in deep-learning-based SMLM image analysis, including the limitations of existing fitting algorithms and how the quality of SMLM images can be improved through deep learning. Finally, we address possible future applications of deep learning methods for SMLM imaging.
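The review contrasts conventional fitting-based localization with deep-learning analysis. As a point of reference, the conventional localization step — estimating an emitter position from a pixelated Gaussian PSF, here by a simple center-of-mass estimate rather than full MLE fitting — can be sketched in NumPy as follows. All function names and parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_psf(shape=(15, 15), x0=7.3, y0=6.8, sigma=1.5, photons=1000.0):
    """Render an ideal 2D Gaussian PSF on a pixel grid (no noise)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    psf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    return photons * psf / psf.sum()

def localize_centroid(img):
    """Estimate the emitter position as the intensity-weighted center of mass."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (xx * img).sum() / total, (yy * img).sum() / total

img = simulate_psf()
x_hat, y_hat = localize_centroid(img)  # close to the true (7.3, 6.8)
```

In practice SMLM software fits a Gaussian (or measured) PSF model by least squares or MLE; the centroid stands in here only to make the localization-then-reconstruction pipeline concrete.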
Keywords: computer vision; deep learning; single-molecule localization microscopy; super-resolution microscopy
Year: 2022 PMID: 35805897 PMCID: PMC9266576 DOI: 10.3390/ijms23136896
Source DB: PubMed Journal: Int J Mol Sci ISSN: 1422-0067 Impact factor: 6.208
Figure 1. Single-molecule localization microscopy (SMLM). (A,B) The principles of (A) STORM and (B) PALM. Adapted from [8] under the Creative Commons Attribution (CC BY) license.
Figure 2. A visualization of deep learning architectures. (A) Multi-layer perceptron. (B) CNN-based feature network [44]. (C) Encoder–decoder architecture. (D) Recurrent neural network. (E) Transformer [45]. (F) Generative adversarial network [46].
Figure 3. A list of computer vision algorithms. (A) Image classification. (B) Object detection. (C) Semantic segmentation. (D) Image reconstruction. The top shows a super-resolution algorithm, while the bottom shows an image-deblurring algorithm. (E) Image generation from random noise. The sample image is taken from the MS COCO dataset, provided under a Creative Commons Attribution 4.0 License [69].
Comparison of reported studies on deep-learning-based single-molecule localization image analysis.
| Type | Name | Architecture | Algorithm | Input | Output | Training Data | Reference |
|---|---|---|---|---|---|---|---|
| Acceleration of single-molecule localization | Deep-STORM | Encoder–Decoder | Image Reconstruction | Camera images with multiple PSFs | SR image in 2D | Simulated images of emitters and microtubules; experimental images of microtubules | [ |
| | smNET | ResNet-like CNN | Regression | Individual images of PSFs | 3D coordinates of PSFs, orientation, etc. | Simulated images of emitters | [ |
| | -- | CNN | Regression | Individual images of PSFs | 3D coordinates of PSFs | Simulated and experimental images of beads | [ |
| | DeepLOCO | CNN + FC with residual connections | Regression | Camera images with PSFs | 3D coordinates of PSFs | Simulated and contest data | [ |
| Constructing high-density super-resolution image | ANNA-PALM | U-Net, GAN | Image Generation | Widefield image, image sequences with multiple PSFs | Super-resolved 2D image | Simulated images of microtubules; experimental images of microtubules, nuclear pores, and mitochondria | [ |
| | -- | CNN | Image Reconstruction | Images of individual PSFs | Super-resolved 2D image | Experimental images of microtubules, mitochondria, and peroxisomes | [ |
| Improvement of localization precision | BGnet | U-Net | Image Reconstruction | Images of individual PSFs | Background and intensity of PSFs | Experimental images of microtubules | [ |
| Localization of overlapping PSFs | DECODE | U-Nets | Image Reconstruction | Image sequences with multiple PSFs | 3D coordinates of PSFs, intensity, background, uncertainty | Contest data | [ |
| Extracting additional spectral information from PSF | -- | FC | Classification, Regression | Individual images of PSFs | Axial position, color | Simulated images of emitters | [ |
| | -- | CNN, Encoder–Decoder | (1) Classification, | (1) Images of individual PSFs | (1) Color channel | Simulated images of emitters | [ |
| | -- | FC | Classification | Full spectra | Color channel | Experimental images of microtubules and mitochondria | [ |
| | DeepSTORM3D | CNN with skip connections | Image Reconstruction | (1) Simulated point sources | (1) PSFs | Simulated images of emitters | [ |
Figure 4. A deep-learning-based method for fast single-molecule localization. (A) Network architecture of Deep-STORM. A set of diffraction-limited images of blinking emitters is fed into the convolutional neural network to generate the final super-resolved image. (B) Experimental microtubule images. (Left) Diffraction-limited, low-resolution image. (Middle) Image reconstructed by the CEL0 method. (Right) Image reconstructed by Deep-STORM. Scale bar: 2 μm. (C) Comparison of runtimes between different methods. Adapted from [31] under the OSA Open Access Publishing Agreement.
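Deep-STORM produces its output on a grid several-fold finer than the camera pixels. The conventional, non-learned counterpart of that final step — rendering a localization list as a 2D histogram on an upsampled grid — can be sketched in NumPy as follows; the field of view and upsampling factor are illustrative assumptions, not values from the paper:

```python
import numpy as np

def render_sr_image(xs, ys, fov=32, upsample=8):
    """Reconstruct a super-resolution image by histogramming localizations
    (in camera-pixel units) onto a grid `upsample`x finer than the camera."""
    bins = fov * upsample
    img, _, _ = np.histogram2d(ys, xs, bins=bins, range=[[0, fov], [0, fov]])
    return img  # shape (fov*upsample, fov*upsample); rows = y, columns = x

rng = np.random.default_rng(0)
xs = rng.uniform(0, 32, 500)
ys = rng.uniform(0, 32, 500)
sr = render_sr_image(xs, ys)  # 256 x 256 image; pixel values count localizations
```

Each super-resolution pixel simply counts the localizations falling inside it; real SMLM renderers often add per-localization Gaussian blurring weighted by the localization uncertainty.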
Figure 5. A deep-learning-based method for constructing a high-density super-resolution image from a low-density image. (A) The trained deep CNNs reconstruct the high-density multicolor image from a small number of image frames. (B) Two-color STORM images of AF647-labeled tubulin (cyan) and CF660C-labeled mitochondria (magenta) in a COS-7 cell. (Left) Image reconstructed from 3000 frames by the existing fitting method. (Middle) Image reconstructed from 3000 frames by the deep CNN. (Right) Image reconstructed from 19,997 frames by the existing fitting method. Scale bar = 1.5 μm. Adapted from [36] under the OSA Open Access Publishing Agreement.
Figure 6. A deep-learning-based method for improving localization precision. (A) The overall BGnet workflow. BGnet predicts the background (BG) of the input PSF image, which is then subtracted to generate a BG-corrected PSF for subsequent analysis, for example via MLE fitting for position estimation. Scale bar: 1 μm. (B) Network architecture of BGnet. (C) (Left) Super-resolution images of microtubules in fixed BSC-1 cells using BG correction with a constant BG estimate or with BGnet. (Right) Magnified images of the boxed regions. Scale bars: 5 μm for the left images and 500 nm for the right images. Adapted from [37] under the Creative Commons Attribution (CC BY) license.
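The motivation behind BGnet — uncorrected background biases position estimates — is visible even in the simplest case. The NumPy sketch below shows a constant background pulling a center-of-mass estimate toward the window center, and the bias vanishing after subtraction. This is only the constant-background special case (BGnet targets structured, non-uniform backgrounds), and all numbers are illustrative:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted center of mass of an image."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (xx * img).sum() / img.sum(), (yy * img).sum() / img.sum()

# Off-center Gaussian PSF on a 15x15 window, plus a constant background.
yy, xx = np.mgrid[0:15, 0:15]
psf = 200.0 * np.exp(-((xx - 9.5) ** 2 + (yy - 9.5) ** 2) / (2 * 1.5 ** 2))
img = psf + 20.0  # constant background of 20 counts/pixel

x_raw, _ = centroid(img)          # biased toward the window center (x = 7.0)
x_corr, _ = centroid(img - 20.0)  # background-corrected: recovers x ~ 9.5
```

The same bias mechanism degrades MLE fits when the background model is wrong, which is why BGnet estimates the background pixel-wise before the fitting step.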
Figure 7. A deep-learning-based method for localizing overlapping PSFs. (A) The network architecture of DECODE. Using information from multiple image frames, DECODE predicts output maps representing the detection probability, subpixel spatial coordinates, brightness, uncertainty, and an optional background for each pixel. (B) Comparison between the performance of DECODE and the CSpline algorithm on the high-density, low-signal double-helix challenge training data. Scale bars, 1 μm. Adapted with permission [19].