| Literature DB >> 30559434 |
Title: Deep learning enables cross-modality super-resolution in fluorescence microscopy
Hongda Wang, Yair Rivenson, Yiyin Jin, Zhensong Wei, Ronald Gao, Harun Günaydın, Laurent A Bentolila, Comert Kural, Aydogan Ozcan.
Abstract
We present deep-learning-enabled super-resolution across different fluorescence microscopy modalities. This data-driven approach does not require numerical modeling of the imaging process or the estimation of a point-spread-function, and is based on training a generative adversarial network (GAN) to transform diffraction-limited input images into super-resolved ones. Using this framework, we improve the resolution of wide-field images acquired with low-numerical-aperture objectives, matching the resolution that is acquired using high-numerical-aperture objectives. We also demonstrate cross-modality super-resolution, transforming confocal microscopy images to match the resolution acquired with a stimulated emission depletion (STED) microscope. We further demonstrate that total internal reflection fluorescence (TIRF) microscopy images of subcellular structures within cells and tissues can be transformed to match the results obtained with a TIRF-based structured illumination microscope. The deep network rapidly outputs these super-resolved images, without any iterations or parameter search, and could serve to democratize super-resolution imaging.
Entities:
Mesh:
Year: 2018 PMID: 30559434 PMCID: PMC7276094 DOI: 10.1038/s41592-018-0239-0
Source DB: PubMed Journal: Nat Methods ISSN: 1548-7091 Impact factor: 28.547
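The abstract describes training a GAN to map diffraction-limited fluorescence images to super-resolved ones. Below is a minimal PyTorch sketch of that general training pattern, for orientation only: the network sizes, residual design, loss weights, and all names here are illustrative assumptions, not the authors' published architecture or hyperparameters.

```python
# Hypothetical sketch of GAN-based image-to-image super-resolution training.
# Architecture and hyperparameters are assumptions, not those of the paper.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a low-resolution image to a super-resolved estimate (same pixel grid here)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual: predict the missing high-frequency detail

class Discriminator(nn.Module):
    """Scores images as real (ground-truth high-res) or fake (generator output)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(2 * ch, 1, 4, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # one logit per image
        )

    def forward(self, x):
        return self.net(x)

def train_step(G, D, opt_g, opt_d, low_res, high_res, adv_weight=0.01):
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    # Discriminator update: real high-res vs. detached generator output.
    fake = G(low_res).detach()
    d_loss = bce(D(high_res), torch.ones(high_res.size(0), 1)) + \
             bce(D(fake), torch.zeros(low_res.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: pixel fidelity plus a small adversarial term.
    sr = G(low_res)
    g_loss = l1(sr, high_res) + adv_weight * bce(D(sr), torch.ones(low_res.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
    # Stand-ins for registered low/high-resolution image pairs from the two modalities.
    low, high = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
    print(train_step(G, D, opt_g, opt_d, low, high))
```

In this kind of setup the pixel-wise L1 term anchors the output to the registered ground truth while the small adversarial weight pushes the generator toward realistic high-frequency detail; actual training would use co-registered image pairs from the two modalities (e.g., confocal/STED or TIRF/TIRF-SIM) rather than random tensors.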