
Real-time multi-view deconvolution.

Benjamin Schmid, Jan Huisken.

Abstract

In light-sheet microscopy, overall image content and resolution are improved by acquiring and fusing multiple views of the sample from different directions. State-of-the-art multi-view (MV) deconvolution simultaneously fuses and deconvolves the images in 3D, but processing takes a multiple of the acquisition time and constitutes the bottleneck in the imaging pipeline. Here, we show that MV deconvolution in 3D can finally be achieved in real-time by processing cross-sectional planes individually on the massively parallel architecture of a graphics processing unit (GPU). Our approximation is valid in the typical case where the rotation axis lies in the imaging plane.
AVAILABILITY AND IMPLEMENTATION: Source code and binaries are available on GitHub (https://github.com/bene51/): native code in the repository 'gpu_deconvolution', Java wrappers implementing Fiji plugins in 'SPIM_Reconstruction_Cuda'.
CONTACT: bschmid@mpi-cbg.de or huisken@mpi-cbg.de
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
© The Author 2015. Published by Oxford University Press.


Year:  2015        PMID: 26112291      PMCID: PMC4595906          DOI: 10.1093/bioinformatics/btv387

Source DB:  PubMed          Journal:  Bioinformatics        ISSN: 1367-4803            Impact factor:   6.937


1 Introduction

MV imaging is particularly useful in light-sheet microscopy, where consecutive views are acquired in short succession, allowing entire developing organisms to be reconstructed without artifacts (Huisken et al., 2004). Due to the low photo-toxicity of light-sheet microscopy, time-lapse experiments often run for days, and terabytes of data accumulate quickly. It is therefore particularly desirable to perform MV fusion in real time, eliminating redundant information from the different views as they are acquired. The best fusion results, however, are achieved by combining fusion with 3D deconvolution (Swoger et al., 2007; Verveer et al., 2007; Wu et al., 2013). Although efficient Bayesian MV deconvolution based on the Richardson–Lucy (RL) algorithm was recently shown to outperform existing methods in fusion quality and convergence speed, it is still too slow for real-time processing of typical data volumes (Preibisch et al., 2014). The RL iterations consist only of convolutions and pixel-wise arithmetic operations and could therefore be significantly accelerated on dedicated hardware such as a graphics processing unit (GPU). The large memory requirements of MV deconvolution, however, exceed the limited resources of modern GPUs even for moderate data sizes (Supplementary Note S1). Previous attempts therefore required splitting the data into blocks of appropriate size. Each block then either had to be transferred to and from the GPU in every RL iteration (Preibisch et al., 2014), or blocks needed to share a considerable amount of overlap to avoid border artifacts (Temerinac-Ott et al.). As a result, GPU-based implementations achieved only a threefold performance gain (Preibisch et al., 2014).
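The structure of an RL iteration, convolutions plus pixel-wise arithmetic, can be made concrete with a minimal NumPy sketch of a sequential multi-view RL update on a single 2D plane. Function names and the circular FFT-based convolution are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def fftconv(img, psf):
    # Circular convolution via FFT; psf must have the same shape as img
    # and be centred, which ifftshift moves to the origin.
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def mv_rl_plane(views, psfs, n_iter=10, eps=1e-6):
    """Sequential multi-view Richardson-Lucy on one 2D plane.
    views: registered 2D images of the same plane, one per view
    psfs:  matching 2D PSFs, each normalized to sum to 1"""
    est = np.mean(views, axis=0)              # start from the average fusion
    for _ in range(n_iter):
        for img, psf in zip(views, psfs):
            blurred = fftconv(est, psf)       # forward model for this view
            ratio = img / np.maximum(blurred, eps)
            est = est * fftconv(ratio, psf[::-1, ::-1])  # correlate with flipped PSF
    return est
```

Each update is just a pair of convolutions plus element-wise multiply and divide, which is exactly the workload that maps well onto a GPU's parallel architecture.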

2 Results

The primary goal of MV fusion is the improvement of the poor axial resolution in a single 3D dataset using the superior lateral resolution of an additional, overlapping dataset, and not necessarily to improve resolution beyond the intrinsic lateral resolution. We therefore approximated the full 3D point spread function (PSF) with a 2D PSF, neglecting one lateral component (along the rotation axis), and processed each plane orthogonal to the rotation axis independently (Fig. 1a). Memory requirements were thereby reduced by the number of lines read out from the camera chip, i.e. typically 100–1000 fold (Fig. 1b). This allowed us to implement the entire MV deconvolution on a GPU. Taking advantage of three CUDA (Compute Unified Device Architecture) streams, we interleaved GPU computations with data transfers, such that not only expensive copying to and from GPU memory, but also reading and writing data from and to the hard drive came without additional cost (Supplementary Note S2). Compared with 3D MV deconvolution, with and without GPU support, we thereby reduced processing times by factors of up to 25 and 75, respectively (Fig. 1c, Supplementary Table S1), while producing comparable results.
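The plane-wise scheme can be pictured as a thin driver that reslices each registered volume along the rotation axis and hands matching 2D planes to a per-plane deconvolver. The sketch below is our own illustration, not the authors' code: the per-plane callable is pluggable (the test uses plain averaging as a stand-in for 2D MV deconvolution), and treating the rotation axis as a volume axis is an assumption for simplicity:

```python
import numpy as np

def deconvolve_planewise(volumes, deconvolve_plane, rotation_axis=0):
    """Process each plane orthogonal to the rotation axis independently.
    volumes: list of registered 3D arrays, one per view
    deconvolve_plane: callable taking a list of 2D planes, returning one 2D plane"""
    resliced = [np.moveaxis(v, rotation_axis, 0) for v in volumes]
    out = np.empty_like(resliced[0])
    for z in range(out.shape[0]):
        # only one plane per view needs to be resident at a time, which is
        # where the 100-1000 fold memory reduction comes from
        out[z] = deconvolve_plane([v[z] for v in resliced])
    return np.moveaxis(out, 0, rotation_axis)
```

Because each plane is independent, planes can be streamed through the GPU one after another while the next plane is being transferred.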
Fig. 1.

Plane-wise multi-view deconvolution concept and performance. (a) Concept of plane-wise deconvolution for two views. Each dataset is resliced into planes orthogonal to the microscope’s rotation axis. Datasets are deconvolved plane-by-plane. (b) Memory requirements for traditional 3D and our plane-wise multi-view deconvolution, for various data sizes and numbers of views, on a logarithmic scale. (c) Execution times for plane-wise multi-view deconvolution, implemented on GPU and CPU, and 3D deconvolution, with and without GPU support. Memory requirements for 3D deconvolution timings for the 2048³ pixel dataset were beyond the capabilities of our workstation. (d–i) Resulting images of a 9 h post-fertilization transgenic Tg(h2afva:h2afva-mCherry) zebrafish embryo, using different methods (view along the rotational axis, scale bar 100 µm, 10 µm in the inset): (d, e) acquired raw data, (f–i) fusion performed by (f) averaging, (g) entropy-weighted averaging, (h) 3D multi-view deconvolution and (i) plane-wise multi-view deconvolution (10 iterations). (Dell T6100, 2 × Intel E5-2630 @ 2.3 GHz, 64 GB RAM; graphics card: Nvidia GeForce GTX TITAN Black)

We compared the results of our implementation with methods commonly used in the light-sheet community: established 3D deconvolution (Preibisch et al., 2014), averaging and entropy-based fusion (Preibisch et al., 2010) (Fig. 1d–i). Both averaging and entropy-based fusion were blurry and showed cross-shaped artifacts originating from the elongated PSFs along the optical axes. Three-dimensional deconvolution and our plane-wise variant reduced artifacts and enhanced the contrast, thus truly improving the resolution in the fused dataset (Fig. 1h and i; Supplementary Fig. S1). Although registration of the different views is still required, it can be performed in pre-processing before starting a time-lapse experiment, owing to the repeatability of high-quality microscope stages.
Multi-view deconvolution can then be performed in real time, directly as the data are transferred from the camera. We provide our software as a C library that can be linked directly to camera acquisition software for real-time processing, and as plugins for Fiji (Schindelin et al., 2012) (Supplementary Material).
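The interleaving of data transfers and computation that the three CUDA streams provide can be pictured as a three-stage producer/consumer pipeline: while one plane is being computed, the next is being read in and the previous result written out. The following thread-based Python sketch is only an analogy for the stream scheme; the stage functions and queue depths are our own choices:

```python
import queue
import threading

def pipeline(read, compute, write, items):
    """Overlap reading, computing and writing so that I/O hides behind compute,
    analogous to running transfers and kernels on separate CUDA streams."""
    q_in, q_out = queue.Queue(maxsize=2), queue.Queue(maxsize=2)

    def reader():
        for item in items:
            q_in.put(read(item))
        q_in.put(None)                      # end-of-stream sentinel

    def worker():
        while (x := q_in.get()) is not None:
            q_out.put(compute(x))
        q_out.put(None)

    threading.Thread(target=reader, daemon=True).start()
    threading.Thread(target=worker, daemon=True).start()
    results = []
    while (y := q_out.get()) is not None:
        results.append(write(y))
    return results
```

With bounded queues, the slowest stage sets the overall throughput; when compute dominates, reading and writing come at no additional cost, as described above.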

3 Validation

Our plane-wise deconvolution approximates 3D deconvolution by neglecting the contribution of the PSF along the rotation axis. It is therefore suited to systems with a single rotation axis lying within the imaging plane. Using artificial data (Supplementary Fig. S2 and Table S2), we confirmed the applicability of our approximation even if the rotation axis is slightly tilted (Supplementary Fig. S3). Its validity is independent of the amount of noise (Supplementary Fig. S4) but depends on the lateral extent of the PSF. Keeping the axial standard deviation fixed at eight pixels, a typical value measured on our microscopes, we found that up to a lateral standard deviation of 2–3 pixels, results from plane-wise and 3D deconvolution are indistinguishable (Supplementary Fig. S5). The measured lateral standard deviation of the PSF was typically between 1.5 and 1.8 pixels on our microscopes.
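The dependence of the approximation on the lateral PSF width can be probed with synthetic data: blur a point source with a separable 3D Gaussian PSF, and with the plane-wise variant that drops the component along the rotation axis, then compare. This is a crude forward-model proxy of our own making (the sigma values and the 4-sigma kernel truncation are assumptions), not the authors' Supplementary analysis:

```python
import numpy as np

def blur1d(vol, sigma, axis):
    # separable 1D Gaussian blur along one axis, kernel truncated at 4 sigma
    r = max(int(4 * sigma), 1)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), axis, vol)

def planewise_error(sigma_rot, sigma_lat=2.0, sigma_ax=8.0, n=65):
    """Relative error of neglecting the lateral PSF component along the
    rotation axis (axis 0) for a point source; axes 1/2 are lateral/axial."""
    vol = np.zeros((n, n, n))
    vol[n // 2, n // 2, n // 2] = 1.0
    common = blur1d(blur1d(vol, sigma_lat, 1), sigma_ax, 2)
    full = blur1d(common, sigma_rot, 0)   # full 3D PSF
    plane = common                        # plane-wise: axis-0 component dropped
    return np.abs(full - plane).max() / full.max()
```

As expected, the discrepancy grows monotonically with the neglected lateral sigma, consistent with the observation that the approximation only holds for small lateral PSF widths.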

4 Conclusion

The photo-efficiency of light-sheet microscopy enables long time-lapse imaging of living samples to study fundamental questions in developmental biology. However, its huge data rates also pose new challenges for data processing. A key problem in light-sheet microscopy has been the fusion of data recorded from multiple angles. In this article, we presented a new method that performs MV deconvolution plane-wise, which reduces memory requirements compared with existing methods and thus permits an entirely GPU-based implementation. The achieved acceleration makes MV deconvolution applicable in real time for the first time, without the need for data cropping or resampling.
References (7 in total)

1.  Optical sectioning deep inside live embryos by selective plane illumination microscopy.

Authors:  Jan Huisken; Jim Swoger; Filippo Del Bene; Joachim Wittbrodt; Ernst H K Stelzer
Journal:  Science       Date:  2004-08-13       Impact factor: 47.728

2.  Software for bead-based registration of selective plane illumination microscopy data.

Authors:  Stephan Preibisch; Stephan Saalfeld; Johannes Schindelin; Pavel Tomancak
Journal:  Nat Methods       Date:  2010-06       Impact factor: 28.547

3.  High-resolution three-dimensional imaging of large specimens with light sheet-based microscopy.

Authors:  Peter J Verveer; Jim Swoger; Francesco Pampaloni; Klaus Greger; Marco Marcello; Ernst H K Stelzer
Journal:  Nat Methods       Date:  2007-03-04       Impact factor: 28.547

4.  Multi-view image fusion improves resolution in three-dimensional microscopy.

Authors:  Jim Swoger; Peter Verveer; Klaus Greger; Jan Huisken; Ernst H K Stelzer
Journal:  Opt Express       Date:  2007-06-25       Impact factor: 3.894

5.  Fiji: an open-source platform for biological-image analysis.

Authors:  Johannes Schindelin; Ignacio Arganda-Carreras; Erwin Frise; Verena Kaynig; Mark Longair; Tobias Pietzsch; Stephan Preibisch; Curtis Rueden; Stephan Saalfeld; Benjamin Schmid; Jean-Yves Tinevez; Daniel James White; Volker Hartenstein; Kevin Eliceiri; Pavel Tomancak; Albert Cardona
Journal:  Nat Methods       Date:  2012-06-28       Impact factor: 28.547

6.  Efficient Bayesian-based multiview deconvolution.

Authors:  Stephan Preibisch; Fernando Amat; Evangelia Stamataki; Mihail Sarov; Robert H Singer; Eugene Myers; Pavel Tomancak
Journal:  Nat Methods       Date:  2014-04-20       Impact factor: 28.547

7.  Spatially isotropic four-dimensional imaging with dual-view plane illumination microscopy.

Authors:  Yicong Wu; Peter Wawrzusin; Justin Senseney; Robert S Fischer; Ryan Christensen; Anthony Santella; Andrew G York; Peter W Winter; Clare M Waterman; Zhirong Bao; Daniel A Colón-Ramos; Matthew McAuliffe; Hari Shroff
Journal:  Nat Biotechnol       Date:  2013-10-13       Impact factor: 54.908

