
Real-time volumetric reconstruction of biological dynamics with light-field microscopy and deep learning.

Zhaoqiang Wang1,2,3, Lanxin Zhu1, Hao Zhang1, Guo Li1, Chengqiang Yi1, Yi Li1,4, Yicong Yang1, Yichen Ding3, Mei Zhen5, Shangbang Gao6, Tzung K Hsiai7,8,9, Peng Fei10.   

Abstract

Light-field microscopy has emerged as a technique of choice for high-speed volumetric imaging of fast biological processes. However, artifacts, nonuniform resolution and a slow reconstruction speed have limited its full capabilities for in toto extraction of dynamic spatiotemporal patterns in samples. Here, we combined a view-channel-depth (VCD) neural network with light-field microscopy to mitigate these limitations, yielding artifact-free three-dimensional image sequences with uniform spatial resolution and high-video-rate reconstruction throughput. We imaged neuronal activities across moving Caenorhabditis elegans and blood flow in a beating zebrafish heart at single-cell resolution with volumetric imaging rates up to 200 Hz.


Year:  2021        PMID: 33574612      PMCID: PMC8107123          DOI: 10.1038/s41592-021-01058-x

Source DB:  PubMed          Journal:  Nat Methods        ISSN: 1548-7091            Impact factor:   28.547


Introduction

A recurring challenge in biology is the extraction of ever more spatiotemporal information from targets, as many millisecond transient cellular processes occur in three-dimensional (3D) tissues and across long time scales. Several imaging techniques, including epifluorescence and plane illumination methods, can image live samples in three dimensions at high spatial resolution[1-5]. However, they require recording a number of two-dimensional (2D) images to create a 3D volume, and the temporal resolution is compromised by the extended acquisition time of multiple camera frames. Light-field microscopy (LFM) has become the technique of choice for instantaneous volumetric imaging[6-14]. It permits the acquisition of transient 3D signals via post-processing of the light-field information recorded by single 2D camera snapshots. Because LFM provides high-speed volumetric imaging limited only by the camera frame rate, it has delivered promising results in various applications, such as the recording of neuronal activity[7-10] and visualization of cardiac dynamics in model organisms[11]. Despite these advances, the low and non-uniform spatial resolution and the presence of reconstruction artifacts prevent LFM's more widespread application for capturing millisecond time-scale biological processes at single-cell resolution[7, 11]. While these problems can be mitigated by optimizing the way the light field is recorded[9, 12, 14], the extra system complexity could impede the wide dissemination of the LFM technique. Furthermore, current LFMs rely on a computationally demanding, iterative recovery process that limits the overall throughput of LFM reconstruction, compromising its potential for long time-scale applications. Here, we describe an LFM strategy based on view-channel-depth neural network processing of the light-field data, termed VCD-LFM.
Using a wave optics model[13], we generated synthetic light-field images from high-resolution 3D images that we previously obtained experimentally, and paired them as input and ground-truth data for network training. We designed the VCD network (VCD-Net) to extract multiple views from these 2D light-fields and transform them back to 3D depth images, which would be compared with the high-resolution ground truth images to guide optimization of the network. Through iteratively minimizing the loss of spatial resolution by incorporating abundant structural information from training data, this deep-learning VCD-Net could be gradually strengthened until it is capable of deducing 3D high-fidelity signals at uniform resolution across the imaging depth. Once the VCD-Net is properly trained, it can deduce an image stack from a light-field measurement at a millisecond time scale, achieving a throughput suitable for time-lapse video processing. We demonstrated the ability of the VCD-LFM method by imaging motor neuron activity in L4-stage C. elegans rapidly moving inside a 300 × 300 × 50 μm microfluidic chamber, at an acquisition rate of 100 Hz and processing throughput of 13 volumes per second. This allowed us to extract the 4D spatiotemporal patterns of neuronal calcium signaling and track correlated worm behaviors at single-cell resolution, which was notably better than classic light-field deconvolution microscopy (LFDM). Furthermore, we performed in toto imaging of blood flow in the beating heart of zebrafish larvae, enabling velocity tracking of blood cells and ejection fraction analysis of the heartbeat across a 250 × 250 × 150 μm volume at 200 Hz.

Results

Principle and performance of VCD-LFM

Our VCD-LFM involves training of a VCD convolutional neural network and its inference on the images obtained from LFM (Fig. 1a, Supplementary Fig. 1, 2). To create the data for network training, we first obtained high-resolution (HR) 3D images of stationary samples using synthetic or experimental methods (Fig. 1a). Using the wave optics model of LFM[13], we projected these HR 3D images into 2D light-field images, which we then used as the input for network training (Fig. 1b, step 1, Supplementary Note 1). Each synthetic light-field image was first re-arranged into different views, from which features were extracted and incorporated into multiple channels in each convolutional layer. The final output channels were then assigned to a number of planes representing different depths to generate an image stack. Using cascaded convolution layers (U-Net architecture, Supplementary Fig. 3 and Supplementary Table 1) to repeatedly extract features, this VCD procedure generated intermediate 3D reconstructions (Fig. 1b, step 2). The pixel-wise mean-square error (MSE) served as the loss function, quantifying how different these outputs were from the ground-truth images. By iteratively minimizing the loss function (Fig. 1b, step 3), the network was gradually optimized until it could transform the synthetic light-fields into 3D images closely matching the ground-truth images (Supplementary Fig. 4, Supplementary Note 2). After training on gigavoxels of data (Supplementary Table 2), the network was capable of implementing VCD transformations of sequential light-field measurements of dynamic processes, inferring a sequence of 3D volumes at a rate of up to 13 volumes s−1 (Fig. 1b, step 4).
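The training loop (steps 1–3 above) can be sketched in miniature. As an assumption for brevity, a linear operator `W` stands in for the convolutional VCD-Net and a random matrix `A` for the wave-optics light-field projector; all shapes, values, and the learning rate are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: light-field pixels and (flattened) volume voxels.
n_pix, n_vox = 64, 128
A = rng.normal(size=(n_pix, n_vox))     # step 1: forward light-field projector
W = np.zeros((n_vox, n_pix))            # trainable reconstruction operator
volume_gt = rng.random(n_vox)           # HR ground-truth volume
lf = A @ volume_gt                      # synthetic 2D light-field input

lr = 1e-3
for _ in range(200):                    # steps 2-3: infer, compare, update
    recon = W @ lf                      # "VCD" inference from the light field
    err = recon - volume_gt
    mse = np.mean(err ** 2)             # pixel-wise MSE loss vs. ground truth
    W -= lr * (2.0 / n_vox) * np.outer(err, lf)   # gradient step on the MSE

print(f"final MSE: {mse:.2e}")          # step 4 would reuse the trained W
```

The gradient step is the exact derivative of the MSE with respect to `W`; a real VCD-Net would instead backpropagate through its convolutional layers.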
Figure 1.

VCD-LFM and its performance.

(a) HR 3D imaging of stationary samples by a confocal microscope (upper row) and instantaneous recording of dynamic samples by a light-field microscope (bottom row). (b) The VCD-Net reconstruction pipeline, containing: 1. Forward light-field projection (LFP) from the HR image stacks; 2. VCD transformation of synthetic light-field inputs into intermediate 3D image stacks; 3. Network training via iteratively minimizing the difference between VCD inferences and confocal ground truths; 4. Inference of 3D images from the recorded light-field images by a trained VCD-Net. (c)–(f) Maximum intensity projections (MIPs) of the same fluorescent beads and achieved resolution (FWHM) by wide-field microscopy (c), LFDM (d), and VCD-LFM trained with anisotropic (e) and isotropic (f) HR data, respectively. White lines, intensity profiles of all the resolved beads shown in the MIPs. Blue lines, intensity profiles across a selected bead (indicated by a vertical line) at 20 μm off the focal plane. Scale bars, 10 μm. (g) Average axial (dashed lines) and lateral (solid lines) FWHM of the beads across the volumes reconstructed by LFDM (n = 2,039 beads), anisotropic (n = 2,527 beads) and isotropic VCD-LFM (n = 2,731 beads), respectively. Aniso-VCD achieves uniform resolution of 1.1 ± 0.08 (mean ± standard deviation) μm and 3.0 ± 0.1 μm in x/y and z, respectively, and Iso-VCD achieves near isotropic resolution of 1.0 ± 0.15 μm. Center lines represent means and error bars denote standard deviations. (h) One frame of a light-field video recording the activity of two synthetic firing neurons adjacent to each other (indicated by blue and red colors). (i),(j) Reconstructions of the light-field video by VCD-Net (i) and LFD (j), with signal traces extracted in the indicated ROIs (dashed circles), and compared to the ground truth. Circles denote the signal cross-talks between adjacent neurons due to the blurring. Scale bar, 5 μm.

We built a custom microscope to enable in situ light-field recording and 3D wide-field imaging (Supplementary Fig. 1). To demonstrate the capability of the VCD-LFM, we reconstructed sub-diffraction beads captured using a 40×/0.8w objective, and quantified the resolution improvement resulting from the network by comparing the results with those from conventional light-field deconvolution (LFD, Fig. 1c–f). As verified by 3D wide-field imaging of the same volume, the fluorescence of individual beads was correctly localized throughout the volume (Fig. 1e, f, Supplementary Fig. 5). The VCD-LFM with a network trained on wide-field 3D images yielded an average resolution of 1.1 μm (x,y) and 3.0 μm (z) (n = 2,527 beads), which was uniform across a 60-μm imaging depth (1.0 μm (x,y) and 2.9 μm (z) at the best plane; 1.3 μm (x,y) and 3.1 μm (z) at the outer edge of the axial field of view) (Fig. 1g, Supplementary Fig. 6). This is a notable improvement over LFDM on the same image data, which gave an average resolution of 2.6 μm (x,y) and 5.0 μm (z) with a broader distribution (1.6 μm (x,y) and 3.8 μm (z) at the best plane; 4.0 μm (x,y) and 7.0 μm (z) at the outer edge of the axial field of view). We note that the performance of VCD-LFM depends on the training data; hence, the beads can be reconstructed isotropically (1.0 μm x,y,z) by including higher-resolution data in the training (Fig. 1f, g, Supplementary Fig. 7). Furthermore, VCD-LFM substantially removed the mosaic-like artifacts near the focal plane that are common in LFDM (Fig. 1d, Supplementary Fig. 5, 6), and it performed well even when the signals were weak, dense, or accompanied by high background noise (Supplementary Fig. 8, 9). To further validate the accuracy of reconstructed signals, we applied VCD-Net to the reconstruction of synthetic firing neurons adjacent to each other. The improved image quality achieved by VCD-Net suppressed signal cross-talk from blurring and artifacts (Fig. 1h–j), and thus contributed to accurate recovery of signal fluctuations when recording the activity of densely labeled neurons. We further validated the reconstruction accuracy of VCD-Net on both static and moving neurons with varying signal magnitude and density (Supplementary Fig. 10, 11).

VCD-LFM reveals locomotion-associated neural activity in moving C. elegans

We show VCD-LFM to be suitable for capturing dynamic processes in live animals by demonstrating the imaging of neuronal activity in moving C. elegans and of the beating heart of a zebrafish larva. We used a microfluidic chip to permit C. elegans (L4-stage) to rapidly move inside a micro-chamber (300 × 300 × 50 μm, Fig. 2a). We used a 40×/0.8w objective to image the activity of fluorescently labeled motor neurons (ZM9128 hpIs595[Pacr-2(s)::GCaMP6(f)::wCherry]) at a 100-Hz acquisition rate, yielding 6,000 light-fields in both green and red channels in a 1-minute observation (Supplementary Fig. 1). VCD-Net reconstructed the neuronal signals at single-cell resolution during fast body movement (Fig. 2b, Supplementary Fig. 12, Supplementary Video 1, 2). In contrast, LFD suffered from suboptimal cellular resolution and deteriorated image quality around the focal plane, as shown on the same image data (Fig. 2c). We identified A- and B-class motor neurons that have been associated with motor-program selection[15-17], and mapped their calcium activity over time after ratiometrically correcting the calcium signals (GCaMP6(f)) using RFP baselines (wCherry) to remove motion noise[18] (Fig. 2d, e, Supplementary Fig. 13, Supplementary Video 3). By applying an automatic segmentation of the worm body contours (Supplementary Note 3), we calculated the worm's velocity and curvatures related to its locomotion and behavior, thereby allowing classification of the worm motion into forward, backward, or irregular crawling (Fig. 2f–h). We found the patterns of transient calcium signaling to be correlated with the switches from forward to backward crawling of the worm, consistent with previous findings[18-20]. Furthermore, the non-iterative VCD reconstruction could sequentially recover 3D images at a volume rate of 13.5 Hz, ~900 times faster than the iterative LFD method (Fig. 2i, Supplementary Table 3).
Our VCD-LFM thus demonstrated advantages for visualizing sustained biological dynamics, which is computationally challenging using conventional deconvolution methods.
Figure 2.

Whole-animal imaging of moving C. elegans using VCD-LFM. (a) Configuration combining light-field microscopy with a microfluidic technique for imaging motor neuron activity in L4-stage C. elegans (strain ZM9128 hpIs595[Pacr-2(s)::GCaMP6(f)::wCherry]) moving in a microfluidic chip (300 × 300 × 50 μm, top panel) at a 100-Hz recording rate. (b),(c) MIPs as well as magnified views of the indicated regions of one instantaneous volume reconstructed by VCD (b) and LFD (c), respectively. The data shown are representative of n = 10 independent C. elegans. Scale bars, 10 μm. (d) Schematic of the worm with identified motor neurons labeled (left) and body curvature annotated (right). (e) Heatmap visualizing the neuronal activity of 18 identified motor neurons during a 1-minute observation of the moving worm. Each row shows the signal fluctuation of an individual neuron, with color indicating the percent fluorescence change (ΔF/F0), where F is ratiometrically corrected by the ratio of GCaMP6(f) fluorescence to wCherry fluorescence. (f) Curvature kymograms along the body of the moving worm. (g) Velocity plot showing the displacement along the body direction. An ethogram describing the worm behavior over time (lower panel) is obtained by analyzing the curvature and velocity changes. (h) Selected volumes with time-coded traces (duration: left and middle, 150 ms; right, 500 ms) in accordance with the ethogram, visualizing the backward (left), forward (middle), and irregular (right) crawling tendencies of the worm. Scale bars, 20 μm. (i) The reconstruction throughput of VCD-LFM and LFDM for processing the same C. elegans light-field video.

Volumetric imaging of fast dynamics in the beating zebrafish heart

We next captured cardiac hemodynamics in the beating zebrafish heart. To reduce the background from body tissue, we generated a rod-like laser beam to selectively illuminate the heart region[21], and we recorded the light-field video at a 200-Hz volume rate using a 20×/0.5w objective (Fig. 3a, Supplementary Fig. 2). We reconstructed red blood cells (RBCs, labeled with Tg(gata1a:dsRed)) and beating cardiomyocytes (nuclei labeled with Tg(myl7:nls-gfp)) in four dimensions using VCD-Net, with resolution, structural similarity, and processing throughput notably better than conventional LFD approaches (Fig. 3b–g, Supplementary Fig. 14–16, Supplementary Video 4, 5). We also show that direct 2D-to-3D VCD transformation outperforms enhancing 3D LFD results with established deep-learning restoration methods[22], in terms of reconstruction accuracy and the handling of densely labeled signals (Supplementary Note 4). We further demonstrated that our VCD-LFM could reconstruct a 3D beating heart with densely labeled trabecular myocardium (Tg(cmlc2:gfp); Fig. 3h, i, Supplementary Fig. 17, Supplementary Video 6), and the limit of signal density that could be accurately recovered by VCD was relatively high compared with LFD (Supplementary Fig. 18). VCD-Net could also generalize well when trained on one type of cardiac sample, e.g., RBCs, and applied to another, e.g., cardiomyocyte nuclei, or when trained on hybrid cardiac datasets including both and applied to each of them (Supplementary Note 5). VCD-LFM reconstruction with single-cell resolution (Supplementary Fig. 19) permitted quantitative investigation of transient cardiac hemodynamics. We tracked 19 individual RBCs throughout an entire cardiac cycle of 415 ms, during which blood was pumped in and out of the ventricle at a speed of over 3,000 μm s−1 (Fig. 3d, e).
By segmenting the heart boundary, we quantified the volume change of the myocardium during diastole and systole, and calculated the ejection fraction of the heartbeat (Fig. 3j, k). Furthermore, through a combination of VCD-LFM and selective plane illumination microscopy (SPIM), we visualized in 3D and analyzed blood flow in the zebrafish circulatory system (labeled with Tg(fli1:gfp; gata1a:dsRed) for blood cells in vessels and Tg(gata1a:dsRed; cmlc2:gfp) for blood cells in the heart; Supplementary Fig. 20, Supplementary Video 7–9).
Figure 3.

Imaging of various cardiac dynamics in the beating zebrafish heart using VCD-LFM.

(a) Schematic of selective volume illumination-based light-field imaging for the zebrafish experiments. High-contrast light-field sequences of RBCs in the beating heart are recorded under a rod-like beam illumination setup that selectively excites the fluorescence signals within the volume of interest. (b),(c) MIPs in x-y (left) and y-z (right) planes of one instantaneous volume of RBCs by VCD-LFM (b) and LFDM (c), respectively. The dashed lines outline the heart. (d) Tracks of 19 single RBCs throughout the cardiac cycle. A static heart is outlined for reference. (e) Velocity map computed from two consecutive volumes of RBCs during systole. (f),(g) MIPs in x-y (top) and x-z (bottom) planes of one instantaneous volume of nuclei of beating cardiomyocytes by VCD-LFM (f) and LFDM (g), respectively. (h),(i) 3D visualization of the beating myocardium at one time point by VCD-LFM (h) and LFDM (i), respectively. The myocardium was labeled by GFP. Arrows indicate the inlet and outlet of cardiac pumping. A: atrium; V: ventricle. (j) Volume of the myocardium during diastole and systole in one cardiac cycle. (k) Volume change ratio of the ventricle during diastole and systole in one cardiac cycle. The ratio is calculated as (V − ESV) / EDV, where V is the time-varying volume of the ventricle during the heartbeat and ESV and EDV represent the volumes at the end of systole and diastole, respectively. The ejection fraction (EF) of the heartbeat, given by (EDV − ESV) / EDV and appearing as the peak of the curve, is calculated to be ~71%. Twenty out of a total of 120 time points (1 of every 6) are selected for the analysis during a ~400 ms cardiac cycle. Scale bars, 50 μm. The data shown are representative of n = 8, 4, and 3 independent fish for blood cell, cardiomyocyte nucleus, and myocardium imaging, respectively.
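The caption's ejection-fraction arithmetic can be sketched with a hypothetical ventricle-volume trace; the sinusoid and its amplitude below are illustrative stand-ins, not the paper's measured data:

```python
import numpy as np

# Hypothetical ventricle-volume trace over one ~400-ms cardiac cycle
# (arbitrary units); 20 analyzed time points, EDV at t = 0.
t = np.linspace(0.0, 0.4, 20)
volume = 3.225 + 1.775 * np.cos(2 * np.pi * t / 0.4)

edv = volume.max()                       # end-diastolic volume (EDV)
esv = volume.min()                       # end-systolic volume (ESV)
change_ratio = (volume - esv) / edv      # (V - ESV) / EDV, the Fig. 3k curve
ef = (edv - esv) / edv                   # ejection fraction = peak of the curve

print(f"EF ~ {ef:.0%}")
```

By construction, the peak of `change_ratio` equals `ef`, matching the caption's statement that the EF appears as the peak of the (V − ESV)/EDV curve.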

Discussion

The VCD-LFM achieves real-time recording and video-rate reconstruction of instantaneous 3D processes in whole moving C. elegans. Combined with efficient network-based locomotion analysis, it offers an efficient pipeline for studying sustained worm neural activity and related locomotion behavior at high throughput. The robust performance of VCD-LFM enables the suppression of signal artifacts, elimination of motion blur, and accurate quantification of calcium signaling. For cardiovascular imaging, recent scanning-based approaches for volumetric imaging in zebrafish larvae have required complicated optics, an ultra-fast camera, and high-intensity excitation[23, 24]. In contrast, our method, based on a relatively simple system and an easily adoptable deep-learning framework, offers a compelling solution for investigating the dynamic properties and functions of the cardiovascular system. Therefore, VCD-LFM could be a valuable tool for studying dynamics on fast timescales, potentially benefiting a variety of applications such as studies of behaviorally relevant neuronal activity and of dysfunctions of the heart and blood transport system in model organisms. In summary, we introduced the VCD-LFM approach and demonstrated its ability to image transient biological dynamics with improved spatial resolution, minimal reconstruction artifacts, and increased reconstruction throughput compared to conventional LFDM approaches. The network-based VCD computational model is robust, versatile, and ready for widespread application. While VCD-LFM improves the reconstruction quality from one originally determined by the optical system to one that can be optimized via the training procedure, it requires the pre-acquisition of a considerable number of training images.
We expect this requirement to ease with the continued development of deep-learning techniques aiming for strong generalization ability and weak training supervision, which would allow model implementation with far fewer training data. Aside from combination with a basic LFM setup[7,13], we note that VCD-Net is compatible with modified LFM modalities, such as a dual-objective setup[11] or a Fourier LFM setup[14]. Finally, we expect that VCD-LFM could bring new insights to computational imaging techniques by raising the possibility of restoring images beyond the system's optical limit rather than just approaching it, and by showing the capability of increasing image dimensionality while minimally compromising image quality. Taken together, these advances can further push the spatiotemporal limits for in toto observation of dynamic biological processes.

Methods

Epi-illumination LFM setup.

An epi-fluorescence light-field setup was built on an upright microscope (BX51, Olympus). The light-field and wide-field detection paths were appended to the camera port of the host microscope, with a flip mirror used to switch between the two detection modes. A motorized z-stage (Z812B, Thorlabs), together with a water chamber, was mounted directly onto the microscope x-y stage to control the samples inside the chamber in three dimensions. A water immersion objective (LUMPlanFLN 40×/0.8w, Olympus) was used to collect the epifluorescence signals from samples. For recording the light field, a microlens array (MLA, APO-Q-P150-F3.5 (633), OKO Optics) was placed at the native image plane. A 1:1 relay system (AF 60 mm 2.8D, Nikon) conjugated the back focal plane of the MLA with the camera sensor plane (Flash 4.0 V2, Hamamatsu). For the C. elegans experiments, the light-field path was optionally extended to dual-channel detection by splitting the beam after the MLA and adding an extra camera sensor. See Supplementary Fig. 1 for more details of the Epi-LFM setup.

Selective volume illumination LFM setup.

We also developed an LFM setup based on selective volume illumination. Two pairs of beam reducers combined with an adjustable iris were used to generate a scalable rod-like beam (473 or 532 nm), which was projected onto the sample through a 4× illumination objective (Plan Fluor 4×/0.13, Nikon) placed perpendicular to the detection path. This confined the fluorescence excitation to the heart region of the zebrafish embryo, reducing excessive emission from outside the volume of interest that could smear the desired signals. This selective volume illumination mode provided light-field images with less background noise and increased contrast[21]. For observing the dynamic process of blood flowing through vessels, we also integrated a standard SPIM channel (473 nm, 4-μm-thick laser sheet) to implement in situ 3D imaging of static vessels. The illumination paths were aligned to excite the sample from both sides. The detection path used a water immersion objective (Fluor 20×/0.5w, Nikon) to collect the fluorescence signals. When performing dual-channel imaging, a dichroic mirror split the GFP (vessels) and DsRed (RBCs) signals for wide-field and light-field detection, respectively. The light-field detection followed the same design used in the epi-illumination LFM. See Supplementary Fig. 2 for more details of this hybrid setup.

VCD light-field reconstruction network.

In a general convolutional neural network (CNN), the Nth convolutional layer receives feature maps from the previous (N−1)th layer and generates new feature maps using different convolution kernels. The network finally produces a multi-channel output, in which each channel is a non-linear combination of the original input. This concept shares similarities with the digital refocusing algorithms of light-field photography, where each synthetic focal plane of the reconstructed volume can be interpreted as a superposition of different views extracted from the light field[25]. Through cascaded layers, our model is expected to gradually transform the original angular information of the light-field raw image into depth features, eventually forming a conventional 3D image stack and reconstructing the scene. In our implementation, the customized VCD-Net is based on a modified U-Net architecture that contains a downsampling path and a symmetric upsampling path[26]. Along both paths, each layer has three parameters: n, f and s, denoting the number of output channels, the filter size of the convolution kernel and the step size of the moving kernel, respectively, as specified in Supplementary Fig. 3 and Supplementary Table 1. The pixels of the input 2D light-field raw image (dimension: a × b, height × width) are first reformatted into a series of different views (dimension: a/d × b/d × d², height × width × views) according to their relative positions to each lenslet, where d is the number of pixels per lenslet in each dimension. A subpixel up-scaling part further interpolates these views to dimension a × b × d² (height × width × views). Then, in the first VCD layer, the initial transformation from "view" to "channel" is done by convolving all these views with the different kernels of each channel, generating an output of dimension a × b × n, where n is the number of channels.
The following convolution layers keep combining old channels from previous layers and generating new ones to mine the hidden features of the input image. Local residual connections are integrated into the downsampling path to fully extract the hierarchical features. Finally, the last layer outputs a 3D image with the channel number n equal to the desired number of synthetic focal planes c, thereby completing the transformation from "channel" to "depth" (dimension: a × b × c, height × width × depth). For VCD-Net training, HR 3D reference images were acquired from confocal microscopy of static samples or from synthetic data. A light-field projection based on the wave optics model[13] was then applied to these HR references to generate the corresponding synthetic 2D light-field images used as inputs (Supplementary Note 1). The VCD-Net was trained by iteratively minimizing the difference between its intermediate outputs and the HR references. With an appropriate loss function, such as the mean-square error (MSE) of the pixel intensity, the VCD-Net obtained optimized kernel parameters for each layer and efficiently converged to a well-trained state, at which the network can transform the synthetic light-field inputs back into 3D images. At the inference stage, the trained VCD-Net directly infers a sequence of 3D images from an input light-field video containing many light-field frames that record the dynamic biological processes. The time consumption of the VCD-Net procedure depends on the dataset size and computational resources. As a reference point, the VCD-Net converged after training on 4,580 pairs of blood cell image patches (size: 176 × 176 × 51 pixels) for 110 epochs. The time cost was ~4 hours on a single GPU. The trained network then took ~15 seconds to reconstruct 450 consecutive volumes (size: 341 × 341 × 51 pixels) from the acquired light-field videos.
This 4D reconstruction throughput compares with ~11.8 hours (~42,467 s) for running LFD (8 iterations) on the same workstation. The computation was performed on a workstation equipped with an Intel(R) Core(TM) i9-7900X CPU @ 3.3 GHz, 128 GB RAM, and an Nvidia GeForce RTX 2080 Ti graphics card. For more details about VCD-Net training and inference, see Supplementary Note 2 and our open-source code. The comparative performance of our direct 2D-to-3D VCD-Net recovery versus LFD recovery (2D to 3D) followed by other deep-learning image restoration (3D to better 3D) is shown in Supplementary Note 4. The generalization ability of VCD-Net, reflected in the hybrid-sample and cross-sample learning applications, is discussed in Supplementary Note 5.
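The "view" reformatting step, which groups pixels by their offset under each lenslet, can be sketched as follows. This is a minimal numpy sketch: the view ordering and the subsequent subpixel up-scaling in the actual VCD-Net input pipeline may differ.

```python
import numpy as np

def lf_to_views(lf_img: np.ndarray, d: int) -> np.ndarray:
    """Rearrange a raw light-field image into its angular views.

    Pixels sharing the same offset (u, v) under each lenslet form one
    view, so an (a, b) image with d x d pixels per lenslet yields
    d**2 views, each of shape (a//d, b//d).
    """
    a, b = lf_img.shape
    assert a % d == 0 and b % d == 0, "image must tile evenly by lenslet pitch"
    # (a, b) -> (a//d, d, b//d, d) -> (a//d, b//d, d, d) -> (a//d, b//d, d*d)
    views = lf_img.reshape(a // d, d, b // d, d).transpose(0, 2, 1, 3)
    return views.reshape(a // d, b // d, d * d)

# Toy example: a 6 x 6 image with 3 x 3 pixels per lenslet -> 9 views of 2 x 2.
img = np.arange(36).reshape(6, 6)
views = lf_to_views(img, 3)
print(views.shape)  # (2, 2, 9)
```

The first view (`views[:, :, 0]`) collects the top-left pixel under every lenslet, i.e. `img[0, 0]`, `img[0, 3]`, `img[3, 0]`, `img[3, 3]` in this toy example.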

PSF measurements.

We used both the light-field (40×/0.8w objective) and 3D wide-field (40×/0.8w objective plus 0.5-μm z-step) modes to image the same volume of sub-diffraction fluorescent beads (0.5 μm Lumisphere, BaseLine) distributed in a piece of hydrogel (0.7% low-melting agarose solution, BBI Life Sciences). To demonstrate the tunable performance of VCD-Net with different training data, we trained anisotropic and isotropic VCD-Nets using synthetic anisotropic and isotropic beads, respectively. These synthetic beads, randomly distributed in 3D, were generated using a 3D Gaussian kernel with controllable FWHMs (anisotropic: 1 × 1 × 3 μm; isotropic: 1 × 1 × 1 μm). Then 3D image stacks with 60-μm depth were recovered from the recorded light-field image using LFD and the anisotropic and isotropic VCD-Nets. The reconstruction yielded 61 z planes with a voxel size of 0.34 × 0.34 × 1 μm for the anisotropic VCD-Net and LFD (8 iterations), and 177 z planes with a voxel size of 0.34 × 0.34 × 0.34 μm for the isotropic VCD-Net. The reconstructed beads were then detected and fitted with a 1D Gaussian function in each dimension to determine the full width at half maximum (FWHM) as a spatial resolution metric, using a custom-written MATLAB script. The PSFs of LFD and the VCD-Nets at a given depth (e.g., z = −14 μm) were compared by plotting the line profiles of the same resolved beads, and the achieved lateral and axial resolutions at that depth were indicated by the averaged FWHM values of the resolved beads. The performance of the anisotropic and isotropic VCD-Nets versus LFD across depths was further analyzed by measuring the FWHMs of resolved beads over a 60-μm depth, with their variation indicating the non-uniformity of light-field recovery.
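The per-axis FWHM measurement can be sketched in a few lines. The paper fits a 1D Gaussian per dimension; as a stated simplification, a moment-based sigma estimate stands in here for the least-squares fit, using the Gaussian relation FWHM = 2√(2 ln 2)·σ:

```python
import numpy as np

def fwhm_from_profile(z: np.ndarray, intensity: np.ndarray) -> float:
    """Estimate the FWHM of a bead's 1D intensity profile.

    A moment-based sigma estimate replaces the least-squares Gaussian
    fit (an assumption for brevity); for a Gaussian profile,
    FWHM = 2 * sqrt(2 * ln 2) * sigma.
    """
    w = np.clip(intensity - intensity.min(), 0, None)  # crude background removal
    mu = np.sum(z * w) / np.sum(w)                     # intensity-weighted centroid
    var = np.sum(w * (z - mu) ** 2) / np.sum(w)        # second central moment
    return 2.0 * np.sqrt(2.0 * np.log(2.0) * var)

# Synthetic axial profile of one bead: sigma = 1.3 um gives FWHM ~ 3.06 um,
# comparable to the ~3.0 um axial FWHM reported for the anisotropic VCD-Net.
z = np.linspace(-10, 10, 201)                          # 0.1-um axial sampling
profile = np.exp(-z ** 2 / (2 * 1.3 ** 2))
print(round(fwhm_from_profile(z, profile), 2))
```

In practice the same estimate would be applied along x, y and z for every detected bead, and the per-depth averages and spreads would yield the resolution curves of Fig. 1g.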

C. elegans strain.

The strain ZM9128 hpIs595[Pacr-2(s)::GCaMP6(f)::wCherry], expressing GCaMP6f in A- and B-class motor neurons, was used to detect neuronal activity in the moving worm (Fig. 2). The strain QW1217: hpIs491[Prgef-1::GCaMP6(f)::wCherry]; hpIs467[Prab-3::NLS::RFP], with nuclei labelled in all neurons, was used to demonstrate the imaging performance of VCD-LFM (Supplementary Fig. 12). All C. elegans were cultured on standard Nematode Growth Medium (NGM) plates seeded with OP50 and maintained in 22 °C incubators until the L4 stage.

C. elegans imaging.

To obtain HR 3D data for network training, L4-stage anesthetized worms (QW1217 hpIs467, ZM9128 hpIs595; anesthetized with 2.5 mM levamisole in M9 buffer, Sigma-Aldrich) were first imaged using a 40×/0.95 objective on a confocal microscope (FV3000, Olympus). To demonstrate the performance of VCD-LFM, anesthetized worms (QW1217 hpIs467) embedded in agarose were imaged in situ by both the light-field and wide-field detection modes of our epi-illumination LFM. The acquired 3D wide-field image stacks (1-μm step size) were deconvolved in Amira software (Thermo Scientific) for comparison with light-field reconstructions by LFD and VCD-Net. The 3D reconstructions encompassed 31 z planes, with a voxel size of 0.34 × 0.34 × 1 μm. For the behavior studies of C. elegans, awake L4-stage worms (ZM9128 hpIs595) were loaded into a microfluidic chamber (~300 × 300 × 50 μm), which allowed the worms to move within the FOV of a 40× objective. The GCaMP and RFP signals of the moving worm were then recorded for one minute at a 100-Hz frame rate (2-ms exposure time).

Quantitative analysis of neuronal activity and behavior of moving C. elegans.

We performed semi-automated tracking of the movement and intensity fluctuation of each individual neuron using the TrackMate Fiji plugin[27]. Neurons were detected automatically in each volume by applying a circular ROI through the difference-of-Gaussian (DoG) detector and then tracked using a Kalman filter. When automatic tracking failed because of fast neuron movement, the missing detections and tracking mistakes were corrected manually. For each neuron in each volume of the GCaMP or RFP channel, all pixels within that neuron's ROI were averaged to generate a single value representing the fluorescence intensity of the neuron in that channel. To extract calcium dynamics, we measured ΔR/R0 = (R − R0)/R0, where R is the ratio of GCaMP fluorescence to RFP fluorescence, and R0 is the neuron-specific baseline, taken as the average of the lowest 100 values of R. Worm behavior analysis was then performed on the same fluorescence images. We developed an efficient worm-analysis pipeline that (1) rapidly infers the worm outlines over the whole time period using U-Net-based image segmentation[26] and (2) extracts the center lines from the segmented outlines to calculate the changing curvatures and motion velocities[28]. More details are given in Supplementary Note 3. The body-curvature map shown in Fig. 2f indicates the time-varying worm postures. The velocity curve shown in Fig. 2g represents the movement along the worm body.
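The ratiometric measure above is straightforward to express in code. This is a minimal sketch (not the authors' pipeline) of ΔR/R0 for one neuron's time series, with a toy transient; all names and numbers below are illustrative.

```python
import numpy as np

def delta_r_over_r0(f_gcamp, f_rfp, n_baseline=100):
    """Compute dR/R0 = (R - R0)/R0 for one neuron's time series, where R is
    the GCaMP/RFP ratio and R0 is the mean of the lowest n_baseline R values."""
    r = np.asarray(f_gcamp, dtype=float) / np.asarray(f_rfp, dtype=float)
    r0 = np.sort(r)[:n_baseline].mean()   # neuron-specific baseline
    return (r - r0) / r0

# Toy example: 600 volumes with a calcium transient in the middle.
rng = np.random.default_rng(0)
rfp = np.full(600, 100.0)                  # stable reference channel
gcamp = 50.0 + 5.0 * rng.standard_normal(600)
gcamp[200:260] += 80.0                     # a transient rise in GCaMP
trace = delta_r_over_r0(gcamp, rfp)
print(trace[230] > trace[:100].mean())     # transient rises above baseline
```

Dividing by the co-expressed RFP signal cancels motion- and focus-induced intensity changes that affect both channels, which is why the ratio R, rather than raw GCaMP intensity, is used.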

Fish husbandry and lines.

The transgenic zebrafish lines Tg(gata1a:dsRed; cmlc2:gfp) (Fig. 3, Supplementary Fig. 14, Videos 4 and 9), Tg(cmlc2:gfp) (Fig. 3, Supplementary Fig. 17, Video 6), Tg(fli1:gfp; gata1a:dsRed) (Supplementary Figs. 14 and 20, Videos 7 and 8) and Tg(myl7:nls-gfp) (Supplementary Fig. 15, Video 5) were used in our experiments. Embryonic fish were maintained until 3–4 dpf in standard E3 medium supplemented with PTU (Sigma-Aldrich, MO) to inhibit melanogenesis. The larvae were then anesthetized with tricaine (3-aminobenzoic acid ethyl ester, Sigma-Aldrich, MO) and immobilized in 1% low-melting-point agarose inside an FEP (fluorinated ethylene propylene) tube for imaging. For imaging of cardiac blood flow, the embryos were injected with a gata2a morpholino oligonucleotide at the single-cell stage to slow hematopoiesis and thereby reduce the density of RBCs. All experiments were performed in compliance with, and with the approval of, a UCLA IACUC protocol.

Zebrafish imaging.

The fish samples were moved by a custom stage (xyz + rotation) so that their signals of interest could be positioned within the optimal imaging region, determined by the size of the selective volume illumination and the FOV of the detection objective (LUMPlanFLN 20×/0.5w, Olympus). During cardiac imaging, the center of the heart was moved to the focal plane so that the volumetric illumination (473 or 532 nm) selectively excited the RBCs, myocyte nuclei or myocardium. The light-field movies were recorded at a 200-Hz frame rate with a 768 × 768 frame size, corresponding to a lateral FOV of ~250 × 250 μm. The movies covered 4–5 cardiac cycles and contained 450 frames (5-ms exposure per frame). For imaging the dynamics of blood flow inside vessels, the cameras of the dsRed and GFP channels recorded a light-field video of rapidly flowing RBCs and a SPIM image stack of the static tail vessels, respectively. Each light-field movie contained 600 frames (10- or 5-ms exposure). The SPIM stack contained 50 planes covering a 100-μm z depth of the fish tail (step size 2 μm). To obtain HR 3D images of RBCs, myocyte nuclei and myocardium for VCD-Net training, deeply anesthetized fish larvae with immobilized hearts were embedded in 1% low-melting agarose for sustained confocal imaging (SP8-STED/FLIM/FCS, Leica) with a 20×/0.75 objective (HC PL APO CS2). We then trained three VCD-Nets on these HR images of static blood cells (16 fish), nuclei (23 fish) and myocardium (11 fish) and their paired LFPs. Besides the confocal imaging of the immobilized heart, we also combined SPIM with a retrospective gating method[23, 29, 30] to obtain light-sheet-based HR 3D images of the periodically beating myocardium for network training. A movie stack of light-sheet images of the dense myocardium was first acquired, containing 60–70 movies at different planes (z step = 2 μm) that captured the complete myocardial structure. Each movie was then temporally aligned using retrospective gating to reconstruct the beating 3D heart. After training on these prior data, the network inferred 3D images from empirical light fields with an output voxel size of 0.68 × 0.68 × 2 μm (blood cells, myocardium) or 0.326 × 0.326 × 3 μm (myocyte nuclei). The output image stacks contained 51 planes spanning depths from −50 to 50 μm or −75 to 75 μm.
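Retrospective gating relies on the heartbeat being periodic: movies acquired plane by plane can be brought into a common cardiac phase by finding, for each plane, the circular time shift that best matches a reference plane. The actual pipeline (refs. 23, 29, 30) is considerably more elaborate; this is a minimal sketch of the core alignment step under our own assumptions (a mean-brightness trace per plane, FFT-based circular cross-correlation), with hypothetical names.

```python
import numpy as np

def best_circular_shift(trace, reference):
    """Circular shift (in frames) that best aligns `trace` to `reference`,
    found via FFT-based circular cross-correlation of demeaned traces."""
    t = trace - trace.mean()
    r = reference - reference.mean()
    xcorr = np.fft.ifft(np.fft.fft(r) * np.conj(np.fft.fft(t))).real
    return int(np.argmax(xcorr))

# Toy example: a periodic brightness trace (450 frames, ~5 cardiac cycles)
# and a copy of it acquired 25 frames out of phase at another z plane.
n, period = 450, 90
reference = np.sin(2 * np.pi * np.arange(n) / period)
shifted = np.roll(reference, 25)
shift = best_circular_shift(shifted, reference)
aligned = np.roll(shifted, shift)
print(np.allclose(aligned, reference))
```

Because the signal is periodic, any shift congruent to the true offset modulo the cardiac period aligns the traces equally well; the alignment is what matters, not the absolute shift.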

Resolution quantification of VCD-Net and LFD reconstructions.

We applied decorrelation analysis[31] to quantify the resolution of the VCD-Net and LFD reconstructions. The analysis was conducted using the ImageJ plugin (image decorrelation analysis) with the recommended parameters: radius (the range of normalized frequencies), 0–1; Nr (the number of sampling points), 50; and Ng (the number of intermediate high-pass filters used to find the resolution), 10. From these settings we calculated the cut-off frequency k. Supplementary Figs. 15–17 show higher k values for the VCD-Net results than for the LFD results, quantitatively validating the better resolution achieved by VCD-Net. We also used Fourier spectrograms (via an ImageJ plugin) to further confirm the resolution advantage of VCD-Net.
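For intuition, here is a hedged, stripped-down sketch of the decorrelation idea behind the plugin (ref. 31): correlate the image spectrum with its phase-only (normalized) copy under low-pass masks of increasing radius, and take the radius where the correlation peaks as a crude cut-off estimate. The real method refines this with Ng high-pass filtering steps, which this minimal version omits; all names below are ours.

```python
import numpy as np

def decorrelation_curve(img, n_radii=50):
    """Correlation between an image's spectrum and its phase-only copy,
    low-pass masked at n_radii normalized frequencies in (0, 1]."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    fn = f / (np.abs(f) + 1e-12)              # phase-only spectrum
    ny, nx = img.shape
    yy, xx = np.mgrid[:ny, :nx]
    rho = np.hypot((yy - ny / 2) / (ny / 2), (xx - nx / 2) / (nx / 2))
    radii = np.linspace(1.0 / n_radii, 1.0, n_radii)
    d = np.empty(n_radii)
    for i, r in enumerate(radii):
        mask = rho <= r
        num = np.abs(np.sum(f[mask] * np.conj(fn[mask])))
        den = np.sqrt(np.sum(np.abs(f) ** 2) * np.sum(np.abs(fn[mask]) ** 2))
        d[i] = num / den
    return radii, d

# Band-limited test image: strongly smoothed noise, so the correlation
# peaks well below the Nyquist radius rather than at 1.
rng = np.random.default_rng(1)
img = rng.standard_normal((128, 128))
kernel = np.exp(-0.5 * (np.fft.fftfreq(128)[:, None] ** 2 +
                        np.fft.fftfreq(128)[None, :] ** 2) / 0.02 ** 2)
img = np.fft.ifft2(np.fft.fft2(img) * kernel).real
radii, d = decorrelation_curve(img)
kc = radii[np.argmax(d)]                      # crude cut-off estimate
print(0.0 < kc <= 1.0)
```

A sharper image pushes real signal energy to higher spatial frequencies, moving the peak (and hence k) outward, which is why higher k indicates better resolution.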

Velocity map of RBCs and volume-based ejection fraction analysis of myocardium in zebrafish heart.

Tracking of the flowing RBCs was performed in Imaris (Bitplane). Fig. 3d shows the trajectories of 19 RBCs throughout one cardiac cycle (415 ms). Fig. 3e shows the velocity map extracted at one specific time point during systole, containing 31 vectors from all trackable RBCs in that frame. The analysis was written in MATLAB and the vector field was visualized with Mayavi[32]. Volumetric segmentation of the beating myocardium was performed in Amira. A blow tool allowed semiautomatic definition of the inner boundary of the ventricle at each slice. Once all slices of the 3D image stack at one time point were correctly segmented, the software calculated the volume of the segmented ventricle from the slice thickness and the defined areas. After the volume of the beating heart had been calculated at all time points, we obtained the volume-change ratio of the ventricle during diastole and systole within one cardiac cycle.
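The volume-change ratio reduces to the standard ejection-fraction formula once the per-time-point ventricle volumes are available from the segmentation: EF = (EDV − ESV)/EDV, with end-diastolic (largest) and end-systolic (smallest) volumes taken over one cycle. A minimal sketch follows; the volume numbers are made up for illustration.

```python
def ejection_fraction(volumes):
    """Ejection fraction from ventricle volumes sampled over one cardiac
    cycle: EF = (EDV - ESV) / EDV."""
    edv = max(volumes)   # end-diastolic (largest) volume
    esv = min(volumes)   # end-systolic (smallest) volume
    return (edv - esv) / edv

# Hypothetical ventricle volumes (picoliters) over one cardiac cycle.
volumes = [410, 395, 350, 290, 250, 235, 260, 320, 380, 405]
ef = ejection_fraction(volumes)
print(round(ef, 3))  # (410 - 235) / 410 ~ 0.427
```

With volumes from every reconstructed time point, the same quantity can also be plotted continuously over the cycle, as in the myocardium/ventricle volume traces of Fig. 3j.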

Reporting Summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Supplementary video legends and source data.

Flowing blood cells in the tail vessels of embryonic zebrafish, acquired by VCD-LFM and light-sheet fluorescence microscopy (LSFM). The red LFM channel captures the RBCs while the green LSFM channel captures the static vessels. The reconstructed results have been registered to reveal how the RBCs flow through the intersegmental vessels. Numbers on the white box denote the size of the ROI.

Trajectories of flowing blood cells in embryonic zebrafish tail vessels, automatically tracked using Imaris. Red balls denote the tracked cells and the colors on the trajectories represent the maximum travelling speeds at those positions (colorbar below). Scale bar, 50 μm.

Fig. 1g: averaged axial and lateral FWHMs of the beads across the volumes reconstructed by LFD and by anisotropic and isotropic VCD-LFM, respectively. The standard deviation indicated by the error bars in Fig. 1g is included. Fig. 1i, j: the signal fluctuations of two synthetic adjacent neurons extracted from the reconstruction of a light-field movie by VCD-Net and LFD. Fig. 3j: the volume of the myocardium and ventricle chamber over 400 ms. Fig. 2e: the raw intensities of the GCaMP and RFP signals of the neurons over 60 s. Fig. 2f: the curvature of the freely moving C. elegans during 60 s. Fig. 2g: the velocity of the freely moving C. elegans during 60 s.

Dual-color imaging of the myocardium (green) and blood cells (red) in the beating embryonic zebrafish heart. The blood cells are captured by VCD-LFM while the myocardium is reconstructed by LSFM via retrospective gating. The two image sequences have been registered and rendered in 3D in xy and yz views, showing how the RBCs flow through the beating myocardium. Scale bar, 50 μm.

VCD-Net reconstruction of pan-neuronally labeled C. elegans over 10 s. Maximum intensity projections (MIPs) are shown in XY (top left), XZ (bottom left) and YZ (top right) views. Scale bar, 30 μm.

VCD-Net reconstruction of motor neurons of an L4-stage C. elegans over 60 s. MIPs are shown in XY (top left), XZ (bottom left) and YZ (top right) views. Scale bar, 40 μm.

Comparisons between the LFD and VCD-Net reconstructions of the flowing blood cells in a beating zebrafish heart. MIPs of the reconstructions are shown; magnified regions marked by white boxes are shown in the corners. Scale bar, 30 μm.

Comparisons between the LFD and VCD-Net reconstructions of cardiomyocyte nuclei in a beating zebrafish heart. MIPs of the reconstructions are shown. Scale bar, 30 μm.

Comparisons between the LFD and VCD-Net reconstructions of the myocardium in a beating zebrafish heart, shown in 3D.

Dual-color imaging of neuronal activity in freely moving C. elegans using VCD-LFM over 60 s. Left, the tracked neurons of the moving worm are identified and labeled. Top right, the GCaMP6(f)/wCherry fluorescence signals of eight example neurons. Bottom right, body curvature and velocity characterize the behavior of the worm.
Citing articles: 20 in total.

Review 1.  A practical guide to scanning light-field microscopy with digital adaptive optics.

Authors:  Zhi Lu; Yeyi Cai; Yixin Nie; Yuxin Yang; Jiamin Wu; Qionghai Dai
Journal:  Nat Protoc       Date:  2022-06-29       Impact factor: 17.021

2.  Real-time denoising enables high-sensitivity fluorescence time-lapse imaging beyond the shot-noise limit.

Authors:  Xinyang Li; Yixin Li; Yiliang Zhou; Jiamin Wu; Zhifeng Zhao; Jiaqi Fan; Fei Deng; Zhaofa Wu; Guihua Xiao; Jing He; Yuanlong Zhang; Guoxun Zhang; Xiaowan Hu; Xingye Chen; Yi Zhang; Hui Qiao; Hao Xie; Yulong Li; Haoqian Wang; Lu Fang; Qionghai Dai
Journal:  Nat Biotechnol       Date:  2022-09-26       Impact factor: 68.164

Review 3.  Volumetric Imaging of Neural Activity by Light Field Microscopy.

Authors:  Lu Bai; Zhenkun Zhang; Lichen Ye; Lin Cong; Yuchen Zhao; Tianlei Zhang; Ziqi Shi; Kai Wang
Journal:  Neurosci Bull       Date:  2022-08-08       Impact factor: 5.271

Review 4.  Smart imaging to empower brain-wide neuroscience at single-cell levels.

Authors:  Shuxia Guo; Jie Xue; Jian Liu; Xiangqiao Ye; Yichen Guo; Di Liu; Xuan Zhao; Feng Xiong; Xiaofeng Han; Hanchuan Peng
Journal:  Brain Inform       Date:  2022-05-11

5.  Fourier light-field imaging of human organoids with a hybrid point-spread function.

Authors:  Wenhao Liu; Ge-Ah R Kim; Shuichi Takayama; Shu Jia
Journal:  Biosens Bioelectron       Date:  2022-03-26       Impact factor: 12.545

6.  Neurophotonic tools for microscopic measurements and manipulation: status report.

Authors:  Ahmed S Abdelfattah; Sapna Ahuja; Taner Akkin; Srinivasa Rao Allu; Joshua Brake; David A Boas; Erin M Buckley; Robert E Campbell; Anderson I Chen; Xiaojun Cheng; Tomáš Čižmár; Irene Costantini; Massimo De Vittorio; Anna Devor; Patrick R Doran; Mirna El Khatib; Valentina Emiliani; Natalie Fomin-Thunemann; Yeshaiahu Fainman; Tomas Fernandez-Alfonso; Christopher G L Ferri; Ariel Gilad; Xue Han; Andrew Harris; Elizabeth M C Hillman; Ute Hochgeschwender; Matthew G Holt; Na Ji; Kıvılcım Kılıç; Evelyn M R Lake; Lei Li; Tianqi Li; Philipp Mächler; Evan W Miller; Rickson C Mesquita; K M Naga Srinivas Nadella; U Valentin Nägerl; Yusuke Nasu; Axel Nimmerjahn; Petra Ondráčková; Francesco S Pavone; Citlali Perez Campos; Darcy S Peterka; Filippo Pisano; Ferruccio Pisanello; Francesca Puppo; Bernardo L Sabatini; Sanaz Sadegh; Sava Sakadzic; Shy Shoham; Sanaya N Shroff; R Angus Silver; Ruth R Sims; Spencer L Smith; Vivek J Srinivasan; Martin Thunemann; Lei Tian; Lin Tian; Thomas Troxler; Antoine Valera; Alipasha Vaziri; Sergei A Vinogradov; Flavia Vitale; Lihong V Wang; Hana Uhlířová; Chris Xu; Changhuei Yang; Mu-Han Yang; Gary Yellen; Ofer Yizhar; Yongxin Zhao
Journal:  Neurophotonics       Date:  2022-04-27       Impact factor: 4.212

7.  Light-Field Microscopy for Optical Imaging of Neuronal Activity: When Model-Based Methods Meet Data-Driven Approaches.

Authors:  Pingfan Song; Herman Verinaz Jadan; Carmel L Howe; Amanda J Foust; Pier Luigi Dragotti
Journal:  IEEE Signal Process Mag       Date:  2022-03       Impact factor: 12.551

8.  Isotropic super-resolution light-sheet microscopy of dynamic intracellular structures at subsecond timescales.

Authors:  Yuxuan Zhao; Meng Zhang; Wenting Zhang; Yao Zhou; Longbiao Chen; Qing Liu; Peng Wang; Rong Chen; Xinxin Duan; Feifan Chen; Huan Deng; Yunfei Wei; Peng Fei; Yu-Hui Zhang
Journal:  Nat Methods       Date:  2022-03-11       Impact factor: 47.990

9.  A hybrid of light-field and light-sheet imaging to study myocardial function and intracardiac blood flow during zebrafish development.

Authors:  Zhaoqiang Wang; Yichen Ding; Sandro Satta; Mehrdad Roustaei; Peng Fei; Tzung K Hsiai
Journal:  PLoS Comput Biol       Date:  2021-07-06       Impact factor: 4.475

10.  Automatic Segmentation and Cardiac Mechanics Analysis of Evolving Zebrafish Using Deep Learning.

Authors:  Bohan Zhang; Kristofor E Pas; Toluwani Ijaseun; Hung Cao; Peng Fei; Juhyun Lee
Journal:  Front Cardiovasc Med       Date:  2021-06-09
