
A rapid denoised contrast enhancement method digitally mimicking an adaptive illumination in submicron-resolution neuronal imaging.

Bhaskar Jyoti Borah1, Chi-Kuang Sun1,2,3.   

Abstract

Optical neuronal imaging often shows ultrafine structures, such as a nerve fiber, coexisting with ultrabright structures, such as a soma with a substantially higher fluorescence-protein concentration. Owing to experimental and environmental factors, a laser-scanning multiphoton optical microscope (MPM) often encounters a high-frequency background noise that might contaminate such weak-intensity ultrafine neuronal structures. A straightforward contrast enhancement often leads to the saturation of the brighter ones, and might further amplify the high-frequency background noise. We report a digital approach called rapid denoised contrast enhancement (DCE), which digitally mimics a hardware-based adaptive/controlled illumination technique by means of digitally optimizing the signal strengths and hence the visibility of such weak-intensity structures while mostly preventing the saturation of the brightest ones. With large field-of-view (FOV) two-photon excitation fluorescence (TPEF) neuronal imaging, we validate the effectiveness of DCE over state-of-the-art digital image processing algorithms. With compute-unified-device-architecture (CUDA)-acceleration, a real-time DCE is further enabled with a reduced time complexity.
© 2022 The Author(s).

Keywords:  Biological sciences research methodologies; Cell biology; Neuroscience; Optical imaging

Year:  2022        PMID: 35169684      PMCID: PMC8829796          DOI: 10.1016/j.isci.2022.103773

Source DB:  PubMed          Journal:  iScience        ISSN: 2589-0042


Introduction

Optical microscopy (Wang and Xia, 2019; Davidson and Abramowitz, 2002; Lichtman and Conchello, 2005), a widely used technique for neuronal imaging, has over the past several decades helped researchers visualize and understand various neurological disorders, brain functions, and dysfunctions. Neuronal structures (Yang and Yuste, 2019; Gao et al., 2019; Zheng et al., 2018) often show a wide variation in structural texture and signal strength. For instance, while imaging a neuron, the soma, that is, the cell body, is often substantially brighter than the adjacent fiber structures, such as axons and dendrites, which can be less than a micron in diameter. This leads to a broad signal intensity distribution, which is unlikely to be properly visualized within the limited dynamic range of an acquisition and display system. Additionally, various optical, electrical, and environmental factors often result in a noisy background, which contaminates the weaker signals and worsens the signal-to-noise ratio (SNR) as well as the contrast ratio. Moreover, to prevent possible photobleaching and/or phototoxicity (Icha et al., 2017), that is, laser-induced damage to the tissue under observation, it is often recommended to maintain a low enough excitation power, which in turn further weakens the signal from an ultrafine neuronal structure. Likewise, while performing three-dimensional (3D) optical sectioning of a deep volumetric tissue sample (Kobat et al., 2009, 2011), owing to frequency-dependent scattering and absorption (Jacques, 2013; Ntziachristos, 2010), the signal-of-interest degrades further as one penetrates deeper into the tissue. The issue is worsened by the non-ideal optical performance of a scanning system.
Particularly for a large imaging area in a mesoscopic imaging system (Sofroniew et al., 2016; Bumstead et al., 2018; Pacheco et al., 2017; Kernier et al., 2019), optical aberrations (Egner and Hell, 2006) become prominent toward the edges and corners, unavoidably leading to non-uniform excitation and detection efficiencies across the FOV. This further reduces the signal strengths of the weaker structures residing at off-axis locations. Consequently, the weaker signals from ultrafine neuronal structures tend to approach the noisy background, even when the bright pixels of the image are almost saturated. It thus becomes challenging to retrieve such weaker structures with an adequate SNR and a high contrast ratio together with the brighter ones, all amidst a strong noisy background. A straightforward attempt to enhance the contrast of such weak-intensity structures might amplify the noise, and, additionally, the brightest structures are likely to saturate. Several hardware-based techniques have been reported over the years to address the dynamic range limitation in optical microscopy; these can locally enhance the weaker structures while preventing brighter structures from saturating. One promising solution is to regulate the excitation power in real time: a feedback mechanism monitors the emerging signal strength and accordingly drives a tunable excitation source to regulate the excitation power (Ji et al., 2008; Yang et al., 2017; Chu et al., 2007, 2010; Hoebe et al., 2007). Another approach is real-time high dynamic range (HDR) imaging (Vinegoni et al., 2016, 2019; Feruglio et al., 2020), which collects multiple low dynamic range (LDR) images over multiple optically separated detection channels and subsequently fuses them to form the HDR image.
However, implementing these techniques requires dedicated hardware. For instance, a proper feedback circuit, a tunable excitation source, and at least one dedicated channel for monitoring the output signal strength are required for regulated/controlled/adaptive illumination to work. A typically slower response owing to electronic limitations might, however, lead to a poor effective pixel-sampling rate, especially when each digitized pixel must be illumination-optimized. A lower effective pixel-sampling rate might lead to a Nyquist figure-of-merit (NFOM) (Borah et al., 2021) of less than 1 and might in turn result in aliasing (Pawley, 2006; Heintzmann and Sheppard, 2007), that is, an irreversible loss of digital resolution, especially when a high spatial resolution is required over a large millimeter-scale FOV. Furthermore, such a method is often not immune to noise amplification while locally enhancing a weak-intensity structure. Likewise, in the case of real-time HDR imaging, multiple channels are dedicated to detecting the same spectral regime with different signal strengths, and thus multi-spectrum detection for multi-color imaging becomes complex in terms of optical design. Quite a few software-based approaches have been developed either to enhance the contrast and/or sharpness of an image while minimizing the non-uniform illumination issue or to perform various image-segmentation operations. A few of these techniques employ the subtraction of a mask/layer, which involves analyzing the relevant image or image stack, generating subtraction mask(s) accordingly, and finally subtracting the same from the original image.
Traditional and modified unsharp masking methods (Russ, 2006; Kaur et al., 2021; Ye and Ma, 2018; Duan et al., 2019; Polesel et al., 2000; Joseph et al., 2019), the no-neighbor/nearest-neighbors method (Agard, 1984), and rolling-ball/sliding-paraboloid background subtraction (Sternberg, 1983; Kelley and Paschal, 2019) are some of the popular techniques in this regard. Recently, we reported a modified unsharp masking algorithm (Borah and Sun, 2021) dedicated to suppressing high-frequency noise in the background while mostly preserving useful information. That approach was, however, limited to noise suppression and was further constrained by the choice of multiple controlling parameters. There are several other noteworthy algorithms/techniques (Sticker et al., 2020; Liu et al., 2020; Hassan and Carletta, 2006; Bai et al., 2012; Zhao and Lu, 2017; Sysko and Davis, 2010; Cannell et al., 2006; Malkusch et al., 2001; Kuru, 2014; Lefkimmiatis et al., 2012; Poon et al., 2008; Syed et al., 2008; Selvaggio et al., 2013) which have been improving image quality over the past several years. Aside from these, another widely used approach to eliminate high-frequency noise, which can improve both the SNR and the contrast ratio, is to perform certain morphological operations (Huang and Zhu, 2009), such as erosion and opening. However, an erosion operation may well remove certain high-frequency useful information from the image. A subsequent dilation operation (i.e., an opening operation, erosion followed by dilation) might no longer be able to regenerate the lost information and thus might lead to an irreversible resolution loss. Another promising approach to improve image contrast is traditional or adaptive histogram equalization (HE or AHE) (Mustafa and Kader, 2018; Singh et al., 2019; Ismail and Sim, 2011; Zimmerman et al., 1988; Pizer et al., 1987; Li et al., 2013).
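A minimal global-HE sketch (our own NumPy illustration, not the adaptive or contrast-limited variants) shows what equalization does to a frame that is mostly weak background noise with one small bright structure: the handful of occupied background levels get spread across nearly the full 8-bit range.

```python
import numpy as np

def hist_eq(img):
    """Plain global histogram equalization of an 8-bit image (the
    unconstrained baseline; CLAHE adds tiling and a clip limit)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[np.nonzero(hist)[0][0]]          # first occupied level
    lut = np.round((cdf - cdf_min) / max(img.size - cdf_min, 1) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

# Mostly weak background noise (levels 0-11) plus one small bright structure:
rng = np.random.default_rng(0)
img = rng.integers(0, 12, (32, 32)).astype(np.uint8)
img[:4, :4] = 200
eq = hist_eq(img)
# The background's intensity fluctuations are stretched many-fold.
```

Because almost all of the histogram mass sits in the background levels, the cumulative-distribution remap assigns them most of the output range, which is precisely the noise-amplification behavior discussed next.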
However, when an image has a noisy background, HE or AHE might amplify the noise, and the SNR might thus degrade significantly. An improved version of AHE, contrast limited adaptive histogram equalization (CLAHE) (Stimper et al., 2019; Mohan and Ravishankar, 2013; Pisano et al., 1998), is a widely used state-of-the-art local contrast enhancement technique that can limit the noise-amplification issue significantly. However, noise amplification might still persist in an optical microscopy image because a significant portion of the image might not possess useful information but might consist only of a strong noisy background, which we do not intend to amplify or enhance. Here we report a dedicated-hardware-free rapid DCE method to locally enhance the visibility of especially the noise-corrupted weak-intensity structures, in terms of both contrast ratio and signal-to-noise ratio, while mostly preventing the saturation of the brightest ones. The proposed method involves an efficient high-frequency noise rejection followed by a local intensity enhancement, optimizing the signal strengths or pixel intensities across a digitized image. As stated above, the weak-intensity structures often reside amidst a strong noisy background, leading to low-contrast poor visibility. Our efficient noise rejection brings the background close to zero, virtually resembling a laser-off state in the regions lacking low-frequency retrievable information. At the same time, the method locally preserves and selectively enhances the low-frequency information, resembling selective laser-on states. This combined effect digitally mimics the hardware-based adaptive/controlled illumination technique and drastically improves the contrast ratio of the weak-intensity structures.
To demonstrate the same in the context of neuroimaging, we performed large-FOV Nyquist-satisfied (aliasing-free) two-photon excitation fluorescence (TPEF) imaging of brain/neuronal structures at multiple excitation wavelengths with our custom-developed multiphoton microscope (MPM) (Borah et al., 2020, 2021). The effectiveness of our proposed DCE algorithm is validated by retrieving weak-intensity ultrafine neuronal structures amidst a strong noisy background, while achieving simultaneous improvements in signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and contrast ratio. To secure real-time applicability, we implement our DCE method via Graphics Processing Unit (GPU)-assisted NVIDIA Compute Unified Device Architecture (CUDA) acceleration, with a processing time of <3 ms for a typical 1000 × 1000-pixel input dataset in 16-bit unsigned format.

Results

Description and working principle of the proposed rapid denoised contrast enhancement method

The proposed DCE method involves a noise-suppressed contrast optimization to enable simultaneous boosts to the SNR, SBR, and contrast ratio of a noise-contaminated MPM image. Let us denote the input image as INP. The goal of DCE is to first suppress the noisy background and then to locally boost the structural details. We employ the subtraction of a layer from INP so as to reject the high-frequency noisy background before performing the local boost. For efficient suppression of noise, this layer is expected to possess high enough pixel intensities in those regions corresponding to the noisy background in INP. At the same time, the layer should have zero pixel intensities at the locations corresponding to the low-frequency structural information in INP, so that this information is preserved during subtraction. Note that a straightforward subtraction of a blurred version of INP from INP is not suitable for obtaining this layer: a blurred version always attains lower pixel intensities than the original, so the regions corresponding to the low-frequency structures in INP would be left with non-zero intensities. Furthermore, a pixel intensity in this layer corresponding to a noise pixel in INP would often be weaker than the noise pixel itself, and any subsequent strengthening of the layer would make the non-zero intensities corresponding to the low-frequency details even stronger, leading to loss of low-frequency information during noise suppression. Alternatively, if INP is subtracted from its blurred version, the low-frequency regions would most likely become zero in the layer, which is indeed one of our purposes. However, the intensities corresponding to the noise pixels would also be reduced to zero, which is not what we want. Note that a noise pixel in this context means a pixel whose intensity is considerably higher than that of its neighbors.
Owing to the first blur operation, the neighbors of a noise pixel have already attained non-zero values, as a blur operation redistributes a pixel intensity toward its neighbors. Therefore, if a second blur is applied, a pixel corresponding to noise in INP is likely to become non-zero owing to intensity redistribution from the non-zero neighbor pixels. Note that the zero-intensity pixels corresponding to the low frequencies in INP still remain zero, though the edges would be slightly affected by the second blur. Note also that, as the noise pixels in INP have a high-frequency nature, that is, considerably darker neighbors, the blur operations would leave much weaker intensities in the layer, which are usually not strong enough to cancel the noise on subtraction. To improve this situation, we prefer to locally amplify INP close to saturation before performing the above steps. It is important not to amplify a strong-intensity structure/location too much, to avoid excessive saturation of that entire region. Otherwise, such a highly saturated region would be treated as a single low-frequency structure, and the proposed method might thereafter no longer be able to enhance a weak-intensity structure residing inside that specific region. Therefore, a global amplification of INP is not recommended. To locally amplify INP, we generate a first amplification layer α (r, c) to be pixel-wise multiplied with INP. As stated above, the idea is to take the image close to saturation, yet not over-saturate the strong-intensity regions. To achieve this, an adequately smooth version of INP is first obtained by subsequent downscaling, Gaussian blurring, and upscaling operations. α (r, c) is then estimated by dividing 90% of the maximum intensity level by this smooth version of INP. The maximum intensity level, in this case, is simply 2^16 − 1, that is, 65,535 for a 16-bit unsigned image, or 255 for an 8-bit unsigned image.
Thus, when α (r, c) is multiplied with INP, it takes the low/moderate-intensity regions close to saturation, while the strong-intensity locations remain mostly unaffected. It is important to note that an adequately smooth version of INP should be used here to obtain α (r, c); the purpose is to minimize abrupt intensity changes resulting from the multiplication and thereby avoid probable image artifacts. Furthermore, note that we propose employing subsequent downscaling, Gaussian blurring, and upscaling operations to obtain this smooth version. The reason is simply to yield adequate smoothness with a practical blur-kernel size: without the resizing operations, a typical Gaussian blur alone would require a much larger kernel to produce comparable smoothness, which might become computationally expensive and might not be supported by a standard computer vision library. In our observation, a resize factor of 10 and a Gaussian kernel size of 29 × 29 (an odd number, kept ≤31 for implementation via OpenCV) should be adequate to produce a sufficiently smooth version of INP, and these can of course be further optimized based on one's visual perception. Note that we propose truncating α (r, c) at a user-defined value α; the purpose of this will be elaborated in the following paragraphs. INP can thus be locally amplified by pixel-wise multiplication with α (r, c). Let us denote the amplified version as g (r, c). As stated in the previous paragraph, g (r, c) is now to be subtracted from its blurred version, which, however, need not be as excessively smooth as in the above case involving multiplicative amplification. In our observation, a resize factor of three and a Gaussian kernel size of 29 × 29 should be sufficient, yet these can be optimized based on visual perception. The subtraction result then goes through a second Gaussian blur, yielding non-zero intensities corresponding to the noise components in INP.
Let us refer to this blurred subtraction result as the noise layer. Note that a kernel size of at least 3×3 would be required for this second blur. A larger kernel would assist an aggressive noise rejection but might tend to suppress retrievable information as well (especially the weaker and finer details). In our observation, a 7×7 kernel should be suitable for typical use, and it can be further optimized as per visual perception. At the next step, we use the same α (r, c) to obtain a second amplification layer, which locally strengthens the noise layer for an efficient noise rejection upon subtraction from INP. We do not recommend a global strengthening of the noise layer, so as to minimize weakening or loss of especially the high-resolution morphologies with weaker intensities. As discussed earlier, a conventional optical microscopy system, especially one with a large FOV, unavoidably experiences non-uniform excitation efficiency. The excitation efficiency is typically higher at the central FOV, which might induce a stronger background there as well. Besides, depending on the type, structural details, and fluorophore concentration distribution of a sample, some regions may appear to have a comparatively stronger background than others. Note that α (r, c) already holds the information of the intensity distribution, where a higher α value means a lower pixel intensity of INP. However, to strengthen the noise layer at the high-intensity locations, we require an amplification layer where a higher value corresponds to a higher INP intensity. Therefore, we propose subtracting α (r, c) from 1.25 times α to get this second amplification layer. The factor 1.25 in our case prevents zeros after subtraction, as no value in α (r, c) can be higher than α; this is where the truncation of α (r, c) becomes helpful. Furthermore, note that a factor other than 1.25, provided it is greater than 1, would also be acceptable for the method to work.
A higher factor would, however, more aggressively suppress the background. The noise layer is thus locally strengthened by pixel-wise multiplication with this second amplification layer, and the resulting final layer is subtracted from INP to obtain a noise-suppressed output. Let us denote this output as S (r, c). At the next step, we target local enhancement of the weaker intensity structures. We propose a third amplification layer, again based on the same α (r, c), to be pixel-wise multiplied with S (r, c). We do not intend to saturate the morphologies that already have high intensities. The basic criterion for this third amplification layer is, therefore, to attain close-to-unit values in the bright enough regions, whereas higher values are expected in the weaker intensity regions so as to adequately enhance them. There might be numerous ways to achieve such behavior. For instance, we may define an expression as X + {α (r, c) / Y}^n, where X, Y, and n can be carefully chosen to serve our purpose. Let us consider a bright low-frequency structure with an intensity of more than 70% of the maximum level; this would typically lead to an α (r, c) value of less than 2. We thus might assign X = 0.9, Y = 4.0, and n = 2.0, so that the above expression yields a value close to 1. On the other hand, for a weaker intensity structure, α (r, c) becomes higher (up to the truncation value α), and thus a higher amplification factor is obtained. Again, one might choose, for instance, Y = 3.0 and/or a cubic power n = 3.0 to provide a more aggressive enhancement of the weaker intensity structures, which might be helpful in some situations, such as reducing a vignetting issue in an ultra-large-FOV imaging scenario as stated earlier. Nevertheless, the X, Y, and n values can be optimized, or an alternative mathematical expression can be defined, based on one's visual perception and specific application requirements. Finally, the locally amplified S (r, c) yields the noise-suppressed contrast-enhanced output.
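As a quick numerical check of the expression above, a minimal sketch using the example values X = 0.9, Y = 4.0, and n = 2.0 from the text (the specific test values of α (r, c) are our own):

```python
import numpy as np

def third_layer(alpha_map, X=0.9, Y=4.0, n=2.0):
    """Per-pixel gain X + (alpha(r, c)/Y)**n: close to 1 where alpha(r, c)
    is small (bright structures), larger where alpha(r, c) approaches the
    truncation value (weak structures)."""
    return X + (alpha_map / Y) ** n

# Bright structure, alpha(r, c) ~ 1.3 (intensity above ~70% of maximum):
gain_bright = third_layer(np.array([1.3]))[0]   # ~1.006, nearly no boost
# Weak structure, alpha(r, c) capped at the truncation value alpha = 8.0:
gain_weak = third_layer(np.array([8.0]))[0]     # ~4.9, a strong local boost
```

The bright region is thus multiplied by a factor that is effectively unity, while the weakest regions receive a boost of several-fold, exactly the selective behavior the third amplification layer is designed for.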

Implementation of denoised contrast enhancement in large-field-of-view multiphoton optical microscopy imaging

A basic block diagram of our data acquisition, processing, and display strategy is presented in Figure 1. A simple laser-scanning fluorescence detection unit is shown in the red dashed box in Figure 1A. EXC denotes the raster-scanning excitation beam emerging from a laser source, which is focused onto a biological sample by an objective lens; DBS denotes a dichroic beam splitter that separates the generated fluorescence signal and guides it (green arrows) toward an electronic detection unit comprising a photomultiplier tube (PMT), a transimpedance amplifier, and a digitizer with an adequate sampling rate. Once a frame is scanned and the data become ready to process, the raster-scanning system is free to acquire the next frame, provided the number of pending frame(s) to process is not more than one and adequate data buffers are available. Note that most state-of-the-art digitizers are capable of providing high-bit-depth data, and therefore we will demonstrate our approach for the 16-bit unsigned format. Up to this point, single or multiple CPU threads can be dedicated to maintaining the data-acquisition process, depending on the choice of digitizer and the Application Programming Interfaces (APIs) available.
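The acquire-while-processing scheme described above can be sketched with a bounded hand-off queue. This is our own simplified single-worker illustration (the actual system uses digitizer APIs and GPU streams), where `acquire` and `process` are placeholder callables:

```python
import threading
import queue

def run_pipeline(n_frames, acquire, process):
    """Overlap acquisition with processing: the scanner may re-arm for the
    next frame as long as at most one frame is pending."""
    pending = queue.Queue(maxsize=1)  # "not more than one pending frame"
    results = []

    def worker():
        while True:
            frame = pending.get()
            if frame is None:          # sentinel: no more frames
                break
            results.append(process(frame))

    t = threading.Thread(target=worker)
    t.start()
    for i in range(n_frames):
        # Blocks only while the worker still holds an unprocessed frame,
        # mirroring the "adequate data buffers available" condition.
        pending.put(acquire(i))
    pending.put(None)
    t.join()
    return results
```

For example, `run_pipeline(4, lambda i: i, lambda f: 2 * f)` returns `[0, 2, 4, 6]`; the queue depth of one enforces the stated rule that acquisition stalls only when a frame is still awaiting processing.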
Figure 1

Block diagram representation of data acquisition, DCE process, and display strategies

(A) A fluorescence detection unit, EXC: raster-scanning excitation beam, DBS: dichroic beam splitter. EXC gets focused onto the sample via an objective lens. DBS separates and guides the fluorescence signal (green arrows) toward a detection unit comprising a photomultiplier tube (PMT), transimpedance amplifier, and digitizer.

(B) To ensure real-time operation, the proposed implementation asynchronously downloads the previous frame, asynchronously uploads the current frame, schedules the PROCESS tasks for the current frame, displays the previous frame via a separate thread, and re-arms.

(C) PROCESS tasks. INP: input data in 16-bit unsigned format, RES1: 10× downscaled INP, BLR1: 29 × 29-kernel Gaussian-blurred RES1, ADD: BLR1 added with 1.0, RES2: upscaled ADD, DIV: (90% of (2^16−1)) divided by RES2, LAY1: truncated DIV at α, AMP1: INP multiplied with LAY1, RES3: 3× downscaled AMP1, BLR2: 29 × 29-kernel Gaussian-blurred RES3, RES4: upscaled BLR2, SUB1: subtraction of AMP1 from RES4, BLR3: 7×7-kernel Gaussian-blurred SUB1, SUB2 or LAY2: subtraction of LAY1 from 1.25 times α, AMP2: BLR3 multiplied with LAY2, SUB3: subtraction of AMP2 from INP, LAY3: square of one-fourth of LAY1 added with 0.9, AMP3: SUB3 multiplied with LAY3; OUT: output data in 16-bit unsigned format; controlling parameter recommended range: 3.0 ≤ α ≤ 8.0. Refer to Figure S1 for the visualization of the important intermediate steps.

To comprehend the implementation steps, please refer to Figure 1B. Once a two-dimensional (2D) dataset was acquired, we first ensured that the previous frame had been processed/displayed, and then started downloading the previously processed data from the GPU in an asynchronous manner. Immediately, the newly acquired frame was asynchronously uploaded to the GPU, and the whole set of subsequent PROCESS tasks was scheduled thereafter.
Following this step, we ensured that the scheduled download was complete and displayed the downloaded result via a different CPU thread. In the meantime, the main thread re-armed for the next frame. Figure 1C illustrates the PROCESS tasks in terms of a simplified block diagram. For a mathematical formulation, let us first assume a noise-affected low-contrast image f (r, c) in 16-bit unsigned format, marked as INP in Figure 1C, with R×C pixels, where r and c stand for row and column positions, respectively. Applying a 10× downscaling to f (r, c), we obtain a downscaled image with a reduced pixel count of R′×C′, as depicted in Equation 1 and RES1 in Figure 1C. Note that for all R′×C′-sized images, r′ and c′ stand for row and column positions, respectively. Now, a 29 × 29-kernel Gaussian blur is applied to RES1, and the blurred result is marked as BLR1 in Figure 1C. To avoid division-by-zero in the next step, each pixel value of BLR1 is incremented by 1.0, and the resultant image is denoted in Equation 2 and as ADD in Figure 1C. With a bilinear interpolation, ADD is resized back to R×C pixels, and the interpolated result is denoted as l (r, c) in Equation 3 and RES2 in Figure 1C. The inverse of each l (r, c) pixel value is now multiplied with 90% of the maximum allowed intensity, that is, 0.9 × (2^16 − 1) in this example, and the result is given as d (r, c) in Equation 4. Note that d (r, c) involves nothing but a division operation and is marked as DIV in Figure 1C. Each pixel value of d (r, c) above α is truncated to α, and the resultant layer is denoted as α (r, c) in Equation 5 and LAY1 in Figure 1C. Now, a pixel-to-pixel multiplication of f (r, c) and α (r, c) is performed, and thereby AMP1 in Figure 1C and g (r, c) in Equation 6 are obtained. With a 3× downscaling on g (r, c), we obtain RES3 in Figure 1C and in Equation 7, with a reduced pixel count of R″×C″, where r″ and c″ stand for row and column positions, respectively.
A 29 × 29-kernel Gaussian blur is applied to RES3, and the blurred result is obtained as in Equation 8 and BLR2 in Figure 1C. With a bilinear interpolation, BLR2 is resized back to R×C pixels, and the interpolated result (RES4 in Figure 1C) is denoted as L (r, c) in Equation 9. Now, g (r, c) is subtracted from L (r, c), and the result is marked as SUB1 in Figure 1C; a 7×7-kernel Gaussian blur is subsequently applied to SUB1. The blurred result thus obtained is depicted in Equation 10 and as BLR3 in Figure 1C. Based on α (r, c), a modified layer is obtained as 1.25 × α − α (r, c), which is marked as SUB2 or LAY2 in Figure 1C. Each pixel value of BLR3 is now multiplied with the corresponding pixel value in LAY2, and the result is marked as AMP2 in Figure 1C. Subsequently, AMP2 is subtracted from f (r, c) to obtain a noise-suppressed output S (r, c), as depicted in Equation 11 and SUB3 in Figure 1C. Based on the same α (r, c), another layer, LAY3 in Figure 1C, is now obtained as 0.9 + {α (r, c) / 4.0}^2.0, and finally each pixel value of S (r, c) is multiplied with the corresponding pixel value of LAY3, thereby obtaining the noise-suppressed contrast-enhanced output F (r, c) in Equation 12 and OUT in Figure 1C.
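The full PROCESS chain (Equations 1-12, Figure 1C) can be condensed into a NumPy sketch. This is an illustrative CPU re-implementation under stated simplifications, not the authors' CUDA code: a box blur stands in for the Gaussian blurs, nearest-neighbor resizing stands in for bilinear interpolation, and the clipping of intermediate negatives is our assumption.

```python
import numpy as np

def box_blur(img, k):
    """k x k box blur: a lightweight stand-in for the Gaussian blurs."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dr in range(k):
        for dc in range(k):
            out += padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out / (k * k)

def resize(img, shape):
    """Nearest-neighbour resize: a stand-in for bilinear interpolation."""
    r = np.arange(shape[0]) * img.shape[0] // shape[0]
    c = np.arange(shape[1]) * img.shape[1] // shape[1]
    return img[np.ix_(r, c)]

def dce(inp, alpha=6.0, max_level=2**16 - 1):
    """Denoised contrast enhancement following the Figure 1C labels."""
    f = inp.astype(np.float64)
    shape = f.shape
    # LAY1 = alpha(r, c): smooth INP (RES1, BLR1, ADD, RES2), divide, truncate.
    smooth = resize(box_blur(resize(f, (shape[0] // 10, shape[1] // 10)), 29)
                    + 1.0, shape)
    lay1 = np.minimum(0.9 * max_level / smooth, alpha)        # DIV -> LAY1
    amp1 = f * lay1                                           # AMP1
    # Noise layer: blurred AMP1 (RES3, BLR2, RES4) minus AMP1, then a 7x7 blur.
    l = resize(box_blur(resize(amp1, (shape[0] // 3, shape[1] // 3)), 29), shape)
    blr3 = box_blur(l - amp1, 7)                              # SUB1 -> BLR3
    # LAY2 and noise subtraction (clipping negatives is our assumption).
    s = np.clip(f - blr3 * (1.25 * alpha - lay1), 0.0, None)  # AMP2 -> SUB3
    # LAY3 and final local enhancement.
    lay3 = 0.9 + (lay1 / 4.0) ** 2.0                          # LAY3
    return np.clip(s * lay3, 0.0, max_level).astype(np.uint16)  # OUT
```

On a 16-bit frame, `dce(frame, alpha)` with the recommended 3.0 ≤ α ≤ 8.0 would return the denoised contrast-enhanced result; swapping the stand-ins for a true Gaussian blur and bilinear resize recovers the formulation above.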

Demonstration of denoised contrast enhancement via two-photon excitation fluorescence images of neuronal structures

To demonstrate our approach, we acquire TPEF images of a Nav1.8-tdTomato-positive mouse dorsal-root-ganglion (DRG) section comprising somas and fine axon fibers, and a coronal section from a Thy1-GFP-positive mouse brain cortex region comprising axons, dendrites, and dendritic spines, excited at central wavelengths of 1070 and 919 nm (70 MHz, <60 fs), respectively, with an average excitation power of <40 mW in each case (refer to STAR Methods). Figures 2A and 2C depict two-photon images of the Nav1.8-tdTomato and Thy1-GFP samples, respectively, each with a FOV of 1 × 1 mm2 and a scale bar of 150 μm, but each with poor SNR, SBR, and contrast ratio. To improve the same, the proposed DCE is applied, and based on the visual response, the value of α is adjusted to (A) 8.0 and (C) 6.0; the corresponding noise-suppressed contrast-enhanced results are depicted in Figures 2B and 2D, respectively. Note that we maintain pixel sizes of around 182 and 167 nm for the excitation wavelengths of 1070 and 919 nm, respectively. We thus satisfy the Nyquist–Shannon criterion (Nyquist, 1928; Shannon, 1949) and ensure aliasing-free imaging (Pawley, 2006; Heintzmann and Sheppard, 2007) for our 0.95 numerical-aperture (NA) objective lens, with diffraction-limited two-photon resolutions of 429 and 368 nm for the respective excitation wavelengths.
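The Nyquist claim can be checked arithmetically: with at least two samples per diffraction-limited resolution element, the pixel size must not exceed half the two-photon resolution. A trivial sketch using the numbers quoted above:

```python
# Pixel sizes and diffraction-limited two-photon resolutions from the text (nm).
settings = {
    1070: {"pixel_nm": 182, "resolution_nm": 429},  # tdTomato excitation
    919:  {"pixel_nm": 167, "resolution_nm": 368},  # GFP excitation
}

def nyquist_satisfied(pixel_nm, resolution_nm):
    """At least two pixels per resolvable element (i.e., NFOM >= 1)."""
    return pixel_nm <= resolution_nm / 2.0

checks = {wl: nyquist_satisfied(**s) for wl, s in settings.items()}
```

Both settings pass (182 ≤ 214.5 and 167 ≤ 184 nm), consistent with the aliasing-free claim.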
Figure 2

Demonstration of DCE in fluorescence microscopy imaging

(A and C) TPEF images of Nav1.8-tdTomato-positive mouse dorsal-root-ganglion (DRG) section and Thy1-GFP-positive mouse brain section, respectively, with FOV of 1 × 1 mm2, scale bar of 150 μm.

(B and D) Noise-suppressed contrast-enhanced results with (A) α = 8.0 and (C) α = 6.0, respectively; R1-3 and R4-6: 458 × 458-sized and 490 × 490-sized regions-of-interest (ROIs), respectively, cropped from A and B, and C and D, respectively, with scale bar of 15 μm; C1-6: 50 × 50-sized ROIs cropped from red-arrow marked locations in R1-6. (E) Intensity profiles along L1-6 (in C1-6), and (F)-(H) SNR, SBR, and contrast ratio plots for R1-6.

(E–H) Validate significant improvements on SNR, SBR, and contrast ratio; red and gray colors indicate before- and after-process cases, respectively. Refer to Figures S2 and S3 for two additional examples being demonstrated via the proposed method.

To visualize the effectiveness of DCE, an adequate magnification is required. We thus perform a 12× digital zoom and crop out three 458 × 458-sized regions-of-interest (ROIs) from the original 5500 × 5500 (R×C)-pixel Nav1.8-tdTomato-image, marked as R1-3 in Figures 2A and 2B with unique colored-dashed boxes. The magnified ROIs are shown on the right side of Figure 2B, each with a scale bar of 15 μm. Likewise, another three 490 × 490-sized ROIs from the original 6000 × 5926 (R×C)-pixel Thy1-GFP-image are marked as R4-6 in Figures 2C and 2D, which are zoomed alongside, each with a 15 μm scale bar. At this point, the effectiveness of DCE can be visualized with an observation of the two ROI columns for R1-6, indicating before- and after-processing scenarios. Remarkably, the cell bodies in R1-3 are enhanced, yet well-preserved against saturation while enhancing the nearby weaker fibers.
To better study the effect, we select 50 × 50-sized ROIs from the red-arrow-marked locations in the before- and after-process sets of ROIs (R1-6), which are again zoomed as C1-6 sequentially. In Figure 2E, we plot intensity profiles along the red-dotted lines L1-6 (marked in C1-6), where red and gray indicate before- (INP) and after-process (OUT) cases, respectively. We observe that in each case (L1-6), DCE effectively suppresses the noise contamination and drastically improves the contrast of the fine, weak-intensity neuronal structures. Extending our demonstration further, two 5 × 5-sized ROIs, ROI-1 and ROI-2, are taken from a signal location and a noise-affected background location, respectively, for each case of R1-6. For each ROI-1, we calculate the mean (μROI-1), and for each ROI-2, we calculate the mean (μROI-2) and SD (σROI-2); we then define SNR, SBR, and contrast ratio as μROI-1/σROI-2, μROI-1/μROI-2, and ((μROI-1 - μROI-2)/(μROI-1 + μROI-2)) × 100%, respectively. Following these definitions, the SNRs, SBRs, and contrast ratios for R1-6 are evaluated and plotted in Figures 2F–2H, respectively; the red and gray bars stand for before- (INP) and after-process (OUT) scenarios, respectively. These plots validate significant improvements in SNRs, SBRs, and contrast ratios for all cases. Note that, for a consistent analysis, we use the same ROI and line locations and the same definitions of SNR, SBR, and contrast ratio throughout the subsequent analyses in this article.
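The three definitions above can be sketched directly; a minimal stand-in using hypothetical 2 × 2 ROI pixel values (the actual measurements used 5 × 5 ROIs in ImageJ):

```python
from statistics import mean, pstdev

def roi_metrics(signal_roi, background_roi):
    """SNR, SBR, and contrast ratio exactly as defined in the text."""
    s = [p for row in signal_roi for p in row]      # flatten ROI-1 pixels
    b = [p for row in background_roi for p in row]  # flatten ROI-2 pixels
    mu_s, mu_b, sigma_b = mean(s), mean(b), pstdev(b)
    snr = mu_s / sigma_b                              # mu1 / sigma2
    sbr = mu_s / mu_b                                 # mu1 / mu2
    contrast = (mu_s - mu_b) / (mu_s + mu_b) * 100.0  # in percent
    return snr, sbr, contrast

# hypothetical 16-bit pixel values: a bright structure ROI vs. a noisy background ROI
snr, sbr, contrast = roi_metrics([[2000, 2100], [1900, 2000]],
                                 [[200, 280], [120, 200]])
print(round(snr, 1), round(sbr, 1), round(contrast, 1))  # 35.4 10.0 81.8
```

Note the population SD (`pstdev`) is used, treating the ROI as the full pixel set rather than a sample.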

Demonstration of single-parametric control: effect of α on signal-to-noise ratio, signal-to-background ratio, and contrast ratio

To quantitatively visualize the effect of α on SNR, SBR, and contrast ratio, we present Figure 3. From Figure 2, we take the same ROIs: R1 and R2 for the Nav1.8-tdTomato image, and R4 and R5 for the Thy1-GFP image. The first row (INP) in Figure 3A shows the unprocessed ROIs R1-2 and R4-5, sequentially. We then gradually increase α from 3.0 to 8.0; the corresponding outputs are depicted in the succeeding rows. The same sets of 5 × 5-sized ROIs as stated in the previous section (Figure 2) are considered, and the corresponding SNRs, SBRs, and contrast ratios are evaluated for R1-2 and R4-5 for the unprocessed input (INP) and for the outputs at α values of 3.0, 4.0, 5.0, 6.0, 7.0, and 8.0.
Figure 3

Demonstration of single-parametric control: effect of α on SNR, SBR, and contrast ratio

(A) ROIs R1-2 and R4-5 (from Figure 2), for unprocessed case (INP), and processed cases with α values of 3.0, 4.0, 5.0, 6.0, 7.0, and 8.0, sequentially, scale bar: 15 μm.

(B) SNR, SBR, and contrast ratio plots for R1-2 and R4-5 in the first and second columns, respectively; SNR and SBR show rapid improvements as α goes above 6.0 in R1-2, and 4.0 in R4-5; contrast ratio improves gradually for α ≥ 3.0 in each case.

(C) Intensity profiles along L1-2 and L4-5, plotted for INP (red), and respective outputs with α values of 3.0 (green), 4.0 (black), 5.0 (orange), 6.0 (blue), 7.0 (magenta), and 8.0 (gray), demonstrating simultaneous noise-suppression and contrast enhancement with increasing α.

In Figure 3B, we plot the SNRs, SBRs, and contrast ratios for R1-2 and R4-5 in the first and second columns, respectively. We observe that as α gradually increases, all three parameters, SNR, SBR, and contrast ratio, tend to improve. SNR and SBR improve rapidly as α goes above 6.0 in the case of R1-2, and above 4.0 in the case of R4-5; the contrast ratio tends to improve gradually for α ≥ 3.0 in each case. We do observe that both σROI-2 and μROI-2, as defined in the previous section, become zero for α ≥ 9.0 in the case of R1-2, and for α ≥ 7.0 in the case of R4-5. Continuing our assessment, Figure 3C plots the intensity profiles along L1-2 and L4-5 (see Figure 2, C1-2 and C4-5) for the INP case (red) and the respective outputs with α values of 3.0 (green), 4.0 (black), 5.0 (orange), 6.0 (blue), 7.0 (magenta), and 8.0 (gray). These intensity profiles illustrate the progress of simultaneous noise suppression and contrast enhancement with increasing α. A simple observation of the red and gray curves in L1-2, and the red and blue curves in L4-5, confirms the effectiveness of the proposed DCE algorithm.

Comparison with a few alternative software-based enhancement techniques

To ensure a fair comparison, we apply several alternative image processing methods to the full-FOV uncropped image of the Nav1.8-tdTomato sample previously shown in Figure 2A. For convenient visualization, however, we consider the same ROIs R1, R2, and R3 (marked in Figure 2) and crop them out. The first row (1) in Figure 4A shows these three ROIs, each with a scale bar of 15 μm. The subsequent rows in Figure 4A show the results of (2) multiplicative gain enhancement, (3) minimum-maximum range adjustment, (4) histogram equalization (HE), (5) contrast-limited adaptive histogram equalization (CLAHE), (6) unsharp masking (UM), (7) morphological erosion, (8) morphological opening, (9)-(10) rolling-ball and sliding-paraboloid background subtractions, and finally, (11) DCE, for each case of R1-3. Following the same sets of 5 × 5-sized ROIs used in the previous section (Figure 2), we evaluate the SNR, SBR, and contrast ratio for each method (1)-(11) and each ROI (R1-3), and plot them in Figure 4B, where gray, orange, and cyan bars denote the results for R1, R2, and R3, respectively. Further extending our comparison, in Figure 4C, we plot the intensity profiles along L1-3 (see Figure 2, C1-3) for each case of (1)-(11).
Figure 4

Comparison with a few alternative software-based enhancement techniques

(A) ROIs R1, R2, and R3 (from Figure 2) with a scale bar of 15 μm, depicted for (1) INP (unprocessed), (2) multiplicative gain enhancement, (3) minimum-maximum range adjustment, (4) histogram equalization (HE), (5) contrast-limited adaptive histogram equalization (CLAHE), (6) unsharp masking (UM), (7) morphological erosion, (8) morphological opening, (9)-(10) rolling ball and sliding paraboloid background subtractions, and (11) proposed DCE.

(B) SNR, SBR, and contrast ratio plots for (1)-(11); gray, orange, and cyan bars denote results for R1, R2, and R3, respectively. (C) Intensity profiles along L1-3 plotted for (1)-(11). In contrast to cases (2)-(10), DCE (11) enhances SNRs, SBRs, and contrast ratios all at the same time. While enhancing weaker structures, DCE prevents the saturation of the brighter ones; for instance, the cell bodies in R3 are mostly saturated in (2) and (3), whereas they are well preserved in (11).

From both analyses in Figures 4B and 4C, we observe that for morphological erosion (7) and opening (8), the SNRs tend to increase as these operations reduce high-frequency noise components, resulting in a lower noise SD; however, the SBRs and contrast ratios do not show a substantial improvement. In the case of rolling-ball and sliding-paraboloid background subtractions (9)-(10), the contrast ratios tend to improve; however, the signal information is simultaneously reduced in each case (Figure 4C, 9-10), and no substantial SNR or SBR improvements are observed. Likewise, CLAHE (5) improves the contrast ratios but might still encounter noise amplification and result in poor SNR, especially when a higher clip limit is used. In contrast to such approaches (2)-(10), DCE (11) successfully enhances SNRs, SBRs, and contrast ratios all at the same time. It is further remarkable that while enhancing the visibility of weaker structures, DCE mostly prevents the saturation of the brighter ones.
For instance, the bright cell bodies in R3 are mostly saturated in cases (2) and (3), whereas they are well preserved in our case (11).
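The failure modes of the two simplest baselines, (2) multiplicative gain and (3) minimum-maximum range adjustment, can be sketched on a hypothetical row of 16-bit pixel values (the 20%-60% limits mirror the settings given in STAR Methods; the pixel values themselves are illustrative):

```python
MAX16 = 65535  # full scale of a 16-bit unsigned image

def gain(pixels, g):
    # (2) multiplicative gain: background noise is amplified by the same
    # factor as signal, and bright somas clip at full scale
    return [min(int(p * g), MAX16) for p in pixels]

def minmax_stretch(pixels, lo, hi):
    # (3) min-max range adjustment: maps [lo, hi] linearly onto [0, MAX16]
    out = []
    for p in pixels:
        p = min(max(p, lo), hi)
        out.append(round((p - lo) / (hi - lo) * MAX16))
    return out

row = [300, 900, 4000, 60000]  # background, fiber, dendrite, soma (hypothetical)
print(gain(row, 2.0))                     # [600, 1800, 8000, 65535] - soma saturates
print(minmax_stretch(row, 13107, 39321))  # [0, 0, 0, 65535] - fibers vanish, soma clips
```

Neither operation improves SBR: gain scales signal and background alike, while the fixed stretch window discards the dim-fiber intensities entirely, which is the behavior the ROI comparison in Figure 4 illustrates.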

Assessment and comparison of time complexity, and validation of real-time applicability

Figure 5A plots the average processing time for DCE in milliseconds with respect to the input image size (16-bit unsigned format). The red curve depicts the average processing time via a conventional CPU, an Intel i7-9800X, consuming up to ∼2000 ms for a 10,000 × 10,000-sized input under single-threaded execution. The blue and green curves indicate the average processing time via two CUDA-enabled GPUs, a Quadro P1000 and a Quadro RTX 8000, with 640 and 4608 CUDA cores, respectively. Both GPUs show a significant improvement in processing speed. For instance, the RTX 8000 consumes only ∼111 ms for a 10,000 × 10,000-sized input, around an 18× performance boost over the 9800X. Likewise, for a 1000 × 1000-sized input, the processing time for the 9800X is ∼21 ms, whereas the RTX 8000 takes <3 ms.
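The per-size averages reported here (100 measurements each, per STAR Methods) can be reproduced with a simple harness; the workload below is a stand-in, not the DCE kernel itself (the actual CUDA implementation is in Data S1):

```python
import time

def average_ms(fn, arg, runs=100):
    # average wall-clock time of fn(arg) over `runs` repetitions, in ms,
    # mirroring the paper's 100-measurement averaging
    t0 = time.perf_counter()
    for _ in range(runs):
        fn(arg)
    return (time.perf_counter() - t0) / runs * 1e3

# stand-in per-frame workload: scale every pixel of a flattened "frame"
frame = list(range(1_000_000))  # roughly a 1000 x 1000 input
ms = average_ms(lambda f: [p * 2 for p in f], frame, runs=5)
print(f"~{ms:.1f} ms per frame on this machine")
```

Note that the paper's GPU figures also include host-to-GPU upload and GPU-to-host download in each timed run, so any reproduction should wrap the transfers inside the timed function as well.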
Figure 5

Assessment and comparison of processing speed, and validation of real-time applicability

(A) Average processing time in milliseconds per frame plotted for Intel i7-9800X (red), Quadro P1000 (blue), and Quadro RTX 8000 (green), with respect to input image size (16-bit unsigned format); for 10,000 × 10,000-sized input, 9800X consumes ∼2000 ms, whereas RTX 8000 takes ∼111 ms with an 18× performance boost.

(B) Average processing time in milliseconds per frame with respect to input image size (16-bit unsigned format), plotted for contrast limited adaptive histogram equalization (CLAHE) at different tile grid sizes and proposed DCE; CLAHE processing time tends to increase for larger tile grid size, whereas DCE is consistent for any α.

(C) Average of total processing and display time in milliseconds per frame (left axis) and respective frame rate (right axis) for proposed DCE, plotted with respect to input image size; observed frame rate (green) is substantially higher than mechanical frame rate for a state-of-the-art 12 kHz resonant scanner (orange).

We now compare our processing time with that of a widely used state-of-the-art technique, contrast-limited adaptive histogram equalization (CLAHE). For a reasonable comparison, both CLAHE and DCE are tested on the RTX 8000 using 16-bit unsigned format images. In Figure 5B, the green curve plots the average processing time for DCE, while the others plot the same for CLAHE at different tile grid sizes. We observe that for a smaller tile grid size, CLAHE is faster than DCE; however, its performance degrades gradually as the tile grid size increases. On the other hand, for a fixed input size, the performance of DCE is consistent for any recommended value of α. Note that each reported GPU-processing time in Figures 5A and 5B includes uploading the input data from host to GPU, processing the data on the GPU, and downloading the output data from GPU to host.
Up to this stage, we do not consider displaying the downloaded (from GPU) data on a computer screen. To assess the effective performance of DCE, we now implement real-time display of the processed images on a monitor and measure the total time required for uploading to the GPU, processing in the GPU, downloading from the GPU, and displaying on a monitor, for different input image sizes. The gray and green curves in Figure 5C plot the total time required per frame and the corresponding frame rate, respectively, as the input data size is varied. We observe that for a 10,000 × 10,000-sized input image, a total time of ∼154 ms is consumed, resulting in a frame rate of >6 frames per second (fps). Likewise, for a 1000 × 1000-sized input image, the total time per frame is ∼4 ms, making a frame rate of ∼248 fps feasible. Note that for our analysis, we employed image sizes from 1000 × 1000 up to 10,000 × 10,000; our display screen, however, was limited to a resolution of 3840 × 2160 and a refresh rate of 60 Hz. For a high-speed and high-digital-resolution laser-scanning MPM, one can employ a fast enough resonant scanner to facilitate fast raster scanning, provided a high-repetition-rate laser, a fast-sampling-rate digitizer, and a short-lifetime fluorophore are simultaneously accessible to maintain an adequate Nyquist figure-of-merit (NFOM) (Borah et al., 2021). Assuming a state-of-the-art resonant scanner with a 12 kHz scanning frequency (24 kHz line rate under bidirectional scanning), the mechanical frame rate is represented by the orange curve in Figure 5C, where we observe frame rates of 2.4 fps and 24 fps for slow-axis line numbers of 10,000 and 1,000, respectively. It is thus evident that our effective frame rate for DCE is considerably higher than the mechanical frame rate of such a 12 kHz scanning system.
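The scanner-side numbers follow from simple arithmetic and can be checked directly (line rate and frame sizes as quoted in the text):

```python
LINE_RATE = 2 * 12_000  # 12 kHz resonant scanner, bidirectional -> 24,000 lines/s

def mechanical_fps(slow_axis_lines):
    # one scan line is needed per slow-axis row of the frame
    return LINE_RATE / slow_axis_lines

def required_pixel_rate(fast_axis_pixels):
    # samples per second needed to place this many pixels on every line
    return LINE_RATE * fast_axis_pixels

print(mechanical_fps(10_000))       # 2.4 fps
print(mechanical_fps(1_000))        # 24.0 fps
print(required_pixel_rate(10_000))  # 240000000, i.e. 240 M/s
print(1000 / 154)                   # ~6.5 fps: DCE pipeline at 154 ms/frame
```

Comparing the last two prints against the mechanical rates shows the DCE pipeline (6.5 fps at 154 ms/frame) outpacing the 2.4 fps mechanical limit for a 10,000-line frame.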
Note that although we perform this time-complexity assessment up to an image size of 10,000 × 10,000, this is an extreme scenario for a 12 kHz resonant-scanning MPM: securing at least 10,000 pixels along the fast axis would require a pixel rate of at least 240 M/s, which is subject to the availability of a suitable short-lifetime fluorophore, associated acquisition electronics, and a fast enough laser repetition rate. This extreme example simply indicates that DCE holds the potential to be implemented in real time in most typical MPM imaging applications.

Discussion

In this article, we report a dedicated-hardware-free rapid DCE technique to digitally mimic the effect of hardware-based adaptive/controlled illumination. To better comprehend the idea, consider the 50 × 50-pixel ROIs C1-6 (cropped from R1-6) in Figure 2. It is evident that with our fixed illumination scenario (refer to STAR Methods), we are unable to retrieve the fine nerve fibers with an adequate contrast ratio. The reason is simply the weak fluorescence signal from a fiber, which is comparable to the noisy background, as depicted by each red curve in L1-6, Figure 2E. Note that a conventional adaptive/controlled illumination technique is expected to improve this situation by either increasing the excitation power once the raster-scanning laser spot is focused on the fiber (provided the system can distinguish it), and/or reducing the excitation power once the laser spot passes the fiber being scanned. This process makes the fluorescence signal from the fiber adequately stronger than the background, enabling better visibility. With the same fixed illumination scenario, now consider our DCE-applied versions of C1-6 alongside, where the visibility of the ultrafine fibers has been improved substantially, as further depicted by the gray curves in L1-6 in Figure 2E. The situation can be understood as follows: in the vicinity of a nerve fiber, we effectively enable a laser-off state, while over the fiber itself (i.e., the low-frequency information) we preserve a laser-on state. In other words, we enable the effect of adaptive illumination control digitally, with no physical real-time tuning of the excitation power.
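The authors' actual algorithm is specified in Table S1 (STAR Methods). Purely as a loose conceptual illustration of the digital "laser-off/laser-on" idea described above, and emphatically not the published implementation, one can low-pass-estimate the local structure along a scan line and gate pixels whose neighborhood stays near the noise floor:

```python
def smooth(x, w=3):
    # moving-average estimate of the low-frequency structure
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def digital_onoff_gate(x, alpha, noise_level):
    # hypothetical gate: where the smoothed signal stays within alpha times
    # the noise floor, emulate a "laser-off" state (zero the pixel);
    # elsewhere keep the pixel ("laser-on"); NOT the Table S1 algorithm
    threshold = alpha * noise_level
    return [xi if si > threshold else 0 for xi, si in zip(x, smooth(x))]

line = [5, 8, 6, 40, 55, 42, 7, 4]  # a dim fiber crossing a noisy scan line
print(digital_onoff_gate(line, alpha=3.0, noise_level=6))
# -> [0, 0, 0, 40, 55, 42, 0, 0]: background off, fiber preserved
```

The smoothing step is what distinguishes this from simple thresholding: isolated high-frequency noise spikes fail the neighborhood test, while contiguous low-frequency structure passes, mirroring the high-/low-frequency distinction the text draws.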
We performed large-FOV aliasing-free TPEF neuronal imaging at multiple excitation wavelengths and validated the effectiveness of the proposed method by retrieving weak-intensity ultrafine neuronal structures amidst a strong noisy background, demonstrating simultaneous improvements in SNR, SBR, and contrast ratio. A CUDA-assisted reduced time complexity of <3 ms for a 1000 × 1000-sized dataset (16-bit unsigned format) is secured, enabling real-time applicability. For better DCE performance, the input image should not be saturated; one can adjust the excitation power, the gain of the fluorescence detection system, and the input range of the digitizer to prevent saturation. Note that DCE tends to suppress a noisy background near the edges of a bright structure more aggressively than near a weaker structure. That is to say, the contrast of the bright structures is boosted first, even at a lower α, and as the value of α is increased, the contrast of the weak-intensity structures gradually improves. At a lower value of α, this behavior might therefore lead to an artifact, particularly when a strong noisy background coexists with bright or saturated structures. To minimize this, a higher value of α is necessary; however, an excessively high α might suppress useful low-frequency information along with the noisy background. Based on our observations, we recommend an α range of 3.0–8.0. Practically, one should first apply a lower α value and, based on visual perception, increase α until a satisfactory result is observed. We recommend that the data-acquisition system satisfy or exceed the respective Nyquist–Shannon criterion (Nyquist, 1928; Shannon, 1949), so that the smallest resolvable structure gets digitized with at least four pixels. One should not downscale the dataset before applying DCE.
However, if the sampling pixel size is much smaller than that required by the Nyquist–Shannon criterion, suitable pixel-binning can be performed. It is recommended not to apply a conventional low-pass filter to the digitized dataset before applying DCE, as the noise components would lose their high-frequency nature and our approach might thereafter treat them as low-frequency information. Note that DCE boosts the weaker structures while mostly preventing the brighter ones from saturating. Such local enhancement, however, might not be suitable for certain quantitative analyses. Furthermore, note that this article is dedicated to neuronal connectomics, that is, neuronal structural imaging studies utilizing a large-FOV, high-resolution, high-NFOM MPM (Borah et al., 2021), and does not report any low-pixel-rate but high-dynamic-range image or imaging system. A poor pixel rate is usually not recommended for a mesoscopic structural imaging application targeting a centimeter-scale tissue sample, especially when submicron digital resolution is a primary concern. A typical ∼0.5 MP image comprises around 700 × 700 pixels. To retrieve a typical 500 nm resolution, the pixel size must be ≤250 nm. Respecting the Nyquist–Shannon criterion, we are thus not allowed to extend the FOV beyond 0.175 × 0.175 mm2. Thus, if we target the whole connectomics of a typical ∼500 mm3 mouse brain, it would require at least 16 M tiles at an axial step of 1 μm, even without considering any overlap of adjacent tiles. This essentially leads to a tremendous computational load for the millions of stitching operations.
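The tile-count estimate above can be reproduced with back-of-the-envelope arithmetic (brain volume, pixel counts, and step sizes as quoted in the text):

```python
pixels_per_side = 700   # a ~0.5 MP image, around 700 x 700 pixels
pixel_mm = 250e-6       # 250 nm Nyquist-limited pixel size, in mm
fov_mm = pixels_per_side * pixel_mm  # 0.175 mm per FOV side
assert abs(fov_mm - 0.175) < 1e-9

brain_mm3 = 500         # typical mouse brain volume
axial_step_mm = 1e-3    # 1 um axial step
tile_volume_mm3 = fov_mm ** 2 * axial_step_mm  # volume covered by one tile
tiles = brain_mm3 / tile_volume_mm3
print(f"{tiles / 1e6:.1f} M tiles needed")  # ~16.3 M, i.e. "at least 16 M"
```

The estimate ignores tile overlap, so any practical stitching pipeline would need even more tiles than this lower bound.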
On the other hand, high-pixel-rate laser scanning helps maximize the number of pixels per fast-axis line, and thus allows us to extend the FOV up to millimeter scale while maintaining a high enough digital resolution with a high NFOM; it is hence a promising approach for high-resolution mesoscopic structural imaging with a reduced requirement of stitching operations. It is typical for a conventional PMT-based laser-scanning system to encounter a limited photon number, and hence a low or moderate dynamic range, especially when employing few optical pulses per pixel in high-NFOM imaging scenarios; this can, however, be improved by lowering the gain, suitably adjusting the excitation power, and integrating multiple frames and/or pixels as per the purpose and requirements of a specific research goal. Nevertheless, the assessment of DCE on a low-pixel-rate but high-dynamic-range image has not been addressed in this article and might be a future implication of the proposed algorithm. The reported DCE algorithm has tremendous potential to be applied before the segmentation of neuronal structures, which can help construct high-resolution 3D neuronal maps for various neuronal connectomics studies targeting ultra-large volumetric brain regions, or even an intact whole animal brain. While performing ultra-deep mesoscopic volumetric imaging, owing to unavoidable scattering and absorption issues, the signal-of-interest often tends to degrade as one penetrates deeper into the tissue. This issue becomes prominent when the depth of penetration is at millimeter scale, severely deteriorating the signals from weak-intensity ultrafine neuronal morphologies residing at great depths. In such situations, DCE can be applied to the acquired images to improve the visibility of the neuronal structures in real time. Besides such deep-volumetric imaging, DCE holds potential for two-dimensional imaging applications as well.
For ultra-large centimeter-scale imaging of an optical section, it is typical practice to adequately extend the FOV. However, optical aberrations are typically unavoidable, especially near the FOV edges and corners. This often leads to non-uniform excitation and detection efficiencies across the FOV. As a result, the signal strengths of weak-intensity structures residing at off-axis locations often tend to deteriorate. In such situations, DCE can greatly help improve visibility at the FOV edges and corners, which can further facilitate an artifact-free digital image mosaic-stitching operation. We are currently working on a post-processing-free, sub-minute, gigapixel nonlinear optical imaging technology providing submicron digital resolution over a centimeter-scale area, where the proposed CUDA-accelerated DCE greatly helps optimize visibility across our millimeter-scale high-NFOM FOV to facilitate uniform and artifact-free mosaic-stitching operations, all in real time. This technology is dedicated to intra-operative rapid tumor-border assessment of excised human brain tumor tissues, providing histopathological details as an alternative to a traditional frozen-section biopsy. Although we have demonstrated DCE in a high-NFOM MPM, the algorithm has the potential to be extended to other forms of linear and nonlinear optical imaging, as well as clinical modalities such as ultrasound, CT, X-ray, and MRI.

Limitations of the study

The reported technique performs local enhancement of weak-intensity structures, which might not be suitable for certain quantitative analyses. Unlike a hardware-based adaptive/controlled illumination technique, DCE does not regulate the excitation power to minimize photobleaching and/or phototoxicity. However, for a large-FOV imaging scenario, the power density per unit area is lower than in a small-FOV case at the same average excitation power. Thus, photobleaching and/or phototoxicity can be minimized with a moderate or low average excitation power while imaging across a millimeter-scale FOV.

STAR★Methods

Key resources table

Resource availability

Lead contact

Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Chi-Kuang Sun (sun@ntu.edu.tw).

Materials availability

This study did not generate new unique reagents.

Experimental model and subject details

One transgenic Thy1-GFP male mouse and one Nav1.8-tdTomato male mouse, both 8 weeks old, were used in this study. The mice were housed on a 12-hour light/12-hour dark cycle and fed ad libitum. Mice were maintained in accordance with guidelines approved in the Codes for Experimental Use of Animals of the Council of Agriculture of Taiwan, based on the Animal Protection Law of Taiwan. All experimental protocols were approved by the Institutional Animal Care and Use Committee of National Taiwan University, Taipei, Taiwan. This study did not involve human subjects.

Method details

Two-photon excitation fluorescence imaging of neuronal structures

Two-photon fluorescence imaging (Denk et al., 1990; Carriles et al., 2009) was performed using a high-NA (>0.9), low-magnification (20×) objective lens (Olympus XLUMPlanFl, 20×/0.95W). The scanning head employed a resonant scanner (CRS 4 kHz, driver: 311–149887, Cambridge Technology, MA, USA) and a galvanometer scanner (8320K, driver: MicroMax 671, Cambridge Technology, MA, USA) for fast- and slow-axis scanning, respectively. A custom tube lens and a general scan lens (LSM05-BB, Thorlabs, NJ, USA), with effective focal lengths of 167 and 110 mm, respectively, were used, providing a ∼1.5× beam magnification. A 70 MHz, <60 fs fiber laser (Fidelity-2, Coherent, Inc., CA, USA) centered at 1070 nm was utilized directly, with a one-pulse-per-voxel acquisition scheme, for excitation of the Nav1.8-tdTomato sample. The 70 MHz sync signal from the Fidelity-2 was fed to the external clock input of our digitizer; the sampling frequency was thus maintained at 70 M/s. For excitation of the Thy1-GFP sample, the central wavelength was shifted to ∼919 nm. To achieve this, the output of the Fidelity-2 fiber laser was free-space-coupled into a 7 mm-long photonic crystal fiber to induce a negative dispersion and generate Cherenkov radiation (Liu et al., 2015; Li et al., 2016). Long-pass and short-pass filters with cut-on and cut-off wavelengths of 750 and 1000 nm, respectively, were used to ensure a spectrum centered around 919 nm. A pulse duration of <60 fs after the objective lens was ensured by pulse pre-chirping (Liang et al., 2010) with a grating pair. A dichroic beam splitter (FF735-Di02, Semrock) reflected the emerging fluorescence signal into a detection unit comprising a relay system of two lenses with effective focal lengths of 150 mm (Edmund Optics: 32–982) and 40 mm (Edmund Optics: 48–654), producing a 3.75× demagnification, followed by a photomultiplier tube (PMT, R10699, Hamamatsu Photonics, Japan).
To ensure detection of the Nav1.8-tdTomato and Thy1-GFP two-photon fluorescence signals, two band-pass filters, FF01-580/60-25-D and FF03-525/50-25, respectively, were utilized, each placed before the photosensitive area of the PMT. A colored glass filter (FGB37-A, Thorlabs) was additionally placed in series with the band-pass filter in each case. A transimpedance amplifier, C6438-01 (Hamamatsu Photonics, Japan), performed current-to-voltage conversion of the PMT output signal, which was subsequently digitized with a high-speed digitizer, ATS9440 (Alazar Technologies Inc., Canada). Note that the ATS9440 captured only the negative voltages from the C6438-01, utilizing half of its input range; the data from the ATS9440 were therefore appropriately scaled to fit the 16-bit unsigned image format used for testing and demonstration of DCE in this article. For more information on our acquisition system, image calibration, and other relevant details, please refer to our previously published paper (Borah et al., 2021) describing the idea and construction of a high-NFOM MPM.

Implementation of the proposed DCE method via GPU-acceleration

For GPU-accelerated image/data processing, an open source computer vision library, OpenCV (version: 4.5.0) was built with CUDA libraries (version: 10.1, update 2). Table S1 depicts detailed implementation steps for the proposed method. Codes were developed using Microsoft Visual Studio and are available in Data S1.

Data processing and analysis

All intensity profiles along L1-6 shown in Figures 2, 3, and 4 were obtained via ImageJ (1.53c). All graphs were plotted using OriginPro. In Figure 4, a multiplicative gain of 2.0 was used in (2); the minimum-to-maximum range was set as 20% to 60% of the maximum (i.e., 65535) in (3); a tile grid size of 20 × 20 and a clip limit of 4.0 were used for contrast-limited adaptive histogram equalization (CLAHE) in (5); a radius of 7 pixels and a mask weight of 0.5 were used for unsharp masking (UM) in (6); a 5-pixel-wide elliptical structuring element was employed for the morphological erosion and opening operations in (7) and (8), respectively; and a ball radius of 70 pixels was used for the rolling-ball and sliding-paraboloid background subtractions (with smoothing enabled) in (9) and (10), respectively. Each reported processing time in Figures 5A–5C is an average of 100 measurements performed with standard C++ functions. CUDA (10.1)-accelerated OpenCV (4.5) built-in functions were employed for both methods in Figure 5B. In each GPU-processing case in Figures 5A–5C, asynchronous data transfer was employed between host and GPU.

Quantification and statistical analysis

Means and/or standard deviations for the SNR, SBR, and contrast-ratio measurements in Figures 2, 3, and 4 were obtained using standard ImageJ functions.
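The figures of merit above reduce to simple ROI statistics. The sketch below assumes common definitions (mean signal over background standard deviation for SNR, mean-over-mean for SBR, and a Michelson-style contrast); the paper's exact ROI choices are made in ImageJ and are not reproduced here.

```python
import numpy as np

def snr(signal_roi, background_roi):
    """SNR as mean signal over the standard deviation of the
    background ROI (assumed definition)."""
    return np.mean(signal_roi) / np.std(background_roi)

def sbr(signal_roi, background_roi):
    """Signal-to-background ratio: mean signal over mean background."""
    return np.mean(signal_roi) / np.mean(background_roi)

def contrast_ratio(roi):
    """Michelson-style contrast between the maximum and minimum
    intensities within an ROI (assumed definition)."""
    mx, mn = float(np.max(roi)), float(np.min(roi))
    return (mx - mn) / (mx + mn)
```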

Additional resources

Experimental protocols were approved by the Institutional Animal Care and Use Committee of National Taiwan University, Taipei, Taiwan (approval number: NTU105-EL-00113).
REAGENT or RESOURCE | SOURCE | IDENTIFIER

Software and algorithms

OpenCV | Intel Corporation, USA | https://opencv.org
CUDA toolkit | NVIDIA Corporation, USA | https://developer.nvidia.com/cuda-toolkit
Microsoft Visual Studio | Microsoft Corporation, USA | https://visualstudio.microsoft.com/
ImageJ | National Institutes of Health, USA | https://imagej.nih.gov/ij
Origin | OriginLab, USA | https://www.originlab.com/

Other

Quadro RTX 8000 | NVIDIA Corporation, USA | https://www.nvidia.com
Quadro P1000 | NVIDIA Corporation, USA | https://www.nvidia.com

