| Literature DB >> 35559212 |
Weiyi Liu1, Junxia Pan1, Yuanxu Xu1, Meng Wang2, Hongbo Jia3,4, Kuan Zhang1, Xiaowei Chen1, Xingyi Li2, Xiang Liao2.
Abstract
Two-photon Ca2+ imaging is a widely used technique for investigating brain functions across multiple spatial scales. However, the recording of neuronal activity is affected by brain movement during tasks in which the animal behaves naturally. Although post hoc image registration is the commonly used remedy, recent developments in online neuroscience experiments require real-time image processing with efficient motion correction, posing new challenges in neuroinformatics. We propose a fast and accurate motion correction method based on image density features to address the problem of imaging behaving animals. The method first robustly estimates and clusters density features from the two-photon images. It then exploits the temporal correlation in the imaging data to update the features of consecutive frames with efficient calculations. Motion artifacts can thus be corrected quickly and accurately by matching the features and obtaining the transformation parameters for the raw images. Based on this efficient motion correction strategy, our algorithm achieves promising computational efficiency on imaging datasets at scales ranging from dendritic spines to neuronal populations. Furthermore, we show that the proposed method outperforms other approaches in terms of both computational speed and correction quality. In summary, we provide a powerful tool for motion correction of two-photon Ca2+ imaging data, which may facilitate online imaging experiments in the future.
Keywords: behaving mice; image density feature; image registration; motion correction; online experiment; two-photon Ca2+ imaging
Year: 2022 PMID: 35559212 PMCID: PMC9088923 DOI: 10.3389/fninf.2022.851188
Source DB: PubMed Journal: Front Neuroinform ISSN: 1662-5196 Impact factor: 3.739
FIGURE 1 Processing pipeline for two-photon Ca2+ imaging data. (A) Flowchart of the proposed motion correction method; the modules marked in red are the key steps. (B) Illustration of feature extraction (red dots) from the imaging data. (C) Illustration of the feature update for a raw frame; features of the current frame and the next frame are marked as red and blue dots, respectively. (D) Illustration of feature matching between the template image and a raw frame. Anatomical directions: a, anterior; m, medial; p, posterior; l, lateral.
FIGURE 2 Features from accelerated segment test (FAST) detection and sampled FAST features. (A) Corner features detected by the FAST algorithm. (B) Sampled FAST features, uniformly distributed to reduce the computational cost. Features are represented as red dots.
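The detection-then-sampling idea in Figure 2 can be sketched as follows. This is an illustration, not the paper's implementation: the FAST test (comparing a ring of 16 pixels around each candidate) is replaced here by a simple Shi-Tomasi-style corner score so the sketch stays self-contained, and the cell size and threshold are arbitrary choices. The uniform-sampling step (at most one feature per grid cell) is the part that mirrors the figure.

```python
import numpy as np

def corner_scores(img):
    """Shi-Tomasi-style corner response (min eigenvalue of the structure
    tensor). A stand-in for FAST, which instead tests a ring of 16 pixels."""
    gy, gx = np.gradient(img.astype(float))
    def smooth(a):  # cheap cross-shaped smoothing of the tensor entries
        return (a + np.roll(a, 1, 0) + np.roll(a, -1, 0)
                  + np.roll(a, 1, 1) + np.roll(a, -1, 1)) / 5.0
    ixx, iyy, ixy = smooth(gx * gx), smooth(gy * gy), smooth(gx * gy)
    half_trace = (ixx + iyy) / 2.0
    disc = np.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy ** 2)
    return half_trace - disc  # smaller eigenvalue of [[ixx, ixy], [ixy, iyy]]

def sample_features_uniformly(img, cell=16, thresh=1e-3):
    """Keep at most one feature (the strongest corner) per cell x cell tile,
    so features are spread evenly over the frame and cheap to process."""
    score = corner_scores(img)
    h, w = img.shape
    feats = []
    for r0 in range(0, h, cell):
        for c0 in range(0, w, cell):
            tile = score[r0:r0 + cell, c0:c0 + cell]
            idx = np.unravel_index(np.argmax(tile), tile.shape)
            if tile[idx] > thresh:
                feats.append((r0 + idx[0], c0 + idx[1]))
    return np.array(feats)
```

On a synthetic image containing a single bright square, only the tiles holding the square's four corners yield features; straight edges and flat regions score zero and are rejected.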
FIGURE 3 A global W-KDE for a two-photon image and a visual demonstration of fast hill climbing to update initial coordinates. (A) A representative two-photon image with a global-range ROI marked in red. (B) The topography of the global W-KDE; the density map is indicated by the grayscale bar on the right. (C) An initial coordinate obtained from the sampled FAST features, with a 101-pixel ROI centered on it. The ROI is marked in red, and the initial coordinate and the local maximum are plotted as crosses, which also appear in panel (D). (D) The W-KDE of this ROI, revealing the topography and the process of fast hill climbing; the density map is indicated by the grayscale bar on the right. Each iteration of this feature is marked on the topography by a pink dot.
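The hill climbing in panel (D) can be sketched as intensity-weighted mean shift: treat the image as a weighted kernel density estimate (one reading of "W-KDE") and repeatedly move the coordinate to the weighted centroid of its neighbourhood until it reaches a local density maximum. The Gaussian bandwidth and iteration limit below are illustrative values, not the paper's.

```python
import numpy as np

def wkde_hill_climb(img, start, bandwidth=5.0, n_iter=30, tol=1e-3):
    """Climb uphill on an intensity-weighted Gaussian KDE of the image.
    Each step is a mean shift: the point moves to the intensity- and
    kernel-weighted centroid of its neighbourhood, converging on a
    local density maximum."""
    h, w = img.shape
    rows, cols = np.mgrid[0:h, 0:w]
    pos = np.array(start, dtype=float)
    for _ in range(n_iter):
        d2 = (rows - pos[0]) ** 2 + (cols - pos[1]) ** 2
        weights = img * np.exp(-d2 / (2.0 * bandwidth ** 2))
        total = weights.sum()
        if total <= 0:  # nothing nearby to climb toward
            break
        new_pos = np.array([(weights * rows).sum(),
                            (weights * cols).sum()]) / total
        if np.linalg.norm(new_pos - pos) < tol:
            pos = new_pos
            break
        pos = new_pos
    return pos
```

Started a few pixels away from a bright blob, the iterate contracts geometrically toward the blob's centre, which matches the pink-dot trajectory shown in the figure.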
FIGURE 4 Demonstration of the processing used to generate precise high-density features. (A) The sampled FAST features processed by rough clustering. (B) Filtering of the features to reduce noise. (C) Clustering with a precise termination criterion and merging of coincident features.
FIGURE 5 Updating features in consecutive mode. (A) Features of the current frame, marked by red dots. (B) Features of the next frame, updated from those of the current frame; features of the next frame are indicated by blue dots and those of the current frame by red dots. (C) Enlargement of the ROI in panel (B), showing the flow of coordinate changes during this iteration; the arrows indicate the movement directions.
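The consecutive update exploits the fact that adjacent frames are highly correlated: instead of re-detecting features from scratch, each feature from the current frame is warm-started at its old position and refined with a few cheap local steps. The sketch below is an assumption about how such a refinement could look (local intensity centroid inside a small window); the window radius and iteration count are illustrative.

```python
import numpy as np

def update_features(next_frame, prev_feats, radius=6, n_iter=3):
    """Warm-start feature update: each feature from the current frame is
    refined on the next frame by a few intensity-centroid steps inside a
    small window -- far cheaper than a full re-detection pass."""
    h, w = next_frame.shape
    updated = []
    for r, c in prev_feats:
        r, c = float(r), float(c)
        for _ in range(n_iter):
            r0, r1 = max(0, int(r) - radius), min(h, int(r) + radius + 1)
            c0, c1 = max(0, int(c) - radius), min(w, int(c) + radius + 1)
            win = next_frame[r0:r1, c0:c1]
            total = win.sum()
            if total <= 0:  # empty neighbourhood, keep the old position
                break
            rows, cols = np.mgrid[r0:r1, c0:c1]
            r = (win * rows).sum() / total
            c = (win * cols).sum() / total
        updated.append((r, c))
    return np.array(updated)
```

A feature sitting on a blob that drifts by a couple of pixels between frames is pulled onto the blob's new position within these few iterations, which is the behaviour the arrows in panel (C) depict.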
FIGURE 6 Visualization of the process used to generate the descriptor. (A) The features of the template image obtained by FIFER, marked by red dots. (B) A raw frame to be corrected, with its corresponding features. (C) The density distribution of the template image with corresponding features. (D) The density distribution of the raw frame with corresponding features. The density maps in panels (C,D) are indicated by the grayscale bar on the right. (E) The descriptor of a feature selected from the template image. (F) The descriptor of a feature from the raw frame. Each descriptor in panels (E,F) is a collection of the differences between the selected feature and all other features.
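A minimal reading of such a descriptor, the collection of differences between one feature and all the others, can be sketched as below. Sorting by offset length is an assumption added here so the ordering is reproducible; the useful property (which the sketch demonstrates) is that these offsets are unchanged when the whole frame translates.

```python
import numpy as np

def density_descriptor(i, feats):
    """Descriptor of feature i: the offset vectors from feature i to every
    other feature, sorted by length. Because only differences enter, the
    descriptor is invariant to a global translation of the frame."""
    diffs = np.delete(feats, i, axis=0) - feats[i]
    order = np.argsort(np.linalg.norm(diffs, axis=1))
    return diffs[order]
```

Translating an entire feature set leaves every descriptor bit-for-bit identical, which is what makes descriptor matching across shifted frames possible.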
FIGURE 7 Matching relationships between the features of the template image and a raw frame. (A) The features of the template image obtained by FIFER, marked by red dots. (B) A raw frame to be corrected, with its corresponding features. (C) The coupling of corresponding features; matched pairs are indicated by red lines.
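The matching step can be sketched by pairing each template feature with the frame feature whose difference-vector descriptor is closest, then reading off the motion from the matched pairs. This is a simplification: FIFER obtains full transformation parameters, while the sketch below recovers only a rigid translation (as the median of matched offsets) and assumes equal feature counts in both sets.

```python
import numpy as np

def match_and_estimate_shift(template_feats, frame_feats):
    """Match features by nearest difference-vector descriptor, then take
    the median of the matched coordinate offsets as a rigid translation
    estimate. Assumes both sets contain the same number of features."""
    def desc(i, feats):  # offsets to all other features, sorted by length
        d = np.delete(feats, i, axis=0) - feats[i]
        return d[np.argsort(np.linalg.norm(d, axis=1))]
    t_desc = [desc(i, template_feats) for i in range(len(template_feats))]
    f_desc = [desc(j, frame_feats) for j in range(len(frame_feats))]
    offsets = []
    for i, dt in enumerate(t_desc):
        costs = [np.abs(dt - df).sum() for df in f_desc]
        j = int(np.argmin(costs))  # best-matching frame feature
        offsets.append(frame_feats[j] - template_feats[i])
    return np.median(np.array(offsets), axis=0)
```

The median makes the estimate robust to a few wrong matches, a common choice in feature-based registration.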
FIGURE 8 Calculation of the direction of selected features and transformation of the corresponding descriptors for matching. (A) The features obtained by FIFER, marked by red dots. (B) A raw frame with a rotation deviation and its corresponding features. The red arrows represent a two-dimensional descriptor of the feature selected in panels (A,B). (C) The histogram of magnitudes in direction bins for the template feature's descriptor. (D) The histogram of magnitudes in direction bins for the raw-frame feature's descriptor. (E) The descriptor of the selected template feature aligned to its main direction. (F) The transformed descriptor of the selected frame feature aligned to its main direction.
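The main-direction idea in panels (C,D) resembles SIFT-style orientation assignment: accumulate the magnitudes of a descriptor's difference vectors into direction bins, take the peak bin as the main direction, and rotate the descriptor so that direction maps to zero. The sketch below is one possible reading; the 36-bin resolution is an assumption borrowed from SIFT, not taken from the paper.

```python
import numpy as np

def main_direction(diffs, n_bins=36):
    """Dominant orientation of a descriptor: vector magnitudes are
    accumulated into direction bins (10 degrees each by default) and the
    centre angle of the peak bin is returned."""
    ang = np.arctan2(diffs[:, 0], diffs[:, 1])   # angle in (-pi, pi]
    mag = np.linalg.norm(diffs, axis=1)
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, weights=mag, minlength=n_bins)
    peak = int(np.argmax(hist))
    return (peak + 0.5) / n_bins * 2 * np.pi - np.pi

def align_to_main_direction(diffs, n_bins=36):
    """Rotate all difference vectors so the main direction maps to angle
    zero; descriptors of a rotated frame then become directly comparable."""
    theta = -main_direction(diffs, n_bins)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, s], [-s, c]])  # rotates arctan2(row, col) by theta
    return diffs @ rot.T
```

After alignment, the longest difference vector points within one bin width of angle zero, so two descriptors of the same feature seen under different rotations line up bin for bin.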
FIGURE 9 Application of FIFER to simulated data. (A) Template image (the first frame) of the clean simulated data. (B) Template image (the first frame) of the simulated data with added Gaussian noise (standard deviation σ = 0.10). (C) Template image (the first frame) of the simulated data with added Gaussian noise (standard deviation σ = 0.25). (D) Estimation of shifts for the simulated data at different noise levels. FIFER's error is calculated as the Euclidean (2-norm) distance between the offsets estimated by FIFER and the artificially introduced ground-truth offsets.
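The evaluation protocol of panel (D), comparing an estimated offset against an artificially introduced one via the 2-norm, works with any shift estimator. Below, a standard phase-correlation baseline (not FIFER itself) stands in as the estimator to make the protocol concrete and runnable.

```python
import numpy as np

def estimate_shift_phase_corr(template, frame):
    """Integer-shift estimate via phase correlation (a classic baseline,
    not FIFER): the peak of the inverse FFT of the normalised cross-power
    spectrum marks the translation between the two images."""
    f1, f2 = np.fft.fft2(template), np.fft.fft2(frame)
    cross = f2 * np.conj(f1)
    cross /= np.abs(cross) + 1e-12        # keep only the phase
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape),
                    dtype=float)
    shape = np.array(corr.shape, dtype=float)
    peak[peak > shape / 2] -= shape[peak > shape / 2]  # wrap to signed shifts
    return peak

def registration_error(estimated, ground_truth):
    """Simulation metric: Euclidean (2-norm) distance between estimated
    and artificially introduced offsets."""
    return np.linalg.norm(np.asarray(estimated) - np.asarray(ground_truth))
```

For a circularly shifted noise-free image the phase-correlation peak recovers the introduced offset exactly, so the error is zero; added Gaussian noise, as in panels (B,C), degrades the estimate and makes the error curve in panel (D) informative.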
FIGURE 10 The motion correction effect on single raw frames of two-photon imaging data. (A) The template images at the population level and spine level. (B) Raw frames with motion artifacts from the population level and spine level, each interleaved with the corresponding template image. (C) The corrected raw frames, each interleaved with the corresponding template image. In a mixed image, patches from the template and the single frame alternate in a checkerboard pattern marked by red grid lines, so that any two adjacent patches come from the template image and the single frame, respectively. The center patch of each mixed image comes from the corresponding template image and is marked by a yellow rectangle. The red arrows indicate the registration performance.
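The mixed images in Figure 10 can be reproduced with a simple checkerboard interleaving of template and frame tiles: after good registration, structures run continuously across tile borders, while residual motion shows up as breaks at the red grid lines. The parity convention and patch size below are assumptions for illustration (the figure fixes the center patch to come from the template).

```python
import numpy as np

def checkerboard_mix(template, frame, patch=50):
    """Interleave patch x patch tiles from the template and a single frame
    in a checkerboard pattern. Tiles where the (row, col) tile indices sum
    to an even number are taken from the template; the rest keep the frame."""
    mixed = frame.copy()
    h, w = template.shape
    for i, r0 in enumerate(range(0, h, patch)):
        for j, c0 in enumerate(range(0, w, patch)):
            if (i + j) % 2 == 0:
                mixed[r0:r0 + patch, c0:c0 + patch] = \
                    template[r0:r0 + patch, c0:c0 + patch]
    return mixed
```

This visualization needs no ground truth, which is why it is a common qualitative check for registration quality.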
FIGURE 11 The correction effect on averaged frames of two-photon imaging data. (A) The average of the raw frames at the population level and spine level. (B) The average of the motion-corrected frames at the population level and spine level.
FIGURE 12 The motion correction effect on neuronal signals. (A) Two representative neurons in the raw averaged frames; the neurons are indicated by yellow dashed circles. (B) The same two neurons in the corrected averaged frames. (C) The raw and corrected Ca2+ signals of neuron #1. (D) The raw and corrected Ca2+ signals of neuron #2. The black arrows indicate periods distorted by motion artifacts.
Comparison of fast image feature extraction and registration (FIFER) with other methods (mean ± SD) for the neuronal population imaging dataset (n = 200 frames, 600 × 600 pixels per frame).
| Method | NRMSE | PSNR | SSIM | NMI | Time (ms) |
| --- | --- | --- | --- | --- | --- |
| SIFT | 0.9922 ± 0.0641 | 19.0407 ± 0.5627 | 0.1726 ± 0.0224 | 0.0217 ± 0.0059 | 105.23 |
| ORB | 1.0202 ± 0.0566 | 18.7942 ± 0.4903 | 0.1594 ± 0.0248 | 0.0191 ± 0.0065 | 26.87 |
| AKAZE | 1.0082 ± 0.0703 | 18.9045 ± 0.6131 | 0.1740 ± 0.0181 | 0.0223 ± 0.0045 | 52.41 |
| TurboReg (Accurate) | 0.9738 ± 0.0345 | 19.1906 ± 0.3063 | 0.1799 ± 0.0124 | 0.0236 ± 0.0012 | 140.87 |
| TurboReg (Fast) | 1.0369 ± 0.0212 | 18.6414 ± 0.1759 | 0.1700 ± 0.0110 | 0.0223 ± 0.0009 | 118.15 |
| Moco | 1.0361 ± 0.0211 | 18.6484 ± 0.1749 | 0.1715 ± 0.0109 | 0.0224 ± 0.0009 | 26.92 |
| NoRMCorre (Rigid) | 0.9601 ± 0.0422 | 19.3160 ± 0.3796 | 0.1842 ± 0.0119 | 0.0240 ± 0.0014 | 61.28 |
| NoRMCorre (Non-rigid) | 0.9521 ± 0.0181 | 19.3817 ± 0.1641 | 0.1922 ± 0.0090 | 0.0246 ± 0.0009 | 210.21 |
| Real-time processing | 1.0147 ± 0.0667 | 18.8815 ± 1.2811 | 0.1755 ± 0.0591 | 0.0254 ± 0.0270 | 3.85 |
| Suite2p | 1.0087 ± 0.0254 | 18.8821 ± 0.2166 | 0.1694 ± 0.0130 | 0.0226 ± 0.0013 | 11.91 |
The calculation time is per frame. Statistical tests used the paired t-test.
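The image-quality metrics reported in these tables can be computed as follows — a NumPy sketch, not the paper's evaluation code. SSIM is omitted because it needs windowed local statistics (typically scikit-image's `structural_similarity`); for NMI, the Studholme normalisation (H(X) + H(Y)) / H(X, Y) is used here, and the paper's exact normalisation may differ.

```python
import numpy as np

def nrmse(ref, img):
    """Normalised root-mean-square error (lower is better)."""
    return np.sqrt(np.mean((ref - img) ** 2)) / np.sqrt(np.mean(ref ** 2))

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nmi(ref, img, bins=64):
    """Normalised mutual information from a joint histogram, using
    (H(X) + H(Y)) / H(X, Y): 1 for independent, 2 for identical images.
    Popular for registration because it tolerates intensity changes."""
    joint, _, _ = np.histogram2d(ref.ravel(), img.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (hx + hy) / hxy
```

Each metric compares a corrected frame against the template, so per-frame values can be averaged over a dataset to obtain the mean ± SD entries shown above.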
Comparison of FIFER with other methods (mean ± SD) for the dendritic spine imaging dataset (n = 1500 frames, 250 × 250 pixels per frame).
| Method | NRMSE | PSNR | SSIM | NMI | Time (ms) |
| --- | --- | --- | --- | --- | --- |
| SIFT | 0.9253 ± 0.0815 | 17.8256 ± 0.7608 | 0.1657 ± 0.0192 | 0.0661 ± 0.0089 | 15.79 |
| ORB | 0.9344 ± 0.0783 | 17.7370 ± 0.7262 | 0.1630 ± 0.0191 | 0.0647 ± 0.0090 | 10.92 |
| AKAZE | 0.9312 ± 0.0809 | 17.7694 ± 0.7513 | 0.1628 ± 0.0202 | 0.0668 ± 0.0077 | 7.78 |
| TurboReg (Accurate) | 0.9064 ± 0.0292 | 17.9752 ± 0.2775 | 0.1539 ± 0.0090 | 0.0719 ± 0.0027 | 109.58 |
| TurboReg (Fast) | 0.9654 ± 0.0135 | 17.4240 ± 0.1197 | 0.1391 ± 0.0045 | 0.0711 ± 0.0025 | 104.46 |
| Moco | 0.9604 ± 0.0111 | 17.4687 ± 0.1000 | 0.1411 ± 0.0043 | 0.0715 ± 0.0025 | 9.24 |
| NoRMCorre (Rigid) | 0.9325 ± 0.0189 | 17.7263 ± 0.1742 | 0.1462 ± 0.0054 | 0.0711 ± 0.0025 | 7.59 |
| NoRMCorre (Non-rigid) | 0.9301 ± 0.0147 | 17.7480 ± 0.1361 | 0.1485 ± 0.0049 | 0.0720 ± 0.0026 | 57.4 |
| Real-time processing | 0.9550 ± 0.0254 | 17.5269 ± 0.5620 | 0.1417 ± 0.0222 | 0.0718 ± 0.0096 | 0.76 |
| Suite2p | 0.9506 ± 0.0252 | 17.5606 ± 0.2262 | 0.1428 ± 0.0061 | 0.0697 ± 0.0029 | 1.90 |
The calculation time is per frame. Statistical tests used the paired t-test.
Comparison of FIFER with other methods (mean ± SD) for the neuronal population imaging dataset (the CaImAn dataset file images_N.01.01, n = 1825 frames, 512 × 512 pixels per frame).
| Method | NRMSE | PSNR | SSIM | NMI | Time (ms) |
| --- | --- | --- | --- | --- | --- |
| SIFT | 0.9611 ± 0.1576 | 25.3388 ± 1.4258 | 0.3193 ± 0.1376 | 0.0218 ± 0.0161 | 67.54 |
| ORB | 0.9523 ± 0.1563 | 25.4124 ± 1.3676 | 0.3744 ± 0.0696 | 0.0283 ± 0.0097 | 18.31 |
| AKAZE | 0.8843 ± 0.1436 | 26.0484 ± 1.2999 | 0.4022 ± 0.0433 | 0.0346 ± 0.0039 | 40.28 |
| TurboReg (Accurate) | 0.8789 ± 0.1365 | 26.0914 ± 1.2283 | 0.4093 ± 0.0367 | 0.0347 ± 0.0025 | 112.07 |
| TurboReg (Fast) | 0.8943 ± 0.1398 | 25.9418 ± 1.2370 | 0.4005 ± 0.0383 | 0.0337 ± 0.0024 | 109.20 |
| Moco | 0.8945 ± 0.1402 | 25.9403 ± 1.2400 | 0.4003 ± 0.0386 | 0.0337 ± 0.0024 | 15.00 |
| NoRMCorre (Rigid) | 0.8899 ± 0.1396 | 25.9848 ± 1.2412 | 0.4027 ± 0.0383 | 0.0340 ± 0.0024 | 39.77 |
| NoRMCorre (Non-rigid) | 0.8821 ± 0.1392 | 26.0631 ± 1.2486 | 0.4090 ± 0.0379 | 0.0346 ± 0.0024 | 264.38 |
| Real-time processing | 0.8897 ± 0.1390 | 25.9905 ± 1.2984 | 0.3976 ± 0.0401 | 0.0342 ± 0.0119 | 2.70 |
| Suite2p | 0.8881 ± 0.1400 | 26.0037 ± 1.2457 | 0.4041 ± 0.0385 | 0.0341 ± 0.0024 | 8.28 |
The calculation time is per frame. Statistical tests used the paired t-test.