| Literature DB >> 36170268 |
Matthew S Creamer1, Kevin S Chen1, Andrew M Leifer1,2, Jonathan W Pillow1,3.
Abstract
Imaging neural activity in a behaving animal presents unique challenges, in part because motion from an animal's movement creates artifacts in fluorescence intensity time-series that are difficult to distinguish from neural signals of interest. One approach to mitigating these artifacts is to image two channels simultaneously: one that captures an activity-dependent fluorophore, such as GCaMP, and another that captures an activity-independent fluorophore, such as RFP. Because the activity-independent channel contains the same motion artifacts as the activity-dependent channel, but no neural signals, the two together can be used to identify and remove the artifacts. However, existing approaches for this correction, such as taking the ratio of the two channels, do not account for channel-independent noise in the measured fluorescence. Here, we present Two-channel Motion Artifact Correction (TMAC), a method which seeks to remove artifacts by specifying a generative model of the two-channel fluorescence that incorporates motion artifact, neural activity, and noise. We use Bayesian inference to infer latent neural activity under this model, thus reducing the motion artifact present in the measured fluorescence traces. We further present a novel method for evaluating ground-truth performance of motion correction algorithms by comparing the decodability of behavior from two types of neural recordings: a recording that had both an activity-dependent fluorophore and an activity-independent fluorophore (GCaMP and RFP), and a recording where both fluorophores were activity-independent (GFP and RFP). A successful motion correction method should decode behavior from the first type of recording, but not the second. We use this metric to systematically compare five models for removing motion artifacts from fluorescent time traces.
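The idea of treating the two channels generatively can be illustrated with a toy linear-Gaussian sketch. This is NOT the actual TMAC model from the paper; it simply assumes each channel is a baseline plus a shared motion term plus independent noise, with neural activity appearing only in the GCaMP channel, and compares the naive ratio correction against a posterior-mean estimate of activity. All variances and the baseline here are made-up illustration values.

```python
import numpy as np

# Toy two-channel model (a sketch, not the paper's TMAC model):
#   gcamp = baseline + activity + motion + noise
#   rfp   = baseline + motion + noise
# with independent zero-mean Gaussian priors on each latent term.
rng = np.random.default_rng(0)
T = 2000
var_a, var_m, var_n = 1.0, 4.0, 0.25   # assumed prior variances
baseline = 20.0

activity = np.sqrt(var_a) * rng.standard_normal(T)  # latent neural signal
motion = np.sqrt(var_m) * rng.standard_normal(T)    # shared motion artifact
gcamp = baseline + activity + motion + np.sqrt(var_n) * rng.standard_normal(T)
rfp = baseline + motion + np.sqrt(var_n) * rng.standard_normal(T)

# Naive correction: ratio of the two channels (ignores channel noise).
ratio = gcamp / rfp

# Bayesian alternative: posterior mean of activity under the linear-Gaussian
# model. [gcamp, rfp] and activity are jointly Gaussian, so
#   E[a | g, r] = Cov(a, y) @ inv(Cov(y)) @ (y - mean(y)).
cov_y = np.array([[var_a + var_m + var_n, var_m],
                  [var_m, var_m + var_n]])
cov_ay = np.array([var_a, 0.0])
w = cov_ay @ np.linalg.inv(cov_y)
a_hat = w[0] * (gcamp - baseline) + w[1] * (rfp - baseline)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"corr(raw GCaMP, activity):     {corr(gcamp, activity):.2f}")
print(f"corr(ratio, activity):         {corr(ratio, activity):.2f}")
print(f"corr(posterior mean, activity):{corr(a_hat, activity):.2f}")
```

In this toy regime the posterior mean recovers the latent activity far better than the raw GCaMP trace, since the RFP channel lets it explain away the shared motion term while accounting for per-channel noise.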
Using TMAC-inferred activity, we decode locomotion from a GCaMP-expressing animal 20x more accurately on average than from control, outperforming all other motion correction methods tested, the best of which were ~8x more accurate than control.
Year: 2022 PMID: 36170268 PMCID: PMC9518861 DOI: 10.1371/journal.pcbi.1010421
Source DB: PubMed Journal: PLoS Comput Biol ISSN: 1553-734X Impact factor: 4.779
Fig 3. TMAC reduces decodable motion artifacts in experimental data.
A) Top, animal body curvature over time. Middle, GCaMP and RFP fluorescence from a neuron that TMAC estimates to have a high signal-to-noise ratio, recorded from a behaving worm. Bottom, GCaMP and RFP fluorescence from a different neuron in that same recording that TMAC estimates to have large motion artifacts. B) Time trace of animal curvature and predicted behavior, decoded from activity inferred by TMAC in a GCaMP worm. Gray shaded regions were used to train the decoder; the white region was held out and used to evaluate decoding performance. C) Ratio of decoding accuracy (ρ²) when decoding GCaMP divided by the median accuracy for a GFP worm across different models (). D) Histogram over all neurons of the squared correlation between RFP and activity inferred by TMAC from a GFP worm. RFP vs GFP data the same as in . E) Same as D but in a GCaMP worm.