| Literature DB >> 24478623 |
Sylvain Takerkart, Philippe Katz, Flavien Garcia, Sébastien Roux, Alexandre Reynaud, Frédéric Chavane.
Abstract
Optical imaging is the only technique that allows recording the activity of a neuronal population at the mesoscopic scale. A large region of the cortex (10-20 mm in diameter) is imaged directly with a CCD camera while the animal performs a behavioral task, producing spatio-temporal data with an unprecedented combination of spatial and temporal resolutions (tens of micrometers and milliseconds, respectively). However, researchers who have developed and used this technique have relied on heterogeneous software and methods to analyze their data. In this paper, we introduce Vobi One, a software package entirely dedicated to the processing of functional optical imaging data. It has been designed to facilitate the processing of data and the comparison of different analysis methods. Moreover, it should help bring good analysis practices to the community, because it relies on a database and a standard format for data handling, and it provides tools for producing reproducible research. Vobi One is an extension of the BrainVISA software platform, is written entirely in the Python programming language, and is open source and freely available for download at https://trac.int.univ-amu.fr/vobi_one.
Keywords: linear model; neuroscience; optical imaging; python; signal processing
Year: 2014 PMID: 24478623 PMCID: PMC3901006 DOI: 10.3389/fnins.2014.00002
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Figure 1. Illustration of the use of the BrainVISA API with one of the Vobi One processes. The source code (from which comments have been removed to save space) follows a simple format with four sections. It produces the GUI displayed on the right. Note that the core section contains only a call to a single function, a generic coding principle we adopted in Vobi One.
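The four-section layout described above can be sketched as follows. This is a hypothetical, non-runnable fragment assuming the standard BrainVISA process file structure (module-level metadata, a `Signature` declaring inputs/outputs, an `initialization` function, and an `execution` function); the process name, parameters, and the `run_analysis` helper are illustrative assumptions, not Vobi One's actual code.

```python
# Hypothetical BrainVISA process fragment; requires a BrainVISA
# installation to run. Names below (Signature, ReadDiskItem,
# WriteDiskItem, initialization, execution) follow the standard
# BrainVISA process API; everything else is an illustrative assumption.
from brainvisa.processes import Signature, ReadDiskItem, WriteDiskItem

# Section 1: metadata shown in the BrainVISA interface.
name = 'Example Vobi One Process'     # hypothetical process name
userLevel = 0

# Section 2: typed input and output parameters; BrainVISA generates
# the GUI from this declaration.
signature = Signature(
    'input_data', ReadDiskItem('Raw Data', 'Text file'),
    'output_data', WriteDiskItem('Text file', 'Text file'),
)

# Section 3: default values and parameter links.
def initialization(self):
    self.setOptional('output_data')

# Section 4: the core, reduced to a single function call.
def execution(self, context):
    run_analysis(self.input_data, self.output_data)  # hypothetical helper
```

Keeping the `execution` body to one call, as the caption notes, keeps the GUI wrapper thin and lets the underlying function be tested and scripted outside BrainVISA.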
Figure 2. The main BrainVISA interface. Vobi One is listed and selected among the toolboxes in the left panel. The middle panel shows all the available processes. The right panel displays the associated documentation.
Figure 3. The two main workflows available in Vobi One. In green: session-level processes; in blue: trial-level processes, repeated on each individual trial.
Figure 4. Left: the model of the neural response shape with its parameters. Right: several shape prototypes that can be obtained with the model. The user defines a prototype and then chooses intervals for the parameters that allow the shape to vary around the prototype.
Figure 5. Examples of figures produced by the different viewers and postprocessing routines on the VSDI dataset provided with Vobi One. (A) Illustration of the components (noise, neural response, etc.) estimated by our linear model. (B) Denoised timecourses estimated with our linear model on all trials of a session. (C) Denoised timecourses obtained with four different contrasts of the visual stimuli. (D) Spatial map of the heartbeat contribution, with two regions of interest. (E) Denoised timecourses averaged in these two regions. (F) Comparison of the denoised timecourses obtained with two analysis methods.
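The figure captions describe estimating components (noise, heartbeat, neural response) of each trial's timecourse with a linear model and keeping the denoised neural part. A minimal generic sketch of that idea, using NumPy least squares, is shown below; the specific regressors (constant, linear drift, heartbeat sinusoid at an assumed rate, and a crude ramp-and-plateau response shape) are illustrative assumptions and not Vobi One's actual model.

```python
# Generic linear-model denoising sketch for a 1D optical imaging
# timecourse: fit a small design matrix by least squares and return
# only the neural-response component. All regressors here are
# illustrative assumptions, not Vobi One's model.
import numpy as np

def denoise_trial(signal, fs, heart_rate_hz=2.0, resp_onset_s=0.5):
    """Return the estimated neural component of `signal` (1D array,
    sampled at `fs` Hz), removing drift and heartbeat regressors."""
    n = signal.size
    t = np.arange(n) / fs
    # Nuisance regressors: constant offset, linear drift, heartbeat
    # oscillation modeled as a sine/cosine pair at heart_rate_hz.
    drift = np.column_stack([np.ones(n), t])
    heart = np.column_stack([np.sin(2 * np.pi * heart_rate_hz * t),
                             np.cos(2 * np.pi * heart_rate_hz * t)])
    # Crude neural-response regressor: ramp to a plateau after onset.
    resp = np.clip((t - resp_onset_s) / 0.2, 0.0, 1.0)
    X = np.column_stack([drift, heart, resp])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    # Denoised timecourse = response regressor times its fitted weight.
    return resp * beta[-1]
```

Fitting all components jointly, rather than filtering them out one by one, is what lets the per-component contributions (as in panels A and D) be mapped and inspected separately.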