| Literature DB >> 28912685 |
Meng Li, Joe Z. Tsien.
Abstract
A major stumbling block to cracking the real-time neural code is neuronal variability - neurons discharge spikes with enormous variability not only across trials within the same experiments but also in resting states. Such variability is widely regarded as a noise which is often deliberately averaged out during data analyses. In contrast to such a dogma, we put forth the Neural Self-Information Theory that neural coding is operated based on the self-information principle under which variability in the time durations of inter-spike-intervals (ISI), or neuronal silence durations, is self-tagged with discrete information. As the self-information processor, each ISI carries a certain amount of information based on its variability-probability distribution; higher-probability ISIs which reflect the balanced excitation-inhibition ground state convey minimal information, whereas lower-probability ISIs which signify rare-occurrence surprisals in the form of extremely transient or prolonged silence carry most information. These variable silence durations are naturally coupled with intracellular biochemical cascades, energy equilibrium and dynamic regulation of protein and gene expression levels. As such, this silence variability-based self-information code is completely intrinsic to the neurons themselves, with no need for outside observers to set any reference point as typically used in the rate code, population code and temporal code models. Moreover, temporally coordinated ISI surprisals across cell population can inherently give rise to robust real-time cell-assembly codes which can be readily sensed by the downstream neural clique assemblies. One immediate utility of this self-information code is a general decoding strategy to uncover a variety of cell-assembly patterns underlying external and internal categorical or continuous variables in an unbiased manner.Entities:
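The abstract's central quantity is the surprisal of an ISI, I = -log2(p), where p is the probability of that silence duration. A minimal sketch of the contrast the abstract draws (the probability values and function name here are illustrative assumptions, not figures from the paper):

```python
import math

def surprisal_bits(p):
    """Self-information of an event with probability p, in bits.
    Frequent, ground-state ISIs carry little information; rare,
    extremely transient or prolonged silences carry much more."""
    return -math.log2(p)

common = surprisal_bits(0.50)  # a typical ISI near the excitation-inhibition ground state
rare = surprisal_bits(0.01)    # a rare-occurrence silence duration
# common = 1.0 bit; rare ≈ 6.64 bits
```

The asymmetry is the point: halving an ISI's probability adds only one bit, so most spikes convey little, while the rare extremes dominate the information budget.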
Keywords: cell assembly; code of silence; neural code; neural computing; neural spike variability; self-information; surprisal code; variability-surprisal
Year: 2017 PMID: 28912685 PMCID: PMC5582596 DOI: 10.3389/fncel.2017.00236
Source DB: PubMed Journal: Front Cell Neurosci ISSN: 1662-5102 Impact factor: 5.505
Figure 1. Neuronal variability, the underlying logic at synaptic and cell-assembly levels, and the traditional neural coding models. (A) Neurons discharge spikes all the time with enormous variability. The spike trains shown here are seven units simultaneously recorded with tetrodes from mouse prefrontal cortex during the animal's quiet-awake period. (B) A cortical neuron may contain tens of thousands of synapses, which contribute to changes in excitatory postsynaptic potential (EPSP) and lead to the generation of an action potential, or spike, at the soma. The stochastic nature of synaptic patterns produces highly variable spike trains in both the resting “control” condition and stimulus-presentation experiments. (C) The Power-of-Two-Based Cell-Assembly Wiring Logic as the brain's basic functional computational motif (FCM). A schematic illustration of a power-of-two connectivity motif consisting of 15 distinct neural cliques (N1-15), covering all possible connectivity patterns for processing 4 distinct inputs (i = 4). (D) This motif gives rise to a specific-to-general feature-extraction assembly. (E) The rate code model emphasizes the number of spikes within a defined time window while ignoring the temporal structure of spike patterns; five examples with the same firing rate (5 Hz) but completely different spike patterns are shown for illustration. (F) The time-to-first-spike model of the temporal code holds that key information is encoded in the relative arrival time of the first spike after stimulus onset. (G) The phase-coupling model focuses on the temporal relationship between spike changes and local field potential (LFP) oscillation phases. (H) The synchrony code proposes that information coding and binding are achieved by certain “uniquely meaningful” spikes that are transiently synchronized among different cells. In all cases, the rate code, population code, and temporal code models require a reference point (i.e., the time zero of stimulation, an oscillation phase, etc.) for data analyses; in this sense, these approaches are biased methods. Panels (E–H) are artistic illustrations for better visualizing the four popular coding models.
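The power-of-two wiring logic in Figure 1C is combinatorial: for i distinct inputs, every non-empty subset of inputs defines one neural clique, giving 2^i − 1 cliques (15 for i = 4). A short sketch of that enumeration (the function name is illustrative, not from the paper):

```python
from itertools import combinations

def enumerate_cliques(inputs):
    """List every non-empty subset of the inputs; each subset corresponds
    to one neural clique in the power-of-two connectivity motif."""
    return [c for k in range(1, len(inputs) + 1)
            for c in combinations(inputs, k)]

cliques = enumerate_cliques(["i1", "i2", "i3", "i4"])
# 2**4 - 1 = 15 cliques, matching N1-15 in Figure 1C
```

The subsets of size 1 are the "specific" cliques and the full set is the most "general" one, which is the specific-to-general feature extraction described in panel (D).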
Figure 2. An illustration of how the proposed Neural Self-Information Theory can be used to decode cell-assembly patterns from neuronal spike trains. The Self-Information Code is proposed to explain how the real-time neural code is generated via spike-timing variability and how cell-assembly patterns can be identified in an unbiased manner. The strategy uncovers cell assemblies from spike-train datasets in four steps: each individual neuron's spike train is converted into a variability distribution of ISIs, which is then converted into real-time self-information values; temporally coordinated self-information surprisal patterns across the cell population are then detected in an unbiased manner by pattern-classification methods such as blind-source analyses. The unique feature of this self-information code is that the coding principle is completely intrinsic to the neurons themselves, with no need for any reference point set by outside observers.
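The per-neuron part of the Figure 2 pipeline can be hedged into a step-by-step sketch: spike train → ISIs → empirical ISI distribution → real-time surprisal trace. For the final population step the paper names blind-source analyses; the z-score threshold below is only a stand-in for illustration, and all function names, the bin count, and the threshold are assumptions rather than the authors' implementation:

```python
import numpy as np

def spikes_to_isis(spike_times):
    """Step 1: convert a spike-time series into inter-spike intervals
    (the neuronal silence durations)."""
    return np.diff(np.sort(np.asarray(spike_times, dtype=float)))

def isi_surprisal(isis, bins=20):
    """Steps 2-3: estimate the ISI variability distribution with a
    histogram, then map each ISI to its self-information, -log2(p)."""
    counts, edges = np.histogram(isis, bins=bins)
    probs = counts / counts.sum()
    # digitize assigns each ISI to its histogram bin index (0..bins-1)
    idx = np.clip(np.digitize(isis, edges[1:-1]), 0, bins - 1)
    return -np.log2(probs[idx])

def surprisal_events(trace, z=2.0):
    """Step 4 (simplified stand-in): flag ISIs whose surprisal exceeds the
    neuron's own mean by z standard deviations. The paper instead feeds
    such traces into blind-source pattern classification across cells."""
    return trace > trace.mean() + z * trace.std()

# Toy neuron: regular ~20 ms firing with one prolonged 300 ms silence
times_ms = list(range(0, 181, 20)) + [480]
isis = spikes_to_isis(times_ms)
trace = isi_surprisal(isis)
events = surprisal_events(trace)  # only the rare long silence is flagged
```

Because each neuron's threshold comes from its own ISI distribution, no stimulus-onset time or oscillation phase is needed, which is the "no outside reference point" property the caption emphasizes.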