Literature DB >> 35606151

Multiscale and Extended Retrieval of Associative Memory Structures in a Cortical Model of Local-Global Inhibition Balance.

Thomas F Burns1, Tatsuya Haga 芳賀 達也2, Tomoki Fukai 深井朋樹2.   

Abstract

Inhibitory neurons take on many forms and functions. How this diversity contributes to memory function is not completely known. Previous formal studies indicate inhibition differentiated by local and global connectivity in associative memory networks functions to rescale the level of retrieval of excitatory assemblies. However, such studies lack biological details, such as a distinction between types of neurons (excitatory and inhibitory), and rely on unrealistic connection schemas and nonsparse assemblies. In this study, we present a rate-based cortical model where neurons are distinguished (as excitatory, local inhibitory, or global inhibitory), connected more realistically, and where memory items correspond to sparse excitatory assemblies. We use this model to study how local-global inhibition balance can alter memory retrieval in associative memory structures, including naturalistic and artificial structures. Experimental studies have reported inhibitory neurons and their subtypes uniquely respond to specific stimuli and can form sophisticated, joint excitatory-inhibitory assemblies. Our model suggests such joint assemblies, as well as a distribution and rebalancing of overall inhibition between two inhibitory subpopulations, one connected to excitatory assemblies locally and the other connected globally, can quadruple the range of retrieval across related memories. We identify a possible functional role for local-global inhibitory balance to, in the context of choice or preference of relationships, permit and maintain a broader range of memory items when local inhibition is dominant and conversely consolidate and strengthen a smaller range of memory items when global inhibition is dominant. This model, while still theoretical, therefore highlights a potentially biologically-plausible and behaviorally-useful function of inhibitory diversity in memory.
Copyright © 2022 Burns et al.


Year:  2022        PMID: 35606151      PMCID: PMC9186110          DOI: 10.1523/ENEURO.0023-22.2022

Source DB:  PubMed          Journal:  eNeuro        ISSN: 2373-2822


Significance Statement

Broadly, there are two types of neurons: excitatory and inhibitory. Inhibitory neurons are amazingly diverse compared with excitatory neurons. Why? Using a computational model with realistically-sized groups of excitatory neurons (representing memories) associated together in a network of memories, we highlight a potentially biologically-plausible and behaviorally-useful function of inhibitory neuron diversity in memory. Two findings in particular stand out: (1) inhibitory diversity can quadruple the range of memory retrieval; and (2) balancing the strength of different inhibitory neurons’ influence on excitatory neurons can dramatically change how the network of memories becomes activated, balancing and extracting both geometric and topological information about the network.

Introduction

The mechanisms by which our brains flexibly perform the complex tasks of learning and memory are not completely understood. Hebbian learning (Hebb, 1949), the relative increase in synaptic strength between neurons as a result of shared, causal activity, seems important. Hebb postulated that memories are formed in the brain by assemblies of highly-interconnected neurons (Hebb, 1949). Evidence for this “neuron assembly” hypothesis was found in the hippocampus, where groups of neurons become synchronously activated in response to an animal’s spatial location, indicating a neural correspondence to and potential memory of the location (Harris et al., 2003). These memories are often mutually related: in physical or behavioral space for the case of navigation (Tolman, 1948), in reward space for the case of rewarded learning tasks (Dusek and Eichenbaum, 1997), in linguistic space for the case of language comprehension (Goldstein et al., 2021), and theoretically in any arbitrary semantic space for generalized graph-based reasoning (e.g., family trees; Whittington et al., 2020). How can the structure of these mutual relations be identified dynamically in cortical networks? Inhibitory mechanisms may hold an answer. Here, we computationally explore the possible role of inhibitory circuits in extracting graph-based relationships in the space of behaviorally relevant information. The majority of experimental and computational work treating assemblies as representations of memory items has focused on the role of excitatory neurons. However, emerging evidence suggests inhibitory neurons play a nontrivial role in cortical networks. Throughout the brain, inhibitory neurons have classically been thought to coarsely keep excitation in check with a broad, nonspecific blanket of inhibition (Amit et al., 1994; Brunel, 2000). 
But more recent work has shown inhibitory neurons are tuned to specific external stimuli (Okun and Lampl, 2008; Xue et al., 2014), have specific associations with behavior (Dudok et al., 2021), have a large diversity of forms and functions within and across brain areas (Gouwens et al., 2020; Burns and Rajan, 2021), and form inhibitory assemblies (Zhang et al., 2017), often jointly with excitatory subnetworks (Otsuka and Kawaguchi, 2009; Koolschijn et al., 2019). A hallmark of many neuropathologies is inhibitory dysfunction (Amieva et al., 2004; Baroncelli et al., 2011; Burns and Rajan, 2022; Yao et al., 2022). If specific inhibitory dysfunction alone is sufficient for explaining these pathologies, then we could expect subtle inhibitory changes to cause dramatic changes in global function in complex tasks like those involving learning and memory. A greater understanding of the neurophysiological mechanisms underlying these changes may help us target treatments for such disorders and provide fundamental insight into the computational roles of inhibitory neurons in such circuits. Previous modeling work in a formal model with binary neurons (Haga and Fukai, 2019) has shown how anti-Hebbian learning (i.e., involving inhibitory synapses) in an associative memory model was able to extend the span of association between mutually-related memory items organized in a simple ring structure, compared with a regular Hebbian learning rule (i.e., not involving inhibitory synapses). Later work extended this formal model to arbitrary graph structures (Haga and Fukai, 2021). These results suggest inhibition may play a nontrivial role in relational memory systems. However, these models lacked biological features, most prominently a lack of distinction between excitatory and inhibitory neuron populations, breaking Dale’s Law. Dale’s Law (sometimes also called Dale’s Principle or Dale’s Hypothesis), first appearing in Eccles et al. 
(1954), is the view that a neuron’s terminals do not transmit multiple, differently-acting chemical or electrical signals to postsynaptic targets; e.g., an excitatory neuron has the exclusive effect of exciting postsynaptic targets and never inhibiting them. Another limitation of prior work is that the excitatory assemblies were not nearly as sparse as those seen in biology, and the neurons took on binary states. Nevertheless, the results indicate global functional changes can result from subtle inhibitory changes (Ferguson et al., 2013; Rich et al., 2017). This study proposes a more realistic connection scheme of distinct excitatory and inhibitory neurons to embed sparse cell assemblies which represent memory items mutually linked through arbitrary graph structures. Formulated in this way, the model allows us to confirm the previous suggestion that a balance between local inhibition and global inhibition on cell assemblies determines the scale and extent of memories retrieved in a neural network. We show this for various naturalistic and artificial associative memory structures, including a potentially behaviorally-useful function of maintaining a choice distribution at a juncture or decision point in physical or memory space. We find a balance between local and global inhibition allows control over the range of recall within arbitrary graph structures, as well as graph clustering effects which may be useful in navigation and memory tasks.

Materials and Methods

Model

In order to embed memories in the network, we generate binary patterns as vectors of length N_E, the number of excitatory neurons. The weight of connections between any pair of excitatory neurons i and j is then defined using these patterns. First, we create P random binary patterns (of 0s and 1s) of length N_E, with probabilities 1 − a and a for 0 and 1, respectively, where we call a the “sparseness” parameter of the memory patterns. This means neurons can belong to one or more memory patterns, and a neuron can be expected on average to belong to aP memory patterns. Memories are then embedded using a modified extended association rule (Griniasty et al., 1993; Amit et al., 1994) designed to allow association between memory items in an arbitrary graph structure G, where vertices are the memory patterns and edges represent an association of two memory patterns. Specifically, the weight between two excitatory neurons is increased when they belong to the same memory pattern or to memory patterns which are neighboring (adjacent to, in the graph-theoretic sense) one another in the associative memory structure (an example of the memory patterns and their neighbors is illustrated in Fig. 2). Although all pairwise weights are defined, not all are non-zero; in fact, many can be zero, because Equation 1 defines increases in weight only when those units are assigned to the same memory pattern or neighboring patterns. We could interpret this functionally as: neurons belonging to the same or neighboring patterns are connected with probability 1, and with probability 0 (i.e., not connected) to all other excitatory neurons. An example of the functional consequence of this connectivity can be seen in Extended Data Figure 1-1.
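As a concrete illustration of the pattern-generation and embedding steps above, here is a minimal Python sketch (the original model was implemented in Julia; the function names, and the final binarization into connected/not-connected weights, are our illustrative reading of the rule described here, not the authors' code):

```python
import numpy as np

def make_patterns(n_exc, n_patterns, a, rng):
    """Random binary memory patterns: each neuron joins each
    pattern independently with probability a (the sparseness)."""
    return (rng.random((n_patterns, n_exc)) < a).astype(float)

def embed_weights(patterns, graph_adj):
    """Extended-association-style rule: strengthen W[i, j] when i and j
    share a pattern, or belong to patterns that are neighbors in the
    associative memory structure (graph_adj maps pattern -> neighbors)."""
    n_patterns, n_exc = patterns.shape
    W = np.zeros((n_exc, n_exc))
    for mu in range(n_patterns):
        W += np.outer(patterns[mu], patterns[mu])   # same-pattern pairs
        for k in graph_adj[mu]:                     # neighboring-pattern pairs
            W += np.outer(patterns[mu], patterns[k])
    np.fill_diagonal(W, 0.0)                        # no self-connections
    return (W > 0).astype(float)                    # connected (1) or not (0)
```

The binarization in the last line follows the text's functional interpretation: same- or neighboring-pattern neurons are connected with probability 1, all others with probability 0.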
Figure 2.

A, Illustration of G as a 1D chain. Arrows indicate the initially stimulated memory and the set of neighboring memory patterns which are adjacent to it in G. Shading of vertices indicates the strength of activity in each excitatory assembly (darker is more active). B, Example trials showing the correlation of approximate steady-state activities of excitatory neurons with neighboring memories in a 1D chain associative memory structure. Gray lines are single trials (n = 100) and black lines are the mean of all trials. Panels show that increasing the balance toward a local-inhibition-dominant network configuration approximately doubles the initial range of retrieval. C, Example trials as in B, with panels showing increasing sizes of networks. This indicates a strong finite field effect which appears in the local-inhibition-dominant state. D, Scatterplot showing that the range-of-retrieval measure increases with the local-global balance. Gray dots are single trials (n = 5 per value) and the black line follows the mean of trials. Trials at the most local-dominant values were completed with an enlarged network, and trials at all other values were completed with the regular network size. Extended Data Figure 2-1 shows the stability of single units in these simulations of up to 5 s.

An example of spiking rates of all units in the stimulated pattern, its first neighbor (the pattern adjacent to the stimulated pattern), the neighbor’s neighbor (second neighbor), the neighbor’s neighbor’s neighbor (third neighbor), and all other patterns, from a representative simulation. Download Figure 1-1, EPS file. Two populations of inhibitory neurons are also modelled: one with global connectivity (uniform connection probabilities, as indicated in Fig. 1) and another with local connectivity, which is specific to each memory pattern, and in which only a subset of local inhibitory neurons participates in each pattern. Unless stated otherwise, the population sizes and sparseness are set such that each pattern consists of a joint assembly of 40 excitatory neurons and 10 local inhibitory neurons. A general schematic of the model from the perspective of a single memory pattern is shown in Figure 1.
Figure 1.

A, General schematic of the model from the perspective of a single memory pattern (Eμ) and its connections to its respective local inhibitory population (LI), neighbors (Ek), and the global inhibitory population (GI). Connection probabilities are indicated as values of P. To retrieve a pattern, excitation is given directly to a single pattern. Gaussian noise is also applied independently to all excitatory neurons. Key: striped/shaded arrows and circles indicate excitatory connections and populations, respectively, and unshaded arrows and circles indicate inhibitory connections and populations, respectively. N.B., although the network consists of distinct memory patterns (and this distinction is necessary for the purposes of creating different associative memory structures), neurons can belong to any or all patterns (with independent probability a of belonging to any single memory pattern). Extended Data Figure 1-1 shows an example of the resultant activities over time for the stimulated pattern, a neighbor, and other patterns. B, Input-output functions for the excitatory and inhibitory neurons (from Amit et al., 1994). For the excitatory input-output function, inputs above a threshold are mapped to the maximum rate, and the inhibitory input-output function continues linearly with the same slope above its threshold.

Neurons are modelled as proportions of their maximum firing rates, based on an established method (Amit et al., 1994); the following completely describes our implementation, including modifications, so readers need not be familiar with that prior work. 
At each timestep, currents are calculated for each excitatory neuron, global inhibitory neuron, and local inhibitory neuron, and then converted into proportions of their maximum firing rates. These update equations involve the balanced connection weights between neurons, the time decay constants, the local-global inhibition balance, and the connection probabilities from excitatory to global inhibitory neurons and from global inhibitory to excitatory neurons. The 〈f〉 term is the sum of expected firing rates based on the average degree of G, e.g., when G is a 1D chain (by “1D chain,” we mean a set of vertices wherein each vertex is connected to exactly two other vertices in the set, such that they form a chain-link structure, as illustrated in Fig. 2). The 〈f〉 term therefore acts to normalize the excitatory-to-excitatory weights and does not affect the probability of neurons belonging to memory patterns. External input to the network is the drive given to excitatory neurons in the pattern we wish to retrieve during the stimulation window, plus small Gaussian noise (independently drawn at every step, for every excitatory neuron). The input-output functions for the excitatory and inhibitory neurons are shown in Figure 1 and are taken from a previous study (Amit et al., 1994). The network’s forward dynamics (governed by Eqs. 2–7) are solved using the Euler method with step sizes of 0.1 ms. Each panel of Extended Data Figure 2-1 shows the spiking rates over 5 s for 100 random excitatory neurons drawn from separate, unique random-seed simulations where G was a 1D chain and the regular protocol of stimulating one memory pattern for the first 80 ms was conducted. The six panels each used different values of the local-global balance; however, because only a small random sample is drawn from the entire excitatory population in each case, no systematic differences are observable. Two aspects are common to all panels, however: (1) within the first few hundred milliseconds, all neurons reach and then maintain an approximate steady-state level of activity for the remainder of the simulation (despite jittering caused by external random noise); and (2) neurons can settle on many different mean spiking rates, and (proportionally) many remain quiet or have very low levels of spiking activity. Download Figure 2-1, EPS file. The excitatory-to-excitatory weights are considered balanced by the normalization above. We then balance the inhibitory weights based on the network structure. Specifically, we balance the inhibitory-to-excitatory and excitatory-to-inhibitory connections by calculating the sum of each excitatory neuron’s presynaptic excitatory input and the proportion of this sum relative to the mean sum over all excitatory neurons. This proportion becomes the connection weight, which therefore has a mean of 1. In effect, this means excitatory neurons which receive stronger recurrent excitation than the mean excitatory neuron receive proportionally stronger local and global inhibition. Theoretically, this can be interpreted as a form of homeostatic normalization for the purpose of excitatory-inhibitory balance. An associative memory structure G, with vertices (memory patterns) and edges (memory associations), is chosen and the model is instantiated according to the above procedure. We then choose a single pattern to receive external input to all of its excitatory neurons during the stimulation window (the first 80 ms), after which the network is left to settle into an approximate steady state and stopped at a fixed time for analysis (we show representative examples in Extended Data Fig. 2-1 of simulations up to 5 s to demonstrate the stability of these approximate steady states). The main variable of manipulation was the local-global inhibition balance: at one extreme, only global inhibition is active; at the other, only local inhibition is active; and at the midpoint there is an equal contribution of both global and local inhibition in the network.
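The update scheme described above can be sketched as a single Euler step of a simplified rate network. This is not the paper's exact Eqs. 2–7 (and the original code is in Julia); every name and the specific form of the inhibitory pooling below are assumptions, but the sketch shows how one balance parameter can mix a global and a local inhibitory channel:

```python
import numpy as np

def euler_step(r_e, r_gi, r_li, W_ee, w_gi, W_li, balance,
               I_ext, noise_sd, tau_e, tau_i, dt, f_e, f_i, rng):
    """One Euler step of a simplified rate network in the spirit of the
    model: r_e, r_gi, r_li are excitatory, global-inhibitory, and
    local-inhibitory rates; `balance` mixes the two inhibitory channels
    (0 = global only, 1 = local only). All names are illustrative."""
    # Excitatory current: recurrent excitation minus mixed inhibition,
    # plus external drive and independent Gaussian noise.
    I_e = (W_ee @ r_e
           - (1.0 - balance) * w_gi * r_gi
           - balance * (W_li @ r_li)
           + I_ext
           + rng.normal(0.0, noise_sd, size=r_e.shape))
    I_gi = r_e.mean()          # global inhibition pools all excitatory activity
    I_li = W_li.T @ r_e        # each local population pools its own assembly
    # Leaky integration toward the transfer-function outputs.
    r_e = r_e + dt / tau_e * (-r_e + f_e(I_e))
    r_gi = r_gi + dt / tau_i * (-r_gi + f_i(I_gi))
    r_li = r_li + dt / tau_i * (-r_li + f_i(I_li))
    return r_e, r_gi, r_li
```

With a saturating excitatory transfer function such as `f_e = lambda x: np.clip(x, 0.0, 1.0)`, each rate is a leaky average of the transfer-function output and so remains a proportion of the maximum firing rate, as in the text.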

Analysis

We noted that changes to the local-global inhibition balance systematically changed the number of memory patterns in G which became activated during the simulated memory retrieval phase (from when the external stimulation ended to when the simulation ended), despite no change to the excitatory weights or structure of G. We refer to this phenomenon throughout this and following sections as an “extension” in the “range of retrieval” of the memory patterns. To quantify this extension in the range of retrieval given by changes in the balance, we tested G as a 1D chain. We stimulated each pattern and recorded the excitatory firing rates over a final time window. Averaging over the timesteps in this window, we calculate the mean and variance of the final firing rates for each memory, and from these the covariance and hence the correlation between each pair of memories. We then calculate the mean correlation between pairs of memories at each shortest-path distance d from each other. Finally, we quantify the range of retrieval using the following algorithm: (1) calculate the mean correlation at every distance d; (2) the range of retrieval is the first value of d for which the next several memory patterns all have mean correlation below a fixed threshold; if no such d is found, the range is set to the total number of patterns. Intuitively, this algorithm estimates the distance in G from the initially stimulated memory pattern to the farthest sufficiently-active memory pattern. We observed how the activity of the excitatory population spread through the associative memory structure for different values of the balance and across time. We chose to visualize this spread in three classical graphs, Zachary’s karate club graph (Zachary, 1977), the K_5-3-chain (Schapiro et al., 2013), and the Tutte graph (Tutte, 1946), and one constructed graph representing a multi-room spatial environment which we call the multiroom graph. The karate club graph (Zachary, 1977) is a classical graph where each vertex represents a karate practitioner and edges connect individuals who interacted with each other outside of their karate training (Fig. 3, second row). 
The K_5-3-chain (Schapiro et al., 2013) is a set of three almost fully-connected graphs on five vertices, where one edge between two vertices in each of the three almost fully-connected graphs is cut and those vertices are instead connected to another almost fully-connected graph, forming a single, connected graph (Fig. 3, first row). The Tutte graph (Tutte, 1946; Fig. 3, third row) has some notable graph-theoretic properties: (1) each vertex has exactly three neighboring vertices; (2) unlike many other graphs with property (1), there exists no Hamiltonian cycle in the Tutte graph, i.e., there is no path through the graph which visits each vertex exactly once, never uses an edge more than once, and returns to the vertex it started at; and (3) there is a central vertex which connects to three separate “rooms” (clusters of vertices), despite there being a strong global symmetry in the number of each vertex’s neighbors [property (1)] alongside a subtle asymmetry in the lack of a Hamiltonian cycle [property (2)]. We designed the multiroom graph to represent a spatial environment with four equally-sized rooms, each connected to two other rooms by centrally-located “doorways” on two sides of each room (Fig. 3, fourth row). These graphs were chosen for their complexity, relation to or derivation from real-world analogues, and well-known graph-theoretic features.
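The range-of-retrieval algorithm above can be sketched in a few lines. The threshold and window values here are placeholders rather than the paper's, and the input is assumed to already be the mean correlation at each shortest-path distance:

```python
import numpy as np

def range_of_retrieval(mean_corr_by_distance, theta=0.5, window=3):
    """Estimate the range of retrieval: the first shortest-path distance d
    at which the next `window` memories all have mean correlation below
    `theta` with the stimulated memory. If none is found, return the
    maximal distance (i.e., the whole chain counts as retrieved)."""
    C = np.asarray(mean_corr_by_distance)
    for d in range(len(C)):
        if np.all(C[d:d + window] < theta):
            return d
    return len(C)
```

For example, with correlations decaying along a 1D chain, the estimate is the distance at which activity has clearly fallen away from the stimulated pattern:

```python
C = [1.0, 0.9, 0.6, 0.2, 0.1, 0.05]
range_of_retrieval(C, theta=0.5, window=3)  # -> 3
```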
Figure 3.

Each row of subplots in this figure corresponds to data from four different associative memory structures (graphs); from top to bottom: K_5-3-chain, karate club graph, Tutte graph, and multiroom graph. A, Example trials in the associative memory structures at two values of the local-global inhibition balance. Vertices are shaded according to the sum of their neurons’ normalized activity (darker is more active). Arrows indicate the vertex which was stimulated at the beginning of the trial. B, Correlations of approximate steady-state activities of excitatory neurons with all other vertices in the same associative memory structures and at the same two balance values as in panel A. Vertices have been ordered such that those with similar correlations to other vertices are adjacent, to illustrate the clustering effect that naturally arises from the network’s dynamics.

In order to quantify the similarity between the activity of the network and graph-theoretic properties of the associative memory structures, we compared the approximate steady-state activity to the community detection and classification of vertices given by the label propagation algorithm (Raghavan et al., 2007). We denote whether two vertices, e.g., u and v, are members of the same community according to this algorithm with an indicator. The clustering index for a given trial and its associated associative memory structure is then computed by comparing this community indicator with the pairwise activity correlations; it is a measure of how our model’s activity corresponds to topological features of G. To test how the activities correspond to geometric distance for arbitrary graphs, we define a local area around a vertex in G. This local area is the closed neighborhood of the vertex, i.e., the set containing the vertex and all vertices within a given distance as measured by their shortest path to it. For a choice of vertex and distance, we construct a local area function which marks vertices inside the local area, and we then calculate the geometric index by comparing this function with the pairwise activity correlations.
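The clustering index can be illustrated with a simple within- versus between-community contrast. This is a plausible sketch in the spirit of the comparison described, not the paper's exact formula; the community labels are assumed to be supplied externally (e.g., by label propagation):

```python
import numpy as np

def clustering_index(corr, communities):
    """Illustrative clustering index: mean pairwise activity correlation
    within communities minus the mean correlation between communities.
    `corr` is a symmetric matrix of vertex-vertex activity correlations;
    `communities[v]` is the community label of vertex v."""
    n = corr.shape[0]
    within, between = [], []
    for u in range(n):
        for v in range(u + 1, n):
            if communities[u] == communities[v]:
                within.append(corr[u, v])
            else:
                between.append(corr[u, v])
    return np.mean(within) - np.mean(between)
```

A positive value then indicates that the network's steady-state activity clusters along the same lines as the graph's topological communities.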

Code availability

The model was implemented using Julia 1.5.2. A copy of the code is publicly available at https://github.com/tfburns/BurnsHagaFukai (also see Extended Data 1). Download Extended Data 1, ZIP file.

Results

The general structure of the model is illustrated in Figure 1. Memories are modelled as strongly-interconnected assemblies of excitatory neurons. Each memory item’s assembly is also interconnected with the assemblies of the memory items to which it is connected in the associative memory structure, G. The associative memory structure can take on any form. Inhibition to the network is provided by two equally-sized populations: (1) a global inhibitory population, with fixed excitatory-to-global-inhibitory and global-inhibitory-to-excitatory connection probabilities; and (2) local inhibitory populations (one for each excitatory assembly), which are fully connected to individual excitatory assemblies in the associative memory structure. The balance between these two sources of inhibition was governed by a balance parameter: at one extreme, inhibition is strongly global; at the other, strongly local; and at the midpoint, the two contribute equally. A single trial is performed by giving a brief positive impulse to a single excitatory assembly and then letting the network self-regulate its activity thereafter. This is similar to how a brief sensory stimulus of a single memory item can (even after the stimulus is removed) produce persistent, representable activity, and this activity can cause the retrieval of related memory items via cognition (Miyashita, 1988; MacDonald et al., 2011; Uitvlugt and Healey, 2019). We mostly analyze the approximate steady state reached after stimulation ends.

Extended range of retrieval

Setting G as a 1D chain of memory patterns, we simulated a range of values of the local-global inhibition balance, from fully global to strongly local, in small steps. We found the range of retrieval extended gradually as the balance shifted toward local inhibition (Fig. 2). At strongly local-dominant values, the network showed a dramatic increase in noisy behavior; however, this slowly subsided as we increased the size of the network, indicating a finite field effect (Fig. 2). Compared with the global-dominant configuration, which had a range of retrieval of around 5, the local-dominant configuration quadrupled this distance to 20 neighbors along the 1D chain (Fig. 2). At the most extreme local-dominant values, we tested networks of increasing sizes and found that in all cases the network activity was very noisy. Because of computational limitations, we did not test larger networks; however, we speculate that sufficiently large networks are likely to exhibit even greater extensions to the range of retrieval but that, at smaller network scales, they are perturbed by noise from a finite field effect.

Spread of excitation in associative memory structures

We also tested more sophisticated associative memory structures, namely the K_5-3-chain, karate club graph, Tutte graph, and multiroom graph. As in the 1D chain case, trials at strongly local-dominant balance values often had noise, although the largest graph (multiroom) had stable trials at somewhat higher values. We also observed that most graphs changed in their excitatory activity most noticeably within a middle region of the balance. We therefore chose to focus on two cases: (1) strong global inhibition, and (2) slightly stronger local inhibition (Fig. 3). In most cases (karate club graph, K_5-3-chain, and multiroom graph), excitation spread across a larger range of the associative memory structure when local inhibition was dominant than when global inhibition was dominant. The Tutte graph uniquely decreased the spread of excitation when its central vertex was activated (Fig. 3, third row, arrow). We suspect this is because of the unique topology of the Tutte graph and this central vertex: no other graph tested has strongly segmented “rooms” all neighboring a single vertex. Correlations between the vertices (assemblies) of the underlying neurons (neurons belonging to those assemblies, see Eq. 11) showed different resolutions of clustering. For most graphs, there was a trend of more, smaller clusters under global dominance and then fewer, larger clusters under local dominance. However, the K_5-3-chain showed a breaking down of clusters and some strong negative correlations under local dominance. We can see in Figure 3, top row, that the graph is made up of pseudo-K_5 subgraphs: groups of five vertices completely connected, except for two “boundary” vertices, which connect the pseudo-K_5 subgraphs together. Within each pseudo-K_5 subgraph, the three “core” vertices (those which are fully connected within the pseudo-K_5 subgraph and are not boundary vertices) remain strongly correlated with one another, while the two “boundary” vertices become almost equally correlated with their own pseudo-K_5 subgraph and their neighboring subgraph, and negatively correlated with the opposite subgraph. 
For the well-connected core vertices, also represents the level at which the spread of excitation almost covers the entire graph. This is quite unlike the other graphs tested. At , in the karate club graph, approximately five clusters of strongly correlated vertices were present, whereas at this reduced to approximately three (Fig. 3, second row). The Tutte and multiroom graphs showed a similar trend in consolidation of clusters at (Fig. 3, third and fourth rows). We also observed how excitation spreads across the associative memory structure across time, after activation of vertices of interest, in the Tutte and multiroom graphs. For the Tutte graph we chose the central vertex, which branches off into three separate “rooms,” and for the multiroom graph we chose a location within one of the rooms that also led through a “doorway” to a neighboring room. We chose these vertices since they represent points of behavioral interest and ecological importance in animals—they are points at which an animal may make significant choice between which room to enter, explore, or exploit. In the Tutte graph, for , there is initial activation of all three rooms (Fig. 4). This is accompanied by a general rise in global inhibition and specific increases in the activity of local inhibitory populations connected to the respective active excitatory populations. However, at this early stage, one room is slightly more dominant in overall excitation (Fig. 4, top-left panel, bottom-right “room”). This dominance appears to translate into gradual and then complete activity dominance compared with the other two rooms at the later time-windows. Contrastingly, for , the activity of vertices in the Tutte graph is initially broader and this breadth of excitation is maintained steadily throughout the duration of the trial. We also see that the global and local inhibitory populations for (Fig. 4) quickly stabilize in an approximate steady-state. 
In the case of c = 0.1, the global inhibitory activity progresses through three distinct phases of activity (Fig. 4, arrows): an initial rise, an unstable plateau, and finally a higher, stable plateau. Meanwhile, the local inhibitory activity for c = 0.1 reflects the recruitment and release of various memory items before coming to an approximate steady-state at around the same time as the global inhibitory activity.
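The qualitative behavior described above can be reproduced in a deliberately minimal caricature of the model: a 1D chain of rate-coded assemblies with recurrent excitation, per-assembly (local) inhibition, and inhibition proportional to total (global) activity. All parameters below are illustrative assumptions, not the paper's equations; the point is only that shifting a fixed inhibitory budget from global to local (larger c) broadens the retrieved range around a stimulated memory item.

```python
import numpy as np

def simulate(c, n=21, steps=1000, dt=0.1, w_inh=2.0):
    """Toy 1D chain of rate-coded assemblies.

    A fixed inhibitory budget w_inh is split by c between local
    (per-assembly) and global (summed-activity) inhibition.
    Illustrative parameters only; not the paper's equations.
    """
    w_self, w_nb = 2.0, 1.0            # recurrent excitation
    w_loc = c * w_inh                  # local inhibition onto own assembly
    w_glob = (1.0 - c) * w_inh / n     # global inhibition onto all assemblies
    phi = lambda x: np.clip(x, 0.0, 1.0)   # saturating rate nonlinearity
    r = np.zeros(n)
    for t in range(steps):
        ext = np.zeros(n)
        if t < 200:
            ext[n // 2] = 2.0          # stimulate the central memory item
        nb = np.zeros(n)               # excitation from chain neighbors
        nb[1:] += r[:-1]
        nb[:-1] += r[1:]
        u = w_self * r + w_nb * nb - w_loc * r - w_glob * r.sum() + ext
        r = r + dt * (-r + phi(u))     # Euler step of rate dynamics
    return r

def retrieved(r, thresh=0.1):
    """Number of assemblies counted as retrieved."""
    return int((r > thresh).sum())
```

In this sketch, a global-dominant setting (c = 0.1) retrieves only a block of assemblies around the stimulus before global inhibition halts the spread, while a local-dominant setting (c = 0.9) lets retrieval extend across the whole chain, echoing the extended-retrieval effect in the full model.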
Figure 4.

A, Example trials for memory item neuron activities in the Tutte graph during different time-windows for c = 0.1 and c = 0.525. The central vertex is activated for the first 80 ms of each trial. B, Global and local inhibitory firing rates over time for the Tutte graph trial with c = 0.1 shown in A. Arrows illustrate three distinct modes or levels of global inhibition. C, Same as B but for c = 0.525.

The multiroom graph showed a similar trend of broadening and maintaining a larger range of retrieval with increases in c. However, possibly because of the size of the network and because the chosen vertex was located within one of the rooms (thus biasing toward activation of that room's other vertices, unlike the central vertex in the Tutte graph), observing the effect required a further increase in c. For illustration of the effect, we chose two larger values of c (Fig. 5). Interestingly, at the larger of the two values, the initial broadening of the range of retrieval into the neighboring room (through the doorway adjacent to the stimulated memory item) was slightly reduced, and the first memory pattern of the room beyond the opposite doorway became active later in the trial.
Figure 5.

A, Example trials for memory item neuron activities in the multiroom graph during different time-windows for two different values of the global-local inhibitory balance c. A vertex beside the uppermost "doorway" is activated for the first 80 ms of each trial. B, Global and local inhibitory firing rates over time for one of the multiroom graph trials shown in A. C, Same as B but for the other value of c.

The clustering and geometric indices, Q and R, for each graph at different values of c are given in Table 1. Since R depends on a choice of distance defining the local area, we calculated R for all values of d (from d = 1 up to the diameter of the graph) and report the largest value of R (and its d) in Table 1, and the values for all d in Table 2. In general, the larger the value of Q, the more agreement between the community structure measured by label propagation and that measured by the correlations of vertex activities in the final network states of our model. High values of R indicate the final activity states closely reflect geometric distance. We analyze the activity based on all neurons, and on the subset of neurons which reach a firing rate of at least a threshold fraction of the maximum firing rate during the simulation. We call this subset the selective neurons.
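A standard ingredient for such a clustering index is Newman's modularity, which scores how well a partition's communities capture a graph's edges relative to a degree-matched random graph. The paper's exact definition of Q may differ from this; the following is a minimal sketch of the textbook quantity:

```python
def modularity(adj, labels):
    """Newman modularity Q of a vertex partition.

    adj: dict vertex -> set of neighbors (undirected, no self-loops).
    labels: dict vertex -> community id.
    Q = (1/2m) * sum_{uv in same community} (A_uv - k_u k_v / 2m).
    """
    m2 = sum(len(neighbors) for neighbors in adj.values())  # 2m
    deg = {v: len(neighbors) for v, neighbors in adj.items()}
    q = 0.0
    for u in adj:
        for v in adj:
            if labels[u] == labels[v]:
                a_uv = 1.0 if v in adj[u] else 0.0
                q += a_uv - deg[u] * deg[v] / m2
    return q / m2
```

For example, two triangles joined by a single edge, partitioned into their natural communities, score Q = 5/14 ≈ 0.357; mixing the two communities lowers Q, which is the sense in which larger Q reflects better agreement between a partition and the activity-derived clusters.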
Table 1

Clustering and geometric indices for each graph at c = 0.1 and c = 0.525

Graph | Diameter | Q, all, c=0.1 | Q, all, c=0.525 | Q, selective, c=0.1 | Q, selective, c=0.525 | R, all, c=0.1 | R, all, c=0.525 | R, selective, c=0.1 | R, selective, c=0.525
K5-3-chain | 4 | 0.404 | 0.040 | 0.643 | 0.368 | 0.337 (d=2) | 0.393 (d=4) | 0.557 (d=1) | 0.592 (d=2)
Karate club graph | 5 | 0.098 | −0.223 | 0.120 | −0.266 | 0.111 (d=2) | 0.999 (d=5) | 0.191 (d=4) | 0.999 (d=5)
Tutte graph | 8 | 0.050 | 0.270 | 0.254 | 0.244 | 0.259 (d=4) | 0.389 (d=4) | 0.300 (d=3) | 0.464 (d=3)
Multiroom graph | 18 | 0.115 | 0.144 | 0.095 | 0.258 | 0.210 (d=5) | 0.375 (d=5) | 0.210 (d=5) | 0.375 (d=5)

For the geometric indices, the largest value of R across d is reported, with the corresponding d in parentheses.
Table 2

Geometric indices (R) for each graph at all tested distances d, for c = 0.525 and c = 0.1

K5-3-chain
d | All, c=0.525 | Selective, c=0.525 | All, c=0.1 | Selective, c=0.1
1 | 0.072 | 0.453 | 0.338 | 0.557
2 | 0.334 | 0.592 | 0.338 | 0.443
3 | 0.379 | 0.296 | 0.234 | 0.129
4 | 0.393 | −0.057 | 0.167 | −0.071
Weighted sum | 3.448 | 2.297 | 2.383 | 1.543
Weighted mean | 2.928 | 1.789 | 2.214 | 1.460
Weighted median | 2 | 1 | 1 | 1

Karate club graph
d | All, c=0.525 | Selective, c=0.525 | All, c=0.1 | Selective, c=0.1
1 | −0.722 | −0.722 | −0.029 | −0.097
2 | 0.223 | 0.223 | 0.111 | 0.133
3 | 0.711 | 0.711 | 0.018 | 0.105
4 | 0.971 | 0.971 | 0.087 | 0.191
5 | 1.000 | 1.000 | 0.081 | 0.190
Weighted sum | 10.743 | 10.743 | 1.000 | 2.195
Weighted mean | 4.920 | 4.920 | 3.732 | 4.213
Weighted median | 3 | 3 | 3 | 3

Tutte graph
d | All, c=0.525 | Selective, c=0.525 | All, c=0.1 | Selective, c=0.1
1 | −0.019 | 0.137 | 0.029 | 0.092
2 | 0.178 | 0.325 | 0.174 | 0.230
3 | 0.341 | 0.464 | 0.258 | 0.300
4 | 0.389 | 0.459 | 0.259 | 0.277
5 | 0.302 | 0.287 | 0.193 | 0.181
6 | 0.204 | 0.102 | 0.122 | 0.078
7 | 0.151 | 0.002 | 0.084 | 0.023
8 | 0.140 | −0.018 | 0.076 | 0.011
Weighted sum | 7.827 | 5.935 | 5.089 | 4.185
Weighted mean | 4.644 | 3.375 | 4.254 | 3.510
Weighted median | 3 | 2 | 3 | 2

Multiroom graph
d | All, c=0.525 | Selective, c=0.525 | All, c=0.1 | Selective, c=0.1
1 | 0.073 | 0.073 | 0.052 | 0.052
2 | 0.170 | 0.170 | 0.122 | 0.122
3 | 0.270 | 0.270 | 0.178 | 0.178
4 | 0.345 | 0.345 | 0.207 | 0.207
5 | 0.375 | 0.375 | 0.210 | 0.210
6 | 0.360 | 0.360 | 0.196 | 0.196
7 | 0.314 | 0.314 | 0.170 | 0.170
8 | 0.252 | 0.252 | 0.137 | 0.137
9 | 0.185 | 0.185 | 0.103 | 0.103
10 | 0.124 | 0.124 | 0.071 | 0.071
11 | 0.072 | 0.072 | 0.044 | 0.044
12 | 0.035 | 0.035 | 0.024 | 0.024
13 | 0.012 | 0.012 | 0.013 | 0.013
14 | 0.000 | 0.000 | 0.007 | 0.007
15 | −0.005 | −0.005 | 0.004 | 0.004
16 | −0.008 | −0.008 | 0.002 | 0.002
17 | −0.009 | −0.009 | 0.002 | 0.002
18 | −0.009 | −0.009 | 0.002 | 0.002
Weighted sum | 14.626 | 14.626 | 8.996 | 8.996
Weighted mean | 5.719 | 5.719 | 5.831 | 5.831
Weighted median | 5 | 5 | 5 | 5
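The "weighted" rows of Table 2 are consistent with weighting each distance d by its index value R(d), i.e., weighted sum = Σ_d d·R(d) and weighted mean = (Σ_d d·R(d)) / (Σ_d R(d)). This is our reading rather than a definition stated in this excerpt, but it reproduces the tabulated values for, e.g., the K5-3-chain columns at c = 0.525:

```python
def weighted_stats(r_by_d):
    """Distance-weighted summary of a geometric-index profile R(d).

    Weighted sum = sum_d d * R(d); weighted mean = weighted sum / sum_d R(d).
    This reading reproduces Table 2's 'weighted' rows to rounding error,
    though the paper's exact definitions may differ.
    """
    wsum = sum(d * r for d, r in r_by_d.items())
    wmean = wsum / sum(r_by_d.values())
    return wsum, wmean

# K5-3-chain columns at c = 0.525 (all neurons, selective neurons):
k5_all = {1: 0.072, 2: 0.334, 3: 0.379, 4: 0.393}
k5_sel = {1: 0.453, 2: 0.592, 3: 0.296, 4: -0.057}
```

With these inputs, weighted_stats returns approximately (3.449, 2.928) for the all-neurons column and (2.297, 1.789) for the selective-neurons column, matching the table up to rounding.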
Clustering indices (Q) computed using only the selective neurons are generally larger than those using all neurons, indicating that these more-active neurons generally contribute positively to clustering. This is especially noticeable when the network settles into a state where assemblies take on a wide range of values (e.g., in the K5-3-chain graph for c = 0.525). In general, the clustering indices indicate that, given the size and topology of different graphs, different values of c have different propensities for clustering global characteristics. Geometric indices (R) were generally greater than the clustering indices, indicating a greater emphasis on the geometry rather than the topology of these memory graphs at these values of c. Nonetheless, some topological information is captured, and almost all of the geometric indices were of a comparable order to the clustering indices. As we increase c, the distance d at which R peaks increases or is unchanged (i.e., it does not decrease). However, whether the clustering index increases with c depends on the structure of the graph. Importantly, either the clustering index or the distance becomes larger as c becomes larger, implying that the approximate steady states can reflect the broader structure of the graph as the ratio of local to global inhibition is increased.

Discussion

Previous modeling studies have conflated excitatory and inhibitory neuron identities and learning rules (Griniasty et al., 1993; Haga and Fukai, 2019) or ignored inhibitory neurons' functional participation (Amit et al., 1994) in associative memory structure retrieval. This work uniquely disentangles excitatory and inhibitory neurons and uses sparse excitatory assemblies to demonstrate the potential functional role of global-local inhibitory balance in a more biologically-plausible setting. In the simplistic case of a 1D memory chain (as might correspond to discrete memories in a sequence of events through time), shifting inhibition to a locally-dominant state quadrupled the range of activation or retrieval. In the case of more sophisticated memory structures, globally-dominant inhibition tended to emphasize finer scale partitions of the memory structure and consolidated strong local associations, whereas locally-dominant inhibition tended to capture broader scale partitions and allow excitation to extend across a larger range of the memory structure. It is important to emphasize that these results are generated in the context of a memory structure which relies on the correlation of semantically close units, implying that memory retrieval in such a structure is functionally optimized when nearby units are correlated. Biological evidence for such correlations was first prominently shown in monkey anterior ventral temporal cortex by Miyashita (1988), who showed that the activity of units selective for arbitrary complex visual patterns became correlated through the stimulus-stimulus associations in the temporal ordering of the stimulus presentations. However, this kind of correlated, associative memory structure is not only found in the visual system; it is also prominent and widely studied in the hippocampus.
Within a spatial environment, place cells representing nearby place fields show correlated activity (Monsalve-Mercado and Roudi, 2020) and can maintain correlations in the same environment over different tasks (Hampson et al., 1996), mostly because of overlapping place fields. When the environment changes, however, these correlations are typically inconsistent across environments (Alme et al., 2014), suggesting contextual cues alter or switch between different memory structures. In our study, we selectively stimulate single memory patterns and see memory retrieval of the pattern and surrounding associated patterns within ∼100–200 ms of simulation time. Is this biologically realistic? Single neurons in human medial temporal lobe which learn to selectively encode associative episodic memories within just a few trials can be recruited in subsequent activations within ∼500–700 ms (Ison et al., 2015); maximal pattern completion of cortical ensembles in visual cortex after subensemble optogenetic stimulation typically takes on the order of 2–4 s (Carrillo-Reid et al., 2019); and biasing memory-guided spatial behavior by selectively stimulating clusters of place cells for ∼1 s has been shown to improve performance in reward-attaining behavior (Robinson et al., 2020). Therefore, the speed of memory retrieval in our model is likely on a faster timescale than should generally be expected in actual biological systems, although this could also be because of simplifications in the model, or disanalogous stimulation methods or assembly/memory structures. Recent experimental evidence in mice (Rolotti et al., 2022) shows that when optogenetic techniques are used to induce place field formation in CA1 neurons, feedback inhibition limits the number of neurons which become activated, thereby limiting the size of the neural assembly which becomes activated by the induced place field. However, using disinhibition, this effect can be nullified and the neural assemblies can be made larger.
Rolotti et al. (2022) showed such disinhibition can improve performance on a head-fixed spatial goal-oriented learning task via overrepresentation of the rewarded locations used for performance in the task. Another functional benefit of such disinhibition may be in rapid place field formation, as is seen in the behavioral timescale synaptic plasticity mechanism (Bittner et al., 2017; Zhao et al., 2020; Milstein et al., 2021). Our modeling results suggest similar effects may be possible without the use of disinhibition but rather simply via a rebalancing of the relative activity or strength between different inhibitory populations. In Rolotti et al. (2022), the feedback inhibition comes from the hippocampus, but they do not explore distinctions between different inhibitory populations therein. There are many different types of inhibitory neurons, each with distinct connectivity, dynamics, and morphology (Pelkey et al., 2017; Burns and Rajan, 2021; Campagnola et al., 2022). In our model, we speculate that the “local” inhibitory neurons are parvalbumin-expressing while “global” inhibitory neurons are somatostatin-expressing, given there exists some evidence for such connectivity profiles in visual cortex (Adesnik et al., 2012; Litwin-Kumar et al., 2016). However, it is possible different areas may recruit and use inhibitory neurons and their circuits differently, for example to develop different scales of representations in hierarchical planning (Brunec and Momennejad, 2022). It could also be the case that there are even more functional groups of inhibitory neurons involved in these phenomena (e.g., see later in this discussion regarding a potential additional “global” inhibitory group for decreasing the correlation between neighboring memory patterns). 
Inhibitory neurons also contribute to the initiation, maintenance, and modulation of rhythmic oscillations in local electrical activity (Traub et al., 1998; Fries, 2005; Bartos et al., 2007; Buzsáki and Wang, 2012; Aton et al., 2013). One example is the pyramidal interneuron network gamma (PING) mechanism (Whittington et al., 1995), which can generate rhythmic dynamics which can ultimately result in the synchronous firing of excitatory neurons. Classically, the PING mechanism is thought of as involving just one group of excitatory neurons and one group of inhibitory neurons, and this is generally sufficient for the generation of PING dynamics. However, Rich et al. (2017) showed that by expanding the diversity of inhibitory neurons into two groups with different recurrent disinhibitory connectivity, one weakly connected and one strongly connected, it is possible to achieve richer and more robust PING dynamics. Although we do not study disinhibition in our model and our techniques are substantially different from those of Rich et al. (2017), we partly followed their theme (albeit in a different mechanism and showing a different phenomenon) by showing that considering a greater diversity of inhibitory neurons acting simultaneously in a network can generate more interesting and novel dynamics. How the forms of inhibitory diversity involved in different mechanisms or phenomena (e.g., the PING mechanism and the multiscale and extended retrieval of associative memory structures we demonstrate here) interact with one another is an open question for both computational and experimental neuroscientists.
Theoretically, in the absence of noise and with a sufficiently large network, an associative memory structure with N neurons can be expected to accurately store (and retrieve via pattern completion) a maximum on the order of N/(2 ln N) memory patterns (Amit et al., 1985; McEliece et al., 1987), on the order of 0.14N memory patterns if we permit more errors (Amit, 1992), and fewer if those patterns are correlated (Löwe, 1998; although exactly how many fewer depends on the manner in which the patterns are correlated). In our case, the patterns themselves are not correlated; rather, they are created independently of one another and then correlated "spatially" in the larger memory structure via excitatory weights between the memory patterns, as described in Equation 1 and illustrated in Figure 1. Since we set the probability of neighboring memory patterns being connected to one another to a fixed value, the effective spatial correlation takes this same value. Past theoretical and numerical results (Cugliandolo and Tsodyks, 1994; Gandolfo et al., 1999) therefore indicate the memory capacity will be smaller than if the patterns were not correlated. However, if the spatial correlation is lowered sufficiently, the theoretical memory capacity can be the same as if there were no spatial correlation (for sufficiently large networks without noise), and the memory patterns will be sufficiently separated to allow accurate pattern completion. Conceivably, it is possible to functionally enter this range of spatial correlation in our model without changing the connection probability between excitatory memory patterns, and instead by sufficiently increasing the absolute strength of global inhibition. Such an increase in global inhibition will gradually suppress all patterns, with those most weakly activated dying out earliest. At the level of global inhibition just before all patterns are suppressed, one or more patterns will be minimally active, and this is likely fewer than the number of patterns active before the increase in global inhibition.
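The classical capacity estimates cited above are simple to evaluate. The figures below are the textbook Hopfield-network results (N/(2 ln N) from McEliece et al., 1987, and ≈0.138N from Amit et al., 1985), used here only to give a sense of scale:

```python
import math

def capacity_error_free(n):
    """McEliece et al. (1987) bound: ~n / (2 ln n) patterns retrievable
    with high probability of exact recall in a Hopfield-style network."""
    return n / (2.0 * math.log(n))

def capacity_tolerant(n, alpha=0.138):
    """Amit-Gutfreund-Sompolinsky estimate: ~0.138 n patterns if a small
    fraction of erroneous bits per pattern is tolerated."""
    return alpha * n

# For a network of 10,000 neurons:
# error-free: ~543 patterns; error-tolerant: ~1,380 patterns.
```

Correlations between patterns reduce both figures, which is the regime the model above operates in.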
However, increasing the strength of a global inhibitory population as in our model may not be biologically realistic; perhaps a more realistic scenario would be to recruit another global inhibitory pool, i.e., a second inhibitory neuron group which is globally connected to the excitatory population. This is beyond the scope of the current study, and here we focus on the case of just one global inhibitory group and one local inhibitory group for each memory pattern. Nevertheless, for these reasons, the memory capacity of our model is less than the theoretical optimum because of the correlations between patterns, and as c increases it becomes even less optimal, since increasing c is theoretically equivalent to increasing the strength of the correlations between neighboring memory patterns. A similar capacity effect is present in prior models with correlation between the memory patterns (Griniasty et al., 1993; Amit et al., 1994; Haga and Fukai, 2019); however, in those models this effect comes about by (in whole or in part) modifying excitatory weights, whereas here we demonstrate this effect can be generated by modifications to inhibitory weights alone. The effects generated by these modifications, such as stable extension of the range of retrieval, appear limited because of increases in noise in strongly local-inhibition-dominant states. This is likely because of a finite-size effect and may indicate a necessary minimum size of local excitatory-inhibitory assemblies for such states. For example, stability in the strongly local-inhibition-dominant regime for the 1D chain required a network size four times greater than in the globally dominant case to maintain stability of retrieval, translating to excitatory assemblies of 160 neurons paired with 40 local inhibitory neurons.
Although assemblies of ∼300 neurons have been used in optical microstimulation experiments in sensory cortex to drive behavior in mice (Huber et al., 2008), most recorded assemblies are on the order of tens of neurons (Harris et al., 2003; Fujisawa et al., 2008). Among other benefits, such sparsity is accompanied by theoretical energy efficiencies (Levy and Baxter, 1996) and, in associative memory models, can lead to fewer spurious memories (Hoffman, 2019). It therefore seems likely that for the described local-global inhibition mechanism to have a stable functional effect in extending the range of retrieval, the presence of both local and global inhibition is required in finite, real-world networks with sparse assemblies. Alternatively, it is possible this mechanism requires a hybrid sparse-dense coding schema, as has long been suggested to operate in hippocampus (Barnes et al., 1990), cerebellum (Marr, 1969), and, more recently, sensory areas (Laurent, 2002; Sakata and Harris, 2009). In such a schema, sparse assemblies report their activity to densely-connected assemblies, which broadcast information to other sparse assemblies. In our model, we can consider the global inhibitory population as a densely-connected assembly which broadcasts the overall level of excitation in the network to all local, sparse assemblies; it is simply inhibitory rather than excitatory, as in classic dense-sparse schemas. Through this interpretation, a reduction in the relative strength of global inhibition (as in the unstable region of high c) is equivalent to a gradual transition in the coding schema from sparse-dense to purely sparse. Thus, if the described local-global inhibition mechanism requires a sparse-dense coding schema, its instability when the coding scheme becomes sparse is expected. Associative memory structures with more sophisticated topologies also showed unstable regions at high values of c, although less so when the graph was sufficiently large (such as the multiroom graph).
So, it is also possible this mechanism can be supported when the memory structure is adequately structured or large. Extension of the range of retrieval was not the only apparent function of the inhibitory mechanism in sophisticated associative memory structures; the mechanism also permitted multiscale segmentation of the associative memory structure. Local-inhibition-dominant states typically activated coarser topological segments of the graphs, whereas global-inhibition-dominant states consolidated activity in more densely associated clusters, highlighting finer topological features. These results were similar to those found in a more abstract model of binary neurons (Haga and Fukai, 2021), except that the current model was unable to totally eliminate the spread of excitation, as the more abstract model could. This is because the current model does not include direct potentiation of excitatory weights, but rather modulation of local-global inhibitory balance. In this model, where association is embedded ubiquitously, sustaining highly-specific activity within a narrow range of memory items, or even a single memory item, requires very strong self-excitation within an assembly and stronger overall inhibition with small c. This demonstrates a general limitation: in a more biologically-realistic setting, it may not be possible to fully eliminate or reduce association between items embedded in an excitatory memory structure through inhibitory modulation alone. Nevertheless, such inhibitory activity may cause dissociation through plasticity and learning mechanisms, as demonstrated in numerous psychological and biological studies (Anderson, 2003; Chiu and Egner, 2015; Schmitz et al., 2017; Anderson and Hulbert, 2021), which we have not investigated here.
An intriguing aspect of this inhibitory mechanism is its ability to dramatically affect not just the range of retrieval but also which parts of the memory structure become dominant given an initial stimulation. For example, it appears that in global-inhibition-dominant states, global inhibition drives a "winner-takes-all" dynamic (Grossberg, 1973) whereby only the globally strongest memories remain active. In local-inhibition-dominant states, this "winner-takes-all" dynamic appears to dissipate and permit a general extension of retrieval, or a more egalitarian sharing among the winners. However, this extension can also be tempered, and a "winner-takes-all" dynamic can appear at the peripheries of the retrieval range, with different peripheries competing against each other (Fig. 5). This may be considered a global state transition from "winner-takes-all" to "winner-shares-all" (Fukai and Tanaka, 1997). We therefore hypothesize that an inhibitory mechanism like the one we have described may aid in the learning or retrieval of graph-based cognitive tasks in cortical networks (Whittington et al., 2020; Wang et al., 2021). Cognitive control or exploitation of this mechanism might also occur in concert with, for example, gamma oscillations, which are strongly tied to inhibitory activity (Buzsáki and Wang, 2012). This may be especially useful when faced with competing behavioral choices where maintaining the distribution of these choices is meaningful, such as in perceptual decision-making (Najafi et al., 2020). Indeed, Roach et al. (2022) report that tuned local inhibition can alter the attractor dynamics of perceptual decision-making networks to balance between the speed and accuracy of perceptual decisions. Probing such circuits and behaviors may provide insights into the potential influence such inhibitory mechanisms have on neuropathologies, especially those associated with cognitive deficits (Amieva et al., 2004; Baroncelli et al., 2011).
For instance, the coordination and interaction of inhibitory-driven oscillatory activity in hippocampus and prefrontal cortex is known to play a role in spatial memory tasks (Jones and Wilson, 2005) and spatial decision-making (Tavares and Tort, 2022). This coordination and interaction can be disrupted in epilepsy, leading to decreased behavioral flexibility (Kleen et al., 2011). Perhaps the associated behavioral deficits are in part because of maladaptation or dysfunction of local-global inhibitory balance, or other subtle disruptions to networks involving multiple inhibitory neuron types. While this study has made some advances over prior models (Griniasty et al., 1993; Amit et al., 1994; Haga and Fukai, 2019) in terms of improving the "biological realism" of the model, many simplifications and unrealistic features remain. We treat neurons as having a single point of intracellular space, i.e., without dendrites or specific morphology, which, besides being unrealistic in itself, also prevents different classes of inhibitory neurons from preferentially synapsing onto different regions of other neurons, a targeting preference known to vary widely across inhibitory neurons (Otsuka and Kawaguchi, 2009; Burns and Rajan, 2021; Dudok et al., 2021). We also assume that joint excitatory-inhibitory assemblies are completely connected, a simplification that does not match biology (Otsuka and Kawaguchi, 2009; Koolschijn et al., 2019; Rolotti et al., 2022). These and other limitations mean that whether and how actual biological networks achieve the same functional benefits we have described here using inhibitory neuron diversity remains unknown. Experimentalists may therefore wish to design studies to test the presence or absence of such computational benefits in biological networks with diverse inhibitory populations.
In our model, making a seemingly subtle change to the network structure by introducing some of the complexities and diversities of inhibitory neurons had a profound impact on retrieval. We have shown how this phenomenon mainly persists in a sparse, associative memory structure which obeys Dale’s Law and has more biologically-plausible connections than prior models. We have also shown and discussed some of the potential functional roles of this mechanism in graph-based cognitive tasks and discussed how this mechanism may contribute to a type of sparse-dense coding schema.
References shown in this excerpt (66 cited in total):

Fries P (2005) A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends Cogn Sci.
Raghavan UN, Albert R, Kumara S (2007) Near linear time algorithm to detect community structures in large-scale networks. Phys Rev E Stat Nonlin Soft Matter Phys.
Fujisawa S, Amarasingham A, Harrison MT, Buzsáki G (2008) Behavior-dependent short-term assembly dynamics in the medial prefrontal cortex. Nat Neurosci.
Miyashita Y (1988) Neuronal correlate of visual associative long-term memory in the primate temporal cortex. Nature.
Kleen JK, Wu EX, Holmes GL, Scott RC, Lenck-Santini PP (2011) Enhanced oscillatory activity in the hippocampal-prefrontal network is related to short-term memory function after early-life seizures. J Neurosci.
Pelkey KA, Chittajallu R, Craig MT, Tricoire L, Wester JC, McBain CJ (2017) Hippocampal GABAergic inhibitory interneurons. Physiol Rev.
Whittington JCR, Muller TH, Mark S, Chen G, Barry C, Burgess N, Behrens TEJ (2020) The Tolman-Eichenbaum Machine: unifying space and relational memory through generalization in the hippocampal formation. Cell.
Yao HK, Guet-McCreight A, Mazza F, Moradi Chameh H, Prevot TD, Griffiths JD, Tripathy SJ, Valiante TA, Sibille E, Hay E (2022) Reduced inhibition in depression impairs stimulus processing in human cortical microcircuits. Cell Rep.
Dudok B, Klein PM, Hwaun E, Lee BR, Yao Z, Fong O, Bowler JC, Terada S, Sparks FT, Szabo GG, Farrell JS, Berg J, Daigle TL, Tasic B, Dimidschstein J, Fishell G, Losonczy A, Zeng H, Soltesz I (2021) Alternating sources of perisomatic inhibition during behavior. Neuron.
Jones MW, Wilson MA (2005) Theta rhythms coordinate hippocampal-prefrontal interactions in a spatial memory task. PLoS Biol.
