José A Carrillo1, Helge Holden2, Susanne Solem3. 1. Mathematical Institute, University of Oxford, Oxford, OX2 6GG, UK. carrillo@maths.ox.ac.uk. 2. Department of Mathematical Sciences, NTNU Norwegian University of Science and Technology, NO-7491, Trondheim, Norway. 3. Department of Mathematics, Norwegian University of Life Sciences, 1433, Ås, Norway.
Abstract
The activity generated by an ensemble of neurons is affected by various noise sources. It is a well-recognised challenge to understand the effects of noise on the stability of such networks. We demonstrate that the patterns of activity generated by networks of grid cells emerge from the instability of homogeneous activity for small levels of noise. This is carried out by analysing the robustness of network activity patterns with respect to noise in an upscaled noisy grid cell model in the form of a system of partial differential equations. Inhomogeneous network patterns are numerically understood as branches bifurcating from unstable homogeneous states for small noise levels. We show that there is a phase transition occurring as the level of noise decreases. Our numerical study also indicates the presence of hysteresis phenomena close to the precise critical noise value.
By now it is well established that grid cells, and the characteristic hexagonal firing patterns they create in physical space, play an important role in the navigational system of mammalian brains (McNaughton et al. 2017). Since grid cells were discovered in Hafting et al. (2005), there has been extensive activity aimed at understanding their precise behaviour, see (Rowland et al. 2016; McNaughton et al. 2017) and the references therein. The main challenges ten years after the discovery of grid cells, such as how grid cells are organised and how they are connected to other cell types in the brain, were highlighted in Rowland et al. (2016). In particular, as the brain is inherently noisy (Rolls and Deco 2010), the lack of understanding of the effect of noise on grid cells was emphasised as a challenge. The recent results in Gardner et al. (2022) have provided insight into the organisation of grid cells by showing that the activity of the network (called a module in Gardner et al. 2022) is arranged on a torus. The question regarding the effects of noise on grid cells, however, remains open. In accordance with previous experimental studies and general belief in the field, the results in Gardner et al. (2022) provided further evidence in favour of describing the grid cell network by continuous attractor network dynamics through a system of neural field models (Ermentrout and Terman 2010). The first attractor network models for grid cells were presented in McNaughton et al. (2006), Burak and Fiete (2009), Couey et al. (2013), which were based on the classical papers Wilson and Cowan (1972, 1973) and Amari (1977), see also Pinto et al. (1996). In Burak and Fiete (2009), Couey et al. (2013), the grid cells are assumed to have orientation preferences in four different directions. The hexagonal grid cell patterns are then generated by a system of neural field ordinary differential equations with .
Here represents the activity level of neuron i with orientation preference , and is its relaxation time. The right-hand side of (1.1) represents the firing rate of the neuron, see Bressloff (2012). The function is a given activation function, often of ReLU or sigmoid form. The firing rate of neuron i depends on an external input and the response of the network. It is assumed that the neurons are arranged on a square, which we will denote and call the neural sheet, according to the strength of their pairwise connection. The position of neuron i is denoted by . The strength of the connectivity between neuron i of type and j of type is , where ), , is assumed to be even in each coordinate, in . The connectivity is shifted in the direction of the orientation preference of neuron j of type with , which is given by shifts of equal length in the four cardinal directions: north, south, east, and west. It has been commonly considered, and, as mentioned, recently shown in Gardner et al. (2022), that the network of neurons creates a torus connectivity. This is realised in the model by assuming that W is extended periodically outside . By modelling the movement of a rat traversing physical space through the input , (1.1) can recreate the hexagonal patterns in physical space produced by the firing of a single grid cell as observed in experiments (Couey et al. 2013). In this model, the patterns in physical space are a consequence of the patterns generated on the neural sheet . However, since (1.1) is a deterministic model, it offers little insight into the effects of noise. Works on understanding noisy neural fields have in general been lacking (Bressloff 2012, Sect. 6) until recently (Touboul 2012; Kilpatrick and Ermentrout 2013; Kilpatrick 2014; MacLaurin and Bressloff 2020; Touboul et al. 2012; Bressloff 2019; Byrne et al. 2019). In Burak and Fiete (2012), fundamental limits on how information dissipates in networks of noisy neurons were derived.
The author in Kilpatrick (2014) presents a study of two coupled noisy neural field models with a focus on the consequences of the coupling on the neural activity waves, while Kilpatrick and Ermentrout (2013) and MacLaurin and Bressloff (2020) investigate the effect of noise on stationary bumps in one-dimensional spatially extended networks. Taking a different perspective from Burak and Fiete (2012), Kilpatrick and Ermentrout (2013), Kilpatrick (2014), and MacLaurin and Bressloff (2020), by adding noise to the common model of a grid cell network (1.1), the main goal of this work is to analyse the robustness of the hexagonal patterns in the activity level (Ermentrout and Cowan 1979) observed in (1.1) with respect to noise strength. We show that the stationary spatial patterns of the activity level emerge from the instability of homogeneous brain activity as the noise level diminishes. By upscaling the model (1.1) with noise to a system of Fokker–Planck-like partial differential equations, our analysis gives an estimate of the noise strength above which there is no coherent activity pattern. We also numerically explore, as a function of the noise, the different branches of inhomogeneous stationary patterns bifurcating from the homogeneous state for several activation functions, indicating the presence of hysteresis phenomena. Instabilities of homogeneous steady states of noisy neural fields were also investigated by Byrne et al. (2019) utilising a partial differential equation (PDE) description. However, the PDEs were of a very different form than the ones presented in this manuscript. A Fokker–Planck-like system describing a network of noisy neurons can be found in Bressloff (2019), where neural variability in a coupled ring network was studied. It is classical to analyse the behavioural change of neural fields without noise in the form of ordinary differential equations (ODEs) by standard bifurcation analysis (Murray 2002; Bressloff 2012; Veltz et al.
2015; Kilpatrick and Poll 2017; Schmidt and Avitabile 2020). Finding noise-driven bifurcations is more challenging, and one has to rely on other technical tools unless the coupling of the network has a particular structure for which closed ODEs for the mean and the variance are available (Touboul et al. 2012; Touboul 2012). The system (1.1) with noise has, in addition, a nonlinear coupling, and the activity levels must remain nonnegative due to their physical interpretation, leading to additional technical constraints on the stochastic processes involved. In the following, we deal with these challenges by analysing a system of Fokker–Planck-like partial differential equations with boundary conditions describing the space-time evolution of the law of the stochastic processes with respect to the noise level.
The PDE system: derivation, main goal, and numerical experiment
For the sake of the reader, we start by discussing the simplest classical case of no spatial connectivity, see Hopfield (1984), Bressloff (2012) and the references therein. Let us consider the classical neural field stochastic dynamical system for a network of M coupled neurons given by . Here, the neurons are considered indistinguishable and all-to-all coupled with equal strengths given by , whose sign depends on the type of neurons considered: inhibitory or excitatory. We also consider that the relaxation time is the same for all neurons and equal to . B(t) is the external input for this neural network and is the strength of the noise . We have considered independent Brownian motions for each neuron in the network. Classical stochastic analysis implies that we can derive a Fokker–Planck equation for the evolution of the probability density of neurons with activity level s at time t in the large population limit , i.e., the law of the limiting stochastic process follows the PDE , where denotes the probability of observing the activity s at time t, and denotes the mean value of the activity level s. Notice that the noise can drive the activity level to be negative in (2.1), which is clearly not desirable from the modelling viewpoint. In order to avoid this, it is common practice to consider the Fokker–Planck equation (2.2) on with no-flux boundary conditions . This ensures that particles cannot escape from non-negative values of the activity level variable s at the PDE level while preserving the evolution of a probability density, see Carrillo et al. (2011), Carrillo et al. (2013) for instance.
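As an illustration of the microscopic picture behind (2.1)–(2.3), the following sketch simulates an all-to-all coupled network with an Euler–Maruyama scheme and keeps the activity nonnegative by reflection at zero. All names and values here (coupling J, relaxation time tau, input B, noise strength sigma, ReLU activation) are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def simulate_network(M=200, J=-1.0, tau=10.0, B=1.5, sigma=0.3,
                     T=100.0, dt=0.01, seed=0):
    """Euler--Maruyama for M all-to-all coupled neurons,
    ds_i = (-s_i + phi(J * mean(s) + B)) / tau dt + sigma dW_i,
    with reflection at s = 0 so activity stays nonnegative.
    All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    phi = lambda x: np.maximum(x, 0.0)   # ReLU activation (an assumption)
    s = np.full(M, 0.5)
    for _ in range(int(T / dt)):
        drift = (-s + phi(J * s.mean() + B)) / tau
        s += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(M)
        s = np.abs(s)                    # reflecting boundary at s = 0
    return s

s = simulate_network()
```

Reflection via the absolute value is the standard discrete implementation of the reflecting boundary discussed in Remark 2.1.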
Remark 2.1
(Microscopic Model) Reflective boundary conditions for stochastic processes have been incorporated at the stochastic differential equation (SDE) level in order to prevent particles from escaping a fixed domain (Sznitman 1984; Lions and Sznitman 1984; Faugeras and Inglis 2015). One can produce a microscopic stochastic process by adding an additional process counting when particles touch the boundary of the domain. The law of the rigorous mean-field limit, , of the following system follows the evolution of (2.2)–(2.3) under suitable smoothness assumptions on , see Lions and Sznitman (1984), Sznitman (1991). The next step in the modelling is to reinterpret M as the number of neurons in each of the cortical columns of a neural sheet of N columns. Given space points in the region of the neural cortex, the interaction among NM neurons stacked in N columns at locations with M neurons each, where represents the activity level with orientation of the neuron at location , is given for and by . Here, we consider the same periodic setting, imposed through the periodicity of the interactions for , as in Burak and Fiete (2009), Couey et al. (2013). Moreover, the neurons are inhibitory (Couey et al. 2013), and the activity in the network is modulated by a time-dependent external input as in (1.1). We are dealing with a population network of neurons structured by their orientation preference corresponding to the four cardinal points (north, west, south, east). The network population includes the spatially localised cross-inhibition of neurons with different orientations, modulated by the shape function , where . Following the approach outlined above in the case of one population, we can formally write a Fokker–Planck type equation, in the limit , for the evolution of the probability density of finding neurons of type at position on the neural sheet with activity level at time . We refer to Cai et al. (2006) for a similar approach in conductance-voltage models.
The system of equations reads , where is given by , with periodic boundary conditions in , and the no-flux boundary conditions at given by for each position in the square sheet . To realise the torus connectivity, we assume that W is periodic with respect to and even in each coordinate on . The function is typically a smooth approximation of the ReLU activation function or a sigmoid function. The initial probability density of the system (2.7) is denoted by .
Remark 2.2
The system of Fokker–Planck equations (2.7)–(2.8) can be rigorously derived from the microscopic stochastic processes (2.6) under suitable assumptions. The rigorous proof of this mean-field limit for the spatially extended system (2.7)–(2.8) has recently been obtained in Carrillo et al. (2021) by a generalisation of the coupling method of Sznitman (1991). This rigorous passage to the limit is a very interesting area of mathematical research in its own right, with a multitude of different models and limiting systems derived under different assumptions on the ingredients of the network. For instance, we refer to the works Moynot and Samuelides (2002), Faugeras et al. (2009), Faugeras and Inglis (2015), Touboul (2012), Touboul et al. (2012), and Cabana and Touboul (2018), in which the authors deal with spatially extended systems of neural networks modelled by their voltage with random connectivity interactions using large deviation principles (Arous and Guionnet 1995; Guionnet 1997). To summarise, the main goal of this work is to focus on the biological information carried by the system of PDEs (2.7)–(2.8). More precisely, we study how noise affects the dynamics of (2.7)–(2.8) under the following assumptions:

(A1) The grid cells are arranged on a torus, realised by setting the neural sheet and extending W periodically outside .

(A2) The inhibitory (Couey et al. 2013) connectivity function is at least in , and is an even function in each coordinate on . Furthermore, we define . In the numerical experiments, W satisfies in addition.

(A3) The modulation function is in (unless otherwise stated).

(A4) There are four orientation preferences, 1 (north), 2 (west), 3 (south), and 4 (east), where the shifts are of equal size z in each direction, i.e., , where is the unit vector in direction .

It is well known that grid cell firing is strongly connected to mammals’ navigation, but it is unknown exactly how the grid cell network communicates with other networks in the brain.
We will therefore simply assume in the numerical experiments that the external input in (2.7)–(2.8) depends on the velocity at time t, v(t), of a moving animal in the following manner (Burak and Fiete 2009; Couey et al. 2013): , where is a constant external excitatory input, assumed to be the same for the different , the velocity modulation, the orientation of the animal at time t with respect to the reference frame, and the orientation preference of the neurons of type (). This particular form of the input, together with the right set of parameters in (1.1), has been shown to enable single cells of the ODE system (1.1) to create hexagonal firing patterns in physical space, see Burak and Fiete (2009), Couey et al. (2013).
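The precise formula of (2.9) is elided above; as a hedged sketch, the commonly used Burak–Fiete-type velocity-modulated input can be written as follows, where B0, alpha, and the preference angles are illustrative assumptions, not values from the paper.

```python
import numpy as np

def external_input(v, theta, B0=1.0, alpha=0.9):
    """Illustrative velocity-modulated input in the spirit of (2.9):
    a constant excitatory input B0 plus a term that is largest when the
    animal's heading theta aligns with a neuron's orientation preference."""
    # preference angles for north, west, south, east (an assumption)
    prefs = np.array([np.pi / 2, np.pi, 3 * np.pi / 2, 0.0])
    return B0 + alpha * v * np.cos(theta - prefs)

B = external_input(v=0.2, theta=0.0)  # animal moving east
```

With an eastward heading, the east-preferring population receives the largest input and the west-preferring one the smallest, which is what translates the network pattern as the animal moves.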
Numerical reproduction of the hexagonal patterns
We numerically demonstrate that the PDE system (2.7)–(2.8) with (2.9) is able to reproduce the characteristic single-cell hexagonal firing pattern for rats, as discovered by Hafting et al. (2005), and examine how this pattern depends on the noise strength . For this, we use a numerical scheme that has been extensively utilised for Fokker–Planck-like equations (Carrillo et al. 2015). For more details on the numerical approach and its validation, see Appendix 1. Before connecting the grid cell system (2.7)–(2.8) with the movement of a rat, we initialise the activity on the neuronal sheet by running the simulation with until has numerically stabilised into stationary patterns equal to the ones in the top and middle rows of Fig. 1, modulo translations.
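The structure-preserving idea behind such Fokker–Planck solvers can be sketched in one activity variable: a finite-volume discretisation with upwinded advection, centred diffusion, and zero flux at the boundary, so that mass is conserved exactly and nonnegativity is preserved under a CFL restriction. This is a minimal stand-in under assumed drift and parameters, not the scheme of Carrillo et al. (2015) itself.

```python
import numpy as np

def fp_step(f, s, drift, sigma, dt):
    """One explicit finite-volume step for
    f_t = -(drift(s) f)_s + (sigma^2 / 2) f_ss  on [0, S]
    with no-flux boundary conditions; conserves total mass."""
    ds = s[1] - s[0]
    D = 0.5 * sigma**2
    # fluxes at interior cell interfaces: upwind advection + central diffusion
    a = drift(0.5 * (s[:-1] + s[1:]))
    adv = np.where(a > 0, a * f[:-1], a * f[1:])
    dif = -D * (f[1:] - f[:-1]) / ds
    F = np.concatenate(([0.0], adv + dif, [0.0]))  # zero flux at both ends
    return f - dt / ds * (F[1:] - F[:-1])

s = np.linspace(0.0, 3.0, 128)
f = np.exp(-(s - 1.0) ** 2 / 0.1)
f /= f.sum() * (s[1] - s[0])                 # normalise to a probability density
drift = lambda x: -(x - 0.8)                 # relaxation toward 0.8 (illustrative)
for _ in range(20000):
    f = fp_step(f, s, drift, sigma=0.3, dt=1e-4)
```

Because the boundary fluxes are set to zero, the discrete mass telescopes exactly, mirroring the no-flux conditions (2.8).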
Fig. 1
Network patterns and single-cell responses after min for increasing noise strength (left to right ) with the modulation function used in Couey et al. (2013): and (2.9) with and . Top row: the probability density for one , , middle row: , and bottom row: single-cell firing pattern in a circular enclosure traversed by a rat (units in cm) created by the cell at position (0, 0) on the neural sheet (green dot in top and middle plots)
Then we connect with the rat's movement by setting in (2.9) as in Couey et al. (2013). The velocity v(t) and orientation used in the numerical experiment are calculated using timestamped position data from the physical experiments in Hafting et al. (2005), where rats moved around in a circular enclosure with a radius of 80 cm. The shape of the enclosure can be seen in the plots in the bottom row of Fig. 1. The path of the rat after moving around for minutes, generated with the position data, is visualised in grey. The red-coloured areas in each plot in the bottom row of Fig. 1 make up the firing field of a grid cell in the network. The firing field consists of smaller red circular areas, which in turn are made up of even smaller red dots. Each dot marks a firing of the grid cell placed at position (0, 0) on the neuronal sheet as the velocity and orientation data are fed into the numerical method of (2.7)–(2.8) through (2.9). For simplicity, we have assumed in the numerical experiment that a neuron at a particular position fires as soon as the firing rate in (2.7)–(2.8) satisfies . Now, from left to right in the bottom row of Fig. 1, we see the pattern of the firing fields (the patterns created by the red dots over the path of the rat in the enclosure in physical space) for the grid cell at (0, 0) for increasing noise strength. The top and middle rows in the figure display snapshots of the probability density at and , respectively.
As t increases, these patterns are translated in accordance with the movement of the rat. As can be observed in the bottom row, the red fields generated with the PDE system (2.7)–(2.8) form hexagonal patterns similar to the ones observed in physical experiments. However, the distance between the activity bumps and the area they cover decrease as the noise strength increases. The second main observation is that, as the noise increases, the firing becomes less and less localised. The numerical experiments support the existence of a critical value of the noise, , at which a single cell could fire no matter where the rat is on its path. The question we will address in the following is how stable the patterns observed in Fig. 1 are with respect to the noise strength .
Stability of the neural field system
In this section, we start by studying the spatially homogeneous solutions. We would like to understand the pattern formation in the neural field system (2.7)–(2.8) as a byproduct of the instability of these homogeneous solutions. We assume that solutions to (2.7) are sufficiently smooth and decay fast enough as . We further assume from now on that is constant, i.e., in (2.9), in order to study the emergence of stationary network patterns of the system. Note that setting to different constant values depending on could yield non-stationary network patterns: a rat running with constant speed in one direction would give and in (2.9), which results in different constant values of . This would, with the right set of parameter values, consequently translate the network patterns in time. To avoid this technicality, we let B be identical for the four different direction preferences. Homogeneous solutions for to (2.7) satisfy , with no-flux boundary conditions (2.8) at , i.e., and . In order to find stationary spatially homogeneous states, we first assume that their mean is given. Denote by the corresponding firing rate for simplicity. Thus, by integrating (3.1) and using the boundary condition (3.2), the stationary spatially homogeneous states are given by , with the mass normalisation factor Z such that , i.e., , where the error function has been defined as . However, note that (3.3) is an implicit equation, as depends on the mean . To show the existence of stationary solutions, we need to solve the consistency equation for the mean given by . We prove next that the stationary state exists and is unique by leveraging (3.5) under suitable conditions on the firing rate function .
Proposition 3.1
Let , , and for any . Assume (A3) and that satisfies for all . Then (3.1) has a unique stationary solution defined by (3.3)–(3.5).
Proof
Define for and the function . First, notice that satisfies and for . We now compute , with . It is not difficult to check that the supremum of over is given by . Our assumptions on and imply that , and we then obtain the desired unique zero of G, defining our stationary state through (3.3)–(3.5).
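The fixed-point characterisation (3.3)–(3.5) can also be sketched numerically: for a given mean a, form the truncated-Gaussian stationary density on the nonnegative half-line and iterate until the mean is consistent. The firing rate phi and the noise value below are illustrative choices in the spirit of the hypotheses of Proposition 3.1, not the paper's exact functions.

```python
import numpy as np

def stationary_mean(a, phi, sigma, smax=10.0, n=4000):
    """Mean of the stationary density f_inf(s) ~ exp(-(s - phi(a))^2 / sigma^2)
    restricted to s >= 0 (the no-flux boundary), computed by quadrature."""
    s = np.linspace(0.0, smax, n)
    w = np.exp(-(s - phi(a)) ** 2 / sigma**2)
    return (s * w).sum() / w.sum()

def solve_consistency(phi, sigma, a0=1.0, iters=200):
    """Fixed-point iteration for the consistency equation (3.5):
    a = mean of f_inf( . ; phi(a))."""
    a = a0
    for _ in range(iters):
        a = stationary_mean(a, phi, sigma)
    return a

# illustrative smooth, bounded, increasing firing rate
phi = lambda a: 1.0 / (1.0 + np.exp(-2.0 * (a - 0.5)))
a_star = solve_consistency(phi, sigma=0.3)
```

For a firing rate with a moderate slope, the iteration is a contraction and converges to the unique mean whose existence the proposition guarantees.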
Remark 3.2
Notice that the previous proposition can also be applied to functions admitting negative values, as behaves like in the limit , such that one can show for any . For instance, the theorem is valid for the -approximation of defined by for small enough such that . The smooth approximation of can also be used, as trivially satisfies the hypotheses of Proposition 3.1 since , and it is strictly increasing. We have numerically analysed the stability of the stationary solutions obtained in the previous result among spatially homogeneous solutions of (3.1). In Fig. 2a, we illustrate that the computed stationary state and the numerical solution to the evolution problem after time ms are indistinguishable for the firing rate with . In Fig. 2b, we observe the convergence in time towards the stationary state by computing the difference in and the difference in average between the stationary solution and the evolution problem. We conclude that the stationary state and the corresponding numerical solution to the evolution problem (3.1) are identical to machine precision after ms, and that the convergence in time is exponential with the rates computed by averaging over 100 runs.
Fig. 2
Stabilisation in time of the homogeneous problem. Left: numerical steady state f (blue) versus the fixed point steady state (red) obtained from (3.3)–(3.5) as functions of s. Right: the -error and difference in mean between f and plotted as functions of time t. Parameters: with . . . The interval [0, 3] is split into 512 grid points. Initial data: at random grid points , and zero elsewhere. Average of the slopes over 100 runs: 0.78 (difference in mean) and 0.57 (-difference)
Next, we focus on the linear stability of the spatially homogeneous solution as a solution of the nonlinear system (2.7)–(2.8). For comparison, we first find the condition for linear spatial stability in the case of zero noise () following the classical approach as in Murray (2002). Let be the two-dimensional Fourier transform of W restricted to , with , . As an example, if is the characteristic function of a ball with radius r fully contained in , then , where is a Bessel function. Now let with given by (3.3), and define
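For the ball example, the closed form can be checked directly: the two-dimensional Fourier transform of the indicator of a ball of radius r is 2*pi*r*J1(r|k|)/|k|, with value pi*r^2 at k = 0. The sketch below compares this with a direct quadrature; it is an illustration of the stated example with arbitrary test values, and J1 is evaluated from its integral representation to keep the sketch dependency-free.

```python
import numpy as np

def bessel_j1(x, n=2000):
    """J1 via the integral representation J1(x) = (1/pi) int_0^pi cos(t - x sin t) dt,
    evaluated with the trapezoidal rule."""
    t = np.linspace(0.0, np.pi, n)
    f = np.cos(t - x * np.sin(t))
    return (f.sum() - 0.5 * (f[0] + f[-1])) * (t[1] - t[0]) / np.pi

def what_ball(k, r):
    """2D Fourier transform of the indicator of a ball of radius r:
    W_hat(k) = 2 pi r J1(r |k|) / |k|, with value pi r^2 at k = 0."""
    kn = np.linalg.norm(k)
    if kn == 0:
        return np.pi * r**2
    return 2.0 * np.pi * r * bessel_j1(r * kn) / kn

def what_quad(k, r, n=400):
    """Direct quadrature of int_{|x| < r} cos(k . x) dx as a cross-check."""
    x = np.linspace(-r, r, n)
    X, Y = np.meshgrid(x, x)
    inside = X**2 + Y**2 <= r**2
    dA = (x[1] - x[0]) ** 2
    return np.sum(np.cos(k[0] * X + k[1] * Y) * inside) * dA
```

The two evaluations agree to quadrature accuracy, confirming the Bessel-function form quoted in the text.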
Remark 3.3
Given the assumptions on W in (A1) and (A2), the function is real-valued when the shifts , , satisfy the assumptions in (A4). With shifts of equal size z, one can check that , with , using Euler’s formula. The following lemma presents a linear stability condition for the system (2.7) without noise.
Remark 3.4
The proof of the mean field limit in Carrillo et al. (2021) relies on . The mean field limit in the case is easier to obtain and leads to a pure Vlasov equation. This is a very classical result in the smooth setting and is derived via estimates in transport distances, see Hauray and Jabin (2015), Cañizo et al. (2011), Jabin (2014), Hauray and Jabin (2007), Golse (2016) and the references therein.
Lemma 3.5
Assume (A1)–(A4). Let be as in (3.7). Then the mean of the zero-noise (i.e., ), spatially homogeneous, stationary solution of (2.7)–(2.8) is linearly asymptotically stable if . By taking the mean of (2.7)–(2.8) with , we find that the mean at position , , evolves according to (dropping the t dependence for ease of notation) . We linearise (3.8) around the mean stationary, spatially homogeneous solution , defined through , and get , where and . Applying the ansatz to the equation, we find that each mode has the characteristic polynomial . We see that three of the eigenvalues are stable as long as . The fourth eigenvalue is , which determines whether the linear system is stable or not. This eigenvalue is negative if . We now turn to the case with noise, . Let , where is defined through (3.3)–(3.5). Notice that for the perturbations to be admissible, we need to ensure that , and consequently . In principle, one needs for it to be a probability density. However, we will prove below that linear stability holds without any assumption on the sign of . After linearising (2.7) around the spatially homogeneous state , we get , with . By a change of variables , where is the stationary state satisfying , we get , with . We now restrict the set of perturbations in to those of the form , where is sufficiently smooth. We can then reduce (3.10) to one Fourier mode. Dropping the subscript , we set , where may be complex-valued, and , such that (3.10) turns into , for . An equation for the time evolution of U can also be derived, namely , where is defined in (3.7). Denote , where the dependence on enters through , defined by (3.3)–(3.5). We remark that we cannot obtain closed ODEs for the moments of the distribution in the activity variable s, and thus an analysis similar to Touboul (2012), Touboul et al. (2012) is not possible here. Building on (3.12), we can prove the following result.
Theorem 3.6
Assume (A1)–(A4). Let satisfy the assumptions in Proposition 3.1 and let be as in (3.7) and satisfy the condition in Remark 3.3. Then the spatially homogeneous steady solution to (2.7)–(2.8) is linearly asymptotically stable in for admissible perturbations of the form (3.11) as long as
Remark 3.7
Using the relations (3.3)–(3.5), one can check after some tedious computations that satisfies and assuming . Thus, as , yielding the condition in Lemma 3.5 in the zero noise limit.
Proof (Proof of Theorem 3.6)
The proof is split into three parts. First, we obtain an upper bound for the time derivative of (Part I). Recall that . To obtain a time-decaying estimate from the bound, we need to separate the linear part of U from the nonlinear part (Part II). Finally, we establish the stability of U, and consequently of , which then yields the asymptotic stability of (Part III).

Part I: Note that , such that . We multiply (3.13) with (the complex conjugate of U), and integrate over with respect to : . After applying (3.17) to the first term on the right-hand side and integrating the last term by parts, we find that U satisfies (where denotes the real part of the complex number z) . Using the boundary condition on the second term above and again utilising the equivalence (3.17), we get . Performing an integration by parts on the second-to-last integral and then using (3.17) yet again, we arrive at . From (3.12) and the definition of U, it can be shown that follows , with as in (3.7). We add the two equalities, . By setting and , we get . In the above derivation we have used that . By applying the Cauchy–Schwarz inequality to with , we see that the right-hand side of (3.19) is non-positive. However, it is not straightforward to determine whether the right-hand side is strictly negative, or whether a Grönwall-type decay estimate can be obtained from this expression. In particular, the Cauchy–Schwarz inequality with the chosen functions is an equality, , whenever for any function c(t), which here means that when , we must make sure that the requirement holds.

Part II: To separate the linear part from the nonlinear part, we split U into . We have that . Next, we choose c(t) such that V and are orthogonal with respect to the measure , i.e., . For this we choose , where is defined in (3.14). We can summarise this as , such that (3.19) becomes . From the orthogonality , we get . Inserting this into the first square in (3.21) and then expanding the square, (3.21) turns into , with .
We now apply the Poincaré inequality with respect to the measure (Muckenhoupt 1972; Roustant et al. 2017), given by . Let . Then and , such that . In the above calculation, we have used the orthogonality of V and with respect to .

Part III: Defining , the inequality (3.22) reads , due to (3.15). Note that, also due to (3.15), . It can be checked, after some tedious computations using the explicit expression of in (3.3)–(3.5), that when . This leads to the exponential decay , by an application of Grönwall’s inequality. Thus, we can conclude that U is asymptotically stable. It remains to show that the same holds for . We multiply (3.12) with and integrate over , . As done for U, we integrate the last term by parts and use the boundary condition, such that , where is to be determined. We apply the Cauchy–Schwarz inequality to the middle term and rearrange, . We now choose such that , and then apply Poincaré’s inequality to the integral, , where the exponential decay (3.23) is applied in the last step. This leads to . One can avoid by choosing appropriately. The asymptotic stability of in for the set of perturbations given by (3.11) now follows by an application of Parseval’s identity with respect to , the identity (3.11), and the estimate above. Notice that from (3.4).
Remark 3.8
In principle, the linear stability analysis is valid only for smooth . However, the stability condition (3.15) of the linearised problem depends only on , so the result holds for the linearised system with . Notice that condition (3.15) is continuous with respect to the regularisation parameter in Remark 3.2, for which the linearisation is valid. We also remark that the value of R in the proof above remains strictly negative as long as is small enough, despite the fact that may give negative values.
Fig. 3
Plots of the linear stability condition on for , with . Left: at the minimal value of the noise for linear stability (3.15). Right: Associated contour plot highlighting the Fourier modes with black dots at
Bifurcation diagrams and phase transitions
With our linear stability analysis at hand, we will now investigate how the stationary patterns of the nonlinear PDE system (2.7)–(2.8) change as we vary the noise parameter . We do this by numerically computing bifurcation branches from the spatially homogeneous solution for various choices of the modulation function . The numerical procedure is described in more detail in Sect. 4.2.
Instability of the linearised system
First, to connect the linear stability analysis in Sect. 3 with the patterns we observe for the full system (2.7)–(2.8), we investigate the dominant Fourier modes of the perturbations , where satisfies (3.12). The stability condition (3.15) of Theorem 3.6 is visualised by plotting the function for the case (cf. (3.6)), with , against the modes in Fig. 3. The figure illustrates that the maximum points over the lattice , for this particular W, are among and their reflections by symmetries with respect to the origin, , and . Note that the modulation function of the firing rate has no effect on the maximum points of , since it enters through only as an amplification factor in (3.7). As a consequence, we may expect that the patterns driving the instability of the spatially homogeneous stationary state are combinations of these Fourier modes. Examples of possible patterns generated as a sum of cosines over the dominant modes, i.e., the maximum points of over the lattice , are depicted in Fig. 4. Notice that the rightmost plot displays a hexagonal pattern similar to the ones generated by the nonlinear PDE system (2.7)–(2.8) in the top and middle rows of Fig. 1. See Murray (2002, Ch. 12) for a similar strategy applied to a related problem.
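As an illustration of how such patterns arise, the following sketch superposes cosines over a set of wave vectors on the periodic unit square. The wave vectors below are hypothetical stand-ins; the actual dominant modes are the maximisers of the stability function plotted in Fig. 3b.

```python
import numpy as np

def cosine_pattern(modes, n=128):
    """Sum of cos(2*pi*k.x) over the given wave vectors k on [0,1)^2.

    `modes` is a list of integer wave vectors (kx, ky); these are
    illustrative choices, not the maximisers computed in the paper.
    """
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="xy")  # X varies along columns
    P = np.zeros_like(X)
    for kx, ky in modes:
        P += np.cos(2 * np.pi * (kx * X + ky * Y))
    return P

# One mode gives stripes; three suitably related modes give a
# hexagonal-like interference pattern (cf. the rightmost panel of Fig. 4).
stripes = cosine_pattern([(3, 0)])
hexagonal = cosine_pattern([(3, 0), (0, 3), (3, 3)])
```

Summing more maximising modes enriches the interference pattern, which is why the combination of all dominant modes produces the hexagonal-like structure.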
Fig. 4
Linear combinations of are plotted against x horizontally and y vertically. The Fourier modes are chosen to be the points with the largest values of in Fig. 3b with . From left to right: , and
Bifurcations and phase transitions of the nonlinear PDE system
We now continue our examination of the stability of the spatial patterns generated by the full system (2.7)–(2.8) with respect to . In Fig. 5, we numerically compute bifurcation branches from the spatially homogeneous solution for different modulation functions with for the nonlinear problem (2.7)–(2.8). This is done by using a continuation-based method on over an accurate numerical solver for the time evolution of Fokker–Planck-like equations developed in Carrillo et al. (2015); further details are given in Appendix 1. The continuation method starts either at the largest or the smallest noise value of the interval under consideration and solves for the time evolution of (2.7)–(2.8) up to stabilisation to a steady value. The stationary states for smaller or larger noise values are then computed recursively, taking the previously computed steady state as initial data. With this procedure we ensure, up to numerical accuracy, that we compute the stable stationary states, either by sweeping the noise values from left to right (l2r) or from right to left (r2l).
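The continuation procedure can be summarised in the following Python sketch. Here `evolve_to_steady_state` is a hypothetical placeholder for the Fokker–Planck solver of Carrillo et al. (2015), and the toy dynamics in the usage below is an illustrative pitchfork-type map, not the paper's system:

```python
import numpy as np

def evolve_to_steady_state(f, u0, sigma, tol=1e-10, max_iter=10000):
    """Iterate the time-stepping map u <- f(u, sigma) until stabilisation.
    Stands in for an accurate PDE time integrator run to equilibrium."""
    u = u0
    for _ in range(max_iter):
        u_new = f(u, sigma)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

def continuation_sweep(f, u0, sigmas):
    """Track a branch of steady states over the noise values `sigmas`,
    reusing the previously computed steady state as initial data
    (l2r if `sigmas` is increasing, r2l if decreasing)."""
    branch = []
    u = u0
    for sigma in sigmas:
        u = evolve_to_steady_state(f, u, sigma)
        branch.append(u.copy())
    return branch
```

For instance, with the toy map `f = lambda u, s: u + 0.1 * (u - u**3 - s * u)` an l2r sweep starting from generic data tracks the patterned (nonzero) branch until it disappears, while a sweep started at the homogeneous state stays on the homogeneous branch, since it is an exact fixed point; comparing the two sweeps is what exposes hysteresis in Fig. 5.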
Fig. 5
Bifurcation plots of with respect to , where for different modulation functions . The red dots show the stability threshold for no noise while the blue dots correspond to the stability threshold (3.15) with noise, where , and is the threshold value for linear stability. Top row: (left)=(a) , (right)=(b), bottom row: (left)=(c), and (right)=(d)
Each subplot in Fig. 5 shows the maximum and minimum over space of the average activity rate of the computed steady states for each noise value . We show both the spatial maximum and minimum of to illustrate that the computed stationary states are not spatially homogeneous, in other words, that they lead to spatial patterns. We also plot, as reference, the spatially homogeneous branch obtained by numerically solving the implicit expression (3.5). The red dots indicate the stability threshold in for the condition , as in Lemma 3.5, to hold.

In Fig. 5a, we observe the bifurcation branches for the sigmoid function . All of them show a sharp discontinuity at different noise values. We restrict the discussion to the lines representing the spatial maximum. We first focus on the full line (l2r) and the dashed line (r2l), which connect two bifurcation branches at different noise values corresponding to a hexagonal-like pattern similar to Fig. 6a. This clearly indicates that there is a discontinuous phase transition near the noise value indicated by the arrow. The fact that the l2r and r2l curves do not coincide further indicates a hysteresis phenomenon. This conclusion is supported by the fact that the blue dot, the minimum noise value for linear stability (3.15), is to the left of both branches. This allows for the possibility of branches of dynamically unstable steady states bending backwards in noise at the phase transition point. Unstable branches are not computable with our numerical approach. Finally, we find a second bifurcation branch, given by the dotted line (l2r-s) in Fig. 5c, corresponding to a stripe-like pattern similar to Fig. 6b. This branch was found by imposing a particular symmetry on the initial data, i.e., enforcing a horizontal band with activity level one.
Fig. 6
Stationary patterns of at for with : hexagonal-like (a), stripe-like (b), eye-like (c). Left to right: black, dotted, and red line in Fig. 5c
In Fig. 5b–d, we show analogous computations for the case of the modulation function given by and its regularisations with and . Similarly to Fig. 5a, we observe a discontinuous phase transition for the full line (l2r) and the dashed line (r2l), and the linear-stability blue dot again lies to the left of the phase transition point. We remark that the blue and red dots may lie outside the noise intervals shown in Fig. 5b–d, but they follow the same order. Arguments similar to those above point to hysteresis phenomena and the possible existence of unstable branches not obtainable with our present numerical approach.

The case of in Fig. 5b resembles the behaviour observed for the sigmoid function in Fig. 5a. The hexagonal-like patterns are the preferred stable configurations both for generic initial data, full line (l2r), and starting from small perturbations of the homogeneous stationary state, dashed line (r2l). Again, stripe-like patterns are obtained by choosing specific initial data. Similar branches, and the numerical observation that the hexagonal-like pattern is the most stable configuration, have already been reported for a neural field model without noise (Veltz et al. 2015).

This behaviour changes in Fig. 5c, d. The hexagonal-like patterns are still the preferred stable configurations for generic initial data, full line (l2r). However, starting from small perturbations of the homogeneous stationary state, dashed line (r2l), we connect to the stripe-like bifurcation branch, dotted line (l2r).

The bifurcation branches and their dynamics get richer as the regularisation parameter gets smaller. We observe that for in Fig. 5c, there is an additional branch, dark red line (l2r-e), leading to eye-like patterns as in Fig. 6c. This branch jumps to the stripe-like pattern for larger noise values. It is difficult to extract information on the range of noise values, since the branch, dark red line (l2r), does not show a sharp transition point, while the dashed line (r2l) does. However, this becomes much clearer in the limiting case of the positive part in Fig. 5d. We observe two sharper discontinuous transition points in the stripe-like branch, leading to an intermediate pure-stripe branch, , before jumping to the homogeneous state; see the dashed line (r2l) and the dotted line (l2r).
Concluding remarks
The first conclusion of our analysis is that the mean-field limit of the grid cell model (1.1) with constant external input, introduced by Burak and Fiete (2009), Couey et al. (2013), and Burak and Fiete (2012), exhibits phase transitions driven by the noise strength, as demonstrated in Figs. 5 and 6. This behaviour resembles the phenomena appearing in the classical Kuramoto model for synchronisation (Kuramoto 1981; Sakaguchi et al. 1988; Acebrón et al. 2005; Carrillo et al. 2019, 2020) and in other neural field models (Touboul et al. 2012) in the computational neuroscience literature.

We have shown that the spatially homogeneous stationary state is linearly unstable for small noise strength, similarly to basic ring and neural field models (MacLaurin and Bressloff 2020; Kilpatrick and Ermentrout 2013; Byrne et al. 2019). We numerically analysed the bifurcation diagram of stationary patterns, showing the appearance of different branches identified by their symmetries; see Figs. 5 and 6. Our numerical experiments with random initial data demonstrate that the stationary hexagonal-like pattern of the activity level of neurons in Fig. 6a, leading to the solid black bifurcation branches in Fig. 5, has the largest basin of attraction. Moreover, the numerical simulations indicate that there is a sharp transition in the mean activity level, together with a hysteresis phenomenon, suggesting a discontinuous phase transition. Whether further stationary network patterns exist is another interesting question.

The crucial implication of this phase transition for the rats' navigation is that the larger the noise, the less localised are the spatial firing fields of each neuron. This can be observed in the bottom row of Fig. 1, which shows that the firing field (coloured in red) becomes denser as the noise increases. Moreover, there is a sharp value of the noise beyond which there is no localisation at all, leaving the rats unable to orient themselves in physical space.
In other words, the point of transition from a homogeneous pattern (all neurons have the same mean activity level) to a non-homogeneous pattern (neurons at different locations in the network have different mean activity levels) provides an upper bound on the noise strength below which single grid cells can fire in a hexagonal pattern in physical space when connected to the rat's movements through (2.9). With a hexagonal network activity configuration as in Fig. 6a, a single neuron can create hexagonal neural field patterns in physical space as in the third row of Fig. 1. Exactly how the firing fields in physical space of a single neuron are affected by initial network activity patterns such as those in Fig. 6b–c remains to be investigated.

From the methodological viewpoint, we remark that since the bifurcation branches are computed using a numerical approximation of the PDE (2.7)–(2.8), they can differ slightly from the actual branches of the PDE itself. Studying bifurcations and phase transitions of the nonlinear PDE analytically will require sophisticated mathematical tools, and this will be investigated elsewhere.

From the computational neuroscience viewpoint, we expect that noise-driven phase transitions will also appear naturally in related attractor dynamics models such as the ones in Burak and Fiete (2012) and Agmon and Burak (2020). Instability of homogeneous stationary network patterns should also play an important role therein. Additional investigations of more realistic models of coupled place and grid cells are needed. This will allow a connection with experiments and further contribute to the challenge of understanding how noise affects network dynamics (Rowland et al. 2016, Future Issue 3).
Table 1
and (in ) errors and order of convergence (OOC) of the mean for