
Neural field models with transmission delays and diffusion.

Len Spek, Yuri A. Kuznetsov, Stephan A. van Gils.

Abstract

A neural field models the large-scale behaviour of large groups of neurons. We extend previous results for these models by including a diffusion term in the neural field, which models direct, electrical connections. We extend known and prove new sun-star calculus results for delay equations so as to include diffusion and explicitly characterise the essential spectrum. For a certain class of connectivity functions in the neural field model, we are able to compute its spectral properties and the first Lyapunov coefficient of a Hopf bifurcation. By examining a numerical example, we find that the addition of diffusion suppresses non-synchronised steady-states while favouring synchronised oscillatory modes.


Keywords:  Delay equation; Hopf bifurcation; Neural field; Normal form; Numerical bifurcation analysis; Sun-star calculus

Year:  2020        PMID: 33296032      PMCID: PMC7726065          DOI: 10.1186/s13408-020-00098-5

Source DB:  PubMed          Journal:  J Math Neurosci            Impact factor:   1.300


Introduction

In the study of neurological disease, non-invasive imaging techniques are often used to gain an understanding of the structure and functioning of the brain on intermediate scales. As they give a coarse-grained view of the neuronal activity, mean-field models are a natural fit to describe the observed dynamics [1, 2]. In this paper we use a neural field model with gap-junctions, i.e. electrical connections between neurons, which are thought to be related to the observed synchronisation of neural tissue in Parkinson’s disease [3, 4]. We study the effect of gap-junctions on the dynamics of the model, focusing mainly on the stability of steady-states, on periodic oscillations and on the bifurcations which lead to a qualitative change in behaviour. To properly address the difference in time-scales between gap-junctions and synaptic connections, we use a neural field with transmission delays for the synaptic connections. This leads to a complicated model which is infinite-dimensional and has spatially distributed delays, and for which the dynamical theory is not readily available. In this paper, we address the analytic problems which arise from these abstract delay differential equations. We use the sun-star calculus as the basic functional-analytic tool to cast the equation in variation-of-constants form. We exploit the results of Janssens [5, 6], which allow the linear part of the equation, without the delays, to be unbounded, as is the case for the diffusion operator.

Background

Neural field models try to bridge the gap between single-neuron models [7] and whole-brain models [8] by modelling the qualitative behaviour of large groups of neurons. In their seminal work, Wilson and Cowan [9, 10] modelled two populations of excitatory and inhibitory neurons and analysed the dynamical properties of the resulting model. A neural field uses spatial and temporal averaging of the membrane voltage of a population of neurons. The synaptic connections are modelled by a convolution of a connectivity kernel with a nonlinear activation function. This leads to a set of two integro-differential equations with delays. These models were simplified by Amari [11], who combined the excitatory and inhibitory populations into a single population, and made more realistic by Nunez [12], who included transmission delays. These delays arise from the finite propagation speed of action potentials along an axon and from dendritic integration. There has been considerable interest in the role of these delays in the spatiotemporal dynamics [13-21]. Further modelling work by Coombes, Venkov and collaborators shows the usefulness of these neural fields for understanding neural activity [22-25]. Roxin and collaborators were the first to carry out a bifurcation analysis for neural fields with a single fixed delay [26-28]. Faugeras and collaborators investigated the stability properties of stationary solutions of neural fields with distance-dependent delays [29-32], using a functional-analytic approach based on formal projectors. In [33] it was shown that neural fields can be studied as abstract delay differential equations to which the sun-star framework can be applied; the authors used this to compute normal form coefficients for bifurcations of equilibria. Dijkstra et al. [34] expanded this analysis to Pitchfork–Hopf bifurcations, and Visser et al. [35] analysed a neural field with delays on a spherical domain.
We build on [33, 34] by introducing gap-junctions into the neural field model and studying the resulting bifurcations and dynamics. Gap-junctions are electrical connections between neurons, which directly exchange ions through a connexin protein. This is in contrast to synaptic connections, where a potential is induced across the synapse by neurotransmitters. Gap-junctions are thought to be related to Parkinson’s disease by synchronising neurons in the globus pallidus [3, 4]. They can be modelled as a simple diffusion process [24]. There have been some attempts to incorporate gap-junctions into networks of coupled neurons [36-38], but to our knowledge not yet within a proper neural field model.
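Schematically, and with generic notation that does not necessarily match the symbols used in the paper, a single-population neural field with transmission delays and a gap-junction (diffusion) term takes the form

```latex
\frac{\partial u}{\partial t}(t,x)
  = d\,\frac{\partial^2 u}{\partial x^2}(t,x)
    - \alpha\,u(t,x)
    + \int_{\Omega} J(x,y)\,F\bigl(u(t-\tau(x,y),y)\bigr)\,\mathrm{d}y ,
```

where u is the averaged membrane potential, α the decay rate, J the synaptic connectivity kernel, F the firing-rate function, τ(x,y) the distance-dependent transmission delay and d the diffusion coefficient modelling the gap-junctions, supplemented with zero-flux (Neumann) boundary conditions on ∂Ω.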

Theoretical framework

As mentioned before, we use the sun-star calculus for delay differential equations to formally analyse these neural field models with transmission delays. This mathematical theory for delay differential equations was constructed by Diekmann et al., see [39] and the references therein. The theory uses the space $X^\odot$, pronounced X-sun, which is the largest subspace of strong continuity of the adjoint semigroup. It allows us to employ the classical Fredholm alternative, which plays a key role in the computation of the normal form coefficients. As a result, many of the mathematical techniques developed for the analysis of ODEs, such as the centre manifold reduction and the Hopf bifurcation theorem, can be generalised to these abstract delay differential equations. Recently, Janssens [5, 6] has begun expanding the sun-star calculus to the case where the linear part, which contains no delays, is an unbounded operator. This allows us to study the neural field both with and without diffusion in the same framework. This unifying theory then allows us to fill in the gaps in the proofs of [33], while obtaining the same results for a neural field with diffusion. Other theoretical frameworks are also possible. The first approach to develop a geometric theory for delay equations along the lines of ODEs was proposed by Hale [40], who used formal adjoint operators. Formal adjoint operators were also used by Faria and Magalhães [41-43] to study Hopf and Bogdanov–Takens bifurcations. Wu [44] used the formal adjoint method to study reaction-diffusion systems with delays and proved the necessary theorems for bifurcation analysis. It makes a difference whether one takes as a starting point an abstract integral equation, as we do, or an abstract ODE, as in the integrated semigroup approach [45-47]. Integrated semigroups have been used to deal with classical delay differential equations as abstract ODEs with non-dense domains. By classical we here mean that the state space is $C([-h,0];\mathbb{R}^n)$.
In the case of the neural field equations we consider, the state space is an abstract Banach space. It might very well be that the formalism of integrated semigroups is general enough to cover this case as well, but as far as we know, this has not been done yet. We prefer the sun-star formalism as it allows us to work with the variation-of-constants formula in the state space X, albeit after an excursion into the bigger space $X^{\odot*}$. In addition, the projectors are based on a duality pairing and the classical Fredholm alternative, while in the integrated semigroup formalism the projectors are based on a formal inner product [48]. There are also two approaches to compute normal form coefficients. In the first approach, the abstract ODE is split into a finite-dimensional and an infinite-dimensional one. By decoupling these step by step, the centre manifold is rectified and the equation on it is normalised [45-47]. In the second approach, which we follow, we parametrise the centre manifold and assume that the finite-dimensional ODE on it is in normal form. As the delay differential equation has an abstract state space, this ODE is also an abstract ODE. The Taylor coefficients of the centre manifold are obtained in a step-by-step procedure that simultaneously gives us the coefficients of the normal form [49, 50]. In this way, the sun-star calculus approach leads to explicit, compact and easy-to-evaluate expressions for the normal form coefficients [51]. These coefficients are obtained using the true duality pairing, for which the classical Fredholm alternative holds. Of course, the resulting formulas are equivalent, but the approach we adopt is more straightforward. In the sun-star calculus we choose to model the neural field as a continuous function in space. In [29] and [31] the authors instead choose to work with $L^2$-functions, based on the work in [52-54].
This leads to some mathematical complications in dealing with the smoothness of the nonlinearity, as laid out previously in Sect. 2.4 of [55]. This was later rectified in [56]. Moreover, from a physiological point of view, it is not clear why the potential of the neural field should be merely square integrable rather than continuous. Finally, we want to comment briefly on the need for a theoretical framework to study these neural fields. Software packages such as DDE-BIFTOOL [57] can perform numerical bifurcation analysis of delay equations. However, they cannot be applied directly to these delayed integro-differential equations. While a discretised model can be studied with these software packages, there is no guarantee that its dynamical properties converge to those of the full neural field. In this work, the formulas for the normal form coefficients are exact and can be evaluated to arbitrary precision. In this paper we build on the work of Janssens [5, 6] and prove the theorems necessary to use the sun-star calculus to study our neural field model with and without diffusion. We then derive the spectrum and resolvent of a neural field with delays, diffusion and a connectivity kernel given by a sum of exponentials. Finally, we compute the first Lyapunov coefficient of a Hopf bifurcation and verify our results by simulating the full neural field numerically.

Modelling

In this section we derive the neural field model with transmission delays and gap-junctions, largely following a derivation by Ermentrout and Cowan [20]. We start with a collection of neurons and consider the (somatic) potential of neuron i at time t and its firing rate, which we assume depends nonlinearly on the potential. We define Φ to be the postsynaptic potential appearing on postsynaptic cell i due to a single spike from presynaptic cell j. We assume a linear summation of the postsynaptic potentials, so the total potential received at the soma due to the synaptic connection between cells i and j can be modelled as a sum over past spikes, shifted by the delay due to the finite propagation speed of action potentials along an axon and other factors such as dendritic integration. We define Ψ to be the potential appearing in neuron i due to a gap-junction current. This yields an integral equation for the potential, which we can reduce if we have a model for Φ and Ψ. For cell i, we consider a passive membrane with a time constant, a resistance and an injected postsynaptic current, and similarly when a gap-junction current is injected. If we now apply the Laplace transform to equation (1) and assume that the synaptic dynamics are dominated by the time-scale of the membrane, the synaptic kernel reduces to a Dirac-delta distribution weighted by the strength of the synaptic connection, where a negative value corresponds to inhibition. Taking the inverse Laplace transform results in a system of differential equations. We want to model this network of cells by a neural field. Suppose we have a sequence of similar neurons on an interval and we model the gap-junctions as a simple resistor between adjacent neurons. Taking the continuum limit, while scaling the gap-junction conductance g and the synaptic weights appropriately, we arrive at our neural field model. We have not yet specified what happens with the gap-junctions at the boundary of our domain.
It is natural to assume that no current leaks away at the boundaries, which corresponds to Neumann boundary conditions in the neural field.
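A model of this type can be explored numerically with a simple method-of-lines scheme: discretise space, keep a history buffer for the distance-dependent delays, and treat the diffusion term with reflected ghost points for the zero-flux boundaries. The following sketch is purely illustrative: the parameter values, the exponential connectivity kernel and the tanh firing rate are hypothetical choices, not the ones used in the paper.

```python
import numpy as np

# Method-of-lines sketch of a delayed neural field with diffusion and
# Neumann boundaries.  All parameters, the exponential kernel and the
# tanh firing rate are hypothetical choices, not the paper's values.
N, L = 64, 1.0                 # grid points, domain [-L, L]
alpha, d = 1.0, 0.05           # membrane decay and gap-junction diffusion
tau0, v = 0.1, 2.0             # fixed delay and axonal propagation speed
dt, T = 1e-3, 5.0              # explicit Euler step and time horizon

x = np.linspace(-L, L, N)
dx = x[1] - x[0]
J = np.exp(-np.abs(x[:, None] - x[None, :]))        # connectivity kernel J(x, y)
lag = np.round((tau0 + np.abs(x[:, None] - x[None, :]) / v) / dt).astype(int)
S = np.tanh                                         # firing-rate nonlinearity

hist = np.full((lag.max() + 1, N), 0.05)            # constant initial history
for _ in range(int(T / dt)):
    u = hist[-1]
    u_del = hist[-1 - lag, np.arange(N)[None, :]]   # u(t - tau(x, y), y)
    syn = (J * S(u_del)).sum(axis=1) * dx           # delayed synaptic input
    # diffusion with zero-flux (Neumann) boundaries via reflected neighbours
    lap = (np.r_[u[1], u[:-1]] - 2 * u + np.r_[u[1:], u[-2]]) / dx**2
    hist = np.vstack([hist[1:], u + dt * (-alpha * u + d * lap + syn)])
```

The explicit Euler step must resolve both the shortest delay and the diffusion stability limit dt ≤ dx²/(2d); the values above satisfy both.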

Overview

This paper is divided into three parts, each of which can largely be read independently. In Sect. 2, we construct the sun-star calculus for abstract delay differential equations and derive the variation-of-constants formula. In particular, we prove a novel characterisation of sun-reflexivity. Furthermore, we consider the linearisation, the corresponding spectrum and a normal form derivation for the Hopf bifurcation of the nonlinear equations. In Appendix A we elaborate on the case where the unbounded linear operator is the diffusion operator. We expect the reader to be familiar with the basics of the sun-star framework as set out in the book by Diekmann et al. [39]. In Sect. 3 we derive formulas for the eigenvalues and eigenvectors of a neural field with a connectivity defined by a sum of exponentials. We also explicitly construct the solution to the resolvent problem for this class of neural field models. In Sect. 4 we carry out a numerical study of a neural field model with specific parameter values. We compute the first Lyapunov coefficient for the Hopf bifurcation and investigate how it is influenced by the diffusion term. We also investigate the emergence of periodic behaviour using numerical simulations of the neural field.

Abstract delay differential equations in the sun-star framework

In this section we first develop the sun-star calculus for a large class of abstract delay differential equations (ADDE). This leads to a variation-of-constants formulation of (ADDE). Next we study the linearisation and obtain results on the spectrum. Finally, we construct a method for computing the first Lyapunov coefficient for a Hopf bifurcation of the nonlinear equations. We build on the theory developed by Janssens [5], who considers a class of abstract delay differential equations with a possibly unbounded linear part. Consider two Banach spaces Y and $X := C([-h,0];Y)$ over $\mathbb{R}$ or $\mathbb{C}$. Let S be a strongly continuous semigroup on Y with generator B, and let $G : X \to Y$ be a (nonlinear) globally Lipschitz-continuous operator. Note that the assumption that the semigroup S is compact is not necessary, in contrast to what is assumed by Wu [44]. We now introduce our main object of study, the abstract delay differential equation $\dot u(t) = B u(t) + G(u_t)$ for $t \geq 0$. Here $u_t \in X$ denotes the history segment, $u_t(\theta) = u(t+\theta)$ for $\theta \in [-h,0]$. In the remaining sections we are mainly interested in the case where B is a diffusion operator acting in a space of continuous functions. We have summarised the relevant properties of the diffusion operator in Appendix A. However, the theorems proven in this section hold for any operator B that generates a strongly continuous semigroup S on Y. This fills in some technical details missing in [33], where B does not generate a compact semigroup. On X we consider the strongly continuous semigroup $T_0$ defined by $(T_0(t)\varphi)(\theta) = S(t+\theta)\varphi(0)$ for $t+\theta > 0$ and $(T_0(t)\varphi)(\theta) = \varphi(t+\theta)$ for $t+\theta \leq 0$. This semigroup is related to the problem for $G \equiv 0$, i.e. $\dot u(t) = B u(t)$. The solution of problem (6) is then given by $u_t = T_0(t)\varphi$.

Lemma 1

([58, Theorem VI.6.1]) The generator $A_0$ of the semigroup $T_0$ is given by $A_0\varphi = \dot\varphi$ on the domain $D(A_0) = \{\varphi \in C^1([-h,0];Y) : \varphi(0) \in D(B),\ \dot\varphi(0) = B\varphi(0)\}$. We will interpret (ADDE) as problem (6) with a nonlinear perturbation and use a variation-of-constants formula in X to obtain results about the perturbed problem, such as normal form coefficients for local bifurcations. As G maps X into Y, we would like to embed Y in a natural way into X. A naive approach would be to use a delta-function as an embedding. However, this embedding is not bounded, so the domain of $A_0$ would not be preserved under perturbation. This is indeed the case, as the rule for extending a function beyond its original domain, i.e. $\dot\varphi(0) = B\varphi(0)$, is incorporated in $D(A_0)$. Hence adding a perturbation to the rule for extension changes the domain of the generator. A way out is to embed this problem into a larger space, in which the extension and translation parts of $A_0$ can be separated. More formally, we use the sun-star calculus as developed in the book by Diekmann et al. [39] to construct the space $X^{\odot*}$, which contains the space X. We first restrict the dual space $X^*$ to the sun space $X^\odot$, on which the adjoint semigroup is strongly continuous. Then, taking the dual again, we obtain the space $X^{\odot*}$. It is convenient to present the relationship between the various spaces schematically in the following ‘duality’ diagram, see Fig. 1.
Figure 1

A schematic representation of the various Banach spaces in sun-star calculus [5]

Characterisation of the sun-dual

Using a generalisation of the Riesz representation theorem, we can find a representation of $X^*$, the dual space of X [59]. It can be represented as $NBV([0,h];Y^*)$, the space of functions of bounded variation on $[0,h]$, normalised such that $f(0) = 0$, with f right continuous on $(0,h)$. The (complex-valued) duality pairing between X and $X^*$ is given by the Riemann–Stieltjes integral, for $\varphi \in X$ and $f \in X^*$, $\langle f, \varphi \rangle = \int_0^h df(\theta)\,\varphi(-\theta)$. Results on scalar functions of bounded variation and the corresponding Riemann–Stieltjes integral can be extended to Y-valued functions, see [59]. It is possible to find an explicit representation of the adjoint operator $A_0^*$ and its corresponding domain $D(A_0^*)$. The adjoint operator exists and is unique as the domain $D(A_0)$ is dense.

Theorem 2

The domain of is given by and the action of is given by , where , i.e. the characteristic function of .

Proof

We first prove the inclusion ⊆ for the domain . Let and . Without loss of generality we can write , where and and . Using the integration by parts formulas for Riemann–Stieltjes integrals [60, Appendix H], we obtain We now want to use a limiting argument. However, the Riemann–Stieltjes integral lacks good convergence properties. In the scalar case, we could interpret this integral as a Lebesgue–Stieltjes integral, which has better convergence properties. For a general Banach space Y and continuous integrands, the equivalent is the Bartle integral [61, 62]. The Bartle integral has a theorem equivalent to the Lebesgue dominated convergence theorem: for uniformly bounded, pointwise converging sequences, we can interchange the limit and the integral [62, Theorem 6]. For some and , we may choose as a uniformly bounded sequence in X such that and it converges pointwise to , i.e. the characteristic function of . We then substitute φ for in (9) Taking the limit as , using the dominated convergence of the Bartle integral, we get that Since y was arbitrary, we infer that Letting , we obtain for where . Now we substitute this formula for f into and use integration by parts and the fact that to find that We compare this to equation (9) Since can be chosen arbitrarily, implies that and . Finally we prove the other inclusion ⊇ for the domain and simultaneously obtain the formula for the action of . Let f be of the form in (8); then by the above computations we find that  □ We can characterise the sun-dual as the subspace of , where is strongly continuous, or equivalently , where the closure is with respect to the norm on . Similarly we can characterise the sun-dual as the subspace of , where is strongly continuous, or equivalently , where the closure is with respect to the norm on . In case B is the diffusion operator, see Appendix A for an explicit characterisation of .
The following theorem can be proved by showing that is strongly continuous on some set E given by (10), that , and that E is closed.

Theorem 3

([5, Theorem 1 and Remark 4]) The space $X^\odot$, the sun-dual of X with respect to $T_0$, is given by the set Furthermore, the map defined by is an isometric isomorphism. From now on we identify $X^\odot$ with this representation. The corresponding duality pairing between X and $X^\odot$ is then given by Now we can describe the action of $T_0^\odot$ and $A_0^\odot$, the restrictions of the operators $T_0^*$ and $A_0^*$ to the subspace $X^\odot$.

Definition 4

The strongly continuous semigroup $T_0^\odot$ on $X^\odot$ is defined as

Theorem 5

([5, Theorem 1]) For the action of $A_0^\odot$ on $X^\odot$, we have where the integral is the weak∗ Lebesgue integral with values in .

Theorem 6

For the sun-dual of on , we have that and , with ġ a function in such that for . By definition and . We first prove the equivalence of the definition and (15). Let such that and . Recall that the embedding ι is given by (11) From Theorem 2, we can conclude that implies that and with . As , Theorem 3 implies that , and we can write g as where and ġ some function in . Hence g is absolutely continuous on . As g is an -function (class), we may redefine to get an absolutely continuous function on . Conversely, let such that it is in the right-hand side of (15). From Theorem 2 and the fact that , we conclude that and that . As g is absolutely continuous and , this implies that . Hence, . □

Characterisation of the sun-star space

We can represent $X^{\odot*}$, the dual of $X^\odot$, as , where is the dual of . In case B is the diffusion operator, is explicitly characterised in Appendix A. In general, $X^{\odot*}$ cannot be identified with . However, the latter space can be embedded into the former.

Theorem 7

([63, Remark 1.4.18, Theorem 1.4.19]) There exists an isometric embedding of into with the duality pairing for and . Moreover, can be identified with if and only if has the Radon–Nikodym property.

Lemma 8

(Dunford–Pettis) If Y is reflexive, then it has the Radon–Nikodym property. We can embed both Y and X into , which is a subspace of $X^{\odot*}$. The canonical embedding is defined as . The continuous embedding ℓ is defined as , where is the canonical embedding of Y into its bidual [5]. It is possible to find an explicit representation of j.

Lemma 9

For , . Moreover, j is a continuous embedding and is bounded. , consequently is contained in , which is the subspace of on which is strongly continuous. Let and , then Hence . The other statements are generally known to hold for the canonical embedding of X into [39, Appendix II, Cor. 3.16, Prop. 3.17]. □ As we do not have an explicit norm or measure on , we cannot say anything in general about . However, it is possible to find a representation of restricted to the space .

Theorem 10

For , the following statements are equivalent: In this case the action of is given by . and ; φ has an a.e. derivative for which and . Let such that , and let . We have that Let such that Then, by Lemma 46 and Theorem 6, i.e. , we can rewrite (17) as Taking , we get that for all such that by Theorem 6. Hence , which implies that and . As can be embedded in [64, Corollary 4.2], we find that . Furthermore, by [64, Theorem 4.3] we have, for all and , Alternatively, we take , and such that . Then (18) reduces to for all , hence . Conversely, let , where and φ has an a.e. derivative for which Then again using Lemma 46 we get that, for any , Hence . □

Corollary 11

For , the following statements are equivalent: In this case, the action of is given by . and ; and φ has an a.e. derivative . This follows immediately from Theorem 10 and Lemma 9. □ Note that, for , the rule for extension is no longer included in the domain of , but is represented in the action of , which resolves the problem with stated at the beginning of this section. The previous theorem allows us to formulate an equivalence between the sun-reflexivity of X, i.e. and the ordinary reflexivity of Y, i.e.

Theorem 12

X is sun-reflexive with respect to $T_0$ if and only if Y is reflexive. Suppose that Y is reflexive. Then, by Theorem 7 and Lemma 8, can be represented as and hence the full domain of is given by Theorem 10: We use that is the closure of with respect to the norm on . First, the closure of with respect to the -norm results in the space . As reflexivity implies sun-reflexivity [65, Corollary 2.5], we have that . Next we note that functions are dense in the continuous functions and is closed with respect to the -norm. Hence we conclude that Conversely, suppose that Y is not reflexive. From Theorem 7, is a subset of and hence Taking the norm closure of both sides, we conclude that As Y is not reflexive, is a proper subset of . Hence is a proper subset of , so X is not sun-reflexive. □ In case B is the diffusion operator, we use that Y is the space of continuous functions. As this is a non-reflexive Banach space, X in this case is not sun-reflexive.

Variation-of-constants formulation

As the space $X^{\odot*}$ resolves the problems mentioned at the beginning of this section, we can formulate a variation-of-constants formula for (ADDE) as the abstract integral equation $u_t = T_0(t)\varphi + j^{-1}\int_0^t T_0^{\odot*}(t-s)\,\ell G(u_s)\,ds$ (AIE). Here the embeddings j and ℓ are as defined before Lemma 9. As the integrand of (AIE) takes values in $X^{\odot*}$, the integral is taken to be a weak∗ integral. It is possible to show that the integral maps into the range of j and hence (AIE) is well defined.

Lemma 13

([5, Proposition 8]) Let be given, then where Moreover, where are such that for all . The Banach fixed point theorem in combination with the bound in (21) gives the existence of a unique global solution of (AIE).

Corollary 14

([5, Corollary 9]) Let be globally Lipschitz continuous. For every initial condition , there exists a unique solution such that satisfies (AIE) for all . We would like to show that this unique solution of (AIE) can be translated over to a (classical) solution of (ADDE). However, this is in general not the case when B is unbounded. Therefore we recall a weaker solution concept from [44].
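The contraction argument behind Corollary 14 can be made concrete for a scalar delay equation with a known history: on an interval of one delay length the delayed term is fully determined, so the Picard iterates of the integral fixed-point map converge to the unique solution. The equation, parameters and history below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

# Picard iteration for the scalar DDE u'(t) = -u(t) + tanh(u(t - 1)) with
# constant history u(t) = 1 for t <= 0 (illustrative choice).  On [0, 1]
# the delayed term equals tanh(1), so the fixed-point map
#   (Pu)(t) = u(0) + int_0^t [-u(s) + tanh(1)] ds
# is a contraction and the iterates converge to the unique solution.
n = 1001
t = np.linspace(0.0, 1.0, n)
forcing = np.tanh(np.ones(n))           # tanh(u(s - 1)) = tanh(1) on [0, 1]

u = np.ones(n)                          # initial guess: the history value
for _ in range(50):
    integrand = -u + forcing
    # trapezoidal cumulative integral of the right-hand side
    u_new = 1.0 + np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    if np.max(np.abs(u_new - u)) < 1e-12:
        u = u_new
        break
    u = u_new

# On [0, 1] the exact solution is u(t) = tanh(1) + (1 - tanh(1)) * exp(-t)
exact = np.tanh(1.0) + (1.0 - np.tanh(1.0)) * np.exp(-t)
```

Continuing interval by interval in this way is the classical method of steps; the abstract setting of Corollary 14 replaces the scalar state by a history segment in X.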

Definition 15

A function is called a classical solution of (ADDE) if u is continuously differentiable on , for all and u satisfies (ADDE).

Definition 16

A function is called a mild solution of (ADDE) if and u satisfies Note that Definition 15 is quite restrictive, as only specific initial conditions are admissible. There is the following correspondence between classical and mild solutions of (ADDE).

Lemma 17

([44, Theorem 2.1.4]) A classical solution of (ADDE) is also a mild solution of (ADDE). Conversely, when G has a globally Lipschitz continuous Fréchet derivative and , and , then a mild solution of (ADDE) is also a classical solution of (ADDE). Note that Theorem 25 implies that the conditions in the second statement, starting with conversely, are equivalent to the condition that . It is possible to construct a one-to-one correspondence between solutions of (AIE) and mild solutions of (ADDE).

Theorem 18

([5, Theorem 16]) Let be an initial condition. The following two statements hold. Suppose that u is a mild solution of (ADDE). Define by Then v is a solution of (AIE). Suppose that v is a solution of (AIE). Define by Then u is a mild solution of (ADDE).

Corollary 19

Suppose that G is a globally Lipschitz operator and it has a globally Lipschitz Fréchet derivative, then for all with and , there exists a unique classical solution of (ADDE).

Linearisation

We want to investigate the behaviour near a fixed point. We will show that for the linearised problem we can perturb the semigroup $T_0$ with generator $A_0$ to a semigroup T with generator A. In the next section we investigate the spectral properties of A. Linearising equation (ADDE) near a fixed point u, which we take without loss of generality to be the origin, results in the linear problem (LINP). As with the general nonlinear problem, we can define an abstract integral equation where . Then, due to Lemma 13 and Corollary 14, we can define the strongly continuous semigroup T when the linear perturbation is globally Lipschitz.

Lemma 20

([5, Theorem 19]) Let be globally Lipschitz continuous, then there exists a unique strongly continuous semigroup T on X such that for all and for all . The strongly continuous semigroup T has a generator A. We want to establish how the perturbed generator A relates to the original generator , which can be done using the sun-star framework. A technical detail which we need to check is that the sun-dual space is the same with respect to T and .

Lemma 21

([5, Proposition 20]) $X^\odot$ is also the maximal subspace of strong continuity of the adjoint semigroup $T^*$ on $X^*$. The adjoint generator is given by and the generator of the sun-dual semigroup is given by Finally, is also the maximal subspace of strong continuity of the sun-star semigroup . One might think that we could extend this argument and show that and . However, this is not the case when we lack sun-reflexivity, i.e. . We can circumvent these problems by restricting the domain to .

Lemma 22

([5, Proposition 22]) It holds that and on this subspace. We can extend Corollary 11 for to , which will be needed for the computation of normal form coefficients.

Corollary 23

For , the following statements are equivalent: In this case, the action of is given by . and ; and φ has an a.e. derivative . The statement on the domain follows immediately from Lemma 22 and Corollary 11. Furthermore, we have that  □ We are now able to state the result which relates A to .

Theorem 24

([5, Corollary 23]) For the generator A of the semigroup T, we have that We can cast (27) in a form which can also be found in Engel and Nagel [58, Theorem VI.6.1] by using Corollary 11.

Theorem 25

For the generator A of the semigroup T, we have that Let and . As , we have that . By Corollary 11, and φ has an a.e. derivative . Furthermore, we have that By Lemma 9 this implies that , and . Hence and . Let with . As , . Let be the strongly continuous semigroup generated by . This implies that for all [39, Appendix II Proposition 3.17]. By the continuity of , this converges in norm as to with . Conversely, let , and . Furthermore, let , then Hence and, by Corollary 11, . Furthermore, Finally, for the action of A, we derive  □

Spectral properties

In this section we state some results on the spectrum of the operator A, notably its essential spectrum, and a method for computing its eigenvalues. For an operator A on X, the resolvent set $\rho(A)$ is the set of all $z \in \mathbb{C}$ such that the operator $zI - A$ has a bounded inverse. The resolvent operator is then defined as $R(z,A) = (zI - A)^{-1}$ for $z \in \rho(A)$. The spectrum of A, $\sigma(A) = \mathbb{C} \setminus \rho(A)$, can be decomposed into the point spectrum $\sigma_p(A)$ and the essential spectrum $\sigma_{ess}(A)$. We use Weyl’s definition of the essential spectrum [66]. Then $\sigma(A) \setminus \sigma_{ess}(A)$ is the discrete spectrum, i.e. isolated eigenvalues with a finite-dimensional eigenspace.

Lemma 26

For the respective spectra, we have . Furthermore, . We have that [58, Proposition IV.2.18]. Next we consider the eigenvalues of . For some , we need to find such that . Clearly, this is the case if and only if for , with and . Therefore if and only if as the corresponding eigenspaces have the same dimension. Finally, we show that , which completes the proof. If , then we can find the resolvent of explicitly as for all and , [58, Proposition VI.6.7] Hence . Conversely, suppose that , and let . Then the constant function for is in X and hence . This implies that and . Hence is surjective. As z is not an eigenvalue of , by the above reasoning it is not an eigenvalue of B, and hence is injective. So we conclude that and . □ If is compact, then we can make inferences on the essential spectrum of A from the spectrum of .

Theorem 27

If is compact, then . We will prove this by working in the dual space. This is possible as , which is a consequence of the properties of Fredholm operators [66, Theorem IV.5.14]. On , due to Lemma 21. As ℓ is bounded, is compact and so is its adjoint, due to Schauder’s theorem [66, Theorem III.4.10]. Hence is a compact perturbation of . One of the defining properties of Weyl’s essential spectrum is that it is invariant under compact perturbations [66, Theorem IV.5.35]. So we conclude that  □ In case B is the diffusion operator, its essential spectrum is empty, see Lemma 40. This means that the essential spectrum of A is also empty when is compact. For the computation of the eigenvalues, we follow Engel and Nagel [58]. We introduce the family of operators , and , parametrised by , defined as for , and . Using these, we can define the characteristic operator $\Delta(z)$. Now we formulate the main theorem of this section, which allows us to reduce the computation of the eigenvalues and eigenvectors in X to a computation on Y.

Theorem 28

([58, Proposition VI.6.7]) For every , if and only if has a solution . Moreover, if and only if this q is unique. In that case the resolvent is given by where and . Finally, is an eigenvector corresponding to if and only if , where is nontrivial and satisfies

Hopf bifurcation

We are interested in the nonlinear behaviour of (ADDE). In this section we develop techniques to compute the first Lyapunov coefficient for (Andronov-)Hopf bifurcations. These techniques can be extended to other local bifurcations, but we do not address those here. We follow the methods of van Gils et al. [33]. Suppose that contains a pair of simple purely imaginary eigenvalues with , and no other eigenvalues on the imaginary axis. Let be the corresponding eigenvector of A and the corresponding eigenvector of , respectively. We normalise these vectors such that . The centre subspace is spanned by the eigenvectors corresponding to the critical eigenvalues of A. Here ψ̄ denotes the complex conjugate of ψ. In order to extend this to the nonlinear setting, we need a (locally) invariant critical centre manifold , which is tangent to at the equilibrium at the origin. From [6], we get a general result on the existence of this centre manifold.

Theorem 29

([6, Theorem 41]) If the strongly continuous semigroup S generated by B is immediately norm continuous, is finite-dimensional, is the pairwise disjoint union of the sets , where is closed and both and are compact, and if , then there exists a -smooth mapping and an open neighbourhood U of the origin in such that , , the identity mapping, and is locally positively invariant for (ADDE) and contains every solution of (AIE) that exists on and remains sufficiently small for all time.

The conditions on can easily be satisfied when and consist of finitely many eigenvalues of finite multiplicity. In case B is the diffusion operator, it is immediately norm continuous by Lemma 39, and the essential spectrum by Theorem 27 and Lemma 40. Also, when , , the conditions are likewise satisfied.

If , then we can write for some . Using this, we can recast into the formal expansion : Due to Theorem 18, the (ADDE) and (AIE) formulations are equivalent. By weak∗ differentiation of (AIE) and exploiting the finite dimensionality of , one can show that a solution , , of (AIE) satisfies the abstract ODE , where the nonlinearity is given by . Let be the projection of onto the centre subspace . The function satisfies a complex ODE which is smoothly equivalent to the Poincaré normal form , where . In polar coordinates, , this is orbitally equivalent to , where is the first Lyapunov coefficient determined by the formula .

It is well known [67] that in generic unfoldings of (38), implies that the bifurcation is supercritical and that a stable limit cycle exists near one of the branches, while implies that the bifurcation is subcritical and that an unstable limit cycle exists near one of the branches.
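For orientation, the normal form and the first Lyapunov coefficient take the following shape in the standard notation of [67]; this is a generic restatement, not a verbatim copy of (37)-(38):

```latex
% Poincaré normal form on the centre manifold, z \in \mathbb{C}
\dot z = \lambda z + c_1 z \lvert z \rvert^2 + \mathcal{O}(\lvert z \rvert^4),
\qquad \lambda = i\omega_0 \ \text{at criticality},
\\[4pt]
% Polar coordinates z = \rho e^{i\theta}: orbital equivalence
\dot\rho = \rho\left( \ell_1 \rho^2 + \mathcal{O}(\rho^4) \right),
\qquad
\dot\theta = \omega_0 + \mathcal{O}(\rho^2),
\\[4pt]
% First Lyapunov coefficient
\ell_1 = \frac{1}{\omega_0}\,\operatorname{Re} c_1 .
```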
The critical centre manifold has expansion (34), and due to the time-invariance of , we have . If we differentiate both sides with respect to time and use the abstract ODE (35) for the left-hand side, we obtain the homological equation . We can substitute the expansion of the nonlinearity (36), the normal form (37) and the expansion of the critical centre manifold (34) into the homological equation (41) to derive the normal form coefficients. Equating coefficients of the corresponding powers of z and z̄, we obtain the following equations: They all have the form , where and are given. When , (43) has a unique solution. However, if , then a solution does not necessarily exist for all . The following lemma, which is equivalent to [33, Lemma 33], provides a condition for solvability.

Lemma 30

(Fredholm solvability) Let . Then has a closed range. In particular, is solvable for given if and only if for all .

From the definition of the essential spectrum, is closed [66, Section IV.5.1], and is also closed by Banach's closed range theorem [66, Theorem IV.5.13]. Let be a sequence in such that . Then there is a sequence in such that . Hence for all , so there exists such that and . Hence , and . Due to Banach's closed range theorem, is a solution of for given if and only if .  □

We now return to equations (42). As , we can use the resolvent of to solve the first two equations. However, , so for the last equation of (42) we need to use the lemma above. The corresponding eigenspace is spanned by , so we can compute the normal form coefficient by . We are not yet able to compute the normal form coefficient explicitly, as we do not have an explicit representation of or of the resolvent of . However, we resolve this by using spectral projections. Let and be the spectral projections on and corresponding to some eigenvalue λ, respectively. Then for some , and . Hence we seek to determine ν. From the Dunford integral representation it follows that , where is a sufficiently small open disk centred at λ and is its boundary. The element on the left in the pairing (44) is of the form , . In this case we can reduce to by virtue of the following theorem.
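In generic Banach-space notation, the solvability criterion of Lemma 30 is the classical Fredholm alternative. Schematically, with the duality pairing of the sun-star framework left abstract:

```latex
% The range of zI - A is closed, and the equation
% (zI - A)\psi = w is solvable for \psi precisely when
% w annihilates the kernel of the adjoint operator:
\mathcal{R}(zI - A) \ \text{closed},
\qquad
(zI - A)\psi = w \ \text{has a solution}
\iff
\langle w, \varphi \rangle = 0
\quad \text{for all } \varphi \in \mathcal{N}\big((zI - A)^{*}\big).
```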

Theorem 31

Suppose that . For each , the function , defined as for , is the unique solution in of the system . Moreover, is the unique solution in of .

Since , it follows by Theorem 28 that exists. We start by showing that φ as defined above solves (46). Clearly, and . Recall from the definition of that for , . Therefore, . Finally, by differentiating φ, we see that it satisfies the second equation in (46). When , then , because for all . Then Corollary 23 implies that . However, by Theorem 28, , so is the unique solution of . Consequently, φ itself is the unique solution in . □

Given that we can compute the resolvent and the Fréchet derivatives of G, we now have a method to compute the centre manifold coefficients and and the first Lyapunov coefficient :

Characterisation of the spectrum

In this section we return to the neural field as derived in Sect. 1.3. For certain choices we can derive explicit conditions for the spectrum and find an explicit expression for the resolvent. We take with and use the (ADDE) formulation of Sect. 2, where and are defined as . Here we assume that , , J and τ are continuous functions and , with and . The assumption ensures that we have an equilibrium at . We interpret u as the deviation from this physiological resting state, which makes for cleaner notation. We have the following properties of G and its derivatives.

Lemma 32

([33, Lemma 3, Proposition 11]) G is compact, globally Lipschitz continuous and k times Fréchet differentiable for any . Furthermore, the kth Fréchet derivative of G at , , is compact and given by .

As is compact, we find, due to Theorem 27 and Lemma 40, that the essential spectrum of the linearisation A is given by . We want to be able to compute the eigenvalues, eigenvectors and resolvent for specific choices of J and τ. We take J as a sum of exponentials and τ as a constant delay plus a finite propagation speed, which we can normalise to 1 by scaling time: where we take and for . Due to Theorem 28, λ is an eigenvalue and ψ an eigenvector if and only if and satisfies the characteristic equation (CE), where in this case is a parametrized family of operators for defined as follows: where and .

The case without diffusion, i.e. , has already been studied extensively [33, 34], so in this section we develop formulas for the eigenvalues, eigenvectors and resolvent with nontrivial diffusion, i.e. . In the following, we adopt the notational convention that bold-faced variables denote vectors whose length is clear from the context.

Eigenvalues

We are thus looking for nontrivial solutions of . As this is a mixed differential-integral equation, it is in general hard to solve. We use the method of Dijkstra et al. [34] to convert (CE) into a differential equation (ODE), which we can solve. Substituting the general solution of (ODE) back into (CE) then yields appropriate conditions on q. This is possible due to the following observations.

Lemma 33

All solutions of (CE) are . As and the range of is contained in , we have that , which means that . By induction, we conclude that . □

Differentiating the kernel functions in (CE) in the distributional sense yields, for , . So we define the differential operator for : . For this operator , we have that for . Hence, by applying the operator to (CE), we end up with an ordinary differential equation (ODE) . This differential equation has a characteristic polynomial , corresponding to exponential solutions , which is an even polynomial of order . Assuming that z is such that has exactly distinct roots , the general solution q of (ODE) is a linear combination of exponentials : . Writing q as a linear combination of hyperbolic cosines and sines leads to cleaner notation below. Before we substitute (51) back into (CE), we first prove two lemmas.
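The conversion of (CE) into (ODE) rests on an elementary distributional identity for exponential kernels; the following is a sketch of the mechanism of [34] for a single exponential with decay rate μ:

```latex
% Second distributional derivative of the exponential kernel:
% the sign function differentiates to twice the Dirac delta
\frac{d^2}{dx^2}\, e^{-\mu \lvert x - r \rvert}
  = \mu^2\, e^{-\mu \lvert x - r \rvert} - 2\mu\, \delta(x - r),
\qquad\text{so}\qquad
\left( \frac{\partial^2}{\partial x^2} - \mu^2 \right) e^{-\mu \lvert x - r \rvert}
  = -2\mu\, \delta(x - r).
```

Applying the product of such factors, one per exponential in J, to the integral term thus replaces each convolution with the kernel by point evaluations of q, which is what turns (CE) into an ordinary differential equation in x.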

Lemma 34

If the characteristic polynomial has distinct roots, then for all and for all .

If has distinct roots , then is distinct from , and hence for . Suppose, without loss of generality, that . In that case the characteristic polynomial becomes , so is a root of . Hence we conclude by contradiction that for all . □

Define the set as follows:

Lemma 35

If the characteristic polynomial has distinct roots, then . We have that if and only if for some . Hence if and only if for some , . □

For , we can rewrite as follows: . We can divide out the product to conclude that, for and , . Next we find formulas for and . To compute these integrals, we split the interval into the intervals and . On these intervals is a function in , so we can compute the following anti-derivatives for these smooth branches: . Using these anti-derivatives, we can evaluate the integrals and . For clarity, we omit the dependence on z in the remainder of this section.

Now we are ready to substitute the general solution q of (ODE), (51), back into (CE): . Due to the characteristic equation (53), the first line in equation (55) vanishes. When , the functions and for are linearly independent. Hence the second line vanishes if and only if , where the matrices and are defined as follows: for and . As , we also need to take the boundary conditions into account: . To satisfy the boundary conditions, we augment the matrices and as follows: . Now we have square matrices . There exists a nontrivial solution of (CE) if and only if or .

Theorem 36

Suppose that has distinct roots and for some . Then if and only if . When , the corresponding eigenvector is given by , where a is a vector in the nullspace of . When , the corresponding eigenvector is given by , where b is a vector in the nullspace of .

Let be a solution of (CE) for some . Then, by Lemma 33, , so it is also a solution of (ODE). Conversely, let q be a solution of (ODE). As has distinct roots, q is of the form (51). Due to (55) and (57), it is a solution of (CE) if and only if . □

We call an eigenvalue 'even', respectively 'odd', when , respectively .

Resolvent

Due to Theorem 31, to compute the normal form coefficients we need a representation of . It is defined for as the unique solution of the resolvent equation (RE). We can find an explicit form for this resolvent using a variation-of-constants ansatz when , which is defined as follows: with as in (52).

Theorem 37

For with , the unique solution of (RE) is given by , where is the resolvent operator of B as in (97) and and are as in (79).

Our variation-of-constants ansatz q needs to satisfy three conditions: it must solve (RE), , it must satisfy the boundary conditions, and it must satisfy the regularity condition . Once we have found such that q satisfies these conditions, we have found the resolvent, as it is unique due to Theorem 28. As maps into , the regularity condition is satisfied when . For this proof, we suppress the dependencies on z.

To aid in the calculation of , we first compute some integrals up front. We can integrate by parts, by splitting the interval into and and using the anti-derivatives in (54), to end up with . Now we substitute ansatz (62) into (RE) and collect the terms. Using the above calculations and the fact that , we have that . The above equation vanishes when all the terms within square brackets vanish. Term (64a) vanishes naturally due to the characteristic equation in (53), as . As maps into , the boundary condition reduces to . We can split equation (65) into three sufficient equations: . Note that equations (66b) and (66c) are equivalent to . If we combine equations (67) with the terms in square brackets in (64c) and (64d), we get the matrix equations: . The term in square brackets in (64b) vanishes if the following two equations vanish: . We see that in equation (69a) the sum should be constant. Using equation (66a), we see that this constant is zero. The remaining equations (64e), (69b), (70) form a system of differential equations with boundary conditions (68):

We can rewrite these equations by introducing some matrices. We define the diagonal matrices Ĉ, , the square matrices K̂, M̂, and the operator as follows: Here , and we define . We seek functions and which solve the system of differential equations with boundary conditions . For , we have that and are invertible. Due to Lemmas 34 and 35, when , Q̂ satisfies the conditions of Lemma 47, and hence Q̂ is invertible.
We can write the determinants of K̂ and M̂ in terms of the determinant of Q̂, , , so K̂ and M̂ are both invertible too. Now we multiply the first line of (73) by and the second line by . If we subtract these equations and use the trigonometric identity , we arrive at the following equation: Here, we obtain the second line by a similar procedure. We note that and , which implies that . Hence the regularity condition is satisfied.

We can now find and by taking an anti-derivative plus constants of integration and . To satisfy the boundary equations (74), we take an anti-derivative such that and . By adding and subtracting the boundary equations (74), we find that the constants of integration equal . We can simplify this as follows:  □

For the computation of the first Lyapunov coefficient , we need to evaluate the Dunford integral in (47). As in Dijkstra et al. [34], we can use residue calculus to find an expression for this integral.

Theorem 38

Let be a simple eigenvalue and . Let be a sufficiently small closed disk such that and . If λ is an 'even' eigenvalue with eigenvector , where a is a nontrivial solution of , then if and only if for all , where denotes the adjugate of , using the definitions in (72). If λ is an 'odd' eigenvalue with eigenvector , where b is a nontrivial solution of , then if and only if for all , where denotes the adjugate of , using the definitions in (72).

As and contain only isolated eigenvalues, and and are analytic in z, the set contains only isolated values. Hence such a exists. Suppose that λ is an even eigenvalue. As and , we have that is given by Theorem 37 for . We observe that all components of the resolvent are analytic for all except the constants of integration . This analyticity simplifies (81) to for all , . We can substitute (78) and use the residue formula . Due to the linear independence of for , this results in the formula . The reasoning for odd eigenvalues is similar. □

Numerical results

In this section we examine a specific numerical example. We compute eigenvalues and the first Lyapunov coefficient for a Hopf bifurcation and investigate the effect of varying the diffusion parameter d. For J, we choose the following difference of two exponentials, as in [34]: This connectivity models a population of excitatory neurons acting on a short distance combined with a population of inhibitory neurons acting on a longer distance; see Fig. 2.
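As an illustration, here is a minimal Python sketch of such a wizard-hat connectivity. The parameter values (η1 = 12.5, μ1 = 2 for short-range excitation; η2 = −10, μ2 = 1 for longer-range inhibition) are read off from Table 1 as best this rendering allows, and the concrete odd sigmoid S below is an assumption for illustration only, not necessarily the authors' exact choice:

```python
import numpy as np

# Wizard-hat connectivity (86): a difference of two exponentials,
# J(x) = eta1 * exp(-mu1*|x|) + eta2 * exp(-mu2*|x|).
# eta1 > 0 with the larger decay rate mu1 models short-range excitation,
# eta2 < 0 with the smaller decay rate mu2 models longer-range inhibition.
ETA1, MU1 = 12.5, 2.0   # values as read from Table 1 (assumption)
ETA2, MU2 = -10.0, 1.0

def J(x):
    x = np.asarray(x, dtype=float)
    return ETA1 * np.exp(-MU1 * np.abs(x)) + ETA2 * np.exp(-MU2 * np.abs(x))

def S(u, gamma=1.0):
    # An odd sigmoid (hypothetical choice): oddness gives S(0) = 0 and
    # S''(0) = 0, which is what removes the quadratic normal form terms.
    return 1.0 / (1.0 + np.exp(-gamma * np.asarray(u, dtype=float))) - 0.5

if __name__ == "__main__":
    # Excitatory near the origin, inhibitory at longer distances
    print(J(0.0), J(1.0))
```

Evaluating J at the origin and at distance 1 exhibits the sign change that gives the connectivity its "wizard-hat" shape.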
Figure 2

The wizard-hat connectivity of (86)

For the activation function S, we choose the sigmoidal function . As S is an odd function, , and hence . This simplifies the computation of the first Lyapunov coefficient in (47) to . We can compute this integral using Theorem 38 with . We fix the following values for the parameters and and use γ as the bifurcation parameter. We want to compare two cases: without diffusion, i.e. , and with diffusion, i.e. .

For , we have a Hopf bifurcation for at with the corresponding eigenvector . The normal form coefficient and the Lyapunov coefficient , and hence the bifurcation is supercritical. For , we have a Hopf bifurcation for at with the corresponding eigenvector . The normal form coefficient and the Lyapunov coefficient , and hence this bifurcation is also supercritical. We have collected these values for further reference in Table 1.
Table 1

Parameter values of the Hopf bifurcation without and with diffusion respectively

Bifurcation   α   τ0     η1      η2     μ1   μ2   d     γ         λ          ℓ1
Hopf 1        1   0.75   12.5   −10     2    1    0     3.3482    1.2403i    −0.9123
Hopf 2        1   0.75   12.5   −10     2    1    0.2   3.3094    1.2379i    −0.9314
As one might already have observed, the diffusion has little effect on the Hopf bifurcation. We observe more generally that the eigenvalues off the real axis are barely affected by the introduction of diffusion, while the eigenvalues on the real axis become more negative; see Fig. 3. A possible explanation is that the eigenvector corresponding to the eigenvalue on the imaginary axis has very little spatial curvature; see Fig. 4. As diffusion penalises curvature, its effect on this eigenvector is small.
Figure 3

The eigenvalues of A at parameter values in Table 1 of the Hopf bifurcation without and with diffusion respectively

Figure 4

The corresponding eigenvectors of the eigenvalue at parameter values in Table 1 without and with diffusion respectively. Note that with diffusion the eigenvector satisfies the boundary conditions at and , while this is not the case without diffusion


Discretisation

To obtain an approximate solution of (ADDE), we discretise the spatial domain Ω into an equidistant grid of points, , with a width of . As in [29], we discretise the integral operator G using the trapezoidal rule and the diffusion operator B using a central difference method, with a reflection across the boundary for the boundary conditions. This results in a second-order spatial discretisation. The discretisation of (ADDE) for and becomes a set of delay equations (DDE): Here is defined as . We are now left with a set of ordinary delay differential equations, which we solve with a standard DDE-solver. Note that (DDE) is very similar to the discrete model (3) from which (ADDE) is derived; only the terms at the boundary differ due to the second-order discretisation.
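A hedged sketch of these two ingredients in Python: trapezoidal quadrature weights for the integral operator, and a central-difference Laplacian whose boundary rows implement the reflection across the boundary (a zero-flux condition via the ghost points u_{-1} = u_1 and u_n = u_{n-2}). The grid and domain below are illustrative placeholders:

```python
import numpy as np

def trapezoid_weights(x):
    """Trapezoidal quadrature weights on an equidistant grid x."""
    dx = x[1] - x[0]
    w = np.full(len(x), dx)
    w[0] = w[-1] = dx / 2.0   # half weight at the endpoints
    return w

def neumann_laplacian(n, dx):
    """Central-difference Laplacian with reflection across the boundary.

    Reflecting the grid (ghost points u_{-1} = u_1 and u_n = u_{n-2})
    enforces zero-flux boundary conditions to second order.
    """
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = -2.0
        if i > 0:
            L[i, i - 1] += 1.0
        else:
            L[i, i + 1] += 1.0   # reflected ghost point at the left boundary
        if i < n - 1:
            L[i, i + 1] += 1.0
        else:
            L[i, i - 1] += 1.0   # reflected ghost point at the right boundary
    return L / dx**2

if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 101)   # illustrative grid on [-1, 1]
    L = neumann_laplacian(len(x), x[1] - x[0])
    w = trapezoid_weights(x)
    # L annihilates constants; the weights integrate 1 over the domain
    print(np.abs(L @ np.ones(len(x))).max(), w.sum())
```

With these pieces, the right-hand side of (DDE) at grid point i would combine −αu_i, the diffusion term d·(Lu)_i and the quadrature sum over J(x_i − x_j)S(u_j) evaluated at the delayed times.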

Simulations

We now perform some simulations around the Hopf bifurcation with diffusion. We set and take as initial conditions an odd function and an even function: For Fig. 5 we took , and for Fig. 6, .
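The simulations use a standard DDE-solver. As a hedged illustration of how such a solver treats the delayed term, here is a minimal fixed-step Euler scheme for a scalar equation with one constant delay; the test equation and its parameters are illustrative only, not the neural field itself:

```python
import numpy as np

def euler_dde(rhs, history, tau, dt, t_end):
    """Fixed-step explicit Euler for u'(t) = rhs(u(t), u(t - tau)).

    `history` supplies u on [-tau, 0]; delayed values are read back from
    the stored trajectory, so dt is assumed to divide tau exactly.
    """
    lag = int(round(tau / dt))
    steps = int(round(t_end / dt))
    u = np.empty(lag + steps + 1)
    # fill in the history segment on [-tau, 0]
    u[: lag + 1] = [history(-tau + k * dt) for k in range(lag + 1)]
    for n in range(lag, lag + steps):
        u[n + 1] = u[n] + dt * rhs(u[n], u[n - lag])
    return u[lag:]   # trajectory on [0, t_end]

if __name__ == "__main__":
    # Linear delayed decay (illustrative): u' = -u(t) - 0.5*u(t - 1),
    # with constant history u = 1 on [-1, 0]. Since the instantaneous
    # decay dominates the delayed coefficient, the solution decays.
    traj = euler_dde(lambda u, ud: -u - 0.5 * ud,
                     history=lambda t: 1.0, tau=1.0, dt=0.01, t_end=10.0)
    print(traj[0], traj[-1])
```

For the full discretised neural field, `rhs` would take the whole grid vector and interpolate the stored trajectory at the spatially dependent delays, which is exactly what dedicated DDE-solvers automate.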
Figure 5

Simulation of (DDE) with the initial conditions , of (92) and and

Figure 6

Simulation of (DDE) with the initial conditions , of (92) and and

For , the solutions with both initial conditions (92) converge to the trivial equilibrium. The one with the odd initial condition converges monotonically, while the one with the even initial condition converges in an oscillatory manner. For , there are (at least) two nontrivial stable states: the odd initial condition converges to a nontrivial equilibrium, and the even initial condition converges to a limit cycle, which is due to the Hopf bifurcation. This is similar to the results of Dijkstra et al. [34], where the nontrivial equilibrium arises from a pitchfork bifurcation. The bi-stability is also reflected in the eigenvalues, see Fig. 7, as we have a positive real eigenvalue and a pair of complex eigenvalues with positive real part.
Figure 7

The eigenvalues of A for and

We have seen that increasing the value of d decreases the eigenvalues on the real axis. This would imply that the nontrivial equilibrium becomes unstable or disappears, probably through a pitchfork bifurcation. Indeed, this is what we observe when we use the initial condition and compare the dynamics for and in Fig. 8: the initial condition converges to a nontrivial equilibrium when , but it converges to a limit cycle when .
Figure 8

Simulation of (DDE) with the same initial condition (93) and , and , respectively


Discussion

We have proved the necessary theorems to construct the sun-star calculus for abstract delay differential equations. In particular, we proved a novel characterisation of sun-reflexivity in Theorem 12. The sun-star calculus provides a variation-of-constants formulation for the nonlinear problem and produces results on the spectral properties of the system, notably the essential spectrum. Using the results of Janssens [6] on the centre manifold reduction, we have derived a simple and explicit formula to compute the first Lyapunov coefficient for the Hopf bifurcation. This procedure can quite easily be extended to the normal form coefficients of other local bifurcations. The neural field models, both with and without diffusion, can be cast as abstract delay differential equations to which the same theoretical results apply. In the sun-star calculus the relevant spaces, duality pairings and Fredholm alternative follow naturally by considering the strong continuity of adjoint operators; hence there is no need to construct formal projectors. Moreover, for a specific example of the neural field, we could calculate the first Lyapunov coefficient exactly and with arbitrary precision. Thus we conclude that the sun-star calculus for delay equations is a natural setting to study neural field models, with and without diffusion.

We have derived analytical conditions for λ to be an eigenvalue of a neural field whose connectivity function is a sum of exponentials, and we have constructed the corresponding eigenvectors and the resolvent. Numerical results show that the diffusion term does not cause oscillations to arise through a Hopf bifurcation. However, stable equilibria which are not uniform disappear due to the smoothing effect of the diffusion. So increasing the diffusion in a bi-stable system with a nonuniform equilibrium and a synchronous oscillation leads to a system with only stable synchronous oscillations.
We hypothesise that this is a more general feature of equations with diffusion and a delayed reaction. Gap junctions, modelled by the diffusion term in our neural field, are thought to be linked to synchronisation in Parkinson's disease [3]. Further research could investigate whether these effects can be observed in a neural field model with physiological values for the parameters.

We used a neural field model whose connectivity function is a sum of exponentials. Such a connectivity function is commonly used to aggregate the effect of multiple different types of cells, e.g. excitatory and inhibitory neurons. However, introducing a diffusion term into this model leads to gap junctions of the same strength both within and between populations of neurons, which may not be physiologically feasible. A way to circumvent this is to use a neural field model with multiple populations; in such a model, it is possible to introduce gap junctions only between neurons of the same population.

We have studied a neural field on a one-dimensional closed domain. However, when modelling neuronal activity in the cortex, it is common to use two-dimensional domains [24]. For a neural field on a rectangular domain, characterising the spectrum, as is done in Sect. 3 of this paper, is still an open problem. On a spherical domain, Visser et al. [35] have characterised the spectrum for a neural field with transmission delays and have computed normal form coefficients of Hopf and double Hopf bifurcations. It seems possible to extend the analysis of that paper to include a diffusion term in the neural field model. Due to the general nature of the theoretical results of Sect. 2, these results, including the sun-star framework, the variation-of-constants formulation and the essential spectrum, also hold for neural field models on arbitrary domains.