
Spectral Analysis of a Non-Equilibrium Stochastic Dynamics on a General Network.

Inbar Seroussi, Nir Sochen.

Abstract

Unravelling underlying complex structures from limited-resolution measurements is a known problem arising in many scientific disciplines. We study a stochastic dynamical model with multiplicative noise. It consists of a stochastic differential equation living on a graph, similar to approaches used in population dynamics or directed polymers in random media. We develop a new tool for the approximation of correlation functions based on spectral analysis that does not require translation invariance. This enables us to go beyond lattices and analyse general networks. We show, analytically, that this general model has different phases depending on the topology of the network. One of the main parameters which describe the network topology is the spectral dimension d_s. We show that the correlation functions depend on the spectral dimension and that a dynamical phase transition occurs only for d_s > 2. We show by simulation how the system behaves for different network topologies, by defining and calculating the Lyapunov exponents on the graph. We present an application of this model in the context of Magnetic Resonance (MR) measurements of porous structures such as brain tissue. This model can also be interpreted as a KPZ equation on a graph.


Year:  2018        PMID: 30254285      PMCID: PMC6156338          DOI: 10.1038/s41598-018-32650-5

Source DB:  PubMed          Journal:  Sci Rep        ISSN: 2045-2322            Impact factor:   4.379


Introduction

Stochastic dynamics on large-scale networks has attracted a lot of attention due to its wide occurrence in many disciplines, such as the social sciences[1-3], physics and biology[4-6], and communication and control theory[7]. Also of interest is population dynamics on networks, which is usually affected by both the topology of the network and some internal stochastic noise in the system. In many applications the network topology is not known and one only has access to some measurable parameters of the system. Therefore, the analysis of critical phenomena and phase transitions can provide information about the topology of the network[8]. Since the topology of the network is not known, the study of spectral properties of the network is important. For example, the spectra and eigenvectors of the Laplacian of uncorrelated or locally tree-like complex networks reveal interesting properties of the return-probability distribution, the smallest eigenvalue of the Laplacian, and the localization of the eigenvectors[9,10]. In addition, one can characterize the complexity of the network by analysing its spectral dimension. The analysis of random walks on graphs and their connection to the spectral dimension in equilibrium systems is introduced in refs[11-13]. The main question we ask is what kind of information about the topology of the graph can be extracted from the dynamics of measurable functions, such as the correlation function between the sites of the network, in a non-equilibrium model. We show, by working in the spectral domain of the graph, that the spectral dimension plays an important role in determining the dynamical properties of the system. The model we consider consists of a system of interacting sites on a graph with N vertices and E edges between them. We are interested in the stochastic dynamics of some characteristic property m_i(t) attached to each vertex i.
The property m_i(t) is linked to a physically measurable quantity in the real world, and the graph is the underlying geometry/topology on which the property lives, usually a complex network of sites. The model is the following collection of stochastic differential equations in Stratonovich form on the graph:

dm_i(t)/dt = -J Σ_j L_ij m_j(t) + m_i(t) g_i(t),   (1)

with the initial condition m_i(0) = m_0, where J is a constant. The term g_i(t) is a multiplicative white noise. We choose the Stratonovich form, since its solution is a limiting case of a system with a white noise with a short memory[14]. As a start, we investigate the case where the noise does not have any spatial or temporal memory, that is,

⟨g_i(t) g_j(t')⟩ = σ² δ_ij δ(t - t').   (2)

The topology of the network is encoded in the Laplacian matrix L of the graph, defined as L_ij = k_i δ_ij - W_ij, where k_i = Σ_j W_ij is the degree of site i. The Laplacian matrix is a zero-row-sum matrix, i.e., Σ_j L_ij = 0 for all i, with non-negative eigenvalues ordered by increasing value, 0 = λ_0 ≤ λ_1 ≤ … ≤ λ_{N-1} (assuming L is diagonalizable). The graph is assumed to be undirected, with uniformly bounded weights, and with no self-loops or multiple edges. Later on, for the asymptotic-behaviour analysis, we will add a uniformly-like assumption, which will lead to the relation with the spectral dimension of the graph. The model consists of two parts: an interacting part, in which diffusing components are exchanged with a strength depending on their location on the graph, and a non-interacting part, in which each component follows a stochastic noise of variance σ². The first causes spreading, while the second pushes towards concentration (a.k.a. localization or condensation). The model has numerous applications. We present it in the context of porous systems such as brain tissue, which can be measured using Magnetic Resonance Imaging (MRI). The sensitivity of the MR signal to self-diffusion of water molecules can be utilized to extract information about the network describing the topology of cells (neurons) in the brain.
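As a concrete illustration of this model, the dynamics can be integrated numerically. The sketch below is a minimal example (not the authors' code), assuming the standard Stratonovich-to-Itô conversion, which adds a drift term (σ²/2)m_i, and a simple Euler-Maruyama scheme on a small ring graph:

```python
import numpy as np

def graph_laplacian(W):
    """Zero-row-sum Laplacian L = k*delta - W of a weighted graph."""
    return np.diag(W.sum(axis=1)) - W

def simulate(W, J=1.0, sigma=0.5, T=1.0, dt=1e-3, m0=1.0, seed=0):
    """Euler-Maruyama integration of the Ito form equivalent to the
    Stratonovich model: dm_i = [-J (L m)_i + (sigma^2/2) m_i] dt
                               + sigma * m_i dW_i."""
    rng = np.random.default_rng(seed)
    L = graph_laplacian(W)
    m = np.full(W.shape[0], float(m0))
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt), m.size)
        m = m + (-J * (L @ m) + 0.5 * sigma**2 * m) * dt + sigma * m * dW
    return m

# undirected 6-site ring with unit weights, no self-loops
N = 6
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0

m_final = simulate(W, sigma=0.5)
```

With σ = 0 the uniform state m_i = m_0 is a fixed point (the Laplacian annihilates constants); the noise term then drives the node-to-node fluctuations whose statistics are analysed below.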
The concept of self-diffusion of molecules in a network of pores was already introduced in refs[15-17]. This approach is based on the assumption that diffusion within a pore is much faster than diffusion between pores, which allows one to describe the porous structure as a network of interacting pores. On the measurement time scale the topology is assumed to be fixed. This description goes beyond the traditional models, which assume free diffusion of molecules and isolated pores[18-20]. Under this description, m_i(t) is the magnetization of the ith pore at time t. The interaction strength W_ij between pores i and j is influenced by the molecules diffusing between the pores. The interaction is time-independent, i.e., possible jamming effects are neglected. In order to model the processes within each pore, we add a stochastic noise coupled to each pore. Its variance depends on many parameters, such as temperature, the intrinsic geometry of the pore, fluctuations in the geometry of the pore, and the inhomogeneous magnetic field applied. Figure 1 presents an illustration of a porous medium as a network of pores. This model can be viewed as a generalization of the analysis introduced in ref.[21]. The main challenge in this context is to solve the inverse problem of finding the network topology based on the limited-resolution measurements from a single voxel (a three-dimensional pixel in an MRI image). A single voxel is a massive average over large domains of around a millimetre in size, whereas the sizes one is actually interested in, such as the radii of axons in the brain, are three orders of magnitude smaller, of micrometre size. An additional application of this model worth noting is in the context of interface growth, where the property m_i(t) can be related to the interface height, h_i(t), via the Cole-Hopf transformation. In this representation Eq. (1) is transformed to a discrete KPZ equation on the graph[22]. Exact results have recently been obtained for the one-dimensional case[23,24].
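The Cole-Hopf link mentioned above can be checked directly in the noise-free case: if m solves the linear diffusion part of the model, then h_i = ln m_i obeys dh_i/dt = J Σ_j W_ij (e^{h_j - h_i} - 1), whose small-gradient expansion contains the KPZ-type nonlinearity. The sketch below is an illustrative assumption-based check, not the authors' derivation; it integrates both equations and verifies that h stays equal to ln m:

```python
import numpy as np

# 5-node path graph
N = 5
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
k = W.sum(axis=1)                      # node degrees

J, dt, steps = 1.0, 1e-4, 10000
m = np.linspace(1.0, 2.0, N)           # positive initial profile
h = np.log(m)                          # Cole-Hopf field

for _ in range(steps):
    m = m + dt * (-J * (L @ m))        # linear graph diffusion
    # dh_i/dt = J * sum_j W_ij (exp(h_j - h_i) - 1)
    h = h + dt * J * ((W * np.exp(h[None, :] - h[:, None])).sum(axis=1) - k)

# the two integrations agree: h(t) = ln m(t)
```

Expanding e^{h_j - h_i} - 1 ≈ (h_j - h_i) + (h_j - h_i)²/2 exhibits the discrete diffusion term plus the quadratic-gradient term characteristic of KPZ growth.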
Figure 1

Schematic picture of a network of pores, where m_i is the magnetization in the ith pore, and the interaction between the pores is described by the transition matrix W.

Our novelty is in analyzing the model on a complex network. The network is not embedded in a lattice topology and therefore does not possess symmetries such as translation invariance. This is important, since in most applications a lattice topology is highly non-physical. We develop a tool to perform perturbation-theory calculations on networks without the usually imposed conservation-of-momentum constraint. This formalism can be applied to other problems in the physics of complex systems, quantum disorder, neuroscience, and more. Using it, we are able to show the existence of a phase transition in our model for spectral dimension d_s > 2. This phase transition is well known on the lattice topology for Euclidean dimension d > 2[25-28]. A sketch of the phase diagram of the model is presented in Fig. 2. It is important to note that the critical values are also geometry/topology dependent, as we show later on. Moreover, the connection between this model and brain tissue measured by MRI is also established here for the first time. In the following subsections, we review known results for some simple topologies.
Figure 2

Sketch of the phase diagram of the model.


Mean field topology

We review the results obtained in the mean-field topology, i.e., a fully connected graph with constant coupling strength J between each pair of sites. In this topology, one can calculate exactly the equilibrium distribution using the Fokker-Planck equation for a normalized model, M_i = m_i/m̄ with m̄ = (1/N) Σ_j m_j, such that (1/N) Σ_i M_i(t) = 1 for any t[1]. Under this transformation, all the nodes in the graph are uncorrelated and we can omit the subscript i, since the distribution is the same for all the nodes:

P_eq(M) = Z exp(-(μ - 1)/M) M^(-(1+μ)),

where μ is a model-dependent parameter, Z = (μ - 1)^μ / Γ(μ) is a normalization factor, and Γ(μ) is the gamma function. The equilibrium distribution exhibits a Pareto power-law tail. In this case, there are two phase transitions, at μ = 1 and at μ = 3. This can be understood by deriving the equations for the dynamics of the first and second moments using the forward Fokker-Planck equation, or alternatively the Feynman-Kac formula[27]. With this formalism, one can also derive an equation for any pth moment. The steady-state solution for the second moment then follows (Eq. (3)). By comparing these results to Fig. 2, we can identify the two critical noise values of the mean field. In this case, there are only two phases, and equilibrium is always reached (unless the coupling strength is allowed to be negative). The meaning of these phases in the context of the analysis of porous structure using MRI is as follows: the values M_i describe the density of magnetization in each pore due to the molecules at the ith pore at time t, and all the pores are connected and interact with pore i in the same way. Therefore, one would expect that the contribution of each pore to the magnetization is the same. Here we show that even in the simplified topology of a fully connected graph, the distribution of the magnetization among pores depends on μ, which represents the ratio of the interaction strength among pores to the noise variance within each pore. If μ < 1 is allowed, then the interpretation is that the contribution to the magnetization comes mainly from a few pores in the long-time limit.
This can be regarded as a localization regime, which is observed at high magnetic field gradients in the case of isolated pores[15,29]. For 1 < μ < 3 the distribution of the molecules among the pores is relatively uniform, but the fluctuations of the magnetization in each pore are very large, whereas above μ = 3 the fluctuations become finite for all the pores. An additional solvable model is the separable model, in which the weight matrix W_ij factorizes over the nodes and is proportional to the node degrees, where k_i is the degree of node i. Using a similar derivation as in the mean-field topology, one finds critical phases similar to those of the mean-field model (see Supplementary Information Sec. 1).
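The mean-field equilibrium density can be checked numerically. The sketch below assumes the inverse-gamma form P_eq(M) = Z exp(-(μ-1)/M) M^(-(1+μ)) with Z = (μ-1)^μ/Γ(μ), a reconstruction of the distribution quoted above from ref.[1]; it verifies that this Z normalizes the density and gives unit mean, consistent with the normalization of M:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

def P_eq(M, mu):
    """Assumed equilibrium density: inverse-gamma with a Pareto tail
    ~ M^-(1+mu); Z = (mu-1)^mu / Gamma(mu) is the normalization."""
    Z = (mu - 1.0) ** mu / Gamma(mu)
    return Z * np.exp(-(mu - 1.0) / M) * M ** (-(1.0 + mu))

# check normalization and unit mean for a few values of mu > 1
results = {}
for mu in (1.5, 2.5, 4.0):
    norm = quad(lambda M: P_eq(M, mu), 0.0, np.inf)[0]
    mean = quad(lambda M: M * P_eq(M, mu), 0.0, np.inf)[0]
    results[mu] = (norm, mean)
```

The power-law tail M^(-(1+μ)) is visible by inspection: the larger the noise relative to the coupling (small μ), the heavier the tail, i.e., the more the magnetization concentrates on a few pores.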

Lattice model and other topologies

The lattice topology has been studied in various contexts in the mathematics and physics literature. In mathematics, this model is known as the time-dependent Parabolic Anderson Model (PAM), or in its continuum form as the Stochastic Heat Equation (SHE)[30], and is closely related to the KPZ equation[31]. In the physics literature, one encounters it in many physical systems, such as directed polymers in random media, a case which was analyzed also on trees[2,6], random interface growth, and turbulence[4]. The discrete lattice case was investigated thoroughly in refs[26,27], and a generalization with inhomogeneous coupling is given in ref.[25]. In the discrete case, the graph is embedded in space, and with the corresponding lattice Laplacian the governing stochastic equation reads

dm_x(t)/dt = Σ_{x'} J_{xx'} (m_{x'}(t) - m_x(t)) + m_x(t) g_x(t);

note that in this form the coupling J_{xx'} is assumed to be translation invariant. The case where J_{xx'} = J was analyzed mainly in refs[26,27]; for the continuum setting, see for example ref.[32]. The analysis in the continuum limit is based on perturbation-theory calculations and renormalization-group arguments. In this work, we extend this derivation to a general network topology. A generalization to non-constant J_{xx'} was done in ref.[25]. An additional variation of the lattice model, in which the weights J_{xx'} are multiplied by a time-dependent random process, is introduced in ref.[7]. It is also possible to consider a model in which the noise is not delta-correlated (white noise) but has some correlation form, for example exponentially decaying with some correlation time, e.g., an Ornstein-Uhlenbeck process. This adds another stochastic equation for the noise at each site. This extension of the model is analyzed in refs[2,25,26,33].

Results and Discussion

In this section, we show how the correlation functions provide information about the topology of the underlying graph and the phase transitions of the system. As shown in the Methods section (Sec. 4), using perturbation theory we can calculate the two-point correlation function in the spectral domain to all orders. Asymptotic properties of this quantity can be found by analysing the vertex function Γ. The vertex function can be expressed in terms of an infinite series of collision matrices built from P̂_ij(s), the Laplace transform of the transition probability from site i to site j; namely, the vertex function is a geometric series in the Laplace-transformed collision matrix (Eq. (4)). The convergence properties of this infinite series depend on the topology of the graph. The information about the graph's topology is encoded in the collision matrices. In the case of an undirected infinite graph, we assume that the graph is uniformly-like, i.e., the long-time limit of the return probability does not depend on the node location. We can connect the collision matrix in the time domain, I(t), to the spectral dimension, using a property of the Hadamard product (see Supplementary Information Sec. 2):

I_ii(t) = P_ii(t)², with P_ii(t) ~ t^(-d_s/2) at long times,   (5)

where P_ii(t) is the return probability to site i; its Laplace transform, P̂_ii(s), is the generating function of the process. Thus the spectral dimension d_s is defined by this long-time limit of P_ii(t) (if it exists)[34-37]. It can be understood intuitively as the dimension a random walker "experiences" in a diffusion process on the graph; it measures to what extent the graph is recurrent. In the context of the dynamics of molecules, it measures the probability that molecules that started at the ith pore will return to this pore. Note that Eq. (5) refers to the local spectral dimension (for the difference between the local and the average spectral dimension see ref.[38]). Given Eq. (5), in the limit s → 0, we can approximate any row of the Laplace-transformed collision matrix by a constant plus a term singular in s (Eq. (6)), where the constant does not diverge for d_s > 2. This approximation is used to analyze the behaviour of the vertex function.
The analysis of the vertex function is done separately for the two types of topologies: recurrent and transient graphs. For recurrent graphs, since the Laplace-transformed collision matrix diverges as s → 0 for each i, the geometric series in Eq. (4) is divergent. Therefore, if a spectral dimension exists, then d_s ≤ 2 and the vertex function diverges, i.e., all the nodes in the graph are strongly correlated. This corresponds to only one phase of the system, in which the magnetization is spread relatively uniformly among the pores. For transient graphs, the corresponding quantity stays finite as s → 0 for each i, and under a constraint, Eq. (7), relating the noise variance to the eigenvalues λ of the Laplacian matrix, it is possible to take the geometric sum in Eq. (4). Note that, in the translation-invariant case, an equality in Eq. (7) coincides with the self-consistent equation analyzed in refs[26,27]. The analysis here is at the point where this self-consistent equation breaks down, i.e., close to a phase transition. In the limit L → ∞, the infinite series in Eq. (4) converges, when plugging in the expression (6) recursively, to a closed-form function. In the general case, one can write the vertex function in terms of the eigenvalues of the collision matrix (Eq. (8)), where the sum runs over the eigenvalues of the Laplace-transformed collision matrix. Renormalization-group calculations, similar to the analysis in ref.[32], show that when d_s ≤ 2 there is only one phase, in which all the nodes in the graph are strongly correlated, and that when d_s > 2 there exists a critical value which separates two phases, one with a finite correlation between any two nodes in the network, and the other with infinite correlation. The perturbative calculations, although carried out to all orders, are not capable of capturing the high-noise critical point. At the critical dimension, the RG parameter diverges, which may also indicate that there is a unique point which we are unable to probe. It may be that a more careful analysis of the asymptotic behaviour of Eq. (2) can lead to a better description of the phases of the system.
Comparing these results to known results on systems with translation invariance[25] and to the mean-field scenario, the perturbation-theory analysis reveals that the system is intermittent for all σ² if d_s ≤ 2, whereas for d_s > 2 the system is intermittent only above a critical noise variance.
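The spectral dimension entering this analysis can be estimated directly from the decay of the return probability. A minimal sketch (assuming a ring graph, whose Laplacian spectrum is known in closed form) fits P(t) ~ t^(-d_s/2) over intermediate times:

```python
import numpy as np

def return_probability(eigvals, t):
    """Site-averaged return probability of the diffusion exp(-L t):
    P(t) = (1/N) sum_k exp(-lambda_k * t)."""
    return np.exp(-np.outer(t, eigvals)).sum(axis=1) / eigvals.size

# ring of N sites: Laplacian eigenvalues 2 - 2 cos(2 pi k / N)
N = 400
lam = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(N) / N)

# intermediate-time window: late enough for the continuum regime,
# early enough that the finite size N is not yet felt
t = np.logspace(0.7, 1.7, 20)
slope = np.polyfit(np.log(t), np.log(return_probability(lam, t)), 1)[0]
d_s = -2.0 * slope   # a ring behaves one-dimensionally, d_s close to 1
```

At times beyond the mixing time the finite graph saturates at P → 1/N, so the power-law fit must be restricted to the intermediate window, mirroring the infinite-graph assumption in the text.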

The graph Lyapunov exponents

The moment Lyapunov exponents are an important tool: a large amount of information about the topology of the system can be extracted by analyzing these exponents. They are relevant parameters in the context of porous media, since, for instance, they are closely related to what can be measured using MRI, and can provide insight into the hidden porous structure and its complexity. For example, the first Lyapunov exponent can be viewed as a measure of the diffusivity of the medium. In this subsection, we define and calculate numerically the moment/annealed Lyapunov exponents for simple topologies. The moment Lyapunov exponents can be calculated analytically in the mean-field topology[27]. In the lattice topology, one can derive lower and upper bounds[26,27]. The definition of the Lyapunov exponents can be generalized to the case of a general graph topology, where, since the system is not translation invariant and the correlations are node dependent, one needs to average over all the nodes. We define two graph Lyapunov exponents (if the indicated limits exist):

γ_p(L, σ) = lim_{t→∞} (1/t) ⟨ln m̄^p(t)⟩,   (9)

γ̄_p(L, σ) = lim_{t→∞} (1/t) ln ⟨m̄^p(t)⟩,   (10)

where m̄^p(t) denotes the node-averaged pth power. The overline stands for the average over all the nodes of the graph, e.g., x̄ = (1/N) Σ_{i=1}^N x_i, and the brackets stand for the average over all possible paths of the random walk up to time t. The order of the limits in time and in graph size is important. We call the quantity in Eq. (9) the graph sample Lyapunov exponent and the one in Eq. (10) the graph moment Lyapunov exponent. Using Jensen's inequality, it is clear that γ_p ≤ γ̄_p for any p and any graph. In order to examine the behaviour of these exponents, we calculate them numerically for different topologies and noise values. The calculation can be done by solving the equations governing the moment dynamics, which can be derived using the Stratonovich Fokker-Planck equation for the joint density of all the nodes, p(m, t) (Eq. (11)). Alternatively, one can calculate the moment dynamics from the backward Fokker-Planck equation using the Feynman-Kac formula (see ref.[27]).
Since the behaviour of the higher moments and the intermittency property depend strongly on the second moment[26,27], we focus on the second-moment dynamics for insight into the interesting dynamical properties of the model:

d⟨m_i(t) m_j(t)⟩/dt = (H_2 ⟨m m⟩)_ij,   (12)

where H_2 can be identified as a deterministic Schrödinger operator[26]; in this example, for the second moment, p = 2. Eqs (11) and (12) are valid for any σ, as opposed to the perturbative analysis of Secs 2 and 4, which is valid only for small σ. Since we do not assume translation invariance in our model, Eq. (12) cannot be reduced to a one-dimensional problem as in the lattice case[26]. Therefore, we use a numerical solution to show the behaviour of the solutions for different graphs. For the numerical calculation, we present Eq. (12) in vector form, containing all the possible pairs of correlations. The Lyapunov exponents, γ_2 and γ̄_2, are then calculated using the definitions in Eqs (9) and (10), assuming uniform initial conditions. Note that in the lattice case the moment Lyapunov exponents are usually defined in two ways, first as the long-time limit of a solution of Eq. (12), and second as the spectral radius of the Schrödinger operator H_2[26,27]. These two definitions are equivalent for the d-dimensional lattice and the mean-field topologies[26], but it is not clear whether this is the case for general networks. We believe, following similar arguments as in ref.[26], that this is the case for all transitive graphs and for graphs of constant degree. We show numerically that it is indeed true for a d-regular graph. In general, for a non-trivial topology the spectral radius is a lower bound on the Lyapunov exponent γ_2(L, σ). An interesting question worth investigating is that of the gap between the two definitions, Eqs (9) and (10), with respect to the relation between space (the number of nodes) and time. We compare numerically the two definitions and the largest eigenvalue λ_max of the operator in Eq. (12) (see Supplementary Information Sec.
3 for details). We also use as an additional reference the following theoretical lower bound on the second graph moment and sample Lyapunov exponents, for which equality is achieved in the mean field (Eq. (13)); here 〈k〉 denotes the average degree of the graph. This bound is proved in refs[26,27] for the d-dimensional lattice, where 〈k〉 = 2d. The proof for a general graph is similar (see Supplementary Information Sec. 3). This lower bound shows that for graphs whose average degree diverges (e.g., graphs with a power-law degree distribution), the second graph moment and sample Lyapunov exponents also diverge. The numerical solution for the mean-field Laplacian is presented in Fig. 3a for different values of the ratio J/σ². Each point was obtained by solving Eq. (12) for a different value of the noise variance. The critical point is obtained, as expected (Eq. (3)), at J/σ² = 1. In this case, γ_2 → λ_max, which is also equal to the predicted analytical value of Eq. (13)[27]. Figure 3b,d present the Lyapunov exponents for a 1D and a 3D lattice, respectively, with periodic boundary conditions. The simulation shows that, as predicted by theory[26,27], in 1D there is no phase transition. On the other hand, in 3D we see two phase transitions. The two definitions of the Lyapunov exponent, Eqs (9) and (10), coincide in these cases. Figure 3c presents the values of the Lyapunov exponents on a d-regular graph of degree d = 4; in this case, too, the two definitions coincide with λ_max.
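The ordering of the two exponents can be illustrated by a brute-force Monte Carlo estimate at a fixed finite time. The sketch below is a simplified, assumption-based illustration (Itô-converted Euler-Maruyama dynamics on a small ring; the sample estimate is the noise average of the log, and the moment estimate the log of the noise average, of the node-averaged square) that reproduces the Jensen ordering:

```python
import numpy as np

def node_averaged_square(L, J, sigma, T, dt, n_samples, seed=0):
    """Per noise realization, return X_r = (1/N) sum_i m_i(T)^2 for the
    Ito form of the model (Stratonovich drift correction included)."""
    rng = np.random.default_rng(seed)
    N = L.shape[0]
    X = np.empty(n_samples)
    for r in range(n_samples):
        m = np.ones(N)
        for _ in range(int(T / dt)):
            dW = rng.normal(0.0, np.sqrt(dt), N)
            m = m + (-J * (L @ m) + 0.5 * sigma**2 * m) * dt + sigma * m * dW
        X[r] = np.mean(m**2)
    return X

# 8-site ring
N = 8
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

T = 2.0
X = node_averaged_square(L, J=1.0, sigma=1.0, T=T, dt=1e-3, n_samples=200)
gamma_sample = np.mean(np.log(X)) / T   # average of the log
gamma_moment = np.log(np.mean(X)) / T   # log of the average
```

By Jensen's inequality the moment estimate always dominates the sample estimate for any finite ensemble; the gap between them grows with the noise variance.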
Figure 3

A numerical solution of the graph sample (green star) and moment (purple circle) Lyapunov exponents (Eqs (9) and (10)) and the largest eigenvalue of Eq. (12) (horizontal blue line) for four types of graphs: (a) Mean field (b) 1D lattice (c) regular graph of degree 4 (d) 3D lattice. The black dashed line represents the analytical lower bound presented in Eq. (13).

The theoretical analysis for simple topologies, such as the mean-field and the lattice model[1,25-27], suggests the existence of an additional phase transition in the high-noise limit in the transient case. This phase transition could not be detected using the perturbative analysis. The numerical solutions in Fig. 3 show some divergence for high noise values, but it is not yet clear whether this is due to numerical instability or to the high-noise phase transition.

Conclusions

We have presented a stochastic model describing diffusion on a graph with an additional multiplicative stochastic noise, and analyzed the dependence of its large-scale behaviour on the topology/geometry of the graph. The problem we address is whether one can determine the topology of the network based on measurements of some observables. This problem arises in different contexts and disciplines. We show that one of the main structural parameters determining the asymptotic behaviour of the system is the spectral dimension. It is a measure of the complexity of the network which indicates to what extent the network is recurrent. We established the relation between the spectral dimension and the two-point function. This was obtained by calculating the correlation functions perturbatively in the strength of the noise. The calculation is done in the spectral domain, without a translation-invariance assumption, and it shows a direct connection to the return-probability distribution. In the transient case, of spectral dimension d_s > 2, there exists a phase transition from a phase in which the average correlation between any two nodes is finite and decays exponentially with time, to a phase in which the average correlation between any two nodes has a power-law tail in time. We also present the moment and sample Lyapunov exponents for a general graph and calculate them numerically for simple topologies. We investigate the connection between the two definitions of the moment Lyapunov exponents on the graph considered above. The connection between these two definitions is an interesting issue and deserves further analysis; the same holds for the relation between the size of the graph and time. Our analysis and results can be used to study other complex systems, especially through our spectral generalization of the Janssen-De Dominicis technique; using this technique one can calculate moment functions, which are related to average observables in networks.
In the context of MRI, this model is a new proposal for a simplified description that takes into account the effect of interactions among pores. Our analysis of the correlation functions may lead the way to MRI measurements of the spectral dimension and the Lyapunov exponents. It provides a new way to analyze and understand MR data for porous systems and can lead to new experiments and observable parameters that may reveal exciting structural properties of our brain.

Methods

Perturbation theory analysis

The model introduced in Eqs (1) and (2) for a general network is hard to solve exactly. Here, we demonstrate how methods adopted from field theory can provide a unifying framework for producing perturbative approximations of measurable physical quantities. Any stochastic, or even deterministic, system can be described in terms of a path integral, to which asymptotic methods can be applied systematically. Often of interest are observable quantities such as correlation functions (moments) of m or the probability density function of the process, p(m, t). Path-integral methods provide a convenient tool for computing quantities such as moments and transition probabilities perturbatively using Feynman diagrams. They also make renormalization-group methods available (see ref.[32] Chap. 5) when perturbation theory breaks down. The strategy of path-integral methods is to derive a generating function or functional for the moments and response functions. For stochastic differential equations this can be done using the Janssen-De Dominicis formalism[32,39]. The action derived from Eq. (1) by using this formalism is given in Supplementary Information Sec. 4; for convenience, we write the equation in the equivalent Itô form, in which the first term of the action is the free theory and the second term is the "interacting", non-quadratic term. The initial conditions have been omitted, since our focus is on stationary states and a loss of memory of the initial conditions is assumed. The Janssen-De Dominicis formalism allows us to calculate the correlation functions to all orders in perturbation theory by expanding around the non-interacting term. The analysis here is similar to the lattice case[32], a difference being that we use the eigenbasis of the Laplacian of the graph. The eigenvalues and eigenvectors of the Laplacian operator are defined by Lϕ_α = λ_α ϕ_α, with λ_α ≥ 0. The zero eigenvalue corresponds to the uniform eigenvector ϕ_0 = (1, …, 1)/√N.
The difficulty in this expansion is that, in contrast to the Fourier eigenfunctions, momentum is not assumed to be conserved, and neither is translation invariance; products of eigenvectors therefore do not reduce to a delta function. Keeping this in mind, we can now calculate the correlation function in the Laplacian basis. We define the transforms of the fields with respect to the eigenbasis by m_α(t) = Σ_i ϕ_α(i) m_i(t), and similarly for the response field m̃; the inverse transformation is m_i(t) = Σ_α ϕ_α(i) m_α(t). The free propagator of the process is diagonal in this basis,

G_α(t - t') = θ(t - t') e^{-Jλ_α (t - t')}.

Let us also introduce the transition probability matrix P(t) = e^{-JLt}. It can be shown, using similar arguments as for the lattice topology, that the propagator is not renormalized in this theory. Note also that, due to the symmetric form of the interaction, any correlation function with an odd combination of m and m̃ vanishes. Therefore, an interesting quantity to investigate is the two-point correlation function. We analyze this quantity in the diagonal basis of the Laplacian. The two-point correlation function is then expanded perturbatively, with the free and interacting parts of the action written in the Laplacian basis (see Supplementary Information Sec. 4). In order to calculate the two-point function perturbatively, we use Feynman diagrams. For convenience, we look at the cumulant (connected) function. We sum over all the fully connected diagrams using methods similar to those customarily applied in the lattice case (see Chap. 9 in ref.[32]). The difference here is that conservation of momentum is not assumed. We calculate the two-point connected correlation function in the time domain in the eigenbasis of the Laplacian, in terms of a function Γ(τ) of the time differences. This function Γ(τ) is called the vertex function in dynamic-field-theory terms[32]. The vertex function is calculated below using perturbation theory. We show the derivation for the first three orders; the general case follows by induction.
The first-order expansion (l = 1) follows from Wick's theorem, and the second-order expansion (l = 2) is obtained in the same way; Figure 4 shows a diagrammatic representation of the second-order perturbative calculation of Eq. (15) in the eigenbasis domain. The third-order expansion (l = 3) can be derived in the same way, and, following this derivation, we can provide the lth-order result in perturbation theory (Eq. (17)), where in the integration over time we change variables to the time differences τ. Figure 5 shows a diagrammatic representation of the geometric series presented in Eq. (17). Since there is no conservation of momentum, each vertex in the sum has to include a product of four eigenvectors. The function Γ(τ) can be calculated exactly to all orders in perturbation theory. The matrix I(t) appearing in this series is defined as the entry-wise (Hadamard) square of the transition matrix P(t),

I_ij(t) = (P_ij(t))², with I(0) = 1.

We call the matrix I(t) the collision matrix. Taking the external eigenmode indices ρ = μ = η = χ = 0 (the uniform mode), setting two of the external times to 0 and the other two to t, and applying the Laplace transform, we obtain the vertex function Γ̂(s). In the Results section, we present an analysis of the convergence of the vertex function for some graph topologies, based on the asymptotic properties of the Laplace-transformed collision matrix Î(s). This allows us to derive the phase diagram of the model.
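The geometric-series structure of the vertex function can be made concrete on a small graph. The sketch below assumes the conventions P(t) = e^{-JLt} and Î_ij(s) = ∫₀^∞ e^{-st} P_ij(t)² dt, evaluates Î(s) in closed form in the Laplacian eigenbasis (each mode e^{-J(λ_a+λ_b)t} integrates to 1/(s + J(λ_a + λ_b))), and checks that the series of collision-matrix powers sums to the resolvent whenever the spectral radius of the rescaled matrix is below one, i.e., the convergence condition behind Eqs (4) and (7):

```python
import numpy as np

# small connected graph: 6-site ring plus one chord
N = 6
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0
W[0, 3] = W[3, 0] = 1.0
L = np.diag(W.sum(axis=1)) - W

J, s, sigma2 = 1.0, 0.5, 0.2           # sigma2 plays the vertex-strength role
lam, phi = np.linalg.eigh(L)

# I_hat_ij(s) = sum_{a,b} phi_ia phi_ib phi_ja phi_jb / (s + J(lam_a + lam_b)),
# the Laplace transform of the entry-wise square of P(t) = exp(-J L t)
I_hat = np.zeros((N, N))
for a in range(N):
    for b in range(N):
        v = phi[:, a] * phi[:, b]
        I_hat += np.outer(v, v) / (s + J * (lam[a] + lam[b]))

A = sigma2 * I_hat
rho = np.max(np.abs(np.linalg.eigvalsh(A)))       # spectral radius
series = sum(np.linalg.matrix_power(A, l) for l in range(60))
closed = np.linalg.inv(np.eye(N) - A)             # geometric sum, valid for rho < 1
```

When the noise strength is raised until the spectral radius reaches one, the resolvent develops a singularity, which is the numerical counterpart of the divergence of the vertex function at the phase transition.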
Figure 4

A diagrammatic representation of the second-order in perturbation theory Eq. (15).

Figure 5

A diagrammatic representation of the sum of geometric series of concatenated two-point loop diagrams (Dyson’s equation); this sum produces the self-consistent equation for the full vertex.

