
Entangled time in flocking: Multi-time-scale interaction reveals emergence of inherent noise.

Takayuki Niizato1, Hisashi Murakami2.   

Abstract

Collective behaviors that seem highly ordered and result in collective alignment, such as schooling by fish and flocking by birds, arise from seamless shuffling (such as super-diffusion) and bustling inside groups (such as Lévy walks). However, such noisy behavior inside groups appears to preclude the collective behavior: intuitively, we expect that noisy behavior would lead to the group being destabilized and broken into small subgroups, and high alignment seems to preclude shuffling of neighbors. Although statistical modeling approaches with extrinsic noise, such as the maximum entropy approach, have provided some reasonable descriptions, they ignore the cognitive perspective of the individuals. In this paper, we try to explain how the group tendency, that is, high alignment, and highly noisy individual behavior can coexist in a single framework. The key aspect of our approach is multi-time-scale interaction emerging from the existence of an interaction radius that reflects short-term and long-term predictions. This multi-time-scale interaction is a natural extension of the attraction and alignment concept in many flocking models. When we apply this method in a two-dimensional model, various flocking behaviors, such as swarming, milling, and schooling, emerge. The approach also explains the appearance of super-diffusion, the Lévy walk in groups, and local equilibria. At the end of this paper, we discuss future developments, including extending our model to three dimensions.


Year:  2018        PMID: 29689074      PMCID: PMC5915279          DOI: 10.1371/journal.pone.0195988

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Collective behavior, such as swarming, schooling, and flocking, is widely observed in nature [1-3]. These highly ordered group behaviors have been compared to a single body with one mind [4]. The emergence of these behaviors has been investigated by computer simulations, such as the Boids model and the self-propelled particle model [5-9]. In these models, each agent has neighbors within a certain radius or ranking distance (topological distance) [10-13], and agents coordinate their behavior with these neighbors. This mutual coordination, with some extra interactions, produces various flocking formations. However, experimental studies show that within coherent collective behavior, the behavior of individuals seems disordered [14-18]. In starling flocks, the neighbors of each bird shuffle and are not fixed [17]. Super-diffusion in collective behavior supports this observation: the diffusion of birds in flocks is faster than Brownian motion, and super-diffusion is also observed in fish schools (e.g., Plecoglossus altivelis) [19]. Murakami et al. also found that, when described in the center-of-mass reference frame, individuals' trajectories in fish schools show Lévy walks, which are an optimal strategy for balancing exploration and exploitation [20, 21]. Furthermore, their results suggest that the trajectory of each fish is not constrained to a local region of the group but covers the whole group. It has been pointed out that these noisy behaviors facilitate collective behavior [22, 23]. For instance, locusts in a bounded environment adjust their noisy behavior according to the environment [14], and large swarms of midges use their correlation (the degree to which an individual's behavior affects others it does not directly interact with) to achieve collective behavior despite the lack of collective order [22, 23].
The correlation in the group enlarges each individual's effective perception range beyond its interaction range. This enlarged perception range increases the group's susceptibility to external perturbation [24, 25]. Recently, Mateo et al. [26] analyzed the correlation effect in terms of network structures in flock simulations. However, the correlation itself does not explain how individuals in flocks generate their noise-like behavior: the correlation in the flock only tells us how information (an individual's perturbation) is spread and shared with other members, and never tells us how an individual's decision in the group is made in a given environment. For instance, the study of locust swarms suggests that each locust tunes its noisy behavior by itself according to its local polarity [14]. Self-tuned noisy behavior is also observed more directly in Plecoglossus altivelis schools, since the trajectory of an individual fish in the center-of-mass reference frame shows motion that is sometimes ballistic and at other times entangled [19]. All these studies suggest that these self-regulated noisy movements inside the group need to be explained independently of correlational properties, because statistical analysis lacks the individual's perspective. Until now, few studies have tried to explain these behaviors [27]. Before we describe our flocking model, we begin by discussing the origin of alignment and attraction over multiple time scales. The attraction and alignment forces used in most models result from the positions and directions of an agent's neighbors. The balance between these two forces is important for producing various flocking formations, such as swarming, milling, and schooling [5, 28-41]. Each agent in a flock is attracted to its neighbor's current position and aligns its direction of motion with that of its neighbor.
From a conventional viewpoint, attraction depends on the neighbors' positions, whereas alignment depends on the neighbors' directions of motion [2]. However, alignment also means that each agent is attracted to the infinite-future position of its neighbor along the neighbor's current direction of motion. Thus, the difference between alignment and attraction can be viewed as lying not in their properties (direction vs. position) but in the time scale, that is, attraction to the current position (t = 0) vs. attraction to the infinite-future position (t = ∞) of a neighbor. Attraction and alignment are therefore interactions on two extreme time scales. If agents have various time scales between these two extremes, we should expect various flocking behaviors to emerge. By considering these multi-time-scale interactions, we constructed a new flocking model in which various flocking behaviors, such as swarming, milling, and schooling, can coexist. The parameters of our model do not need to be tuned, unlike those of previous models. Furthermore, if we modify some interactions in our model, we can also obtain self-propelled-particle-like behavior, which shows that our model is a natural extension of previous models. The coexistence of various flocking formations means that flocking behaviors in our model are built on the noisy movement of individuals, which has statistical regularities, such as super-diffusion and Lévy walks in groups. In addition, we also demonstrate that our flocking model is at local equilibrium; that is, the local alignment speeds are much faster than the network rearrangement speeds [42]. Our model reproduces all these empirical results. Finally, we discuss future developments for our model, including extending it to three dimensions and using other interaction formats.

Material and method

Origin of alignment and attraction

First, we provide a brief overview of alignment and attraction in flocking models. Although alignment and attraction have been widely used in many flocking models, there have been few discussions of what alignment and attraction mean. Alignment is the coordination of an agent's direction of motion with its neighbors', and attraction is the tendency of the agent to move toward a neighbor's current position. Some studies suggest that alignment has a weak effect in fish schools [31-33]. This may be because, in the captive environments in which fish schools are usually observed, the concept of infinite-future positions may not be appropriate. However, for starling flocks, alignment fits the empirical results well because birds have effectively infinite space in the sky [15-17]. Although attraction and alignment seem quite different because of the difference between the properties on which they act, we can describe alignment in a way that is similar to the description of attraction. If an agent adjusts its direction of motion to that of its neighbor, the agent points toward the infinite-future position of its neighbor (see Fig 1A; note that the dashed line connecting the agent of interest at O to the infinite-future position of its neighbor is curved only because of the constraint of representing a point at infinity on a finite page). Alignment is, therefore, the result of a kind of attraction: "attraction" is an attraction to the neighbor's current position, Q0, while "alignment" is an attraction to the infinite-future position of the neighbor, Q∞. Thus, the difference between alignment and attraction is not in the interaction properties but in the time scales.
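The unification of attraction and alignment as attraction on different time scales can be sketched numerically. The helper below is our own illustration (not the authors' code): it computes the unit direction toward a neighbor's predicted position q + t·v, so t = 0 recovers ordinary attraction, while a very large t approaches the neighbor's heading, i.e., alignment.

```python
import math

def target_direction(o, q, v, t):
    """Unit vector from agent position o toward the neighbor's predicted
    position q + t*v (t = 0 -> attraction; t -> infinity -> alignment)."""
    px = q[0] + t * v[0] - o[0]
    py = q[1] + t * v[1] - o[1]
    n = math.hypot(px, py)
    return (px / n, py / n)
```

With a neighbor at (1, 0) moving in direction (0, 1), t = 0 gives the attraction direction (1, 0), while t = 10^6 gives a direction essentially equal to the neighbor's heading (0, 1).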
Fig 1

Schematics of attraction, alignment, and their generalization.

(A) Schematic of attraction (left) and alignment (right). The agent (red dot) is attracted toward its neighbor's (black dot) current position (Q0: left) and infinite-future position (Q∞: right). (B) Schematic of attraction and alignment with a neighborhood. The circle indicates the boundary of the agent's neighborhood. In the attraction case (left), the point P0 is precisely the same as Q0. In the alignment case (right), the point P∞ reflects the information of Q∞ as the point on the boundary the agent would reach if it moved in a direction parallel with its neighbor's motion. (C) Schematic of the generalization from P0 and P∞ to P_τ and P_T. P_τ is the point where the neighbor's extended direction vector crosses the boundary of the neighborhood. P_T is the point where the vector from the agent to a certain prediction point Q crosses the boundary of the neighborhood.


Short-term and long-term prediction on the edge of agent’s neighborhood

These two time scales hint at other possible flocking models. Taking this view of attraction allows us to consider attraction at arbitrary time scales t between the two extremes Q0 and Q∞. To implement these intermediate points of attraction in our model, we convert each prediction point Q to a point P on the edge of an agent's neighborhood. In Fig 1B, a circular neighborhood is superimposed on the diagrams of Fig 1A. The attraction point P0 is the position of the neighbor, and P∞ is the point where the agent's neighborhood boundary intersects the dashed line from O to Q∞ (O is the position of the agent of interest). The two extremes Q0 and Q∞ are thus converted to the points P0 and P∞, respectively. Note that the attraction point P0 is derived from information (Q0) inside the neighborhood, whereas the alignment point P∞ is derived from information (Q∞) outside the neighborhood. Fig 1C shows a generalization of our discussion. There is one neighbor within the agent's neighborhood. We define the quasi-attraction point P_τ as the point where the neighbor's extended direction vector crosses the boundary of the agent's neighborhood. When the neighbor is near the edge of the neighborhood, the quasi-attraction point is almost the same as the attraction point. Next, the quasi-alignment point P_T is defined as the point where the vector from the center to a prediction point (white circle in Fig 1C) crosses the boundary of the agent's neighborhood. The type of prediction determines the flocking behavior. In this paper, we use the alignment prediction and anticipation methods. Note that τ and T represent the two most extreme prediction times.
T is always larger than τ because τ is related to a short-term prediction while T is related to a long-term prediction: a neighbor would take only a short time (minimum time of zero) to reach the edge of the agent's neighborhood from the inside, whereas it would take longer to reach a point outside the neighborhood (e.g., the white circle in Fig 1C). Note also that, since Q and P are only points on a neighbor's predicted trajectory, the neighbor need not actually travel to either of them. However, one implication of these predictions should be noted: through the generalization shown in Fig 1C, the sharp distinction between the two extremes P0 and P∞ is weakened, while their original properties are preserved.
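The conversion of a prediction point Q to a boundary point P is geometrically a ray-circle intersection: P lies where the ray from the agent's position toward Q meets the neighborhood boundary. A minimal sketch (our own helper, with hypothetical names):

```python
import math

def boundary_point(o, q, r):
    """Intersection P of the ray from agent position o toward prediction
    point q with the circular neighborhood boundary of radius r."""
    dx, dy = q[0] - o[0], q[1] - o[1]
    n = math.hypot(dx, dy)  # distance from agent to prediction point
    return (o[0] + r * dx / n, o[1] + r * dy / n)
```

For example, an agent at the origin with neighborhood radius 2 maps a prediction point at (10, 0) to the boundary point (2, 0).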

Prediction methods

The time scales τ and T can take many values according to the neighbor's position relative to the boundary of the agent's neighborhood and the prediction method. In this section, we introduce the alignment prediction and anticipation methods. There may be other prediction methods; however, these are the only examples that we have found. In this paper, we use only the anticipation method for our analysis; the alignment method is used only to show that it can create self-propelled-particle-like behavior. Alignment prediction: This prediction method is similar to the alignment interaction that is widely used. It produces group behaviors like those of the self-propelled particle model (S1 Movie). Fig 2A shows a schematic of the alignment prediction method. The end point of an extremely elongated neighbor's direction vector is used as the prediction point Q (white circle in Fig 2A). Thus, the prediction point is Q = x_i(t) + T v_i(t), where x_i(t) is neighbor i's position and v_i(t) is its velocity vector at time t. In this paper, we fixed T = 300 because this method is used only to provide an example of the generation of self-propelled-particle-model-like behavior.
Fig 2

Schematics of the two prediction methods.

(A) Alignment prediction: the prediction point Q is the end point of the elongated neighbor's direction vector v_i(t). (B) Anticipation: the prediction point Q is the predicted landing point if the turning rate of the agent's neighbor stays the same as its previous rate, that is, dφ(t + 1) = dφ(t). The value of s indicates how far back the agent refers to its neighbor's past movement, in time steps. The quasi-alignment point P_T is the point where the segment from the agent to the prediction point Q crosses the boundary of the neighborhood.

Anticipation: This prediction method is similar to that of Morin et al. [35]. The past turning rate dφ(t) is assumed to carry over to the future turning rate, dφ(t + 1) = dφ(t), for the prediction (Fig 2B). To be precise, we write dφ(t) as dφ_s(t) because the magnitude of dφ depends on how far back the agent of interest refers to its neighbor's past movements (s steps); that is, dφ_s(t) = arg(u_1(t)) − arg(u_2(t)), where u_1(t) = x(t) − x(t − s) and u_2(t) = x(t − s) − x(t − 2s). The prediction point for anticipation is Q = x(t) + T‖v(t)‖e(t), where e(t) is a unit vector with argument arg(u_1(t)) + dφ_s(t) and ‖·‖ denotes the vector norm. The value of T is given by s·r_i(t)/‖u_1(t)‖, where r_i(t) is the radius of the neighborhood at time t. This definition means that enlarging s, that is, how far back the agent refers to its neighbor's past, enlarges T. The value of s is uniquely determined as the minimum value such that all neighbors' Q's lie outside the neighborhood. In S1 Fig, we give the distribution of s values in our simulations. We use this method for all the results (S2 Movie).
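Under our reading of the partly garbled symbols above, the anticipation point can be sketched as follows. The function and argument names are our own, and the formulas follow the reconstruction given in the text (turning rate from two past displacements, prediction time T = s·r/‖u_1‖):

```python
import math

def anticipate(xs, s, r):
    """Anticipation prediction point Q for one neighbor (a sketch of the
    paper's method under our symbol reconstruction).
    xs: dict of the neighbor's positions at times 't', 't-s', 't-2s'."""
    u1 = (xs['t'][0] - xs['t-s'][0], xs['t'][1] - xs['t-s'][1])
    u2 = (xs['t-s'][0] - xs['t-2s'][0], xs['t-s'][1] - xs['t-2s'][1])
    # past turning rate, assumed to persist one more interval
    dphi = math.atan2(u1[1], u1[0]) - math.atan2(u2[1], u2[0])
    n1 = math.hypot(*u1)
    T = s * r / n1                          # time to cover radius r at recent speed
    ang = math.atan2(u1[1], u1[0]) + dphi   # extrapolated heading
    speed = n1 / s                          # recent speed estimate
    return (xs['t'][0] + T * speed * math.cos(ang),
            xs['t'][1] + T * speed * math.sin(ang))
```

For straight-line motion the turning rate is zero and the prediction point lies a distance r ahead of the neighbor along its heading.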


Method descriptions

In this section, we describe our flocking model algorithm. The parameter settings and other more complicated mathematical expressions are provided in the Method section and S1 Text. First, we define the functions and symbols used in our model. We define four functions: (i) The distance function d(x, y): its output is the Euclidean distance between position vectors x and y in R^n (in this paper, n = 2). (ii) The mean angle function mean(X): its output is the mean angular direction of set X. (iii) The covering function Cov(X): its output is the set {θ | min({θ′ − mean(X) | θ′ ∈ X}) ≤ θ − mean(X) ≤ max({θ′ − mean(X) | θ′ ∈ X})}. The relation X ⊆ Cov(X) always holds. (iv) The random function Ran(S): its output is an element randomly selected from set S. Next, we define four sets for our model. A = {1, 2, …, N} is the set of tags for the agents, and N = |A|, where |·| indicates the cardinality of a set. E = {(i, j) | i, j ∈ A, i and j are Voronoi neighbors} is a network constructed from all directly connected Delaunay triangles. The set N_i = {j | j ∈ A, (i, j) ∈ E or i = j} represents all of agent i's neighbors, including itself. The set R_i = {j | j ∈ A, d(x_i, x_j) < R or i = j} represents all of agent i's neighbors inside its repulsion zone, including itself; R is a parameter giving the size of the repulsion zone, which is fixed in our simulations. Our model uses the Voronoi neighborhood to determine neighbors because recent studies, especially of fish schools, suggest that the Voronoi neighborhood provides a more valid description of collective behavior than other neighborhoods [36, 37]. To obtain the quasi-attraction points (P_τ's) and quasi-alignment points (P_T's), we define a circular neighborhood including all Voronoi-connected neighbors N_i(t) of agent i at time t. This circle is C_i(t) = x_i(t) + {(r_i(t)cos θ, r_i(t)sin θ) | 0 ≤ θ < 2π}, where r_i(t) = max({d(x_i(t), x_j(t)) | j ∈ N_i(t)}) + c (c is a constant parameter).
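The covering function Cov(X) above is the smallest arc around the circular mean that contains every direction in X. A minimal sketch (our own helper functions), representing the arc by the mean angle and the extreme signed offsets from it:

```python
import math

def mean_angle(angles):
    """Circular mean of a set of directions (radians)."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c)

def cov(angles):
    """Covering arc Cov(X): smallest arc around the circular mean that
    contains every direction in X. Returns (mean, lo, hi), where lo and hi
    are the extreme signed offsets from the mean, wrapped to (-pi, pi]."""
    m = mean_angle(angles)
    offs = [math.atan2(math.sin(a - m), math.cos(a - m)) for a in angles]
    return m, min(offs), max(offs)
```

For X = {0, π/2}, the mean is π/4 and the covering arc extends π/4 on either side of it, so X ⊆ Cov(X) as required.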
All short-term (i.e., P_τ) and long-term (i.e., P_T) predictions for an agent i lie on this circle C_i(t). Here, we describe the two interaction methods (Fig 3A and 3B).
Fig 3

Schematics showing short-term and long-term prediction on the circle.

(A) The circle shows quasi-attractions as internal information for agent 1. The numbers under the dots are the tags of the agents. The bold blue arc is a covering set for agent 1. (B) The circle shows quasi-alignments as external information for agent 1. Each prediction point determines a Θ_i. The bold red arc is a covering set for agent 1. (C) The complete chart of our algorithm. In the first step, find an agent that has neighbors within its repulsion range (colored green) and apply the repulsion algorithm (the updated agent is shown in red). Repeat this until no agent has neighbors within its repulsion range. Then, construct a Delaunay triangulation from all agents' positional information. Apply quasi-attraction (Cov(θ_i(t)), colored blue) and quasi-alignment (Cov(Θ_i(t)), colored red) and take their intersection (I_i(t), colored yellow). The agent of interest (colored blue) selects its direction from this yellow interval.

Direction through common information: A quasi-attraction point collection of P_τ's for agent i is a collection of directions, θ_i(t) = {θ_1(t), θ_2(t), …, θ_|N_i|(t)}, because only directional information matters on C_i(t) in our model (Fig 3A). Similarly, a quasi-alignment point collection of P_T's for agent i is also a collection of directions, Θ_i(t) = {Θ_1(t), Θ_2(t), …, Θ_|N_i|(t)} (Fig 3B). Each P_T is obtained from Q using the prediction method, that is, alignment or anticipation (see the section Prediction methods). We take the intersection of Cov(θ_i(t)) and Cov(Θ_i(t)): I_i(t) = Cov(θ_i(t)) ∩ Cov(Θ_i(t)). These common short-term (i.e., P_τ) and long-term (i.e., P_T) predictions determine agent i's next direction of motion, φ_i(t + 1) = mean(φ_i(t), Ran(I_i(t))). In the case I_i(t) = ∅, φ_i(t + 1) = mean(φ_i(t), Ran(C − J_i(t))), where J_i(t) = Cov(θ_i(t)) ∪ Cov(Θ_i(t)) and C is the unit circle. Repulsion: Repulsion in our model is a deflection of an agent's movement direction away from the covering interval. Because the neighbors in agent i's repulsion zone form R_i(t), a quasi-attraction point collection of P_τ's on R_i(t) gives a direction collection, and agent i's next direction is sampled from the complement of its covering set on the unit circle C.
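The intersection rule can be sketched as follows. This is our own illustration under simplifying assumptions: arcs are treated as plain angular intervals (lo, hi), ignoring wrap-around at 2π, and the fallback when the intersection is empty simply keeps the current heading rather than sampling from C − J as the model does.

```python
import math
import random

def intersect(arc1, arc2):
    """Intersection of two angular intervals (lo, hi); a sketch that
    ignores wrap-around at 2*pi for clarity."""
    lo, hi = max(arc1[0], arc2[0]), min(arc1[1], arc2[1])
    return (lo, hi) if lo <= hi else None

def next_direction(phi, attract_arc, align_arc, rng=random):
    """phi(t+1) = mean(phi(t), Ran(I)) when I is non-empty."""
    arc = intersect(attract_arc, align_arc)
    if arc is None:
        return phi  # simplified fallback; the model samples from C - J instead
    theta = rng.uniform(*arc)
    # circular mean of the current heading and the sampled direction
    return math.atan2(math.sin(phi) + math.sin(theta),
                      math.cos(phi) + math.cos(theta))
```

Note that even with every neighbor's state known exactly, the agent only obtains an interval and must still select one element from it, which is the source of uncertainty the text describes.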

The intersection I_i(t) is constructed from both short-term predictions (i.e., quasi-attractions) and long-term predictions (i.e., quasi-alignments) on the circle C_i. Each agent can choose any direction from the interval I_i(t). The interaction in our model determines only a bundle of directions in an obtained interval, not a single direction. This suggests that the direction the agent chooses always has some uncertainty, even if it can precisely compute and predict all its neighbors' behaviors, because the agent can only obtain the interval information I_i(t) and select one element from this interval.

Noise definition

In this paper, we regard noise as arising from an agent's ability to resolve space divisions (i.e., a cognitive resolution of space). The bundle of directions in interval I gives the next direction of motion for an agent. Treating noise as cognitive resolution blurs this interval (see S2 Fig). This assumption comes from the finite accuracy of animal cognitive systems (e.g., animal visual perception) [38-41]. Consider the p-division of the unit circle C for agent i, where p is a natural number. The p-division acts on the circle C centered on the agent's position at time t. Each fragment of the circle is the arc [φ_i(t) + (2π/p)a, φ_i(t) + (2π/p)(a + 1)), where φ_i(t) is the agent's direction of motion and 0 ≤ a < p. A large p preserves the original structure of I: the interval I and the partitioned interval are then generally equal. However, when p is small, the partitioned interval always includes the original interval I. This mismatch generates more choices from Cov(I) than from the original interval I because the selection area from which each agent chooses a direction expands (S2 Fig). In this paper, a noise parameter ζ determines the value of p, roughly p = [2π/ζ] (consistent with the circle divisions quoted for Fig 4C), where [·] is the floor function. Note that this noise strength is not the same as that used in traditional models. Noise in traditional models acts on an agent's pre-determined direction of motion; in contrast, the role of noise in our model is only to expand the interval Cov(I) and increase the set of directions the agent can take.
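The interval expansion by the p-division can be sketched as snapping the interval's endpoints outward to cell boundaries. This is our own reading of the mechanism, with p = [2π/ζ] as reconstructed above:

```python
import math

def partitioned_interval(lo, hi, zeta):
    """Expand the interval [lo, hi] to the union of the p-division cells
    it touches, with p = floor(2*pi / zeta) (our reading, consistent with
    the division counts quoted for Fig 4C)."""
    p = int(2 * math.pi / zeta)
    w = 2 * math.pi / p  # cell width: coarser cells for larger noise zeta
    return (math.floor(lo / w) * w, math.ceil(hi / w) * w)
```

With ζ = 1.0 (p = 6 cells), a narrow interval [0.5, 0.6] expands to a whole cell [0, 2π/6], so a noisier (coarser) agent gains more candidate directions, exactly the effect described above.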

Speed definition

We consider that the turning rate curbs the agent's speed (for instance, in [19], Plecoglossus altivelis's speed is small when the turning rate is large). So, we define agent i's speed at time t as v_i(t + 1) = V cos(φ_i(t + 1) − φ_i(t)), where V is the maximum velocity. The simulation proceeds as follows:

1. Distribute all agents randomly in a two-dimensional space (100 m × 100 m in our model), each with a random velocity.
2. Each agent checks whether it has neighbors in its repulsion zone. If it does, go to 2.1; otherwise, go to 3.
2.1. Compute the agent's quasi-attraction points for each agent in R_i(t) = {j | d(x_i(t), x_j(t)) < R}, make the covering set, and determine the agent's next direction from its complement.
2.2. Determine the agent's speed: v_i(t + 1) = V cos(φ_i(t + 1) − φ_i(t)), where V is the maximum velocity.
2.3. Update the agent's position: x_i(t + 1) = x_i(t) + v_i(t + 1).
3. Draw the Delaunay triangulation (i.e., find the Voronoi neighbors) E(t) = {(i, j) | i, j ∈ A, i and j are Voronoi neighbors} from the agents' distribution to find each agent's directly connected neighbors, N_i(t) = {j | j ∈ A, (i, j) ∈ E(t) or i = j}. Obtain the edge of the neighborhood, C_i(t) = x_i(t) + {(r_i(t)cos θ, r_i(t)sin θ) | 0 ≤ θ < 2π}, where r_i(t) = max({d(x_i(t), x_j(t)) | j ∈ N_i(t)}) + c (c is a constant parameter). Compute the agent's quasi-attraction and quasi-alignment points for N_i(t) to obtain θ_i(t) and Θ_i(t). Make the two covering sets, Cov(θ_i(t)) and Cov(Θ_i(t)), and take their intersection, I_i(t) = Cov(θ_i(t)) ∩ Cov(Θ_i(t)).
3.1. If I_i(t) ≠ ∅, the agent's next direction is φ_i(t + 1) = mean(φ_i(t), Ran(I_i(t))).
3.2. If I_i(t) = ∅, the agent's next direction is φ_i(t + 1) = mean(φ_i(t), Ran(C − J_i(t))).
4. Update the remaining agent states, v_i(t + 1) = V cos(φ_i(t + 1) − φ_i(t)) and x_i(t + 1) = x_i(t) + v_i(t + 1), synchronously. Then update t → t + 1 and return to 2.
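The speed rule is a one-liner; this sketch (our own) shows that straight motion keeps the maximum speed V while a 90-degree turn brings the speed to zero:

```python
import math

def next_speed(phi_next, phi, V):
    """Speed rule v(t+1) = V * cos(phi(t+1) - phi(t)): sharp turns slow
    the agent, straight motion keeps it at maximum speed V."""
    return V * math.cos(phi_next - phi)
```

This couples turning to slowing, matching the observation in [19] that Plecoglossus altivelis slows when its turning rate is large.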

Algorithm

Fig 3C shows the flow chart of our algorithm. The parameters are listed in the Method section. The model has no boundary conditions. Almost all agents are updated simultaneously using the three steps; however, some agents apply step 2 first. This procedure is introduced to avoid agents' positions coinciding. Thus, our model uses a semi-synchronous update rule.

Results

Schooling, milling, and swarming

First, we examine the effects of noise as cognitive resolution (the ability to divide space; in this case, circle division) and the maximum velocity V on the collective behavior of our model. To summarize the agents' behavior, we use two measures that have been widely used for collective behavior, the polarity O_p and the torus parameter O_t. High O_p means that the group is highly aligned (ordered state), and high O_t means that the group is in a milling state, which is sometimes observed in fish schools. In contrast, low O_p and O_t values indicate swarming (i.e., the flock is in a completely disordered state). Note that 0 ≤ O_p, O_t ≤ 1. O_p and O_t are expressed mathematically as

O_p = (1/N) |Σ_{i∈A} u_i|,   O_t = (1/N) |Σ_{i∈A} r̂_i × u_i|,

where N is the number of agents (|A| = N), u_i is the unit velocity vector of agent i, and r̂_i is a unit vector pointing from the group's center of mass toward agent i. Most flocking models have suggested that these qualitatively different behaviors (schooling, milling, and swarming) are parameter-dependent, meaning that a certain set of parameters corresponds to each group behavior, and the parametric region producing multiple states at the same time is restricted [37]. However, Fig 4A suggests that agents in our model (N = 100) can switch between schooling (high alignment), milling, and swarming behaviors. Agents can have a high O_p, a high O_t, or low O_p and O_t (S2 Movie). Almost all parameter settings (velocity and noise intensity) show the same tendency toward coexistence of the three different behaviors (Fig 4B and 4C). Even when the agents are in a highly ordered state on average (O_p > 0.9), their minimum O_p is lower than 0.3 in most cases (S3 Fig). This suggests that highly aligned flocks are not always stable. Although milling states are observed more often in low-velocity regions (S3 Fig), high O_t values above 0.5 remain in both the high-velocity and high-noise regions.
To achieve various collective behaviors in flock models, many researchers have suggested that the balance between attraction and alignment is crucial. In contrast, the agents in our model can tune the balance between attraction and alignment as the time scales (τ and T) by themselves, as discussed in the previous section.
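The two order parameters can be computed directly from positions and velocities. The sketch below uses the conventional definitions of polarity and the torus (milling) parameter, which match the descriptions above; the function name is our own.

```python
import math

def order_parameters(pos, vel):
    """Polarity O_p and torus O_t for N agents in 2D.
    O_p = |mean unit velocity|; O_t = |mean tangential alignment about
    the center of mass| (2D cross product of r-hat and u)."""
    n = len(pos)
    cx = sum(p[0] for p in pos) / n
    cy = sum(p[1] for p in pos) / n
    sx = sy = tor = 0.0
    for (x, y), (vx, vy) in zip(pos, vel):
        sv = math.hypot(vx, vy)
        ux, uy = vx / sv, vy / sv          # unit velocity of this agent
        rx, ry = x - cx, y - cy            # offset from center of mass
        rn = math.hypot(rx, ry)
        sx += ux
        sy += uy
        if rn > 0:
            tor += (rx / rn) * uy - (ry / rn) * ux  # 2D cross product
    return math.hypot(sx, sy) / n, abs(tor) / n
```

A perfectly aligned group gives O_p = 1 and O_t = 0; four agents circling their center tangentially give O_p = 0 and O_t = 1.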
Fig 4

Averaged polarity O_p and torus O_t values and their time series.

(A) Time series of O_p and O_t values (N = 100). (B) Polarity for the conditions V = 1.0, 2.0, 3.5, 5.0, 10.0, and 20.0 with respect to noise of 0.01, 0.05, 0.10, 0.50, and 1.00. V is the maximum velocity that each agent can have. The graph shows low polarity in the low-velocity regions because of repulsion effects, as discussed in the main text. (C) Torus structure for the conditions V = 1.0, 2.0, 3.5, 5.0, 10.0, and 20.0 with respect to noise of 0.01, 0.05, 0.10, 0.50, and 1.00 (628, 124, 62, 12, and 6 divisions of the unit circle). The 17,000 steps were run 30 times for all cases.

Noise as cognitive resolution works differently compared with past flocking models. Usually, increasing the noise intensity results in disordered groups. Although this trend is also observed for high noise intensity (light green in Fig 4B), it is not always the case: in low-velocity regions, noise increases O_p on average. In the low-velocity region, the agents collide with each other because they tend to stick together. Collisions happen less frequently in the high-velocity case. Each agent tends to move closer to others in this model, but high velocity decreases the chance of becoming trapped in another agent's repulsion region, because high-velocity agents pass through other agents' repulsion areas (in our simulations, the average number of collisions per step in a group of 100 agents is 4.54, 1.49, and 0.77 for V/R = 4/4, 7/4, and 10/4, respectively). These collisions destabilize the flock alignment. The misalignment caused by collisions has a greater effect at relatively low noise intensity than at relatively high noise intensity: low-noise agents show high alignment, but unexpected large direction changes (repulsion in this case) spread easily over the whole flock because of the high space-division ability. This perturbation-spreading tendency prevents low-noise agents from achieving high alignment in the low-velocity region.

Diffusion in flocks

Recent findings about real flocks and schools have shown that individual behavior inside the group is disordered rather than highly ordered. Seamless shuffling with neighbors and bustling movements have been little reported in flocking models [11]. In particular, the maximum entropy approach is known to describe these collective properties well, but it does not provide the individual perspective needed to describe what is going on inside these collective behaviors. Murakami et al.'s recent finding of Lévy walks inside fish schools shows the statistical regularity of these movements [19]. The exploring and exploiting behavior of each single fish in the school suggests intelligent individual behavior within the group. It is difficult to understand their finding without considering each individual's decision under imperfect and fragmented information. Our model explains these collective behaviors. Fig 5A shows super-diffusion in a flock, described by the mean square displacement

δr²(t) = ⟨(1/N) Σ_{i∈A} [δr_i(t0 + t) − δr_i(t0)]²⟩,

where x_i(t) is the position of agent i at time t, x_cm(t) is the position of the center of mass of the flock at time t, and δr_i(t) = x_i(t) − x_cm(t) is the position of agent i in the center-of-mass reference frame. We averaged over all N agents and over all time lags of duration t in the interval [0, T], where T is the total time interval (T = 2000 steps). The behavior of the mean square displacement δr²(t) is well described by the power law

δr²(t) = D t^α,

where α is the diffusion exponent (0 ≤ α ≤ 2) and D is the diffusion coefficient. The value of α is important: if α = 1, the process is normal diffusion, and if α > 1, the process is super-diffusion (α = 2 is purely ballistic diffusion). If the agents undergo super-diffusion collectively, information transfer is much faster than for normal diffusion.
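The diffusion exponent α can be estimated by computing the mean square displacement in the center-of-mass frame and fitting the power law by least squares in log-log space. This is a standard estimation sketch of our own, not the authors' analysis code:

```python
import math

def diffusion_exponent(traj_rel, max_lag):
    """Fit delta_r^2(t) = D * t**alpha in log-log space and return alpha.
    traj_rel: list over time of lists of (x, y) positions, already in the
    center-of-mass reference frame."""
    lags, msds = [], []
    T = len(traj_rel)
    for t in range(1, max_lag + 1):
        acc, cnt = 0.0, 0
        for t0 in range(T - t):  # average over all start times t0
            for (x0, y0), (x1, y1) in zip(traj_rel[t0], traj_rel[t0 + t]):
                acc += (x1 - x0) ** 2 + (y1 - y0) ** 2
                cnt += 1
        lags.append(math.log(t))
        msds.append(math.log(acc / cnt))
    # least-squares slope of log(msd) vs log(lag) is the exponent alpha
    n = len(lags)
    mx, my = sum(lags) / n, sum(msds) / n
    num = sum((x - mx) * (y - my) for x, y in zip(lags, msds))
    den = sum((x - mx) ** 2 for x in lags)
    return num / den
```

Purely ballistic relative motion gives MSD proportional to t² and hence α = 2, the upper bound quoted in the text.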
Fig 5

Shuffling behavior inside the flock.

(A) Mean square displacement in the center of mass reference frame up to 30 steps. D = 26.0 and α = 1.59. (B) Neighbor overlap Q against time t for groups of 100 individuals. Full lines represent Eq (6) for 0.66 (fitted value) with α = 1.55. (C) Neighbor overlap Q against number of neighbors M for 100 individuals. Values are not fitted for high M because M is too large for the group size (M = 80 and N = 100).

Fig 5A shows that the diffusion exponent of our model is >1 (i.e., super-diffusion). The diffusion exponent α is 1.55 on average, which matches the experimental results well (Plecoglossus altivelis: 1.34 < α < 1.73; starlings: α = 1.73; other parameter results for Eq (4) and mutual diffusion are listed in S4 Fig and S1 Table). This super-diffusive behavior in flocks results in neighbor shuffling, meaning that each agent's neighbors change with time. Cavagna and other researchers [17] described this using the time-dependent neighbor overlap Q(t), where Mᵢ(t) is the number of agent i's neighbors at time t₀ + t that were also among its M nearest neighbors at time t₀. Q decreases over time. According to their calculation, the decay of Q with time t is well fitted by Eq (6), where the first parameter is an effective dimension, α is the mutual diffusion exponent, and c is a constant fitting parameter (the method of obtaining the effective dimension and α is shown in S5 and S6 Figs). This relation is also satisfied by our results: agents in our model change position dynamically inside the flock.
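The neighbor-overlap measure Q(t) can be sketched as below, assuming the common topological (rank-based) neighbor definition; the array shapes and function names are illustrative, not the paper's exact implementation.

```python
import numpy as np

def nearest_neighbors(positions, m):
    """Indices of each agent's m nearest neighbors (self excluded)."""
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude self-distances
    return np.argsort(d, axis=1)[:, :m]

def neighbor_overlap(traj, t0, t, m):
    """Q(t): average fraction of agent i's m nearest neighbors at t0
    that are still among its m nearest neighbors at t0 + t."""
    nb0 = nearest_neighbors(traj[t0], m)
    nbt = nearest_neighbors(traj[t0 + t], m)
    overlap = [len(set(a) & set(b)) / m for a, b in zip(nb0, nbt)]
    return float(np.mean(overlap))
```

For a frozen configuration Q stays at 1; neighbor shuffling drives it toward the random-overlap baseline.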

Lévy walk in the flock

The Lévy walk is an optimal searching strategy found widely in nature, from the movement of bacteria to human movement [43-46], especially in foraging. It can be described by the power law distribution of step lengths, P(l) ∝ l⁻ᵘ, where l is the step length and μ is the power law exponent. A step length is the cumulative traveling distance between two pausing points. This procedure is based on the intermittent behavior of the Lévy walk. The definition of a pause (i.e., the state of not moving) follows Murakami and other researchers [19, 43, 44]: when ||δrᵢ(t) − δrᵢ(t − 1)|| < dr, that is, when the traveling distance in the center of mass reference frame for one step is smaller than a given threshold dr, agent i is regarded as pausing. Fig 6A shows an example of one agent's trajectory in the center of mass reference frame for a group size of 100. The trajectory combines ballistic (non-entangled) movements and entangled lines, and resembles the trajectories of Plecoglossus altivelis in Ref. [19]. This observation is also confirmed by our statistical analysis. Fig 6B shows the step length distributions for 100 agents (gray lines), which are similar to each other. The red line in Fig 6B is the fit of the averaged data for the 100 distributions with a truncated power law (slope μ = 2.11), which is often used to estimate the Lévy walk exponent. S3 Table lists an example of all the statistical tests for 100 agents, and averaged results over 20 simulations of 100 agents for other threshold dr settings (see the detailed discussion in the Method section or Refs. [47, 48]).
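The pause rule and step-length extraction described here can be sketched as follows, under the stated definition (a pause is a one-step displacement below dr in the center-of-mass frame; a step length is the distance accumulated between pauses):

```python
import numpy as np

def step_lengths(rel_traj, dr):
    """Extract Levy-walk step lengths from one agent's trajectory in the
    center-of-mass frame.

    rel_traj: (T, 2) positions; dr: pause threshold.
    A time step is a pause when the one-step travel distance is below dr;
    a step length is the cumulative distance traveled between two pauses.
    """
    moves = np.linalg.norm(np.diff(rel_traj, axis=0), axis=1)
    steps, current = [], 0.0
    for m in moves:
        if m < dr:                 # pausing: close the current step
            if current > 0.0:
                steps.append(current)
            current = 0.0
        else:
            current += m           # keep accumulating travel distance
    if current > 0.0:
        steps.append(current)      # close a trailing open step
    return steps
```

The resulting list of step lengths is what the truncated power law is fitted to.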
Fig 6

Lévy walk behavior inside flocks.

(A) Example of an agent’s trajectory in the center of a mass reference frame for 4000 steps. The vertical bar is the mean flock size during the simulation. (B) Power law distribution of step lengths. The vertical line is the step length rank for each individual. The gray lines correspond to each individual’s step length for its trajectory. The bold red line is the fitting curve with averaged parameters for the truncated power law function. Details of the statistical tests for the power law are given in S2 and S3 Tables and the Method section.

Our result of an average value of μ = 2.11 (see S2 Table for various parameter settings) matches Murakami's experimental data well (1.86 < μ < 2.33). Furthermore, an exponent of μ ≈ 2 is optimal for foraging in certain environments [20, 21]. The whole trajectory covers the mean flock size in Fig 6A: as Murakami and other researchers suggested [19], the bustling behavior inside the flock is not restricted to a local region but occurs throughout the group. Thus, the agents in our model can communicate with each other in a highly efficient way.

Local equilibrium of the flock

We have shown that our model can produce the noisy behavior of individuals. However, the highly ordered macroscale behavior must be connected to this noisy microscale behavior. To bridge the gap, we use the findings of Mora and other researchers [42]. According to their study, the parallel processing of micro- and macroscopic events on different time scales explains the apparent paradox of an ordered state resting on a disordered one. They defined two time scales: the alignment time scale, which corresponds to an individual's behavior over short time scales, and the neighbor exchange time scale, which corresponds to the rearrangement time of the network (the connected topological neighborhood) of the whole flock. Following [42], we use the name neighbor exchange time rather than network rearrangement time. The alignment time is the relaxation time that eliminates the effect of divergent modes due to the Goldstone theorem (see Ref. [42] for a detailed discussion); it is approximately τrelax = (Jn)⁻¹, where J is the overall interaction strength and n is the number of nearest neighbors of each agent. This formula assumes that the group is in a state of high polarity, O ≈ 1, so the method must be applied carefully to our model; we discuss its validity in the Method section. The neighbor exchange time is defined as the characteristic decay time of the autocorrelation function Cnetwork(t) = Σᵢⱼ nᵢⱼ(t₀)nᵢⱼ(t₀ + t), where nᵢⱼ(t) indicates whether agent j is among agent i's nearest neighbors, based on the topological distance kᵢⱼ(t), the time-dependent rank of agent j among agent i's neighbors ordered by distance. This function decays exponentially, Cnetwork(t) ≈ C₀ exp(−t/τnetwork) (see S7 Fig). The relationship between τrelax and τnetwork is crucial. When τrelax ≈ τnetwork, the flock shows only highly polarized behavior without any rapid shuffling of neighbors.
When τrelax ≥ τnetwork, the flock shows swarming because the network rearrangement is faster than the local alignment. The most interesting case is τrelax ≪ τnetwork, an adiabatic system or local equilibrium: the alignment is much faster than the network rearrangement. This gap between the two time scales enables the flock to bridge consistently between highly ordered macroscopic behavior and disordered microscopic behavior. Fig 7A shows our results: the two time scales differ by about two orders of magnitude on average (τnetwork ≈ 180 τrelax). The size of this gap in our model is similar to Mora's experimental result (τnetwork ≈ 100 τrelax). In addition, we found no correlation between τrelax and τnetwork, as Mora and other researchers also found (Fig 7B; Pearson correlation coefficient: 0.02).
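The two time scales can be sketched numerically: τrelax from the approximation (Jn)⁻¹, and τnetwork from a log-linear fit of the exponentially decaying autocorrelation. The fitting approach below is an assumption for illustration, not the paper's exact procedure.

```python
import numpy as np

def relaxation_time(j_strength, n_neighbors):
    """Alignment time scale, tau_relax ~ 1 / (J * n)."""
    return 1.0 / (j_strength * n_neighbors)

def network_time(c_network):
    """Decay time of the network autocorrelation, assuming
    C(t) ~ C0 * exp(-t / tau): fit a line to log C(t)."""
    t = np.arange(len(c_network))
    slope, _ = np.polyfit(t, np.log(c_network), 1)
    return -1.0 / slope

# Synthetic check: an exactly exponential decay is recovered
tau = 180.0
c = np.exp(-np.arange(100) / tau)
print(round(network_time(c)))  # 180
```

With J ≈ 0.1 and n = 5 (the values reported in S8 Fig), τrelax comes out near 2 steps, two orders of magnitude below the fitted τnetwork, consistent with the gap reported above.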
Fig 7

Local equilibrium in flocks.

(A) Histogram of alignment time τrelax (red bars) and neighbor exchange time τnetwork (blue bars). The vertical axis shows the frequency over 50 simulations. The alignment time 1/Jn is almost equal to 1 step in our simulation, where n = 5. (B) The scatter plot of τrelax versus τnetwork shows no correlation (Pearson correlation coefficient: 0.02).


Discussion

Our model is based on a reconsideration of attraction and alignment, which most flocking models use. We re-interpreted attraction and alignment as differing in their time scales rather than in their fundamental nature. Each agent uses several different time scales through short-term and long-term prediction; the time scales are heterogeneous across agents because each agent uses different prediction horizons (τ and T). The circle C, which reflects both short-term and long-term prediction, plays an important role in our model: the interval I on C never yields a single direction but a bundle of directions (movement possibilities) for each agent. As Fig 4 shows, the ratio between the maximum velocity V and the repulsion radius R, rather than the noise strength, plays the definitive role in producing the various collective behaviors. Therefore, forming the interval on C is the main driving force behind the various properties seen in this study. Note that this is not the same as a stochastic model with given probability distributions [49, 50], as in stochastic differential equations, because our model never specifies any probability distribution in advance. The dots on the circle in our model reflect events on various time scales. Having various possible movements means that an agent's movement differs from computing one specific direction by combining multiple forces (e.g., attraction and alignment), because the covering interval Cov(I) only gives each agent a set of choices. Neighbors' directions of movement can only restrict an agent's movement to a certain region, no matter how refined its space division ability p is. This restriction on direction selection is weaker than in previous flocking models, and it allows our model to exhibit super-diffusion and the Lévy walk. This type of interaction resembles the mutual anticipation model of Gunji and other researchers [51-53].
The agents in their model also have many possible transition states, although their anticipation method is unique. Gunji and other researchers suggest that having many potential vectors (possible movements) produces a group with high cohesion despite high noise intensity. The rule that an agent has many possible movements may be key for future models attempting to produce collective order while shuffling individuals' positions. The coexistence of different time scales leads to local equilibrium in our model. Local equilibrium (an adiabatic system) resembles Haken's synergetics theory [54, 55], known for the enslaving principle, which says that a fast system is directly governed by a slow system. In our flocking model, the alignment is the relatively fast system and the neighbor exchange (or network rearrangement) is the relatively slow system (τrelax ≪ τnetwork). The theory suggests that the network dynamics in the flock directly affects the local alignment dynamics, and that a time scale gap is necessary to achieve self-organization in the flock. It also supports our intuitive understanding of the behavior as a collective having one mind [4]. Finally, we discuss possible future developments of our model. Extension to three dimensions: our model extends easily to three dimensions. Some studies have reproduced the coexistence of schooling and milling; however, as discussed above, the parameter region for coexistence is very small. The main problem with these studies, we think, is the difficulty of extending them to higher dimensions, because they use the relative orientation heading in their models and analyses [25, 28]. The relative orientation heading is the angle between two heading directions, and a problem emerges when considering the intersection of a bundle of heading directions: this is only valid in two-dimensional space, because three-dimensional space has torsion.
However, in our model, we simply replace the interaction circle with an interaction sphere and create a spherical-cap-shaped covering function for three dimensions. Other prediction methods and neighborhood shapes: we introduced the alignment (I) and anticipation (II) prediction methods. Alignment prediction produces Boids-like and SPP-like behavior, and anticipation prediction matches many empirical results well. There are more possible prediction methods, which would change the behavior of the model. In addition, different neighborhood shapes may also change the behavior [27, 30]. For example, Sonoda and other researchers [30] found that changing the interaction range from a circle to an ellipse created a long-lasting torus (milling) structure. In our model, agents formed a torus structure, but it lasted for a shorter time than schooling and swarming. Using a different neighborhood shape may make the torus structure last longer.

Methods

Unit length

We define the unit length as the norm of a unit velocity vector and denote it by m.

Parameter setting

The parameters are as in Table 1 unless indicated otherwise. The ratio between the repulsion radius and the maximum velocity is the most decisive factor in our model because, as suggested in the manuscript, collisions (or repulsion) in the flock depend largely on an agent's velocity. In our setting, this ratio is fixed at 7/4. If the repulsion radius is taken as an agent's body size, the speed is not very high relative to body size. In the Supporting Information, we show that three ratio values do not affect our results, and we suggest that a low ratio would not be suitable for our analysis.
Table 1

Parameters for Figs 4 to 7.

Parameter          | Value | Symbol | Unit
Number of agents   | 100   | N      | none
Maximum velocity   | 7     | V      | m per step
Repulsion radius   | 4     | R      | m
Noise strength     | 0.2   | ζ      | rad
Constant parameter | 10    | c      | m
Time               | 1     | t      | steps

Threshold value dr for Lévy walk

We set the threshold parameter dr for each trial to obtain enough step length data: dr = 0.8D, where D is the average one-step traveling distance in the center of mass reference frame over the whole process and T = 9000. We examined three cases (dr = 0.9D, 0.8D, and 0.7D) to demonstrate that our results are independent of the threshold parameter.

Validity of the approximation for τrelax ≈ 1/Jn

Mora’s approximation for the relaxation time can apply at high polarities, that is, O ≈ 1. To apply this method to our flocking model, the method requires modifications because the noisy behaviors in our model prevent stable high-polarity behaviors from being obtained. To confirm the validity of the method for our results, we used two treatments: (i) creating a highly ordered group with O > 0.9, and (ii) omitting the out values that come from highly noisy behavior. These are discussed in more detail below. Compared with τrelax, obtaining τnetwork does not need such careful treatment. All the graph curves apparently satisfy exponential approximations (S7 Fig). Setting high-velocity agents (V = 20 and R = 2): Under this condition, the average O is 0.93. The approximation of the overall interaction strength, J, is valid under this condition. This condition rarely happens in real flocks, especially in two dimensions, because the agent’s speed (V) is 10 times faster than its body length (R). This condition is only used to examine the validity of the approximation. Out values: The out value is far from the average of a series of trials (2000 steps). In S8 Fig and S4 Table, the average J values with no out values are around 0.1. However, out values with very low (<<0.1) or even negative J values occur because the average O is far from 1.0 (e.g. O ≈ 0.5, when V = R = 4). Fortunately, these values are rare in our model, occurring at most about once in 2000 steps (V = 4 and R = 4). Furthermore, we have no out values at all when V = 20 and R = 2. This proves that Mora’s approximation works when V = 20 and R = 2. Therefore, we expect that the approximation is valid for most cases.

Frequency distribution of s in the anticipation method.

The value of s indicates how many steps back an agent refers to its neighbor's past movement. Each bar corresponds to one parameter set: V/R = 4/4 (blue), 7/4 (green), 10/4 (red). The graph shows that in our setting the s values concentrate at the minimum; that is, most agents refer back one step before moving.

Two examples of the covering function when p = 32 and 8.

The light blue band is any interval I on the edge of the neighborhood. Light red bands are for each p. The covered intervals tend to be larger when p is low.

Maximum and minimum values of the polarity and torus order parameters.

(A), (B) Average maximum values of the two order parameters. The horizontal axis is the maximum velocity V, with repulsion radius R = 2 and number of individuals N = 100. The values tend to be high in the low-velocity regions. (C), (D) Average minimum values of the two order parameters, with the same axes and parameters. The minimum value is less than 0.3 in the high-velocity regions, although the corresponding average values are more than 0.9 in Fig 2B.

Normal diffusion and mutual diffusion for each parameter set.

(A) Graphs of the super-diffusion for our results with other parameters (V = 4 and 10 with R = 4). (B) Graphs of mutual diffusion for our results with the same parameters, where δrᵢⱼ(t) = rᵢ(t) − rⱼ(t) and agent j is the nearest neighbor of agent i at time t₀. The mean square displacement approximately fits the power law Dtᵅ.

Neighbor overlap for M and t.

(A), (B) Graphs of the neighbor overlap value Q(t) against the number of neighbors M for our results with other parameters (V = 4 and 10 with R = 4). (C), (D) Graphs of the neighbor shuffling against time t for the same parameters. The full lines represent Eq (6) in the main manuscript with fitted value 0.073 and α = 1.49 for V = 4, and with fitted value 0.066 and α = 1.57 for V = 10.

Number of agent’s neighbors M as a function of radius R of the sphere containing them.

The scatter is fitted by M = aRᵈ (with d the effective dimension) up to M = 30 when N = 50, V = 7, and R = 4. In this figure, a = 0.02 and R² = 0.998. The fitted exponent gives the effective dimension for fitting Eq (6).

Example of an exponential function Cnetwork(t) ≈ C0exp(−t/τnetwork) for 50 samples with V = 7 and R = 4.

All the curves decay exponentially.

Histogram of interaction strength J for each parameter set for 50 samples.

The value of J is low for low velocities because of the collision effects. The lowest J value is around 0.11. Considering n = 5 (the median number of neighbors), τrelax is less than 2 steps at most. In particular, for V = 7 and R = 4, τrelax ranges from 1.33 to 1.53 steps.

Averaged results for running the simulation 20 times for N = 100.


Averaged results for running the simulation 20 times for different parameter sets and different threshold parameters.

The power law exponents μ are around 2. The pass rate is the average pass rate for the Kolmogorov–Smirnov (KS) test [36-40]. We use the truncated power law function to fit our flocking data [37, 38]: p(x) ∝ x⁻ᵘ for x_min ≤ x ≤ x_max, where μ is the power law exponent, x_min is the start of the tail of a series of trials, and x_max is the maximum value of the series. A pass rate of 1 means that the step length distribution of every agent in the flock can be fitted by the truncated power law distribution. For V = 4, the step length distributions tend to fail the KS test because of the collision effect among agents in the flock: agents at low velocities tend to have high collision probabilities, and because the repulsion force caused by a collision makes an agent turn away from its neighbors, the ballistic movements of agents in the flock disappear.
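The truncated power law named in these tests can be written out explicitly; a sketch assuming the standard normalization over [x_min, x_max]:

```python
import numpy as np

def truncated_power_pdf(x, mu, x_min, x_max):
    """Density of the truncated power law p(x) ~ x**(-mu) on [x_min, x_max]."""
    # closed-form normalization constant (valid for mu != 1)
    c = (1.0 - mu) / (x_max ** (1.0 - mu) - x_min ** (1.0 - mu))
    return c * x ** (-mu)

# Numerical check: the density integrates to 1 over [x_min, x_max]
xs = np.linspace(1.0, 50.0, 200_001)
y = truncated_power_pdf(xs, 2.11, 1.0, 50.0)
area = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(xs)))  # trapezoid rule
```

The upper cutoff x_max is what distinguishes this fit from a pure power law and keeps the density normalizable for any μ > 0.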

Example of the power law test for all the flock members (all the distributions in Fig 6B and threshold dr = 0.8D).

The test followed Clauset’s methods [39]. If p > 0.1, then the power law assumption cannot be rejected. All the agents pass the KS test. In addition to the KS test, we also used the Akaike Information Criterion (AIC) to examine whether the truncated power law graph can be distinguished from the exponential function f(x) = λexp(−λ(x − x)), where λ is the exponential parameter [36]. If the AIC value is close to 1, the obtained graphs are more likely to follow the truncated power law rather than the exponential power law. (PDF) Click here for additional data file.

Averaged results for running the simulations 50 times for N = 100.

From the left: the average gap between τnetwork and τrelax; the Pearson correlation coefficient (PCC) and its p value, which show no correlation for all parameter sets; and the average number of outlier values over 2000 steps in a simulation series.

Algorithm description and the definition of symbols and functions.


Example of the simulation of our model using alignment prediction methods for N = 50.


Example of the simulation of our model using the anticipation methods for N = 50.

References (40 in total)

1.  Visual sensory networks and effective information transfer in animal groups.

Authors:  Ariana Strandburg-Peshkin; Colin R Twomey; Nikolai W F Bode; Albert B Kao; Yael Katz; Christos C Ioannou; Sara B Rosenthal; Colin J Torney; Hai Shan Wu; Simon A Levin; Iain D Couzin
Journal:  Curr Biol       Date:  2013-09-09       Impact factor: 10.834

2.  Collective memory and spatial sorting in animal groups.

Authors:  Iain D Couzin; Jens Krause; Richard James; Graeme D Ruxton; Nigel R Franks
Journal:  J Theor Biol       Date:  2002-09-07       Impact factor: 2.691

Review 3.  Collective cognition in animal groups.

Authors:  Iain D Couzin
Journal:  Trends Cogn Sci       Date:  2008-12-06       Impact factor: 20.229

4.  Finite-size scaling as a way to probe near-criticality in natural swarms.

Authors:  Alessandro Attanasi; Andrea Cavagna; Lorenzo Del Castello; Irene Giardina; Stefania Melillo; Leonardo Parisi; Oliver Pohl; Bruno Rossaro; Edward Shen; Edmondo Silvestri; Massimiliano Viale
Journal:  Phys Rev Lett       Date:  2014-12-01       Impact factor: 9.161

5.  Scale-free correlations in starling flocks.

Authors:  Andrea Cavagna; Alessio Cimarelli; Irene Giardina; Giorgio Parisi; Raffaele Santagati; Fabio Stefanini; Massimiliano Viale
Journal:  Proc Natl Acad Sci U S A       Date:  2010-06-14       Impact factor: 11.205

6.  Inferring the structure and dynamics of interactions in schooling fish.

Authors:  Yael Katz; Kolbjørn Tunstrøm; Christos C Ioannou; Cristián Huepe; Iain D Couzin
Journal:  Proc Natl Acad Sci U S A       Date:  2011-07-27       Impact factor: 11.205

7.  Deciphering interactions in moving animal groups.

Authors:  Jacques Gautrais; Francesco Ginelli; Richard Fournier; Stéphane Blanco; Marc Soria; Hugues Chaté; Guy Theraulaz
Journal:  PLoS Comput Biol       Date:  2012-09-13       Impact factor: 4.475

8.  Vision in two cyprinid fish: implications for collective behavior.

Authors:  Diana Pita; Bret A Moore; Luke P Tyrrell; Esteban Fernández-Juricic
Journal:  PeerJ       Date:  2015-08-04       Impact factor: 2.984

9.  Swarming bacteria migrate by Lévy Walk.

Authors:  Gil Ariel; Amit Rabani; Sivan Benisty; Jonathan D Partridge; Rasika M Harshey; Avraham Be'er
Journal:  Nat Commun       Date:  2015-09-25       Impact factor: 14.919

10.  Diffusion of individual birds in starling flocks.

Authors:  A Cavagna; S M Duarte Queirós; I Giardina; F Stefanini; M Viale
Journal:  Proc Biol Sci       Date:  2013-02-13       Impact factor: 5.349
