
Cost efficiency of institutional incentives for promoting cooperation in finite populations.

Manh Hong Duong, The Anh Han.

Abstract

Institutions can provide incentives to enhance cooperation in a population where this behaviour is infrequent. This process is costly, and it is thus important to optimize the overall spending. This problem can be mathematically formulated as a multi-objective optimization problem where one wishes to minimize the cost of providing incentives while ensuring a minimum level of cooperation, sustained over time. Prior works that consider this question usually omit the stochastic effects that drive population dynamics. In this paper, we provide a rigorous analysis of this optimization problem, in a finite population and stochastic setting, studying both pairwise and multi-player cooperation dilemmas. We prove the regularity of the cost functions for providing incentives over time, characterize their asymptotic limits (infinite population size, weak selection and large selection) and show exactly when reward or punishment is more cost efficient. We show that these cost functions exhibit a phase transition phenomenon when the intensity of selection varies. By determining the critical threshold of this phase transition, we provide exact calculations for the optimal cost of the incentive, for any given intensity of selection. Numerical simulations are also provided to demonstrate analytical observations. Overall, our analysis provides for the first time a selection-dependent calculation of the optimal cost of institutional incentives (for both reward and punishment) that guarantees a minimum level of cooperation over time. It is of crucial importance for real-world applications of institutional incentives since the intensity of selection is often found to be non-extreme and specific for a given population.
© 2021 The Authors.


Keywords:  evolution of cooperation; evolutionary game theory; institutional incentives

Year:  2021        PMID: 35153590      PMCID: PMC8791050          DOI: 10.1098/rspa.2021.0568

Source DB:  PubMed          Journal:  Proc Math Phys Eng Sci        ISSN: 1364-5021            Impact factor:   2.704


Introduction

The problem of promoting the evolution of cooperative behaviour within populations of self-regarding individuals has been intensively investigated across diverse fields of behavioural, social and computational sciences [1-5]. Various mechanisms responsible for promoting the emergence and stability of cooperative behaviours among such individuals have been proposed. They include kin and group selection [6,7], direct and indirect reciprocity [8-12], spatial networks [13-16], reward and punishment [17-22] and pre-commitments [23-27]. Institutional incentives, namely rewards for cooperation and punishment for wrongdoing, are among the most important ones [22,28-36]. Unlike other mechanisms, institutional incentives assume the existence of an external decision maker (e.g. institutions such as the United Nations or the European Union) that has a budget to interfere in the population in order to achieve a desirable outcome. Institutional enforcement mechanisms are crucial for enabling large-scale cooperation. Most modern societies implement certain forms of institutions for governing and promoting collective behaviours, including cooperation, coordination and technology innovation [37-42]. Providing incentives is costly, and it is therefore important to minimize the cost while ensuring a sustained level of cooperation over time [28,31,41]. Despite its paramount importance, so far only a few works have explored this question. In particular, Wang et al. [35] use optimal control theory to provide an analytical solution for cost optimization of institutional incentives, assuming deterministic evolution and infinite population sizes (modelled using replicator dynamics). This work therefore does not take into account various stochastic effects of evolutionary dynamics, such as mutation and non-deterministic behavioural update [4,43,44].
In a deterministic system consisting of cooperators and defectors, once the latter disappear (for instance, through strong institutional punishment), there is no further change to the system and thus no further interference in it is required. When mutation is present, however, this behaviour can recur and become abundant over time, requiring institutions to spend more of their budget on providing further incentives. Moreover, a key factor of behavioural update, the intensity of selection [4]—which determines how strongly an individual bases their decision to copy another individual’s strategy on their fitness difference—might strongly impact an institutional incentive strategy and its cost efficiency. Its value is usually found to be specific to a given population [45-48] and thus should be taken into account when designing suitable cost-efficient incentives. For instance, when selection is weak, such that behavioural update is close to a random process (i.e. an imitation decision is independent of how large the fitness difference is), providing incentives, however strong, would make little difference in causing behavioural change. When selection is strong, incentives that ensure a minimum fitness advantage to cooperators would ensure a positive behavioural change. In a stochastic, finite-population context, this problem has so far been investigated primarily using agent-based and numerical simulations [28,31,49-52]. Results demonstrate several interesting phenomena, such as the significant influence of the intensity of selection on incentive strategies and optimal costs. However, no satisfactory rigorous analysis is available at present that allows one to determine the optimal way of providing incentives. This is a challenging problem because of the large but finite population size and the complexity of the stochastic processes governing the population dynamics. In this paper, we provide exactly such a rigorous analysis.
We study cooperation dilemmas in both pairwise (the Donation game (DG)) and multi-player (the Public Goods game (PGG)) settings [4]. They are among the most well-studied models for investigating the evolution of cooperative behaviour where individual defection is always preferred over cooperation while mutual cooperation is the preferred collective outcome for the population as a whole. Adopting a popular stochastic evolutionary game approach for analysing well-mixed finite populations [53-55], we derive the total expected costs of providing institutional reward or punishment, characterize their asymptotic limits (namely, for an infinite population, weak selection and strong selection) and show the existence of a phase transition phenomenon in the optimization problem when the intensity of selection varies. We calculate the critical threshold of phase transitions and study the minimization problem when the selection is less than and greater than the critical value. We furthermore provide numerical simulations to demonstrate the analytical results. The rest of the paper is organized as follows. In §2, we introduce the models and methods, deriving mathematical optimization problems that will be studied. The main results of the paper are presented in §3. In §4, we discuss possible extensions for future work. Finally, detailed computations, technical lemmas and proofs of the main results are provided in the electronic supplementary material.

Models and methods

Cooperation dilemmas

We consider a well-mixed, finite population of N self-regarding individuals or players, who interact with each other using one of the following one-shot (i.e. non-repeated) cooperation dilemmas: the DG or its multi-player version, the PGG. In these games, a player can choose either to cooperate (i.e. a cooperator, or C player) or to defect (i.e. a defector, or D player). Let Π_C(i) and Π_D(i) be the average pay-offs of a C player and a D player, respectively, in a population with i C players and N − i D players (see also §2.3 for more details). We show below that the difference δ = Π_C(i) − Π_D(i) does not depend on i. For cooperation dilemmas, it is always the case that δ < 0.

Donation game

The pay-off matrix of the DG (for a row player) is given as follows:

        C       D
  C   b − c    −c
  D     b       0

where c and b represent the cost and benefit of cooperation, with b > c > 0. The DG is a special version of the Prisoner’s Dilemma (PD) game. Denoting π_{X,Y} as the pay-off of an X strategist when playing with a Y strategist in the pay-off matrix above, we obtain

  Π_C(i) = [(i − 1)π_{C,C} + (N − i)π_{C,D}]/(N − 1) = (i − 1)b/(N − 1) − c   and   Π_D(i) = i π_{D,C}/(N − 1) = ib/(N − 1).

Thus,

  δ = Π_C(i) − Π_D(i) = −(c + b/(N − 1)).

Public Goods game

In a PGG, players interact in a group of size n, where they decide to cooperate, contributing an amount c to a common pool, or to defect, contributing nothing to the pool. The total contribution in a group is multiplied by a factor r, where 1 < r < n (for the PGG to be a social dilemma), and is then shared equally among all members of the group, regardless of their strategy. We obtain [56]

  Π_C(i) = (rc/n)[(n − 1)(i − 1)/(N − 1) + 1] − c   and   Π_D(i) = (rc/n) · (n − 1)i/(N − 1).

Thus,

  δ = Π_C(i) − Π_D(i) = c( r(N − n)/(n(N − 1)) − 1 ).
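The constant pay-off difference δ for the PGG can be recovered by a short expectation computation; a sketch, under the standard assumption that the n − 1 co-players of a focal individual are sampled from the remaining N − 1 members of the population:

```latex
% k: number of cooperators among the n - 1 co-players of the focal player.
% A focal C in a group with k co-operating co-players earns (k+1)rc/n - c;
% a focal D earns k rc/n.
\begin{aligned}
\mathbb{E}[k \mid \text{focal } C] &= \frac{(n-1)(i-1)}{N-1},
\qquad
\mathbb{E}[k \mid \text{focal } D] = \frac{(n-1)\,i}{N-1},\\[4pt]
\delta = \Pi_C(i) - \Pi_D(i)
  &= \frac{rc}{n}\Bigl(\mathbb{E}[k \mid C] + 1 - \mathbb{E}[k \mid D]\Bigr) - c
   = \frac{rc}{n}\Bigl(1 - \frac{n-1}{N-1}\Bigr) - c
   = c\Bigl(\frac{r(N-n)}{n(N-1)} - 1\Bigr).
\end{aligned}
```

Note that the result is independent of i, and is negative whenever r < n(N − 1)/(N − n), which is always satisfied under the social-dilemma condition 1 < r < n.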

Cost of institutional reward and punishment

To reward a cooperator (respectively, punish a defector), the institution has to pay an amount θ/a (resp. θ/b) so that the cooperator’s (defector’s) pay-off increases (decreases) by θ, where a, b > 0 are constants representing the efficiency ratios of providing the corresponding incentive. As we study reward and punishment separately, without losing generality, we set a = b = 1 [22,28]. Thus, the key question here is: what is the optimal value of the individual incentive cost, θ, that ensures a sufficient desired level of cooperation in the population (in the long run) while minimizing the total cost spent by the institution?

Deriving the expected cost of providing institutional incentives

We adopt here the finite population dynamics with the Fermi strategy update rule [44], stating that a player X with fitness f_X adopts the strategy of another player Y with fitness f_Y with a probability given by (1 + e^{−β(f_Y − f_X)})^{−1}, where β represents the intensity of selection (see details in §2c). We compute the expected number of times the population contains i C players, for 1 ≤ i ≤ N − 1. For that, we consider an absorbing Markov chain of N + 1 states, {S_0, …, S_N}, where S_i represents a population with i C players. S_0 and S_N are absorbing states. Let U = {u_{ij}} denote the transition matrix between the N − 1 transient states, {S_1, …, S_{N−1}}. The transition probabilities can be defined as follows, for 1 ≤ i ≤ N − 1:

  u_{i,i±1} = T^±(i),   u_{i,i} = 1 − T^+(i) − T^−(i),   u_{i,j} = 0 for all j ∉ {i − 1, i, i + 1},

where T^+(i) and T^−(i) are the probabilities of increasing or decreasing the number of C players by one in a time step. The entries n_{ji} of the so-called fundamental matrix (I − U)^{−1} of the absorbing Markov chain give the expected number of times the population is in the state S_i if it starts in the transient state S_j [57]. As a mutant can randomly occur at either S_0 or S_N, the expected number of visits at state S_i is, thus, (1/2)(n_{1i} + n_{N−1,i}). The total cost per generation is iθ for institutional reward and (N − i)θ for institutional punishment. Hence, the expected total costs of interference for institutional reward and institutional punishment are, respectively,

  E_r(θ) = (θ/2) Σ_{i=1}^{N−1} (n_{1i} + n_{N−1,i}) i   and   E_p(θ) = (θ/2) Σ_{i=1}^{N−1} (n_{1i} + n_{N−1,i}) (N − i).    (2.2)
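The construction above translates directly into a few lines of linear algebra. A minimal sketch (the normalization of T^±(i) is our own convention, and the function name is illustrative, not from the paper):

```python
import numpy as np

def expected_cost(N, beta, theta, delta, scheme="reward"):
    """Expected total incentive cost via the fundamental matrix (I - U)^{-1}
    of the absorbing chain on {S_0, ..., S_N}; conventions are our own."""
    d = delta + theta                 # pay-off difference Pi_C - Pi_D under the incentive
    i = np.arange(1, N)               # transient states S_1 .. S_{N-1}
    base = i * (N - i) / N**2         # prob. of sampling a C-D pair (one convention)
    Tp = base / (1.0 + np.exp(-beta * d))   # i -> i + 1
    Tm = base / (1.0 + np.exp(beta * d))    # i -> i - 1
    U = np.diag(1.0 - Tp - Tm) + np.diag(Tp[:-1], 1) + np.diag(Tm[1:], -1)
    F = np.linalg.inv(np.eye(N - 1) - U)    # fundamental matrix
    visits = 0.5 * (F[0, :] + F[-1, :])     # mutant enters at S_1 or S_{N-1}
    weight = i if scheme == "reward" else (N - i)
    return theta * float(np.sum(weight * visits))
```

Under these conventions, at β = 0 (neutral drift) the reward and punishment costs coincide and reduce to θN²H_{N−1}, which can be checked directly against the function.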

Cooperation frequency

Since the population consists of only two strategies, the fixation probabilities of a C (D) player in a homogeneous population of D (C) players when the interference scheme is carried out are, respectively, ρ_{D,C} and ρ_{C,D} (see §2.3). Computing the stationary distribution using these fixation probabilities, we obtain the frequency of cooperation,

  ρ_{D,C} / (ρ_{D,C} + ρ_{C,D}).    (2.3)

Hence, this frequency of cooperation can be maximized by maximizing ρ_{D,C}/ρ_{C,D}. The fraction in equation (2.3) can be simplified as follows [54]:

  ρ_{C,D}/ρ_{D,C} = ∏_{i=1}^{N−1} T^−(i)/T^+(i) = e^{−β(N−1)(δ+θ)}.    (2.4)

In the above transformation, T^−(i) and T^+(i) are the probabilities of decreasing or increasing the number of C players (i.e. i) by one in each time step, respectively. We consider non-neutral selection, i.e. β > 0 (under neutral selection, there is no need to use incentives). Assuming that we desire to obtain at least an ω ∈ (0, 1) fraction of cooperation, i.e. ρ_{D,C}/(ρ_{D,C} + ρ_{C,D}) ≥ ω, it therefore follows from equation (2.4) that

  θ ≥ θ(ω) := −δ + ln(ω/(1 − ω))/(β(N − 1)).    (2.5)

Therefore, it is guaranteed that, if θ ≥ θ(ω), at least an ω fraction of cooperation can be expected. This condition implies that the lower bound of θ monotonically depends on β. Namely, when ω < 1/2 it increases with β, while when ω > 1/2 it decreases with β.
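Because the effective pay-off difference δ + θ is independent of the state i (with a = b = 1, reward and punishment shift it identically), the product of transition ratios telescopes into a single exponential; a sketch of the resulting bound:

```latex
\frac{\rho_{C,D}}{\rho_{D,C}}
  = \prod_{i=1}^{N-1} \frac{T^-(i)}{T^+(i)}
  = \prod_{i=1}^{N-1} e^{-\beta(\delta+\theta)}
  = e^{-\beta(N-1)(\delta+\theta)},
\qquad\text{so}\qquad
\frac{\rho_{D,C}}{\rho_{D,C}+\rho_{C,D}}
  = \frac{1}{1 + e^{-\beta(N-1)(\delta+\theta)}}
  \;\ge\; \omega
\;\Longleftrightarrow\;
\theta \;\ge\; -\delta + \frac{1}{\beta(N-1)}\,\ln\frac{\omega}{1-\omega}.
```

Here the ratio T^−(i)/T^+(i) = e^{−β(δ+θ)} follows from the Fermi update rule, since the two directions of imitation differ only in the sign of the fitness difference.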

Optimization problems

Bringing all these factors together, we obtain the following cost-optimization problem of institutional incentives in stochastic finite populations:

  min_{θ ≥ θ(ω)} E(θ),    (2.6)

where E(θ) is either E_r(θ) or E_p(θ), defined in (2.2), which respectively corresponds to institutional reward or punishment. We show in the electronic supplementary material that E(θ) is a smooth function on (0, ∞).

Methods: evolutionary dynamics in finite populations

We adopt in our analysis the evolutionary game theory (EGT) methods for finite populations [53-55]. Herein, individuals’ pay-offs represent their fitness or social success, and evolutionary dynamics is shaped by social learning [4,43], whereby the most successful players will tend to be imitated more often by the other players. Here, social learning is modelled using the pairwise comparison rule [44], that is, a player X with fitness f_X adopts the strategy of another player Y with fitness f_Y with probability given by the Fermi function, (1 + e^{−β(f_Y − f_X)})^{−1}, where β ≥ 0 conveniently describes the selection intensity (β = 0 represents neutral drift, while β → ∞ represents increasingly deterministic selection). In the absence of mutations or exploration, the end states of evolution are inevitably monomorphic: once such a state is reached, it cannot be escaped through social learning. We assume that, with a certain mutation probability, an individual switches randomly to a different strategy without imitating another individual. In addition, we assume here the small mutation limit [53,55,58]. Thus, at most two strategies are present in the population at a time. The evolutionary dynamics can be described by a Markov chain, where each state represents a homogeneous population and the transition probabilities between any two states are given by the fixation probability of a single mutant [53,55,58]. The resulting Markov chain has a stationary distribution, which describes the average time the population spends in an end state. The small mutation limit allows us to obtain an analytical form of the frequency of cooperation (see below). It is noteworthy that, although we focus here on the small mutation limit, this approach has been shown to be widely applicable to scenarios which go well beyond the strict limit of very small mutation rates [45,46,48,59].
The fixation probability of a single mutant A taking over a whole population with N − 1 B players is as follows (see [44,55,60] for details):

  ρ_{B,A} = ( 1 + Σ_{i=1}^{N−1} ∏_{k=1}^{i} T^−(k)/T^+(k) )^{−1},

where T^±(k) describes the probability of increasing or decreasing the number of A players by one in a time step. Specifically, when β = 0, ρ_{B,A} = 1/N, representing the fixation probability at the neutral limit. Considering the set of two strategies C and D (see [53,58] for the calculation for any number of strategies), their stationary distribution is given by the normalized eigenvector associated with the eigenvalue 1 of the transpose of the transition matrix between the two homogeneous states [53,58], which is

  ( ρ_{D,C}, ρ_{C,D} ) / ( ρ_{D,C} + ρ_{C,D} ).

The first term is the frequency of cooperation and the second one is that of defection.
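The fixation probabilities and the resulting stationary frequency can be checked numerically. A minimal sketch, valid when the pay-off difference d = Π_C − Π_D is constant across states (as in the DG and PGG); names are ours:

```python
import math

def coop_frequency(N, beta, d):
    """Stationary frequency of cooperation in the small-mutation limit,
    where d = Pi_C - Pi_D (constant in i) under the incentive scheme.
    Uses rho = 1 / (1 + sum_i prod_k T^-(k)/T^+(k))."""
    def fixation(sign):
        # sign=+1: C mutant invading D (step ratio exp(-beta*d));
        # sign=-1: D mutant invading C (step ratio exp(+beta*d)).
        total, prod = 1.0, 1.0
        for _ in range(1, N):
            prod *= math.exp(-sign * beta * d)
            total += prod
        return 1.0 / total
    rho_DC = fixation(+1)   # C fixating in an all-D population
    rho_CD = fixation(-1)   # D fixating in an all-C population
    return rho_DC / (rho_DC + rho_CD)
```

Because the geometric sums telescope, the result collapses to the closed form 1/(1 + e^{−β(N−1)d}), which provides a convenient cross-check; at β = 0 the frequency is exactly 1/2.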

Main results

The present paper provides a rigorous analysis of the expected total cost of providing an institutional incentive (2.2) and the associated optimization problem (2.6). In this section, we state our main analytical results, theorems 3.1–3.4, and provide numerical simulations to illustrate the analytical results. The proofs of these results, which require a delicate analysis of the cost functions, are presented in the electronic supplementary material. In the following theorems, E(θ) denotes the cost function for either institutional reward, E_r(θ), or institutional punishment, E_p(θ), as obtained in (2.2). Also, H_n denotes the well-known harmonic number, H_n = Σ_{k=1}^{n} 1/k. Our first main result provides qualitative properties and asymptotic limits of E(θ).

Theorem 3.1. (qualitative properties and asymptotic limits of total cost functions)

(I) (finite population estimates) The expected total cost of providing an incentive satisfies explicit lower and upper bounds for all finite populations of size N. (II) (infinite population limit) The expected total cost of providing an incentive satisfies an explicit asymptotic formula when the population size N tends to ∞, whose coefficients involve the Euler–Mascheroni constant γ. (III) (weak selection limit) The expected total cost of providing an incentive satisfies an explicit asymptotic limit, common to reward and punishment, when the selection strength β tends to 0. (IV) (strong selection limit) The expected total cost of providing an incentive satisfies explicit asymptotic limits, for reward and for punishment respectively, when the selection strength β tends to ∞. The lower and upper bounds obtained in part (I) of the theorem show that the total expected cost function for both reward and punishment grows at the same explicit rate in N for sufficiently large N. This is confirmed in part (II), noting that H_N = ln N + γ + o(1). We also show that the leading asymptotic coefficient of E(θ) depends on the game (i.e. DG or PGG) and its parameters. Hence, it is important to adopt a precise optimal value of θ (e.g. obtained by solving the optimization problem (2.6)), as a small increase in this individual incentive cost can lead to a significant increase in E(θ), especially when the population size is large. Figure 1 numerically demonstrates this asymptotic limit.
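The Euler–Mascheroni constant enters through the standard expansion H_N = ln N + γ + O(1/N); a quick numerical illustration:

```python
import math

def harmonic(n):
    """Harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

EULER_GAMMA = 0.5772156649015329
# H_N - ln(N) converges to gamma as N grows, with error about 1/(2N)
approx = harmonic(10**6) - math.log(10**6)
```

At N = 10^6 the difference H_N − ln N already agrees with γ to six decimal places, which is why the large-N behaviour of the cost functions picks up the constant γ.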
Figure 1

Large population size limit. We calculate numerically the expected total cost of incentive for reward and punishment, varying the population size N, for different values of β and θ. The dashed lines represent the corresponding theoretical limiting values obtained in theorem 3.1 for the large population size limit, N → ∞. We observe that numerical results are in close accordance with those obtained theoretically. Results are obtained for the DG. (Online version in colour.)

Parts (III) and (IV) of the theorem provide theoretical estimations of E(θ) under the weak (β → 0) and strong (β → ∞) selection limits. For the weak selection limit, the expected total costs are the same for reward and punishment, i.e. lim_{β→0} E_r(θ) = lim_{β→0} E_p(θ). For the strong selection limit, E_r(θ) is smaller than, equal to or greater than E_p(θ), depending on whether θ is smaller than, equal to or greater than −δ. Figure 2 provides numerical validation of the theoretical weak and strong selection asymptotic behaviours of E(θ), for different population sizes N. We can observe that, for a given individual incentive cost θ, the range of E(θ) increases significantly for larger N.
Figure 2

Weak and strong selection limits. We calculate numerically the total expected cost of incentive for reward and punishment, varying the intensity of selection β, for different values of N and θ. The dashed lines represent the corresponding theoretical limiting values obtained in theorem 3.1 for the weak and strong selection limits. We observe that numerical results are in close accordance with those obtained theoretically. Results are obtained for the DG. (Online version in colour.)

Our second main result concerns the optimization problem (2.6). We show that the cost function exhibits a phase transition when the selection intensity β varies.

Theorem 3.2. (optimization problems and phase transition phenomenon)

(I) (phase transition phenomena and behaviour under the threshold) There exists a threshold value β*, given explicitly in terms of quantities defined in the electronic supplementary material (see §§1 and 2 there, for reward and punishment respectively), such that E(θ) is non-decreasing in θ for all β ≤ β* and is non-monotonic when β > β*. As a consequence, for β ≤ β*,

  min_{θ ≥ θ(ω)} E(θ) = E(θ(ω)).

(II) (behaviour above the threshold value) For β > β*, the number of changes of the sign of ∂E(θ)/∂θ is at least two for all N, and there exists an N₀ such that the number of changes is exactly two for N ≥ N₀. As a consequence, for N ≥ N₀, there exist θ₁ < θ₂ such that E(θ) is increasing when θ < θ₁, decreasing when θ₁ < θ < θ₂ and increasing when θ > θ₂. Thus, for N ≥ N₀,

  min_{θ ≥ θ(ω)} E(θ) = E(θ(ω)) if θ₂ ≤ θ(ω),   and   min_{θ ≥ θ(ω)} E(θ) = min{E(θ(ω)), E(θ₂)} otherwise.

The proofs of theorems 3.1 and 3.2 for the cases of reward and punishment are given in §§1 and 2 in the electronic supplementary material, respectively. We also provide explicit computations for small population sizes to illustrate these theorems in §3 in the electronic supplementary material. Based on numerical simulations, we conjecture that the requirement N ≥ N₀ could be removed and theorem 3.2 is true for all finite N. In the electronic supplementary material, figure S2, using numerical calculation we have shown that the population sizes used in our examples satisfy the conjecture, ensuring the validity of the numerical examples below. Theorem 3.2 gives rise to the following algorithm to determine the optimal value of θ.

Algorithm 3.3. (finding optimal cost of incentive )

Input: (i) N: population size; (ii) β: intensity of selection; (iii) game and parameters: DG (b and c) or PGG (n, r and c); and (iv) ω: minimum desired cooperation level.
1. Compute δ {in the DG: δ = −(c + b/(N − 1)); in the PGG: δ = c(r(N − n)/(n(N − 1)) − 1)}.
2. Compute θ(ω) = −δ + ln(ω/(1 − ω))/(β(N − 1)).
3. Compute the threshold β*, using the quantities defined in the electronic supplementary material.
4. If β ≤ β*: θ* = θ(ω). Otherwise (i.e. if β > β*):
   (a) compute θ₂, the largest root of ∂E_r(θ)/∂θ = 0 for the reward case, or of ∂E_p(θ)/∂θ = 0 for the punishment case;
   (b) if θ₂ ≤ θ(ω): θ* = θ(ω); otherwise, if E(θ(ω)) ≤ E(θ₂): θ* = θ(ω), and if E(θ(ω)) > E(θ₂): θ* = θ₂.
Output: θ* and E(θ*).
To illustrate theorem 3.2 and algorithm 3.3, we focus on the case of reward. Figure 3 shows the cost function E_r(θ) as a function of θ, for different values of β, to illustrate the phase transition when varying β, in a DG. We can see that, in all cases, these numerical observations are in close accordance with theoretical results. For instance, for the smaller population size (figure 3a), whenever β is below the computed threshold β*, E_r(θ) is an increasing function of θ; thus the optimal cost of incentive is θ* = θ(ω), for a given required minimum level of cooperation ω. When β > β*, one needs to compare E_r(θ(ω)) and E_r(θ₂): the optimal incentive is θ(ω) when θ₂ ≤ θ(ω) or when E_r(θ(ω)) ≤ E_r(θ₂), and θ₂ otherwise.
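Algorithm 3.3 relies on closed-form quantities from the electronic supplementary material; as a rough stand-in, one can combine the lower bound θ(ω) with a direct numerical minimization of the cost. A sketch under our own modelling conventions (transition normalization, and a grid search in place of the root-finding step):

```python
import numpy as np

def expected_cost(N, beta, theta, delta, scheme="reward"):
    # Expected total incentive cost via the fundamental matrix (I - U)^{-1};
    # the normalization of the transition probabilities is our own convention.
    d = delta + theta
    i = np.arange(1, N)
    base = i * (N - i) / N**2
    Tp = base / (1.0 + np.exp(-beta * d))   # i -> i + 1
    Tm = base / (1.0 + np.exp(beta * d))    # i -> i - 1
    U = np.diag(1.0 - Tp - Tm) + np.diag(Tp[:-1], 1) + np.diag(Tm[1:], -1)
    F = np.linalg.inv(np.eye(N - 1) - U)
    visits = 0.5 * (F[0, :] + F[-1, :])     # mutant enters at S_1 or S_{N-1}
    weight = i if scheme == "reward" else (N - i)
    return theta * float(np.sum(weight * visits))

def optimal_incentive(N, beta, delta, omega, scheme="reward", span=3.0, grid=301):
    # Steps 1-2 of the algorithm: the lower bound theta(omega).
    if beta > 0:
        theta_min = -delta + np.log(omega / (1.0 - omega)) / (beta * (N - 1))
    else:
        theta_min = -delta   # valid only for omega = 1/2 (neutral drift)
    # Steps 3-5 replaced by a grid search over [theta_min, theta_min + span].
    thetas = np.linspace(theta_min, theta_min + span, grid)
    costs = [expected_cost(N, beta, t, delta, scheme) for t in thetas]
    k = int(np.argmin(costs))
    return float(thetas[k]), float(costs[k])
```

For β below the phase-transition threshold the cost is non-decreasing in θ, so the grid search returns the boundary value θ(ω), in line with the theorem; above the threshold it picks whichever of θ(ω) and the interior dip is cheaper.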
Figure 3

We use algorithm 3.3 to find the optimal θ* that minimizes E_r(θ) (for institutional reward) while ensuring a minimum level of cooperation ω. We use as examples a small population size (a) and a larger one (b), for the DG. (Online version in colour.)

Similarly, with a larger population size (see figure S1 in the electronic supplementary material, bottom row), we obtained the corresponding threshold β*. In general, similar observations are obtained as in the case of a small population size, except that, when N is large, the values of θ(ω) for different non-extreme values of the minimum required cooperation ω are very small (given the logarithmic dependence on ω/(1 − ω) in the formula for θ(ω)). This value is also smaller than θ₂, with a cost E(θ(ω)) ≤ E(θ₂), making θ(ω) the optimal cost of incentive. Similar results are obtained for the PGG (figure 4). When ω is extremely high (close to 1, for a large N) (we do not look at extremely low values since we would like to ensure at least a sufficient level of cooperation), we can also see other scenarios where the optimal cost is θ₂ (see figure S1 in the electronic supplementary material, bottom row). We thus observe that, for a sufficiently large population size and a large but non-extreme ω, the optimal value is always θ* = θ(ω); otherwise, θ₂ can be the optimal cost.
Figure 4

We use algorithm 3.3 to find the optimal θ* that minimizes E_r(θ) while ensuring a minimum level of cooperation ω, for the PGG. Similar observations to those for the DG are obtained. (Online version in colour.)

Our last result provides a comparison of the expected total costs for providing institutional reward and punishment, for different individual incentive costs θ.

Theorem 3.4. (reward versus punishment costs)

The difference between the expected total costs of reward and punishment, E_r(θ) − E_p(θ), is given by an explicit formula and has the same sign as θ + δ. As a consequence, when θ < −δ we have E_r(θ) < E_p(θ), and vice versa when θ > −δ. In this case, at the lower bound θ = θ(ω), reward is less costly than punishment if and only if ω < 1/2. The proof of theorem 3.4 is given in §3 in the electronic supplementary material. Numerical calculation in figure 5 shows the expected total costs for reward and punishment (DG), for varying θ. We observe that reward is less costly than punishment (E_r(θ) < E_p(θ)) for θ < −δ and vice versa when θ > −δ, exactly as shown analytically in theorem 3.4. This analytical result is confirmed here for different population sizes N and intensities of selection β. Figure 6 also confirms the second part of the theorem, where, for small β, if one can choose the type of incentive to use, either reward or punishment, then the former can provide a lower cost when requiring less than 50% cooperation at minimum and the latter otherwise. This is in line with previous work showing that reward mechanisms work very well to promote cooperation in environments in which it is rare, while punishment mechanisms are better at maintaining high levels of cooperation (e.g. [28,35,52]).
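The sign condition in theorem 3.4 is easy to probe numerically. A sketch (same home-grown conventions as the pairwise-comparison setup above; at θ = −δ the Fermi probabilities become 1/2, so the two costs coincide):

```python
import numpy as np

def cost_difference(N, beta, theta, delta):
    """E_r(theta) - E_p(theta), computed from the fundamental matrix
    (I - U)^{-1} of the absorbing chain; conventions are our own."""
    d = delta + theta
    i = np.arange(1, N)
    base = i * (N - i) / N**2
    Tp = base / (1.0 + np.exp(-beta * d))
    Tm = base / (1.0 + np.exp(beta * d))
    U = np.diag(1.0 - Tp - Tm) + np.diag(Tp[:-1], 1) + np.diag(Tm[1:], -1)
    F = np.linalg.inv(np.eye(N - 1) - U)
    visits = 0.5 * (F[0, :] + F[-1, :])
    # reward weights each visit by i cooperators, punishment by N - i defectors,
    # so the difference carries the weight (2i - N)
    return theta * float(np.sum((2 * i - N) * visits))
```

When d = δ + θ < 0 the chain is biased towards defection and spends more time in states with few cooperators, making reward the cheaper scheme; the mirror symmetry i ↔ N − i flips the sign when d > 0.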
Figure 5

Comparison of the total costs for reward and punishment as a function of θ, for different values of N and β. Reward is less costly than punishment (E_r(θ) < E_p(θ)) for small θ, and vice versa. The threshold of θ for this change was obtained analytically (see theorem 3.1), and is exactly equal to −δ. Results are obtained for the DG. (Online version in colour.)

Figure 6

Comparison of the total costs for reward and punishment at the optimal value θ* (obtained using algorithm 3.3), varying the minimum required level of cooperation, ω. Reward is more cost efficient for small ω, while punishment is more cost efficient when ω is larger. In both cases, the threshold is around ω = 1/2. Results are obtained for the DG. (Online version in colour.)


Discussion

Institutional incentives such as punishment and reward provide an effective tool for promoting the evolution of cooperation in social dilemmas. Both theoretical and experimental analyses have been carried out [29,36,37,52,61-63]. However, past research usually ignores the question of how institutions’ overall spending, i.e. the total cost of providing these incentives, can be minimized while at the same time guaranteeing a minimum desired level of cooperation over time. Answering this question allows one to estimate exactly how incentives should be provided, that is, how much to reward a cooperator and how severely to punish a wrongdoer. Existing works that consider this question usually omit the stochastic effects that drive population dynamics, in particular those arising when the intensity of selection varies. Resorting to a stochastic evolutionary game approach for finite, well-mixed populations, we have provided theoretical results for the optimal cost of incentives that ensure a desired level of cooperation while minimizing the total budget, for a given intensity of selection, β. We show that this cost strongly depends on the value of β, owing to the existence of a phase transition in the cost functions when β varies. This behaviour is missing in works that consider a deterministic evolutionary approach [35]. The intensity of selection plays an important role in evolutionary processes. Its value differs depending on the pay-off structure (i.e. scaling the game pay-off matrix by a factor is equivalent to dividing β by that factor) and is usually found to be specific to a given population, which can be estimated through behavioural experiments [45-48]. Thus, our analysis provides a way to calculate the optimal incentive cost for the population and game pay-off matrix at hand.
With regard to theoretical importance, we characterized the asymptotic behaviours of the total cost functions for both reward and punishment (namely, in the limits of a large population, weak selection and strong selection) and compared these functions for the two types of incentive. We showed that punishment is always more costly for a small (individual) incentive cost θ but less so when this cost is above a certain threshold. We provided an exact formula for this threshold. This result provides insights into the choice of which type of incentive to use. In the context of institutional incentives modelling, a crucial issue is the question of how to maintain the budget for providing incentives [59,64]. The problem of who pays or contributes to the budget is a social dilemma in itself, and how to escape this dilemma is a critical research question. In this work, we focus on the question of how to optimize the budget used for the provided incentives. Several simplifications were made for the theoretical analysis to be possible. First, in order to derive the analytical formula for the frequency of cooperation, we assumed the small mutation limit. Despite this simplifying assumption, the small mutation limit approach has been shown to be widely applicable to scenarios which go well beyond the strict limit of very small mutation rates [46,48,59]. Relaxing this assumption would make the derivation of a closed form for the frequency of cooperation intractable. Second, we focused in this paper on two important cooperation dilemmas, the DG and the PGG. They have in common the useful property that the difference in (average) pay-off between a cooperator and a defector, δ = Π_C(i) − Π_D(i), does not depend on i, the number of cooperators in the population. This property allows us to simplify the fundamental matrix to a tridiagonal form and apply techniques of matrix analysis to obtain a closed form of its inverse (see electronic supplementary material).
In games with more complex pay-off matrices, such as the PD in its general form and the collective-risk game [65], the difference Π_C(i) − Π_D(i) depends on i and the technique in this paper cannot be directly applied. We might consider other approaches to approximate the inverse matrix, exploiting its block structure.
References (38 in total)

1.  Reward and punishment.

Authors:  K Sigmund; C Hauert; M A Nowak
Journal:  Proc Natl Acad Sci U S A       Date:  2001-09-11       Impact factor: 11.205

2.  Evolution of cooperation by multilevel selection.

Authors:  Arne Traulsen; Martin A Nowak
Journal:  Proc Natl Acad Sci U S A       Date:  2006-07-07       Impact factor: 11.205

3.  Evolutionary explanations for cooperation.

Authors:  Stuart A West; Ashleigh S Griffin; Andy Gardner
Journal:  Curr Biol       Date:  2007-08-21       Impact factor: 10.834

4.  Coordinated punishment of defectors sustains cooperation and can proliferate when rare.

Authors:  Robert Boyd; Herbert Gintis; Samuel Bowles
Journal:  Science       Date:  2010-04-30       Impact factor: 47.728

5.  Human strategy updating in evolutionary games.

Authors:  Arne Traulsen; Dirk Semmann; Ralf D Sommerfeld; Hans-Jürgen Krambeck; Manfred Milinski
Journal:  Proc Natl Acad Sci U S A       Date:  2010-02-08       Impact factor: 11.205

6.  First carrot, then stick: how the adaptive hybridization of incentives promotes cooperation.

Authors:  Xiaojie Chen; Tatsuya Sasaki; Åke Brännström; Ulf Dieckmann
Journal:  J R Soc Interface       Date:  2015-01-06       Impact factor: 4.118

7.  When agreement-accepting free-riders are a necessary evil for the evolution of cooperation.

Authors:  Luis A Martinez-Vaquero; The Anh Han; Luís Moniz Pereira; Tom Lenaerts
Journal:  Sci Rep       Date:  2017-05-30       Impact factor: 4.379

8.  Mediating artificial intelligence developments through negative and positive incentives.

Authors:  The Anh Han; Luís Moniz Pereira; Tom Lenaerts; Francisco C Santos
Journal:  PLoS One       Date:  2021-01-26       Impact factor: 3.240

9.  Good agreements make good friends.

Authors:  The Anh Han; Luís Moniz Pereira; Francisco C Santos; Tom Lenaerts
Journal:  Sci Rep       Date:  2013       Impact factor: 4.379

10.  Generosity motivated by acceptance--evolutionary analysis of an anticipation game.

Authors:  I Zisis; S Di Guida; T A Han; G Kirchsteiger; T Lenaerts
Journal:  Sci Rep       Date:  2015-12-14       Impact factor: 4.379

