Andrea Guazzini, Federica Stefanelli, Enrico Imbimbo, Daniele Vilone, Franco Bagnoli, Zoran Levnajić.
Abstract
We report the results of a game-theoretic experiment with human players who solve problems of increasing complexity by cooperating in groups of increasing size. Our experimental environment is set up to make it complicated for players to use rational calculation for making cooperative decisions. This environment is directly translated into a computer simulation, from which we extract the collaboration strategy that leads to the maximal attainable score. Based on this, we measure the error that players make when estimating the benefits of collaboration, and find that humans massively underestimate these benefits when facing easy problems or working alone or in small groups. In contrast, when confronting hard problems or collaborating in large groups, humans accurately judge the best level of collaboration and easily achieve the maximal score. Our findings are independent of group composition and players' personal traits. We interpret them as reflecting varying degrees of usefulness of social heuristics, which seems to depend on the size of the involved group and the complexity of the situation.
Year: 2019 PMID: 30940850 PMCID: PMC6445098 DOI: 10.1038/s41598-019-41773-2
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Comparison of the experimental and simulated values of agent fitness and average probability of cooperation. Top two panels: comparison of experimental AF (black crosses) and best attainable AFbest (green circles). Top left panel: four plots (a–d) show the comparison over four values of problem complexity R. Within each plot we show the values for four group sizes S, where N/N indicates the group composed of a single player, while N/4, N/2 and N/1 respectively indicate division into 4, 2 and 1 groups (since the experiment and the simulation have different total numbers of players/agents, for easier comparison we use notation based on the number of groups). Top right panel: four plots (a–d) show the same comparison, but this time over four group sizes S. When dealing with simple problems and when playing in small groups, humans earn much less than they could. This performance improves as problems get harder and/or as humans play in larger groups. Finally, when facing the hardest problems or when playing all together in one group, humans earn as much as they could. In some places humans appear to earn slightly more than the evolving agents: this is an artifact of the statistical nature of the simulations (see Supplement).

Bottom two panels: comparison of experimental C (black crosses) and the Cbest leading to best agent fitness (green circles). Bottom left panel: four plots (a–d) show the average level of cooperation C for four values of problem complexity R. Within each plot we show the values for four group sizes S, as in the top panels. Bottom right panel: four plots (a–d) show the same comparison, but this time over four group sizes S. When in isolation or in small groups, and/or when confronting simple problems, humans cooperate far less than would be best for their benefit. As groups become bigger and problems become harder, the average human level of collaboration increases. Finally, when all in one group or when facing the hardest problems, humans correctly judge the best level of cooperation. All differences between experimental and simulated values in all panels are statistically significant. All values have error bars (very small) that express standard deviations.
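The figure compares the observed cooperation level C against the Cbest that a simulation finds to maximize agent fitness. The paper's actual evolutionary model is not reproduced here; the following is a minimal sketch, under a hypothetical toy payoff in which cooperating carries an individual cost but raises the group's chance of solving a problem of complexity R. All functions, the payoff form, and the parameter values below are illustrative assumptions, not the authors' model:

```python
import random

def toy_fitness(C, S, R, trials=5000, seed=0):
    """Estimate the mean payoff of agents in a group of size S who each
    cooperate with probability C on a problem of complexity R.
    Toy model (illustrative assumption): a solved problem pays 1 to every
    member, each cooperator pays cost 0.1, and the group's solving
    probability grows with the number of cooperators and shrinks with R."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        cooperators = sum(rng.random() < C for _ in range(S))
        p_solve = cooperators / (cooperators + R)  # assumed saturating form
        reward = 1.0 if rng.random() < p_solve else 0.0
        total += reward - 0.1 * (cooperators / S)  # average cost per agent
    return total / trials

def best_cooperation(S, R, grid=11):
    """Grid search over C in [0, 1] for the value maximizing toy fitness,
    analogous in spirit to extracting Cbest from a simulation."""
    candidates = [i / (grid - 1) for i in range(grid)]
    return max(candidates, key=lambda C: toy_fitness(C, S, R))
```

A grid search is used only for transparency; the same Cbest could be extracted by an evolutionary update rule, and the qualitative comparison against an observed C would proceed the same way.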