
How social information can improve estimation accuracy in human groups.

Bertrand Jayles, Hye-Rin Kim, Ramón Escobedo, Stéphane Cezera, Adrien Blanchet, Tatsuya Kameda, Clément Sire, Guy Theraulaz

Abstract

In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance.
Copyright © 2017 the Author(s). Published by PNAS.

Keywords:  collective intelligence; computational modeling; self-organization; social influence; wisdom of crowds

Year:  2017        PMID: 29118142      PMCID: PMC5703270          DOI: 10.1073/pnas.1703695114

Source DB:  PubMed          Journal:  Proc Natl Acad Sci U S A        ISSN: 0027-8424            Impact factor:   11.205


In a globalized, connected, and data-driven world, people rely increasingly on online services to fulfill their needs. Airbnb, Amazon, eBay, and TripAdvisor, to name just a few, have in common the use of feedback and reputation mechanisms (1) to rate their products, services, sellers, and customers. Ideas and opinions increasingly propagate through social networks, such as Facebook or Twitter (2–4), to the point that they have the power to cause political shifts (5). In this context, it is crucial to understand how social influence affects individual decision-making and its resulting effects at the level of a group. Two observations can be made about these collective phenomena: (i) people often make decisions not simultaneously but sequentially (6, 7), and (ii) decision tasks involve judgmental/subjective aspects. Social psychological research on group decision-making has established that consensual processes vary greatly depending on the demonstrability of answers (8). When the solution is easy to demonstrate, people often follow the “truth-wins” process, whereas when demonstrability is low, they are much more susceptible to “majoritarian” social influence (9). Collective estimation tasks, where correct solutions cannot be easily demonstrated, are thus particularly well suited for measuring the impact of social influence on individuals’ decisions. Galton’s original work (10) on estimation tasks shows that the median of independent estimates of a quantity can be impressively close to its true value. This phenomenon has been popularized as the wisdom of crowds (WOC) effect (11), and it is generally used to measure a group’s performance. However, because of the independence condition, it does not consider potential effects of social influence. 
In recent years, it has been debated whether social influence is detrimental to the WOC or not: some works argue that it reduces group diversity without improving the collective error (12, 13), while others show that it is beneficial if one defines collective performance otherwise (14, 15). One or two of the following measures were used to define performance and diversity. Let us define E_i as the estimate of individual i, ⟨E⟩ as its average over all individuals, and T as the true value of the quantity to estimate. Then, ⟨(E_i − ⟨E⟩)²⟩ is a measure of group diversity, and the collective error (⟨E⟩ − T)² and the mean individual error ⟨(E_i − T)²⟩ are two natural measures of the group performance. However, these estimators are not independent, since ⟨(E_i − T)²⟩ = (⟨E⟩ − T)² + ⟨(E_i − ⟨E⟩)²⟩, which shows that a decrease in diversity is beneficial to group performance, as measured by the mean individual error, contrary to the general claim. Later research showed that social influence helps the group perform better if one considers only information coming from informed (16), successful (17), or confident (18) individuals. We will show that these traits are actually strongly related. The way that social information is defined also matters: providing individuals with the arithmetic or the geometric mean of the estimates of other individuals has different consequences (18). Beyond these methodological issues, it is difficult to precisely analyze and characterize the impact of social influence on individual estimates without controlling the quality and quantity of the information exchanged between subjects. Indeed, human groups are often composed of individuals with heterogeneous expertise; in a collective estimation task, one therefore cannot rigorously control the quality and quantity of shared social information, and quantifying individual sensitivity to this information is hence very delicate. 
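The decomposition above is an algebraic identity (a bias–variance split) and can be checked numerically. The following sketch uses arbitrary synthetic estimates; all variable names are ours, chosen to mirror the symbols in the text.

```python
# Numerical check of the identity relating diversity and performance:
# <(E_i - T)^2> = (<E> - T)^2 + <(E_i - <E>)^2>  (bias-variance split).
import numpy as np

rng = np.random.default_rng(0)
T = 100.0                                          # true value
E = rng.lognormal(np.log(T) - 0.3, 0.8, 10_000)    # synthetic estimates E_i

mean_E = E.mean()                                  # <E>
diversity = np.mean((E - mean_E) ** 2)             # group diversity
collective_error = (mean_E - T) ** 2               # collective (squared) error
individual_error = np.mean((E - T) ** 2)           # mean individual error

# The identity holds for any data set, so lowering diversity at fixed
# collective error mechanically lowers the mean individual error.
assert np.isclose(individual_error, collective_error + diversity)
```

Because the relation is an identity rather than a statistical tendency, it holds for any set of estimates, which is what makes the "diversity is sacrificed" argument incomplete.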
To overcome this problem, we performed experiments in which subjects were asked to estimate quantities about which they had very little prior knowledge (low demonstrability of answers), before and after having received social information. The interactions between subjects were sequential and local, whereas most previous works have used a global kind of interaction, with all individuals being provided some information (estimates of other individuals in the group) at the same time (12–14, 18, 19). From the individuals’ estimates and the social information that they received, we were able to deduce their sensitivity to social influence. Moreover, by introducing virtual experts (artificial subjects providing the true answer, thus affecting the social information) into the sequence of estimates, without the subjects being aware of it, we were able to control the quantity and quality of the information provided to the subjects and to quantify the impact of this information on the group performance. Our results show that the subjects’ reaction to social influence is heterogeneous and depends on the distance between the personal and group opinions. We then use the data to build and calibrate a model of collective estimation to analyze and predict the impact of the quantity and quality of information received by individuals on the performance at the group level.

Experimental Design

Subjects were asked to answer questions for which they had to estimate various social, geographical, or astronomical quantities or the number or length of objects in a picture. For each question, the experiment proceeded in two steps: subjects first had to provide their personal estimate E_p. Then, after receiving the social information M, they were asked to give a new estimate E. M is defined as the geometric mean of the τ previous estimates (τ = 1 or 3). Subjects answered each question sequentially and were not told the value of τ. Since humans think in terms of orders of magnitude (20), we used the geometric mean for M (which averages orders of magnitude) rather than the arithmetic one. Virtual “experts” providing the true value T for each question were inserted at random into the sequence of participants. For each sequence involving 20 human participants, we controlled the number n = 0, 5, 15, or 80, and hence the percentage ρ = n/(n + 20) = 0%, 20%, 43%, or 80%, of virtual experts, respectively. The social information delivered to human participants, being the geometric mean of previous estimates, is hence strongly affected by these virtual experts. When providing their estimates E_p and E, subjects had to report their confidence level in their answer on a Likert scale ranging from one (very low) to five (very high) and were asked to choose the reason that best explained their second estimate from a list of eight possibilities. We used initial conditions for the social information chosen reasonably far from the true answer and imposed loose limits on the estimates that subjects could give to prevent them from answering too absurdly. All graphs presented here are based on the 29 questions from the experiment performed in France. A similar experiment was conducted in Japan; all results can be found in the SI Appendix, where the full experimental protocol is described in detail. 
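As a concrete illustration of how the social information is formed, here is a minimal sketch of the geometric-mean rule with virtual experts inserted into the sequence. The names (tau, rho, T) mirror the text; the fixed sensitivity and lognormal stand-in for a human guess are our own simplifications.

```python
# Sketch of the social information M: the geometric mean of the tau
# previous estimates, where virtual experts (inserted with probability
# rho) contribute the true value T.
import math
import random

def social_info(previous, tau):
    """Geometric mean of the last tau estimates (averages orders of magnitude)."""
    window = previous[-tau:]
    return math.exp(sum(math.log(e) for e in window) / len(window))

random.seed(1)
T, tau, rho = 1000.0, 3, 0.43        # true value, window size, expert fraction
sequence = [250.0]                    # initial condition, away from T
for _ in range(100):
    if random.random() < rho:
        sequence.append(T)            # a virtual expert answers the truth
    else:
        M = social_info(sequence, tau)
        personal = random.lognormvariate(math.log(T) - 1.0, 1.0)
        S = 0.5                       # illustrative fixed sensitivity
        sequence.append(personal ** (1 - S) * M ** S)
```

Note how the geometric mean works on orders of magnitude: for the pair 100 and 10,000 it returns 1,000, whereas the arithmetic mean (5,050) would be dominated by the larger estimate.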
The aims and procedures of the experiments conformed to the ethical rules imposed by the Toulouse School of Economics and the Center for Experimental Research in Social Sciences at Hokkaido University. All subjects in France and Japan provided written consent for their participation.

Results

Distribution of Individual Estimates.

Previous works have shown that distributions of independent individual estimates are generally highly right-skewed, while distributions of their common logarithm are much more symmetric (12, 13, 18). This is because humans think in terms of orders of magnitude, especially when large quantities are involved, which makes the logarithmic scale more natural to represent human estimates (20). In these works, participants were mostly asked “easy” questions for which they had good prior knowledge (high demonstrability), such that the answers ranged over one to two orders of magnitude at most (12–14, 17–19, 21–23). To ensure that little information was present before the inclusion of our virtual experts and to more clearly identify the impact of social influence, we selected “hard” questions (low demonstrability). These questions involve very large quantities, and answers span several orders of magnitude, making the log transform of estimates even more relevant. To compare quantities that can differ by orders of magnitude, we normalize each estimate E by the true answer T to the question at hand and define the log-transformed estimate x = log(E/T). Note that the log transform of the actual answer is x = 0. Fig. 1 shows the distribution of x before and after social information has been provided to the subjects. Although such distributions have often been presented as close to Gaussian distributions (13, 18), we find that they are much better described by Cauchy distributions because of their fat tails, which account for the nonnegligible probability of estimates extremely far from the truth. The Cauchy probability distribution function reads

f(x) = (1/(πσ)) × 1/[1 + ((x − μ)/σ)²],   [1]

where μ is the center/median and σ is the width of the distribution. The SI Appendix shows the distribution of estimates in the Japan experiment and shows that, when the same questions were asked, the distributions of personal estimates in France and Japan are almost identical.
Fig. 1.

(A) Probability distribution function (PDF) of the log-transformed normalized estimates x = log(E/T), where E is the subject’s estimate and T is the true answer to the question, before (blue) and after (red) social influence. All conditions (all values of ρ) are aggregated (the SI Appendix shows the PDF for each value of ρ). Solid lines are the results of our model based on Cauchy distributions, while dashed lines are Gaussian fits. (B) PDF of the sensitivities to social influence S. The numbers at the top are the probabilities for each category of behavior: contradict (Cont; S < 0), keep (Ke; S = 0), compromise (Comp; 0 < S < 1), adopt (Ad; S = 1), and overreact (Ov; S > 1). Experimental data are shown in black, and numerical simulations of the model are in red. The full range of S extends beyond the displayed interval [−1, 2]; the values of S outside this range were grouped in the boxes at S ≤ −1 and S ≥ 2.

For the Cauchy distribution, the mean and standard deviation (SD) are not defined. Therefore, good estimators of the center μ and the width σ are, respectively, the median and one-half the interquartile range (the difference between the third and the first quartiles) of the experimental distribution. In the following, μ_p (μ_f) and σ_p (σ_f) will refer to the median and one-half the interquartile range of the experimental distribution before (after) social influence, respectively. Cauchy and Gaussian distributions belong to the so-called family of stable distributions. More generally, {x_i} being a set of estimates drawn from a symmetric probability distribution characterized by its center μ and width σ, we define the weighted average x̄ = Σ_i p_i x_i, with Σ_i p_i = 1; the distribution is stable if x̄ follows the same probability distribution as the original x_i, up to the new width σ̄. Indeed, the center remains the same because of the condition Σ_i p_i = 1, but the width may decrease after averaging (law of large numbers), depending on the stable distribution considered. 
Cauchy and Gaussian represent two extremes of the stable distribution family, with Lévy distributions as intermediate cases: for the Cauchy distribution, the width remains unchanged (σ̄ = σ), whereas the narrowing of the width is maximal for the Gaussian distribution (σ̄ = σ/√N for N equal weights). In the case of actual human estimates, the relevance of a given distribution can be related to the degree of prior knowledge of the group. When individuals have no idea about the answer to a question, the weighted average of arbitrary answers cannot be statistically better (σ̄ < σ) or worse (σ̄ > σ) than the arbitrary answers themselves, leading to a Cauchy distribution for these estimates (the only stable distribution for which σ̄ = σ). However, when there is good prior knowledge, one expects that combining answers gives a better statistical estimate (σ̄ < σ; Gaussian). When the quantity to estimate is closely related to general intuition (ages, dates, etc.), estimates should hence follow a Gaussian-like distribution, while when individuals have very little knowledge about the answer, as in our experiment, estimates should be Cauchy-like distributed. The rationale for naturally observing stable distributions is explained in the SI Appendix. We use the term Cauchy-like because Fig. 1 shows that the distributions of prior and final estimates are slightly skewed toward low estimates (μ_p < 0 and μ_f < 0), reminiscent of the human cognitive bias to underestimate numbers, due to the nonlinear internal representation of quantities (24). As we will show, this phenomenon has strong implications for the influence of information provided to the group. We also observe a clear sharpening of the distribution of estimates after social influence, mainly caused by the presence of the virtual experts, which affect the value of the social information and, ultimately, the final estimates of the actual subjects. This sharpening becomes stronger as the percentage of experts increases (SI Appendix). 
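The stability argument can be illustrated numerically: averaging many Gaussian estimates shrinks the width (measured, as in the text, by half the interquartile range) by about √N, while averaging Cauchy estimates leaves it untouched. A small check, with sample sizes and seeds of our own choosing:

```python
# Numerical illustration of stability: the half-interquartile-range
# width of an average of N Gaussian draws shrinks like 1/sqrt(N),
# while that of an average of N Cauchy draws does not shrink at all.
import numpy as np

rng = np.random.default_rng(42)

def half_iqr(x):
    """Half the interquartile range: a robust width estimator."""
    q1, q3 = np.percentile(x, [25, 75])
    return (q3 - q1) / 2

N, trials = 100, 20_000
gauss_means = rng.normal(0.0, 1.0, size=(trials, N)).mean(axis=1)
cauchy_means = rng.standard_cauchy(size=(trials, N)).mean(axis=1)

width_gauss = half_iqr(gauss_means)    # ~0.674/sqrt(N), i.e. about 0.07
width_cauchy = half_iqr(cauchy_means)  # stays ~1, the width of a single draw
```

This is exactly why averaging helps a knowledgeable crowd (Gaussian-like estimates) but cannot, by itself, help a clueless one (Cauchy-like estimates).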
Moreover, consistent with our introductory discussion of the measures of group performance, we propose the two following indicators: (i) the collective performance, the absolute value of the median of the x_i, which represents how close the center of the distribution is to zero (the log transform of the true value T), and (ii) the collective accuracy, the median of the |x_i|, which is a measure of the proximity of individual estimates to the true value.
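Expressed on the log-transformed normalized estimates, the two indicators are one-liners. The snippet below is our own illustration (not code from the paper) and shows why the two measures are genuinely distinct.

```python
# The two group-level indicators on log-transformed estimates x,
# where x = 0 is the truth.
import numpy as np

def collective_performance(x):
    """|median(x)|: distance of the distribution's center to the truth."""
    return abs(np.median(x))

def collective_accuracy(x):
    """median(|x|): typical distance of individual estimates to the truth."""
    return np.median(np.abs(x))

x = np.array([-1.0, -0.5, 0.0, 0.5, 2.0])   # toy log-estimates
# A centered but wide distribution has perfect performance (0 here)
# yet mediocre accuracy (0.5 here): the indicators capture different things.
```

Performance only looks at the center of the distribution, whereas accuracy also penalizes spread; a group can be collectively well centered while every individual is far from the truth.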

Distribution of Individual Sensitivities to Social Influence.

After having received the social information, an individual may reconsider her personal estimate E_p. The natural way for humans to aggregate estimates is to use the median (22) or the geometric mean (18), which both tend to reduce the effect of outliers. Here, the social information that we provided to a subject was the geometric mean M of the τ previous answers (including those of the virtual experts providing the true answer T). Moreover, one can always represent the new estimate E as a weighted geometric average of the personal estimate E_p and the social information M: E = E_p^(1−S) M^S. Hence, we can uniquely define the sensitivity to social influence S by this relation. The value S = 0 corresponds to subjects keeping their initial estimate, while S = 1 corresponds to subjects adopting the estimate of their peers. In terms of the log-transformed variables x = log(E/T), we obtain

x = (1 − S) x_p + S m,   [2]

where the log-transformed social information m is simply the arithmetic mean of the τ previous log-transformed estimates, and thus, S = (x − x_p)/(m − x_p). Note that, in this language, S is simply the barycentric coordinate of the final estimate x in terms of the initial personal estimate x_p and the social information m. Fig. 1 shows that the experimental distribution of S has a bell-shaped part that we roughly assimilate to a Gaussian, with two additional Dirac peaks exactly at S = 0 and S = 1 (the SI Appendix gives the numerical values). Five types of behavioral responses can be identified: keeping one’s opinion (peak at S = 0), adopting the group’s opinion (peak at S = 1), making a compromise between one’s opinion and the group’s opinion (0 < S < 1), overreacting to social information (S > 1), and contradicting it (S < 0). Quite surprisingly, the overreacting and contradicting responses were generally overlooked in previous works (21–23, 25), either considered as noise and simply not taken into account or sometimes lumped into the peaks at S = 0 and S = 1, although these behaviors are not negligible (especially overreacting). 
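From the relation x = (1 − S) x_p + S m, the sensitivity follows as S = (x − x_p)/(m − x_p), and the five behavioral categories are read directly off its value. The helper names and the tolerance eps below are our own choices.

```python
# Computing the sensitivity to social influence from the log-estimates
# and classifying the five behavioral responses.
def sensitivity(x_p, m, x):
    """Barycentric coordinate of the final estimate x between x_p and m."""
    return (x - x_p) / (m - x_p)

def category(S, eps=1e-9):
    """The five behavioral categories as a function of S."""
    if abs(S) < eps:
        return "keep"          # S = 0: keep one's opinion
    if abs(S - 1) < eps:
        return "adopt"         # S = 1: adopt the group's opinion
    if 0 < S < 1:
        return "compromise"    # between own opinion and the group's
    if S > 1:
        return "overreact"     # overshoot past the group's opinion
    return "contradict"        # S < 0: move away from the group

# A subject landing halfway between her estimate and the social
# information has S = 0.5: sensitivity(2.0, 4.0, 3.0) -> 0.5
```

Note that the classification is only defined when m differs from x_p; when the social information coincides with the personal estimate, S is indeterminate.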
We find that the median of S is well below 0.5, in agreement with previous results (15, 18, 25), meaning that individuals tend to give more weight to their own opinion than to information coming from others (14, 19). Moreover, the distributions of S for the experiment performed in Japan and for men and women (in France) are very similar to that of Fig. 1 (SI Appendix). We find that the subjects’ behavioral reactions are highly consistent, reflecting robust differences in personality or general knowledge: in each session, according to the way that subjects modified their estimates on average over the first questions, we split the subjects into three subgroups. We first define the “confident” subjects as the one-quarter of the group minimizing Σ_q |S_q|, where q is the index of the questions (i.e., the subjects who were on average closest to S = 0), and the “followers” as the one-quarter of the group minimizing Σ_q |S_q − 1| (i.e., closest to S = 1). The other one-half of the group is defined as the “average” subjects. The SI Appendix shows the distributions of S for the three subgroups, computed from questions 25–29. The differences are striking: for the group of confident subjects, the peak at S = 0 is about seven times higher than the peak at S = 1, while for the group of followers, it is less than twice as large. Moreover, the distribution for the average subjects is found to be very close to the global distribution shown in Fig. 1.

Impact of the Difference Between Personal and Group’s Opinions on Individual Sensitivity to Social Influence.

Fig. 2 shows that, on average, S depends on the distance |x_p − m| between the personal and group estimates. Up to a threshold of a few orders of magnitude, there is a linear cusp relation between ⟨S⟩ and |x_p − m|: the farther the social information m is from a subject’s personal estimate x_p, the more the subject tends to trust the group. Fig. 2 also shows the origin of this correlation: as the social information gets farther from the personal opinion, the probability of keeping one’s opinion (S = 0) decreases, while the probability of compromising increases. Interestingly, the adopting behavior does not change with |x_p − m|. The same phenomena have been observed in the Japan experiment (SI Appendix).
Fig. 2.

(A) Mean sensitivity to social influence ⟨S⟩ against the distance |x_p − m| between the personal estimate and the social information (group estimate). Black circles correspond to experimental data, while red open circles are simulations of the model. Note that only a small fraction of the data lie beyond three orders of magnitude. (B) Fraction of subjects keeping (maroon), adopting (pink), and being in the Gaussian-like part of the distribution of S (mostly compromisers; purple), against |x_p − m|.


Model.

We now introduce an individual-based model to understand the respective effects of individual sensitivity to social influence and of information quality and quantity on the collective performance and accuracy observed at the group level. In the model, we simulate a sequence of successive estimates performed by the agents (not including the virtual experts). A typical run of the model consists of the following steps, for a given condition ρ. (i) An initial condition for the social information is chosen at random according to the experimental ratios of initial conditions. (ii) With probability ρ, the true value zero is introduced into the sequence, and with probability 1 − ρ, an agent plays. (iii) The agent first draws its personal estimate x_p from a Cauchy distribution restricted to the allowed range of answers. (iv) The agent receives, as social information, the average m of the τ previous final estimates. (v) The agent chooses its sensitivity to social influence S, consistent with the results of Figs. 1 and 2. In particular, S is drawn from a Gaussian distribution of mean m_Gauss with probability P_Gauss, or takes the value S = 0 or S = 1 with probabilities P_0 and P_1, respectively. P_0 and P_Gauss have a linear cusp dependence on |x_p − m|, while P_1 is kept independent of it. For a given value of |x_p − m|, the average sensitivity is ⟨S⟩ = ⟨S⟩_0 + a |x_p − m|, where ⟨S⟩_0 and the slope a are extracted from Fig. 2; m_Gauss is hence given by (⟨S⟩ − P_1)/P_Gauss. The threshold is determined self-consistently by the condition that ⟨S⟩ reaches the value of the plateau observed beyond it in Fig. 2. The values of all parameters are reported in the SI Appendix. (vi) S being drawn, the final estimate is given by Eq. [2]. One starts again from step ii for the next agent.
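The steps above can be condensed into a short simulation. The parameter values below (Cauchy width, peak probabilities, cusp slope, initial condition) are illustrative placeholders rather than the calibrated values reported for the experiments, so only the qualitative trend (collective accuracy improving with ρ) should be expected from this sketch.

```python
# Minimal sketch of the agent-based model (steps i-vi); all parameter
# values are placeholders, not the calibrated ones.
import numpy as np

rng = np.random.default_rng(0)

def run_sequence(rho, tau=3, n_agents=20, x0=-2.0,
                 sigma_p=1.0,      # width of the Cauchy prior (placeholder)
                 p_keep0=0.3,      # P_0 at zero distance (placeholder)
                 p_adopt=0.1,      # P_1, independent of distance (placeholder)
                 slope=0.05,       # cusp slope reducing P_0 (placeholder)
                 s_sd=0.3):        # spread of the Gaussian part of S
    xs = [x0]                      # log-estimates; x = 0 is the truth
    finals = []
    while len(finals) < n_agents:
        if rng.random() < rho:     # step ii: a virtual expert plays
            xs.append(0.0)
            continue
        x_p = sigma_p * rng.standard_cauchy()    # step iii: personal estimate
        m = np.mean(xs[-tau:])                   # step iv: social information
        d = abs(x_p - m)
        p_keep = max(p_keep0 - slope * d, 0.05)  # step v: cusp in P_0
        u = rng.random()
        if u < p_keep:
            S = 0.0                              # keep
        elif u < p_keep + p_adopt:
            S = 1.0                              # adopt
        else:
            S = rng.normal(0.4, s_sd)            # compromise/overreact/contradict
        x = (1 - S) * x_p + S * m                # step vi: final estimate
        xs.append(x)
        finals.append(x)
    return finals

# Collective accuracy (median |x|) should improve as rho grows:
accuracy = {rho: np.median(np.abs([x for _ in range(500)
                                   for x in run_sequence(rho)]))
            for rho in (0.0, 0.2, 0.43, 0.8)}
```

Even with crude placeholder parameters, the simulated accuracy decreases (improves) with the fraction of experts, reproducing the qualitative effect discussed below.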

Comparison Between Theoretical and Experimental Results.

For all graphs, we ran 100,000 simulations, so that the error bars on the model predictions are negligible. Fig. 1 shows that the distribution of sensitivities to social influence obtained in the model (red curve) is, by construction, similar to the experimental one. Also by construction of the model (step v above), the cusp dependence of the sensitivity to social influence on |x_p − m| is well reproduced (Fig. 2, red curve with open symbols). We now address several nontrivial predictions of the model.

Estimates after social influence.

Fig. 1 (all values of ρ aggregated) and the SI Appendix (for each ρ) show that the distributions of estimates predicted by the model compare favorably with the experimental results (before and after social influence). Social influence leads to a sharpening of the distributions of estimates, and this effect increases as more information is provided to the group.

Impact of social information on collective performance.

Fig. 3 shows the collective performance (precisely defined above) and the width of the distribution of estimates for the different values of ρ and τ. The collective performance is zero when the distribution is centered on the true value, so that the closer it is to zero, the better. As expected, when ρ = 0%, no significant improvement is observed in the collective performance. Then, as ρ increases, the center gets closer to the true value, and the width decreases accordingly, as was also observed in the experiments in Japan (SI Appendix). Note that the experimental error bars (the SI Appendix describes their computation) decrease after social influence, reflecting the decrease of the width of the estimate distribution after social influence and the driving of people’s opinions by the virtual experts.
Fig. 3.

Collective performance, defined as the absolute value of the median of the estimates (A), and width of the distribution of estimates (B), for all ρ, before (blue) and after (red) social influence. Both improve with ρ after social influence, except for the collective performance at ρ = 0%. Full circles correspond to experimental data, while open circles represent the predictions of the full model. The black lines are the predictions of the simple solvable model presented in the SI Appendix. For values of ρ not tested experimentally, only model predictions are available.

The collective performance and the width of the estimate distribution predicted by the model (Fig. 3, open circles) are in good agreement with those observed in the experiment. The very small effect of τ, only reliably observed in the model in Fig. 3, is explained in the SI Appendix. As shown there, a simpler model, in which the dependence of S on |x_p − m| (Fig. 2) is neglected, can be solved analytically. It leads to fair predictions (black lines in Fig. 3), although it tends to underestimate the improvement in collective performance and does not capture the reduction of the distribution width already observed at ρ = 0%. This simpler model guided the design of our experiments, and its relative failure motivated us to investigate the phenomenon illustrated in Fig. 2 and included in the full model described above.

Impact of sensitivity to social influence on collective accuracy.

Fig. 4 (the SI Appendix shows an alternative representation) shows the collective accuracy for the five categories of behavioral responses identified in Fig. 1 and for the whole group, before and after social information has been provided. Before social influence, keeping leads to the best accuracy, while the adopting and overreacting behaviors are associated with the worst accuracy. However, as more reliable information is indirectly provided by the experts, and in particular at the highest percentages of experts, adopting and overreacting lead to the best accuracy after social influence (14, 19). The contradicting behavior is the only one for which the accuracy deteriorates after social influence. Finally, compromising leads to a systematic improvement of the accuracy as the percentage of experts increases (better than keeping at the largest values of ρ), very similar to that of the whole group. The collective accuracy for each behavioral category is again fairly well predicted by the model (we discuss below the disagreement between model predictions and experimental data in Fig. 4 for the adopters before social influence).
Fig. 4.

Collective accuracy (median distance to the truth of the individual estimates) before (blue) and after (red) social influence, against ρ, for the five behavioral categories identified in Fig. 1 and for the whole group (all). Adopting leads to the sharpest improvement and the best accuracy at large ρ. Full circles correspond to experimental data, while open circles represent the predictions of the model (including for values of ρ not tested experimentally).

The sensitivity to social influence and the collective accuracy are strongly related to confidence (SI Appendix). The more confident the subjects, the less they tend to follow the group and the better their accuracy, especially before social influence. This makes the link between confident (18), informed (16), and successful (17) individuals: they are generally the same persons. However, individuals who are too confident (keeping behavior; arguably because they have an idea about the answer, hence their good accuracy before social influence) tend to discard the opinions of others. Although this might sometimes work, especially if no external information is provided, they lose the opportunity to benefit from valuable information learned by others. Meanwhile, adopting and overreacting subjects have poor confidence and accuracy before social influence, arguably because they do not know much about the questions. Note that the model, which does not include any notion of confidence or heterogeneous prior knowledge, predicts a better accuracy before social influence for the adopting behavior than is observed. However, even at ρ = 0%, adopting subjects perform about as well as the other categories after social influence. In fact, if enough information is provided (large ρ), they are even able to reach almost perfect collective accuracy. Similar results have been found in the Japan experiment, as shown in the SI Appendix, which also contains similar graphs for the collective performance in France and Japan. 

Predicting the effect of incorrect information given to the human group by virtual agents.

We used the model to investigate the influence on the group performance of the quality and quantity of the information delivered to the group (i.e., the value V of the answer provided by the virtual agents and their percentage ρ). In our experiments, the group was provided with the (log transform of the) true value, V = 0 (the agents were experts). We expect a deterioration of the collective performance and accuracy as V moves too far away from zero and as a greater amount of incorrect information is delivered to the group (by increasing ρ). The optimum collective accuracy is, however, reached for a strictly positive V, whatever the value of ρ (SI Appendix), as also predicted by our simple analytical model. Hence, incorrect information can be beneficial to the group: providing the group with overestimated values can counterbalance the human cognitive bias to underestimate quantities (24).
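This prediction can be probed with a stripped-down version of the simulation in which the virtual agents provide a value V (in log units) instead of the truth, and the Cauchy prior is given a negative center to mimic the underestimation bias. All numbers below are our own illustrative choices, not the paper's calibrated values.

```python
# Stripped-down probe of the prediction: a slightly positive V can
# improve collective performance when estimates are biased low.
import numpy as np

rng = np.random.default_rng(7)

def performance_vs_V(V, rho=0.5, tau=3, n_agents=20, n_seq=400,
                     bias=-0.3,     # underestimation bias (placeholder)
                     sigma=1.0,     # Cauchy width (placeholder)
                     S=0.5):        # fixed sensitivity for simplicity
    finals = []
    for _ in range(n_seq):
        xs = [-1.0]                 # initial condition (placeholder)
        done = 0
        while done < n_agents:
            if rng.random() < rho:
                xs.append(V)        # virtual agent now answers V, not 0
                continue
            x_p = bias + sigma * rng.standard_cauchy()  # biased-low prior
            m = np.mean(xs[-tau:])
            x = (1 - S) * x_p + S * m
            xs.append(x)
            done += 1
            finals.append(x)
    return abs(np.median(finals))   # |median|: collective performance

perf_truth = performance_vs_V(0.0)  # virtual agents give the truth
perf_over = performance_vs_V(0.3)   # virtual agents slightly overestimate
```

In this toy setting, the slightly overestimated V pulls the biased-low median of the group toward the truth, so the collective performance with V = 0.3 beats the one with V = 0, which is the qualitative content of the model's prediction.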

Discussion

Quantifying how social information affects individual estimations and opinions is a crucial step to understand and model the dynamics of collective choices or opinion formation (26). Here, we have measured and modeled the impact of social information at individual and collective scales in estimation tasks with low demonstrability. By controlling the quantity and quality of the information delivered to the subjects, unbeknownst to them, we have been able to precisely quantify the impact of social influence on group performance. We also tested and confirmed the cross-cultural generality of our results by conducting experiments in France and Japan. We showed and justified that, when individuals have poor prior knowledge about the questions, the distribution of their log-transformed estimates is close to a Cauchy distribution. The distribution of the sensitivity to social influence S is bell-shaped (contradict, compromise, overreact), with two additional peaks exactly at S = 0 (keep) and S = 1 (adopt), which leads to the definition of robust behavioral traits, as confirmed by further observing the subjects inclined toward each of these behaviors. When subjects have little prior knowledge, we found that their sensitivity to social influence increases (linear cusp) with the difference between their estimate and that of the group, at variance with what was found in ref. 19 for questions where subjects had high prior knowledge. We used these experimental observations to build and calibrate a model that quantitatively predicts the sharpening of the distribution of individual estimates and the improvement in collective performance and accuracy as the amount of good information provided to the group increases. This model could be directly applied or straightforwardly adapted to similar situations where humans have to integrate information from other people or external sources. 
We studied the impact of virtual experts on group performance, a methodology allowing us to rigorously control the quantity (the percentage of virtual experts) and quality (the value of their answer) of the information provided to a group with little prior knowledge. These virtual experts can be seen either as an external source of information accessible to individuals (e.g., the Internet, social networks, media, etc.) or as a very cohesive (all sharing the same opinion) and overconfident (none of them revising their estimate) subgroup of the population, such as can happen with “groupthink” (27). When these experts provide reliable information to the group, a systematic improvement in collective performance and accuracy is obtained experimentally and is quantitatively reproduced by our model. Moreover, if the experts are not too numerous and the information that they give is slightly above the true value, the model predicts that social influence can help the group perform even better than when the truth is provided, as this incorrect information compensates for the human cognitive bias to underestimate quantities. We also showed that the sensitivity to social influence is strongly related to confidence and accuracy: the most confident subjects are generally the best performers and tend to weight the opinion of others less. When the group has access to more reliable information, this behavior becomes detrimental to individual and collective accuracy, as overly confident individuals lose the opportunity to benefit from this information. Overall, we showed that individuals, even when they have very little prior knowledge about a quantity to estimate, are able to use information from their peers or from the environment to collectively improve the group performance, as long as this information is not highly misleading. Ultimately, a better understanding of these influence processes opens perspectives for developing information systems aimed at enhancing cooperation and collaboration in human groups, thus helping crowds become smarter (28, 29). 
Future research will have to focus on the experimental validation of our theoretical predictions when providing incorrect information to the group, with the intriguing possibility of actually improving its performance. It would also be interesting to study the impact on the group performance of the number of estimates given as social information (instead of only their mean) and of revealing the confidence and/or reputation of those who share these estimates.
References  (11 in total)

1.  Log or linear? Distinct intuitions of the number scale in Western and Amazonian indigene cultures.

Authors:  Stanislas Dehaene; Véronique Izard; Elizabeth Spelke; Pierre Pica
Journal:  Science       Date:  2008-05-30       Impact factor: 47.728

2.  Is the true 'wisdom of the crowd' to copy successful individuals?

Authors:  Andrew J King; Lawrence Cheng; Sandra D Starke; Julia P Myatt
Journal:  Biol Lett       Date:  2011-09-14       Impact factor: 3.703

3.  How social influence can undermine the wisdom of crowd effect.

Authors:  Jan Lorenz; Heiko Rauhut; Frank Schweitzer; Dirk Helbing
Journal:  Proc Natl Acad Sci U S A       Date:  2011-05-16       Impact factor: 11.205

4.  Globally networked risks and how to respond.

Authors:  Dirk Helbing
Journal:  Nature       Date:  2013-05-02       Impact factor: 49.962

5.  A 61-million-person experiment in social influence and political mobilization.

Authors:  Robert M Bond; Christopher J Fariss; Jason J Jones; Adam D I Kramer; Cameron Marlow; Jaime E Settle; James H Fowler
Journal:  Nature       Date:  2012-09-13       Impact factor: 49.962

6.  Strategies for revising judgment: how (and how well) people use others' opinions.

Authors:  Jack B Soll; Richard P Larrick
Journal:  J Exp Psychol Learn Mem Cogn       Date:  2009-05       Impact factor: 3.051

7.  Opinion Formation by Social Influence: From Experiments to Modeling.

Authors:  Andrés Chacoma; Damián H Zanette
Journal:  PLoS One       Date:  2015-10-30       Impact factor: 3.240

8.  Quantifying the effects of social influence.

Authors:  Pavlin Mavrodiev; Claudio J Tessone; Frank Schweitzer
Journal:  Sci Rep       Date:  2013       Impact factor: 4.379

9.  Social influence and the collective dynamics of opinion formation.

Authors:  Mehdi Moussaïd; Juliane E Kämmer; Pantelis P Analytis; Hansjörg Neth
Journal:  PLoS One       Date:  2013-11-05       Impact factor: 3.240

10.  Modelling Influence and Opinion Evolution in Online Collective Behaviour.

Authors:  Corentin Vande Kerckhove; Samuel Martin; Pascal Gend; Peter J Rentfrow; Julien M Hendrickx; Vincent D Blondel
Journal:  PLoS One       Date:  2016-06-23       Impact factor: 3.240

