Rosa A Rossi-Goldthorpe1,2, Yuan Chang Leong3, Pantelis Leptourgos1, Philip R Corlett1,4. 1. Department of Psychiatry, Yale University, New Haven, Connecticut, United States of America. 2. Interdepartmental Neuroscience Program, Yale University, New Haven, Connecticut, United States of America. 3. Department of Psychology, University of Chicago, Chicago, Illinois, United States of America. 4. Wu Tsai Institute, Yale University, New Haven, Connecticut, United States of America.
Abstract
Self-deception, paranoia, and overconfidence involve misbeliefs about the self, others, and the world. They are often considered mistaken. Here we explore whether they might be adaptive, and further, whether they might be explicable in Bayesian terms. We administered a difficult perceptual judgment task with and without social influence (suggestions from a cooperating or competing partner). Crucially, the social influence was uninformative. We found that participants heeded the suggestions most under the most uncertain conditions, and that they did so with high confidence, particularly if they were more paranoid. Model fitting to participant behavior revealed that their prior beliefs changed depending on whether the partner was a collaborator or a competitor; however, those beliefs did not differ as a function of paranoia. Instead, paranoia, self-deception, and overconfidence were associated with participants' perceived instability of their own performance. These data are consistent with the idea that self-deception, paranoia, and overconfidence flourish under uncertainty and have their roots in low self-esteem rather than excessive social concern. The model suggests that spurious beliefs can have value: self-deception is irrational yet can facilitate optimal behavior. This occurs even at the expense of monetary rewards, perhaps explaining why self-deception and paranoia contribute to costly decisions that can spark financial crashes and devastating wars.
People lie to others, but they also lie to themselves. We might deceive others more convincingly by better deceiving ourselves [1]. Self-deception may also protect self-esteem [2]. We deceive ourselves into believing that we are kinder, fairer, and more proficient than average [1]. The accompanying overconfidence can be adaptive both intra- and interpersonally, increasing performance [2] and persuasiveness [3]. However, too much self-deception can culminate in deleterious consequences [4,5], and ultimately, delusional beliefs [6].

Paranoia, the belief that others have malicious intentions towards us, shares many of the hallmarks of self-deception [7]. It may protect self-esteem [8,9], and, by polarizing the social world, it may solidify group identity [7], via direct inflation of self-image, or indirectly, through overconfidence. Confident people are more convincing [10,11], and, in so being, they further reinforce their own misbeliefs [12,13]. Paranoid beliefs are of course social, in that they are about powerful and nefarious others. The coalitional cognition account of paranoia posits that it arises from the excessive operation of an evolved mechanism of coalitional threat detection, which manages reputations and interactions with groups of others [7]. It has some support [14]. However, it is not clear that apparently complex social behaviors are necessarily underwritten by mechanisms dedicated to social cognition [15,16]. Paranoia was found to be unrelated to betrayal aversion (a heightened aversion to risky situations whose outcomes are contingent upon social rather than non-social factors), a finding that does not support the coalitional cognition model [17]. Instead, paranoia may arise from domain-general mechanisms of uncertainty-weighted belief updating [18,19].
Here we attempted an explicit separation of social and non-social influences on belief updating and paranoia, in order to shed light on whether paranoia arises from socially specific processes or domain-general cognitive mechanisms. More broadly, given the potential social and non-social loci of self-deception, and the possible relationships between delusions and self-deception, we aimed to triangulate the relationships between paranoia, self-deception, and overconfidence, using a perceptual decision-making task, self-ratings of paranoia, and computational modeling of behavior. In so doing, we hoped to adjudicate between competing accounts. For example, we could relate paranoia, self-deception, and overconfidence to the social processes in our task and model, and that would favor social accounts of these phenomena.

Self-deception flourishes under uncertainty [20], and in laboratory tasks, paranoid individuals expect more volatility but also fail to learn appropriately from volatility [18]. It is as yet unclear whether paranoia and self-deception share underlying psychological mechanisms, and whether they are similarly sensitive to uncertainty or social affiliative processes. A shared mechanism might suggest that paranoia could amplify self-deceptive behaviors, thus bolstering misbeliefs and causing more distress.

To investigate the relationships between paranoia and self-deception, we adapted a perceptual decision-making task with varying levels of stimulus ambiguity. The task has two sources of information, one social and one non-social, allowing us to dissect their differential contributions to decision-making and to explore interactions with paranoia. Using computational modeling that explicitly quantifies the contributions of social and non-social information to decisions, we sought to delineate whether and how self-deception and overconfidence are related to paranoia.
We hypothesized that paranoia would be associated with enhanced self-deception, as well as with higher overall confidence, given their shared characteristics and relationship with delusional beliefs. In prior work we showed that non-social mechanisms contributed to paranoia, whilst others have posited a specifically social, coalitional mechanism. We sought to adjudicate by examining the impact of group identity on perceptual decision-making. If group identity interacts with paranoia status, then we would favor coalitional accounts. If instead non-social mechanisms prevail, then we would favor a domain-general explanation of paranoia.
Methods
Ethics statement
All experiments were approved by the Yale University Human Investigation Committee. Written informed consent was provided by all participants.
Behavioral task
Participants classified merged images of faces and scenes as containing more face or more scene, and they expressed their confidence in their choice [21]. These “chimeric” images ranged from 100% face and 0% scene to 100% scene and 0% face over 80 trials (C1 Phase–No Partner). After each classification they rated their confidence about their decision on a 1–7 scale. Participants were required to answer each trial before proceeding to the next. After the 80 trials, they were informed that they would be working with a partner, either a collaborator (N = 329) or a competitor (N = 334), who would be placing bets on whether the next image would be mostly face or mostly scene (Fig 1A) [21]. In the cooperation condition, the participant would receive a monetary bonus if their partner’s bet was correct, in addition to the earnings from correctly classifying the image (10 cents if both they and the partner were correct). In the competition condition, participants would lose money if their partner’s bet was correct (4 cents if their classification was correct; 7 cents if their classification was incorrect). The payoff matrix for each condition is given in Fig 1B. Participants were not told that the partner or opponent had been given any more information than them, and importantly, the bet was made before the image for the trial was shown. Crucially, the reward-maximizing strategy is to classify the images correctly. Participants were informed, in both phases of the experiment, that they would be compensated based upon how many images they classified correctly. Participants saw their partner’s bet before seeing the image, then provided their classification and confidence again (C2 Phase). They classified the same images they saw in C1. In experiment 1, the bets in the C2 phase were correct exactly 50% of the time. Note that in the C1 phase participants only classified the image and there were no bets–the bets were added in the C2 phase.
Fig 1
Task structure for C2 phase and interaction effect.
A, sequence of the task for the 2 conditions. B, payoff matrices for both conditions. Participants should ideally classify the image objectively (as they did in the initial classification phase) without using the bet to inform their decision. C, psychometric functions showing the percentage of scene in the image versus the probability of responding scene, averaged over all participants. D, participants’ choices displayed a motivational bias. The bet x group interaction shows that participants in the cooperation group tended to align with the bet (higher probability of answering scene when the bet was scene), while the competition group tended to disagree with the bet (higher probability of responding scene when the bet was face). E, response patterns for the two experimental conditions. Self-deception is defined differently based upon experimental group.
Questionnaires
Participants reported demographic information (age, gender, income, educational level, ethnicity, and race) as well as mental health information (diagnosis, medication use), and completed the Revised Green et al. Paranoid Thoughts Scale (R-GPTS) [22], Beck’s Anxiety Inventory (BAI) [23], and Beck’s Depression Inventory (BDI) [24]. We included free response questions to detect bot respondents. Participants who scored 11 or higher on the R-GPTS persecution scale were classified as high paranoia, as this is the recommended clinical cutoff [22]. Participants who scored above 16 on Beck’s Anxiety Inventory were classified as high anxiety, based on the recommended cutoff for clinically significant anxiety [25].

Participants (N = 719) were recruited for experiment 1 online via CloudResearch. Participants who declined to answer more than 30% of the survey questions were automatically rejected. Nonsensical free responses were rejected (N = 48). For experiment 1, our total sample (N = 663) of complete submissions included 334 participants in the competition condition and 329 participants in the cooperation condition. For experiment 2, we applied the same criteria. Participants (N = 327) were recruited through CloudResearch, this time using the new Data Quality feature. Only 3 submissions were excluded.
Behavioral analysis
Motivational bias was assessed with a generalized linear mixed-effects model (GLME) using the lme4 package in R. GLMEs were also fit to choice data using only scene percentage as a predictor, in order to confirm that classifications were related to the objective scene percentage (rather than random responding).

If a classification changed between sessions (C1 and C2) to either agree with the bet (cooperation condition) or to disagree with the bet (competition condition), the response was self-deceptive [26]. Response patterns determining a self-deceptive trial are shown in Fig 1E. The raw self-deception score for a participant was computed as the number of self-deceptive responses divided by the total number of responses. To explore whether participants were merely guessing when they changed their minds to conform to or defect from the bets, we multiplied their number of self-deception trials by their normalized confidence on those trials:

CWSD = N_SD × c̄_SD, (1)

where N_SD is the number of self-deceptive trials and c̄_SD is the mean confidence on those trials, normalized to [0, 1]. We will refer to this metric, combining the amount of self-deception with confidence while self-deceiving, as confidence-weighted self-deception (CWSD).
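As an illustration, the raw score and CWSD can be computed from trial-level data as in the following minimal Python sketch (the original analyses were run in R; the variable names, and the assumption that confidence is normalized by dividing by the scale maximum, are ours):

```python
def self_deception_scores(changed, agrees_with_bet, cooperation, confidence,
                          max_conf=7):
    """Compute the raw self-deception score and CWSD for one participant.

    changed[i]         -- True if the C2 response differs from the C1 response
    agrees_with_bet[i] -- True if the C2 response matches the partner's bet
    cooperation        -- True for the cooperation group, False for competition
    confidence[i]      -- 1-7 confidence rating on the C2 response
    """
    n_trials = len(changed)
    # A trial is self-deceptive if the answer changed toward the bet
    # (cooperation group) or away from it (competition group).
    sd = [c and (a if cooperation else not a)
          for c, a in zip(changed, agrees_with_bet)]
    raw = sum(sd) / n_trials                       # raw self-deception score
    if not any(sd):
        return raw, 0.0
    # Mean confidence on self-deceptive trials, normalized to [0, 1]
    # (assumption: normalization = division by the scale maximum).
    mean_conf = sum(confidence[i] for i in range(n_trials) if sd[i]) / sum(sd)
    return raw, sum(sd) * (mean_conf / max_conf)   # (raw score, CWSD)
```

For example, a cooperation-group participant who changes 2 of 4 answers toward the partner's bet with maximal confidence receives a raw score of 0.5 and a CWSD of 2.0.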
Computational modeling
We adapted open-access code for a Hierarchical Gaussian Filter (HGF) with 2 streams of processing in MATLAB 2018a (MathWorks, Natick, MA). The HGF includes a generative model of the agent’s inferences (perceptual model) and a response model incorporating their action choices. Our perceptual model had two layers of beliefs, split into separate social and non-social arms, and the response model was a softmax for binary choices [27,28].

The first level of the generative model (x1,s and x1,ns) represents beliefs about the accuracy of the bet (1 = correct, 0 = incorrect) and the image category (1 = scene, 0 = face), respectively. The second level describes the perceived tendency of the first level: the tendency for the bet to be correct (x2,s) and the tendency of the image category (x2,ns). The 2nd level has a Markov-like dependence, where the estimates of x2,s and x2,ns are updated from their respective values on the previous time step according to a Gaussian random walk with variance ω:

x2,i(t) ~ N(x2,i(t−1), ωi), i ∈ {s, ns}. (2, 3)

The first-level beliefs are computed directly from the 2nd level at time t, through a logistic sigmoid:

x1,i(t) ~ Bernoulli(s(x2,i(t))), where s(z) = 1 / (1 + e^(−z)). (4, 5)

The specific formulations of Eqs 4 and 5 were deduced from model comparison. Since the current classification might be influenced by previous images, we incorporated a recency bias that weighted the non-social prediction towards the previous image, depending upon its ambiguity (Fig 2C). The recency bias is based upon the amount of ambiguity in the previous image; as a result, the recency bias towards a particular classification is maximal when ambiguity is minimal, and zero when the previous image is 50% face and 50% scene. We map this recency bias to a linear function of the scene percentage of the image, with maximum value 1 and minimum −1:

r(t) = (scene%(t−1) − 50) / 50.
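To make the non-social stream concrete, the sketch below combines the second-level tendency with the recency bias for one trial. It is a Python re-expression, not the authors' MATLAB code; the exact form of the trade-off between the recency bias and the tendency (here a 1/(1 + ωns) weight) is our assumption, chosen only to reproduce the qualitative effect described in the text.

```python
import math

def sigmoid(z):
    """Logistic sigmoid mapping a 2nd-level belief to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def nonsocial_prediction(mu2_ns, prev_scene_pct, omega_ns):
    """First-level prediction for the non-social stream (illustrative).

    mu2_ns         -- current 2nd-level belief (tendency of image category)
    prev_scene_pct -- scene percentage of the previous image (0-100)
    omega_ns       -- variance of the Gaussian random walk on the 2nd level
    """
    # Recency bias: linear in the previous image's scene percentage,
    # -1 at 0% scene, 0 at 50/50, +1 at 100% scene.
    recency = (prev_scene_pct - 50.0) / 50.0
    # Assumed trade-off: larger omega_ns down-weights the recency bias,
    # so the 2nd-level tendency dominates the prediction.
    weight = 1.0 / (1.0 + omega_ns)
    return sigmoid(mu2_ns) * (1.0 - weight) + 0.5 * (1.0 + recency) * weight
```

With a neutral tendency (mu2_ns = 0) and an unambiguous previous image, a small ωns lets the recency bias pull the prediction toward the previous category; a large ωns pulls it back toward the tendency, matching Fig 2C.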
Fig 2
A, the 2-level HGF with parallel processing streams for social and non-social stimuli. The choice data are fed into the model, which is inverted to obtain parameter estimates for an individual. The perceptual model includes both the social and non-social information, which is then used to compute the combined belief, b. This combined belief is the input to the response model. More details are given in the methods. B, increasing ωs causes the prediction about the accuracy of the bet to become closer to an extreme (1 or 0). This tilts the combined belief towards this prediction. C, increasing ωns causes the recency bias to have less of an effect on the prediction about the image categorization, while the perceived tendency (second-level belief) dominates the prediction. The effect is stronger when the recent image is very ambiguous.
On the social side, we explored how adding a bias term to the logistic sigmoid connecting x2,s and x1,s might help explain motivated perception. We incorporated an additive term on the exponent (shifting the inflection point of the psychometric curve), a multiplicative term on the exponent (shifting the steepness of the psychometric curve), as well as a combination of those terms. The multiplicative term provided a better fit, but we determined that this term had a high correlation with ωs (Pearson’s r = 0.998, p < 2.2e-16). As a result, we replaced this multiplicative term with ωs (perceptual model P3), and the best-fitting model had a bias term that was a linear scaling, ηωs, as a multiplicative term on the exponent (Fig 2B). Although the mapping from the 2nd to the first level differed between the two streams, the computations by which the beliefs evolved on the 2nd level were the same for the 2 processing streams. The belief at the second level (μ2) is updated by the precision-weighted prediction error from the first level:

μ2(t) = μ2(t−1) + δ1(t) / π2(t),
where δ1 is the prediction error at the first level and π2(t) is the precision of the posterior second-level belief. The first-level predicted belief (μ̂1) is determined by the logistic sigmoid above (Eqs 4 and 5), and the prediction error incorporates the model inputs (bet accuracy and scene percentage) for the respective processing streams on the current trial.

In order to combine the two information streams, the belief b(t) was computed as a linear combination of the predictions of the first-level beliefs, weighted by their precisions:

b(t) = (πs μ̂1,s(t) + πns μ̂1,ns(t)) / (πs + πns).

This combined belief was then fed into a softmax function to compute the probability of agreeing with the bet:

p(y(t) = 1 | b(t)) = b(t)^β / (b(t)^β + (1 − b(t))^β),

where β captures decision noise. We also examined the effect of adding a term to weight the two streams in the response model, as in Diaconescu et al. (2014) [27,29], which ultimately did not fit our behavioral data (S6 Table). Initial values for all parameters are in S7 Table.
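The per-trial logic above can be sketched in a few lines of Python. This is illustrative only: the precision-weighted update and the unit-square response function follow common HGF-toolbox conventions rather than the paper's exact MATLAB implementation, and the parameter names are ours.

```python
import math

def hgf_trial_update(mu2, pi2, u):
    """One HGF-style trial for a single stream.

    mu2 -- 2nd-level belief (log-odds of category 1)
    pi2 -- precision of the posterior 2nd-level belief
    u   -- binary input this trial (bet accuracy or image category)
    """
    mu1_hat = 1.0 / (1.0 + math.exp(-mu2))  # first-level prediction (sigmoid)
    delta1 = u - mu1_hat                    # first-level prediction error
    mu2_new = mu2 + delta1 / pi2            # precision-weighted belief update
    return mu2_new, mu1_hat

def combined_belief(mu1_s, pi_s, mu1_ns, pi_ns):
    """Precision-weighted linear combination of the two streams' predictions."""
    return (pi_s * mu1_s + pi_ns * mu1_ns) / (pi_s + pi_ns)

def p_agree(b, beta=2.0):
    """Softmax-style response probability for a binary choice; larger beta
    means less decision noise (more deterministic responding)."""
    return b ** beta / (b ** beta + (1.0 - b) ** beta)
```

When the two streams disagree but are equally precise, the combined belief sits at 0.5 and the response is at chance; as one stream's precision grows, the choice probability is pulled toward that stream's prediction.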
Statistics
Statistical analyses were performed in RStudio, Version 1.2.5033. Model parameters and self-deception scores were analyzed using ANOVAs, with Bonferroni correction for multiple comparisons (as needed). We performed ANCOVAs for model parameters using three sets of covariates: (1) demographics (age, gender, ethnicity, and race); (2) mental health factors (medication usage, diagnostic category); and (3) metrics and correlates of global cognitive function (educational attainment, income).
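The Bonferroni-adjusted p-values (pbonf) reported in the Results correspond to the standard adjustment, sketched here in Python for clarity (the actual analyses were run in R):

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: multiply each raw p-value by the number of
    tests (capped at 1), then compare against the family-wise alpha."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    significant = [p_adj <= alpha for p_adj in adjusted]
    return adjusted, significant
```

For two tests with raw p-values of 0.01 and 0.04, the adjusted values are 0.02 and 0.08, so only the first survives at alpha = 0.05.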
Results
Behavioral data
As the percentage of scene in the chimera increased, the probability of responding scene followed an s-shaped psychometric curve, indicating that in general, participants were able to categorize the chimeras accurately (Fig 1C). However, there was a motivational bias: the bets influenced the participants’ choices differently based on experimental condition (cooperation vs. competition, a significant bet x group interaction, z = 8.802, p<2e-16, b = 0.131875, 95% CI: [0.1025, 0.16124]). Participants in the cooperation condition were more likely to agree with the bet while participants in the competition condition were more likely to disagree with the bet (Fig 1D), indicating that participants were motivated to respond based on their relationship with the partner.
Paranoia and self-deception
We defined a self-deceptive response as a change in response between sessions C1 and C2 to either agree with the bet of the collaborator (cooperation condition) or to disagree with the bet of the opponent (competition condition). For each participant, the raw self-deception score was computed as the number of self-deceptive responses divided by the total number of responses. Using the response pattern of self-deception (Fig 1E) as well as our confidence-weighted self-deception metric (Eq 1), we investigated the relationship between self-deception and paranoia. Analysis of variance revealed a main effect of paranoia (high or low) on self-deception scores and a main effect of group (competition or collaboration), but no paranoia-by-group interaction for self-deception. High paranoia participants made more self-deceptive choices (self-deception score; F(1, 659) = 13.65, pbonf = 0.0007155, ηp² = 0.02045), and were more confident on those trials (mean confidence on SD trials; F(1, 620) = 81.691, pbonf < 2e-16, ηp² = 0.116). The difference between groups remained significant when we examined the confidence-weighted self-deception score (CWSD; see Methods; F(1, 620) = 58.0612, pbonf = 2.8659e-13, ηp² = 0.0859; Fig 3B and 3C). We also found that the cooperation group had increased confidence-weighted self-deception (cooperation vs. competition; F(1, 620) = 15.0085, pbonf = 3.442e-4, ηp² = 0.02673): people were more likely to confidently self-deceive to conform to their partner’s bet in the cooperation group than to defect from the bet in the competition group. The absence of a group-by-paranoia interaction indicates that in- versus out-group membership was not differentially impacted by paranoia (Fig 3D and 3E). This contradicts the coalitional model of paranoia, which would predict increased self-deception among high paranoia participants in the competition condition compared to high paranoia participants in the cooperation condition.
Fig 3
Self-deceptive responses occurred more with ambiguous images and are different between paranoia groups.
A, the high paranoia group self-deceived more on slightly less ambiguous images than the low paranoia group. B, the high paranoia group had elevated raw self-deception scores (percentage of self-deceptive responses). C, mean confidence on those self-deceptive trials was elevated in high paranoia participants. D, the confidence-weighted self-deception, which controls for individual variation in baseline-confidence, is higher in the high paranoia group. E, confidence-weighted self-deception is also elevated in the cooperation group relative to the competition group. *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001.
Which trials engender self-deception?
Across paranoia groups, most self-deceptive responses occurred for the most ambiguous images (50/50 scene-face, Fig 3A). A GLME showed a significant interaction between image ambiguity and paranoia group (GLME: z = 5.853, p = 4.84e-09, b = 0.002643, 95% CI: [0.0018, 0.00353]): the high paranoia group evinced self-deception to slightly less ambiguous stimuli.

The task is structured so that participants should ideally evaluate each image independently and ignore the bet. However, the existence of a motivational bias implies that individuals must be attributing some value to the bet, and as a result, they might update this value as they gain more information across trials. Participants might also erroneously infer that image classification depends on previous images, and update their beliefs about the images accordingly. In particular, these inaccurate associations might contribute to self-deceptive behavior.

To explore this idea, we used a belief-updating model to draw inferences from participant choices regarding their latent beliefs about the task and stimuli. The Hierarchical Gaussian Filter (HGF) allows for the investigation of how hierarchical beliefs (in the perceptual model) influence choices (response model). The 2-arm HGF with integration of social and non-social information [28] allows us to separate the influence of the bet and the image and to examine the higher-level beliefs and parameters governing the perception of those cues. The generative model is outlined in Fig 2, and parameter descriptions can be found in S3 Table.

We found a significant difference in the initial beliefs (priors) at x2,s between the cooperation and competition groups. The cooperation group had a significantly elevated μ02,s compared to the competition group (F(1, 654) = 16.7405, pbonf = 0.000145, ηp² = 0.03919; Fig 4A).
The elevation in μ02,s in the cooperation group represents a stronger initial belief that the bet would be accurate, aligning with, and perhaps underwriting, the observed motivational bias effect. This is also an important manipulation check: participants (regardless of their paranoia status) weighted the suggestion of a collaborator more strongly than that of a competitor. There was a significantly increased ωns in high paranoia participants (F(1, 654) = 18.6837, pbonf = 5.349e-5, ηp² = 0.027) (Fig 4B). While ωns controls the variance of the second-level belief, the interesting effect of this parameter is its interplay with the recency bias, a bias based upon the ambiguity of the previous image (Fig 2C). When ωns is greater, the recency bias term on the 1st level (the influence of the sensory inputs) impacts image classification less. This means that the ambiguity of the previous image has less impact upon the prediction for the next image, while the higher-level associations about the tendency of the image dominate the prediction. The sensory information contributes less to the classification while the higher-level associations contribute more, which is less optimal in a task where stimuli are independent. This could represent a lack of trust in one’s abilities or sensory experiences, resulting in reliance upon the higher-level associative beliefs, independent of others’ advice. The high paranoia group also evinced an elevated ωs (F(1, 654) = 9.425, pbonf = 0.006687, ηp² = 0.0133), showing increased variance of the second-level belief governing the tendency of the bet to be accurate. Overall, this represents a more unstable belief about the perceived bet accuracy (Fig 4C). In both groups we found significant correlations between ωns and confidence-weighted self-deception (Fig 4D).
While the low paranoia group evinced a significant correlation (Pearson’s r = 0.282, p = 1.535e-9), the correlation was significantly stronger in the high paranoia group (Pearson’s r = 0.462, p = 6.163e-11; Fisher’s z-transformed r, p = 0.0228). Self-deception, independent of paranoia level, was driven by ωns (perceived unreliability of one’s own choices); that drive was stronger in high paranoia participants.
Fig 4
Estimated parameters show differences based on paranoia group and experimental group.
A, the cooperation group has an elevated prior for the social information (μ02,s) compared to the competition group. B, the variance of the perceived tendency of image categorization (ωns) is increased in the high paranoia group, as is C, the variance of the perceived tendency of bet accuracy (ωs). D, ωns (variance of x2,ns) is correlated with confidence-weighted self-deception; the correlation is statistically stronger in the high paranoia group. *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001.
Bet manipulation
In order to better characterize the impact of social influences on perceptual decisions, we manipulated the accuracy of the bets in a follow-up study (N = 324). In experiment 1, bets were 50% accurate; here, we increased bet accuracy to 75%. This manipulation significantly impacted self-deception and confidence. The number of self-deceptive trials and normalized confidence in self-deception decreased in the high paranoia group relative to experiment 1 (independent samples t-test; raw self-deception: t(192.12) = 3.0756, pbonf = 0.004814, Cohen’s d = 0.358, 95% CI: [0.9612, 4.3981]; confidence: t(96.42) = 2.5655, pbonf = 0.02368, Cohen’s d = 0.4157, 95% CI: [0.01886, 0.1478]), while remaining unchanged in low paranoia participants (independent samples t-test; raw self-deception: t(679.42) = 2.1347, pbonf = 0.06628, Cohen’s d = 0.1488, 95% CI: [0.0868, 2.076]; confidence: t(461.33) = -0.934, pbonf = 0.7016, Cohen’s d = 0.0758, 95% CI: [-0.0550, 0.0195]). This indicates that the high paranoia participants were sensitive to their partners’ abilities (S3 Fig).

We fit the same 2-layer HGF to the new dataset. The difference in ωns in high paranoia that we found in the original experiment was preserved in the follow-up (F(1, 319) = 6.4532, pbonf = 0.014, ηp² = 0.0208), and there was no significant interaction between paranoia group and bet accuracy (F(1, 977) = 0.451, pbonf = 1; S4C Fig). High paranoia participants evinced elevated variability in the tendency to perceive the image as face or scene, manifest as an overweighting of stimulus tendency rather than current sensory evidence. In contrast, there was no difference in ωs (F(1, 319) = 0.0225, pbonf = 1, ηp² = 0.000197; S4B Fig) between paranoia groups.
This suggests that increasing the partner’s accuracy caused high paranoia participants to perceive less social volatility and to behave less self-deceptively. We found no difference between the two experiments in the number of trials on which participants could have self-deceived (instances in which the bet differed from C1 classifications; independent samples t-test: t(532.55) = 0.21728, p = 0.8281, Cohen’s d = 0.0159, 95% CI: [-0.00617, 0.0077]). Furthermore, as in experiment 1, stimulus ambiguity drove self-deception: the 50/50 scene/face stimuli were most likely to engender self-deception. However, in experiment 2, self-deception to less ambiguous cues was less pronounced. We replicated the effect of the group manipulation (cooperation versus competition) on initial social beliefs: there was a main effect of experimental group, although it did not survive Bonferroni correction for multiple comparisons (F(1, 319) = 4.6709, puncorrected = 0.03112, pbonf = 0.09336, ηp² = 0.014; S4A Fig). Again, these prior beliefs did not differ between the high and low paranoia participants. This lack of interaction is hard to reconcile with models of paranoia that rely on coalitional cognition, since we found no effect of paranoia on cooperation or competition [7].
Model selection and validation
For the model space shown in S4 Table, we compared a variety of perceptual and response models. Due to the high number of models, we used family-wise comparison to narrow down a winning perceptual and winning response model. Family BMS for the perceptual model space yielded a winning model of P1 (HGF with a scaled-ω), with a protected exceedance probability of 1 (S5 Table). We found a winning response model of R1 (softmax with decision-noise only) with a protected exceedance probability of 0.9786 (S6 Table). Correspondingly, our winning model was M1, which used a P1 perceptual model and R1 response model.
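The winning response model (R1) maps beliefs onto choice probabilities through a softmax with a single decision-noise parameter. A minimal sketch of that mapping for a binary choice, not the exact HGF-toolbox implementation (the function name and parameterization here are illustrative):

```python
import math

def choice_prob_scene(mu_hat, beta):
    """Probability of responding 'scene' given a posterior belief mu_hat in
    [0, 1] that the image is scene-dominant; beta is the inverse decision
    noise. For a binary choice the softmax reduces to a logistic centered
    on an uninformative belief of 0.5."""
    return 1.0 / (1.0 + math.exp(-beta * (2.0 * mu_hat - 1.0)))
```

As beta approaches 0 responding becomes random (p = 0.5 regardless of belief); as beta grows, choices become deterministic in the belief, so the single parameter captures decision noise.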
Model simulations
We used each individual parameter set found in experiment 1 to simulate responses for each participant; the simulated responses were then used to invert the original model, validating our findings regarding group differences in parameters. Each parameter of interest was significantly correlated with its simulated companion (S1 Fig; ωns: r = 0.2386473, p = 4.864e-10; ωs: r = 0.7283063, p < 2.2e-16; log-transformed μ02,s: r = 0.7815166, p < 2.2e-16). The group differences based on paranoia group membership for both ωns (F(1, 656) = 6.072, p = 0.014, η² = 0.0087) and ωs (F(1, 656) = 15.95, p = 7.24e-5, η² = 0.0222) were preserved in the simulated parameter sets, as was the main effect of experimental group on the social priors (μ02,s; F(1, 651) = 12.51, p = 0.000432, η² = 0.0186). Successful parameter recovery and recapitulation of the observed group effects reassure us that we have an appropriate model.
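The simulate-then-refit logic of parameter recovery can be illustrated with a deliberately simple toy model (a single Bernoulli choice-rate parameter standing in for the HGF; all names and sizes here are illustrative): generate responses from each fitted parameter set, re-fit the model to the synthetic data, and check that recovered parameters correlate strongly with the generating ones.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_choices(p, n_trials=300):
    """Simulate binary responses from a Bernoulli rate p."""
    return (rng.random(n_trials) < p).astype(float)

def refit(choices):
    """Maximum-likelihood estimate of the Bernoulli rate."""
    return choices.mean()

# One toy 'participant' per generating parameter value
true_p = rng.uniform(0.2, 0.8, size=100)
recovered_p = np.array([refit(simulate_choices(p)) for p in true_p])

# A high true-vs-recovered correlation indicates the parameter is identifiable
recovery_r = np.corrcoef(true_p, recovered_p)[0, 1]
```

With a richer model like the HGF the refit step is a full model inversion rather than a closed-form estimate, but the validation criterion is the same: recovered parameters should track the generating ones, and group effects should survive the round trip.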
Bayesian versus non-Bayesian models
The Rescorla and Wagner (1972) rule places prediction error at the center of learning [30]. Cues have associations with valued outcomes, and those associations are updated by mismatches between the associative predictions and the experienced outcomes (prediction errors), weighted by fixed associability parameters that correspond to the salience of the cues and outcomes [31]. Despite its success, the model is non-normative and heuristic [30]. It does not conform to the principles of probability theory and often performs poorly in real-world situations where outcomes and states must be inferred under uncertainty [30]. The Rescorla-Wagner rule fit our data poorly compared with the HGF-type models, even when accounting for model complexity (S4 and S5 Tables). A Bayesian model that incorporates uncertainty provided a better account of self-deception and overconfidence and their association with paranoia.
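For reference, the Rescorla-Wagner update is a fixed-learning-rate, prediction-error rule; in code (a generic sketch with an assumed learning rate and starting value, not the fitted comparison model itself):

```python
def rescorla_wagner(outcomes, alpha=0.1, v0=0.5):
    """Track an associative value v by moving it a fixed fraction alpha of
    each prediction error (outcome - v). No uncertainty estimate is carried
    along, which is why the rule cannot adapt its learning rate the way a
    hierarchical Bayesian learner like the HGF can."""
    values = [v0]
    v = v0
    for outcome in outcomes:
        v = v + alpha * (outcome - v)  # delta-rule update
        values.append(v)
    return values
```

With a stream of identical outcomes the value converges geometrically toward that outcome at rate (1 - alpha) per trial, regardless of how volatile or ambiguous the environment actually is.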
Self-esteem, paranoia & overconfidence
The initial classification phase (C1) measures each participant’s objective classification ability. We ranked participants on this metric. Next, we ranked participants on their perceived choice unreliability during C2 (ωns). Computing the difference in these ranks gives a metric of participants’ insight into their performance: a large difference (rank of ωns >> rank of C1 score) corresponds to an overly pessimistic view of oneself. We find a significant correlation between the rank difference and confidence-weighted self-deception (High paranoia: Pearson’s r = 0.554, p = 6.163e-16; Low paranoia: Pearson’s r = 0.343, p = 1.618e-13), which suggests that low self-confidence and diminished ability increase the incidence and confidence of self-deceptive responses (Fig 5A). These correlations were significantly different (Fisher’s z-transformed r, p = 0.0027), suggesting that paranoid participants take the opportunity to bolster their view of themselves. Overconfidence and self-deception protect against negative self-image, but at an economic cost.
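The rank-difference metric and the comparison of two independent correlations via Fisher's r-to-z transform can be sketched as follows (variable names and the sample sizes used in the test are placeholders, not the study's actual values):

```python
import numpy as np
from math import atanh, sqrt
from statistics import NormalDist

def rank_difference(omega_ns, c1_accuracy):
    """Insight metric: rank of perceived choice unreliability (omega_ns)
    minus rank of objective C1 accuracy. Higher values = an overly
    pessimistic self-view. Ranks are 0-based; ties are broken arbitrarily."""
    rank = lambda x: np.argsort(np.argsort(np.asarray(x)))
    return rank(omega_ns) - rank(c1_accuracy)

def fisher_z_test(r1, n1, r2, n2):
    """Two-sided test that two independent Pearson correlations differ,
    using Fisher's r-to-z transform and a standard-normal reference."""
    z = (atanh(r1) - atanh(r2)) / sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return z, p
```

For tied data a rank function that averages ties (e.g. scipy.stats.rankdata) would be preferable to the double-argsort trick used here.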
Fig 5
A, Difference between the relative perceived choice reliability (rank of ωns) and relative objective classification performance (rank of C1 accuracy) is correlated with confidence-weighted self-deception. A higher ωns represents a lower perceived choice reliability (analogously, a higher perceived choice unreliability), so individuals scoring high on this rank difference have high perceived unreliability and low ability. This relationship is significantly stronger in the high paranoia group compared to the low paranoia group. B, High paranoia is responsible for elevated ωns independent of anxiety. The high paranoia high anxiety group showed similar values of ωns to the high paranoia low anxiety group, while the low paranoia high anxiety group had a significantly decreased ωns.
Demographics & confounds
Paranoia often correlates with demographic features and other affective states. We found significant correlations between depression and paranoia (Pearson’s r = 0.529, p < 2.2e-16), and between anxiety and paranoia (Pearson’s r = 0.612, p < 2.2e-16). To dissect the impact of anxiety and depression on the key model parameter (ωns), we performed a multiple regression with self-ratings of paranoia, anxiety, and depression as predictors. Both GPTS (paranoia) and BAI (anxiety) scores were significant predictors of ωns (βGPTS = 0.19, 95% CI: [0.1, 0.29], t = 3.991, p = 7.34e-5; βBAI = 0.2, 95% CI: [0.06, 0.34], t = 2.792, p = 0.00539), while BDI (depression) score was not (βBDI = -0.12, 95% CI: [-0.25, 0.01], t = -1.766, p = 0.078). A Farrar-Glauber test for multicollinearity showed collinearity between the BDI and BAI scores in particular (Overall collinearity: χ2 = 981.0465, p < 0.05; Farrar-Glauber F-test: FGPTS = 381.9105 (p < 0.01), FBDI = 1239.6627 (p < 0.01), FBAI = 1520.1804 (p < 0.01); Partial correlations: ρBDI,BAI = 0.729 (p < 2.2e-16), ρGPTS,BAI = 0.367 (p < 2.2e-16), ρBDI,GPTS = 0.07 (p = 0.08)). To dissociate the effects of anxiety and paranoia on these parameters, we split participants into four groups: high paranoia and high anxiety (1), high paranoia and low anxiety (2), low paranoia and high anxiety (3), and low paranoia and low anxiety (4). The high paranoia/high anxiety group (1) was no different from the high paranoia/low anxiety group (2) in ωns (F(3, 659) = 6.457, p = 0.00026, η² = 0.0316; Post-hoc Tukey test, p = 0.345, 95% CI: [-0.0481, 0.0137]), while the low paranoia/high anxiety group (3) had a significantly lower ωns than the high paranoia/high anxiety group (1) (Post-hoc Tukey test, p = 0.0034, 95% CI: [0.00409, 0.0727]; Fig 5B).
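Collinearity among predictors like these can also be screened with variance inflation factors, a standard complement to the Farrar-Glauber test. A minimal numpy sketch on simulated data (the simulated predictors are illustrative, not the study's scale scores):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X:
    1 / (1 - R^2) from regressing that column on all the other columns
    (with an intercept)."""
    X = np.asarray(X, dtype=float)
    vifs = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1.0 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        vifs.append(1.0 / (1.0 - r2))
    return vifs

# Demo: two nearly collinear predictors plus one independent predictor
rng = np.random.default_rng(0)
a = rng.normal(size=500)
X = np.column_stack([a, a + 0.1 * rng.normal(size=500), rng.normal(size=500)])
vifs = vif(X)
```

A VIF near 1 means a predictor is nearly independent of the rest; values above roughly 5-10 flag the kind of redundancy reported here between the depression and anxiety scores.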
Though anxiety and paranoia are highly correlated, paranoia appears more responsible for the group differences in self-deception and the associated model parameters. We performed ANCOVAs using demographics (race, ethnicity, age, gender), psychiatric diagnosis and medication usage, and socioeconomic factors (income, education) as covariates (S2 Table). All effects of paranoia group on ωns were robust to the inclusion of all the covariates, as was the effect of experimental group on initial beliefs.
Discussion
People with high paranoia made more high-confidence self-deceptive responses during challenging perceptual decisions under social influence. They overrode their previous choices to agree with collaborators and defect from competitors. This effect was attenuated by making the partner’s bets more accurate. We fit a computational model that captured how participants estimated and weighted the influence of current and historical sensory data as well as current and historical social inputs. In this framework, self-deception in paranoia was not driven by changes in initial prior weighting of social information (though such priors did distinguish the group working with a collaborator from the group working against a competitor). Rather, the increased self-deception in high paranoia participants was driven by two processes: (1) an underweighting of current sensory inputs relative to the prevailing tendencies from recent trials and (2) an overweighting of the partner’s current bet relative to the history of bet accuracy. Taken together, these data are consistent with self-deception flourishing in high paranoia as a result of a lack of confidence in one’s own perceptual inferences, coupled with an excessive influence of social suggestions (regardless of affiliation). We observed less self-deception when the partner’s bets were more accurate, suggesting that self-deception is particularly likely in paranoid participants when self (non-social) and others (social) are experienced as unreliable sources of information.

Some have argued that motivated reasoning and self-deception contradict Bayesian accounts of belief updating, suggesting instead that biased beliefs are really preferences—things that people desire to be true—and that they are driven by identity (what defines people and their important groups, like political parties) [32].
Others have pushed back, suggesting instead that these biases might be understood in terms of differences in perceived reliability of evidence or evidence sources [33], prior beliefs [33], or deriving utility from beliefs and their consistency [34]. The HGF approach is inherently Bayesian [30,35], since it rests on sequential updating of beliefs according to Bayes’ theorem, where beliefs represent inferences about hidden states of the environment (self, others, and external stimuli) in the form of posterior probability distributions, incorporating estimates of estimation uncertainty and environmental uncertainty [30,35]. Taking this approach, we found that overconfident self-deception and paranoia appear explainable in Bayesian terms: as changes in learning rates and relative weightings of social information, in response to pessimistic estimates of one’s own proficiency in perceptual judgments, particularly under high stimulus ambiguity. This model outperformed a simpler, non-normative heuristic model [31], which neither fit our data nor reproduced our observations in simulation. Group identity drove changes in prior weightings; however, contrary to coalitional accounts of paranoia, we did not see those prior beliefs contributing significantly to self-deception and paranoia in our data. Neural data could further illuminate the issue of social and non-social contributions to belief updating and paranoia. For example, orbitofrontal cortex and amygdala may track non-social belief updating, and dorsomedial and ventromedial prefrontal cortex more socially specific mechanisms [36]. Our work suggests that paranoia may be the purview of the former, rather than the latter, though of course these mechanisms are densely interrelated [37-39].

In experiment 2, we found that decreasing the ambiguity of the social information (increasing the fidelity of the partner bets) was also impactful.
Under social comparison theory, individuals are compelled to improve their performance and minimize discrepancies between their own and others’ performance, generating competitive behavior [40]. As we describe presently, uncertainty can prompt social comparison [40,41]. However, comparison concerns decrease dramatically when uncertainty about one’s ranking relative to others is removed [42]. We contend that increasing the accuracy of partners’ bets in experiment 2 neutralized high-confidence self-deception because it made the discrepancy between participant and partner performance clearer and rendered self-deception less necessary, warranted, or appropriate.

Our work involves online self-report of psychiatric symptoms. It is possible our high-scoring participants were simply responding inattentively, and thus our paranoid participants were not really paranoid but rather disengaged [43]. In the work establishing this concern, inattentive responders yielded depression and anxiety scores near the clinical mean, while our participants scored lower. Furthermore, we think it unlikely that inattentive responding (on tasks or scales) could yield the specific set of findings we report presently; rather, we would expect a more random distribution of ratings across scales and choices across trial types, instead of maximal self-deception during the most ambiguous trials. It is also hard to imagine how inattention would yield increased confidence on self-deceptive trials.

Our modeling work was consistent with self-deception impacting self-esteem and thence overconfidence in high paranoia participants. However, our task did not have a conduit for that overconfidence, in terms of convincing others of one’s insights or abilities [3]. A task with reciprocal exchange between participants would be enlightening. Differing self-deception when confidence is communicated between partners would be consistent with a role for self-deception in deceiving others as well as self [1].
In an advice-giving task, patients with schizophrenia were overconfident in their own advice, particularly those with delusions [44]. Our data suggest this effect might be driven by self-deception secondary to an experience of one’s own perceptual unreliability. Furthermore, since boosting self-esteem (by conditioning positive self-associations) appears to mollify paranoia [45], it ought similarly to defuse self-deception.

Given the debate about self-deception and delusions [46], it will be important to establish whether the same effects are present in people with confirmed delusional beliefs. Recent work on advice giving by people with schizophrenia suggests that patients with delusions are overconfident in their advice [44]. We suggest that our data are consistent with the possibility that delusion (albeit on the extreme end of a continuum of paranoia) might entail self-deception. At the same time, in light of our data, delusion and self-deception may not violate epistemic rationality [47] and might harbor adaptive function [48].
Demographics table for experiment 1.
(PDF)
ANCOVAs for model parameters.
(TIFF)
Perceptual Model Parameters.
(TIFF)
Model Space.
(TIFF)
Family-wise Bayesian model selection for perceptual models.
(TIFF)
Family-wise Bayesian model selection for response models.
(TIFF)
Initial Prior Values for parameters.
(TIFF)
Parameters of interest were correlated with their simulated values obtained from simulation of responses and inversion of the model.
(TIFF)
Fraction of self-deceptive responses using the simulated responses for the winning model (A) and for simulated responses of a normative model (B). The normative model simulations fail to capture the key components of the behavioral data: they predicted too many self-deceptive responses for the extreme images and did not properly predict the differences based on paranoia group.
(TIFF)
Self-deception and confidence-weighted self-deception is different between the two experiments only in the high paranoia group.
A, raw self-deception scores are lower in the high paranoia group with more accurate bets than in the high paranoia group with less accurate bets. There is no difference in the low paranoia group. B, mean confidence on self-deceptive trials decreases in the high paranoia group with higher bet accuracy. *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001.
(TIFF)
Group parameter differences in the two experiments differ.
A, both experiments show a difference in social priors (μ02,s), with the cooperation group having an increased prior compared to the competition group. B, the difference in the variance of the 2nd level belief on the social side (ωs) between paranoia groups disappears when the bet accuracy is higher in experiment 2. C, the difference between paranoia groups in the variance of the 2nd level belief about the image (ωns) is maintained in both experiments. *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001.
(TIFF)
Your revised manuscript is also likely to be sent to reviewers for further evaluation.When you are ready to resubmit, please upload the following:[1] A letter containing a detailed list of your responses to the review comments and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).Important additional instructions are given below your reviewer comments.Please prepare and submit your revised manuscript within 60 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email. Please note that revised manuscripts received after the 60-day due date may require evaluation and peer review similar to newly submitted manuscripts.Thank you again for your submission. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.Sincerely,Marieke Karlijn van Vugt, PhDAssociate EditorPLOS Computational BiologySamuel GershmanDeputy EditorPLOS Computational Biology***********************Reviewer's Responses to QuestionsComments to the Authors:Reviewer #1: This is an excellent and timely paper. The authors set out to investigate the interaction of social context (cooperation vs competition) with perceptual belief updating and trait paranoia. 
To do so they designed an elegant joint perceptione experiment in which participants view visual morphs and make bets as to which morph is dominate, either competing through deception or cooperating through self-deception. The authors then apply this task in a large online cohort, and model responses using an adapted version of the hierarchical gaussian filter, operationalizing the decision process as a dual stream processes in which social and perceptual beliefs are integrated together. This approach reveals interesting differences in choice behavior and underlying computational mechanisms, which are further replicated and extended in a second follow up experiment manipulating choice accuracy. Overall the findings are robust and interesting, the modelling appropriate, and the manuscript extremely well written. I have little to add to this manuscript other than a curiosity about the confidence ratings, which here are largely used to titrate behavioral responses. However, adding metacognitive modelling at this stage would probably bloat an otherwise extremely tight paper, so this is just my curiosity. I have only one major comment for consideration:1. If I understand the model validation steps correctly, the authors do an excellent job of motivating a complex family of models determining the complexity of the winning dual stream model. They also compare this model to a non-bayesian alternative using model comparison, and then conduct a posterior predictive check by correlating simulated data generated by this model to empirically fitted data. My only question is if this final step is sufficient - from the supplementary figure there appears to be some potential deviance between the two parameter steps. Is there a way to quantify this directly? I could imagine for example using a cross-validation procedure, e.g. fitting the models to a subset of the data and then predicting "hold out" data to determine empirical out of sample prediction accuracy. 
However, this may be unnescessary if I have misunderstood some aspect of the model validation procedure.Congratulations to the authors for an excellent contribution - good work.Reviewer #2: Rossi-Goldthorpe RA et al.The authors describe data from a study in which they examined perceptual inference in participants under conditions in which they did or did not get advice from a confederate who was either collaborating or competing with them. Data was analyzed at both the direct behavioral level and with an HGF.I think this study has a lot going for it. The basic task design and the results (as far as I can sort them out) are quite interesting. In general this is also an interesting question of relevance to psychopathology. However, many things are not clearly explained. I had a hard time understanding the task design, the behavioral variables being analyzed and, more fundamentally, the hypotheses and how the overall approach was addressing those. Although I could understand enough to generally follow the thread of the results, I think a much more detailed account of the task, the behavioral data analyzed, and a more clear account of the findings would substantially improve this manuscript.Comments1. The details of the task were unclear, as was the definition of self-deception. For some of the ANOVA models it wasn’t clear what the dependent variable was. For example, “Analysis of variance revealed a main effect of paranoia (high or low), a main effect of group (competition or collaboration) but no paranoia by group interaction for self-deception.” I don’t know what is being analyzed. I think the behavioral data should be shown more clearly. And then the metrics that are derived from that data that serve as dependent variables should be explained clearly and illustrated. Best to do this in the results, but the methods could also use more detail on these points.2. 
It would be useful to show and reference a figure for this when it is stated, “the probability of responding scene followed an s-shaped psychometric curve, indicating that in general, participants were able to categorize the chimeras accurately.”3. For this section, “The difference between groups remained significant when we examined confidence-normalized self-deception score (F(1, 620) = 58.0612, pbonf= 2.8659e-13, 2 184 = 0.0859; Figure 3B, C). We also found that the cooperation group had increased confidence-weighted self-deception (F(1, 620) = 15.0085, pbonf=3.442e-4, 2 186 = 0.02673) – people were more likely to confidently self-deceive to conform to their partners’ bet in the cooperation group relative to defecting from the bet in the competition group. The absence ofgroup by paranoia interaction, suggests that centering in vs out-group membership was not differentially impacted by paranoia (Figure 3D-E).” What is confidence-normalized self-deception score? What is “centering in vs out-group…”?4. The model should also be explained in the results. Some comments on what the variables measure. I know some of this is in the methods, but it would be much easier to give some background on the model and the variables in the results. The technical definitions can be given in the methods. It’s best to present the results and then summarize what they mean in each paragraph. It is also helpful if you can go back to the raw data and compute metrics directly on the raw data that show the effects extracted by the model. Also, I assume the model is mainly for studying learning, but the learning effects were buried and so it was not clear how participants updated belief estimate over trials on the basis of past feedback. 
It is also not clear why there was a comparison to Rescorla-Wagner, since such a model has no way of incorporating ambiguity in the cues, and the RW model is also no really applicable since the cues contain the information about the relevant behavior, whereas RW is more learning associations between values and arbitrary cues. This is not really a meaningful comparison.5. The introduction could use a rewrite. It does not flow well. But more importantly, it does not setup the specific experiment that was done, or why it was done. What are the hypotheses and how does the experiment address these?Reviewer #3: This is an interesting study of how perceptual beliefs can be swayed by the view of a confederate, in subjects with low or high paranoia levels. The authors used an online task in which a chimeric image is shown and a participant must rate it as majority face or house. A confederate then 'bets' on the outcome in a second condition, and the rating is made again, with a bonus for both being correct in the collaborator condition, or for correctly identifying they are incorrect in the competitor condition. 
A Bayesian model was used to evaluate the relative contributions of previous trials and social information to decisions made.The authors found:- Participants were more likely to agree with collaborators' bets and disagree with opponents' bets- This was especially the case for high paranoia participants (but not when bets were more accurate), but no interaction with competition/collaboration was found- The modelling foundthe cooperation group had stronger initial faith in the betlower recency bias (bigger omega_ns) about the image in high paranoia (replicated in another sample with higher bet accuracy)>updating about the accuracy of the bet (bigger omega_s) in high paranoia- Lower recency bias about the image correlated with 'self-deception' - this was stronger in high paranoia- Lower recency bias appeared to relate to paranoia rather than anxiety, although these were correlatedOverall the paper uses a clever task and sophisticated modelling to show that paranoia is associated with less trial-to-trial influence of the image, and more influence of the confederate (in terms of updating about bet accuracy). I do have some concerns about the results, however: the main ones are i) whether this pattern of results could also be explained by inattentive responding? ii) I find some of the statements in the paper rather strong given the evidence - especially the term 'self-deception', and some assertions about paranoia. These are detailed below:p3 - Were the subjects told that the reward-maximising strategy is to judge accurately?Also, describing a response that changes according to the opponent/collaborator's bet as "self-deceptive" doesn't make sense to me. If I adjust my opinions having heard someone else's views, am I deceiving myself? This seems too strong and loaded a term to apply to this effect. For example, on p7, the finding that "self-deception... 
was driven by perceived unreliability of ones' own choices" could better be described as "loss of confidence in oneself increases the influence of others".p4 - The description of the model is quite confusing and scanty. It needs to be much clearer. What is zeta? What is eta? What is w in Figure 2A? Is one of these the recency bias, as that term doesn't appear in Supp Table 3? Omega_ns doesn't appear in any equation that I found - yet it is the parameter behind the key group effect?p7 - Given there was no trial-to-trial dependency in this task, it seems unwarranted to say that a lower recency bias in paranoid individuals might reflect a "lack of trust in one's abilities", given that having no recency bias at all is actually optimal here, if I understand correctly?Also, a persuasive recent preprint by Zorowitz et al (psyarxiv.com/rynhk) showed that inattentive responding can induce spurious results in online studies. Can the authors reassure the reader that this could not explain their results? It seems to me that more inattentive players are likely to show less influence of recent trials, more influence of a confederate, and are more likely to score highly on anxiety/paranoia scales. The fact that no interaction with competition/collaboration is a bit concerning here - an interaction would make the results more specific to paranoia itself. Are any of these effects highly unlikely to be accounted for by inattentive responding?p8 - What exactly would the 'coalitional cognition' account of paranoia predict, and why?p9 - Why is it not the case that high paranoia individuals just have a more uncertain model? Hence self-confidence is lower and ability is diminished, but others' influence over them is stronger? What exactly justifies the added interpretation that overconfidence and 'self-deception' are *protecting* against negative self-image? 
Rather than just inevitable consequences of having a noisier model?p10 - It is odd to perform a multiple regression showing that both paranoia and anxiety significantly correlate with omega_ns, but not to report the actual results in terms of betas, confidence intervals etc. If the regression is abandoned because of multicollinearity issues, the variance inflation factors should be reported to justify this. What was the cut-off for 'high' vs 'low' anxiety and how was it chosen? Also, the authors state "paranoia appears more responsible for the group differences in self-deception and the associated model parameters" - but omega_ns (unless I misunderstand) concerns the recency bias, not self-deception: its relationship with self-deception is indirect. To make the statement above, this analysis should have looked at self-deception, paranoia and anxiety directly?p10 - the first process driving increased self-deception in high paranoia is described as "an underweighting of current sensory inputs relative to the prevailing tendencies from recent trials", but to me a reduced recency bias in that group ought to mean current inputs are *less* influenced by prevailing tendencies?p11 - is it not too much to assert that "we found... paranoia can indeed by explained in Bayesian terms" given that no interaction with condition was found? i.e. the effect of competition was not greater in these subjects? Surely an account of paranoia must explain the particular direction of the effect?Minor pointsp2 - I cannot find the Hagen (2008) reference but does it really show that too much self-deception leads to delusional beliefs? Also the evidence that paranoia protects self-esteem is pretty weak, all told (Murphy et al, 2018, Lancet Psych), so the claims about paranoia causing "direct inflation of self-image" are not realistic.Supp Fig 1 - the correlations are not reported? 
Also the recovered parameters end up squashed into a smaller range, although I suppose this doesn't matter so much if one is only interested in group differences. Also for mu0_2, this should be transformed and the correlation computed in transformed space as the vast majority of the datapoints are in the cloud just above zero.**********Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?The PLOS Data policy requires authors to make all data and code underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data and code should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data or code —e.g. participant privacy or use of data from a third party—those must be specified.Reviewer #1: YesReviewer #2: YesReviewer #3: Yes**********PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.If you choose “no”, your identity will remain anonymous but your review may still be made public.Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.Reviewer #1: Yes: Micah AllenReviewer #2: NoReviewer #3: NoFigure Files:While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, . PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. 
Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org.

Data Requirements:

Please note that, as a condition of publication, PLOS' data policy requires that you make available all data used to draw the conclusions outlined in your manuscript. Data must be deposited in an appropriate repository, included within the body of the manuscript, or uploaded as supporting information. This includes all numerical values that were used to generate graphs, histograms, etc. For an example in PLOS Biology see here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001908#s5.

Reproducibility:

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

25 Aug 2021
Submitted filename: reviewer-comments-response-8-23.docx
Click here for additional data file.

8 Sep 2021

Dear Dr. Corlett,

Thank you very much for submitting your manuscript "Paranoia, Self-Deception, and Overconfidence" for consideration at PLOS Computational Biology. As with all papers reviewed by the journal, your manuscript was reviewed by members of the editorial board and by several independent reviewers. The reviewers appreciated the attention to an important topic. Based on the reviews, we are likely to accept this manuscript for publication, providing that you modify the manuscript according to the review recommendations.

The two reviewers who were not yet satisfied with your manuscript are now a lot happier, but still request a few minor revisions.
I suggest you carefully look at these suggestions to clarify the writing even further. Good luck!

Please prepare and submit your revised manuscript within 30 days. If you anticipate any delay, please let us know the expected resubmission date by replying to this email.

When you are ready to resubmit, please upload the following:

[1] A letter containing a detailed list of your responses to all review comments, and a description of the changes you have made in the manuscript. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

[2] Two versions of the revised manuscript: one with either highlights or tracked changes denoting where the text has been changed; the other a clean version (uploaded as the manuscript file).

Important additional instructions are given below your reviewer comments.

Thank you again for your submission to our journal. We hope that our editorial process has been constructive so far, and we welcome your feedback at any time. Please don't hesitate to contact us if you have any questions or comments.

Sincerely,

Marieke Karlijn van Vugt, PhD
Associate Editor
PLOS Computational Biology

Samuel Gershman
Deputy Editor
PLOS Computational Biology

***********************

A link appears below if there are any accompanying review attachments. If you believe any reviews to be missing, please contact ploscompbiol@plos.org immediately: [LINK]

Reviewer's Responses to Questions

Comments to the Authors:

Reviewer #2: The authors have substantially improved the manuscript. The intro could still be written a bit more fluently, particularly the first paragraph. But it's better. I am still not clear on a few details of the experiment. Specifically, I assume that in C1, there is no partner response?
The participant is making choices, but they do not see a choice made by a partner? This should be made clear. I assume there was no partner, but then I did not understand this sentence: "In experiment 1, the bets were correctly exactly 50% of the time." Is this referring to the partner's bets in C2? Or what is this referring to?

The payoff matrix should be stated in the methods, with a reference to Figure 1B. This is important. Also, in the 50% condition, it is not clear how the subject should respond. Specifically, I don't think the following comment is clear for the 50% condition: "Crucially, the reward maximizing strategy is to classify the images correctly." This would only be true if the partner really was guessing and therefore correct at chance level. So, the subject has to assume this. If the subject assumes the partner is above chance, it would make sense to respond with or against the partner depending on the condition. It is possible then that paranoia changes the subject's assessment of whether the partner is correct or not, and the subject is behaving optimally, as opposed to in a self-deceptive way.

It is possible that the experimental design accounts for this possibility, but if it does, it was not clear to me. Please clarify why the subjects should not respond against the partner in the competition condition and with the partner in the cooperation condition, when the stimulus is ambiguous (50%). Please also clarify why paranoia could not also be affecting the subject's assessment of the partner's accuracy.

Reviewer #3: Thanks to the authors for responding to my comments - I have only some minor remaining points that don't require re-review.

Regarding the concept of 'self-deception' in this experiment: I had not grasped that the partner is betting on an image *that they have not seen either* - can this be made as clear as possible in the methods?
I had assumed the partner had seen it but the player had not.

The addition to the discussion: "Paranoia was found to be unrelated to betrayal aversion – when one has a higher aversion to risky situations where outcomes are contingent upon social factors compared to non-social factors which does support the coalitional cognition model". Is this correct, or do the authors mean this does NOT support the coalitional model? I am not sure why absence of betrayal aversion supports that model?

About the parameter recovery - I hope it does not become common practice to only report p-values: these will be strongly affected by the number of simulations conducted and so are a bit meaningless. The correlation is really the key measure.

**********

Have the authors made all data and (if applicable) computational code underlying the findings in their manuscript fully available?

Reviewer #2: Yes
Reviewer #3: Yes

**********

Do you want your identity to be public for this peer review?
Reviewer #2: No
Reviewer #3: No

References:

Review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references.
Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

9 Sep 2021
Submitted filename: reviewer-comments-9-8-updated.docx
Click here for additional data file.

15 Sep 2021

Dear Dr. Corlett,

We are pleased to inform you that your manuscript 'Paranoia, Self-Deception, and Overconfidence' has been provisionally accepted for publication in PLOS Computational Biology.

Before your manuscript can be formally accepted, you will need to complete some formatting changes, which you will receive in a follow-up email. A member of our team will be in touch with a set of requests. Please note that your manuscript will not be scheduled for publication until you have made the required changes, so a swift response is appreciated.

IMPORTANT: The editorial review process is now complete. PLOS will only permit corrections to spelling, formatting or significant scientific errors from this point onwards. Requests for major changes, or any which affect the scientific understanding of your work, will cause delays to the publication date of your manuscript.

Should you, your institution's press office or the journal office choose to press release your paper, you will automatically be opted out of early publication. We ask that you notify us now if you or your institution is planning to press release the article. All press must be co-ordinated with PLOS.

Thank you again for supporting Open Access publishing; we are looking forward to publishing your work in PLOS Computational Biology.

Best regards,

Marieke Karlijn van Vugt, PhD
Associate Editor
PLOS Computational Biology

Samuel Gershman
Deputy Editor
PLOS Computational Biology

***********************************************************

I think you have sufficiently addressed the reviewers' comments.
Congratulations on the acceptance of your paper.

30 Sep 2021

PCOMPBIOL-D-21-00920R2
Paranoia, Self-Deception, and Overconfidence

Dear Dr Corlett,

I am pleased to inform you that your manuscript has been formally accepted for publication in PLOS Computational Biology. Your manuscript is now with our production department, and you will be notified of the publication date in due course.

The corresponding author will soon be receiving a typeset proof for review, to ensure errors have not been introduced during production. Please review the PDF proof of your manuscript carefully, as this is the last chance to correct any errors. Please note that major changes, or those which affect the scientific understanding of the work, will likely cause delays to the publication date of your manuscript.

Soon after your final files are uploaded, unless you have opted out, the early version of your manuscript will be published online. The date of the early version will be your article's publication date. The final article will be published to the same URL, and all versions of the paper will be accessible to readers.

Thank you again for supporting PLOS Computational Biology and open-access publishing. We are looking forward to publishing your work!

With kind regards,

Andrea Szabo
PLOS Computational Biology | Carlyle House, Carlyle Road, Cambridge CB4 3DN | United Kingdom
ploscompbiol@plos.org | Phone +44 (0) 1223-442824 | ploscompbiol.org | @PLOSCompBiol
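On the parameter-recovery points raised in review (reporting the correlation rather than a p-value, and computing it in transformed space when a parameter such as mu0_2 has most of its mass just above zero): a minimal sketch of the comparison. The data are simulated under assumed lognormal "true" values with multiplicative recovery noise; they are not the study's actual fits.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical true vs recovered parameter values: most datapoints sit in a
# cloud just above zero, with a handful of much larger values.
true = rng.lognormal(mean=-2.0, sigma=1.5, size=200)
recovered = true * rng.lognormal(mean=0.0, sigma=0.5, size=200)

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    return float(np.corrcoef(np.asarray(x), np.asarray(y))[0, 1])

r_raw = pearson(true, recovered)                    # raw-space correlation
r_log = pearson(np.log(true), np.log(recovered))    # transformed-space correlation
print(r_raw, r_log)
```

In raw space the few large parameter values act as high-leverage points and dominate the estimate; log-transforming first gives the near-zero cloud its proper weight, which is the reviewer's recommendation for mu0_2.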