
Contrasting effects of information sharing on common-pool resource extraction behavior: Experimental findings.

Dimitri Dubois, Stefano Farolfi, Phu Nguyen-Van, Juliette Rouchier.

Abstract

This paper experimentally investigates the impact of different information sharing mechanisms in a common-pool resource game, with a view to finding a mechanism that is both efficient and inexpensive for the managing agency. More precisely, we compare the observed extraction levels produced as a result of three mechanisms: a mandatory information sharing mechanism and two voluntary information sharing mechanisms that differ in the degree of freedom given to the players. Our main result is that a voluntary information sharing mechanism could help in reaching a lower average extraction level than that observed with the mandatory mechanism.


Year: 2020    PMID: 33027301    PMCID: PMC7540899    DOI: 10.1371/journal.pone.0240212

Source DB: PubMed    Journal: PLoS One    ISSN: 1932-6203    Impact factor: 3.240


Introduction

In economics, goods are usually classified according to two dimensions: excludability and rivalry. A good is excludable if a person can be excluded from its consumption and is rival if its consumption by one person reduces its consumption by another. These two dimensions make it possible to classify goods into four categories: private goods (excludable and rival), club goods (excludable, but not rival), common goods (non-excludable, but rival) and public goods (non-excludable and non-rival). Public goods and common goods have the particularity of placing individual and collective interests in apparent opposition as well as creating tension in the choice of action, which is commonly referred to as a social dilemma. Indeed, rivalry in extractive common goods implies that agents may think that they should consume as much of the good as possible, fearing that the others leave nothing. Non-rivalry in public goods implies that agents have an incentive to benefit from the goods without contributing to their production. The phenomenon of over-exploitation of common goods was highlighted by Hardin in 1968 [1] with his famous "tragedy of the commons". However, it has been shown that in some real-life settings, complex property rights and institutions could enable the emergence of trust and make coordination possible, so as to attain self-governance of common goods and sustainability of the resource [2-4]. On the other hand, laboratory and field investigations on common-pool resources have shown that social information, i.e. information available to participants about the actions of the others, is a key element for managing the resource. This article contributes to the existing literature on this topic. In the context of a public good game, [5] conducted a laboratory experiment in which they tested three treatments that differed in the information available to players. 
In a baseline treatment, players had no information about the actions of the other members of their group, at either the aggregate or the individual level. In a second treatment, the players had information about the total contribution made by the group to the public good. Finally, in a third treatment, the players were provided with information about both the total contribution of the group and the (anonymous) individual contributions. The data revealed significantly higher levels of contribution in the last treatment. The authors explained that, in a situation of complete information, it is not in the interest of individuals to forgo participation in the production of the public good: even if a player has only a small influence over the contributions made by the other members of the group, visible opportunistic behavior would inevitably lead to lower production of the good. Signaling cooperative behavior becomes strategic in this case. A few years later, however, [6] found that explicit signaling through announcements did not improve the average level of contribution in the game; nor did a reputation mechanism, i.e. an individual history of past behavior in the game visible to everyone in the group. In the context of a common-pool resource game, which is the standard representation of a common good, [7, 8] tested two treatments in the laboratory. In the first treatment individuals had aggregated information (total group extraction), and in the second treatment they had additional information on the individual choices and payoffs of their group members. The authors observed that in the complete information treatment individuals extracted more and thus moved away from the socially optimal solution.
The main reason was that when complete information was available, the individuals had a tendency to imitate the best performance (in addition to imitating the average level of extraction in the group), which in this kind of game means the least cooperative individual. These studies clearly show that social information plays a very important role in social dilemmas (see also [9]). The fact that individual decisions are public in the group, even if anonymous, is interpreted by some as the possibility of sending a signal of cooperation. This can be beneficial to the group, as it increases the production of the public good or improves the management of the common-pool resource. However, this information can also have harmful effects when it encourages the imitation of less cooperative choices in the group. Information about the actions taken by all the players in the group is therefore not necessarily beneficial. In this article we set out to establish a system of voluntary information sharing that keeps the beneficial effects of the dissemination of information (the signal) and mitigates its negative effects (imitation of the most selfish). Our hypothesis is that a voluntary information sharing (or disclosure) mechanism allows the creation of selective social information, or upward social information [10-12], which favors cooperation between individuals. We set up a laboratory experiment using a common-pool resource game to measure the extent to which a voluntary sharing mechanism makes it possible to mitigate over-exploitation of the resource compared to a system in which the collection and dissemination of information, at an individual level, is automatic and mandatory. The fact that individual extractions are automatically and mandatorily made public may have a negative effect on the motivations of the individual, such as crowding-out effects [13] or boomerang effects [14].
Furthermore, this implies the collection and dissemination of information outside the individual's control; the individual may then feel watched and not free to make their own choices [15]. Letting the individual choose to make their level of extraction public is likely to make them feel responsible and give them the impression that they can act on the collective norm, by voluntarily expressing what they consider to be appropriate behavior [16]. They may also feel satisfaction in showing their cooperation in the social dilemma (the warm-glow effect [17]) and in acting in a prosocial manner that benefits other people [18]. If the most cooperative individuals voluntarily share their extraction decision with the other members of their group, the social information provided to the group is likely to favor the emergence of a cooperative descriptive norm [19], as observed by [10], as well as triggering conditionally cooperative behavior [20]. We therefore believe that a voluntary, rather than mandatory, information sharing mechanism leads to a higher level of cooperation within a group. In a common-pool resource game, this means lower levels of resource extraction. In addition, if there is a cost for collecting and disseminating social information, it may be useful to find a mechanism that is both more efficient and less costly [21]. Nowadays the exchange of information is greatly facilitated by connected objects and social networks. Smartphones are constantly asking us if we want to share our information and activities (sports activities, photos, videos, news, etc.). It is simple to share information within a group or community, often at the click of a button. The use of common-pool resources has also been transformed by new techniques that allow users to share and disclose real-time information about their consumption. In the electricity sector, for instance, many countries have deployed smart grids to place data flow and information management at the heart of improving supply efficiency.
In the case of domestic water use, particularly in Western countries, remote metering enables water services and providers to improve the efficiency of water supplies. More real-time information about consumption is also provided to water users, who can then adjust their habits and detect possible leaks. In France, 50% of domestic water users should have a connected remote meter within 3 to 4 years [22]. These systems do not yet provide water consumers with information on the consumption of others, but initiatives to establish social standards are underway. Benchmarks already exist based on aggregations of macro-data (city, region, and nation). In California, the Sustainable Groundwater Management Act, adopted by the State in 2014, obliges groundwater sustainability agencies (GSAs) to draw up groundwater sustainability plans, as well as requiring users located outside a GSA's management area to report their extraction levels to the State Water Board. Conversely, in countries like Tunisia, Algeria and Morocco, private water wells are proliferating rapidly [23], and information on the extraction of water from these wells is hidden by irrigators to avoid paying fees or penalties. Community management of aquifers has been recognized as one of the possible instruments for groundwater management [24], but it can only be achieved if all users commit to a cooperative scheme and share information on their private consumption. Even so, both management costs and the attitudes of the users depend on the policy implemented. Several voluntary information disclosure initiatives have started to develop, such as the Carbon Disclosure Program in the field of forest degradation. This program requires its members to disclose how they address forest risk commodities in their supply chain [25].
A voluntary information sharing mechanism can be an important tool for regulators, as it does not require that the collection of private information be organized by adding technical tools, therefore making it much less expensive. Until now, though, its effectiveness has yet to be proven. The purpose of this article is to test, in the laboratory, the effects of two voluntary information sharing mechanisms in the context of decentralized management of a common resource. Testing the mechanism in the laboratory is a first step [26, 27]; a field experiment will be necessary in a second step to test its external validity [28]. We experimentally test two treatments based on voluntary information sharing and compare the observations to a reference treatment in which information is collected and disclosed in an automatic and mandatory manner. In the first treatment, after making their extraction decision, the players have to decide whether or not they wish to make this extraction decision public. In the second treatment, players have to decide whether or not they wish to make their extraction level public, and additionally decide the amount of extraction they wish to publicize. [21] experimentally tested two voluntary sharing mechanisms in a public good game. Her first treatment was identical to our first one: after choosing their level of contribution to the public good, the players had to decide whether or not they wished their contribution to be made public to the group. In her second treatment, players decided whether or not to publicize their contribution decision before making it. [21] showed that these two treatments led to higher levels of contribution than a treatment in which contributions were automatically and mandatorily made public. However, she did not observe any difference between the two voluntary sharing systems. The contribution of our article is threefold.
Firstly, we contribute to the literature on the role and impact of social information in social dilemmas, more specifically in the context of a common-pool resource. Secondly, we confirm that the results of [21] can be transposed into the context of the common-pool resource game. Thirdly, we shed light on the effects of a new voluntary sharing mechanism, which gives more freedom to individuals. Our first result is confirmation that a mechanism of voluntary sharing of information on the chosen level of extraction reduces the over-exploitation of the resource compared to automatic and mandatory sharing. Our second result is that a voluntary information sharing mechanism in which an individual decides the value of the information disclosed is just as effective as a mechanism that does not allow this freedom. It does, however, introduce strategic behavior on the part of the least cooperative individuals, which, in the long run, may lessen the positive effect of voluntary sharing.

Materials and methods

Experimental design

In a Common Pool Resource game (hereafter CPR), each player i in a group of N players can extract from yi = 0 to yi = E tokens from a common resource that contains N×E tokens. E was equal to 10 in our experiment. For each extracted token, player i earned 3 ECUs (Experimental Currency Units), but created a negative externality for each of the other group members. In our experimental game, the payoff function of player i was given by πi(yi, Y) = 3yi − 0.01875Y², where Y = Σi yi is the total amount extracted by the group and yi is the individual amount extracted by player i. To avoid a corner solution (zero-unit extraction as the socially optimal solution), we adapted an existing model [29] by transforming the linear payoff function into a quadratic one. Our game had features that ensured a social dilemma in which individual and collective interests were divergent. These features were: (i) whatever the amount extracted by the other group members, player i had a higher payoff when extracting the maximum (10), and whatever the amount extracted by i, the payoff was higher when the other group members extracted nothing from the common resource, the dominant strategy therefore being to extract the maximum possible (10); (ii) the collective payoff, computed as the sum of individual payoffs, was maximized when the total amount extracted by the group was 20 tokens, with the symmetric solution being that each player extracted exactly 5 tokens. We tested two treatments with voluntary information sharing mechanisms in comparison to a reference treatment with automatic and mandatory information sharing. More precisely, in this baseline treatment, called MD for Mandatory Disclosure, the player's extraction was automatically and mandatorily displayed on the summary screen of each member of their group. In the two voluntary sharing treatments, the player's extraction was displayed on the summary screen of each other member of their group only if they had previously chosen to share this information.
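The two features above can be checked numerically. The sketch below (a minimal Python illustration, not part of the experimental software) verifies that extracting 10 is a dominant strategy and that the collective payoff peaks at a total extraction of 20 tokens:

```python
# Payoff of player i in the CPR game: 3 ECUs per extracted token,
# minus the quadratic externality of total group extraction Y.
def payoff(y_i, Y):
    return 3 * y_i - 0.01875 * Y ** 2

# (i) Dominant strategy: whatever the other three players extract,
# player i's payoff is maximized at y_i = 10.
for others in range(0, 31):  # total extraction of the 3 other players
    best = max(range(11), key=lambda y: payoff(y, y + others))
    assert best == 10

# (ii) Social optimum: the collective payoff 3Y - 0.075*Y^2 peaks at Y = 20,
# i.e. 5 tokens per player in the symmetric solution.
def collective(Y):
    return sum(payoff(Y / 4, Y) for _ in range(4))

assert max(range(41), key=collective) == 20
```

At the symmetric optimum each player earns 7.5 ECUs per round, whereas full extraction by everyone (Y = 40) yields a payoff of 0.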
The difference between the two voluntary sharing treatments was that in one treatment, called FD for Free Disclosure, the player could also decide the value to be displayed on the screens of the other members of their group, while in the other treatment, called VD for Voluntary Disclosure, the player did not have this option. Therefore, in VD, if the player had decided to make their decision public, the value seen by the other group members was the player's actual extraction. In FD, this was not necessarily the case, as it could have been the actual extraction or a different value altogether, since the player was free to decide what value to display. Table 1 summarizes the treatments.
Table 1

Summary view of the treatments.

                                               MD    VD    FD
Voluntary sharing                              No    Yes   Yes
Freedom to choose the value to be disclosed    No    No    Yes
The game used in all three treatments was the one described in the first paragraph of this section, with fixed groups of four players formed randomly at the beginning of the game. The game consisted of 20 rounds, with each round divided into three stages in the two voluntary sharing treatments (extraction decision, disclosure decision and summary) and two stages in the MD treatment (extraction decision and summary). In the extraction decision stage, the player had to decide how many tokens to extract from the collective account, an integer between 0 and 10. In the disclosure decision stage (only in treatments VD and FD, skipped in treatment MD because the player did not have this choice), the player had to decide whether or not they wanted their extraction to be displayed on the summary screen of their group members, with the additional option in the FD treatment to enter the value to be displayed. In the summary stage, the screen displayed, for the current round: the player's extraction level; the total amount extracted by their group; and their payoff for the round. In the MD treatment the players were also informed of the individual extractions of all members of their group. In the VD treatment the players were informed of the individual extractions of the group members who had chosen to disclose this information. Likewise in the FD treatment, but here the players knew that the displayed values had been entered by the group members themselves. From each screen the players had access to the history of the previous rounds. The history screen included, for each past round: the extraction of the player; the total extraction of the group; the individual extractions of the members of the group (all or some of them); the payoff of the round; and the cumulative payoff since the first round. Two important elements need to be specified.
Firstly, in the disclosure decision stage, when deciding whether or not to make their extraction decision public (and its value in the FD treatment), the players had information on the total quantity extracted by their group in the extraction stage. Secondly, on the summary screen, regardless of the treatment, the individual extractions were anonymous, and this was common knowledge: there was no identifier associated with the displayed values, and it was specified that these values were displayed in a random order each round. Table 2 gives a summary view of the different stages depending on the treatment. The instructions of the experiment are available at https://dx.doi.org/10.17504/protocols.io.bgxzjxp6.
Table 2

Stages of one round, in the three treatments.

Extraction decision stage (all treatments): the player decided how much they extracted from the collective account.
Disclosure decision stage: skipped in MD (the player had no choice); in VD, the player decided whether their extraction would be displayed on the round summary screen of their group members; in FD, the player decided whether their extraction would be displayed and, if so, entered the value that would be displayed.
Summary stage (all treatments): round summary.
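The disclosure logic of Table 2 can be condensed into a few lines of code (a hypothetical sketch with illustrative names, not the software actually used in the laboratory): what a player's group members see on the summary screen depends on the treatment, on the player's sharing choice, and, in FD, on the value the player chose to declare.

```python
def displayed_value(treatment, extraction, share=False, declared=None):
    """What the other group members see for this player (None = hidden)."""
    if treatment == "MD":   # mandatory: actual extraction, always displayed
        return extraction
    if treatment == "VD":   # voluntary: actual extraction, only if shared
        return extraction if share else None
    if treatment == "FD":   # free: the declared value, only if shared
        return declared if share else None
    raise ValueError(f"unknown treatment: {treatment}")

assert displayed_value("MD", 8) == 8                           # always public
assert displayed_value("VD", 8, share=False) is None           # kept private
assert displayed_value("FD", 8, share=True, declared=5) == 5   # possibly false signal
```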
The experiment took place at the Montpellier Laboratory for Experimental Economics (LEEM) in France. The experimental protocol was presented to the LEEM working group, which ensured that the protocol was in compliance with the rules of experimental economics and the corresponding ethical rules, and which gave its agreement for the experiment to be carried out on the LEEM platform. We organized six sessions (two per treatment), each with 16 or 20 participants. A total of 104 subjects took part in the experiment. The mean age of the participants was 26 years (std. 6.58, median age 24 years), with 56.73% women and 43.27% men. The participants were students from various disciplines of the University of Montpellier, randomly selected from a pool of nearly 3,000 volunteers handled with the Online Recruitment Software for Economic Experiments (ORSEE, [30]). We made sure that none of the participants had previously participated in a common-pool resource game. The experiment was computerized. Each subject was in an individual box with partitions around them to ensure anonymity of decisions. The sessions lasted approximately one and a half hours, including initial instructions and payments. The average payment in the game was €13.

Conjectures

The first conjecture states that even if there is no direct reward associated with sharing, many players voluntarily share their extraction decision. By no direct reward associated with sharing, we mean that there was no monetary gain in the game associated with the action of sharing one's extraction. The payoffs of the game depended solely on the extraction choices of the player and the members of their group.

Conjecture 1: Individuals voluntarily disclose extractions with significant frequency

There are several possible motivations behind a player's choice to disclose their decision, as explained by [21]. These motivations include the willingness to signal an intention to cooperate [5]; a warm-glow effect [17]; a feeling about the right level of extraction in the game, i.e. the extraction that is socially appropriate [16]; or the descriptive norm that should apply in the group [19]. In the Voluntary Disclosure treatment, since the values disclosed are the actual extractions, non-cooperative individuals have no interest in making their extractions public if they think they might negatively influence the choices of others. In addition, some studies have shown that individuals who do not behave in a pro-social manner may feel guilty or ashamed when their behavior is observable by others ([31-33]; see also [34] for a theoretical model). Therefore, we expect that the voluntary sharing mechanism of the VD treatment will favor the emergence of upward social information, i.e. cooperative individuals disclose their extractions and non-cooperative individuals keep them private. To corroborate this, we should observe in the VD treatment that the disclosed extractions are lower than the non-disclosed ones (Conjecture 2.1). Given the arguments underpinning Conjectures 1 and 2.1, there are several reasons to believe that extractions in the VD treatment will be lower than extractions in the MD treatment (Conjecture 2.2). First, we know from several studies that almost 50% of individuals are conditional cooperators [20, 35-37]. If these conditional cooperators observe cooperative decisions, they are likely to make cooperative decisions as well. Second, it has been shown that social information favors the convergence of decisions towards observed decisions [10-12, 38, 39], and that this imitation dynamic lies behind the formation of social norms [19, 40].
Conjecture 2: In the Voluntary Disclosure treatment:
2.1 the disclosed extractions are lower than the non-disclosed ones;
2.2 the average extractions are lower than in the Mandatory Disclosure treatment.

The effect of the Free Disclosure mechanism is not so straightforward. On the one hand, cooperative players are supposed to consent to make their decision public and to disclose the amount actually extracted, while non-cooperative players are supposed to prefer to hide their extraction (as in the VD mechanism). As a result, the disclosed extractions should be lower than the non-disclosed ones (Conjecture 3.1). On the other hand, some of the cooperative players may use the freedom to choose the disclosed value to signal the socially optimal solution, even if their actual extraction is a little higher, in an attempt not to lose too much money while the (symmetric) socially optimal solution is not achieved. In addition, non-cooperative players can hide behind a false declared extraction, lower than the actual one, to benefit from the possible influence of this signal on the other members of the group (Conjecture 3.2). [41] shows that for some people, the concern over appearing honest and cooperative may outweigh the desire to actually be honest and cooperative: these people prefer lying to appearing selfish. Since this possibility is common knowledge, it can call into question the self-selection mechanism induced by voluntary sharing mentioned in the argumentation of Conjecture 2.1. If this strategic behavior is detected, it can undermine trust in the group, prevent the formation of a group norm (or norm compliance), and cause conditional cooperators to reduce their cooperation. Overall, the Free Disclosure mechanism is likely to create noise in the disclosed social information and thus cause difficulties for the group in managing the common-pool resource.
As it stands, we are not able to predict the outcome of the Free Disclosure treatment with certainty, as it partly depends on the composition of the group (the number of unconditional cooperators, conditional cooperators, and non-cooperators). We nevertheless expect that the average extractions will be lower than in the Mandatory Disclosure treatment (Conjecture 3.3), because on average only one third of people are non-cooperative (free-riders, [20]), and probably not all of them are willing to lie to gain more on the backs of others.

Conjecture 3: In the Free Disclosure treatment:
3.1 the disclosed extractions are lower than the non-disclosed extractions;
3.2 the disclosed extractions are lower than the actual extractions;
3.3 the average extractions are lower than in the Mandatory Disclosure treatment.

Results

When voluntary disclosure was offered to players (VD and FD treatments), most of them decided to make their decision public, as shown in Fig 1, in support of our first conjecture. More specifically, in the VD treatment, more than 30% of players chose to disclose their extraction decision 100% of the time. This percentage dropped to 16% in the FD treatment but, as can be seen in the figure, between 5 and 15% of players revealed their decision in at least 10 periods out of 20 (50% of the time). The graph at the bottom of the figure shows that the frequency of information sharing was relatively constant over time (between 60% and 85%), with a decrease, however, in the VD treatment that was not observed in the FD treatment.
Fig 1

Frequency of voluntary disclosure.

Treatment effect

Table 3 provides statistics on the average level of extraction depending on the treatment, and Fig 2 displays the average extraction trends for the three treatments. From the first round, without any prior information about the behavior of the other members of the group, the VD treatment stood out from the MD and FD treatments with a lower average extraction level (7.31 vs 8.06 and 8.03 respectively). This initial effect persisted throughout the game, even though, with repetition, the three treatments converged towards the choice of the dominant strategy, as is often the case in experimental social dilemma games [42, 43]. We observe in Fig 2 that the average extraction in the MD treatment was almost always higher than in the other two treatments. Non-parametric tests (Wilcoxon rank-sum tests) comparing the average extractions of the three treatments confirmed that the averages in the MD treatment were significantly higher than in the VD treatment (rank-sum test statistic = 4.193, p-value < .001) and also higher than in the FD treatment (rank-sum test statistic = 3.111, p-value = .002), supporting Conjectures 2.2 and 3.3. On the other hand, the two voluntary sharing treatments were not significantly different from each other (rank-sum test statistic = -1.258, p-value = .208).
Table 3

Summary statistics.

Average extraction (std)

Treatment   # Groups   Round 1       Rounds 1-10   Rounds 11-20   Rounds 1-20
MD          8          8.06 (2.20)   9.00 (1.83)   9.45 (1.46)    9.22 (1.67)
VD          9          7.31 (2.66)   7.84 (2.55)   8.86 (2.31)    8.35 (2.49)
FD          9          8.03 (2.43)   8.26 (2.40)   9.06 (1.77)    8.66 (2.14)
Fig 2

Evolution of average extraction per treatment.
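The treatment comparisons above rely on the Wilcoxon rank-sum (Mann-Whitney) test applied to group-level average extractions. The sketch below is a minimal, self-contained illustration of the test's normal-approximation statistic; the group averages used here are invented for the example, not the experimental observations:

```python
import math

def rank_sum_z(x, y):
    """Normal-approximation z statistic of the Wilcoxon rank-sum test
    (midranks for ties; no tie correction of the variance)."""
    combined = sorted(x + y)
    def midrank(v):
        first = combined.index(v)           # 0-based first position of v
        count = combined.count(v)
        return first + 1 + (count - 1) / 2  # average 1-based rank over ties
    w = sum(midrank(v) for v in x)          # rank sum of the first sample
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    return (w - mean) / math.sqrt(var)

# Synthetic group averages: 8 "MD-like" groups vs 9 "VD-like" groups.
md_like = [9.5, 9.1, 8.8, 9.4, 9.6, 9.0, 9.3, 9.1]
vd_like = [8.9, 8.0, 7.5, 8.6, 9.2, 8.1, 8.4, 8.7, 7.8]
z = rank_sum_z(md_like, vd_like)
p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
assert z > 0 and p < 0.01              # higher extractions in the MD-like groups
```

The same computation is available off the shelf as `scipy.stats.ranksums`.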

To further analyze the data, we consider the following econometric model. Let yit be the extraction of player i in round t. This amount is both left- and right-censored, i.e. 0 ≤ yit ≤ 10. We have a dynamic panel data model: yit = ρ yi,t−1 + xit′β + μi + εit, i = 1, 2, ..., N; t = 1, 2, ..., T, where yi,t−1 is the extraction of player i in the previous round and xit corresponds to the whole set of explanatory variables, including both time-variant variables (total extraction of player i's group in t−1; decision-making time; information-related variables such as a dummy for information sharing and the number of individuals who disclosed their decision in the previous round) and time-invariant variables (treatment dummy variables). The main concern of our model is that during the experiment players can learn about the decision process. We think that past individual and group decisions can have an impact on the individual's current decision. This corresponds to the concepts of state dependence, persistence in individual decisions over time, and learning effects often highlighted in the literature. It is recognized that the dynamic nature of the model is related to the well-known initial conditions problem, leading to the inconsistency of traditional estimators in panel data econometrics (see for example [44, 45]). Note that the regression error term is composed of two parts: an idiosyncratic error εit and an individual-specific effect μi. Following [45], the initial conditions problem can be fixed by specifying a more general model where the μi are defined as correlated random effects with the following assumption: μi | yi1, zi ~ N(α0 + α1 yi1 + zi′γ, σα²). This assumption appears to be general enough, as it suggests that an individual-specific effect depends not only on the initial extracted amount yi1, but also on the whole set of values of the explanatory variables (zi ≡ (xi1, ..., xiT)).
The model with this assumption therefore corresponds to a dynamic Tobit model for panel data with correlated random effects (CRE). It follows that the whole set of explanatory variables of our model includes: the lagged individual extraction yi,t−1; the control variables included in xit; the initial individual decision yi1; and the set of auxiliary regressors zi. A comparison between nonlinear dynamic models with correlated random effects and their fixed-effects counterparts was beyond the scope of this study (see [45, 46], among others, for more details on this issue). However, we can list at least three advantages of the CRE approach. Firstly, it specifies a more general distribution for the individual effects, which can be correlated with the regressors. Secondly, this approach makes it possible to calculate average partial (or marginal) effects. Finally, the endogeneity of some regressors can be conveniently handled by using the control function approach of [46]. Furthermore, estimation of the CRE dynamic Tobit model, compared to the dynamic Tobit model with standard random effects, implies two additional sets of variables: the initial decision (yi1) and a set of auxiliary variables (zi). A likelihood-ratio (LR) test was performed to compare the two models. The null hypothesis corresponded to α1 = γ = 0. For the whole sample (all treatments included), the test statistic was 275.93 and the p-value of the chi-squared distribution with 58 degrees of freedom was close to 0, leading to the rejection of the standard random effects model in favor of the dynamic Tobit model with correlated random effects. This test shows the importance of the initial observation problem, which has to be controlled for. The significance (at the 10% level) of the coefficient α1 (Table 4) provides an illustration of this result, revealing the presence of an anchoring effect (although not as strong) due to the first decision, as underlined by [47]. Table 4 presents the estimation results.
We compared our CRE dynamic Tobit model to the static Tobit model with standard random effects using a likelihood-ratio χ² test. The results showed unambiguously that the static model was dominated by our model for the whole sample. We also ran the CRE dynamic Tobit model without control variables ("Decision-making time" and "Time trend"). The signs of the coefficients of our main variables remained unchanged, and the likelihood-ratio test (χ² distributed under the null) showed that our full model was strongly preferred. We report the estimated coefficients and the corresponding partial effects of the explanatory variables on the expected value of individual extraction, given that it is censored at 0 and 10. Denoting the latent variable by y*_it, this quantity is E(y_it) = Pr(0 < y*_it < 10) · E(y*_it | 0 < y*_it < 10) + 10 · Pr(y*_it ≥ 10), where E(y*_it | 0 < y*_it < 10) is the expected value of the latent variable truncated at 0 and 10 (left-censored observations contribute 0 to the expectation). The estimated partial effects are globally consistent with the estimated coefficients, as the signs and significance levels are very similar. We find that both treatments in which disclosure of decisions is voluntary, namely VD and FD, have a significant negative impact on the amount extracted in the game (the two coefficients, however, are not different from each other: χ² = 0.10, p-value = 0.755). This confirms what can be seen in Fig 2, as well as the non-parametric tests reported above. The estimation further shows that individual decisions are strongly related to what players observed from the group during the previous round and that there is a natural tendency towards higher extraction as time elapses. Estimates also reveal that extracting less from the common resource seems to rest on a more cognitive decision-making process, since the shorter the decision time, the higher the extraction [48].
This is consistent with [49], who found that faster subjects more often choose the option with the highest payoff for themselves. Finally, the estimation shows that, all else being equal, the first extraction decision in the game matters. This decision was taken without any prior knowledge of the group's behavior, as it took place just after the instructions were read. [47] also observed this phenomenon. Hence, even if a mechanism tries to influence the dynamics of the individual's decision process, the individual's initial intention remains a strong anchor [50].
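Under normality of the latent variable, the censored expectation used for the partial effects has a closed form; a minimal sketch with illustrative values (not fitted estimates):

```python
import numpy as np
from scipy.stats import norm

def censored_mean(mu, sigma, lo=0.0, hi=10.0):
    """E(y) for y = clip(y*, lo, hi), with latent y* ~ N(mu, sigma^2):
    probability mass at each bound plus the doubly truncated middle part."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    p_mid = norm.cdf(b) - norm.cdf(a)
    return (lo * norm.cdf(a)                        # pile-up at the lower bound
            + hi * norm.sf(b)                       # pile-up at the upper bound
            + mu * p_mid                            # truncated part: mean ...
            + sigma * (norm.pdf(a) - norm.pdf(b)))  # ... plus Mills correction

# sanity check against brute-force simulation (illustrative mu and sigma)
rng = np.random.default_rng(0)
draws = np.clip(rng.normal(7.0, 3.0, size=200_000), 0.0, 10.0)
print(censored_mean(7.0, 3.0), draws.mean())  # the two should be close
```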
Table 4

Estimation results for the whole sample using the CRE dynamic Tobit model with individual extraction as the dependent variable.

Variable                          Coefficient (Std.Err.)   Partial effect (Std.Err.)
Individual past decision            0.006   (0.061)          0.002   (0.021)
Group past decision                 0.133** (0.030)          0.046** (0.010)
Decision-making time               -0.049** (0.010)         -0.017** (0.003)
Treatment VD                       -1.830** (0.563)         -0.636** (0.195)
Treatment FD                       -2.171** (1.040)         -0.753** (0.361)
Time trend                          0.163** (0.019)          0.056** (0.006)
Individual initial decision         0.178*  (0.095)          0.062*  (0.033)
Intercept                          -8.229   (5.148)

Log-likelihood                                   -2129.767
Wald test for model significance                 χ2(63) = 902.19, p-value < 0.001
LR test for CRE dynamic Tobit without controls   χ2(21) = 143.09, p-value < 0.001
LR test for standard RE static Tobit             χ2(59) = 278.01, p-value < 0.001
Number of observations                           1976
Number of individuals                            104
Uncensored observations                          672
Left-censored observations                       26
Right-censored observations                      1278

Notes: Standard errors are given in brackets. Significance levels: * 10%; ** 5%.


Information sharing effect

Fig 3 provides several useful curves to understand what happened in the two voluntary disclosure treatments. The VD treatment is on the left graph and the FD treatment is on the right graph. First, there is the average of the non-disclosed extractions (dotted line and triangle marker facing down). Second, there is the average of the disclosed extractions (dashed line and triangle marker). In the VD treatment, this average corresponds exactly to the curve of the displayed extractions. For the FD treatment we have added the average of the displayed extractions, which in this treatment can be different from the actual extractions of the players (dash-dot line with pentagon marker). Finally, we report the average extraction (plain line with star marker), i.e. the same curves as in Fig 2 for these treatments.
Fig 3

Evolution of average extractions depending on whether or not they were disclosed.

Clearly, the players who decided to disclose their decision extracted less than the others. This holds in both treatments (Wilcoxon test, p-value < .001 in both), with a larger difference for VD than for FD. This supports conjectures 2.1 and 3.1. In the VD treatment, the overall average of extractions and the average of disclosed extractions are very close and follow the same trajectory. This is consistent with the expected effects of social information: imitation, convergence of decisions towards observed decisions, and the creation of a norm within the group [10–12, 19, 38–40]. Conversely, in the FD treatment the average of the displayed extractions differs greatly from the average of the actual extractions of the group, in line with conjecture 3.2. The average extraction curve for players who did not disclose their choices is higher in the VD treatment than in the FD treatment. The difference stems from the fact that in the VD treatment the least cooperative individuals did not disclose their choices, whereas in the FD treatment they disclosed an extraction value even when it did not correspond to their actual extraction. Fig 4 helps to further understand the FD treatment. The graph shows three new curves in addition to the curve of the average extractions of players who chose not to disclose. The first corresponds to the average extraction of players who disclosed their decision truthfully, i.e. the value they entered matched their actual extraction (dotted line with triangle marker). The second shows the average extraction of players who disclosed their decision but entered a value different from their actual extraction (dash-dot line with square marker). The third shows the average of the values entered by the latter players (small dotted line with star marker).
It is clear that players who lied in their reports extracted far more than those who disclosed their actual extraction (Wilcoxon, p-value < .001) and also more than those who refused to make their decision public (Wilcoxon, p-value = .010). However, as the curve of the values they entered shows, these players seem to have quickly understood that the social optimum was to extract 5 units. The freedom given to players to decide the extraction value that would be disclosed favored the emergence of a strategic behavior consisting in reporting an extraction close to the social optimum: they sent a false signal, expecting others to decrease their extraction so as to increase their individual profit. After the tenth round, when the average extraction starts to increase, the extraction disclosed by lying players decreases. A simple linear regression of individual reported extractions on rounds gives a coefficient of -0.08 for rounds 1 to 10 and -0.14 for rounds 11 to 20; the two slopes are not statistically different at the 5% level. We did not explore why these participants report an extraction level lower than the socially optimal value; our conjecture is that it is a way to slow down the drift towards maximum extraction. In order to identify the effects of specific explanatory variables (in particular those related to information sharing), estimations were carried out treatment by treatment. Table 5 presents the estimation results of the same model (dynamic Tobit with correlated random effects) for the MD, VD, and FD treatments respectively. As for the whole sample, we performed an LR test comparing the models with and without correlated random effects (null hypothesis α_1 = γ = 0) for each of the three treatments.
The result was unambiguously in favor of the CRE dynamic Tobit model (test statistics of 61.051, 93.543, and 90.688 for the MD, VD and FD treatments respectively). In addition, for every treatment the test results also favored our model when compared with either the static Tobit model with standard random effects or the CRE dynamic Tobit model without control variables (see Table 5). As the estimated partial effects are consistent with the estimated coefficients (similar signs and significance levels), either set of estimates can be used to interpret the results. The first column of Table 5 provides the estimation results for the MD treatment. By definition, its set of explanatory variables contains no factors related to the voluntary sharing mechanism, and the estimated coefficients are therefore similar to those for the whole sample presented in Table 4.
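The round-trend comparison for the reported extractions of lying players (rounds 1-10 versus rounds 11-20) amounts to two OLS slopes; a sketch on synthetic data, since the experimental data are not reproduced here:

```python
import numpy as np

def round_slope(rounds, reported):
    """OLS slope of reported extraction on the round number."""
    return np.polyfit(rounds, reported, 1)[0]

# synthetic illustration only: reports drifting slowly downward over 20 rounds
rng = np.random.default_rng(1)
rounds = np.arange(1, 21)
reported = 5.5 - 0.1 * rounds + rng.normal(0.0, 0.3, size=20)

early = round_slope(rounds[:10], reported[:10])   # rounds 1-10
late = round_slope(rounds[10:], reported[10:])    # rounds 11-20
print(early, late)
```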
Fig 4

Actual extractions in FD for players who did not disclose their decisions, for players who disclosed their actual extraction, and for players who disclosed an amount different from the actual one.

For the last case the disclosed amount is also shown.

Table 5

Estimation results by treatment, using the CRE dynamic Tobit model with individual extraction as the dependent variable.

                                                    MD                     VD                     FD
Variable                                       Coef.      P.eff.     Coef.      P.eff.     Coef.      P.eff.
Individual past decision                        0.260*     0.058*     0.217**    0.090**   -0.333**   -0.131**
                                               (0.140)    (0.031)    (0.106)    (0.044)    (0.133)    (0.053)
Group past decision                             0.189**    0.042**    0.065*     0.027*     0.178**    0.070**
                                               (0.068)    (0.015)    (0.037)    (0.015)    (0.061)    (0.024)
Decision-making time                           -0.049**   -0.011**   -0.059**   -0.024**   -0.073**   -0.029**
                                               (0.018)    (0.004)    (0.014)    (0.006)    (0.021)    (0.008)
Time trend                                      0.133**    0.030**    0.175**    0.072**    0.139**    0.055**
                                               (0.044)    (0.010)    (0.029)    (0.012)    (0.056)    (0.022)
Individual initial decision                     0.059      0.013      0.476**    0.197**   -0.119     -0.047
                                               (0.253)    (0.057)    (0.141)    (0.058)    (0.126)    (0.050)
Information sharing, current round                                    4.629      1.915
                                                                     (3.384)    (1.396)
Information sharing, previous round                                  -3.382**   -1.399**
                                                                     (1.549)    (0.638)
Information sharing, #members in the group                            0.013      0.005     -0.180     -0.071
                                                                     (0.247)    (0.102)    (0.289)    (0.114)
Information sharing & lying, current round                                                  4.530      1.791
                                                                                           (3.855)    (1.523)
Information sharing & non-lying, current round                                              8.347      3.301
                                                                                           (5.276)    (2.082)
Information sharing & lying, previous round                                                -2.188     -0.865
                                                                                           (1.748)    (0.690)
Information sharing & non-lying, previous round                                            -4.731*    -1.871*
                                                                                           (2.694)    (1.063)
Intercept                                     -10.860                -6.737**             -14.047**
                                              (14.438)               (1.596)               (4.576)

Log-likelihood                                 -470.06                -783.55               -823.01
Wald test for model significance               χ2(23) = 165.15,       χ2(27) = 534.49,      χ2(30) = 345.13,
                                               p < .001               p < .001              p < .001
LR test for CRE dynamic Tobit without controls χ2(2) = 20.15,         χ2(2) = 70.16,        χ2(2) = 51.23,
                                               p < .001               p < .001              p < .001
LR test for standard RE static Tobit           χ2(21) = 67.12,        χ2(21) = 94.38,       χ2(20) = 92.02,
                                               p < .001               p < .001              p < .001
Number of observations                          608                    684                   684
Number of individuals                           32                     36                    36
Uncensored observations                         133                    269                   270
Left-censored observations                      2                      4                     20
Right-censored observations                     473                    411                   394

Notes: Standard errors are given in brackets. Significance levels: * 10%; ** 5%.

The models estimated for the VD and FD treatments include additional variables linked to the voluntary disclosure mechanism. More precisely, for the VD treatment we added two dummy variables indicating whether players consented to disclose their extraction in the current and in the previous round ("Information sharing, current round" and "Information sharing, previous round"), as well as the number of members in the group who chose to disclose their individual decision. In the FD treatment, as players could report an extraction different from the actual one, there were three possible situations, each corresponding to a dummy variable: (i) the player refused to disclose their decision (the reference); (ii) the player consented but reported an extraction different from the actual one ("Information sharing & lying"); and (iii) the player consented and reported their actual extraction ("Information sharing & non-lying"). We added the current and past values for the latter two dummies. It should be noted that including whether or not extractions were disclosed might have created an estimation bias: individuals could simultaneously make multiple decisions about (i) their extraction, (ii) whether or not to disclose it, and (iii) the amount they reported. This led us to treat the corresponding explanatory variables as endogenous regressors (i.e. "Information sharing, current round" for the VD treatment, and "Information sharing & lying, current round" and "Information sharing & non-lying, current round" for the FD treatment). For this purpose, we applied the control function approach proposed by [46], which is particularly suitable for nonlinear models such as our Tobit model with correlated random effects.
Table 5 provides estimation results that account for this endogeneity bias. Based on a robust t-test (also proposed by [46]), we found that the exogeneity of these regressors could not be maintained, even though their coefficients were not statistically significant. In other words, an estimation assuming the exogeneity of these regressors could lead to misinterpretation. The control function approach of [46], which consists of a two-step estimation, is relatively simple to implement. In the first step, a probit model for the endogenous regressor is estimated in order to obtain a generalized residual. The second step is the estimation of the usual nonlinear model (i.e. the Tobit model with correlated random effects) with the previously computed generalized residuals as an additional regressor; see [46] for computational details. Lastly, we performed a robust t-test for the significance of these generalized residuals. For the VD treatment, the t-statistic was -2.08; for the FD treatment, it was -1.20 for the first generalized residuals (corresponding to "Information sharing & lying, current round") and -1.97 for the second ("Information sharing & non-lying, current round"). These results imply the significance of the generalized residuals in the nonlinear regressions, thereby supporting the control for endogeneity of information sharing in the VD and FD treatments. In the VD treatment, the model estimates confirm that the player's decision in the current round was strongly influenced by their decision in the first round of the game and by their decision in the previous round. The anchoring effect observed in Table 4 is ultimately significant only in this treatment. We interpret this as reflecting the lower noise in the information available to players in the VD treatment, where players can trust the information and easily identify goodwill.
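The two-step procedure can be sketched as follows: a probit first stage for the disclosure dummy yields generalized residuals, which are then added as a regressor in the second-stage model. An illustration on simulated data (the probit is fitted here by generic maximum likelihood, not with the authors' software):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_negloglik(beta, d, X):
    """Negative log-likelihood of a probit model."""
    p = np.clip(norm.cdf(X @ beta), 1e-12, 1 - 1e-12)  # guard against log(0)
    return -(d * np.log(p) + (1 - d) * np.log(1 - p)).sum()

def generalized_residual(d, xb):
    """Probit generalized residual: d*lambda(xb) - (1-d)*lambda(-xb),
    where lambda is the inverse Mills ratio."""
    return d * norm.pdf(xb) / norm.cdf(xb) - (1 - d) * norm.pdf(xb) / norm.cdf(-xb)

# simulated data standing in for the disclosure decision and exogenous regressors
rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
d = (X @ np.array([0.2, 0.8, -0.5]) + rng.normal(size=n) > 0).astype(float)

# Step 1: probit of the (endogenous) disclosure dummy; keep generalized residuals
fit = minimize(probit_negloglik, np.zeros(3), args=(d, X), method="BFGS")
gres = generalized_residual(d, X @ fit.x)

# Step 2 (not implemented here): re-estimate the CRE dynamic Tobit with `gres`
# as an additional regressor and run a robust t-test on its coefficient.
```

At the probit maximum likelihood estimate, the generalized residuals average to (numerically) zero by the score condition, which the second-stage t-test exploits.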
The total extraction of the group in the previous round also exerted a strong influence on the player’s current extraction decision. Moreover, estimates confirm that a player who decided to make their extraction public in the previous round extracted less in the current round than an individual who preferred to keep their extraction private in the previous round (supporting conjecture 2.1). In the FD treatment, the extraction of the player in the previous round had a negative impact on their extraction in the current round, and the extraction of the player in the first round of the game was not a significant variable in the model. Our interpretation is that the social information in this treatment is less selective and has a strategic dimension, which seems to result in players fluctuating more in their extraction decisions. The estimates also tell us that what the player did in the previous round had more influence than what they intended to do in the current round. Thus, the information sharing variables (yes with a lie or yes without a lie) of the current round are not significant compared to the reference variable (no information sharing). A player who decided to make their "true" extraction public in the previous round (i.e. without a lie) extracted less in the current round than a player who preferred to keep their extraction private or who made it public but displayed a "false" value (i.e. lied). Thus, in both voluntary sharing treatments, it can be said that the individuals who contribute to better management of the common resource are those who are willing to make their extractions public with complete honesty and transparency.

Conclusion

The management of common resources is a universal, everyday challenge, both on a large scale, such as the oceans or the atmosphere, and in local situations, such as the use of a borehole shared by a small community. Many factors contribute to improving the management of these resources, as Nobel laureate Elinor Ostrom showed in numerous articles. Among these factors is information about the actions of other users of the resource, commonly referred to as social information. The effects of this social information are mixed. On the one hand, it stimulates cooperative actions, as users may feel obliged to show signs of cooperation [5]. Moreover, it gives users the opportunity to send a signal about the expected cooperation within the group and the social norm that seems appropriate [16, 19]. On the other hand, it is likely to spotlight the least cooperative users. Due to the structure of the social dilemma, those users earn more than cooperative users, since the exploitation of the resource generates individual profits. Therefore, if users tend to imitate the best performers, over-exploitation of the resource is inevitable and can be accelerated by social information [8]. Providing information about the actions of all users in the group is therefore not necessarily beneficial. In this article we proposed a system of voluntary information sharing, in order to keep the beneficial effects of disseminating information (signaling), mitigate its negative effects (imitation of the most selfish), and thereby favor the emergence of upward social information [10]. We experimentally tested two voluntary information sharing mechanisms. In the first, players were only asked to indicate whether they agreed to publicize their level of extraction.
In the second, players were free to decide whether or not to disclose their level of extraction, but they were additionally responsible for choosing the amount that was made public. We compared these treatments with a benchmark treatment in which individual decisions were automatically and mandatorily disclosed. We could have used a benchmark with no social information at all, i.e. with only aggregate information on the total amount extracted by the group. This may be a limitation of this study, but our main concern was to test a mechanism that allows for self-selection of the social information provided to users of a common resource. For the two voluntary disclosure mechanisms, the data exhibited a lower average extraction than under the system with automatic and mandatory disclosure of decisions, thereby reducing the over-exploitation of the resource. The main reason is that voluntary disclosure allows selection in the social information disseminated to players: the more cooperative players make their actions public, while less cooperative players keep this information private, so as not to influence others negatively and thus accelerate the tragedy of the commons. However, when the voluntary disclosure mechanism leaves players free to set the reported amount extracted, less cooperative players exploit this strategic dimension and do not hesitate to lie in an attempt to gain more from the common-pool resource. Ultimately, this is likely to undermine the beneficial effect of the mechanism by breaking trust in the group and preventing the formation of a social norm. [21] showed that the voluntary disclosure mechanism improves the average contribution to the production of a public good. Our study showed that the benefits of this mechanism carry over to an extractive common resource context.
Further investigation is however required to better understand the ins and outs of voluntary information disclosure mechanisms. For example, [21] tested the mechanism in small groups, as we did (group sizes of 5 and 4, respectively). It would be interesting to test it in larger groups, where it would be more difficult to influence the norm or to infer the extraction levels that others do not disclose. Furthermore, information was shared anonymously, so the effects of reputation when anonymity is lifted remain to be studied. Lifting anonymity could increase the motivation of individuals to signal themselves as cooperative, and also limit the strategic behavior of displaying a value lower than the actual extraction. In the same vein, identifying the type of players upstream of the game (unconditional cooperator, conditional cooperator, free-rider or other), with a procedure inspired by those proposed by [20] and [51], would make it possible to learn more about individual behaviour in the game under each voluntary sharing mechanism. Finally, it would be interesting to test the voluntary information-sharing mechanism in a dynamic environment, where the resource is constantly evolving; this is a more complex framework but also closer to reality for common resources [52]. The development of new technologies and of the Internet of Things (IoT) greatly facilitates the sharing and disclosure of information through smart devices. As a result, new mechanisms for managing common-pool resources can be considered. Smart grids and smart meters are examples of devices that serve this purpose. For instance, in the electricity sector, regulators are able to monitor grid traffic in real time and take appropriate actions to reduce stress on the grid at peak hours.
At the same time, users can receive information, such as real-time variable electricity pricing, and are thus better incentivized to manage and adjust their energy consumption [53]. Using smart devices to manage common resources does however run up against the problem of the social acceptability of the automatic and mandatory collection of individual data. Adopting voluntary data collection could help to solve part of this problem. The efficient management of a common-pool resource at a reasonable cost is a serious challenge for decision-makers, and the voluntary dimension of providing information about resource extraction can be a useful tool in this direction. If the regulator only installs smart meters for voluntary users, it reduces costs compared to a generalized installation. If, in addition, the regulator asks users to self-report their extractions, it eliminates metering installation and operating costs; the cost is then borne by the user, who self-reports. However, audits need to accompany self-reporting systems and partial meter installations, so the regulator must weigh the costs of installing meters against the costs of auditing unmetered users. When users self-report their extractions, the number of audits should be higher, as some users will declare amounts that differ from their actual extractions. In California, for instance, the Sustainable Groundwater Management Act, adopted by the State in 2014, stipulates that the board requires annual extraction reports and that metering may be required to acquire the data; the board can then issue orders to acquire the information needed. The full text can be found online at https://mavensnotebook.com/2020/01/23/sgma-implementation-groundwater-sustainability-evaluation-and-state-water-board-intervention/ and https://www.waterboards.ca.gov/water_issues/programs/gmp/intervention.html.
The dissemination of extraction data for social information purposes represents a step towards voluntary provision, i.e. the case where data collected by the regulator are revealed to end-users. As many authors have shown with laboratory and field experiments, social information and the ensuing social comparison (see also [54-56] for research on social comparison nudges) lead users to adopt more cooperative behaviors. With a voluntary disclosure mechanism, the dissemination of social information becomes possible and, since it is selective, its efficiency is increased, as shown by [21] and by us in this paper. The free information disclosure mechanism we proposed in this article is even more likely to foster social acceptability. However, as we have seen, it introduces opportunistic and strategic behaviors which may offset the benefits of social information in the long run and at a larger population scale. To address this problem, the mechanism could be accompanied by audit systems similar to those used for tax returns. This would increase the cost of the mechanism but is also likely to further increase its effectiveness. This is something that should be examined and tested in further investigations.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'. Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out. We look forward to receiving your revised manuscript. Kind regards, Valerio Capraro Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. 
Please improving statistical reporting and refer to p-values as "p<.001" instead of "p=0". Our statistical reporting guidelines are available at https://journals.plos.org/plosone/s/submission-guidelines#loc-statistical-reporting Additional Editor Comments (if provided): I have now collected four reviews from four experts in the field. The reviews are somehow split, with one recommending rejection and three recommending major revision. Therefore, I would like to invite you to revise your work following the reviewers' comments. Needless to say that all comments must be addressed. Besides the reviewers' comments, I would like to ask you to avoid using acronyms in the abstract (they minimise impact). I am looking forward for the revision. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Partly Reviewer #3: Partly Reviewer #4: Partly ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: No Reviewer #2: Yes Reviewer #3: Yes Reviewer #4: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. 
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: No Reviewer #3: No Reviewer #4: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes Reviewer #4: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: The paper studies the effect of extraction disclosure in a common pool resource game. Based on a controlled lab experiment, the authors introduce three different treatments wherein they test the effect of a mandatory disclosure policy, a voluntary one and a policy characterized by the freedom to disclose any amount. They find that a voluntary policy would help to reduce extraction compared to a mandatory one. However the freedom to disclose appear to be counter-productive. Although the research question is interesting, but I have some concerns about the robustness of the approach and the contribution of the paper appears to be small. Comments: - The authors present a series of example to motivate the paper but they coul explain why they have decided to introduce these interventions in a CPR game setting. 
There is mixed evidence for the external validity of CPR experimental studies, and behavior in the experiments sometimes does not match observed behavior outside the experiment. It would be interesting to see some motivation for the choice of the game.

- The section dedicated to the conjectures is not very informative. It does not help to understand how subjects are expected to behave in each treatment and, more importantly, why. In particular, it would be interesting to discuss the possible mechanisms that might explain the positive effects of the main treatments relative to a baseline CPR game. Currently, the paper is quick on this and just evokes the formation of social norms. What about dishonest behavior by free-riders in the VD treatment? Having such a discussion could improve the added value and contribution of the paper.

- There are other (potentially relevant) papers on CPR games with communication or social interactions prior to making decisions; see, for example, Cardenas et al (2000), Rodriguez-Sickert (2008), or Andries et al (2011) for a survey of the literature. How does your paper relate to this strand of the literature?

- Overall, the authors should spend more effort in explaining why and how their paper extends the existing literature on CPR experiments.

- On several occasions, the authors write that in social dilemmas “most players are conditional cooperators” and cite Fischbacher et al (2001). However, this is not accurate: Fischbacher et al found that on average about 50% of subjects are conditional cooperators, and recent studies have mostly confirmed this figure.

- I am not convinced by the analysis in Section 3:

  - There is no non-parametric analysis of the average behaviours in the treatments. Given the small size of the groups and the small variation of averages between groups, I suspect there are no significant differences, but the authors should discuss it.
  - Also, median and/or distribution analyses could be performed here in order to give insights on the potential differences.

  - In the regression analysis, it is not clear to me why they use a dynamic panel model with previous extraction and initial extraction (which is not really defined, by the way) and not a simple random-effects tobit or OLS with no dynamics. Indeed, including the lagged dependent variable may lead to estimation problems due to endogeneity issues. Nothing specific is said about this issue, but the error term can't be statistically independent of y(t-1).

  - Also, it would be interesting to run an estimate without the demographic controls to see how the effect is dependent on the controls.

Reviewer #2: The paper studies the relative efficiency of three information disclosure mechanisms for common pool resource (CPR) problems: mandatory disclosure, voluntary disclosure and free disclosure. For this purpose, the authors run an experiment on CPR where groups of 4 individuals decide on the number of tokens (from 1 to 10) they extract, taking into account that their payoffs increase with the tokens they get, but also that there is a negative externality to the group increasing with the total number of tokens extracted in the group. Although the symmetric equilibrium of the resulting game would imply extracting 5 tokens each, the average number of extractions during the 20 rounds is between 7 and 10. These extractions are higher when mandatory disclosure of the extractions is implemented than when voluntary mechanisms are considered, although the number of extractions increases and converges to the maximum along the experiment in all treatments. However, the results show that there is a significant number of ‘cooperative’ people who decided to disclose their decisions, which is higher in the voluntary than in the free disclosing treatment, since the latter tend to exploit the disclosed information strategically.
Although the results of the paper are somewhat expected, the experiment is appealing, apparently well-executed and the analysis well-performed, which are the main items for publication in PLoS ONE. However, I also believe that there are several issues that should be considered in the revision. See my main comments below:

Comments

1) There is a paper on CPR published in PLoS ONE (Lacomba et al. 2017) that shows that there is a significant number of cooperators (who decide to follow a sustainable rule of extraction) independently of the disclosure of information about the actions of other players. However, the (mandatory) disclosure of information either accelerates the non-cooperative behavior or leads to a more egalitarian share of the resources. These two dynamics are highly dependent on the first choice of the players. This story is also found in the paper under revision, where the initial decisions have a positive and significant impact in the voluntary contribution treatment. Nevertheless, these ideas are not exploited enough in the paper. I suggest studying all these issues in depth.

2) Linked to the previous comment, I miss a baseline treatment where there is no disclosure of information at all. Would all subjects behave as free-riders, or would there be, even in this case, a significant proportion of cooperators?

3) From the average results in Figure 2, the people who follow the optimal strategy (taking just 5 tokens) cannot be identified. An analysis of the percentage of people who follow the ‘optimal strategy’ in all treatments would be interesting. I have the impression that subjects are not capable of deducing their optimal strategy from the payoff function in the experiment. Maybe they should have received some examples in the instructions to clarify it. I think that the instructions of the experiment should be appended to the paper to shed light on this issue.

4) The results seem to be highly dependent on the number of people in the groups.
For two individuals (as in the paper mentioned in comment #1), free-riders are immediately detected with the disclosure of information. With four subjects, free-riders (and information manipulators) can be identified along the experiment, and this explains the convergence of the voluntary and free disclosure trends. An analysis of the sensitivity of the results to the number of subjects should be performed, or at least mentioned as a possible limitation.

5) It is not clear to me whether the students were economically incentivized or not (for instance, Conjecture 1 states “Even without a specific monetary incentive”). The experimental economics literature is very critical of non-monetary-incentivized experiments. Therefore, this issue should also be clarified. In general, I miss more details on the experimental procedures, as well as the instructions.

6) The paper runs panel data regressions to investigate the individual extraction decisions. I would devote a dedicated section to the econometric modeling. The models and the variables used should be properly justified, as well as the employed method (a tobit model with correlated random effects). Note that dynamic panel data models, by construction, present endogeneity problems that should be taken into account (maybe through GMM estimation). The group effects have to be treated as well, since every group has different interactions. All the tables can be gathered in a single table so that estimates can be easily compared, and statistics on model performance should also be displayed. As a matter of fact, the interpretation of the results deserves further explanation.

7) It would also be interesting to analyze gender issues in relation to free-riding, voluntary disclosing, and manipulation of reported information in the free disclosure treatment. Are there any significant differences between men and women?
8) The paper states that the free disclosure mechanism seems to be less efficient than voluntary disclosure, although less expensive since it does not require an agency for data collection. Nevertheless, it would require an external audit to deal with the strategic manipulation of data. An alternative solution would be the implementation of ‘strategy-proof’ mechanisms that incentivize truthful reporting. I suggest looking for alternative payoff functions where the penalty is less sensitive to deviations from the optimal strategy (maybe in terms of N times the median extraction) or where an extra penalty for randomly detected liars is introduced.

References

Lacomba, J.A., Lagos, F., Perote, J. (2017). The Lazarillo's game: Sharing resources with asymmetric conditions. PLOS ONE 12/7, e0180421.

Reviewer #3: Review of: Contrasting effects of information sharing on CPR extraction behaviour: experimental findings

The paper presents results from a behavioral lab experiment designed to test the effect of different information sharing regimes on the extraction levels in a repeated common-pool resource (CPR) game played by groups of four participants. Results show that if participants are given a choice of whether or not to disclose their extraction level in each round, extraction levels are closer to the optimal level at the start as compared to the automatic, full disclosure condition, but then converge to inefficient, maximum extraction levels over time. I like the idea behind this experiment: if cooperation (optimum-level extraction) is driven by descriptive norms, giving participants the choice of whether or not to disclose their individual extraction levels may lead to higher cooperation, because free-riders will be less likely to disclose their full extraction levels. The results of this experiment thus inform us that implementing voluntary disclosure mechanisms may not be sufficient to promote sustainable cooperation in a four-person, repeated CPR game.
This is an important insight. However, as the paper is currently written, the paper's contribution to our knowledge may remain unclear for inattentive readers. Here are my suggestions on how the clarity of this paper could be improved (in order of appearance in the paper):

Abstract: The last sentence is not very informative. The authors repeat this sentence at the end of the introduction, but there it is not informative either. In essence, I believe there is not much the authors can say about the two treatment conditions except that in both, participants' behavior converges to maximum extraction. I'd therefore replace this sentence with one that better reflects the actual results of this study.

At the bottom of page 3 the authors briefly refer to previous literature on factors that promote cooperation in CPR games and mention communication, reputation and information as important factors. Somewhat later it becomes clear that the factor the authors focus on is information. However, communication and reputation also involve information. The paper would benefit if the authors were more specific about what they mean by information already at this point in the paper. The paper might also benefit from a somewhat longer review (one or two paragraphs) of these mechanisms in the CPR context.

As mentioned already above, the main theoretical argument implicit in the introduction is that pro-social participants (i.e. participants with other-regarding preferences) would extract less and disclose their true extraction levels more than free-riders, and this would lead to average extraction levels closer to the optimum due to descriptive norms. But what is unclear in this argument is why pro-socials would be more likely to disclose their true extraction levels. Disclosure as such does not produce any benefits for others, only extraction levels do. So why is truthful information sharing in this case a pro-social act?
That disclosure is costly is not a sufficient argument to infer pro-social preferences in those who disclose. Clarifying this point is all the more important as it is not costly at all to disclose one's extraction level in the experiment.

In the first paragraph on page 5 the authors talk about framing to say that it may matter whether the social dilemma is a give-some (e.g. PGG) or take-some (CPR) dilemma. I think they should acknowledge some relevant literature on this topic (e.g., van Dijk and Wilke, 2000; Gächter et al, 2017). In this same paragraph the authors mention a study by Kreitmair (2015), which is most similar to what they do in this paper. But then they never get back to Kreitmair (2015) in the discussion. I think they should discuss how their results compare to Kreitmair (2015).

In the second paragraph on page 5 the authors describe their experimental conditions. The description of the free disclosure condition (FD) should be improved. To what do players agree? And they declare themselves as what? I think it is only a matter of wording. And once these terms are introduced, the authors should stick to using the same throughout the paper (e.g. use mandatory rather than compulsory, etc.). In the same paragraph the authors write: “The FD mechanism is free of charge for the regulatory agency because the information is provided by the actors themselves.” Does it matter for your argument that it might be the most costly measure for consumers? They need to go online and enter a value each period.

On page 7 the experimental conditions are explained once more. In the paragraph describing the voluntary disclosure treatment the authors write: “Specifically, it was anonymously displayed in the summary screen…” What do the authors mean by that?

The experimental design and procedure (section 2.2) are insufficiently described.
Information on the number of sessions, participants per session, the randomization procedure, length of sessions, how much participants earned on average, etc. is missing. Moreover, it is standard in experimental social science to add a version of the instructions (translated into English) in the online appendix. The authors need to add this information to allow for a better assessment of their use of methods. Also, I was wondering whether they could add participants' average age in footnote 7.

Table 2 should also contain the number of cases and standard deviations.

The first line on page 11 should say Figure 2 (not Figure 1), no? The same is probably true for the last sentence on page 12.

I'd prefer if the authors compiled all four regression models in one table. By putting the standard errors below the coefficient estimates, it should be possible. One table will facilitate the comparison of effects across different models. Moreover, information about the target variable should be placed in the title of the tables; it is currently hidden in the table notes.

In Table 3 the effect of treatment FD is larger than the effect of VD. This seems inconsistent with what Figure 2 shows. The authors should maybe check the labels in Table 3 once more.

In Figure 4 I did not get what the dotted line with the star symbol stands for. It does not seem to be explained in the text either.

Note that information sharing can also be included in the model fitted to all data. This can be done by creating a dummy for whether a participant is in the mandatory disclosure condition (0) or in one of the voluntary disclosure conditions (1) and interacting this dummy with the information sharing variables that are currently only in Tables 5 and 6.

Relatedly, I was wondering why the authors include a variable for "information sharing in current period". Do participants first decide whether to share the info and then the amount they want to extract?
If so, that was not clear from the description of the experimental design and procedures and therefore should be added there.

The discussion section is very brief and superficial. Apart from relating their results to the results by Kreitmair (2015), the authors could discuss some obvious limitations of their study and point out directions for future research. As far as I can see, the main limitations of the study are: (1) there is no control condition without information provision, which would correspond to the standard nowadays (most people do not know how much their neighbors extract or what the average in their neighborhood or their village/city is); (2) even if free-riders did not disclose their extraction level, given the small groups (n = 4), participants could infer the number of free-riders from average or even own earnings in a round.

Finally, I was wondering whether the authors can say a bit more about the extent of conditional cooperation based on their results. By expanding this discussion they could tie their paper back to the start, where they talk about social norms and conditional cooperation. Finally, note that the authors' statement in the very last paragraph that “Observations from our experiment showed that a mechanism based on voluntary sharing is effective,…” is not justified given the results. Eventually, all groups reach close to the maximum extraction level. The authors should adjust their conclusions accordingly.

I will be happy to have another look at the revised version, but the authors should seriously consider all the points I make above in their revision.

References

Gächter, S., Kölle, F., & Quercia, S. (2017). Reciprocity and the tragedies of maintaining and providing the commons. Nature Human Behaviour, 1, 650-656.

van Dijk, E., & Wilke, H. (2000). Decision-induced focusing in social dilemmas: Give-some, keep-some, take-some, and leave-some dilemmas. Journal of Personality and Social Psychology, 78, 92-104.
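Reviewer #3's second limitation — that in small groups payoffs themselves reveal group behavior even without disclosure — can be sketched with a toy payoff function. The functional form and coefficients below are purely illustrative assumptions, not the payoff function used in the experiment under review:

```python
# Hypothetical CPR payoff: pi_i = A*x_i - B*x_i*X, where x_i is player i's
# extraction and X is the group's total extraction. A and B are illustrative
# coefficients, NOT taken from the paper under review.
A, B = 12.0, 0.4

def payoff(x_i, group_total):
    """Player i's payoff given own extraction and total group extraction."""
    return A * x_i - B * x_i * group_total

def infer_group_total(x_i, observed_payoff):
    """Invert the payoff rule: a player who knows the rule can recover the
    group total from their own extraction and earnings alone, so voluntary
    non-disclosure does not actually hide aggregate free-riding."""
    return (A * x_i - observed_payoff) / (B * x_i)

# A player extracting 10 tokens in a group that extracted 28 in total:
pi = payoff(10, 28)
X = infer_group_total(10, pi)  # recovers 28 without any disclosure
```

Under any payoff rule that is strictly monotone in the group total for a given own extraction, this inversion is possible, which is exactly why a no-information baseline would need to withhold round earnings as well.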
Reviewer #4: The main question in this research is compelling. However, there are several issues and biases that the authors need to handle to overcome the shortcomings of this paper. It appears that the authors have treated several things in a very light manner without paying enough attention. The current experiments could not fully convince the readers how the mechanism of voluntary information disclosure can be implemented in real life. It is a new issue that the authors are trying to handle, but it is crucial to be clear about what the authors can and cannot claim from the current experiments. It is advisable to be clear about the “scope of experiments” and to state the limitations and future work if something is out of scope. Therefore, to publish this paper, the authors need to handle the significant issues raised seriously. Moreover, the authors should also keep in mind that this work is going to be published in a multidisciplinary journal. Therefore, the policy implications need to be clear-cut for the general readers.

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: Yes: Wojtek Przepiorka (Utrecht University, Department of Sociology / ICS)
Reviewer #4: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

Submitted filename: Report on PONE-D-20-03051.pdf
Submitted filename: PONE-D-20-03051_review.pdf

31 May 2020

Dear editor,

Thank you for letting us rework our article to improve it. We have changed a lot of things, as requested by the reviewers. It seems to us that the article is much clearer in this new version, with more justification and a better position relative to the existing literature. Given these many changes, the version with tracked changes was no longer readable, so we created a new file. Consequently, the "Manuscript revised with tracking changes" file is the same as the "Manuscript" file. We hope that this new version will meet the reviewers' expectations.

Sincerely yours

Submitted filename: revision_reponse_reviewer_4_1.docx

23 Jun 2020

PONE-D-20-03051R1
Contrasting effects of information sharing on CPR extraction behaviour: experimental findings
PLOS ONE

Dear Dr. DUBOIS,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
Please submit your revised manuscript by Aug 07 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

- A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
- A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
- An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Valerio Capraro
Academic Editor
PLOS ONE

Additional Editor Comments (if provided): I have now collected three reviews from three of the four experts who reviewed the first version of this article (the fourth one declined my invitation).
The reviewers think that the paper has improved, but two of them (and especially one of them) still think that more work should be done before the article can be published. Therefore, I invite you to revise your work again. Please follow the reviewers' comments closely. Looking forward to the revision.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed
Reviewer #3: (No Response)
Reviewer #4: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes
Reviewer #3: Partly
Reviewer #4: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository.
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The experiment is well-executed, the analysis well-performed and the results represent interesting contributions to behavioral and public economics, which are the main items for publication in PLoS ONE. The paper has significantly improved in the revision as a consequence of all the reviewers' comments. Personally, I am fully satisfied with the authors' answers. I would also state a minor comment before publication: please check that there is a one-to-one correspondence between citations and references (I have detected that some cited papers are not included in the references, e.g. Lacomba et al., 2017).

Reviewer #3: The paper has much improved, but it still requires much work. The authors did not sufficiently address all my points in their revision.
General comments

Information sharing as a pro-social act: I am still not convinced that in the context of this experiment, sharing extraction information can be called pro-social in general. On page 4 the authors write that “sharing one's decision can also be seen as a pro-social act that enhances one's social image and self image”. Note, however, that the motive of enhancing one's social image and/or self-image does not imply other-regarding preferences.

Lab experiments with economic games are not a standard approach in the social sciences, and therefore the readership of PLOS ONE may not be entirely familiar with the terminology the authors use. Of course it is not necessary to explain everything from the start. I would only like to encourage the authors to read their paper through the lens of someone who is not an experimental economist and see whether everything is so self-explanatory (it is not).

The paper would benefit from being edited by a professional English editing service.

If I remember correctly, PLOS ONE does not allow footnotes. The authors should check this and integrate the text they currently have in the footnotes into the main text.

Throughout the paper, subscript notation does not seem to be working out very well. The authors should take greater care when editing in-text formulas, equations, variables, etc.

There are a few points regarding the econometric model that are difficult to understand (see my particular comments on that).

Particular comments

Table 2 should mention in MD at Step 2 that players had no choice. Otherwise one might wonder what happens in this condition. Also in the text the authors should say that nothing happens in Step 2 of MD, and that in Step 3 subjects get the round summary. It will be clearer and more consistent this way.

It is still unclear how subjects were randomly assigned to experimental conditions (randomization, not random selection, is the quintessential precondition for causal inference in experiments).
If randomization was not possible, the authors should mention it explicitly.

On page 12, the sentence “Experimental economics is very useful in this case too” does not seem to be very useful.

Despite what the authors write in their reply, the use of terminology is still not consistent. For example, in Figure 1, the y-axes are labeled with “disclosure” and “sharing” although, in all likelihood, these terms are meant to say the same thing. Speaking of Figure 1 (and the other figures, for that matter), the labels are very small, which makes it difficult to read the figures. The authors should try to improve this.

At the bottom of page 13 and the top of page 14, what does “statistic =” stand for? What kind of statistic do the authors report here (Chi2? F? t?)? Were these tests comparing average extractions for all 20 rounds? And, please, do not use one-sided p-values. Use two-sided with alpha = 5% throughout the paper. This is standard (probably also in PLOS ONE).

Concerning the econometric model described on page 14, I was wondering why the authors include a lagged dependent variable as well as the total group extraction level from the previous round. At least the second variable should be total group extraction minus player i's extraction. This will reduce collinearity and better allow the identification of conditional cooperation.

What is the theoretical rationale for including decision time in the model? The authors need to justify that (I did not see them do that in the earlier parts of the paper).

On pages 15/16, the authors write: “Finally, the estimation showed that, regardless of the treatment, the first decision in the game mattered.” However, as far as I can see, this is not something they can conclude based on the estimation of the model shown in Table 4. To be able to say that there is a positive effect of the initial decision irrespective of treatment, there must be no significant interaction effects.
But the authors do not estimate interactions between treatment variables and initial decisions. Moreover, they admit that these effects differ across treatments on page 19 (contradicting their initial statement). Related to that, I was wondering how the authors, in the statistical model, distinguish between an individual's past decision in round 2 and that individual's initial decision. Maybe I missed the authors' explanations in the text.

In Figure 4, I find it interesting that the subjects who lie about their extractions lie increasingly more as average extraction levels increase. They seem to believe that once things go to “hell”, lying more about the true state of nature could maintain things in their favor. I am just wondering whether the authors would like to say a bit more about this finding in the paper.

I did not find a discussion of the limitations of the authors' approach in the conclusions section. From my previous review: “…the authors could discuss some obvious limitations of their study and point out directions for future research. As far as I can see, the main limitations of the study are: (1) there is no control condition without information provision, which would correspond to the standard nowadays (most people do not know how much their neighbors extract or what the average in their neighborhood or their village/ city is). (2) Even if free-riders did not disclose their extraction level, given the small groups (n =4) participants could infer the number of free riders from average or even own earnings in a round…”

Reviewer #4: The review work is mostly satisfying. However, I felt that the limitations of the research are not yet fully explained; for example, implementing the mechanisms the authors propose in a dynamic environment would be a very good idea for an extension.
However, for a first case study the authors have adopted a static framework, which keeps the analysis simpler and allows a better understanding of the behavioral dimension of the effects induced by these voluntary sharing mechanisms, which is understandable. Therefore, I think the authors should still go ahead and report this.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Reviewer #3: Yes: Wojtek Przepiorka (Utrecht University, Department of Sociology / ICS)

Reviewer #4: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

4 Aug 2020

Dear Editor,

Thank you for giving us the opportunity to revise our article. The reviewers' comments and suggestions really helped us improve the article.
As answered to Reviewer 3, it was not possible to send the article to a professional English-language editing service between the two revisions because of the summer holidays (the university's administrative department is on leave). If the article is accepted, we commit to doing so as soon as possible.

Submitted filename: revision_2_answers_001.docx

20 Aug 2020

PONE-D-20-03051R2
Contrasting effects of information sharing on CPR extraction behaviour: experimental findings
PLOS ONE

Dear Dr. DUBOIS,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Oct 04 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.
Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Valerio Capraro
Academic Editor
PLOS ONE

Additional Editor Comments (if provided): One of the reviewers suggests one last minor change. Please address this comment at your earliest convenience. I am looking forward to the final version.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

Reviewer #3: Yes

**********

4.
Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: I am satisfied with the answers to my previous comments and I consider that the current paper deserves publication in PLOS ONE.

Reviewer #3: Many thanks for addressing the remaining issues I had with the paper. I think the paper has much improved once more. For better readability of the results section, please consider adding references in the text to the estimates and the corresponding statistics in the regression tables that you mention.
Otherwise, the reader will not be able to follow how you used the regression models to test for significant differences etc. And, although I think the text is pretty good in terms of language, it could benefit from being read once more by an editing service or a native speaker.

**********

7. PLOS authors have the option to publish the peer review history of their article. Do you want your identity to be public for this peer review?

Reviewer #2: No

Reviewer #3: Yes: Wojtek Przepiorka (Utrecht University, Department of Sociology)

20 Sep 2020

We have made the final minor changes suggested by Reviewer 3 and, thanks to him, the content of the document has been further improved.

Submitted filename: Reviewer 3.docx
23 Sep 2020

PONE-D-20-03051R3
Contrasting effects of information sharing on CPR extraction behaviour: experimental findings

Dear Dr. DUBOIS,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Valerio Capraro
Academic Editor
PLOS ONE

28 Sep 2020

PONE-D-20-03051R3
Contrasting effects of information sharing on common-pool resource extraction behavior: experimental findings

Dear Dr. Dubois:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.
If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Valerio Capraro
Academic Editor
PLOS ONE
  9 in total

1.  Revisiting the commons: local lessons, global challenges.

Authors:  E Ostrom; J Burger; C B Field; R B Norgaard; D Policansky
Journal:  Science       Date:  1999-04-09       Impact factor: 47.728

2.  Lab experiments for the study of social-ecological systems.

Authors:  Marco A Janssen; Robert Holahan; Allen Lee; Elinor Ostrom
Journal:  Science       Date:  2010-04-30       Impact factor: 47.728

3.  The constructive, destructive, and reconstructive power of social norms.

Authors:  P Wesley Schultz; Jessica M Nolan; Robert B Cialdini; Noah J Goldstein; Vladas Griskevicius
Journal:  Psychol Sci       Date:  2007-05

4.  Judgment under Uncertainty: Heuristics and Biases.

Authors:  A Tversky; D Kahneman
Journal:  Science       Date:  1974-09-27       Impact factor: 47.728

5.  The tragedy of the commons. The population problem has no technical solution; it requires a fundamental extension in morality.

Authors:  G Hardin
Journal:  Science       Date:  1968-12-13       Impact factor: 47.728

6.  Lying to appear honest.

Authors:  Shoham Choshen-Hillel; Alex Shaw; Eugene M Caruso
Journal:  J Exp Psychol Gen       Date:  2020-01-30

7.  The Lazarillo's game: Sharing resources with asymmetric conditions.

Authors:  Juan A Lacomba; Francisco Lagos; Javier Perote
Journal:  PLoS One       Date:  2017-07-13       Impact factor: 3.240

8.  Shame in decision making under risk conditions: Understanding the effect of transparency.

Authors:  Tomas Bonavia; Josué Brox-Ponce
Journal:  PLoS One       Date:  2018-02-14       Impact factor: 3.240

9.  Reciprocity and the Tragedies of Maintaining and Providing the Commons.

Authors:  Simon Gächter; Felix Kölle; Simone Quercia
Journal:  Nat Hum Behav       Date:  2017-08-28
  1 in total

1.  Relational quality and uncertainty in common pool water management: an exploratory lab experiment.

Authors:  Marcela Brugnach; Sander de Waard; Dimitri Dubois; Stefano Farolfi
Journal:  Sci Rep       Date:  2021-07-26       Impact factor: 4.379

