Alice Wong, Garance Merholz, Uri Maoz.
Abstract
The human ability for random-sequence generation (RSG) is limited but improves in a competitive game environment with feedback. However, it remains unclear how random people can be during games and whether RSG during games improves when people are explicitly informed that they must be as random as possible to win. Nor is it known whether any such improvement in RSG transfers outside the game environment. To investigate this, we designed a pre/post intervention paradigm around a Rock-Paper-Scissors game followed by a questionnaire. During the game, we manipulated participants' level of awareness of the computer's strategy: they were either (a) not informed of the computer's algorithm or (b) explicitly informed that the computer used patterns in their choice history against them, so they had to be maximally random to win. Using a compressibility metric of randomness, our results demonstrate that human RSG can reach levels statistically indistinguishable from computer pseudo-random generators in a competitive-game setting. However, our results also suggest that human RSG cannot be further improved by explicitly informing participants that they need to be random to win. In addition, the higher RSG in the game setting does not transfer outside the game environment. Furthermore, we found that the underrepresentation of long repetitions of the same entry in the series explains up to 29% of the variability in human RSG, and we discuss what might make up the variance left unexplained.
Year: 2021 PMID: 34667239 PMCID: PMC8526708 DOI: 10.1038/s41598-021-99967-6
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
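The compressibility metric referred to in the abstract is presumably based on Lempel-Ziv complexity (LZC, as in the figure captions below). A minimal sketch of the LZ76 phrase counting that underlies such a metric, assuming rock/paper/scissors choices are mapped to the symbols 0/1/2; the paper's exact normalization and mapping are not given here:

```python
def lz76_complexity(seq):
    """Number of phrases in the Lempel-Ziv (1976) parsing of a symbol sequence.

    More phrases means a less compressible, more random-looking sequence.
    """
    s = ''.join(map(str, seq))
    n = len(s)
    phrases = 0
    i = 0
    while i < n:
        l = 1
        # Grow the candidate phrase while it can still be copied from the
        # history (a match starting earlier in s, self-overlap allowed).
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases

# Hypothetical example: an RPS choice history, 0 = rock, 1 = paper, 2 = scissors.
patterned = "0120120120"   # a highly regular cycle
print(lz76_complexity(patterned))  # fewer phrases than a comparable irregular sequence
```

On the classic test string from Kaspar and Schuster, `lz76_complexity("0001101001000101")` yields the expected 6 phrases.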
Figure 1. The progress of each trial in the experiment. Following the countdown, participants were required to press the appropriate key for rock, paper, or scissors within 500 ms of the onset of the Go signal; otherwise, the trial was forfeited. Their selection was then presented on the screen. In Game trials (on the right), their selection was accompanied by the computer’s selection and by the Game score.
Within- and between-participants division of the experiment.
| Pre-game | Game | Post-game |
|---|---|---|
| Random sequence generation | Conditions 1–3 | Random sequence generation |
Each participant carried out all three parts of the experiment (Pre-game, Game, and Post-game). Different participant groups were randomized into conditions 1–3 only in the Game part of the experiment.
Figure 2. Visualization of exclusion criteria. Scatter (left) and box (right) plots showing the LZC scores of all the sequences in the Unaware and Aware conditions. Sequences more than 1.5 × IQR away from the median were identified as outliers and are shown in red. See Supplementary Table 1 for a list of excluded sequences and for more on sequence IDs.
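The exclusion rule in the caption, flagging sequences whose LZC score lies more than 1.5 × IQR from the median, can be sketched as below. Note this differs from the more common Tukey fences drawn around the quartiles; the function name is illustrative:

```python
import statistics

def split_outliers(scores, k=1.5):
    """Split scores into (kept, outliers) by distance from the median.

    A score is an outlier if |score - median| > k * IQR, per the Fig. 2 caption.
    """
    q1, med, q3 = statistics.quantiles(scores, n=4)  # quartile cut points
    iqr = q3 - q1
    kept = [s for s in scores if abs(s - med) <= k * iqr]
    outliers = [s for s in scores if abs(s - med) > k * iqr]
    return kept, outliers

kept, outliers = split_outliers([10, 11, 12, 11, 10, 30])
print(outliers)  # the extreme score is flagged
```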
Figure 3. LZC scores of human-generated sequences. Violin plots of LZC scores by experimental part and condition, with the mean and 95% confidence intervals superimposed in black, are shown on the left. The empirical distribution of 1000 bootstrapped pseudo-random sequences is given on the right. The horizontal, solid red lines (on the left and right) indicate the mean LZC score at the 2.5th and 97.5th percentiles (bottom and top, respectively) of 1000 bootstrapped, computer-generated pseudo-random sequences. Values between these lines are statistically indistinguishable from the computer-generated pseudo-random sequence distribution. (See Supplementary Fig. 2 for all LZC scores, including the Control condition.)
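The percentile band in this figure can be reproduced in outline by scoring many pseudo-random ternary sequences and taking the empirical 2.5th and 97.5th percentiles. A self-contained sketch, assuming a sequence length of 300 (the paper's actual length is not stated here) and illustrative names throughout:

```python
import random

def lz76_complexity(s):
    """Phrase count of the Lempel-Ziv (1976) parsing (higher = more random-looking)."""
    n, phrases, i = len(s), 0, 0
    while i < n:
        l = 1
        # Grow the phrase while it can be copied from the history (overlap allowed).
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        phrases += 1
        i += l
    return phrases

def prng_lzc_band(n_boot=1000, length=300, seed=0):
    """2.5th/97.5th-percentile LZC band from bootstrapped pseudo-random sequences.

    Human scores falling inside this band are statistically indistinguishable
    from the pseudo-random generator at this sequence length.
    """
    rng = random.Random(seed)
    scores = sorted(
        lz76_complexity(''.join(rng.choice('012') for _ in range(length)))
        for _ in range(n_boot)
    )
    return scores[round(0.025 * (n_boot - 1))], scores[round(0.975 * (n_boot - 1))]

lo, hi = prng_lzc_band()
print(lo, hi)  # band endpoints against which human LZC scores are compared
```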
Figure 4. Average run-length scores of human-generated sequences. Average run-length scores by experimental part and condition are indicated below each violin plot and shown in black inside each violin plot with 95% CIs. The horizontal red line indicates the mean run-length of the 2.5th percentile of 1000 bootstrapped pseudo-random sequences. (See Supplementary Fig. 3 for all run-length scores, including the Control condition.)
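The run-length score in this figure is presumably the mean length of maximal runs of identical consecutive entries, which connects to the abstract's finding that underrepresenting long repetitions explains much of the variability in human RSG. A minimal sketch (function name illustrative):

```python
from itertools import groupby

def average_run_length(seq):
    """Mean length of maximal runs of identical consecutive entries.

    A perfectly alternating sequence scores 1.0; an i.i.d. uniform ternary
    sequence averages 1.5; repetition-avoidant humans tend to fall below that.
    """
    runs = [sum(1 for _ in group) for _, group in groupby(seq)]
    return sum(runs) / len(runs)

print(average_run_length("RPSRPS"))  # no repeats at all: 1.0
print(average_run_length("RRPSS"))   # runs of lengths 2, 1, 2
```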