Giorgio Gronchi, Marco Raglianti, Stefano Noventa, Alessandro Lazzeri, Andrea Guazzini.
Abstract
Psychological research has found that human perception of randomness is biased. In particular, people consistently show the overalternating bias: they rate binary sequences of symbols (such as Heads and Tails in coin flipping) with an excess of alternation as more random than prescribed by the normative criterion of Shannon's entropy. Within data mining for medical applications, Marcellin proposed an asymmetric measure of entropy that is well suited to accounting for this bias and to quantifying subjective randomness. We fitted Marcellin's entropy and Renyi's entropy (a generalized uncertainty measure comprising many different kinds of entropies) to experimental data from the literature using the Differential Evolution algorithm. We observed a better fit for Marcellin's entropy than for Renyi's entropy. The fitted asymmetric entropy measure also showed good predictive properties when applied to different datasets of randomness-related tasks. We conclude that Marcellin's entropy can be a parsimonious and effective measure of subjective randomness, useful in psychological research on randomness perception.
Keywords: Differential Evolution algorithm; Marcellin's entropy; Renyi's entropy; Shannon's entropy; asymmetric entropy; overalternating bias; randomness perception
Year: 2016 PMID: 27458418 PMCID: PMC4934134 DOI: 10.3389/fpsyg.2016.01027
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
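The three uncertainty measures compared in the paper can be sketched for a binary source described by its alternation probability p. The sketch below is illustrative only: the function names are invented, the closed form used for the asymmetric measure (maximal at p = θ rather than at p = 0.5, following Marcellin et al.) is an assumption, and the θ value shown is not the fitted value reported in the paper.

```python
import math

def shannon(p):
    # Shannon entropy (bits) of a Bernoulli(p) source, e.g. the
    # probability of alternation in a binary sequence; maximal at p = 0.5.
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def renyi(p, alpha):
    # Renyi entropy of order alpha (alpha > 0, alpha != 1);
    # tends to Shannon entropy as alpha -> 1, still maximal at p = 0.5.
    return math.log2(p ** alpha + (1 - p) ** alpha) / (1 - alpha)

def marcellin(p, theta):
    # Asymmetric entropy in the closed form of Marcellin et al. (assumed
    # here for illustration): equals 1 at p = theta, so the point of
    # maximal perceived randomness can sit above p = 0.5.
    return p * (1 - p) / ((1 - 2 * theta) * p + theta ** 2)

# Illustrative mode parameter (hypothetical, not the paper's fitted value):
theta = 0.6
```

With a mode parameter fitted above 0.5, the asymmetric measure assigns maximal randomness to sequences that alternate slightly more often than chance, which is exactly the overalternating bias that the symmetric Shannon and Renyi measures cannot capture.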
Figure 1. The empirical subjective randomness measured by Falk and Konold (solid line) and the second-order entropy computed by Shannon's formula (dashed line).
Best parameter tuning for the different entropy models and their respective best and worst fitness measures (with percentage of convergence) found by DE.
| Model | Best parameter | Best fitness (convergence %) | Worst fitness (convergence %) |
| Shannon | – | 0.949 | – |
| Renyi | α = 2.37 | 0.875 (100%) | (0%) |
| Marcellin | | 0.728 (95%) | 0.755 (5%) |
Figure 2. The target function (solid) is the empirical data from Falk and Konold. The entropies are computed by the Shannon (dashed), Renyi (dotted), and Marcellin (solid with circles) formulas after parameter fitting.
Figure 3. Relationship between the percentage of random responses for the set of 128 sequences of Experiments A (A) and B (B) and their Marcellin's entropy (with fitted parameters, Table ). Pearson's r = 0.60 and r = 0.67, respectively.
Pearson product-moment correlations of the Experiment A and B results with different subjective randomness scores (Marcellin's entropy, Difficulty Predictor, Griffiths and Tenenbaum's model).
| | Marcellin's entropy | Difficulty Predictor | Griffiths and Tenenbaum's model |
| Experiment A | 0.60 | 0.67 | 0.76 |
| Experiment B | 0.67 | 0.73 | 0.80 |