Ohad Dan, Yonatan Loewenstein.
Year: 2019
PMID: 31243285
PMCID: PMC6594951
DOI: 10.1038/s41467-019-10825-6
Source DB: PubMed
Journal: Nat Commun
ISSN: 2041-1723
Impact factor: 14.919
Fig. 1 Choice engineering in a repeated two-alternative forced-choice task. a Experimental task: a reward schedule allocates binary rewards to each of the alternatives in each trial. The subject repeatedly chooses between the two alternatives; if she chooses a rewarded alternative, she receives a monetary reward in that trial. In this example, the first choice, “1”, yields a reward ($ sign), while the second choice, “2”, does not. No feedback is given about the foregone payoff (the reward associated with the alternative that was not chosen). b Based on the Law of Effect, the bias in favor of alternative 1 is expected to be maximal if every choice of alternative 1 is rewarded (red circles) while choosing alternative 2 is never rewarded (black X). c Choice architecture: if the number of rewards associated with the two alternatives is constrained, a choice architect may use the primacy heuristic, placing all rewards associated with alternative 1 at the beginning of the sequence and those of alternative 2 at its end. d, e A choice engineer can instead use a quantitative model of choice to optimize the reward schedule. d Static schedule optimized for a QL agent. e Static schedule optimized for a CATIE agent.
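The setting of Fig. 1 can be made concrete with a minimal sketch: a softmax Q-learning (QL) agent choosing between two alternatives under a static, constrained binary reward schedule, with the primacy heuristic of panel c as the schedule. All function names, the learning rule, and the parameter values (`alpha`, `beta`, trial and reward counts) are illustrative assumptions, not taken from the paper's implementation.

```python
import math
import random

def primacy_schedule(n_trials=100, n_rewards=25):
    """Fig. 1c heuristic (illustrative): all rewards for alternative 0 at the
    start of the sequence, all rewards for alternative 1 at its end."""
    r0 = [1] * n_rewards + [0] * (n_trials - n_rewards)
    r1 = [0] * (n_trials - n_rewards) + [1] * n_rewards
    return list(zip(r0, r1))

def run_ql(schedule, alpha=0.1, beta=3.0, seed=0):
    """Softmax Q-learning on a fixed reward schedule.
    Returns the bias: the fraction of choices of alternative 0."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    choices0 = 0
    for r0, r1 in schedule:
        # Two-alternative softmax reduces to a logistic of the value difference.
        p0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
        a = 0 if rng.random() < p0 else 1
        reward = (r0, r1)[a]
        q[a] += alpha * (reward - q[a])  # delta-rule update (Law of Effect)
        choices0 += (a == 0)
    return choices0 / len(schedule)

bias = run_ql(primacy_schedule())
```

A choice engineer, in contrast, would search over all schedules satisfying the reward constraint for the one that maximizes this bias under the assumed agent model.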
Fig. 2 The effect of the reward schedule on the bias of a QL agent. Red, orange, and blue bars denote the naïve schedule and the schedules optimized assuming a QL agent and a CATIE agent, respectively. Light colors denote static schedules, whereas dark colors denote dynamic reward schedules (see Supplementary Methods). The dashed line denotes chance level; error bars are standard errors of the mean.
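Bars like those in Fig. 2 come from repeated simulations of an agent on a schedule, summarized as mean bias with its standard error. The sketch below shows this pipeline for a naïve schedule (rewards scattered at random, equal counts per alternative); the agent, schedule generator, and all parameters are illustrative assumptions rather than the authors' code.

```python
import math
import random
import statistics

def run_ql(schedule, alpha=0.1, beta=3.0, rng=None):
    """Compact softmax Q-learning agent; returns the fraction of
    choices of alternative 0 (the bias)."""
    rng = rng or random.Random()
    q = [0.0, 0.0]
    n0 = 0
    for r0, r1 in schedule:
        p0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))
        a = 0 if rng.random() < p0 else 1
        q[a] += alpha * ((r0, r1)[a] - q[a])
        n0 += (a == 0)
    return n0 / len(schedule)

def naive_schedule(n_trials=100, n_rewards=25, seed=2):
    """Illustrative naïve schedule: 25 rewards per alternative,
    placed at random positions."""
    rng = random.Random(seed)
    def column():
        r = [1] * n_rewards + [0] * (n_trials - n_rewards)
        rng.shuffle(r)
        return r
    return list(zip(column(), column()))

def bias_sem(schedule, n_runs=200, seed=1):
    """Mean bias and standard error of the mean over repeated runs,
    as plotted with error bars in Fig. 2."""
    rng = random.Random(seed)
    biases = [run_ql(schedule, rng=rng) for _ in range(n_runs)]
    return statistics.fmean(biases), statistics.stdev(biases) / math.sqrt(n_runs)

mean_bias, sem = bias_sem(naive_schedule())
```

Swapping `naive_schedule()` for an optimized schedule would reproduce the comparison in the figure: the engineered schedules raise the mean bias well above the chance level marked by the dashed line.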