Adrian Meule, Anna Richard, Anja Lender, Radomir Dinic, Timo Brockmeyer, Mike Rinck, Jens Blechert.
Abstract
Most tasks for measuring automatic approach-avoidance tendencies do not resemble naturalistic approach-avoidance behaviors. Therefore, we developed a paradigm for assessing approach-avoidance tendencies towards palatable food that is based on arm and hand movements on a touchscreen, thereby mimicking real-life grasping or warding-off movements. In Study 1 (n = 85), an approach bias towards chocolate-containing foods was found when participants reached towards the stimuli, but not when these stimuli had to be moved on the touchscreen. This approach bias towards food observed in grab movements was replicated in Study 2 (n = 60) and Study 3 (n = 94). Adding task features to disambiguate distance change, through either corresponding image zooming (Study 2) or emphasized self-reference (Study 3), did not moderate this effect. Associations between approach bias scores and trait and state chocolate craving were inconsistent across studies. Future studies need to examine whether touchscreen-based approach-avoidance tasks reveal biases towards other stimuli in the appetitive or aversive valence domain and relate to relevant interindividual difference variables.
Year: 2019 PMID: 31055649 PMCID: PMC7479004 DOI: 10.1007/s00426-019-01195-1
Source DB: PubMed Journal: Psychol Res ISSN: 0340-0727
Fig. 1 Experimental setup in all three studies. Participants sat in front of a table on which a touchscreen monitor was positioned in portrait orientation at an angle of approximately 15°.
Fig. 2 Representative pull trials in a food block in Study 1 (a), Study 2 (b), and Study 3 (c). Each trial began with the display of a hand symbol in the center of the screen. When participants touched this symbol with five fingers, two pictures appeared at the top and bottom of the screen. Participants were instructed to respond either to pictures showing food or to pictures showing non-edible objects, moving pictures at the top towards themselves (to the bottom of the screen) and pictures at the bottom away from themselves (to the top of the screen). The picture disappeared and the next trial started when the picture reached the opposite border of the screen. In Study 1, all participants performed the same task (a). In Study 2, one group of participants performed the task with a zoom feature and one group performed the task as in Study 1 (b). In Study 3, one group of participants performed the task with a manikin displayed at the bottom, one group with a manikin displayed at the top, and one group performed the task as in Study 1 (c). Note that the arrows were not used in the task but are presented here for illustration.
Fig. 3 Grabbing times as a function of trial type (push vs. pull) and stimulus (food vs. objects) in Study 1 (a), Study 2 (b), and Study 3 (c). Note that grabbing times in Study 1 include the time participants needed to recognize the pictures and decide which picture to reach for (i.e., the time between picture onset and the moment participants lifted their hand off the starting position). This decision time was not included in grabbing times in Study 2 and Study 3. Therefore, grabbing times are longer in Study 1 than in Study 2 and Study 3 and show a main effect of stimulus (i.e., participants were faster for food than for objects across trial types), which was similarly found in Study 2 and Study 3 when decision time was analyzed separately. Error bars represent standard errors of the mean.