Elvio Blini¹,², Clément Desoche³, Romeo Salemme¹,³, Alexandre Kabil³, Fadila Hadj-Bouziane¹,², Alessandro Farnè¹,²,³.
Abstract
Closer objects are invariably perceived as bigger than farther ones and are therefore easier to detect and discriminate. This is so deeply grounded in our daily experience that no question has been raised as to whether the advantage for near objects depends on other features (e.g., depth itself). In a series of five experiments ( N = 114), we exploited immersive virtual environments and visual illusions (i.e., Ponzo) to probe humans' perceptual abilities in depth and, specifically, in the space closely surrounding our body, termed peripersonal space. We reversed the natural distance scaling of size in favor of the farther object, which thus appeared bigger, to demonstrate a persistent shape-discrimination advantage for close objects. Psychophysical modeling further suggested a sigmoidal trend for this benefit, mirroring that found for multisensory estimates of peripersonal space. We argue that depth is a fundamental, yet overlooked, dimension of human perception and that future studies in vision and perception should be depth aware.
Keywords: depth; multisensory integration; perception; peripersonal space; visual streams
Year: 2018 PMID: 30285541 PMCID: PMC6238160 DOI: 10.1177/0956797618795679
Source DB: PubMed Journal: Psychol Sci ISSN: 0956-7976
Demographic Information for the Five Experiments
| Experiment | Sample size | Left-handed (n) | Age (years), M | Age (years), SD |
|---|---|---|---|---|
| 1 | 20 (10 female) | 2 | 23.4 | 3.13 |
| 2 | 32 (16 female) | 2 | 21.8 | 2.52 |
| 3 | 21 (10 female) | 6 | 23.9 | 2.06 |
| 4 | 21 (11 female) | 6 | 24.6 | 2.58 |
| 5 | 20 (10 female) | 0 | 24.0 | 3.94 |
Fig. 1. The main features of each experiment. Experiment 1 exploited a 3-D virtual-reality setting. Shapes were presented close to (50 cm) or far away from (300 cm) participants, below the fixation cross; this resulted in close shapes always being perceived to be lower than farther ones. Retinal size was kept constant. The proprioceptive input coming from the position of the hand was manipulated to be close to or far from the closer shape. Experiment 2 exploited a Ponzo illusion in a 2-D display. Shapes were presented in the lower (close) or upper (far) visual field. Retinal size was kept constant. Experiment 3 exploited a 3-D virtual-reality setting. Unlike in Experiment 1, shapes were presented at the fixation level, and their position on the transverse axis and retinal size were kept constant. Experiment 4 exploited a 3-D virtual-reality setting. Shapes were presented at the fixation level, and their position on the transverse axis was kept constant, but retinal size varied, being naturally scaled as a function of distance. Experiment 5 exploited a 3-D virtual-reality setting. Shapes were presented at the fixation level and at six different distances (50, 100, 150, 200, 250, and 300 cm, labeled D1 to D6 here). Retinal size was scaled as a function of distance.
Models Contrasted in Experiment 5
Four curve families were contrasted: linear, logarithmic, exponential, and sigmoidal.
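The model comparison carried out in Experiment 5 can be sketched as follows: fit each candidate curve to performance as a function of viewing distance, then compare fits by RMSE and AIC. This is a minimal illustration, not the authors' analysis: the parameterizations (including the logistic form chosen for the sigmoid), the synthetic data, and the AIC formula AIC = n·ln(RSS/n) + 2k are assumptions of this sketch.

```python
# Hedged sketch: contrasting linear, logarithmic, exponential, and sigmoidal
# curve families by RMSE and AIC. All parameter forms and data are assumed
# for illustration, not taken from the paper.
import numpy as np
from scipy.optimize import curve_fit

def linear(x, a, b):
    return a + b * x

def logarithmic(x, a, b):
    return a + b * np.log(x)

def exponential(x, a, b, c):
    return a + b * np.exp(c * x)

def sigmoidal(x, a, b, xc, s):
    # Logistic curve: 'a' baseline, 'b' range, 'xc' inflection point, 's' slope.
    return a + b / (1.0 + np.exp((xc - x) / s))

def fit_and_score(model, x, y, p0):
    """Fit a model and return (RMSE, AIC), with AIC = n*ln(RSS/n) + 2k."""
    params, _ = curve_fit(model, x, y, p0=p0, maxfev=10000)
    resid = y - model(x, *params)
    rss = float(np.sum(resid ** 2))
    n, k = len(x), len(p0)
    return np.sqrt(rss / n), n * np.log(rss / n) + 2 * k

# Six viewing distances (50-300 cm), rescaled for numerical stability.
x = np.array([50, 100, 150, 200, 250, 300]) / 100.0
rng = np.random.default_rng(0)
# Synthetic performance data generated from a sigmoid plus small noise.
y = sigmoidal(x, 0.1, 0.8, 1.75, 0.3) + rng.normal(0, 0.01, x.size)

scores = {
    "linear": fit_and_score(linear, x, y, (0.0, 1.0)),
    "logarithmic": fit_and_score(logarithmic, x, y, (0.0, 1.0)),
    "exponential": fit_and_score(exponential, x, y, (0.0, 0.1, 1.0)),
    "sigmoidal": fit_and_score(sigmoidal, x, y, (0.0, 1.0, 1.5, 0.5)),
}
best = min(scores, key=lambda m: scores[m][1])  # lowest AIC wins
```

On sigmoid-generated data, the sigmoidal fit attains the lowest AIC despite its extra parameters, which is the logic behind the group-level comparison reported below.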
Fig. 2. Results from Experiments 1 to 4: box-and-whisker plots depicting the mean gain in response time as a function of distance (interindividual variability of the peripersonal-space advantage, calculated by subtracting response times to close objects from response times to far objects, in ms). In each plot, the vertical length of the box represents the interquartile range, the thick horizontal line represents the median, and the whiskers indicate the full range of values. Dots outside the whiskers represent values exceeding 1.5 times the interquartile range.
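The box-plot conventions in the Fig. 2 caption (median, interquartile range, dots for values beyond 1.5 times the IQR) follow the standard Tukey rule, which can be made concrete with a short sketch. The `boxplot_stats` helper and the response-time gains below are hypothetical, chosen only to illustrate the convention.

```python
# Sketch of Tukey box-plot statistics: median, IQR, fences at 1.5x IQR,
# and outliers beyond them. The per-participant RT gains (far minus close,
# in ms) are made-up illustrative numbers, not the paper's data.
import numpy as np

def boxplot_stats(values):
    v = np.asarray(values, dtype=float)
    q1, med, q3 = np.percentile(v, [25, 50, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # Tukey fences
    whisk_lo = v[v >= lo].min()  # whiskers end at the most extreme
    whisk_hi = v[v <= hi].max()  # data point inside the fences
    outliers = v[(v < lo) | (v > hi)]
    return {"median": med, "iqr": iqr,
            "whiskers": (whisk_lo, whisk_hi),
            "outliers": outliers.tolist()}

# Hypothetical RT gains (ms); positive values mean a close-space advantage.
gains = [12, 18, 25, 30, 33, 35, 38, 40, 42, 45, 48, 51, 55, 60, 120]
stats = boxplot_stats(gains)
```

With these numbers, the extreme value of 120 ms falls outside the upper fence and would be drawn as a dot beyond the whisker, as in Fig. 2.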
Results From Experiment 5
| Measure and curve | Root-mean-square error (RMSE) | Akaike information criterion (AIC) |
|---|---|---|
| Accuracy | | |
| Linear | 0.13 | −1.72 |
| Logarithmic | 0.21 | 4.28 |
| Exponential | 0.08 | −7.94 |
| Sigmoidal | 0.05 | −8.50 |
| Response time | | |
| Linear | 3.30 | 37.37 |
| Logarithmic | 5.60 | 43.69 |
| Exponential | 3.19 | 36.95 |
| Sigmoidal | 2.19 | 36.44 |
Note: The table gives values for group means, fitted with the corresponding equations.
Fig. 3. Results from Experiment 5, in which we presented shapes at six different depths. Group-wise predicted sigmoidal curves are shown for mean accuracy (left panel) and mean response time (RT) advantage (right panel) as a function of distance (labeled here from D1, close, to D6, far). Error bars show standard errors of the mean. The y-axes refer to the odds of providing a correct response (accuracy) and the relative RT advantage observed with respect to participant-specific mean performance.