Lina I. Davitt, Filipe Cristino, Alan C-N Wong, E. Charles Leek.
Abstract
This study examines the kinds of shape features that mediate basic- and subordinate-level object recognition. Observers were trained to categorize sets of novel objects at either a basic (between-families) or subordinate (within-family) level of classification. We analyzed the spatial distributions of fixations and compared them to model distributions of different curvature polarity (regions of convex or concave bounding contour), as well as internal part boundaries. The results showed a robust preference for fixation at part boundaries, and for concave over convex regions of bounding contour, during both basic- and subordinate-level classification. In contrast, mean saccade amplitudes were shorter during basic- than subordinate-level classification. These findings challenge models of recognition that do not posit any special functional status to part boundaries or curvature polarity. We argue that both basic- and subordinate-level classification are mediated by object representations that make explicit internal part boundaries and distinguish concave from convex regions of bounding contour. The classification task constrains how shape information in these representations is used, consistent with the hypothesis that both parts-based and image-based operations support object recognition in human vision.
Year: 2013 PMID: 24364701 PMCID: PMC3977674 DOI: 10.1037/a0034983
Source DB: PubMed Journal: J Exp Psychol Hum Percept Perform ISSN: 0096-1523 Impact factor: 3.332
Figure 1. The stimulus set (Ziggerins) used in the experiment.
Figure 2. Schematic illustration of the trial structure for the sequential matching task.
Figure 3. Examples of the algorithmically generated regions of interest (ROIs) for each model: (a) the 2D curvature map used to define (b) external convex and (c) concave regions; and (d) internal part boundaries defined by the minima/short-cut rule (see the Methods section).
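The convex/concave distinction above rests on the sign of contour curvature: for a contour traversed counterclockwise, positive signed curvature marks convex bounding regions and negative curvature marks concave ones. The paper's own 2D curvature maps are computed differently (see its Methods section); the following is only a minimal sketch of the underlying signed-curvature formula, using finite differences on a sampled contour.

```python
import numpy as np

def signed_curvature(x, y):
    """Signed curvature of a sampled 2D contour via finite differences.
    For a counterclockwise traversal: positive = convex region,
    negative = concave region. Illustrative sketch only; not the
    curvature-map procedure used in the paper."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # Standard parametrization-invariant curvature formula.
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Example: a unit circle traversed counterclockwise is convex
# everywhere, so the signed curvature is positive (approximately 1).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
kappa = signed_curvature(np.cos(t), np.sin(t))
```

Because `np.gradient` falls back to one-sided differences at the array ends (it does not know the contour is closed), the first and last samples are less accurate; interior values approximate the true curvature closely.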
Figure 4. Data-model correspondences across tasks, expressed (see the Methods section) in terms of model matching correspondence (MMC). MMC values are expressed relative to the percentage of overlap accounted for by variation in low-level image statistics or visual saliency (see Leek et al., 2012), by generating fixation overlap distributions relative to those predicted by the saliency model. A positive MMC value indicates a higher fixation data-model correspondence than that accounted for by visual saliency; a negative value indicates a lower one. Bars show standard error of the mean (% overlap).
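The logic of the MMC measure in Figure 4 can be sketched as follows: compute the percentage of fixations falling inside a model's ROIs, then express it relative to the overlap predicted by the saliency baseline. The helper names, the mask layout, and the fixation values below are all hypothetical; the paper's exact overlap computation follows Leek et al. (2012).

```python
import numpy as np

def overlap_pct(fixations, roi_mask):
    """Percentage of fixations whose (row, col) location lands inside a
    boolean ROI mask. Hypothetical helper for illustration; not the
    paper's exact overlap measure."""
    rows, cols = fixations[:, 0], fixations[:, 1]
    return 100.0 * roi_mask[rows, cols].mean()

def mmc(observed_pct, saliency_pct):
    """Model matching correspondence, sketched as observed fixation-ROI
    overlap minus the overlap predicted by visual saliency. Positive
    values mean fixations match the shape model beyond what low-level
    saliency explains."""
    return observed_pct - saliency_pct

# Toy example: a 10x10 image whose left half is the ROI.
mask = np.zeros((10, 10), dtype=bool)
mask[:, :5] = True
fix = np.array([[2, 1], [4, 3], [7, 8], [5, 2]])  # 3 of 4 inside ROI
obs = overlap_pct(fix, mask)  # 75.0
print(mmc(obs, 50.0))         # positive: above the saliency baseline
```

Expressing overlap as a difference from the saliency baseline, rather than as a raw percentage, is what lets a positive value be read as evidence for the shape model specifically rather than for low-level image statistics.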