| Literature DB >> 34907466 |
Tonghe Zhuang, Angelika Lingnau.
Abstract
Objects can be categorized at different levels of abstraction, ranging from the superordinate (e.g., fruit) and the basic (e.g., apple) to the subordinate level (e.g., Golden Delicious). The basic level is assumed to play a key role in categorization, e.g., in terms of the number of features used to describe these categories and the speed of processing. To what degree do these principles also apply to the categorization of observed actions? To address this question, we first selected a range of actions at the superordinate (e.g., locomotion), basic (e.g., to swim) and subordinate level (e.g., to swim breaststroke), using verbal material (Experiments 1-3). Experiments 4-6 aimed to determine the characteristics of these actions across the three taxonomic levels. Using a feature-listing paradigm (Experiment 4), we determined the number of features provided by at least six out of twenty participants (common features), separately for the three levels. In addition, we examined the number of shared features (i.e., provided for more than one category) and distinct features (i.e., provided for one category only). Participants produced the highest number of common features for actions at the basic level. Actions at the subordinate level shared more features with other actions at the same level than actions at the superordinate level did. Actions at the superordinate and basic level were described with more distinct features than those at the subordinate level. Using an auditory priming paradigm (Experiment 5), we observed that participants responded faster to action images preceded by a matching auditory cue at the basic and subordinate level, but not at the superordinate level, suggesting that the basic level is the most abstract level at which verbal cues facilitate the processing of an upcoming action.
Using a category verification task (Experiment 6), we found that participants were faster and more accurate to verify action categories (depicted as images) at the basic and subordinate level in comparison to the superordinate level. Together, in line with the object categorization literature, our results suggest that information about action categories is maximized at the basic level.
Year: 2021 PMID: 34907466 PMCID: PMC9363348 DOI: 10.1007/s00426-021-01624-0
Source DB: PubMed Journal: Psychol Res ISSN: 0340-0727
Fig. 1 Dendrogram illustrating the results of the hierarchical clustering analysis. Actions belonging to the same cluster are highlighted in the same color. Blue: locomotion, purple: ingestion, yellow: object manipulation, red: sensation/leisure-related actions, green: learning/studying, turquoise: communication
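The kind of agglomerative clustering that produces a dendrogram like the one in Fig. 1 can be sketched as follows. Note that the six two-dimensional "feature vectors" and the three-cluster cut below are invented for illustration only; the paper derived its clusters from participants' data, not from these values.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical feature vectors for six actions (values invented):
# two locomotion-like, two ingestion-like, two communication-like
features = np.array([
    [1.0, 0.1], [0.9, 0.2],   # locomotion-like
    [0.1, 1.0], [0.2, 0.9],   # ingestion-like
    [0.5, 0.5], [0.6, 0.4],   # communication-like
])

# Agglomerative clustering with Ward linkage; Z encodes the full merge tree
Z = linkage(features, method="ward")

# Cut the tree into three clusters; each action receives a cluster label
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```

With `scipy.cluster.hierarchy.dendrogram(Z)` the same merge tree can be rendered as a dendrogram with colored clusters, as in Fig. 1.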
Labels for action categories at the superordinate and subordinate level resulting from the taxonomic depth task (Experiment 2)
| Superordinate level | Basic level | Subordinate level |
|---|---|---|
| Locomotion (sich fortbewegen) | To go (gehen) | To walk (spazieren gehen) |
| | | To hike (wandern) |
| | | To walk a dog (Gassi gehen) |
| | To drive (fahren) | To drive a car (Auto fahren) |
| | | To ride a bike (Fahrrad fahren) |
| | | To take a bus (Bus fahren) |
| | To swim (schwimmen) | To swim front crawl (Kraulschwimmen) |
| | | To swim breaststroke (Brustschwimmen) |
| Ingestion (Nahrung aufnehmen) | To drink (trinken) | To drink water (Wasser trinken) |
| | | To drink beer (Bier trinken) |
| | | To drink coffee (Kaffee trinken) |
| | To eat (essen) | To eat an apple (einen Apfel essen) |
| | | To eat cake (Kuchen essen) |
| | To cook (kochen) | To cook noodles (Nudeln kochen) |
| | | To cook soup (Suppe kochen) |
| Cleaning (sauber machen) | To brush (putzen) | To clean windows (Fenster putzen) |
| | | To brush teeth (Zähne putzen) |
| | | To clean the bathroom (Bad putzen) |
| | To wash (waschen) | To wash clothes (Wäsche waschen) |
| | | To do the dishes (Geschirr abwaschen) |
| | | To clean the face (Gesicht waschen) |
| Communication (Kommunizieren) | To talk (sich unterhalten) | To talk to friends (sich mit Freunden unterhalten) |
| | | To talk on the phone (sich am Telefon unterhalten) |
| | To listen (hören) | To listen to someone (jemandem zuhören) |
| | | To listen to the radio (Radio hören) |
| | To tell (erzählen) | To tell a joke (einen Witz erzählen) |
| | | To tell a story (eine Geschichte erzählen) |
Basic level actions were selected on the basis of Experiment 1
Relationship between actions at the superordinate (columns) and subordinate (rows) level within (highlighted in grey, yellow, green, and blue) and across categories (1: very weak relationship; 7: very strong relationship)
Note: Actions that were rated as outliers using MAD analysis (Leys et al., 2013) are marked in bold and were removed from further analyses
Ratings of abstraction (1: very concrete, 7: very abstract) and complexity (1: very simple, 7: very complex)
Note: Mean ratings of abstraction and complexity for actions provided at the subordinate level belonging to one of four different superordinate levels (grey: ‘locomotion’; yellow: ‘ingestion’; green: ‘cleaning’ and blue: ‘communication’). Actions that were determined as outliers using MAD analysis (Leys et al., 2013) are marked in bold and were removed
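The MAD-based outlier screening cited above (Leys et al., 2013) can be sketched as follows. The threshold of 2.5 and the scaling constant 1.4826 are the values recommended in that paper; the eight example ratings are invented for illustration and are not the study's data.

```python
import numpy as np

def mad_outliers(ratings, threshold=2.5):
    """Flag outliers via the median absolute deviation (Leys et al., 2013).

    A value counts as an outlier if its absolute deviation from the median
    exceeds threshold * MAD. The MAD is scaled by 1.4826 so that it is a
    consistent estimator of the standard deviation under normality.
    """
    ratings = np.asarray(ratings, dtype=float)
    med = np.median(ratings)
    mad = 1.4826 * np.median(np.abs(ratings - med))
    return np.abs(ratings - med) > threshold * mad

# Hypothetical complexity ratings on the 1-7 scale (values invented):
# the single rating of 7 stands out against the rest and is flagged
ratings = [2, 3, 3, 2, 3, 2, 7, 3]
print(mad_outliers(ratings))  # → [False False False False False False  True False]
```

Unlike a mean-and-SD criterion, the median and MAD are themselves robust to the outliers being screened for, which is the method's main selling point.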
Actions at the superordinate, basic and subordinate level used in Experiments 4–6
| Superordinate level | Basic level | Subordinate level |
|---|---|---|
| Locomotion (sich fortbewegen) | To drive (fahren) | To drive a car (Auto fahren) |
| | | To take a bus (Bus fahren) |
| | To swim (schwimmen) | To swim front crawl (Kraulschwimmen) |
| | | To swim breaststroke (Brustschwimmen) |
| Ingestion (Nahrung aufnehmen) | To drink (trinken) | To drink water (Wasser trinken) |
| | | To drink beer (Bier trinken) |
| | To eat (essen) | To eat an apple (einen Apfel essen) |
| | | To eat cake (Kuchen essen) |
| Cleaning (sauber machen) | To brush (putzen) | To clean the windows (Fenster putzen) |
| | | To brush teeth (Zähne putzen) |
| | To wash (waschen) | To do the dishes (Geschirr abwaschen) |
| | | To clean the face (Gesicht waschen) |
Fig. 2 Mean number of common, distinct and shared features of actions at the superordinate, basic and subordinate level. a Actions at the basic level were described with more common features than actions at the other two levels. b Actions at the superordinate and basic level were described with more distinct features than actions at the subordinate level. c Actions at the subordinate level were described with more shared features than actions at the superordinate level
Results of the Kruskal–Wallis H test for common, distinct and shared features (upper part), and for common features, separately for movement, body part and object features (lower part)
| | Test statistic | Total number | df | Sig. | Effect size |
|---|---|---|---|---|---|
| Common features | 11.68 | 21 | 2 | 0.003*** | 0.58 |
| Distinct features | 11.81 | 21 | 2 | 0.003*** | 0.59 |
| Shared features | 7.77 | 21 | 2 | 0.021* | 0.39 |
| Common features | |||||
| Movement features | 7.67 | 21 | 2 | 0.022* | 0.38 |
| Body part features | 1.03 | 21 | 2 | 0.317 | 0.05 |
| Object features | 4.39 | 21 | 2 | 0.314 | 0.22 |
Note: *p < 0.05; **p < 0.01; ***p < 0.005; ****p < 0.001
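The Kruskal–Wallis H statistic reported in the table above can be computed as sketched below. The three groups of feature counts are invented toy values (chosen without ties so that ranking is unambiguous), not the study's data; the paper's analyses were run on 21 observations across the three taxonomic levels.

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction; values here are distinct).

    Ranks all observations jointly, then compares the per-group rank sums:
    H = 12 / (n (n + 1)) * sum(R_i^2 / n_i) - 3 (n + 1).
    """
    data = np.concatenate(groups)
    n = data.size
    ranks = np.argsort(np.argsort(data)) + 1  # ranks 1..n (no ties assumed)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += r.sum() ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical common-feature counts per taxonomic level (values invented)
superordinate = [3, 4, 5]
basic = [9, 10, 11]
subordinate = [6, 7, 8]

h = kruskal_h(superordinate, basic, subordinate)
print(round(h, 1))  # → 7.2, above the chi2(df=2) .05 cutoff of 5.99
```

In practice `scipy.stats.kruskal` does the same computation with a tie correction; the rank-based test is the appropriate replacement for a one-way ANOVA when the feature counts cannot be assumed normal.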
Fig. 3 Mean RT and accuracy in the auditory priming experiment (Experiment 5). a Auditory cues at the basic and subordinate level led to faster responses in comparison to auditory cues at the superordinate level, and this effect was stronger if the auditory cue matched the action. b Auditory cues at the subordinate level led to more accurate responses in comparison to auditory cues at the basic and superordinate level. Auditory cues that matched the following action picture led to more accurate responses at the subordinate level, whereas they led to less accurate responses at the superordinate level. Error bars show SEM
Fig. 4 Design and procedure used in Experiment 6. Upper panel: Each block consisted of 72 trials and lasted 150 s. Lower panel: At the beginning of each block, participants were presented with a written label (in German) corresponding to an action at one of the three taxonomic levels (e.g., ‘trinken’—‘to drink’; see Table 4) for 1 s. This label was followed by a block of 72 trials. In each trial, participants were presented with an image of an action (duration: 16.67–166.67 ms in steps of 16.67 ms), followed by a scrambled mask (2 s), and judged whether the action image (e.g., a picture of a person drinking a glass of beer) corresponded to the label provided at the beginning of the block (e.g., ‘trinken’—‘to drink’). If the depicted action matched the label (‘matched trials’), participants clicked the left mouse button; in the case of a non-match (‘non-matched trials’), they pressed the right button
Fig. 5 RT and accuracy for matched trials as a function of the exposure duration of the action image, separately for the three taxonomic levels. a Participants were faster to verify the category of actions at the basic and the subordinate level in comparison to the superordinate level across all examined exposure durations. b For short exposure durations, the accuracy to verify the category of actions was not affected by the taxonomic level. For long exposure durations, participants were more accurate to verify the category of actions at the basic and subordinate level in comparison to the superordinate level. At exposure duration = 50 ms, participants responded more accurately to action images preceded by category labels at the subordinate in comparison to the superordinate level