
Does the body talk to the body? The relationship between different body representations while observing others' body parts.

Alessia Tessari, Giovanni Ottoboni

Abstract

The way human bodies are represented is central to everyday activities. The cognitive system must combine internal visceral and somatosensory signals with external, visually driven information generated by the spatial placement of others' bodies and of one's own body in space. However, how different body representations covertly interact when observing human body parts is still unclear. We therefore investigated the implicit processing of body parts in healthy participants (N = 70) by manipulating either the posture of the body part stimuli (conditions a and b) or the participants' response posture (conditions c, d, and e), using a spatial compatibility task known as the Sidedness task. The task requires participants to judge the colour of a circle superimposed on a task-irrelevant picture of a body part. Responses are facilitated when the spatial side of the responding hand corresponds to the spatial code generated by the hand stimulus's position with respect to a body of reference. Results showed that, in all five experimental conditions, observing the task-irrelevant body parts oriented participants' attention and facilitated responses that were spatially compatible with the position such body parts occupy within a configural representation of the body structure (i.e., the Body Structural Representation). Notably, the body part stimuli were mentally attached to the body in the most comfortable and least awkward postures, following anatomo-physiological constraints. Moreover, the pattern of results was not influenced by manipulating the participants' response postures, suggesting that the automatic and implicit coding of body part stimuli does not rely on proprioceptive information about one's own body (i.e., the Body Schema). We propose that knowledge of the human body's morphometry is enriched by biomechanical and anatomo-physiological information about the body's real movement possibilities.
Moreover, we discuss the importance of the automatic orienting of attention based on sidedness within the context of imitational learning.
© 2022 The Authors. British Journal of Psychology published by John Wiley & Sons Ltd on behalf of The British Psychological Society.


Keywords:  attention; body; body schema; body structural representation; imitation; implicit coding


Year:  2022        PMID: 35181883      PMCID: PMC9545991          DOI: 10.1111/bjop.12558

Source DB:  PubMed          Journal:  Br J Psychol        ISSN: 0007-1269


The cognitive system stores visuo-spatial representations of the whole-body structure, in which the position of each body part is related to the others. Knowledge about the human Body Structural Representation is enriched by biomechanical and anatomo-physiological information about the body's possible movements. When prompted by cues, human bodies are mentally recalled in positions that respect the physiological constraints of the human body but are independent of the observer's own body posture.

BACKGROUND

Bodies are special objects. Such a unique status derives from people's ability to represent their own body and others' bodies (Arzy et al., 2006; Coolen & de Mul, 2014; Slaughter et al., 2004), integrating multiple sources of sensory information from early on in life (Fontan et al., 2017; Slaughter et al., 2004) within dedicated brain areas (Blanke, 2012). However, the way this happens is still a matter of investigation (Dijkerman & de Haan, 2007; Fontan et al., 2017). As studies of brain-damaged patients suggest, the ways people code body information and develop body representations are diverse (Reed & Farah, 1995), as the nature of each piece of information is different. Semantic and emotional aspects are represented within the Body Image; visuo-spatial, configural knowledge is processed within the Body Structural Representation (hereafter BSR); and sensory-motor information within the Body Schema (hereafter BS; see Corradi Dell'Acqua & Rumiati, 2007, for a review). Many authors (e.g., Buxbaum & Coslett, 2001; Reed et al., 2003; Schwoebel & Coslett, 2005; Sirigu et al., 1991) proposed that these representations differ according to their formats (e.g., sensorimotor, visuo-spatial, and lexical-semantic), the body of reference (e.g., one's own body vs. the human body in general), the duration of the representations (e.g., short-term vs. long-term), and the modality through which the representations are accessed (conscious vs. unconscious). These representations are partially independent in the human adult (see Berlucchi & Aglioti, 1997; Boccia et al., 2020; Schwoebel & Coslett, 2005), even though Body Image (i.e., lexical-semantic knowledge) and BSR (i.e., visuo-spatial knowledge) developmentally share the same sensorimotor background (i.e., the BS) experienced since birth (Buxbaum & Coslett, 2001; Paillard, 1987; Paillard et al., 1983; Pitron et al., 2018; Schwoebel & Coslett, 2005; Sirigu et al., 1991; Tsakiris, 2010).
Significantly, they differ according to the level of consciousness and format: Body Image is conceptual and explicit (i.e., individuals are conscious of possessing knowledge about the body); BSR is conceptual and mainly implicit (i.e., mainly unconscious); whereas BS is perceptual and mainly implicit (Longo, 2015). However, they are all represented configurationally, and bodies are automatically and mandatorily recognized only if their constituents are assembled physiologically (e.g., Ottoboni et al., 2005; Reed et al., 2003; Urgesi et al., 2007). Such a configural and holistic representation of the body entails the head, shoulders, knees, and feet being located along a vertical plane, which runs along the direction of gravity, perpendicular to one of the axes transecting the body, the axial plane. Moreover, the holistic representation implies that the front and the back of the body are asymmetrically divided by the coronal (frontal/back) plane. The constraints these two planes generate are deeply wired into the human mind (Myachykov et al., 2014) and have a role in directing attention towards the locations of space where faces are directed (Mareschal et al., 2013). Even though the BS and its related embodiment-based processing play a crucial role when explicit, conscious, and aware coding of body parts is carried out (Lara, 2017; Myachykov et al., 2014; Zhou et al., 2017), their contribution to the implicit, unaware, and automatic coding of body parts is not clear. Moreover, despite the many investigations of semantic and sensory-motor processing, the BSR remains the least investigated representation (see Longo, 2015, for an analogous criticism). The BSR is supposed to be a visuo-spatial body representation that works like a map in which each body part is located according to the natural physical constraints of the human body.
Regarding the role of the visual and spatial components in developing such a representation, studies with blind people suggest that they possess an internal representation of the human body, even though it is impoverished and distorted (Kinsbourne & Lempert, 1980). Moreover, activation in the visual Extrastriate Body Area (more related to the processing of single body parts), but not in the Fusiform Body Area (more related to full-body configurations), was found in congenitally blind people while performing body-related tasks, suggesting a lack of experience with full-body configurations compared to the perception of individual body parts (Striem-Amit & Amedi, 2014). These results suggest an essential role for both the visual and the spatial components, as tactile-kinesthetic-proprioceptive information can feed the spatial components but cannot entirely compensate for the lack of visual experience. Unlike the BS, which dynamically updates information about where one's body parts currently are in space, the BSR is a representation of a 'standard human body' (Corradi-Dell'Acqua et al., 2009), and it is also involved when an observer needs to consider a body part explicitly (e.g., fingers; Dolgilevica et al., 2020; Romano et al., 2017, 2019; Tamè et al., 2017) or implicitly (e.g., hands or feet; Ottoboni et al., 2005; Tessari et al., 2012a). The way the BSR enters the process of body part coding suggests that it is more dynamic than initially supposed and that the coding of a body part is modulated by its posture (Ottoboni et al., 2005; Tessari et al., 2010, 2012a). For example, given sufficient visual cues accompanying a hand stimulus (such as the attached forearm), the hand is automatically represented within an entire human body, following the vertical plane (i.e., the head above and the feet below) and the coronal/frontal plane (i.e., face in front and back behind).
Incorporating the body part within the whole body is only possible if lawful anatomo-physiological connections between body parts and the entire body are respected: when hands/feet are displayed without the forearm/ankle, or when the constraints are modified and the parts depicted in a non-human way, no automatic body part coding takes place (Ottoboni et al., 2005; Tessari et al., 2010, 2012a). In those cases, body parts cannot be envisaged as belonging to a standard human body (the BSR). These results are consistent with other studies investigating explicit judgements of hands (Gentilucci et al., 1998, 2000). Such an ability to represent the body according to its anatomo-physiological constraints seems to develop early in life. Using the preferential looking paradigm, Slaughter et al. (2004) demonstrated that 12-month-old infants do not detect violations in pictorial illustrations of the body with scrambled body parts, but 18-month-old infants do.

The current study

This study has a twofold goal. First, it aims to investigate whether, as well as specifying the standard anatomical body structure, the BSR also specifies standard 'comfortable' postures, and hand-arm postures specifically. Second, it explores the extent to which automatic access to the BSR may be affected by the body-related kinaesthetic information processed by the BS. To this end, we used an implicit, pre-attentive paradigm developed to test the BSR (i.e., the sidedness task; Ottoboni et al., 2005; Tessari et al., 2012a). In this task, participants perform a colour judgement (i.e., they judge the colour of a circle superimposed on the picture of a body part) by giving spatially lateralized manual responses, and are not required to process the task-irrelevant body stimuli (body parts such as hands or feet) explicitly. However, responses show a spatial compatibility effect between the side of the response and the spatial position of the body part (the task-irrelevant stimulus) within an envisioned human body (the BSR). These results can only be explained by assuming that participants automatically and unconsciously/implicitly complete each body part with the corresponding body, even if the body part is irrelevant to the colour judgement task. This generates an attentional shift towards the side of space where the part of the body is located. In turn, this spatial attentional shift facilitates responses given on the same side of space, as in other spatial compatibility tasks (e.g., Rubichi et al., 1997), because participants align their body midline with the midline of the envisioned body map (i.e., the BSR), according to observer-centred coordinates, as in other spatial compatibility effects based on biological stimuli (Ansorge & Wühr, 2004; Zorzi et al., 2003; see Lameira et al., 2015 for similar results with hands). See Figure 1 for a graphic explanation of the effect.
Notably, the sidedness effect follows both the vertical momentum and the coronal/frontal plane described above (Myachykov et al., 2014). For example, an upright hand in the palm view elicits a body that is facing towards the observer (frontal perspective of the BSR), whereas an upright hand depicted in the back view elicits the representation of a body that is facing away from the observer (back perspective of the BSR). Thus, for example, observing a red circle superimposed on the palm view of a left hand with fingers pointing upward would generate an attentional shift to the right side of space, and responses to red would be faster when participants respond with their right hand. In summary, the sidedness effect originates from the interplay between two spatial codes: an allocentric spatial code generated by the body-part stimulus's position within the representation of the human body (the BSR), and an egocentric spatial code based on its position according to the observer's point of view.
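The interplay of the two codes just described can be summarized in a short sketch (our illustration for upright hand stimuli only, not part of the study's materials; the function name and string encoding are ours):

```python
def sidedness_code(hand: str, view: str) -> str:
    """Return the side of space ('left'/'right') toward which attention is
    predicted to shift for an upright hand stimulus. A palm-view hand implies
    a body facing the observer, so its anatomical side is mirrored from the
    observer's viewpoint; a back-view hand implies a body facing away, so the
    anatomical side and the viewed side coincide."""
    assert hand in ("left", "right") and view in ("palm", "back")
    if view == "back":
        return hand  # body faces away: anatomical and viewed sides coincide
    return "right" if hand == "left" else "left"  # palm view: sides mirror
```

Under this mapping, a lateralized response is predicted to be facilitated when it falls on the side the function returns, regardless of the stimulus hand's anatomical side.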
FIGURE 1

A graphic representation of sidedness and the frames of reference involved. Here, the palm view of a left hand generates an attentional shift (the red arrow) towards the right (i.e., it generates a spatial right code, being on the right side of space from the observer's perspective/point of view) as it is attached to a human body facing the observer, that is, a frontal view of the BSR, the allocentric representation of the human body. An upright hand depicted from the back view elicits the representation of a body facing away from the observer (back perspective of the BSR) and, according to the observer's perspective, it generates an attentional shift (the red arrow) towards the left side of space

This paradigm is helpful because, by manipulating the anatomo-physiological features and plausibility of the body part stimuli, we can gather information on the abstract and supra-dimensional representation of the BSR. For example, Ottoboni et al. (2005) showed that the sidedness effect disappears when body parts, such as hands, are severed at the wrist level, suggesting that the presence of an anatomical link (i.e., the forearm) is necessary to mentally integrate a body part (e.g., the seen hand) with a body representation (see Tessari et al., 2012a for similar results with feet; Gentilucci et al., 1998, 2000, for similar results with hands). The sidedness effect was also observed in a full-body configuration, but only when the arms were attached to the body in an anatomically and biomechanically plausible way (e.g., a right hand was connected to the body through a right limb but not a left one; Tessari et al., 2010; see Slaughter et al., 2004, for similar results with infants using a different paradigm). Moreover, these results support the idea that the BSR is automatically accessed in a configural way and not on the basis of single body parts (see also Reed et al., 2003, 2006).
As regards the first aim of this study, we manipulated the stimulus posture to investigate whether ipsilateral arm postures (Figure 2a,c,d) are preferred within the BSR to others where the arms are, for example, crossed (Figure 2b). If this is the case, we predict new sidedness relations for, for example, hand-forearm stimuli with fingers pointing downwards: a comfortable hand-arm posture can be maintained for these new stimuli by merely lowering the arms (i.e., the arms remain ipsilateral and pronated; see Figure 2c), but the sidedness relations then reverse.
FIGURE 2

The Sidedness effect suggests that the ipsilateral arm posture can be derived from different views by rotating the representation of the body by 180 degrees around the vertical plane (the vertical momentum is maintained) to reach a frontal or a back position (the coronal plane). This process results in different views of the same arm posture. Some of them (a, c, and d) follow the most comfortable anatomo-physiological constraints of the human body, as we hypothesized in this paper, and generate different sidedness relations with respect to the observer's point of view (see Figure 1). On the contrary, postures with arms crossing the body midline, as in b, do not satisfy this requirement and were discarded in previous studies (Ottoboni et al., 2005; Tessari et al., 2010)

FIGURE 3

Panel (a) displays the experimental setting. The stimuli were photographs of right and left hands displayed from either the back or the palm view, with a superimposed coloured circle. Participants had to respond to the colour of the circle by pressing one of two lateralized response keys. According to the spatial code generated by sidedness (as displayed in Figure 1), spatially lateralized responses could be facilitated or not. Ai) shows an example of a non-corresponding pairing between side of response (left) and stimulus handedness (right); however, RTs are fastest because the side of the response (left) corresponds to the side of the hand with respect to its sidedness (a right hand in a palm view is connected to a body of reference and lies on the left side). Aii) shows another example of a non-corresponding pairing between side of response (right) and stimulus handedness (left); however, RTs are longest because the side of the response (right) does not correspond to the side of the hand with respect to its sidedness (a left hand in a back view is connected to a body of reference and lies on the left side). Panel (b) displays the experimental timeline

FIGURE 4

The figure displays the results. The top row shows the experiments in which the posture of the stimulus hand was manipulated: panel (a) shows the mean reaction times (RTs) in condition a, and panel (b) those in condition b. The bottom row shows the experiments in which the posture of the response position was manipulated: panel (c) shows the mean RTs in condition c, panel (d) those in condition d, and panel (e) those in condition e. Data are reported as a function of the stimulus-response (handedness) pairing correspondence and the views of the hands. Bars represent standard errors

Regarding the second aim, the reader should remember that the BSR is not the only representation that allows people to experience their 'selves' as a unitary entity while accomplishing everyday duties, among which are body-related tasks. The literature on explicit body part recognition (i.e., tasks requiring one to judge whether, for example, a hand is a left or a right one; Candini et al., 2016; Conson et al., 2017; Parsons, 1994) indicates that people perform mental movements in which their own hand is brought to overlap the stimulus hand, and that such movements become more difficult as a function of the angular disparity between the other's and the observer's hand, that is, between the posture of the hand to be judged and the observer's own hand (see also Böffel & Müsseler, 2018). As a consequence of this mental imagery mechanism, a greater advantage emerges when people judge the laterality of pictures representing their own hands (Frassinetti et al., 2011; Shmuelof & Zohary, 2008; Vainio & Mustonen, 2011). It has been assumed that body parts are mandatorily mapped onto the observer's motor system in a first-person perspective, relying on the BS. Likewise, the literature on explicit tasks based on the BSR (such as Dolgilevica et al., 2020, and Tamè et al., 2017, who used the in-between test) suggests that the body's anatomical and external spatial reference frames are integrated when locating touch on the fingers, and that the body representations might dynamically interact. Thus, we reasoned that a limitation of previous sidedness studies was that they did not manipulate the participants' response posture to investigate the role of self-mapping and the BS in the covert and implicit processing of body parts.
In previous studies using the sidedness paradigm, an advantage of one view (back or palm) over the other has never been found (e.g., no advantage for back-view hand stimuli when responding in a compatible posture, such as with the hands on a keyboard, backs up; Ottoboni et al., 2005; Tessari et al., 2010, 2012b). We can speculate that the proprioceptive response posture was not salient enough, as participants always responded in a very comfortable and canonical posture (hands on a table, palms down). Therefore, we modified the response modality to make it proprioceptively more difficult and, thus, salient. We reasoned that if response posture (i.e., the BS) plays a role when the BSR is accessed unconsciously, automatically, and implicitly, then a self-mapping advantage should emerge, in line with explicit laterality judgements (e.g., Candini et al., 2016; Conson et al., 2017; Frassinetti et al., 2011; Parsons, 1994).

Experiment

As described above, we evaluated the effects of the implicit, covert, and automatic processing of a body part (the hand/forearm) using the sidedness task, where the main task is to judge the colour of a circle superimposed on each picture, regardless of the body part. If coding the spatial features of the body part is framed within a mental body structure (i.e., the BSR), then coding should be modulated as a function of (a) the anatomo-physiological constraints of the human body; (b) the least awkward way of moving the limbs; and (c) the frontal/back plane. However, if the BS also influences automatic and implicit body part coding, as it does explicit body part processing (e.g., in handedness recognition and hand laterality judgement; de Lange et al., 2008; Funk & Brugger, 2007; Lameira et al., 2008; Parsons, 1994; Petit & Harris, 2005; or the in-between test, Dolgilevica et al., 2020; Tamè et al., 2017), then the proprioceptive features of one's own corresponding body part should favour view-related priming effects, and observers should code handedness by cognitively binding the stimulus's visual features to the proprioceptive features of their own corresponding body part, elaborated through the BS (Ionta & Blanke, 2009; Schwoebel et al., 2001). In conditions a and b, the posture of the presented body parts (i.e., the whole hand/forearm) was manipulated: in condition a, the fingers pointed downwards, as if the arm were stretched straight at the side of the body; in condition b, the fingers pointed laterally and the forearm lay horizontally. In conditions c, d, and e, we manipulated the proprioceptive salience of the response posture by increasing its kinematic difficulty while keeping the body part's laterality task-irrelevant: in condition c, participants responded by pressing upside-down keys with palms upwards; in condition d, participants responded on keys placed behind their backs; in condition e, participants responded with their feet.
If implicit body part coding is modulated by the anatomo-physiological constraints of the human body, the awkwardness of a movement, and the frontal/back plane, then we predict that for hand/forearms with fingers pointing downward (condition a), an ipsilateral, pronated arm posture, obtained by merely lowering the arms, should be preferred (see Figure 2c). For condition b, we reasoned that the most comfortable position in which a horizontal hand/forearm seen from the back can be imagined is one in which it is connected to an arm bent at the elbow and positioned towards the abdomen or behind the back (see Figure 2d). We then investigated whether the BS also plays a role in the implicit coding of body parts by forcing participants to use unusual response postures while presenting hands with upright fingers, as in the literature (e.g., Ottoboni et al., 2005; Tessari et al., 2010). In condition c, pressing keys upwards could prioritize the (proprioceptive) representation of the participant's own palm view and produce a palm-related priming effect in our task; on the other hand, an unusual behind-the-back response should prioritize the (proprioceptive) representation of the back of the participant's hand and generate a back-related priming effect (condition d). However, suppose implicit body part coding in the sidedness task is based only on access to the BSR. In that case, the results should not be affected by the response manipulation and should reproduce the typically reported pattern, based on the attentional shift generated by representing each body part attached to a whole-body configuration (Ottoboni et al., 2005; Tessari et al., 2010). Finally, in condition e, where responses were given with the feet pressing a pedal, if the BS plays a role we expect no effect, as there is no overlap between the effector shown as a stimulus and the one used to respond. On the contrary, if the spatial code is body-part related, a sidedness effect should arise based on the stimulus posture.

METHOD

Participants

Seventy right-handed participants were recruited. Fourteen participants (14 females; mean age = 22.75 years, SD = 3.27) took part in condition a; 14 participants took part in condition b (9 females; mean age = 23.53 years, SD = 2.11); 14 participants took part in condition c (6 females; mean age = 24.37, SD = 8.84); 14 participants in condition d (6 females; mean age = 24.37, SD = 8.84); and 14 participants in condition e (6 females; mean age = 24.37, SD = 8.84). They all attended courses at an Italian university and received no course credit for participating in the experiment. All participants reported normal or corrected-to-normal vision and were right-handed according to the Edinburgh Handedness Inventory (Oldfield, 1971). A power analysis was conducted on the view × correspondence interaction based on Ottoboni et al. (2005; Experiment 2B, where the same stimuli were used). We used G*Power (Faul et al., 2007): with a Cohen's f = 1.68 (calculated for the view × correspondence interaction; ηp² = 0.577), alpha = 0.05, and power = 0.80, only 4 participants would be needed. However, we decided to collect a larger sample, to have a number of participants similar to previous literature.
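For reference, Cohen's f can be derived from partial eta-squared via f = sqrt(ηp² / (1 − ηp²)). A minimal sketch of that conversion (our illustration, not the authors' analysis script; note that G*Power offers several effect-size conventions for repeated-measures designs, so values computed this way need not match a particular G*Power setting):

```python
import math

def cohens_f_from_partial_eta_sq(eta_sq: float) -> float:
    """Convert partial eta-squared to Cohen's f: f = sqrt(eta^2 / (1 - eta^2))."""
    if not 0.0 <= eta_sq < 1.0:
        raise ValueError("partial eta-squared must lie in [0, 1)")
    return math.sqrt(eta_sq / (1.0 - eta_sq))

# Effect size reported for the view x correspondence interaction
f = cohens_f_from_partial_eta_sq(0.577)
```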

Materials and procedure

The stimuli consisted of photographs of a hand and forearm with the fingers pointing downwards (condition a; Figure 2c), pointing laterally (condition b; Figure 2d), or pointing upright (conditions c, d, and e; Figure 2a), displayed from either a back or a palm view. Each photograph had either a blue or a red circle superimposed over the centre of the hand. The stimuli were obtained by rotating those used in experimental condition 'a' of Ottoboni et al. (2005, Experiment 2B) upside-down or horizontally. They were presented randomly in two separate blocks, one for each hand view (i.e., palm and back view). The order of the two blocks was counterbalanced across participants, and each block was composed of 120 stimuli (240 in total). Each stimulus (left or right hand, from the palm or back view) was presented 60 times: with a red circle superimposed over the hand 30 times, and a blue circle 30 times. The block design was used to minimize learning and transfer bias and to allow comparison with the results obtained in previous studies. Participants were seated in a dimly lit room approximately 60 cm away from a 15", 100 Hz screen connected to a desktop computer with an integrated video card. Stimulus presentation and response collection were controlled using E-Prime 1.1 (Schneider et al., 2012). The photographs of the hands measured 23 × 9 degrees of visual angle on the screen, while the circle was 4 degrees wide. The experiment began with brief instructions. In conditions a and b, participants were required to judge the colour of the circle as quickly and accurately as possible, pressing one of two response keys positioned on the table holding the computer screen. As described above, the hand stimulus in the background was task-irrelevant.
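The block structure described above (two view blocks of 120 trials each; within a block, each hand appears 60 times, 30 per circle colour) can be sketched as follows (a hypothetical reconstruction; the function and field names are ours, not from the original E-Prime script):

```python
import random

def build_block(view: str, seed: int = 0) -> list:
    """Build one 120-trial block for a given hand view ('palm' or 'back'):
    2 stimulus hands x 2 circle colours x 30 repetitions, in random order."""
    trials = [
        {"view": view, "hand": hand, "colour": colour}
        for hand in ("left", "right")
        for colour in ("red", "blue")
        for _ in range(30)
    ]
    random.Random(seed).shuffle(trials)  # randomize trial order within the block
    return trials

# Block order (palm first vs. back first) was counterbalanced across participants
session = build_block("palm") + build_block("back")
```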
In condition c, the keyboard was fixed upside-down underneath the table supporting the computer monitor, and participants held their hands palms up (the response keys were those at the extremes of the keyboard, the 'Ctrl' key and the 'Enter' key). In condition d, participants responded by pressing downwards on one of two response keys placed behind their backs. Again, the response keys were 'Ctrl' and 'Enter', and special care was taken when positioning each participant to minimize the discomfort of the behind-the-back response posture. Finally, in condition e, participants responded by pressing one of two pedals fixed on a square wooden plank (70 × 70 cm) placed under the computer table. Half of the participants were required to press the left key if the circle was red and the right key if the circle was blue; the other half were assigned the opposite colour-response mapping. Each trial began with a central fixation cross visible for 1000 ms, which was then replaced by the hand-circle stimulus. The stimulus (the body part with the superimposed coloured circle) remained on the screen for 100 ms. After stimulus offset, participants could give their colour-dependent response within a window of 1000 ms. Performance feedback was then displayed on the screen for 1500 ms, indicating the latency and accuracy of the response. See Figure 3 for a visual explanation of the task and the timeline.
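The trial timeline just described can be summarized as a simple event list (a hypothetical sketch; the original experiment was run in E-Prime 1.1, so the names and structure here are ours):

```python
# Each entry is (event_name, duration_ms), in presentation order.
TRIAL_TIMELINE_MS = [
    ("fixation_cross", 1000),   # central fixation cross
    ("hand_plus_circle", 100),  # task-irrelevant hand with superimposed circle
    ("response_window", 1000),  # colour response allowed after stimulus offset
    ("feedback", 1500),         # latency and accuracy feedback
]

def trial_duration_ms(timeline=TRIAL_TIMELINE_MS) -> int:
    """Total duration of one trial in milliseconds."""
    return sum(duration for _, duration in timeline)
```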

Statistical analysis

We conducted a 2 × 2 repeated-measures Analysis of Variance (ANOVA) with view (back vs. palm) and correspondence (corresponding vs. non-corresponding pairings) as within-subject factors, on both reaction times (RTs) and accuracy, using SPSS 25 (IBM). Correspondence was defined on the basis of stimulus handedness (i.e., right or left hand) and response side. Corresponding pairings were those in which the responding hand matched the handedness of the hand stimulus (e.g., left-hand response with left-hand stimulus; right-hand response with right-hand stimulus); non-corresponding pairings were those in which it did not (e.g., left-hand response with right-hand stimulus; right-hand response with left-hand stimulus). Note that this definition of correspondence is based on handedness and not on 'sidedness': the latter depends on the spatial code generated by the position of a body part within the BSR, can vary according to the criteria defined above (the anatomo-physiological constraints, the least awkward way of moving the limbs, and the frontal/back plane), and could not be defined a priori as it was the object of investigation. Significant interactions were analysed using Bonferroni-corrected t-tests (p < .025), as we were interested in only two comparisons: corresponding vs. non-corresponding pairings for the palm view and for the back view. One-tailed t-tests were used as we had a priori hypotheses about the direction of the corresponding versus non-corresponding pairing differences. RTs more than two standard deviations above or below each participant's mean were excluded, and the RT analysis was performed on correct responses only. We aimed at detecting the emergence of the sidedness effect, revealed by a significant interaction between the view and correspondence factors (see Ottoboni et al., 2005, for the original definition).
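The outlier-exclusion step can be sketched as follows (our illustration, not the authors' SPSS procedure; single-pass trimming is assumed, as the paper does not specify whether the criterion was applied iteratively):

```python
import statistics

def trim_rts(rts, n_sd: float = 2.0):
    """Return the RTs within n_sd standard deviations of the participant's
    mean, excluding the rest (single-pass trimming assumed)."""
    mean = statistics.mean(rts)
    sd = statistics.stdev(rts)
    return [rt for rt in rts if abs(rt - mean) <= n_sd * sd]
```

For example, a participant with RTs of roughly 390-410 ms and one stray 1200 ms response would have the stray value excluded before averaging.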

RESULTS

RTs in condition a did not differ across correspondence (F(1, 13) = 2.05, p = .176, ηp² = .136) or between the two views (F(1, 13) = 1.22, p = .289, ηp² = .086). The two-way correspondence × view interaction was significant (F(1, 13) = 18.95, p < .001, ηp² = .593). One-tailed paired-samples t-tests indicated that the difference between the corresponding and non-corresponding pairings was significant for the back view (t(13) = 2.68, p = .009, D = .019; corresponding pairings, M = 395.95, SE = 14.40; non-corresponding pairings, M = 378.75, SE = 13.81) and marginal for the palm view (t(13) = −1.75, p = .05, D = .103; corresponding pairings, M = 377.18, SE = 12.41; non-corresponding pairings, M = 382.18, SE = 12.93; see Figure 4A). Regarding accuracy, neither the view (F(1, 13) = 1.18, p = .297, ηp² = .083) nor the correspondence factor (F(1, 13) = .008, p = .929, ηp² = .001) was significant, but the interaction was (F(1, 13) = 21.21, p < .001, ηp² = .620): corresponding pairings (M = 56.93, SE = 0.34) were less accurate than non-corresponding ones (M = 58.50, SE = 0.42) for the back view (one-tailed t(13) = 4.07, p < .001, D = 1.74), whereas corresponding pairings (M = 58.93, SE = 0.32) were marginally more accurate than non-corresponding ones (M = 57.43, SE = 0.50) for the palm view (one-tailed t(13) = 1.82, p = .04, D = .49).

Figure 4. Mean reaction times (RTs). Top row: conditions in which the posture of the stimulus hand was manipulated; panel (a) shows condition a and panel (b) condition b. Bottom row: conditions in which the response posture was manipulated; panel (c) shows condition c, panel (d) condition d, and panel (e) condition e. Data are reported as a function of the stimulus–response (handedness) pairing correspondence and the views of the hands. Bars represent standard errors.

In condition b, RTs did not differ between corresponding and non-corresponding pairings (correspondence: F(1, 13) = .11, p = .752, ηp² = .008) or between the two views (view: F(1, 13) = .36, p = .559, ηp² = .027). The two-way interaction, however, was significant (F(1, 13) = 16.10, p = .001, ηp² = .553). One-tailed paired-samples t-tests indicated that the differences between the corresponding and non-corresponding pairings were significant for both views (back view: t(13) = 2.51, p = .013, D = .670; palm view: t(13) = −4.15, p < .001, D = .650). RTs for corresponding pairings were slower (M = 350.70, SE = 7.54) than for non-corresponding ones (M = 339.44, SE = 6.52) in the back view, and the opposite held for the palm view (corresponding pairings: M = 332.87, SE = 8.33; non-corresponding pairings: M = 345.69, SE = 9.66; see Figure 4B). Accuracy showed a similar pattern: neither the view (F(1, 13) = .221, p = .646, ηp² = .017) nor the correspondence factor (F(1, 13) = .869, p = .368, ηp² = .063) was significant, but the interaction was (F(1, 13) = 16.736, p = .001, ηp² = .563). Accuracy was lower for corresponding (M = 55.71, SE = 0.507) than for non-corresponding pairings (M = 57.36, SE = 0.440) in the back view (one-tailed t(13) = 2.79, p = .007, D = .967), but the opposite pattern emerged for the palm view (one-tailed t(13) = 3.62, p = .001, D = .939; corresponding pairings, M = 58, SE = 0.555; non-corresponding pairings, M = 55.64, SE = 0.716). In condition c, mean RTs did not differ between the two views (view: F(1, 13) = .058, p = .813, ηp² = .004) or between pairings (correspondence: F(1, 13) = .350, p = .56, ηp² = .026). Only the view × correspondence interaction was significant (F(1, 13) = 37.303, p < .001, ηp² = .742; see Figure 4C).
One-tailed paired-samples t-tests with Bonferroni correction revealed significant differences between the corresponding and non-corresponding pairings for both the back view (t(13) = −4.97, p < .001, D = −1.28; corresponding pairings, M = 380.49, SE = 8.52; non-corresponding pairings, M = 396.11, SE = 9.71) and the palm view (t(13) = 3.22, p = .003, D = −1.93; corresponding pairings, M = 392.99, SE = 7.48; non-corresponding pairings, M = 380.60, SE = 8.63). The accuracy analysis showed that the view factor was not significant (F(1, 13) = .880, p = .365, ηp² = .063), but correspondence (F(1, 13) = 8.154, p = .014, ηp² = .385) and view × correspondence (F(1, 13) = 5.460, p = .036, ηp² = .296) were. Corresponding pairings were more accurate than non-corresponding ones for the back view (t(13) = 3.426, p = .002, D = .916; corresponding pairings, M = 55.79, SE = 1.26; non-corresponding pairings, M = 53.36, SE = 1.21); on the contrary, no difference emerged for the palm view (t(13) = .114, p = .45, D = .031; corresponding pairings, M = 55.64, SE = 0.83; non-corresponding pairings, M = 55.64, SE = 1.03). In condition d, no differences emerged between the two views (view: F(1, 13) = .163, p = .69, ηp² = .012) or between pairings (correspondence: F(1, 13) = .845, p = .375, ηp² = .061). The view × correspondence interaction was significant (F(1, 13) = 9.36, p = .009, ηp² = .419; see Figure 4D). One-tailed paired-samples t-tests revealed significant differences between the corresponding and non-corresponding pairings for both the back view (t(13) = −3.58, p = .001; corresponding pairings, M = 372.32, SE = 14.45; non-corresponding pairings, M = 387.26, SE = 16.39) and the palm view (t(13) = 2.01, p = .03; corresponding pairings, M = 384.02, SE = 15.00; non-corresponding pairings, M = 372.33, SE = 11.07).
The accuracy analysis revealed a similar pattern, with no significant view (F(1, 13) = .391, p = .542, ηp² = .029) or correspondence (F(1, 13) = .355, p = .562, ηp² = .027) factors, but a significant view × correspondence interaction (F(1, 13) = 7.495, p = .017, ηp² = .366). Corresponding pairings (M = 58.07, SE = 0.51) were marginally more accurate than non-corresponding ones (M = 56.64, SE = 0.73) for the back view (t(13) = 1.810, p = .046, D = .484), and the opposite pattern held marginally for the palm view (t(13) = 1.992, p = .034, D = .532; corresponding pairings, M = 57.14, SE = 0.48; non-corresponding pairings, M = 58.00, SE = 0.42). In condition e, neither the view (F(1, 13) = .256, p = .621, ηp² = .019) nor the correspondence factor (F(1, 13) = 3.938, p = .069, ηp² = .233) was significant, but their interaction was (F(1, 13) = 8.842, p = .011, ηp² = .405; see Figure 4E). One-tailed paired-samples t-tests revealed a significant difference between the corresponding (M = 390.49, SE = 8.42) and non-corresponding pairings (M = 397.28, SE = 12.75) for the palm view (t(13) = −5.464, p < .001, D = .302) but not for the back view (t(13) = 1.198, p = .126, D = 1.460). In the accuracy analysis, neither the factors nor their interaction was significant: view, F(1, 13) = .455, p = .512, ηp² = .034; correspondence, F(1, 13) = .379, p = .549, ηp² = .029; view × correspondence, F(1, 13) = .112, p = .743, ηp² = .009. The lack of significance in this last analysis might reflect insufficient statistical power for this experimental manipulation, tested here for the first time (responses given with the feet).

GENERAL DISCUSSION

Human bodies are unique objects: they can be processed using internal cues (i.e., proprioceptive feedback from the BS), but, particularly in social contexts, they are also processed as external to the observer and standing alone (i.e., as a standard human body, the BSR). The distinction between internal and external processing mechanisms appears complex because people tend to use their own body as a frame of reference to code external stimuli (e.g., Rizzolatti & Craighero, 2004). One instance of this mechanism can be seen when judging whether a hand is a right or a left hand (e.g., Candini et al., 2016; Conson et al., 2017; Parsons, 1994). However, in daily life humans observe others’ bodies without any explicit intent to judge body parts or perform a cognitive task that requires processing them. For example, when observing an action, an observer attends to the body parts’ spatial features to anticipate correct reactions (Ottoboni et al., 2015; Tessari et al., 2012b, 2021), not to judge the body parts themselves. We therefore tested body part coding with an implicit task (the sidedness task; Ottoboni et al., 2005) that does not require participants/observers to process body parts consciously in order to perform it. We manipulated: (a) the posture of the shown body parts, to gain new knowledge about the BSR; and (b) the participants’ posture, to understand whether the BS can come into play as it does in explicit tasks on body parts. The results delineated a process of implicit body part coding that is sensitive to the stimulus’ view-related features but not to the participants’ posture. In condition a, responses to a hand/forearm with fingers pointing down, as if the arms were lying relaxed along the body, showed an RT pattern that seems to be the opposite of the one reported with the same, albeit upright, pictures of hands (Ottoboni et al., 2005; Tessari et al., 2010, 2012b). However, this result pattern aligns with the anatomical posture shown in Figure 2c.
The interaction displayed in Figure 4A indicates that colour judgements were slowed when the handedness of the body stimulus seen from the back did not match the side of response; conversely, they were faster when the response side matched the hand/forearm's handedness in the palm view. These results suggest that the spatial code each hand/forearm generates induces an attentional shift (the one originating the sidedness effect in this kind of Simon-like paradigm) only once it is mentally connected to a body representation that provides a spatial frame of reference for each hand/forearm (see Introduction and Figure 1). This body representation (the BSR) implicitly serves to frame each hand/forearm spatially according to anatomo-physiologically driven constraints connecting any body part to the trunk (Ottoboni et al., 2005; Tessari et al., 2010, 2012a; see also Di Vita et al., 2016; Ionta et al., 2012; Reed & Farah, 1995; Reed et al., 2003), relying on the most comfortable, least awkward anatomo-physiological bodily arrangements. This anatomo-physiologically guided logic fits condition a's results: for hands viewed from the back with fingers pointing downwards, the body of reference is represented as facing towards the observer, while hands seen from the palm view elicit a body facing away from the observer (see Figure 2c). The assumption of the least awkward physiological posture also explains the results in condition b. The hand/forearm stimulus is connected to the body with an arm bent at the elbow and directed towards the centre of the body: hands seen from the back view as if they were in front of the abdomen, and hands seen from the palm view as if they were resting behind the back (see Figure 2d). Indeed, movements towards the centre of the body are the easiest to perform after those in the supinated and pronated positions at the body's side.
In the case of the hand/forearm with fingers pointing upwards used in previous studies (e.g., Ottoboni et al., 2005; Tessari et al., 2010, 2012a), hands viewed from the back were coded once they had been represented as part of a body facing away from the observer; hands seen from the palm, on the contrary, were coded as part of a body represented as facing towards the observer (see Figure 2a). Results from conditions a and b extend our knowledge, as they shift the theoretical focus from the coding of the hand towards the coding of body links. Indeed, following the hand-attached-to-body logic, the spatial codes generated by the hand/forearm are crucially related to the position of the entire arm in relation to the body (e.g., on the left or right side of the trunk) and not to the spatial position of the hands in relation to the body (e.g., on the left of the body, in front of the body); were the latter the case, we would have found null results in condition b, as the hands lay in front of or behind the body. Romano et al. (2017, 2019) also provided evidence for standard finger/hand postures in an explicit spatial judgement task respecting the prono-supination of the hand, and suggested that this might extend to the overall body posture. We also manipulated participants’ response postures to increase their cognitive weight: by increasing the proprioceptive salience of a response posture (in conditions c, d, and e), the corresponding stimulus posture (i.e., view) might have been primed, as in explicit tasks for body part processing (e.g., Dolgilevica et al., 2020; Ionta & Blanke, 2009; Ionta et al., 2007; Schwoebel et al., 2001; Tamè et al., 2017). View-related priming effects should have emerged in conditions c and d (e.g., in condition c, responding with the palm up should have induced faster RTs to palm view stimuli).
However, we did not find a significant view factor (i.e., no facilitation for either the palm or the back view), and the typical sidedness effect (a significant view × correspondence interaction) arose in both conditions c and d, contrary to results obtained in tasks relying on the BS, such as the hand laterality judgement (e.g., Ionta & Blanke, 2009; Ionta et al., 2007). Thus, the results do not support proprioceptive mechanisms based on the BS when body parts are only covertly and implicitly analysed. Finally, a typical sidedness effect (a significant view × correspondence interaction) also emerged in condition e, where participants responded with their feet. If the BS played a role, we would have expected no effect, as there would have been no match between the response effector and the shown body part. Together, these results support the spatial body part coding hypothesis based upon the visuo-spatial analysis of stimuli within the BSR (Ottoboni et al., 2005; Tessari et al., 2010, 2012a for feet). To summarize, the results revealed a sidedness effect (i.e., a spatial compatibility effect) based on both the hand/arm position within a reference body structure (i.e., the BSR) and its posture (e.g., arms lying at the side of the body or horizontally in front of or behind it). Each hand/arm was spatially coded within a body-based spatial frame of reference by (a) following the physiological constraints of the human body, (b) taking advantage of the least awkward way of moving the limbs, and (c) specifying a standard anatomical body structure in which the arm is laterally represented.
The results also expand our knowledge of the BSR: not only is it a universal representation of the human body that stores offline visuo-spatial knowledge about the morphometry of the human body (i.e., the metric and spatial positions of human body parts; Buxbaum & Coslett, 2001; Sirigu et al., 1991; Tessari et al., 2010; Corradi Dell’Acqua & Tessari, 2010), but it is also enriched by biomechanical and anatomo-physiological information about the real body's movement possibilities (Ottoboni et al., 2005; Tessari et al., 2010, 2012a), and at least some preferential standard anatomical postures of the body can be automatically represented when human body parts are seen (such as those shown in Figures 2, 3 and 4d, but not, for example, others with crossed limbs, Figure 2b). The last point also implies that the cognitive system can flexibly reorganize how human bodies are spatially represented. Moreover, the results are in line with the vertical plane (e.g., the head is on top of the shoulders and at the opposite end of the body to the feet) and the coronal plane (e.g., the nose and the mouth are represented on the front surface of the body, and the calves behind; Kessler & Rutherford, 2010). The vertical and coronal planes are biologically hard-wired in both the BSR and the BS (i.e., they are independent of whether the body is represented externally or internally). However, the results (condition b in particular) are challenging with respect to left/right segregation along the sagittal plane: no mono-directional cue is available to place body parts on the right or the left side of the body. Moreover, the chiral character of some body parts (i.e., the fact that left and right body parts do not match when overlapped) is dynamically represented and adaptably changes according to the body part's posture and the biomechanical and anatomo-physiological constraints, interacting with the coronal plane.
Importantly, the results support the existence of preferential standard representations of the relationships between body and space, as already suggested for the hand and fingers by Romano et al. (2017): they proposed a standard representation of the hand with the thumb (below) opposed to the other fingers (lined up above), in a sort of grasping posture towards the environment. Interestingly, such a hand posture is compatible with one of the whole-limb and body postures we propose here (see Figure 2a). This hypothesis also seems to be supported by independent results (Ottoboni et al., 2015; Tessari et al., 2021) evidencing a facilitation effect on manual responses when attack postures of a whole body were shown. It is not surprising that humans attend to an allocentric frame of reference (the BSR) when observing body parts. Indeed, the spatial attentional effect we observe in the sidedness paradigm could be the cognitive step preceding those leading to specular imitation. A similar interplay between allocentric and egocentric frames of reference also emerges in imitation. In specular imitation, agents adopt an allocentric frame of reference from early childhood (Bekkering et al., 2000; Bergès & Lézine, 1963; Gleissner et al., 2000; Wapner & Cirillo, 1968). On the other hand, adults can also imitate anatomically, adopting the same side of the body as the model and relying on an egocentric frame of reference (Press et al., 2009). However, they are faster (Ishikura & Inomata, 1995) and more accurate (Avikainen et al., 2003) when imitating specularly. Specular imitation follows an allocentric frame of reference that entails a spatial correspondence with the observed model and is based on an automatic process activated when attention is directed to the body disposition of others’ actions (Bianchi & Savardi, 2008; Bianchi et al., 2014; Chiavarino, 2012; Chiavarino et al., 2007), as in the sidedness effect.
Another important result comes from manipulating the response position to increase its body-part salience (conditions c, d, and e). This manipulation showed that the BS does not play a role in the implicit and covert coding of body parts (in the sidedness task, the body part stimuli are task-irrelevant, and participants are required to perform a colour judgement). We can conclude that when we observe body parts without any explicit intent to process or judge them, they are treated as belonging to another person and are automatically integrated with a general description of the human body (i.e., the BSR) and not with a representation of the viewer's own body (i.e., the BS). It is possible that the response manipulations were not sufficiently effective at enhancing the cognitive salience of the viewer's felt hand, or that deliberate, conscious attention must be paid to the state of the body part (as in hand laterality judgement tasks, e.g., Ionta et al., 2007, 2009, or in the ‘in-between’ test; Tamè et al., 2017; Dolgilevica et al., 2020). However, this possibility cannot be tested within the confines of the sidedness task, since it does not call for any explicit judgement on the body parts, which are task-irrelevant. Other authors (Azañón & Soto-Faraco, 2008) have proposed that there might be a first, unconscious somatotopic processing of body stimuli (at 30 and 60 ms) and a later remapping into an egocentric spatial frame of reference that is necessary for orienting attention (the sidedness task, with its 100-ms stimulus presentation, resides in the latter stage, although the reader should consider that the sidedness paradigm uses only visual stimuli, not tactile ones). A future investigation of these effects might include an experimental manipulation using a tactile stimulus on the participant's hands to further increase the relevance of proprioceptive feedback from one's own body (e.g., Azañón & Soto-Faraco, 2008).
Moreover, as body parts seem to be only pre-attentively processed, without entering a second, explicit processing stage where one's own body becomes relevant and features coded by the BS play an important role (e.g., Parsons, 1994, Appendix), a future study might manipulate the stimulus onset asynchrony (SOA) between the onset of the task-irrelevant body part picture and the target stimulus (i.e., the superimposed coloured circle) to investigate whether a more extended presentation of the body part (without the task-relevant coloured circle) might trigger deeper processing of the hand/arm stimuli and generate results and attentional shifts congruent with the handedness judgement task (e.g., Parsons, 1994). Such a modification of the paradigm might also be used with apraxic patients to understand whether the BSR actually plays a role in generating their imitative deficits. So far, we have failed to find such an effect in apraxic patients (Amesz et al., 2016; Lane et al., 2021), but the reader should remember that stimuli in the sidedness paradigm are presented for only 100 ms, and the processing of body stimuli might take longer in brain-damaged patients, who are characterized by generally reduced cognitive resources (Tessari et al., 2007). Finally, the sidedness task might be employed to investigate whether body alterations (such as amputation or the congenital absence of a limb) can interfere with access to, the maintenance of, or the development of a canonical BSR.

CONCLUSION

This study contributes significantly to the ongoing work of defining the cognitive mechanisms involved in human body coding. Although the details of the sidedness effect may seem rather methodological, they in fact bear on a crucial theoretical issue in research on body representation. In the domain of perception, it has been proposed that, to recognize objects even from different and unusual viewpoints, people rely on structural knowledge of objects (Hillis & Caramazza, 1995; Sutherland, 2012). This knowledge corresponds in part to the 3-D level of Marr's influential model (Kitcher, 1988; Marr, 1982), where a ‘catalogue’ of object structures contains volumetric object descriptions stored across multiple viewpoints; distinctive features may also be included in this knowledge (Warrington & James, 1988). Similarly, knowledge about the human body in the BSR can be conceived as a store of structural and visuo-spatial viewpoint representations of the body (like the multiple viewpoints proposed by Warrington & James, 1988, for non-human objects), including information about the anatomo-physiological and biomechanical relationships among the body parts (Romano et al., 2017, 2019; Tessari et al., 2010). The results indicate that implicitly coding the body parts’ spatial features is hard-wired and drives a person to frame them within a body representation (i.e., the BSR) without any need to map them onto their own body.

ACKNOWLEDGEMENTS

Open Access Funding provided by Universita degli Studi di Bologna within the CRUI‐CARE Agreement. [Correction added on 20th May 2022, after first online publication: CRUI funding statement has been added.]

CONFLICTS OF INTEREST

The authors declare that they have no conflict of interest.

AUTHOR CONTRIBUTION

Alessia Tessari: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Project administration (equal); Resources (equal); Software (equal); Supervision (equal); Writing – original draft (equal); Writing – review & editing (equal). Giovanni Ottoboni: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Writing – original draft (equal); Writing – review & editing (equal).

ETHICAL APPROVAL

All procedures performed in studies involving human participants were in accordance with the institutional and/or national research committee's ethical standards and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all participants included in the study. Participants were invited to take part after being provided with general information about the study, including the fact that the experiment presented ‘no more than minimal risk’. They were also told of their right to withdraw from the study at any time without penalty. The Ethics Committee of the University of Bologna approved the study in 2013.
Tessari, A., & Ottoboni, G. (2022). Does the body talk to the body? The relationship between different body representations while observing others' body parts. British Journal of Psychology.