Jan Stupacher, Maria A G Witek, Jonna K Vuoskoski, Peter Vuust.
Abstract
Social bonds are essential for our health and well-being. Music provides a unique and implicit context for social bonding by introducing temporal and affective frameworks, which facilitate movement synchronization and increase affiliation. How these frameworks are modulated by cultural familiarity and individual musical preferences remains an open question. In three experiments, we operationalized the affective aspects of social interactions as ratings of interpersonal closeness between two walking stick-figures in a video. These figures represented a virtual self and a virtual other person. The temporal aspects of social interactions were manipulated by movement synchrony: while the virtual self always moved in time with the beat of instrumental music, the virtual other moved either synchronously or asynchronously. When the context-providing music was more enjoyed, social closeness increased strongly with a synchronized virtual other, but only weakly with an asynchronized virtual other. When the music was more familiar, social closeness was higher independent of movement synchrony. We conclude that the social context provided by music can strengthen interpersonal closeness by increasing temporal and affective self-other overlaps. Individual musical preferences might be more relevant than musical familiarity for the influence of movement synchrony on social bonding.
Year: 2020 PMID: 32572038 PMCID: PMC7308378 DOI: 10.1038/s41598-020-66529-1
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Design of Studies 1–3. (A) Interpersonal movement synchrony was manipulated using the social entrainment video paradigm. Participants watched two walking stick figures and imagined that one of the figures represented themselves and the other an unknown person. Left panel: example frame of a video with synchronized virtual self (black) and virtual other (blue). Both figures' steps are aligned with the quarter-note beat of the musical pieces. A stylized dust cloud additionally marked the temporal position of the beat. Right panel: example frame of a video with the virtual self in synchrony with the quarter-note beat of the music and the virtual other out of synchrony. (B) The different musical stimuli used in the three studies and the participant samples. (C) The adapted Inclusion of Other in the Self scale (IOS[32]) used in all three studies. Participants rated the interpersonal closeness between virtual self and other on a continuous slider presented below the seven circle combinations.
Nested mixed effects models for the two dependent variables inclusion of other in the self (IOS) and likeability of the virtual other (LIKE), investigating the effects of the independent variables synchrony, musical pattern (i.e., cultural familiarity), and enjoyment of the music. Every model includes the random effect (1 | participant). The Akaike information criterion (AIC), Bayesian information criterion (BIC), marginal R² (variance explained by fixed effects only), and conditional R² (variance explained by fixed and random effects) are provided. χ² and p values refer to model comparisons to the previous model (unless stated otherwise) using likelihood ratio tests. Null model: IOS ~ (1 | participant). Best-fitting models are marked in bold.
| Dependent variables: IOS and LIKE; Independent variables: Synchrony, Musical Pattern, Enjoyment of Music |
|---|---|---|---|---|---|---|
| Model | AIC | BIC | Marginal R² | Conditional R² | χ² | p |
| IOS Null Model | 2235 | 2246 | | 0.338 | | |
| IOS ~ Synchrony | 2059 | 2073 | 0.310 | 0.750 | 178.52 | <0.001 |
| IOS ~ Synchrony + Musical Pattern | 2047 | 2064 | 0.324 | 0.767 | 14.35 | <0.001 |
| IOS ~ Synchrony × Musical Pattern | 2049 | 2069 | 0.323 | 0.766 | 0 | 0.999 |
| IOS ~ Synchrony + Musical Pattern + Enjoyment | 2035 | 2056 | 0.356 | 0.770 | 13.17° | <0.001° |
| LIKE Null Model | 2097 | 2108 | | 0.299 | | |
| LIKE ~ Synchrony | 2007 | 2020 | 0.210 | 0.577 | 92.87 | <0.001 |
| LIKE ~ Synchrony + Musical Pattern | 1977 | 1994 | 0.260 | 0.642 | 31.66 | <0.001 |
| LIKE ~ Synchrony × Musical Pattern | 1978 | 1999 | 0.260 | 0.642 | 0.36 | 0.551 |
| LIKE ~ Synchrony × Enjoyment + Musical Pattern | 1935 | 1960 | 0.399 | 0.656 | 1.06^ | 0.304^ |
°As compared to model IOS ~ Synchrony + Musical Pattern.
^As compared to model LIKE ~ Synchrony + Musical Pattern.
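The nested model comparisons in the table (maximum-likelihood fits of increasingly complex mixed models, tested against each other with likelihood ratio tests) can be sketched in Python with statsmodels. This is an illustrative sketch on synthetic data, not the authors' analysis code; the column names (`ios`, `synchrony`, `participant`) and all simulation parameters are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic data: 40 participants x 2 synchrony conditions x 5 trials each
n_part, n_trial = 40, 5
rows = []
for p in range(n_part):
    intercept = rng.normal(0, 1.0)  # random participant effect
    for sync in (0, 1):
        for _ in range(n_trial):
            ios = 3 + 1.5 * sync + intercept + rng.normal(0, 0.5)
            rows.append({"participant": p, "synchrony": sync, "ios": ios})
df = pd.DataFrame(rows)

# Null model: IOS ~ (1 | participant)
null = smf.mixedlm("ios ~ 1", df, groups=df["participant"]).fit(reml=False)
# Alternative: IOS ~ Synchrony + (1 | participant)
alt = smf.mixedlm("ios ~ synchrony", df, groups=df["participant"]).fit(reml=False)

# Likelihood ratio test; models must be fit with ML (reml=False), not REML
chi2 = 2 * (alt.llf - null.llf)
p_val = stats.chi2.sf(chi2, df=1)  # one added fixed effect
print(f"chi2 = {chi2:.2f}, p = {p_val:.3g}")
```

Each further row in the table adds (or interacts) one predictor and repeats the same comparison against the previous model.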
Figure 2. Results of Studies 1 and 2. Data points and model-predicted values of interpersonal closeness, as measured by IOS, in Study 1 (panels A and B) and Study 2 (panels C–E). Error bars and shaded areas represent 95% confidence intervals. Data points and predicted values for each musical pattern in panels B–E are depicted in Supplementary Figures S1 and S2.
Nested mixed effects models for the dependent variable inclusion of other in the self (IOS), separately investigating the effects of the three independent variables familiarity with the music, enjoyment of the music, and perceived beat clarity. Every model includes the random effect (1 | participant). The Akaike information criterion (AIC), Bayesian information criterion (BIC), marginal R² (i.e., variance explained by fixed effects only), and conditional R² (i.e., variance explained by fixed and random effects) are provided. χ² and p values refer to model comparisons to the previous model (unless stated otherwise) using likelihood ratio tests. Null model: IOS ~ (1 | participant). Best-fitting models are marked in bold.
| Dependent variable: IOS; Independent variables: Synchrony, Musical Pattern, Familiarity, Enjoyment, Beat Clarity |
|---|---|---|---|---|---|---|
| Model | AIC | BIC | Marginal R² | Conditional R² | χ² (df) | p |
| Null Model | 11663 | 11679 | | 0.306 | | |
| IOS ~ Synchrony | 11096 | 11116 | 0.247 | 0.603 | 569.56 (1) | <0.001 |
| IOS ~ Synchrony × Musical Pattern | 11088 | 11129 | 0.252 | 0.608 | 4.89 (2) | 0.087 |
| IOS ~ Synchrony × Familiarity + Musical Pattern | 11079 | 11120 | 0.261 | 0.607 | 0.11 (1) | 0.745 |
| IOS ~ Synchrony + Musical Pattern + Enjoyment | 11058 | 11093 | 0.273 | 0.612 | 33.26 (1)° | <0.001° |
| IOS ~ Synchrony + Musical Pattern + Beat Clarity | 11049 | 11085 | 0.281 | 0.615 | 41.59 (1)° | <0.001° |
°Improvement in model fit as compared to model IOS ~ Synchrony + Musical Pattern.
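The marginal and conditional R² reported in both model tables follow the standard variance-partitioning definition for random-intercept models (Nakagawa & Schielzeth): the fixed-effect prediction variance divided by the total of fixed, random-intercept, and residual variance, and the fixed-plus-random share of that total, respectively. A minimal sketch of that computation, on synthetic data with assumed variable names, could look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Synthetic repeated-measures data: 30 participants x 20 trials
n_part, n_trial = 30, 20
part = np.repeat(np.arange(n_part), n_trial)
x = rng.normal(size=n_part * n_trial)            # e.g. an enjoyment rating
u = rng.normal(0, 1.0, n_part)[part]             # random intercepts
y = 2.0 * x + u + rng.normal(0, 1.0, n_part * n_trial)
df = pd.DataFrame({"y": y, "x": x, "participant": part})

m = smf.mixedlm("y ~ x", df, groups=df["participant"]).fit(reml=False)

# Variance components: fixed-effect predictions, random intercepts, residual
var_fixed = np.var(np.dot(m.model.exog, m.fe_params))
var_random = float(m.cov_re.iloc[0, 0])
var_resid = m.scale
total = var_fixed + var_random + var_resid

r2_marginal = var_fixed / total                  # fixed effects only
r2_conditional = (var_fixed + var_random) / total  # fixed + random effects
print(f"marginal R² = {r2_marginal:.3f}, conditional R² = {r2_conditional:.3f}")
```

This is why the null models in the tables report only a conditional R²: with no fixed effects, the marginal R² is zero by definition.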
Figure 3. Stimuli and results of Study 3. (A) Waveforms of one bar of the three different musical stimuli with low, moderate, and high levels of syncopation. The dotted grey lines represent the eighth-note level at a tempo of 94.4 beats per minute. The grey arrows on top mark the strong metric positions at the quarter-note (beat) level. In the stimulus with low syncopation, four of five piano chord onsets fall on strong metric positions, compared to two in the moderately syncopated and one in the highly syncopated stimulus. The smaller peaks represent a soft hi-hat sound marking the eighth notes. (B) Inclusion of other in the self ratings for videos with synchronously or asynchronously moving figures accompanied by musical stimuli with three different levels of rhythmic complexity (low, moderate, and high syncopation). Boxes represent the interquartile range (IQR); whiskers represent the lowest values within 1.5 × IQR of the lower quartile and the highest values within 1.5 × IQR of the upper quartile; dots represent outliers; dotted lines connect the medians (center lines).
Pairwise comparisons (Wilcoxon signed-rank tests) between inclusion of other in the self ratings in videos with different levels of syncopation (low, moderate, and high), N = 48. The Bonferroni-corrected critical p-value is 0.05/6 = 0.0083.
| Movement condition | Low vs. moderate: Z | p | r* | Low vs. high: Z | p | r* | Moderate vs. high: Z | p | r* |
|---|---|---|---|---|---|---|---|---|---|
| Synchronous movement | −1.70 | 0.089 | 0.17 | 5.43 | <0.001 | 0.55 | 5.77 | <0.001 | 0.59 |
| Asynchronous movement | −0.96 | 0.340 | 0.10 | 5.47 | <0.001 | 0.56 | 5.51 | <0.001 | 0.56 |
*Effect size r = Z/sqrt(N×2) with N = 48.
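The pairwise tests above, including the effect size r = Z/sqrt(N×2) and the Bonferroni-corrected threshold, can be sketched as follows. The data are synthetic and the helper function is hypothetical; the Z statistic uses the standard normal approximation to the signed-rank distribution.

```python
import numpy as np
from scipy import stats

def wilcoxon_z_and_r(x, y):
    """Paired Wilcoxon signed-rank test with a normal-approximation
    Z statistic and effect size r = Z / sqrt(2 * N)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = x - y
    d = d[d != 0]                      # drop zero differences
    n = len(d)
    ranks = stats.rankdata(np.abs(d))
    w_plus = ranks[d > 0].sum()        # sum of positive ranks
    mu = n * (n + 1) / 4
    sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * stats.norm.sf(abs(z))      # two-sided p value
    r = abs(z) / np.sqrt(2 * len(x))   # N = number of rating pairs
    return z, p, r

rng = np.random.default_rng(1)
low = rng.normal(5.0, 1.0, 48)         # e.g. IOS ratings, low syncopation
high = low - rng.normal(0.8, 0.3, 48)  # consistently lower ratings

z, p, r = wilcoxon_z_and_r(low, high)
alpha = 0.05 / 6                       # Bonferroni: 6 pairwise comparisons
print(f"Z = {z:.2f}, p = {p:.3g}, r = {r:.2f}, significant: {p < alpha}")
```

With N = 48 pairs per comparison, the denominator sqrt(2 × 48) ≈ 9.8 reproduces the table's effect sizes (e.g., Z = 5.43 gives r ≈ 0.55).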
Descriptive statistics of the three music stimuli selected after Pre-studies 2A and 2B. Results of the finger tapping task (mean and standard deviation of inter-tap intervals; beat interval: 700 ms) and of the synchrony rating, in which participants decided whether a stick figure was walking in time with the beat of the music or out of time (percentage of correct answers and mean decision time).
| Stimulus | Region | 2A: Enjoy (SD) | 2A: Mood (SD) | 2A: Familiar (SD) | 2A: Correct origin % | 2B tapping: Mean of ITIs in ms (SD) | 2B tapping: SD of ITIs in ms (SD) | 2B tapping: Number of taps (SD) | 2B synchrony: Correct answers % (SD) | 2B synchrony: Decision time in s (SD) |
|---|---|---|---|---|---|---|---|---|---|---|
| Bonde | West Africa | 5.4 (1.3) | 5.5 (1.0) | 3.7 (1.5) | 39 | 755 (144) | 65 (42) | 14.6 (3.0) | 73 (23) | 4.19 (1.06) |
| Cumbia del Leon | Latin America | 5.9 (2.1) | 6.1 (1.7) | 5.4 (2.2) | 55 | 716 (51) | 28 (7) | 16.2 (1.6) | 83 (22) | 3.78 (0.98) |
| Nomads | South Asia | 5.6 (1.8) | 5.6 (1.4) | 4.9 (1.9) | 50 | 730 (90) | 37 (19) | 15.7 (1.7) | 83 (19) | 3.76 (0.82) |
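The finger-tapping measures in the table (mean and SD of inter-tap intervals against the 700 ms beat interval) can be computed from raw tap timestamps roughly as follows; the timestamps here are simulated, not the pre-study data.

```python
import numpy as np

BEAT_INTERVAL_MS = 700  # target beat interval from the pre-study

# Synthetic tap timestamps in ms (in practice: one series per participant/trial)
rng = np.random.default_rng(2)
n_taps = 16
taps = np.cumsum(rng.normal(BEAT_INTERVAL_MS, 40, n_taps))

itis = np.diff(taps)                    # inter-tap intervals
mean_iti = itis.mean()
sd_iti = itis.std(ddof=1)               # sample SD, as reported in the table
deviation = mean_iti - BEAT_INTERVAL_MS  # drift relative to the beat interval

print(f"mean ITI = {mean_iti:.0f} ms, SD = {sd_iti:.0f} ms, "
      f"deviation from beat = {deviation:+.0f} ms")
```

A mean ITI close to 700 ms with a small SD (as for Cumbia del Leon, 716 (51) ms) indicates stable tapping to the beat, whereas larger SDs (as for Bonde, 65 ms) indicate less consistent beat tracking.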