Xiaolin Liu 1,2,3, Huijuan Shi 3, Yong Liu 1,4, Hong Yuan 1,4, Maoping Zheng 1,2.
Abstract
This study explored the behavioral and neural correlates of mindfulness meditation improvements in musical aesthetic emotion processing (MAEP) in young adults, using a revised cross-modal priming paradigm. Sixty-two participants were selected from 652 college students who had assessed their mindfulness traits using the Mindful Attention Awareness Scale (MAAS). Based on the top and bottom 27% of total scores, participants were divided into two subgroups: a high trait group (n = 31) and a low trait group (n = 31). Participants performed facial recognition and emotional arousal tasks while listening to music, and event-related potentials (ERPs) were recorded simultaneously. The N400, P3, and late positive component (LPC) were investigated. The behavioral results showed that mindfulness meditation improved executive control in emotional face processing and effectively regulated the emotional arousal elicited by repeated listening to familiar music among young adults. These improvements were associated with positive changes in key neural signatures of facial recognition (smaller P3 and larger LPC effects) and emotional arousal (smaller N400 and larger LPC effects). Our results show that the P3, N400, and LPC are important neural markers of improved executive control and regulated emotional arousal in musical aesthetic emotion processing, providing new evidence for exploring attention training and emotional processing. We revised the affective priming paradigm and the E-Prime 3.0 procedure to allow simultaneous measurement of music listening and experimental tasks, providing a new experimental paradigm for detecting the behavioral and neural correlates of mindfulness-based musical aesthetic processing.
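The extreme-groups approach described above (ranking participants by total MAAS score and taking the top and bottom 27%) can be sketched as follows. This is a minimal illustration of the splitting logic only, not the authors' code; the function name `extreme_groups` and the sample scores are hypothetical, and the study additionally screened its final 62 participants from within those extremes.

```python
def extreme_groups(scores, ratio=0.27):
    """Return (high_group, low_group) index lists using an extreme-groups split.

    Participants are ranked by total score; the top `ratio` form the
    high trait group and the bottom `ratio` form the low trait group.
    Names and data here are illustrative, not from the study.
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    k = int(len(scores) * ratio)          # group size at the chosen cutoff
    low = order[:k]                       # bottom scorers -> low trait group
    high = order[-k:][::-1]               # top scorers -> high trait group
    return high, low

# Hypothetical MAAS totals for ten respondents (ratio raised to 0.3 so k = 3):
scores = [52, 88, 61, 70, 45, 93, 67, 58, 74, 80]
high, low = extreme_groups(scores, ratio=0.3)
```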
Keywords: ERPs; aesthetic emotion; executive control; mindfulness meditation; musical aesthetics
Year: 2021 PMID: 34948651 PMCID: PMC8701887 DOI: 10.3390/ijerph182413045
Source DB: PubMed Journal: Int J Environ Res Public Health ISSN: 1660-4601 Impact factor: 3.390
Figure 1. An example of the experimental task.
Participants’ demographic information and self-reported results.
| Variable | Measure | HTG (M ± SD) | LTG (M ± SD) | t |
|---|---|---|---|---|
| Age | | 20.45 (2.46) | 20.03 (1.47) | 0.81 |
| Sex | | Male = 9, female = 21 | Male = 6, female = 25 | |
| PANAS | PA | 2.55 (0.77) | 2.32 (0.77) | 1.29 |
| PANAS | NA | 1.24 (0.34) | 1.48 (0.54) | 1.06 |
| TMS *** | pre-test | 33.77 (5.59) | 33.94 (3.95) | 0.13 |
| TMS *** | post-test | 36.06 (5.13) | 35.03 (4.69) | 0.83 |
| MAAS *** | pre-test | 67.84 (6.08) | 49.35 (6.62) | 11.45 |
| MAAS *** | post-test | 68.35 (8.33) | 51.61 (11.11) | 6.71 |
Note: Positive and Negative Affect Schedule (PANAS): within-group difference in mood state after the experiment; Toronto Mindfulness Scale (TMS) and Mindful Attention Awareness Scale (MAAS): within- and between-group differences before and after mindfulness meditation training; PA: positive affect, NA: negative affect; HTG: high trait group, LTG: low trait group; M: mean, SD: standard deviation; *** p < 0.001.
Figure 2. Toronto Mindfulness Scale (TMS) difference within-group and Mindful Attention Awareness Scale (MAAS) difference between-group before and after mindfulness meditation training. HTG: high trait group, LTG: low trait group; * p < 0.05, *** p < 0.001.
Descriptive statistics of the facial recognition and emotional arousal task.
| Task | Measure | CM (HTG) | HM (HTG) | SM (HTG) | CM (LTG) | HM (LTG) | SM (LTG) * |
|---|---|---|---|---|---|---|---|
| FR | ACC (pre-test) | 0.85 (0.13) | 0.87 (0.16) | 0.84 (0.14) | 0.84 (0.13) * | 0.81 (0.20) * | 0.81 (0.15) * |
| FR | ACC (post-test) | 0.88 (0.16) | 0.84 (0.27) | 0.83 (0.20) | 0.89 (0.08) * | 0.84 (0.22) * | 0.85 (0.14) * |
| FR | RTs (pre-test) | 1249.35 (98.53) | 990.24 (146.33) | 1260.40 (140.16) | 1244.06 (180.83) | 969.95 (161.59) | 1249.87 (178.15) |
| FR | RTs (post-test) *** | 1182.51 (120.15) | 933.15 (159.44) | 1142.08 (113.97) | 1098.00 (162.02) | 804.39 (174.47) | 1063.99 (165.88) |
| EA | Arousal (pre-test) | 4.94 (1.49) * | 6.26 (1.06) * | 6.29 (1.44) * | 4.91 (1.18) | 5.98 (1.19) | 5.93 (1.27) |
| EA | Arousal (post-test) | 4.78 (0.68) * | 5.94 (1.24) * | 6.04 (1.45) * | 4.98 (1.24) | 5.98 (1.28) | 6.10 (1.55) |
| EA | RTs (pre-test) | 879.71 (217.31) | 783.92 (193.73) | 839.47 (225.83) | 834.12 (244.99) | 781.94 (249.18) | 930.89 (270.46) |
| EA | RTs (post-test) ** | 723.24 (226.59) | 687.23 (232.43) | 731.09 (222.66) | 604.07 (197.26) | 550.67 (186.84) | 597.28 (231.15) |
Note: MELs: musical emotion levels; FR: facial recognition, EA: emotional arousal; ACC: accuracy, RTs: reaction times; HTG: high trait group, LTG: low trait group; CM: calm music, HM: happy music, SM: sad music; M: mean, SD: standard deviation; * p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 3. Accuracy (ACC) and reaction times (RTs) differences within-group and between-group in the facial recognition and emotional arousal tasks; HTG: high trait group, LTG: low trait group; * p < 0.05, ** p < 0.01, *** p < 0.001.
Descriptive statistics of AESTHEMOS results.
| Variable | CM (HTG) | HM (HTG) * | SM (HTG) | CM (LTG) | HM (LTG) * | SM (LTG) |
|---|---|---|---|---|---|---|
| PAEs (pre-test) | 2.47 (0.74) * | 2.21 (0.78) | 2.36 (0.71) | 2.35 (0.57) | 2.27 (0.66) | 2.31 (0.60) |
| PAEs (post-test) | 2.13 (0.67) * | 2.51 (0.83) | 2.24 (0.81) | 2.10 (0.51) | 2.47 (0.65) | 2.32 (0.74) |
| PEs (pre-test) | 2.44 (0.66) ** | 2.27 (0.81) | 3.22 (0.81) *** | 2.29 (0.67) | 2.14 (0.61) | 2.92 (0.72) *** |
| PEs (post-test) | 3.08 (0.92) ** | 1.82 (0.68) | 1.81 (0.54) *** | 2.72 (0.82) | 1.89 (0.74) | 1.85 (0.54) *** |
| EEs (pre-test) | 2.30 (0.69) | 2.14 (0.83) | 2.70 (0.75) *** | 2.04 (0.56) | 2.20 (0.69) | 2.51 (0.61) *** |
| EEs (post-test) | 2.39 (0.78) | 2.38 (0.69) | 2.10 (0.64) *** | 2.26 (0.62) | 2.35 (0.69) | 2.16 (0.73) *** |
| NEs (pre-test) | 1.46 (0.48) ** | 1.31 (0.36) | 1.24 (0.33) *** | 1.58 (0.42) | 1.49 (0.58) | 1.41 (0.54) *** |
| NEs (post-test) | 1.24 (0.38) ** | 1.95 (0.66) | 1.74 (0.46) *** | 1.38 (0.47) | 1.86 (0.54) | 1.87 (0.51) *** |
Note: AESTHEMOS: Aesthetic Emotions Scale; MELs: musical emotion levels; CM: calm music, HM: happy music, SM: sad music; PAEs: prototypical aesthetic emotions, PEs: pleasing emotions, EEs: epistemic emotions, NEs: negative emotions; HTG: high trait group, LTG: low trait group; M: mean, SD: standard deviation; * p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 4. Aesthetic Emotions Scale (AESTHEMOS) differences within- and between-group on the three levels of musical emotion. PAEs: prototypical aesthetic emotions, PEs: pleasing emotions, EEs: epistemic emotions, NEs: negative emotions; HTG: high trait group, LTG: low trait group; M: mean, SD: standard deviation; * p < 0.05, ** p < 0.01, *** p < 0.001.
Figure 5. Grand average waveforms of P3 and late positive component (LPC) at site Fz in the facial recognition and emotional arousal tasks. HTG: high trait group; LTG: low trait group.