Abstract
This article discusses recent developments and advances in the neuroscience of music aimed at understanding the nature of musical emotion. In particular, it highlights how system identification techniques and computational models of music have advanced our understanding of how the human brain processes the textures and structures of music and how the processed information evokes emotions. Musical models relate physical properties of stimuli to internal representations called features, and predictive models relate features to neural or behavioral responses and test their predictions against independent, unseen data. These new frameworks do not require orthogonalized stimuli in controlled experiments to establish reproducible knowledge, which has opened up a new wave of naturalistic neuroscience. The current review focuses on how this trend has transformed the domain of the neuroscience of music.
Keywords: artificial neural network; computational model; musical emotion; natural music; naturalistic stimuli; system identification
Year: 2022 PMID: 36203808 PMCID: PMC9531138 DOI: 10.3389/fnins.2022.928841
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 5.152
FIGURE 1 | A schematic of a linearized encoding model. A hypothesized neural representation in the brain of music (X) is modeled by a computational model [i.e., a linearizing function 𝒥*(⋅)] resulting in a feature time-series (z). A linear relationship, described by a response function (w), between the delayed features (Z) and measured neural responses (y) is estimated using stimuli (e.g., music #1) in the training set. For independent stimuli (e.g., music #2) in the test set, the same computational model extracts features. Using them, the predictive performance of the hypothesized representation can be evaluated.
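The pipeline in Figure 1 can be sketched in code. The example below is a minimal illustration, not the authors' implementation: the feature extractor standing in for the linearizing function 𝒥*(⋅) is a hypothetical placeholder (a compressive envelope), the simulated responses are synthetic, and the response function w is estimated with ordinary ridge regression on time-delayed features. It shows the train/test logic: fit w on one stimulus, then evaluate prediction accuracy on an independent stimulus passed through the same feature extractor.

```python
import numpy as np

rng = np.random.default_rng(0)

def lag_features(z, n_lags):
    """Stack time-delayed copies of the feature time-series z (T x F)
    into a delayed design matrix Z (T x F*n_lags), as in Figure 1."""
    T, F = z.shape
    Z = np.zeros((T, F * n_lags))
    for d in range(n_lags):
        Z[d:, d * F:(d + 1) * F] = z[:T - d]
    return Z

def fit_ridge(Z, y, alpha=1.0):
    """Estimate the linear response function w by ridge regression."""
    return np.linalg.solve(Z.T @ Z + alpha * np.eye(Z.shape[1]), Z.T @ y)

def extract_features(stimulus):
    """Hypothetical stand-in for the linearizing function J*(.):
    a compressive transform of the stimulus amplitude."""
    return np.abs(stimulus) ** 0.6

# --- Training stimulus ("music #1") and a simulated neural response y ---
T, n_lags = 2000, 10
stim_train = rng.standard_normal((T, 1))
Z_train = lag_features(extract_features(stim_train), n_lags)
w_true = rng.standard_normal(Z_train.shape[1])      # ground-truth w (simulation only)
y_train = Z_train @ w_true + 0.5 * rng.standard_normal(T)

w = fit_ridge(Z_train, y_train, alpha=1.0)

# --- Held-out stimulus ("music #2"): same feature extractor, estimated w ---
stim_test = rng.standard_normal((T, 1))
Z_test = lag_features(extract_features(stim_test), n_lags)
y_test = Z_test @ w_true + 0.5 * rng.standard_normal(T)
y_pred = Z_test @ w

# Predictive performance of the hypothesized representation
r = np.corrcoef(y_pred, y_test)[0, 1]
print(f"held-out prediction correlation: {r:.2f}")
```

In a real analysis, y would be measured neural data (e.g., EEG, MEG, or fMRI time courses), the feature extractor would be the computational model of music under test, and the held-out correlation is what adjudicates between competing hypothesized representations.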