
Sweet spot in music – Is predictability preferred among persons with psychotic-like experiences or autistic traits?

Rebekka Solvik Lisøy, Gerit Pfuhl, Hans Fredrik Sunde, Robert Biegler

Abstract

People prefer music with an intermediate level of predictability; not so predictable as to be boring, yet not so unpredictable that it ceases to be music. This sweet spot for predictability varies due to differences in the perception of predictability. The symptoms of both psychosis and Autism Spectrum Disorder have been attributed to overestimation of uncertainty, which predicts a preference for predictable stimuli and environments. In a pre-registered study, we tested this prediction by investigating whether psychotic and autistic traits were associated with a higher preference for predictability in music. Participants from the general population were presented with twenty-nine pre-composed music excerpts, scored on their complexity by musical experts. A participant's preferred level of predictability corresponded to the peak of the inverted U-shaped curve between music complexity and liking (i.e., a Wundt curve). We found that the sweet spot for predictability did indeed vary between individuals. Contrary to predictions, we did not find support for these variations being associated with autistic and psychotic traits. The findings are discussed in the context of the Wundt curve and the use of naturalistic stimuli. We also provide recommendations for further exploration.

Year:  2022        PMID: 36174035      PMCID: PMC9521895          DOI: 10.1371/journal.pone.0275308

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Predictability has a sweet spot. Too much unpredictability is stressful [1, 2], whereas being understimulated by excessive predictability is fatiguing [3]. People therefore use compensatory strategies to regulate the level of predictability in the environment. This could be engaging in exploration to cope with boredom [4] or seeking out information to make an unfamiliar situation more predictable [5]. But what one person considers a comfortable level of predictability, someone else may find either too monotonous or too chaotic. Hence, the optimal level of unpredictability depends on the degree of unpredictability perceived by the individual. Higher perceived unpredictability is proposed to be a causative factor in both Autism Spectrum Disorder (ASD) and psychosis [6, 7].

The sweet spot for predictability

Excessive unpredictability is stressful and unpleasant [1, 8, 9], so much so that people are willing to pay to avoid it [10]. Unpredictable stimuli cause high levels of arousal and a high strain on attentional and cognitive resources. Attention [11, 12] and learning rates [13, 14] increase in unpredictable environments, as unexpected events signal that statistical relationships are not fully learned. When relationships change at such a high rate that learning is no longer worth the effort [15], the exposure to such high levels of unpredictability can result in the unpleasant feeling of not being in control [16]. Hence, to reduce distress, people engage in behaviours that make unpredictable situations more predictable [17-20]. Yet, optimal levels of stimulation and arousal are not produced in perfectly predictable environments either. Higher levels of predictability increase the likelihood of experiencing the aversive state of boredom or understimulation [21, 22]. In fact, performing a monotonous, predictable task causes more fatigue than performing a task that requires cognitive effort [3]. This may be explained by the lack of surprising (i.e., salient) stimuli making it more laborious to engage attention [23]. People therefore avoid too much predictability, even if high predictability means high certainty of rewards [22]. Thus, an intermediate level of predictability is least aversive, and should therefore be preferred over high and low levels [24]. As an intermediate level of predictability produces the highest amount of stimulation without causing aversion, it should also be experienced as the most pleasurable [25, 26]. Indeed, Gold and colleagues [27] showed that people like songs with intermediate levels of predictability better than songs either low or high in predictability.
This inverted U-shaped relationship between liking and predictability has been found in several lines of research: such as music [28-31], visual texture patterns [32], geometric shapes [33], and online web pages [34]. In the visual domain, a preference for intermediate levels of predictability has even been found in infants [35].

Differences in experiencing predictability cause differences in preferred predictability

What mechanisms drive individual differences in the sweet spot for predictability? The experience of predictability reflects subjective perceptions of predictability, which only partly relates to objective features such as the number of tones in a song or edges in a painting [36-38]. Accordingly, people vary in their perception of stimuli as predictable or unpredictable [36, 39, 40]. For example, forming probabilistic predictions based on one’s model of statistical properties in music is thought to be central to music perception [41, 42]. Differences in long-term or online learning of musical regularities lead to differences in expectations, such that, for example, a music expert can make more accurate predictions and thereby experience less unpredictability than a non-musician [43]. Differences in the subjective perception of predictability could explain why individuals differ in their preferred level of predictability when exposed to identical levels of objective (un)predictability. This explanation has implications for psychosis and ASD, as the symptoms of both disorders have been separately attributed to overestimations of uncertainty [6, 7, 44]. According to these theories, overestimations of uncertainty arise due to excessive prediction errors (i.e., the deviation between prediction and outcome). A prediction error can signal any of three things, in varying proportions: that learning is incomplete; that a change has happened, requiring the agent to update what she had previously learned to be true; or some degree of inherent environmental randomness that limits how much learning can improve prediction [45]. As unpredictability increases and changes become more frequent, so too increases the rate of error signals. Individuals who experience disproportionately large prediction errors, as is proposed in persons with psychosis and ASD, will therefore perceive more unpredictability relative to those who experience smaller errors.
Overestimating unpredictability should cause a corresponding surge in aversion and distress. Indeed, psychotic and autistic symptoms have been linked to experiencing increased distress in unpredictable situations [46-51]. We would therefore expect that individuals with psychotic and autistic traits prefer higher levels of predictability to cope with aversive levels of unpredictability, and that their sweet spot for predictability is skewed towards more predictable stimuli and/or environments. A preference for predictability is consistent with some symptoms of ASD, such as the preference for routine and sameness, repetitive behaviour, fixed interests, and finding unexpected changes upsetting and stressful [52]. In line with this, Goris and colleagues [53] found that autistic traits were associated with preferring more predictable tone sequences. Cognitive inflexibility is also a general trait in psychosis, though it may manifest as belief fixation rather than fixed interests [54, 55]. Preoccupation with unusual content, whether it be with objects in ASD or beliefs in psychosis, is an example of parallels that can be drawn between ASD and psychosis. Overlaps in core features, such as impairments in social cognition and language [56-58], also extend to shared phenotypic traits in the non-clinical population [59]. Yet, to our knowledge, no studies have investigated a relationship between psychotic traits and a preference for predictability.

Current study

For the current study, we aimed to investigate whether tendencies towards psychosis and ASD were related to a higher preference for predictability in music. We chose to focus on music preferences, although we expect that a higher preference for predictability is observable in other domains. Furthermore, we opted for a naturalistic music setting, using music composed and performed by humans, to increase the chance of capturing ecologically valid behaviours that reflect people’s real-life responses to (un)predictability in music. If preference for predictability influences music preferences, then the extent to which a listener enjoys a piece of music should be contingent on the listener’s perceived levels of predictability. However, not all acoustic features contribute equally to the experience of predictability [27, 38, 60]. The likelihood of capturing facets relevant to preference might therefore be higher for subjective evaluations of predictability than for an assessment based on acoustic features. We therefore chose to focus on subjective evaluations for our main study, but we also explored whether the results would replicate using an objective measure of predictability. A participant’s preferred level of predictability was represented by the peak of an inverted U-shaped curve between preference and predictability (also referred to as a Wundt curve). Such music is experienced as neither too predictable nor too unpredictable. Individual differences in predictability preferences were reflected by the lateral position of the peaks. We expected psychotic and autistic traits to be associated with peaks shifted towards more predictable music (see Fig 1). The study’s pre-registration can be found at https://osf.io/y5d2r.
Fig 1

Predicted shift of Wundt curve by autistic/psychotic traits.

The hypothesised leftward shift in the peak (from the black distribution to the blue) that is associated with higher occurrence of autistic or psychotic traits.

Methods

Participants

Three hundred and twenty-six participants were recruited for this study through the online recruitment platform Prolific (www.prolific.co) and at the campus of UiT—The Arctic University of Norway. Participants were recruited from the general population as psychotic and autistic symptoms are distributed along continua in the general population [61, 62]. Five participants were excluded for failing quality-control checks (see Statistical Analysis for details). The total sample was therefore n = 321. Ninety-three participants were given course credits, while the remaining participants were given 100 NOK / £8 for participating. The average age of the sample (before exclusions) was 31.60 years (SD = 12.30), with the distribution of gender being approximately 54% females, 45% males and 1% defining their gender as non-cis. The average years of music training, including both formal and self-taught, was M = 5.69 years, SD = 7.60 (after exclusion). Participation was voluntary and anonymous, and all participants gave their written, informed consent. The study obtained ethical approval from the institutional review board at UiT—The Arctic University of Norway.

Stimuli

The stimuli were selected from a pool of instrumental music excerpts [36]. The original pool consisted of music excerpts whose musical properties were characteristic of popular music (being in the style of pop, rock, jazz, world music, or a mixture of these), but that were also assumed to be unknown to a group of eight musical experts (e.g., excerpts frequently played in broadcast media were excluded). These experts rated each excerpt on overall complexity on a 1–10 scale (for more details on the rating procedure, see [36]). An excerpt’s complexity score reflects the average of the experts’ ratings. Complexity, as used here, is the inverse of predictability. This is based on the rationale that, as Delplanque et al. argued: “A stimulus is more complex if its elements are more difficult to predict, leading to more prediction error” [30 p147]. For the current study, the original stimulus pool was reduced from 40 to 29 by removing excerpts with recurring complexity scores. When multiple excerpts had identical complexity scores, the excerpt with the lowest variance in expert ratings was selected. The complexity scores in the final stimulus set ranged from 2.625 to 8.625, with the largest interval between two scores being 0.5 (see Table 1). The excerpts’ durations ranged from 38 to 75 seconds (M = 62.8 seconds, SD = 10.5).
Table 1

Stimulus details for the music task.

Artist | Song | Block | Complexity | Average liking (n = 321)
Bonnie Raitt | Circle dance | 4 | 2.625 | 51.37
Air | New Star In The Sky | 1 | 3.125 | 55.07
Bo Kaspers Orkester | Väljer dig | 3 | 3.5 | 50.80
Bo Kaspers Orkester | Kvarter | 2 | 3.75 | 48.40
Magnus Edholm Combo | She’s Within | 4 | 4 | 56.24
Genesis | 7–8 | 3 | 4.125 | 50.51
Santana | El Farol | 2 | 4.375 | 60.78
Bill Bergman | From Now On | 4 | 4.625 | 54.38
Jonas Knutsson Band | Lemet-Lemet Ánná-Kirste | 2 | 4.875 | 51.08
David Sanborn | The Dream | 2 | 5.125 | 57.76
Nils Landgren Funk Unit | Rock it | 3 | 5.5 | 48.73
Kenny G | Sade | 4 | 5.75 | 56.99
Jean-Luc Ponty | Happy Robots | 1 | 5.875 | 54.12
Bill Bergman | The Night begins | 3 | 6 | 50.32
Bill Bergman | Midnight Sax Theme | 4 | 6.125 | 43.14
Roine Stolt | The Flower King | 1 | 6.25 | 58.85
Trio con X | Pass it On | 2 | 6.375 | 46.54
Trio con X | Chakas dans | 3 | 6.5 | 43.69
Greger Wikberg Trio | Svedbergs Massage | 3 | 6.75 | 49.26
L Coryell, S Smith & T Coster | First things first | 4 | 6.875 | 44.00
Dave Weckl | Here and There | 2 | 7.25 | 48.81
Transatlantic | All of the Above | 1 | 7.375 | 44.45
Roine Stolt | The Magic Circus of Zeb | 4 | 7.5 | 47.12
Dave Weckl | Tower of Inspiration | 3 | 8 | 47.30
L Coryell, S Smith & T Coster | Bubba | 4 | 8.125 | 46.68
Janne Schaffer | Bromma Express | 2 | 8.25 | 52.41
Béla Fleck & The Flecktones | Vix 9 | 1 | 8.375 | 57.44
Janne Schaffer | Hot Days and Summer Nights | 3 | 8.5 | 47.98
Itchy Fingers | £7.50 (only saxophone part) | 2 | 8.625 | 37.98

Complexity reflects the average of complexity ratings by eight experts. Average liking reflects the mean liking for each song in the total sample (n = 321), rated on a scale from 0 to 100. Two warm-up trials from block 1 are not listed.

Measures and procedure

Participants completed the survey on a laptop or a desktop, either online (www.qualtrics.com) or using PsychoPy [63]. The excerpts were allocated to four blocks as evenly as possible based on complexity scores, ensuring that the blocks’ average complexities and variances were equivalent. Block one included two warm-up trials. The excerpts were presented in random order within each block. The participants listened to all musical pieces in their entirety, and subsequently rated how much they liked each piece on a visual analogue scale from 0 = Disliked very much to 100 = Liked very much. Questionnaire items were presented between blocks to avoid fatigue.

Autistic traits were measured using the 28-item version of the Autism Spectrum Quotient (AQ-short: [61]). AQ-short has previously been validated using non-clinical samples [61, 64]. Positive symptoms of psychosis (e.g., paranormal beliefs) were measured using the 20 frequency items in the positive subscale of the Community Assessment of Psychic Experiences scale (CAPEp: [62]), which can be divided into five subscales [65]. Similar to the AQ-short, the CAPEp has been used to measure traits in non-clinical samples [66, 67]. We added three control questions reflecting common misconceptions about psychosis [68], such as whether one believes in kidnappings by aliens. The internal consistencies of AQ-short and the CAPEp were very good; α = .85 and α = .86, respectively. The participants self-reported the frequency of adverse childhood experiences using an abbreviated version of the Adverse Childhood Experiences International Questionnaire (ACE-IQ: [69]). Vividness of auditory imagery was measured using the Vividness subscale of the Bucknell Auditory Imagery Scale (BAISv: [70]), with some revisions to make the content more appropriate for the current sample (see the project’s OSF page for details, at https://osf.io/y5d2r/).
Participants reported years of music training, as well as how many hours they listened to music on a typical day. Finally, mood was assessed using a five-point emoticon scale.

Statistical analysis

Of the 326 recruited participants, five participants were excluded (see pre-registered exclusion criteria and OSF wiki, at https://osf.io/y5d2r/). One participant was excluded for giving a middle rating to the majority of the music excerpts. One participant was excluded for scoring above the threshold on CAPEp control items. Finally, three additional participants were excluded for reporting implausible years of music training. Thus, the total sample was n = 321. While a group-level Wundt effect between preference and complexity scores was significant in the total sample (see supplementary material), the relationship between traits and preference was only investigated in participants who showed a Wundt curve between preference and complexity (see Main analysis for details). Out of 321 participants, 181 showed a Wundt curve. The remaining 140 participants who showed no Wundt curve were therefore not included in the analyses. S1 Table in supplementary material shows that the participants with and without Wundt curves did not differ on any of the measured variables when correcting for multiple testing. All analyses were performed using JASP [71] and R [72]. The R package correlation was used to perform partial correlations [73]. The R packages lme4 and lmerTest were used to test linear mixed models [74, 75]. CAPEp and AQ-short mean scores were calculated separately, with lowest possible scores being 1 and highest possible being 4.

Main analysis

To investigate relationships between preferred levels of complexity and traits, we included only participants whose preferred level of complexity could be determined according to pre-registered criteria. Specifically, a preferred level of complexity corresponds to the peak (or apex) of an inverted U-shaped curve between preference and complexity scores (i.e., a Wundt curve). Each peak was determined based on a fitted quadratic model. We performed quadratic regression analyses between the music excerpts’ complexity scores and preference ratings for each participant. The quadratic component was calculated by squaring the complexity scores. The inverted U-shape is characterised by a negative quadratic component; the further that component is below 0, the sharper the peak. Participants with a quadratic component larger than -0.1 were excluded. This exclusion criterion trades off sample size against measurement error in preferred complexity levels. On the one hand, the peaks of curves with a quadratic term near 0 would vary widely due to random errors, which would induce measurement error and consequently reduce statistical power. On the other hand, reducing sample size also reduces statistical power. An a priori power analysis showed that the best trade-off was to exclude participants with a quadratic component > –0.1 to ensure a sufficiently concave (downward-opening) parabola. This would reduce a hypothetical sample size of n = 200 down to n = 159, at which a one-tailed test with α = .05 had 81.5% power to detect a correlation of .2 (see power analysis at https://osf.io/y5d2r). The number of participants who showed a Wundt curve was n = 181. One of these participants is presented in the left panel of Fig 2, while the right panel presents an example of a participant who did not meet the criteria for an inverted U-shaped relationship. A linear mixed model was used to confirm a Wundt effect between preference and complexity scores in the final sample, using participants as random intercepts.
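The per-participant fitting described above can be sketched as follows. This is a minimal illustration, not the authors' actual code; the function name, the use of a plain least-squares polynomial fit, and the return convention are all assumptions:

```python
import numpy as np

def wundt_peak(complexity, liking, threshold=-0.1):
    """Fit liking = b0 + b1*c + b2*c**2 for one participant.

    Returns the location of the parabola's apex, -b1 / (2 * b2),
    or None when the quadratic term does not fall below the
    pre-registered threshold of -0.1 (no usable Wundt curve).
    """
    b2, b1, _ = np.polyfit(complexity, liking, deg=2)
    if b2 > threshold:
        return None          # excluded: curve too flat or U-shaped
    return -b1 / (2 * b2)    # preferred level of complexity
```

For a concave curve the apex -b1 / (2 * b2) is the complexity score at which predicted liking is highest, which is what the study takes as that participant's preferred level of complexity.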
We calculated partial correlations between the preferred level of complexity and autistic and psychotic traits separately, while controlling for mood and ACE-IQ sum scores. These tests were one-sided, as we hypothesised that autistic and psychotic traits would correlate with a preference for simpler music (i.e., peaks located more on the lower end of the complexity scale). As exploratory analyses, we investigated how preferred level of complexity related to other indices using two-sided correlational tests, as well as whether traits were associated with giving more variable liking responses. A Shapiro–Wilk test indicated that preferred level of complexity was not normally distributed, p < .001, and thus Kendall’s rank correlation was chosen as the non-parametric test.
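A partial rank correlation of this kind can be sketched in code as follows. The study computed these in R with the `correlation` package; the residual-based approach below is one common way to approximate a partial Kendall correlation, and the function name and interface are assumptions:

```python
import numpy as np
from scipy import stats

def partial_kendall(x, y, covariates):
    """Kendall's tau between x and y after the linear influence of
    the covariates (e.g., mood and ACE-IQ scores) has been
    regressed out of both variables."""
    Z = np.column_stack([np.ones(len(x)), *covariates])
    x_res = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    y_res = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.kendalltau(x_res, y_res)
```

The returned two-sided p-value would be halved for a directional (one-sided) test such as the one pre-registered here.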
Fig 2

Example data.

The solid blue and orange lines reflect the fitted quadratic models, and the shaded areas reflect confidence intervals. The left panel presents a participant whose data met the pre-registered criteria for a Wundt curve between liking and complexity scores (a quadratic component of -1.56). The stippled vertical line reflects the peak of the parabola, which was taken as the preferred level of complexity. The right panel presents a participant without a Wundt curve, showing instead a U-shaped relationship between liking and complexity (a quadratic component of +0.78). Here, the stippled line reflects the bounded maximum that was used as the preferred level of complexity in exploratory analyses (see supplementary material).

Wiener entropy

We replicated the results from the main study by replacing the music experts’ complexity scores with calculations of Wiener entropy (also known as spectral flatness). Wiener entropy reflects the excerpts’ noisiness, or the uniformity of the power spectra, and is thus an objective measure of complexity. Entropy was calculated by dividing each excerpt into 50 ms segments and subsequently analysing the segments’ frequency spectra. Reducing the segments to 20 ms did not change the results meaningfully. See S2 Text in supplementary material for more details on the acoustic analysis. The correlation between experts’ complexity scores and entropy scores, r = .489, p = .007, can be considered large in the context of psychological research [76]. The same exclusion procedures and analyses from the main study were performed by replacing complexity scores with entropy scores, resulting in a sample of n = 183 participants.
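The windowed Wiener entropy calculation can be sketched like this. It is an illustration under assumed parameters (sampling rate, window handling, numerical floor); the authors' exact acoustic pipeline is described in their S2 Text:

```python
import numpy as np

def wiener_entropy(signal, sr, win_ms=50):
    """Mean spectral flatness (Wiener entropy) over fixed windows:
    the geometric mean of the power spectrum divided by its
    arithmetic mean. Values near 1 indicate noise-like (flat)
    spectra; values near 0 indicate tonal, predictable spectra."""
    win = int(sr * win_ms / 1000)
    flatness = []
    for start in range(0, len(signal) - win + 1, win):
        segment = signal[start:start + win]
        power = np.abs(np.fft.rfft(segment)) ** 2 + 1e-12  # avoid log(0)
        gmean = np.exp(np.mean(np.log(power)))
        flatness.append(gmean / np.mean(power))
    return float(np.mean(flatness))
```

On this measure, a white-noise excerpt scores much higher than a pure tone, which matches the paper's description of entropy as reflecting the excerpts' noisiness.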

Results

The participants varied in their psychotic and autistic traits (see Fig 3) and had few adverse childhood events (see Table 2). 29.83% of participants scored above the AQ-short clinical cut-off of >2.33 (equivalent to a sum score >65; see [61]). 44.2% of participants scored above the 1.47 CAPEp cut-off for ultra-high-risk for psychosis [77]. On average, participants were in a positive mood, and vividness of auditory imagery was similar to that reported previously [70]. Many respondents had no musical training. Sample demographics are presented in Table 2.
Fig 3

Distribution of AQ-short scores and CAPEp scores.

Table 2

Demographics for participants showing Wundt curves (n = 181).

Measure | Mean (SD) | Median | Min | Max
CAPEp | 1.48 (0.32) | 1.45 | 1.05 | 3.35
AQ-short | 2.20 (0.38) | 2.21 | 1.17 | 3.68
ACE-IQ | 1.76 (1.58) | 1.00 | 0 | 6.00
BAISv | 28.87 (5.76) | 29.00 | 9.00 | 41.00
Mood | 3.61 (0.63) | 4.00 | 1.00 | 5.00
Training (years) | 5.00 (6.85) | 3.00 | 0 | 43.00
Daily listening | 2.21 (1.28) | 2.50 | 0 | 4.50

CAPEp = mean scores of the positive subscale of the Community Assessment of Psychic Experiences, AQ-short = mean scores of the abridged version of the Autism Spectrum Quotient, ACE-IQ = an abbreviated version of the adverse childhood experiences international questionnaire, BAISv = the Vividness subscale of the Bucknell Auditory Imagery Scale (with some revisions), Training = years of music training, Daily listening = hours spent listening to music on a typical day. Higher mood values reflect better mood.

The results from the linear mixed model confirmed Wundt curves between liking and complexity in the sample (n = 181), with a significant positive linear effect (β = 8.82, p < .001) and a significant negative quadratic effect (β = -.91, p < .001). There were individual differences in the preferred level of complexity, indicated by variation in the peaks of the parabolas (Fig 4). The peaks had a median of 4.96 and a mean of 4.97, SD = 1.791, skew = .37, kurtosis = -.74. The sample’s preferred levels of complexity spanned the entire complexity range of the music excerpts, from 2.625 to 8.625. The most liked music excerpt (mean liking = 65.45) had a complexity score of 4.375. In fact, the complexity scores of the five most liked songs all hovered around the midpoint of the complexity range, ranging from 4.375 to 6.25. However, as shown in Fig 4, a large portion of participants preferred the lowest level of complexity.
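A random-intercept model of this kind can be illustrated on simulated data. This is only a sketch: the study fit its models in R with lme4/lmerTest, and every number below (participant count, sweet-spot distribution, noise level) is synthetic:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
complexities = np.linspace(2.625, 8.625, 29)   # range of the stimulus set
rows = []
for subject in range(50):                      # hypothetical participants
    peak = rng.normal(5.0, 1.0)                # individual sweet spot
    base = rng.normal(55.0, 5.0)               # participant-specific intercept
    for c in complexities:
        liking = base - 2.0 * (c - peak) ** 2 + rng.normal(0, 5.0)
        rows.append((subject, c, liking))
df = pd.DataFrame(rows, columns=["subject", "complexity", "liking"])
df["complexity_sq"] = df["complexity"] ** 2

# A positive linear and a negative quadratic fixed effect together
# produce the inverted U; subjects contribute random intercepts.
fit = smf.mixedlm("liking ~ complexity + complexity_sq",
                  df, groups=df["subject"]).fit()
```

With data generated this way, the fitted fixed effects recover the sign pattern reported above: a positive linear term and a negative quadratic term.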
Fig 4

Histogram of preferred complexity levels.

Confirmatory analyses

Contrary to expectations, preferred level of complexity was neither associated with autistic traits nor with psychotic-like experiences. The partial correlation between participants’ AQ-short scores and preferred level of complexity, while controlling for ACE-IQ scores and mood, was non-significant, Kendall’s τ = .039, p = .784. Similarly, the partial correlation between participants’ CAPEp scores and preferred level of complexity, while controlling for ACE-IQ scores and mood, was non-significant, Kendall’s τ = .033, p = .743. Regressing the peaks on CAPEp, AQ-short, music experience and BAISv confirmed that these predictors could not explain individual differences in the preferred level of complexity, F(4, 176) = .954, p = .434, R2 = .021, adjusted R2 = -.001. Rerunning the analysis using entropy scores instead of experts’ complexity ratings replicated all results from the main analysis (see S2 Text).

Exploratory analyses

The results from confirmatory analyses were robust when all participants with nonzero slopes were included (see S3 Text). Correlation analyses showed that the participants’ preferred levels of complexity were not related to mood, vividness of auditory imagery, musical training, or how much the participant typically listens to music, all p > .1 (see S2 Table in supplementary material for details). Preferred levels of complexity were not associated with any subscales of the CAPEp or AQ-short, all p > .1. Correlations between subscales and preferred level of entropy did not reach significance when controlling for multiple comparisons (see S2 Table). CAPEp and AQ-short were not significantly correlated with variance in liking responses, all p > .05.

Discussion

According to computational theories, individuals with ASD and psychosis perceive higher levels of unpredictability compared to typically developing individuals. In this study, we investigated whether psychotic and autistic traits were related to preferring predictable music, reasoning that preference is modulated by how much predictability the individual experiences. Participants did indeed vary in how much predictability among music excerpts they preferred, indicated by variations in the peaks of the inverted U-shaped relationships between music complexity and liking. Contrary to predictions, we found no support for variations in the sweet spot for predictability being associated with psychotic or autistic traits. While the lack of support for the predicted association was unexpected, it should be noted that this is the first investigation into the relationship between psychotic-like experiences and a preference for predictability. We stress that these results do not refute the notion of a general predictability preference in psychosis.

The trait levels in the sample, as measured by AQ-short and CAPEp, were similar to those reported in other studies using community samples (see e.g., [61, 65, 78, 79]). Yet, CAPEp scores were skewed towards the lower end of the scale (see Fig 3), making it possible that null results occurred due to a low prevalence of psychotic traits in the current sample. These findings indicate that using non-clinical samples may not be sufficient to detect a relationship between tendencies towards psychosis and a preference for predictability in music. In fact, patients with delusions self-report a higher preference for predictability than healthy controls [80, 81]. Yet, when using the same self-report measure, no evidence was found for an association between a preference for predictability and delusion-proneness in the general population [82].
There is also a question of whether frequency of psychotic-like experiences (as measured by CAPEp) captures psychotic traits that influence predictability preferences. The self-reported predictability preferences might be related to belief inflexibility often seen in delusional patients [54]. This inflexibility might also be distributed along the psychosis continuum in the non-clinical population; Bronstein and Cannon [83] found that delusional traits measured in the general population were associated with sticking to one’s beliefs in the face of disconfirmatory evidence. Future studies that seek to further investigate predictability preferences in psychosis should consider controlling for belief inflexibility, for example by incorporating study paradigms like that of Woodward and colleagues [84].

An association between autistic traits and preference for predictable music is consistent with common characteristics of ASD, such as preference for routine and sameness [52]. Interestingly, preference for predictable music was not related to the AQ-short subscale measuring preference for routine. However, routine reflects a preference for predictability on a large time-scale, whereas preference measured by our music task reflects sensory processing at a relatively short time-scale. In fact, it has been suggested that individuals with ASD have problems coping with uncertainty at longer time-scales, but not short time-scales [85]. Listening to a piece of music once and then immediately giving a rating might measure a current state of liking rather than a trait. Asking participants for their preferred music (and least preferred music), as well as measuring the level of exposure to these songs, could give a more reliable measure of patterns in preference that endure over time. In a similar vein, predictability-seeking behaviours described in individuals with ASD often reflect a preference for a specific combination of multisensory information.
For example, food selection in autistic children is influenced by texture and appearance in addition to taste [86]. It is possible that a preference for predictability is difficult to observe when restricting preference measures to the auditory domain only. We therefore support the suggestion by Goris and colleagues [87] that further research on the preference for predictability in ASD should consider incorporating multisensory paradigms. While short time-scales and lack of multisensory stimuli may explain the null results in the current study, they do not account for why Goris and colleagues [53] found a preference for predictability in ASD using short tone sequences. One can speculate whether the low stimulus complexity of tone sequences caused more local auditory processing, and whether the null results in the current study can be attributed to composed music causing global rather than local auditory processing. In addition to differences in stimulus complexity, Goris and colleagues used an information-theoretic calculation of music unpredictability, while the current study used human complexity ratings and Wiener Entropy. In fact, our null results mirror those of another study that investigated a relationship between autistic traits and music preference using complexity scores [88]. If predictability preferences in ASD can be captured by measuring information-theoretic predictability, but not complexity nor Wiener Entropy, it raises the question of how these definitions of predictability differ. People differ in the level of predictability they perceive in music (see e.g., [36]), which explains why a single music excerpt can produce a variety of liking responses. It is also possible that people vary in their emotional reaction when perceiving identical levels of music predictability. Indeed, this was the rationale for controlling for adverse childhood experiences (ACE-IQ) in the current study, as early trauma has been linked to heightened stress responses [89]. 
Such increase in responsiveness could boost any aversive reactions to unpredictable music, similar to the effect of overestimating unpredictability. In contrast, the study did not include control measures for mechanisms that dampen emotional reactions to unpredictable music. One example of this might be having a high tolerance for unpredictability (i.e., high aversion threshold). Moreover, Spehar and colleagues [90] found that those with a higher ability to discriminate perceptual features also prefer stimuli containing more features (i.e., more complex). Such mechanisms may have obscured an association between traits and disliking unpredictable music in the current sample. Furthermore, they could elucidate why predictability preferences would differ between clinical and non-clinical populations, and even relating to different types of sensory information. For example, patients with psychosis show a lower ability to discriminate pitch than healthy controls, but the groups do not differ in visual discrimination [91]. The implicit learning that occurs when listening to music [92] determines the experienced predictability of a musical piece. Long-term learning of musical regularities also influences perceived predictability in music [43], and thus also preference (see [41]). While exploratory analyses showed no support for an association between preferred level of predictability and years of musical training or passive exposure, it should be noted that comparing these results to studies using more comprehensive screenings of musical expertise (e.g., [93]) may be problematic.
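As an illustration of the objective predictability measure discussed above: Wiener entropy is commonly defined as spectral flatness, the ratio of the geometric to the arithmetic mean of a frame's power spectrum, computed in short analysis windows (the study used 20 ms and 50 ms windows). The following is a minimal sketch under that standard definition, not the authors' analysis code; the sampling rate, non-overlapping windows, and FFT details are assumptions.

```python
import numpy as np

def wiener_entropy(signal, sr, win_ms=50):
    """Spectral flatness (Wiener entropy) per analysis window.

    Ratio of the geometric mean to the arithmetic mean of the power
    spectrum: values near 1 indicate noise-like (unpredictable) frames,
    values near 0 indicate tonal (predictable) frames.
    """
    win = int(sr * win_ms / 1000)
    values = []
    for start in range(0, len(signal) - win + 1, win):
        frame = signal[start:start + win]
        power = np.abs(np.fft.rfft(frame)) ** 2 + 1e-12  # avoid log(0)
        geo = np.exp(np.mean(np.log(power)))             # geometric mean
        values.append(geo / np.mean(power))
    return np.array(values)
```

On this definition a pure tone yields flatness near zero while white noise yields much higher values, which is why higher Wiener entropy can stand in for higher unpredictability.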

Limitations and strengths

The linear mixed model confirmed a Wundt effect at the group level. Yet, finding the inverted U-shaped curves by visually inspecting the plots of liking ratings and complexity scores proved challenging for a portion of the participants. Also, the mixed model's linear component was statistically significant. Taken together, this suggests weak Wundt effects in our sample, which may have concealed the relationship between traits and a preference for predictable music, despite our inclusion threshold being chosen to balance statistical power against measurement error. Curves might have been flat, or preferred complexity levels may have lain outside the complexity range of the stimuli. For example, it may be that the music excerpts only covered the curve's right slope for the participants whose Wundt-curve peaks corresponded to the lowest level of complexity (see Fig 4). The same idea applies to participants who showed monotonically increasing or decreasing relationships but no Wundt curves. As Chmiel and Schubert [29] argued, increasing or decreasing preferences are likely to saturate at some point, and may decrease or increase after saturation, respectively.

The current study sought to investigate predictability preferences along the autism-psychosis continuum. Because the aim was not to test for overestimation of unpredictability, the music task did not include a measure of perceived predictability. One cannot infer overestimation of unpredictability directly from liking responses without also controlling for other factors that influence emotional reactivity. For example, people prefer predictable music when unpredictable music causes aversion, but aversion may result either from experiencing high levels of unpredictability or from having a lower aversion threshold. Measuring perceived predictability is necessary if future studies seek to interpret similar findings in light of the computational theories of overestimation of uncertainty in psychosis and ASD [6, 7].
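To make the peak-estimation logic concrete: a quadratic is fitted to each participant's liking ratings as a function of complexity, and the vertex of the fitted curve is taken as that participant's preferred complexity, subject to a cut-off on the quadratic component. This is a hypothetical sketch of such a procedure, not the authors' code; whether the -0.1 cut-off applies on a standardized scale is an assumption here.

```python
import numpy as np

def preferred_complexity(complexity, liking, cutoff=-0.1):
    """Fit liking = b2*c**2 + b1*c + b0 for one participant and return
    the vertex of the fitted parabola as the preferred complexity.

    Returns None when the quadratic component is flatter than the cutoff
    (no reliable peak) or U-shaped, mirroring an exclusion criterion of
    the kind described in the text.
    """
    b2, b1, b0 = np.polyfit(complexity, liking, deg=2)
    if b2 >= cutoff:            # too flat, or U-shaped (b2 > 0)
        return None
    return -b1 / (2.0 * b2)     # vertex of the inverted U
```

The cut-off exists because the vertex -b1/(2*b2) becomes numerically unstable as b2 approaches zero: for near-flat curves, tiny changes in the fitted coefficients move the peak a long way.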
Psychotic and autistic traits were measured in a non-clinical sample, thus avoiding issues with the high response variability sometimes observed in clinical samples [94]. Indeed, neither psychotic nor autistic traits were associated with giving more variable liking ratings.

To preserve anonymity, gender and age were not linked to participant responses. Thus, we could not control for the possibility that younger people like more complex music [88].

The only study finding a preference for predictable music in ASD used constructed tone sequences, while the current study used composed music. Unlike tone sequences, composed music could contain features other than complexity that influence liking. For example, music enjoyment might be influenced by familiarity, style, or genre [36, 37, 95]. Tone sequences, where such features are absent, therefore offer more opportunity to focus on predictability. The fact that features other than predictability that influence music enjoyment were not held constant may also elucidate why a significant portion of the original sample did not show Wundt curves. For example, a recognised music excerpt might have been given a higher rating simply because it was familiar [36, 92], which could overshadow the effects of structural predictability on liking.

Although generated stimuli, like pure tones, allow for more experimental control when studying the effect of predictability on preference, this control comes at the cost of ecological validity. Liking ratings of composed music excerpts likely resemble the preferences and choice behaviours that can be observed in everyday settings, because the excerpts contained elements that are naturally present in music. Similarly, the study paradigm employed experts' complexity ratings to define how difficult or easy it was to predict musical elements in the stimuli. Notably, not all acoustic features are important to the experience of predictability [27, 38, 60]. Hence, comparing liking ratings to complexity judgements, rather than to acoustic features such as the probability of pitch change, increased the likelihood of observing how people respond to music that is perceived as unpredictable.

Directions for future research

That so many participants in the current study preferred the lowest level of complexity (see Fig 4) suggests that at least some of them would have preferred still simpler music than we offered. Future studies should expand the complexity range to fully capture the variation in preferences at very low and very high levels, as the effects of traits on preference may not be observable if these individual differences are discounted.

Here, using experts' complexity ratings when creating a stimulus set, as done in the current study (see [36]), may inadvertently preclude predictable music; Hansen and Pearce [43] found that experts rated music with simple musical structures as more predictable than non-musicians did. Thus, music excerpts with low expert complexity scores may still be experienced as moderately unpredictable by non-expert participants. Substituting expert complexity ratings with non-expert ratings (see e.g., [88, 95]) ensures that the complexity range and scores are specific to a sample of non-musicians. The experts' complexity scores largely reflected melodic complexity [36], but it is unknown how other dimensions, such as rhythm and instrumentation, contribute to perceived unpredictability in non-musicians.

Furthermore, predictability ratings can be collected concomitantly with liking ratings, such that the sweet spot is estimated based on the participant's subjective judgements of predictability. Explicit ratings can be combined with physiological measures of experienced predictability, like heart rate deceleration [96]. Perhaps more pertinent are pupillometric measures; for example, pupil dilation should increase as musical elements become harder to predict while still being perceived as learnable [97]. Just as pupil dilation decreases as stimuli become predictable [98], excessive unpredictability should also decrease dilation, as it too dissuades learning [15]. It should be noted that one cannot investigate whether traits are associated with shifts in the preferred level of predictability based solely on subjective predictability measures: regardless of whether one over- or underestimates unpredictability, the sweet spot should always reflect moderate levels of subjective predictability.

Conclusion

By measuring the peaks of the inverted U-shaped curves between liking and predictability, we found that the sweet spot for predictability in composed music varies between individuals. However, there was no support for either psychotic or autistic traits being associated with liking predictable music. This is the first investigation of an association between psychotic traits and predictability preferences, and previous research on autistic traits and predictability preferences is scarce. Hence, we stress that our results do not refute the general notion of a preference for predictability in either psychosis or ASD. Instead, these findings suggest that relationships between traits and predictability preferences may be difficult to observe using stimuli with high ecological validity, and that incorporating a large range of stimulus predictability is needed to account for the large variation in the sweet spot for music predictability.

Demographics and group comparisons for participants with and without Wundt curves using complexity scores.

All tests are two-sided. CAPEp = the positive subscale of the Community Assessment of Psychic Experiences, AQ-short = the abridged version of the Autism Spectrum Quotient, ACE-IQ = an abbreviated version of the adverse childhood experiences international questionnaire, BAISv = the Vividness subscale of the Bucknell Auditory Imagery Scale (with some revisions), Training = years of music training, Daily listening = hours spent listening to music on a typical day. Higher mood values reflect more positive mood. Due to non-normality, Mann-Whitney tests were used to compare groups, in which the effect sizes reflect rank biserial correlations. The only exception was BAISv, where a Welch's test (and Cohen's d) was used due to heteroscedasticity. * Not significant using Šidák corrections, new α = .0073. (PDF)
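The Šidák-corrected threshold in the note above follows from adjusting the per-test α so that the family-wise error rate stays at .05 across m independent tests: α_new = 1 - (1 - .05)^(1/m). A small sketch; note that m = 7 reproducing the reported α = .0073 is our arithmetic inference, not something stated in the table note.

```python
def sidak_alpha(alpha, m):
    """Per-test significance threshold keeping the family-wise error
    rate at `alpha` across m independent tests (Sidak correction)."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m)
```

With m = 1 the formula reduces to the nominal α, and for large m it behaves much like the more conservative Bonferroni threshold α/m.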

Kendall’s rank correlations.

All tests are two-sided. Complexity = preferred complexity indicated by peaks, Entropy 50 ms = preferred entropy (indicated by peaks) calculated with 50 ms time windows, Entropy 20 ms = preferred entropy (indicated by peaks) calculated with 20 ms time windows, CAPEp = the positive subscale of the Community Assessment of Psychic Experiences, AQ-short = the abridged version of the Autism Spectrum Quotient, ACE-IQ = an abbreviated version of the adverse childhood experiences international questionnaire, BAISv = the Vividness subscale of the Bucknell Auditory Imagery Scale (with some revisions), Training = years of music training, Daily listening = hours spent listening to music on a typical day. Higher mood values reflect more positive mood. * Not significant using Šidák corrections, new α = .0011. (PDF)
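Kendall's rank correlation, used in the table above, compares every pair of observations and counts concordant versus discordant orderings. This is a minimal sketch of the tie-free variant (tau-a); the published analyses may well have used a tie-corrected variant such as tau-b, so treat this purely as an illustration of the rank logic.

```python
import numpy as np

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant pairs) / all pairs.

    Assumes no ties; ranges from -1 (perfectly discordant) to +1
    (perfectly concordant), which suits ordinal data such as
    preferred-complexity peaks and questionnaire scores.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    s = 0.0
    for i in range(n - 1):
        # sign of each pairwise ordering in x times the same in y:
        # +1 for a concordant pair, -1 for a discordant pair
        s += np.sum(np.sign(x[i + 1:] - x[i]) * np.sign(y[i + 1:] - y[i]))
    return s / (n * (n - 1) / 2.0)
```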

Wundt effect at group level in the total sample (n = 321).

(PDF)

Wiener entropy.

(PDF)

Bounded maxima.

(PDF)

Histograms of preferred complexity and entropy levels in different samples.

The top left panel shows the preferred complexity levels of participants with quadratic components below -0.1 (n = 181), while the top right panel consists of those with quadratic components below -0.1 or above 0.1 (n = 299). The centre left panel shows the preferred entropy levels (20 ms) of participants with quadratic components below -0.1 (n = 183), while the centre right panel consists of those with quadratic components below -0.1 or above 0.1 (n = 321). The bottom left panel shows the preferred entropy levels (50 ms) of participants with quadratic components below -0.1 (n = 183), while the bottom right panel consists of those with quadratic components below -0.1 or above 0.1 (n = 321). (TIF)

17 May 2022
PONE-D-21-22799
Sweet spot in music – is predictability preferred among persons with psychotic-like experiences or autistic traits?
PLOS ONE

Dear Dr. Lisøy,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The reviewers appreciate the report; it is interesting, clearly written, has a large sample size, and is rigorously conducted (but see below). Reviewer 1 mainly has some questions and suggestions for clarification, which I ask you to consider. Reviewer 2 also has some suggestions for clarification, but in addition challenges your handling of the Wundt curve. I want to add some comments, most of them related to his or hers:

If I understand right, the n = 181 subjects were chosen based on having a Wundt curve (quadratic component < -.1). Several comments are in order here. First, what is the rationale for this criterion (more specific than "a power analysis")? Second, if that is the criterion, it is by definition true (page 14) that you obtain Wundt curves in your sample. You must also check the Wundt curve in the full sample, at least. For subjects with a monotonically in- or decreasing preference curve, you can still calculate the optimum. Third, if you relax the criterion (say, quadratic component < .05), do you get similar results? With such heavy filtering, we should be able to see what the consequence of the filter is.

Also, how is the peak of the Wundt curve determined exactly? Based on the data or the fitted (quadratic) model? Reviewer 2 also makes an (alternative?) suggestion to calculate it. It would be good to show that the result is robust toward changing this measure (just as it is robust w.r.t. taking Wiener entropy).

Relatedly, as hinted at by reviewer 2, Figure 1 is made up and unnecessary.
You would do better to use this figure to plot some of the data, so we get some intuition about them. Now, the data are plotted in very digested format only. For example, show some typical and atypical participants' Wundt curves (and possibly, in the same figure, also a panel with a cartoon version of the hypothetical shift, so you can keep the current Figure 1 for your explanation).

Minor comment: Weiner should be Wiener (if it refers to Norbert Wiener).

Please submit your revised manuscript by Jul 01 2022 11:59PM.

Kind regards,
Tom Verguts
Academic Editor
PLOS ONE
Reviewers' comments: Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? Reviewer #1: Yes. Reviewer #2: Partly.

2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes. Reviewer #2: Yes.

3. Have the authors made all data underlying the findings in their manuscript fully available? Reviewer #1: Yes. Reviewer #2: Yes.

4. Is the manuscript presented in an intelligible fashion and written in standard English? Reviewer #1: Yes. Reviewer #2: Yes.

5. Review Comments to the Author
Reviewer #1: The paper currently under review investigated whether two groups with lower tolerance of unpredictability, namely those with autistic or psychotic traits, showed a leftward shift in preference on the Wundt curve. The authors presented participants with naturalistic music whose complexity was scored subjectively by experts and objectively using Wiener entropy. The method of predictability measurement showed little to no impact on the conclusion, which revealed no significant relation between psychotic or autistic traits and preference. Overall, the study appears to have been conducted rigorously and the conclusions follow neatly from the data as presented. However, I have some minor comments that may help to improve the quality of the manuscript.

- P.4, line 80-82: please use an example to clarify what sort of differences in perception could explain differing predictability levels.
- P.6, line 127-130 & associated image: The figure would be clearer if two curves were plotted, rather than using two coloured scatterplots.
- P.7, line 148: How was the original pool of music excerpts selected?
- P.7, line 150-151: What do the authors mean when they say complexity is the inverse of predictability, and how does this relate to the cited article?
- P.8, line 156-157: the musical excerpts are relatively long compared to more experimental stimuli. It raises the question: to what extent does complexity remain consistent across time throughout such musical pieces? I can imagine the music could be subjectively segmented into predictable and unpredictable chunks.
- P.10, line 162-184: did you check to what extent participants were already familiar with the musical pieces used in the experiment?
- P.10, line 176-177: could you include an example of such a control question?
- P.11, line 191-192: What number of years is considered implausible?
- P.13, line 243-244: r = 0.489 is more a moderate correlation than a high one. It also leaves me wondering how the subjective and objective measures differed from one another.
- P.13, line 248-249: The results of the questionnaires are described in very vague terms. How much did the scores vary, exactly? Are the scores distributed in a fashion comparable to the general population? How many participants scored sufficiently high on ASD or psychosis traits to be potentially considered in range of a diagnosis?
- P.14, line 270 & associated figure: Figure 2 seems to contain a lot of information that isn't explained very clearly. What do the rows and columns represent? What is displayed above and below the diagonal? What are the numbers presented on the x- or y-axis of each individual graph? I found this portion of the paper hard to follow.
- P.16, line 301: typo: "not support" should be "no support".
- P.19, line 398-407: the authors discuss the advantages of using naturalistic music, but should consider some additional disadvantages. For one, all naturalistic music is designed by artists to be enjoyable, at least in principle. This could explain why such a large portion of the sample did not display a Wundt curve (i.e. flat curves). Another downside would be that participants may already be familiar with specific compositions, which could, for instance, trigger a mere-exposure effect where known compositions are rated higher than unknown compositions.

Reviewer #2:

===Summary===

The current study sought to investigate the role of psychotic and autistic traits in influencing predictability preferences in music. As an online study, over 300 subjects listened to 29 musical pieces with varying expert-rated levels of complexity.
While subjects demonstrated a characteristic Wundt/inverted-U effect for musical complexity preferences, no significant effects between subjects' most preferred musical complexity and psychotic/autistic traits were detected. In my opinion, this research question is highly interesting and worthy of investigation. However, I feel that there need to be major revisions to the analyses and interpretation of results before publication is warranted.

===Strengths===

- The connection between preferences for predictability in music and autistic/psychotic traits is innovative. Despite the current null findings, I feel that there is potential for further research on this topic.
- I really liked how the authors confirmed their analyses with both expert-rated complexity and Wiener entropy as an objective complexity measure.
- The manuscript is highly accessible and easy to read.
- The introduction is well motivated.
- This study is pre-registered? (This was hinted at but has not been explicitly stated.)

===Concerns===

- While it is clever to use the peak of the Wundt curve to indicate the most preferred complexity of a subject, I feel that the manuscript placed too much emphasis on the Wundt curve, and that has somewhat confused its original aim: if the goal is to investigate whether autistic/psychotic traits shift preferences for musical complexity, then whether a subject demonstrates the Wundt effect is not a necessary requirement. In my opinion, a better approach would be to fit a quadratic curve for each subject, as the authors have already done, and use the bounded maximum value (i.e., the peak or the value from the song with the highest rating) for analyses. This allows data for all subjects to be retained and thus improves statistical power.
- Relatedly, the authors should discuss why the Wundt effect was not present in ~45% of subjects - not only superficially from a statistical point of view, but also from a cognitive perspective.
The discussion from Chmiel and Schubert 2017 should be a helpful resource.

- The justification for examining both autistic and psychotic traits in the same study seemed rather superficial to me. Apart from the connection that the two are related to preferences for order/repetition, it is important to elaborate on other shared mechanisms that relate the two together. Otherwise, we cannot go further to understand why such shifts in preferences occur.
- Musical preference is not only due to personality traits. Previous exposure via statistical learning plays just as important a role. See Pearce 2018 for a discussion.
- Please justify why only ACE-IQ and mood were included as controls when other tests were recorded.
- I am not sure where the data in Figure 1 come from - are these hypothetical ratings from two fake subjects? I feel it would be more informative to have actual preference ratings from some of the subjects and to show the inverted U from real data.

6. Do you want your identity to be public for this peer review? Reviewer #1: No. Reviewer #2: Yes: Vincent K.M. Cheung
15 Jul 2022

Dr. Tom Verguts
Academic Editor
PLOS ONE

Dear Dr. Verguts,

Subject: Submission of revised paper [PONE-D-21-22799] - [EMID:b089d435b8379e18]

We wish to thank you and the reviewers for taking the time to read our manuscript and for giving us valuable feedback. Several changes have been implemented based on your advice, which have improved the overall quality of the manuscript. Below you can find our reply to each of the points raised. We hope that our answers clarify your questions, and that the modifications have increased the standard of the manuscript sufficiently.

Kind regards,
Rebekka S. Lisøy
Ph.D. candidate
Norwegian University of Science and Technology
Trondheim, Norway

Comments from Editor

If I understand right, the n = 181 subjects were chosen based on having a Wundt curve (quadratic component < -.1). Several comments are in order here. First, what is the rationale for this criterion (more specific than "a power analysis")?

Reply: This criterion was based on literature supporting the notion of an inverted U-shaped relationship between complexity and liking in domains such as music preference. Building on this, the rationale was that the peaks (or apexes) of the Wundt curves would reflect the preferred level of complexity. If we were to treat peaks as preferred complexity, we needed to ensure that those peaks did indeed exist in our sample. The quadratic components could not be positive, as that would reflect a U-shaped relationship, which is not a Wundt curve with a clear peak.
Some participants might have peaks outside the complexity range, and so a separate inclusion criterion of bounded maximum was added for those with slopes only (see reply below). For participants who showed an inverted U-shaped curve, as indicated by a negative quadratic component, we needed to define a criterion for how much noise we would allow when determining those peaks. Components close to 0 would mean that the peaks were less reliable; that is, because the curve is so flat, small changes in the estimated parameters lead to large changes in the location of the peak. As this was the outcome variable of our analyses, it was crucial to reduce noise in the preferred level of complexity. Thus, we needed to define a cut-off for how reliable the peak (preferred level of complexity) should be for a participant with a Wundt curve. This is where the power analysis was used to narrow the criterion down to -0.1, to ensure that we included as many participants as possible while removing those with the most unreliable (noisy) peaks.

Second, if that is the criterion, it is by definition true (page 14) that you obtain Wundt curves in your sample. You must also check the Wundt curve in the full sample, at least. For subjects with a monotonically in- or decreasing preference curve, you can still calculate the optimum.

Reply: We did indeed find a Wundt curve in the total sample (see lines 210-211 in the revised manuscript and S2 Text in the SOM). In our pre-registered criteria, we specified that "If the quadratic component is between -0.1 and 0.1, and the linear slope is distinguishable from 0 at p < .05, the most extreme complexity on the more preferred side is taken to be the preferred complexity." However, none met these criteria. For example, for the main analyses with complexity scores, 22 participants had quadratic components between -0.1 and 0.1, and the smallest p-value here was 0.59. Regarding calculating the optimum for all participants, see our response to reviewer 2's suggestion.
Third, if you relax the criterion (say, quadratic component < .05), do you get similar results? With such a heavy filtering, we should be able to see what is the consequence of the filter. Reply: We thank you for making this interesting point. By relaxing the criterion to <.05, as suggested, five additional participants are included in the sample for analyses (only with regards to complexity scores, but no change in sample for the entropy analyses). However, the results from the confirmatory analyses remain the same. Also, how is the peak of the Wundt curve determined exactly? Based on the data or the fitted (quadratic) model? Reviewer 2 also makes an (alternative?) suggestion to calculate it. It would be good to show that the result is robust toward changing this measure (just like it is robust w.r.t. taking Wiener entropy). Reply: The peaks are based on the fitted model (we have added a sentence to make this clearer in the methods). We have looked into the suggestion made by reviewer 2 (see our reply to this). Relatedly, as hinted at by reviewer 2, Figure 1 is made up and unnecessary. You would better use this figure to plot some of the data, so we get some intuition about them. Now, the data are plotted in very digested format only. For example, show some typical and atypical participants’ Wundt curves (and possibly in the same Figure, if you want, also a panel with a cartoon version of the hypothetical shift, so you can keep the current figure 1 for your explanation). Reply: We thank you for this suggested improvement. We wish to keep figure 1 as a visual aid for our explanation, but it has been revised so that it is clear it represents a hypothetical effect. As requested by reviewer 2, we have added a new figure showing example data from participants with and without a Wundt curve. Minor comment: Weiner should be Wiener (if it refers to Norbert Wiener) Reply: Thank you for bringing this to our attention. We have corrected this mistake. 
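The procedure discussed in this exchange, fitting a quadratic (Wundt) curve per participant, taking the peak from the fitted model, and excluding participants whose quadratic component is above the -0.1 cut-off, can be illustrated with a minimal Python sketch. This is an illustration under our own naming, not the authors' analysis code.

```python
import numpy as np

def wundt_peak(complexity, liking, quad_cutoff=-0.1):
    """Fit a quadratic curve to one participant's ratings and return the
    peak of the fitted model (the preferred complexity), or None if the
    participant fails the inclusion criterion: a quadratic component at
    or above the cutoff means the curve is too flat or U-shaped to give
    a reliable peak."""
    # np.polyfit returns coefficients [c2, c1, c0] for c2*x^2 + c1*x + c0
    c2, c1, c0 = np.polyfit(complexity, liking, deg=2)
    if c2 >= quad_cutoff:
        return None
    return -c1 / (2 * c2)  # vertex of the fitted parabola

# Toy ratings: liking peaks at an intermediate complexity of 4
complexity = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
liking     = np.array([2, 4, 6, 7, 6, 4, 2], dtype=float)
peak = wundt_peak(complexity, liking)
```

A participant with a monotonically increasing preference curve would yield a near-zero quadratic component and be handled by the separate criterion described above rather than by this peak.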
Comments from Reviewer 1 P.4, line 80-82: please use an example to clarify what sort of differences in perception could explain differing predictability levels. Reply: We thank you for this suggestion, and an example has been added that hopefully provides some clarity (lines 81-86). P.6, line 127-130 & associated image: The figure would be clearer if two curves were plotted, rather than using two coloured scatterplots. Reply: We thank you for your suggested improvement. The figure has been revised. P.7, line 148: How was the original pool of music excerpts selected? Reply: The original pool of music excerpts in Madison & Schiölde (2017) consisted of excerpts whose musical properties were characteristic of popular music (being in the style of pop, rock, jazz, world music, or a mixture of these). Excerpts that were assumed to be known to the musical experts (e.g., excerpts frequently played in broadcast media) were excluded. We have added this information to the manuscript. P.7, line 150-151: What do the authors mean when they say complexity is the inverse of predictability, and how does this relate to the cited article? Reply: Here we refer to the argument made by Delplanque et al. (2019), who argued that “A stimulus is more complex if its elements are more difficult to predict, leading to more prediction error” (p. 147). That is, the more complex the stimulus, the more prediction error it produces, and therefore an increase in complexity is treated as an increase in unpredictability. We have rephrased this sentence in the manuscript to make this point clearer (lines 164-167). P.8, line 156-157: the musical excerpts are relatively long compared to more experimental stimuli. It raises the question: to what extent does complexity remain consistent over time throughout such musical pieces? I can imagine the music could be subjectively segmented into predictable and unpredictable chunks. Reply: We thank you for bringing up this important issue. 
That is a valid question, and an option we considered early on. However, we ultimately chose to use the unaltered stimuli from Madison and Schiölde (2017) so that we could utilise the complexity scores provided by the musical experts, as segmenting the stimuli meant that the previous (overall) complexity rating for a given excerpt would not necessarily apply to some (or maybe all) of the individual chunks. Using these scores allowed us to, among other things, shorten the study length (by reducing the number of music excerpts) whilst ensuring that our stimuli still included the full range of structural complexity. While we acknowledge that using naturalistic music comes at the cost of experimental control, we believe that the stimuli are still suitable for testing the current research hypotheses. Each excerpt selected by Madison and Schiölde was meant to constitute an “independent musical statement”, such that the complexity rating for an excerpt reflects the overall complexity rating of a complete passage or phrase. The experts judged the complexity of multiple features in each excerpt (e.g., tempo, melody, rhythm), yet overall complexity was mostly (86.5%) accounted for by melodic complexity. Thus, if one assumes that participants are influenced in the same way as experts, then inconsistent complexity in, for example, tempo and instrumentation during a passage had very little effect on the experience of complexity. In the process of reducing the stimuli pool for the current study, the excerpts with the most varied complexity ratings were also excluded. P.10, line 162-184: did you check to what extent participants were already familiar with the musical pieces used in the experiment? Reply: We have not asked participants to rate familiarity. Given the considerable time it took to listen to and rate all music excerpts, as well as answer all questionnaires, we refrained from adding measures without a clear rationale. 
Adding an extra rating for each music excerpt would lengthen the task, which risked both reducing the number of people willing to take part and increasing fatigue to the point that it affected data quality. It should be noted that in the original pool of music excerpts, those songs that were frequently played in broadcast media or otherwise assumed to be widely known were excluded. P.10, line 176-177: could you include an example of such a control question? Reply: An example has been added (lines 192-193). P.11, line 191-192: What number of years are considered implausible? Reply: An implausible number of years would be a mismatch between age and years of training (measured on a rating scale; not a value that was typed by the participant). Specifically: one 23-year-old participant reported 48 years of training, one 25-year-old participant reported 67 years of training, and one 19-year-old participant reported 63 years of training. This could indicate that the participant was trolling/being dishonest and/or not taking the test seriously, which is always a risk when running anonymous studies online. It could also be an honest mistake, but since we cannot know for sure, we excluded them. P.13, line 243-244: r = 0.489 is more a moderate correlation than a high one. It also leaves me wondering how the subjective and objective differed from one another. Reply: We thank you for bringing this up. The interpretation of the effect size was based on Funder and Ozer’s (2019) paper on effect sizes in psychological research. We have revised this sentence to clarify that this interpretation is made in the context of expected effect sizes in psychological research. Unfortunately, we cannot give a definitive answer on how the subjective and objective measures of complexity differed from each other. 
While one can refer to the instructions that the experts received and what spectral flatness is supposed to reflect, it is not evident why complexity scores and entropy scores are very alike for one music excerpt but less so for another. That sort of analysis, albeit interesting, is beyond the scope of this paper. P.13, line 248-249: The results of the questionnaires are described in very vague terms. How much did the scores vary, exactly? Are the scores distributed in a comparable fashion to the general population? How many participants scored sufficiently high on ASD or psychosis traits to be potentially considered in range of a diagnosis? Reply: Thank you for pointing out these unclarities. Statistics are presented in Table 2, and a new figure has been added showing the distribution of CAPEp and AQ-short scores. We have now commented on a clinical cut-off value for AQ-short in the manuscript (lines 279-280). For the CAPE positive subscale, clinical cut-off values are often weighted by distress items, which we did not measure. However, we have added a cut-off value for detecting individuals at ultra-high-risk for psychosis (lines 280-281). To use this cut-off, we had to report mean scores instead of sum scores, which is why those values have now been changed in the manuscript and supplementary materials. A comparison with other studies is now included in the discussion (lines 339-341). P.14, line 270 & associated figure: Figure 2 seems to contain a lot of information that isn't explained very clearly. What do the rows and columns represent? What is displayed above and below the diagonal? What are the numbers presented on the x-or y-axis on each individual graph? I found this portion of the paper hard to follow. Reply: We thank you for bringing this to our attention. We have removed this figure and instead added new ones that contain the most important information. 
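Spectral flatness (Wiener entropy), the objective complexity measure discussed in this exchange, is the ratio of the geometric to the arithmetic mean of a signal's power spectrum. The sketch below is our own illustration of the standard definition, not the pipeline used in the study.

```python
import numpy as np

def spectral_flatness(signal):
    """Wiener entropy: geometric mean / arithmetic mean of the power
    spectrum. Close to 1 for white noise (a flat, unpredictable
    spectrum) and close to 0 for a pure tone (a highly peaked one)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    power = power[power > 0]  # drop zero bins to avoid log(0)
    geometric = np.exp(np.mean(np.log(power)))
    return geometric / np.mean(power)

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)                          # broadband noise
tone = np.sin(2 * np.pi * 440 * np.arange(4096) / 44100)   # 440 Hz sine
```

On these two signals the measure behaves as expected: the noise scores much higher flatness than the tone, which is the sense in which higher entropy scores stand in for higher unpredictability.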
P.16, line 301: typo: "not support" should be "no support" Reply: Thank you for bringing this to our attention. This mistake has been corrected. P.19, line 398-407: the authors discuss the advantages of using naturalistic music, but should consider some additional disadvantages. For one, all naturalistic music is designed by artists to be enjoyable, at least in principle. This could explain why such a large portion of the sample did not display a Wundt curve (i.e. flat curves). Another downside would be that participants may already be familiar with specific compositions, which could, for instance, trigger a mere exposure effect where known compositions are rated higher than unknown compositions. Reply: We thank you for raising these interesting points. We briefly mentioned familiarity, style and genre in the discussion in relation to a study on individuals with ASD and preference for tone sequences, and have now linked this to issues in our data specifically (lines 446-455). Comments from Reviewer 2 This study is pre-registered? (this was hinted but has not been explicitly stated) Reply: Thank you for bringing this to our attention. We have now added explicit statements regarding preregistration. While it is clever to use the peak of the Wundt curve to indicate the most preferred complexity of a subject, I feel that the manuscript placed too much emphasis on the Wundt curve and that has somewhat confused its original aim: if the goal is to investigate whether autistic/psychotic traits shift preferences for musical complexity, then whether a subject demonstrates the Wundt effect is not a necessary requirement. In my opinion, a better approach would be to fit a quadratic curve for each subject as the authors have already done, and use the bounded maximum value (i.e., the peak or the value from the song with the highest rating) for analyses. This allows data for all subjects to be retained and thus improve statistical power. Reply: We thank you for this suggestion. 
We increased the sample size by calculating the bounded maximum for the quadratic curves, and then repeated the analyses. The partial correlations were non-significant (for both complexity and entropy scores), and so were the bivariate correlations. Here, we excluded those with slopes close to 0 (and no linear slopes distinguishable from 0 at p=.05), per our pre-registered criteria. Note that the results are still the same if these 22 (excluded) participants are included in the analyses. The results are presented in the supplementary materials (S3 text). Relatedly, the authors should discuss why the Wundt effect was not present in ~45% of subjects - not only superficially from a statistical point of view, but also from a cognitive perspective. The discussion from Chmiel and Schubert 2017 should be a helpful resource. Reply: We thank you for pointing out that this needed to be unpacked more, and for the helpful resource. We briefly mention the range of stimuli being restricted in terms of complexity range, but we have now unpacked this idea more (lines 410-419). We have also mentioned this issue in relation to other features that influence music enjoyment (lines 448-455). The justification of examining both autistic and psychotic traits in the same study seemed rather superficial to me. Apart from the connection that the two are related to preferences for order/repetition, it is important to elaborate on other shared mechanisms that relate the two together. Otherwise, we cannot go further to understand why such shifts in preferences occur. Reply: We thank you for raising this point. We have added more information on shared mechanisms (lines 110-115). Musical preference is not only due to personality traits. Previous exposure via statistical learning plays just as important a role. See Pearce 2018 for a discussion. Reply: We thank you for bringing up this argument. 
This is mentioned in the example we added (lines 81-86) with regards to reviewer 1’s request for an example relating to differences in subjective perception of predictability. We have also added a paragraph in the strengths and limitations section where this is again mentioned in relation to musical training (lines 434-436). It should be noted that while we can comment on this, we cannot use our data to control for statistical learning. As the role of expertise was not part of our research question, we could not justify lengthening the survey by including a comprehensive screening of musical expertise. Consequently, the data are too limited in this respect to make any strong statements about the effect of previous exposure, such as refuting the findings by Hansen and Pearce (2014). Please justify why only ACE-IQ and mood were included as controls when other tests were recorded. Reply: We thank you for pointing out these unclarities. In our pre-registration, we registered that we would control for mood and ACE-IQ, which was based on the rationale that these may affect the preferred level of complexity (with ACE-IQ being related to stress responses, as briefly mentioned in the discussion). As mentioned above, we did not include a comprehensive screening of musical expertise, but we still wanted to measure training years to get some indication of the level of musical training in our sample. Similarly, we included daily listening to give some indication of the level of passive exposure to music, and we included BAIS to give some indication of vividness of auditory imagery. Thus, these measures were included to describe the sample, but we did not have a theoretical rationale for making predictions about how training years, daily listening, or vividness of auditory imagery would link to expert ratings of complexity (for example, due to limited screening, see above reply). 
These relationships were therefore explored in exploratory analyses, where we found no support for any of these being associated with preferred level of complexity. I am not sure where the data from Figure 1 comes from - are these hypothetical ratings from two fake subjects? I feel it would be more informative to have actual preference ratings from some of the subjects and to show the Inverted-U from real data. Reply: We thank you for this suggested improvement. We wish to keep figure 1 as a visual aid for our explanation, but it has been revised so that it is clear it represents a hypothetical effect. We have added a new figure showing example data from participants with and without a Wundt curve. Submitted filename: Response to Reviewers.docx 25 Jul 2022
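Reviewer 2's bounded-maximum approach, used for the added analyses above, can be sketched as follows. This is an illustrative reconstruction under our own naming, not the authors' code: the preferred complexity is taken as the argmax of the fitted quadratic within the observed complexity range, so participants with monotone or flat preference curves are retained rather than excluded.

```python
import numpy as np

def bounded_maximum(complexity, liking):
    """Preferred complexity as the bounded maximum of a per-subject
    quadratic fit: the argmax of the fitted curve within the observed
    complexity range. An interior vertex is used only when the fit is
    an inverted U; otherwise an endpoint of the range wins."""
    c2, c1, c0 = np.polyfit(complexity, liking, deg=2)
    lo, hi = min(complexity), max(complexity)
    candidates = [lo, hi]
    if c2 < 0:  # inverted U: vertex is a maximum; clamp it to the range
        candidates.append(min(max(-c1 / (2 * c2), lo), hi))
    fitted = lambda x: c2 * x**2 + c1 * x + c0
    return max(candidates, key=fitted)

# A monotonically increasing preference curve: the bounded maximum is
# the upper end of the complexity range rather than an interior peak.
complexity = np.array([1, 2, 3, 4, 5], dtype=float)
liking     = np.array([1, 2, 3, 4, 5], dtype=float)
```

For a participant with a genuine inverted-U curve, the same function returns the interior peak, so it agrees with the Wundt-curve analysis wherever that analysis applies.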
PONE-D-21-22799R1
Sweet spot in music – is predictability preferred among persons with psychotic-like experiences or autistic traits?
PLOS ONE Dear Dr. Lisøy, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. There are two very clear suggestions that can be made based on your study: 1) make the range of complexity (much) larger; 2) include subject-specific complexity measures.
Regarding 2, predictability can be varied either by changing the items or by changing the subjects (i.e., increased expertise with specific musical pieces). Indeed, predictability is an inherently subjective (in the sense of subject-specific) measure.
 
You mention both factors briefly in your Limits and Strengths section, but I think it's worthwhile to be (even) more specific that this is the way to go for future study on this topic. Please submit your revised manuscript by Sep 08 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Tom Verguts Academic Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. 
If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. Reviewers' comments: [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/.
8 Sep 2022 Comment from editor: There are two very clear suggestions that can be made based on your study: 1) make the range of complexity (much) larger; 2) include subject-specific complexity measures. Regarding 2, predictability can be varied either by changing the items or by changing the subjects (i.e., increased expertise with specific musical pieces). Indeed, predictability is an inherently subjective (in the sense of subject-specific) measure. You mention both factors briefly in your Limits and Strengths section, but I think it's worthwhile to be (even) more specific that this is the way to go for future study on this topic. Reply: We thank you for this suggested improvement. We have added a new section (“directions for future research”, lines 459-485) where we have discussed these two points in more detail, as well as made recommendations for future research. We hope that our modifications have increased the standard of the manuscript sufficiently. Submitted filename: Response to Reviewers.docx 14 Sep 2022 Sweet spot in music – is predictability preferred among persons with psychotic-like experiences or autistic traits? PONE-D-21-22799R2 Dear Dr. Lisøy, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. 
If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Tom Verguts Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: 20 Sep 2022 PONE-D-21-22799R2 Sweet spot in music – is predictability preferred among persons with psychotic-like experiences or autistic traits? Dear Dr. Lisøy: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Tom Verguts Academic Editor PLOS ONE
References (69 in total)

1.  The Unengaged Mind: Defining Boredom in Terms of Attention.

Authors:  John D Eastwood; Alexandra Frischen; Mark J Fenske; Daniel Smilek
Journal:  Perspect Psychol Sci       Date:  2012-09

2.  The sound of beauty: How complexity determines aesthetic preference.

Authors:  Jeroen Delplanque; Esther De Loof; Clio Janssens; Tom Verguts
Journal:  Acta Psychol (Amst)       Date:  2018-11-30

3.  Beyond the usual suspects: positive attitudes towards positive symptoms is associated with medication noncompliance in psychosis.

Authors:  Steffen Moritz; Jerome Favrod; Christina Andreou; Anthony P Morrison; Francesca Bohn; Ruth Veckenstedt; Peter Tonn; Anne Karow
Journal:  Schizophr Bull       Date:  2012-02-15       Impact factor: 9.306

4.  Humans Rapidly Learn Grammatical Structure in a New Musical Scale.

Authors:  Psyche Loui; David L Wessel; Carla L Hudson Kam
Journal:  Music Percept       Date:  2010-06-01

5.  Belief inflexibility in schizophrenia.

Authors:  Todd S Woodward; Steffen Moritz; Mahesh Menon; Ruth Klinge
Journal:  Cogn Neuropsychiatry       Date:  2008-05       Impact factor: 1.871

6.  The Relation Between Preference for Predictability and Autistic Traits.

Authors:  Judith Goris; Marcel Brass; Charlotte Cambier; Jeroen Delplanque; Jan R Wiersema; Senne Braem
Journal:  Autism Res       Date:  2019-12-04       Impact factor: 5.216

7.  Associations between belief inflexibility and dimensions of delusions: A meta-analytic review of two approaches to assessing belief flexibility.

Authors:  Chen Zhu; Xiaoqi Sun; Suzanne Ho-Wai So
Journal:  Br J Clin Psychol       Date:  2017-08-14

8.  Social Cognitive Performance in Schizophrenia Spectrum Disorders Compared With Autism Spectrum Disorder: A Systematic Review, Meta-analysis, and Meta-regression.

Authors:  Lindsay D Oliver; Iska Moxon-Emre; Meng-Chuan Lai; Laura Grennan; Aristotle N Voineskos; Stephanie H Ameis
Journal:  JAMA Psychiatry       Date:  2021-03-01       Impact factor: 21.596

9.  Rethinking delusions: A selective review of delusion research through a computational lens.

Authors:  Brandon K Ashinoff; Nicholas M Singletary; Seth C Baker; Guillermo Horga
Journal:  Schizophr Res       Date:  2021-03-03       Impact factor: 4.662

10.  Beauty and the beholder: the role of visual sensitivity in visual preference.

Authors:  Branka Spehar; Solomon Wong; Sarah van de Klundert; Jessie Lui; Colin W G Clifford; Richard P Taylor
Journal:  Front Hum Neurosci       Date:  2015-09-23       Impact factor: 3.169
