| Literature DB >> 30104989 |
Oliver Bones, Trevor J Cox, William J Davies.
Abstract
Five evidence-based taxonomies of everyday sounds frequently reported in the soundscape literature have been generated. An online sorting and category-labeling method that elicits rather than prescribes descriptive words was used. A total of N = 242 participants took part. The main categories of the soundscape taxonomy were people, nature, and manmade, with each dividing into further categories. Sounds within the nature and manmade categories, and two further individual sound sources, dogs and engines, were explored further by repeating the procedure using multiple exemplars. By generating multidimensional spaces containing both sounds and the spontaneously generated descriptive words, the procedure allows for the interpretation of the psychological dimensions along which sounds are organized. This reveals how category formation is based upon different cues - sound source-event identification, subjective states, and explicit assessment of the acoustic signal - in different contexts. At higher levels of the taxonomy the majority of words described sound source-events. In contrast, when categorizing dog sounds a greater proportion of the words described subjective states, and the valence and arousal scores of these words correlated with their coordinates along the first two dimensions of the data. This is consistent with valence and arousal judgments being the primary categorization strategy used for dog sounds. When categorizing engine sounds, by contrast, a greater proportion of the words explicitly described the acoustic signal, and the coordinates of sounds along the first two dimensions correlated with fluctuation strength and sharpness, consistent with explicit assessment of acoustic signal features underlying category formation for engine sounds.
By eliciting descriptive words, the method makes explicit the subjective meaning of these judgments based upon valence and arousal and upon acoustic properties, and the results demonstrate distinct strategies being spontaneously used to categorize different types of sounds.
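The correlation step described in the abstract, relating each word's valence or arousal score to its coordinate along a dimension of the sorting space, reduces to an ordinary correlation coefficient. A minimal stdlib sketch; the coordinates and valence ratings below are invented for illustration and are not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical dimension-1 coordinates of five descriptive words and
# their valence ratings (illustrative values only).
dim1 = [-1.2, -0.5, 0.1, 0.8, 1.4]
valence = [2.1, 3.0, 3.4, 4.2, 4.8]
print(round(pearson_r(dim1, valence), 3))
```

A strong positive correlation of this kind is what would support the claim that position along a dimension tracks valence.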
Keywords: acoustic correlates; arousal; categories; category formation; everyday sounds; soundscape; taxonomy; valence
Year: 2018 PMID: 30104989 PMCID: PMC6077929 DOI: 10.3389/fpsyg.2018.01277
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
Demographic data of participants for all studies.
| | | Soundscape | Nature | Manmade | Dogs | Engines |
|---|---|---|---|---|---|---|
| Age | 18–29 | 41 | 35 | 35 | 70 | 33 |
| | 30–39 | 37 | 41 | 38 | 12 | 49 |
| | 40–49 | 18 | 11 | 13 | 14 | 10 |
| | 50–59 | 2 | 16 | 10 | 4 | 6 |
| | 60–69 | 2 | 2 | 2 | 0 | 2 |
| | 70–79 | 0 | 2 | 2 | 0 | 0 |
| Sex | Male | 39 | 36 | 63 | 46 | 43 |
| | Female | 61 | 61 | 38 | 54 | 57 |
| | Rather not say | 0 | 2 | 0 | 0 | 0 |
| Audio expert | Yes | 16 | 16 | 15 | 18 | 2 |
| | No | 84 | 84 | 85 | 82 | 98 |
Descriptive words that were significantly over-represented in the first cluster of the soundscape categorization data.
| Descriptive word | Internal freq. | Global freq. | p | Test value |
|---|---|---|---|---|
| People | 257 | 357 | <0.001 | 22.109 |
| Music | 63 | 121 | <0.001 | 7.438 |
| Vocal | 16 | 16 | <0.001 | 6.608 |
| Entertainment | 18 | 20 | <0.001 | 6.352 |
| Chatter | 10 | 10 | <0.001 | 5.060 |
| Changes | 9 | 10 | <0.001 | 4.316 |
| Harmony | 9 | 10 | <0.001 | 4.316 |
| Social | 9 | 11 | <0.001 | 3.974 |
| Alive | 9 | 11 | <0.001 | 3.974 |
| Enjoying | 8 | 12 | 0.002 | 3.096 |
| Marine | 7 | 10 | 0.003 | 2.993 |
| Species | 9 | 16 | 0.005 | 2.801 |
| Pleasant | 8 | 14 | 0.008 | 2.658 |
| Events | 6 | 9 | 0.009 | 2.606 |
| Relaxing | 5 | 8 | 0.029 | 2.184 |
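Over-representation of a word within a cluster (internal vs. global frequency, as tabulated above) is commonly tested with a one-sided hypergeometric test. A minimal stdlib sketch; the corpus totals below (overall token count and cluster size) are invented for illustration, not taken from the study:

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) for X ~ Hypergeometric(N items, K marked, n drawn)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical corpus: 5000 word tokens overall, 800 of them in the cluster.
# A word with global frequency 357 and internal frequency 257 appears in the
# cluster far more often than random draws would predict.
p = hypergeom_sf(257, N=5000, K=357, n=800)
print(p < 0.001)
```

The test asks how likely it is to draw at least `k` occurrences of the word into the cluster by chance, given its global frequency.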
Percentages of different types of descriptive words used at each level of the taxonomy and for each type of sound.
| | Top | Middle | Bottom |
|---|---|---|---|
| Source | 81.2 | 75.1 | 42.0 |
| Acoustic | 14.8 | 17.7 | 35.6 |
| Subjective | 4.0 | 7.2 | 22.4 |

| | Soundscape | Nature | Manmade | Dogs | Engines |
|---|---|---|---|---|---|
| Source | 81.2 | 75.6 | 74.6 | 24.0 | 60.0 |
| Acoustic | 14.8 | 13.8 | 21.7 | 34.0 | 37.1 |
| Subjective | 4.0 | 10.7 | 3.8 | 42.0 | 2.9 |
Results of the multinomial logit regression models.
| Comparison | Contrast | b | exp(b) | SE | p |
|---|---|---|---|---|---|
| Middle vs. Top | Subjective vs. Source | 0.65 | 1.9 | 0.37 | 0.08 |
| | Acoustic vs. Source | 0.26 | 1.30 | 0.22 | 0.22 |
| | Acoustic vs. Subjective | -0.39 | 0.68 | 0.41 | 0.35 |
| Bottom vs. Top | Subjective vs. Source | 2.40 | 11.0 | 0.34 | <0.001∗ |
| | Acoustic vs. Source | 1.54 | 4.7 | 0.21 | <0.001∗ |
| | Acoustic vs. Subjective | -0.86 | 0.4 | 0.38 | 0.022∗ |
| Bottom vs. Middle | Subjective vs. Source | 1.74 | 5.7 | 0.22 | <0.001∗ |
| | Acoustic vs. Source | 1.27 | 3.6 | 0.16 | <0.001∗ |
| | Acoustic vs. Subjective | -0.47 | 0.6 | 0.24 | 0.049∗ |
| Nature vs. Manmade | Subjective vs. Source | 1.03 | 2.8 | 0.41 | 0.011∗ |
| | Acoustic vs. Source | -0.47 | 0.6 | 0.25 | 0.063 |
| | Acoustic vs. Subjective | -1.50 | 0.2 | 0.45 | <0.001∗ |
| Dogs vs. Engines | Subjective vs. Source | 3.60 | 36.7 | 0.42 | <0.001∗ |
| | Acoustic vs. Source | 0.83 | 2.3 | 0.22 | <0.001∗ |
| | Acoustic vs. Subjective | -2.78 | 0.1 | 0.42 | <0.001∗ |

∗p < 0.05.
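In a multinomial logit, each coefficient b converts to an odds ratio exp(b), and the paired values in the table above (e.g. 2.40 and 11.0) are consistent with this relationship. A quick stdlib check against a few rows, using the values as reported (rounded) in the table:

```python
from math import exp

# Coefficient / odds-ratio pairs taken from the table above; each reported
# odds ratio should match exp(b) at the reported rounding.
pairs = [(0.65, 1.9), (2.40, 11.0), (1.74, 5.7), (-0.86, 0.4), (-2.78, 0.1)]
for b, odds_ratio in pairs:
    print(b, round(exp(b), 1), odds_ratio)
```

Dividing b by the adjacent standard-error column under a normal approximation also reproduces the reported p-values (e.g. 0.65/0.37 ≈ 1.76, two-sided p ≈ 0.08), though that reading of the columns is inferred from the numbers, not stated in the record.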