Yanze Xu, Weiqing Wang, Huahua Cui, Mingyang Xu, Ming Li.
Abstract
Humans can recognize a person's identity from their voice and describe the timbral phenomena of that voice. Likewise, the singing voice has timbral phenomena. In vocal pedagogy, vocal teachers listen to and then describe the timbral phenomena of their students' singing voices. In this study, to enable machines to describe the singing voice from the vocal pedagogy point of view, we perform a task called paralinguistic singing attribute recognition. To this end, we first construct and publish an open-source dataset named the Singing Voice Quality and Technique Database (SVQTD) for supervised learning. All audio clips in SVQTD are downloaded from YouTube and processed by music source separation and silence detection. For annotation, seven paralinguistic singing attributes commonly used in vocal pedagogy are adopted as the labels. Furthermore, to explore different supervised machine learning algorithms for classifying each paralinguistic singing attribute, we adopt three main frameworks: openSMILE features with a support vector machine (SF-SVM), end-to-end deep learning (E2EDL), and deep embeddings with a support vector machine (DE-SVM). Our methods build on frameworks commonly employed in other paralinguistic speech attribute recognition tasks. In SF-SVM, we separately use the feature set of the INTERSPEECH 2009 Challenge and that of the INTERSPEECH 2016 Challenge as the SVM classifier's input. In E2EDL, the end-to-end framework separately utilizes ResNet and a transformer encoder as the feature extractor. In particular, to handle the two-dimensional spectrogram input for the transformer, we adopt a sliced multi-head self-attention (SMSA) mechanism. In DE-SVM, we use the representation extracted by the E2EDL model as the input to the SVM classifier. Experimental results on SVQTD show no absolute winner between E2EDL and DE-SVM, meaning that a back-end SVM classifier fed with the end-to-end learned representation does not necessarily improve performance. However, DE-SVM with ResNet as the feature extractor achieves the best average UAR, a 16% average improvement over SF-SVM with INTERSPEECH's hand-crafted feature sets.
Keywords: Music perception; Paralinguistic singing attribute recognition; Vocal pedagogy
Year: 2022 PMID: 35440938 PMCID: PMC9011380 DOI: 10.1186/s13636-022-00240-z
Source DB: PubMed Journal: EURASIP J Audio Speech Music Process ISSN: 1687-4714
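All results below are reported as unweighted average recall (UAR): the mean of the per-class recalls with every class weighted equally, which is more informative than plain accuracy given the heavy class imbalance visible in the dataset table below. As a minimal illustration (not code from the paper), UAR is scikit-learn's macro-averaged recall:

```python
# Minimal illustration of the UAR metric used throughout the tables
# below: the unweighted mean of per-class recalls, so rare classes
# count as much as frequent ones.
from sklearn.metrics import recall_score

y_true = [0, 0, 0, 1, 2, 2]  # toy labels for a 3-class subtask
y_pred = [0, 0, 1, 1, 2, 0]

# average="macro" averages recall over classes without class weights,
# which is exactly UAR.
uar = recall_score(y_true, y_pred, average="macro")
print(f"UAR = {uar:.2%}")  # per-class recalls 2/3, 1/1, 1/2 -> 72.22%
```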
Fig. 1 The number of vocal segments of each aria in SVQTD
Number of vocal segments in each class of each paralinguistic singing attribute in SVQTD

| Attribute | Class 1 | Class 2 | Class 3 | Class 4 | # Classes |
|---|---|---|---|---|---|
| Chest resonance | 101 | 804 | 2341 | 786 | 4 |
| Head resonance | 191 | 967 | 2435 | 439 | 4 |
| Open throat | 2757 | 845 | 366 | 64 | 4 |
| Roughness | 3552 | 480 | N/A | N/A | 2 |
| Front placement singing | 1052 | 2845 | 135 | N/A | 3 |
| Back placement singing | 1052 | 2845 | 135 | N/A | 3 |
| Vibrato | 3157 | 724 | 151 | N/A | 3 |
Fig. 2 Visual comparison of example pairs of paralinguistic singing attributes. PS, wide-band power spectrogram; F#, the corresponding formants depicted on the narrow-band spectrogram
Fig. 3 The three proposed frameworks
Features in the INTERSPEECH 2009 Emotion Challenge feature set

| LLD (16×2) | Functionals (12) |
|---|---|
| (Δ) ZCR | Mean |
| (Δ) RMS energy | Standard deviation |
| (Δ) F0 | Kurtosis, skewness |
| (Δ) HNR | Extremes: value, rel. position, range |
| (Δ) MFCC 1–12 | Linear regression: offset, slope, MSE |
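For orientation, the SF-SVM framework amounts to extracting one fixed-length functional vector per clip with openSMILE and feeding it to an SVM. Below is a minimal sketch using the `opensmile` Python package and scikit-learn; it uses the ComParE 2016 functionals (the 2009 emotion set above is not shipped with that package), and the file names and labels are placeholders rather than SVQTD data:

```python
# Sketch of the SF-SVM framework: openSMILE functionals -> SVM.
# Assumptions: the `opensmile` Python package (ComParE 2016
# functionals, 6373 per clip) and placeholder files/labels.
import opensmile
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,
    feature_level=opensmile.FeatureLevel.Functionals,
)

train_files = ["clip_000.wav", "clip_001.wav", "clip_002.wav"]  # hypothetical
train_labels = [0, 1, 2]                                        # hypothetical

# One fixed-length functional vector per variable-length clip.
X_train = [smile.process_file(f).to_numpy().ravel() for f in train_files]

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(X_train, train_labels)
```

Standardizing the features before the SVM matters in practice, since the functionals span very different numeric ranges.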
Fig. 4 The standard transformer encoder architecture
Fig. 5 The architecture of our SMSA model
The details of the proposed transformer encoder model. TE transformer encoder, SMSA sliced multi-head self-attention, MSA multi-head self-attention, MLP multi-layer perceptron, FC fully connected, GMP global max pooling
| Layer | | | Parameters | Output |
|---|---|---|---|---|
| Extractor | SMSA | Slice | [1×1, 32] | |
| | | MSA | | |
| | | MLP | | |
| | | Cat | | |
| | FC | | [1×1, 128] | |
| | FC | | [1×1, 512] | |
| | FC | | [1×1, 128] | |
| | GMP | | | 1×128 |
| Classifier | MLP | FC | [1×1, 64] | 1×64 |
| | | FC | [1×1, …] | 1×… |
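The Slice → MSA → MLP → Cat rows above form the SMSA block: the two-dimensional spectrogram feature map is cut into slices so that standard multi-head self-attention can run inside each slice before the slices are concatenated again. The PyTorch sketch below illustrates that idea only; the slice count, dimensions, and slicing axis are assumptions, not the paper's exact configuration:

```python
# Rough PyTorch sketch of sliced multi-head self-attention (SMSA):
# slice the 2-D feature map along the frequency axis, run multi-head
# self-attention over time inside each slice, then concatenate.
# Slice count and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn

class SlicedMHSA(nn.Module):
    def __init__(self, freq_bins=128, n_slices=4, n_heads=4):
        super().__init__()
        assert freq_bins % n_slices == 0
        self.slice_dim = freq_bins // n_slices
        self.attn = nn.MultiheadAttention(
            embed_dim=self.slice_dim, num_heads=n_heads, batch_first=True
        )

    def forward(self, x):                     # x: (batch, time, freq)
        outs = []
        for s in torch.split(x, self.slice_dim, dim=-1):
            attended, _ = self.attn(s, s, s)  # self-attention per slice
            outs.append(attended)
        return torch.cat(outs, dim=-1)        # (batch, time, freq)

spec = torch.randn(8, 300, 128)               # toy batch of spectrograms
print(SlicedMHSA()(spec).shape)               # torch.Size([8, 300, 128])
```

Slicing keeps each attention's embedding dimension small while still letting every slice attend over the full time axis.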
Different frameworks’ UAR results for classification subtasks of three 4-class paralinguistic singing attributes
Unweighted average recall (UAR) [%]

| Frameworks | Chest resonance | Head resonance | Open throat | Average |
|---|---|---|---|---|
| SF-SVM (ComParE09) | 34 | 37.21 | 28.1 | 33.10 |
| SF-SVM (ComParE16) | 38.7 | 34.34 | 39.74 | 37.59 |
| E2EDL (ResNet) | 44.39 | 37.33 | 28.8 | 36.84 |
| E2EDL (Transformer) | 41.54 | 37.68 | 29 | 36.07 |
| DE-SVM (ResNet) | 30.82 | | | |
| DE-SVM (Transformer) | 42.17 | 40.26 | 22.58 | 35.00 |
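The DE-SVM rows are obtained by discarding the end-to-end model's classifier head and fitting an SVM on the extractor's embeddings (a 128-dimensional vector after global max pooling, per the layer table above). A minimal sketch of the idea, with a placeholder network and toy data standing in for the trained ResNet/transformer and SVQTD:

```python
# Sketch of the DE-SVM idea: reuse a trained end-to-end extractor as
# an embedding network and fit an SVM on the 128-d embeddings.
# `extractor` is a placeholder, not the paper's ResNet/transformer.
import numpy as np
import torch
from sklearn.svm import SVC

extractor = torch.nn.Linear(300 * 128, 128)  # stand-in for a trained net

def embed(spectrogram: np.ndarray) -> np.ndarray:
    """Map one (time, freq) spectrogram to a 128-d embedding."""
    with torch.no_grad():
        x = torch.from_numpy(spectrogram).float().flatten()
        return extractor(x).numpy()

specs = [np.random.randn(300, 128) for _ in range(8)]  # toy clips
labels = np.array([0, 1, 2, 3, 0, 1, 2, 3])            # 4-class toy labels
X = np.stack([embed(s) for s in specs])
svm = SVC(kernel="linear").fit(X, labels)               # back-end classifier
```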
Different frameworks’ UAR results for classification subtasks of three 3-class paralinguistic singing attributes
Unweighted average recall (UAR) [%]

| Frameworks | Front placement singing | Back placement singing | Vibrato | Average |
|---|---|---|---|---|
| SF-SVM (ComParE09) | 31.87 | 34.91 | 35.52 | 34.10 |
| SF-SVM (ComParE16) | 33.7 | 33.76 | 42.84 | 36.77 |
| E2EDL (ResNet) | 36.2 | 41.89 | 37.76 | 38.62 |
| E2EDL (Transformer) | 33.6 | 38.97 | 37.33 | 36.63 |
| DE-SVM (ResNet) | 33.22 | 33.76 | | |
| DE-SVM (Transformer) | 30.61 | 36.71 | 43.67 | 37.00 |
Different frameworks’ UAR results for the binary classification subtask of roughness
Unweighted average recall (UAR) [%]

| Frameworks | Roughness |
|---|---|
| SF-SVM (ComParE09) | 51.85 |
| SF-SVM (ComParE16) | 55.19 |
| E2EDL (ResNet) | 56.23 |
| E2EDL (Transformer) | 55.39 |
| DE-SVM (ResNet) | |
| DE-SVM (Transformer) | 54.4 |