| Literature DB >> 35795758 |
Yonghui Duan, Jianping Wang.
Abstract
Semiautomated digital creation is increasingly important in the manipulation of electronic music. How to realize the learning of local effective features of audio data is a difficult point in the current research field. Based on recurrent neural network theory, this paper designs a semiautomatic digital creation system for electronic music for digital manipulation and genre classification. The recurrent neural network improves the transmission of electronic music information between the input and output of the network by adopting dense connections consistent with DenseNet and adopts an inception-like structure for the autonomous selection of effective recursive nuclear electronic music categories. In the simulation process, the prediction method based on semiautomatic digital audio clips is also adopted, which pays more attention to the learning of local effective features of audio data, which gives the model the ability to create audio samples of different lengths and improves the model's support for creative tasks in different scenarios. It includes the determination of the number of neurons, the selection of the function of neurons, the determination of the connection method, and the specific learning algorithm rules, and then the training samples are formed. The experimental results show that the recurrent neural network exhibits powerful feature extraction ability and classification ability of music information. The 10-fold cross-validation on GTZAN dataset and ISMIR2004 dataset has obtained 88.7% and 87.68%, surpassing similar ones. The model has reached a leading level. After further use of the MSD (Million Song Dataset) dataset for pre-semiautomatic training, the model effect has been further greatly improved. The accuracy rate on the dataset has been increased to 91.0% and 89.91%, respectively, which has improved the semiautomatic number and creative advancement.Entities:
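The architecture described in the abstract (DenseNet-style dense connections between recurrent states, plus an Inception-like branch that offers several receptive-field sizes over each audio clip) might be sketched as follows. This is a minimal illustrative sketch: all dimensions, kernel sizes, and function names are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_recurrent_step(x_t, history, W_in, W_hist):
    """One recurrent step with DenseNet-style dense connections: the new
    hidden state is computed from the current input and the concatenation
    of ALL previous hidden states, not just the most recent one."""
    h_cat = np.concatenate(history)                  # (t * hidden,)
    pre = W_in @ x_t + W_hist[:, :h_cat.size] @ h_cat
    return np.tanh(pre)

def inception_like_features(frame, kernel_sizes=(3, 5, 7)):
    """Inception-like branch: filter the same audio frame with several
    kernel sizes in parallel and pool each response, so the model can
    draw on whichever receptive field is effective for the category."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                      # placeholder kernels
        conv = np.convolve(frame, kernel, mode="valid")
        feats.append(conv.max())                     # global max pool per branch
    return np.array(feats)

# Toy dimensions (assumptions for illustration only)
hidden, n_steps, frame_len = 8, 4, 32
W_in = rng.standard_normal((hidden, 3))              # 3 = number of branches
W_hist = rng.standard_normal((hidden, hidden * n_steps))

history = [np.zeros(hidden)]                         # initial state
for t in range(n_steps):
    frame = rng.standard_normal(frame_len)           # one audio clip/frame
    x_t = inception_like_features(frame)
    h_t = dense_recurrent_step(x_t, history, W_in, W_hist)
    history.append(h_t)
```

Because every step sees the concatenation of all earlier states, information from early clips reaches the output directly rather than only through repeated state updates, which is the motivation the abstract gives for the dense connections.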
Mesh:
Year: 2022 PMID: 35795758 PMCID: PMC9252672 DOI: 10.1155/2022/5457376
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Similarity distribution of eigenvalues of the recurrent neural network.
Figure 2. Recursive network digital audio extraction distribution.
Figure 3. Recursive network digital topology.
Figure 4. Distribution of network elements in the network music power spectrum.
Recurrent neural network coding.
| Recurrent unit | Development parameter | Network coding | Network weight |
|---|---|---|---|
| 10 | V-net | Code data | 0.87 |
| 20 | Cnn-test | The semiautomatic training | 0.40 |
| 30 | | Test data | 0.45 |
| 40 | Algorithm-t | The neural network | 0.77 |
| 50 | Cet-tain | All samples are considered | 0.43 |
| 60 | Case-line-t | Training data | 0.11 |
Figure 5. Online music classification and reconstruction test data.
Figure 6. Polar distribution of semiautomatic digital symbol mapping.
Figure 7. Electronic music semiautomated digital dependencies.
Figure 8. Electronic music semiautomation fundamental frequency distribution.
Recurrent neural network pitch frequency analysis algorithm.
| Recurrent neural network content | Pitch frequency analysis algorithm |
|---|---|
| Input pitch frequency network: | After preprocessing semiautomatic |
| Input frequency network: | Steps to obtain the map |
| Input pitch network: | Digital operations such as |
| Input case pitch frequency network: exp(2 | Combined with the note |
| plt.figure(figsize=(8, 4)) | Framing the synthesized audio |
| ax1 = plt.subplot(111, projection='polar') | Features of the note fragment |
| ax1.set_title("spot fish") | log( |
| ax1.set_rlim(0, 12) | Perform the above three |
| bar = ax1.bar(theta, data, alpha=0.5) | |
| data = np.random.randint(1, 10, 10) | Endpoint detection algorithm |
| theta = np.arange(0, 2 * np.pi, 2 * np.pi/10) | |
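The matplotlib fragments in the table above appear to draw a polar bar chart of mapped values (compare Figure 6). Assembled in dependency order, with `theta` and `data` defined before the `bar` call, they form the runnable sketch below. The `Agg` backend and the output filename are assumptions added so the script runs without a display.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend (assumption: no display available)
import matplotlib.pyplot as plt
import numpy as np

# 10 angular positions around the circle and a random magnitude per sector
theta = np.arange(0, 2 * np.pi, 2 * np.pi / 10)
data = np.random.randint(1, 10, 10)

plt.figure(figsize=(8, 4))
ax1 = plt.subplot(111, projection="polar")
ax1.set_title("spot fish")
ax1.set_rlim(0, 12)
bar = ax1.bar(theta, data, alpha=0.5)   # polar bar chart of the mapped values
plt.savefig("polar_mapping.png")        # filename is an assumption
```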
Figure 9. Distribution of semiautomated digital operations in online music.