| Literature DB >> 35837219 |
Abstract
Applying machine learning to intelligent music creation has become an important research area. Most current intelligent music creation methods use fixed coding steps on the audio data, which leads to weak feature expression ability. Based on convolutional neural network (CNN) theory, this paper proposes a deep intelligent music creation method. The model uses a convolutional recurrent neural network to generate an effective hash code: the music signal is first preprocessed to obtain a Mel spectrogram, which is then fed into a pretrained CNN to extract features from its convolutional layers. Spatial details of the network and the semantic information of musical symbols are combined by applying a selection strategy to the feature maps of each convolutional layer to construct a feature-map sequence, addressing the problems of high feature dimensionality and poor recognition performance. In the simulation, the Mel-frequency cepstral coefficient (MFCC) method was used to extract features from four different music signals; features representative of each signal were extracted through the convolutional neural network, and the continuous signals were discretized and dimensionally reduced. The experimental results show that the high-dimensional music data are reduced in dimension at the data level. After the data are compressed, the accuracy of intelligent creation reaches 98% and the characteristic signal distortion rate is reduced to below 5%, effectively improving the performance of the algorithm and its ability to create music intelligently.
Year: 2022 PMID: 35837219 PMCID: PMC9276503 DOI: 10.1155/2022/2854066
Source DB: PubMed Journal: Comput Intell Neurosci
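The preprocessing step described in the abstract (a Mel spectrogram and MFCC features computed from the music signal) is not spelled out in the record. A minimal numpy-only sketch is shown below; the frame length, hop size, FFT size, and filter counts are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # triangular filters spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_mels=26, n_ceps=13):
    # frame the signal and apply a Hamming window
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # power spectrum of each frame
    spec = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # mel filterbank -> log energies (the Mel spectrogram step)
    logmel = np.log(spec @ mel_filterbank(n_mels, n_fft, sr).T + 1e-10)
    # DCT-II of the log-mel energies gives the cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return logmel @ dct.T

# one second of a 440 Hz tone as a stand-in music signal
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)  # one 13-coefficient vector per frame
```

In a real pipeline the `logmel` matrix (the Mel spectrogram) would be the input to the pretrained CNN, while the MFCC matrix serves as the compact per-frame feature the abstract simulates with.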
Figure 1. Convolutional neural network hierarchical relationship set.
Figure 2. Dimensionality reduction processing of convolution kernel network in convolution layer.
Figure 3. Music principal component scale factor analysis.
Figure 4. Network convolution pooling topology.
Figure 5. Convolutional neural network feature extraction spectral distribution.
Figure 6. Label distribution of intelligent authoring layer of convolutional neural network.
Authoring model structure optimization algorithm.

| Authoring model text | Optimization algorithm steps |
|---|---|
| | import matplotlib.pyplot as plt |
| Between each | import numpy as np |
| The number of nodes | import matplotlib as mpl |
| Excitation function | import settings |
| Initialize the connection weights | |
| Determine the rate | |
| Initialize the thresholds | |
| 1 − exp(·) | Hidden layer and input layer |
| Calculate the ∑ | |
| Determine | Output layer of the network |
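The table's steps (initialize connection weights and thresholds for the hidden and output layers, choose a rate, and apply an excitation function of the 1 − exp(·) form) can be sketched as follows. The layer sizes, learning rate, and weight scale are illustrative assumptions, and the excitation function is written as tanh via its 1 − exp form, since the table only preserves that fragment.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 13, 32, 4   # illustrative sizes (e.g. 13 MFCCs in)
learning_rate = 0.01                 # "determine the rate"; value is an assumption

# initialize the connection weights (input -> hidden, hidden -> output)
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

# initialize the thresholds (biases) of the hidden and output layers
b1 = np.zeros(n_hidden)
b2 = np.zeros(n_out)

def excitation(x):
    # a (1 - exp)-style squashing function: tanh written via exponentials
    return (1 - np.exp(-2 * x)) / (1 + np.exp(-2 * x))

def forward(x):
    h = excitation(x @ W1 + b1)      # hidden layer
    return excitation(h @ W2 + b2)   # output layer of the network

y = forward(rng.normal(size=n_in))
print(y.shape)  # (4,)
```

The matplotlib imports in the table suggest the original script also plotted training curves; that part is omitted here.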
Figure 7. Network data pooling input pretrained network.
Figure 8. Sparsity distribution of intelligent music creation.
Training set intelligent music creation instruction rates.
| Creation layer | Instructions rate 1 | Instructions rate 2 | Instructions rate 3 | Instructions rate 4 |
|---|---|---|---|---|
| 10 | 0.121 | 0.993 | 0.322 | 0.666 |
| 20 | 0.004 | 0.553 | 0.346 | 0.117 |
| 30 | 0.219 | 0.213 | 0.638 | 0.404 |
| 40 | 0.661 | 0.742 | 0.559 | 0.037 |
| 50 | 0.318 | 0.338 | 0.295 | 0.825 |
| 60 | 0.006 | 0.086 | 0.412 | 0.385 |
Figure 9. Convolutional neural network weight music operation.
Figure 10. Mean square error result of intelligent music creation.
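The abstract's hash-code step (compressing continuous pooled CNN features into a short discrete code) is not detailed in this record. A common stand-in is sign-of-random-projection (LSH-style) binarization; the feature dimension, code length, and perturbation below are assumptions for illustration only.

```python
import numpy as np

def binary_hash(features, n_bits=64, seed=0):
    """Project a pooled feature vector onto random hyperplanes and keep
    only the signs, giving an n_bits binary code (LSH-style stand-in)."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, features.shape[-1]))
    return (planes @ features > 0).astype(np.uint8)

def hamming(a, b):
    # number of differing bits between two binary codes
    return int(np.sum(a != b))

rng = np.random.default_rng(1)
f1 = rng.normal(size=512)                    # a pooled CNN feature (dim. assumed)
f2 = f1 + 0.01 * rng.normal(size=512)        # a slightly perturbed copy
f3 = rng.normal(size=512)                    # an unrelated feature vector

h1, h2, h3 = binary_hash(f1), binary_hash(f2), binary_hash(f3)
print(len(h1), hamming(h1, h2), hamming(h1, h3))
```

Similar inputs map to codes with a small Hamming distance while unrelated inputs land roughly half the bits apart, which is what makes such codes useful for compact retrieval over high-dimensional music features.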