Jun Cang, Yichen Huang, Yanhong Huang.
Abstract
Musical choreography is usually created by professional choreographers, a highly specialized and time-consuming process. To realize intelligent choreography for musicals, this paper generates a dance matched to a target piece of music in three steps built on a mixture density network (MDN): motion generation, motion screening, and feature matching. In the motion-generation step, the mean of the Gaussian model output by the MDN is used as the bone position; in the motion-screening step, motion coherence is measured by the change rate of joint velocity across adjacent frames. Compared with existing studies, the dances generated here improve on motion coherence and realism. The paper further proposes a multilevel music-motion feature-matching algorithm that combines global and local feature matching, improving the unity and coherence of music and motion, the consistency and novelty of movement, the compatibility with music, and the controllability of dance characteristics. The resulting choreography matches the music closely, technically changing the way such artistic creation is done and opening possibilities for motion capture technology, artificial intelligence, and music-driven computer choreography.
Year: 2021 PMID: 34880912 PMCID: PMC8648476 DOI: 10.1155/2021/4337398
Source DB: PubMed Journal: Comput Intell Neurosci
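The motion-generation step described in the abstract takes the mean of the Gaussian model output by the MDN as the predicted bone position. A minimal sketch of that selection step; the tensor layout and the choice of the most probable mixture component are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def mdn_mean_position(pi, mu):
    """Select predicted bone positions from MDN output.

    pi : (K,) mixture weights (softmax-normalised)
    mu : (K, D) per-component means, D = 3 * number of joints

    Per the abstract, the mean of the Gaussian model is used as the
    bone position; here we take the mean of the most probable
    component, a common choice for collapsing a mixture to one pose.
    """
    return mu[int(np.argmax(pi))]

# Toy output: 2 mixture components over a 3-D bone position
pi = np.array([0.3, 0.7])
mu = np.array([[0.0, 1.0, 0.0],
               [0.1, 1.2, -0.05]])
pos = mdn_mean_position(pi, mu)  # mean of the heavier component
```

Using a single component's mean avoids the blurring that averaging across all components would introduce when the mixture is multimodal.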
Sample data obtained by parsing the bone-keyframe data block of a VMD file.
| Row | Frame number | Bone ID | Displacement x | Displacement y | Displacement z | Quaternion x | Quaternion y | Quaternion z | Quaternion w |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0.066 | 0.000 | 5.970 | 0.000 | 0.379 | 0.000 | −0.925 |
| 1 | 0 | 1 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 1.000 |
| 2 | 0 | 2 | 0.000 | 0.000 | 0.000 | −0.257 | 0.000 | 0.000 | 0.966 |
| 3 | 0 | 3 | 0.000 | 0.000 | 0.000 | −0.125 | 0.000 | 0.000 | 0.000 |
| 4 | 0 | 4 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | −0.105 | 0.995 |
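Each keyframe row carries a bone displacement (x, y, z) and a skeletal rotation quaternion (x, y, z, w). A valid rotation quaternion has unit norm, which gives a quick sanity check when parsing such data; the check below is illustrative and not part of the VMD specification:

```python
import math

def quat_norm(qx, qy, qz, qw):
    """Euclidean norm of a quaternion; ~1.0 for a valid rotation."""
    return math.sqrt(qx * qx + qy * qy + qz * qz + qw * qw)

# Rotation quaternion of bone 0 in the table above
norm = quat_norm(0.000, 0.379, 0.000, -0.925)  # close to 1.0
```

Rows whose quaternion norm deviates noticeably from 1 (as with bone 3 above, whose w component reads 0.000) usually indicate a truncated or mis-parsed record rather than a genuine rotation.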
Details of the various types of data in the dataset.
| Dance style | Overall speed | Number of clips | Frame speed | Number of frames | Duration (min) |
|---|---|---|---|---|---|
| House dance | Quick | 54 | Quick | 285393 | 158.6 |
| | | | Slow | 26428 | 14.7 |
| | Slow | 35 | Quick | 157175 | 87.3 |
| | | | Slow | 54034 | 30.0 |
| Street dance | Quick | 67 | Quick | 357287 | 198.5 |
| | | | Slow | 18159 | 10.1 |
| | Slow | 6 | Quick | 26615 | 14.8 |
| | | | Slow | 9074 | 5.0 |
| Modern dance | Quick | 5 | Quick | 22780 | 12.7 |
| | | | Slow | 252 | 0.1 |
| | Slow | 25 | Quick | 42627 | 23.7 |
| | | | Slow | 57521 | 32.0 |
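The frame counts and durations in the table are mutually consistent with a capture rate of roughly 30 frames per second for the larger entries. A quick check; the 30 fps figure is inferred from the table, not stated in the source, and the shortest clips deviate because their durations are rounded to 0.1 min:

```python
def implied_fps(frames, minutes):
    """Frame rate implied by a frame count and a duration in minutes."""
    return frames / (minutes * 60.0)

house_quick = implied_fps(285393, 158.6)  # quick house-dance clips
street_quick = implied_fps(357287, 198.5)  # quick street-dance clips
```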
Figure 1. Schematic diagram of the overall structure of the motion generation model.
Figure 2. Network structure of the action generation model.
Figure 3. Model loss: training set.
Figure 4. Model loss: validation set.
Figure 5. Schematic diagram of the interpolation process of the intermediate frames.
Figure 6. Spatiality metric of action fragments.
Figure 7. Average arm speed of the action segment.
Figure 8. Root node motion path.
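The motion-screening step measures coherence by the change rate of joint velocity across adjacent frames (the quantity behind metrics such as Figure 7's average arm speed). A minimal sketch, under the assumption that a lower mean velocity change implies smoother, more coherent motion; the exact formula used in the paper is not given:

```python
import numpy as np

def velocity_change_rate(positions):
    """Mean magnitude of frame-to-frame joint velocity change.

    positions : (T, J, 3) array of J joint positions over T frames.
    First differences give per-frame velocities; second differences
    give the velocity change (acceleration) used for screening.
    """
    vel = np.diff(positions, axis=0)   # (T-1, J, 3) velocities
    acc = np.diff(vel, axis=0)         # (T-2, J, 3) velocity changes
    return float(np.linalg.norm(acc, axis=-1).mean())

# A joint moving at constant velocity has zero velocity change,
# i.e. perfectly coherent motion under this metric.
t = np.linspace(0.0, 1.0, 10)
smooth = np.stack([t, 0 * t, 0 * t], axis=-1)[:, None, :]  # (10, 1, 3)
score = velocity_change_rate(smooth)
```

Candidate clips whose score exceeds a chosen threshold would be discarded during screening; jerky transitions between generated segments show up as spikes in this quantity.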