Feng Jiao, Yang Chen, Xinyue Zhang, Yuyue Zhou, Linlin Wang, Jinhua Wu.
Abstract
Rice growth prediction is a key component of precision agricultural management, and rice growth is a complex process shaped by the interplay of variety and environmental factors. Traditional research builds growth-prediction models through data analysis, mining the hidden relationships between rice yield and environmental factors such as weather, sunlight, and water, then predicting yield and analyzing the complex relationship between environmental factors and growth at each growth stage. In this paper, an improved Elman neural network (ElmanNN) is used to establish the prediction model: the ElmanNN captures the relationship between environmental factors and growth at each growth stage, while the improvement prevents the algorithm from easily falling into local optima. An improved genetic algorithm is used to optimize the initial weights and thresholds of the Elman neural network, and the ranges of the weight values across the model's layers are obtained by training the network on samples validated in recent years. Finally, the relationship between growth and yield in six different periods is modeled independently, with training samples constructed stage by stage from the physiological parameters and environmental indicators of the rice at each level. Experiments show that the prediction model based on the improved ElmanNN achieves improved accuracy.
Year: 2022 PMID: 35958786 PMCID: PMC9357766 DOI: 10.1155/2022/2151682
Source DB: PubMed Journal: Comput Intell Neurosci
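The approach described in the abstract, a genetic algorithm that optimizes the initial weights and thresholds (biases) of an Elman network, can be sketched as below. This is a minimal illustration, not the paper's implementation: the network sizes, GA operators (elitist selection, uniform crossover, Gaussian mutation), and the toy lagged-sine data are all assumptions standing in for the paper's per-stage rice growth data.

```python
import numpy as np

def init_params(rng, n_in, n_hid, n_out):
    """Random flat parameter vector: W_in, W_rec, W_out, hidden and output biases."""
    size = n_hid * n_in + n_hid * n_hid + n_out * n_hid + n_hid + n_out
    return rng.uniform(-1.0, 1.0, size)

def unpack(theta, n_in, n_hid, n_out):
    i = 0
    W_in = theta[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    W_rec = theta[i:i + n_hid * n_hid].reshape(n_hid, n_hid); i += n_hid * n_hid
    W_out = theta[i:i + n_out * n_hid].reshape(n_out, n_hid); i += n_out * n_hid
    b_h = theta[i:i + n_hid]; i += n_hid
    b_o = theta[i:i + n_out]
    return W_in, W_rec, W_out, b_h, b_o

def elman_forward(theta, X, n_hid, n_out):
    """Elman network: the context layer feeds the previous hidden state back in."""
    W_in, W_rec, W_out, b_h, b_o = unpack(theta, X.shape[1], n_hid, n_out)
    h = np.zeros(n_hid)
    outs = []
    for x in X:
        h = np.tanh(W_in @ x + W_rec @ h + b_h)  # recurrent hidden update
        outs.append(W_out @ h + b_o)
    return np.array(outs)

def fitness_mse(theta, X, Y, n_hid, n_out):
    return float(np.mean((elman_forward(theta, X, n_hid, n_out) - Y) ** 2))

def ga_optimize(X, Y, n_hid=6, n_out=1, pop=30, gens=40, seed=0):
    """GA over initial weights/thresholds; elitism keeps the best half each generation."""
    rng = np.random.default_rng(seed)
    population = [init_params(rng, X.shape[1], n_hid, n_out) for _ in range(pop)]
    for _ in range(gens):
        errors = np.array([fitness_mse(t, X, Y, n_hid, n_out) for t in population])
        elite = [population[i] for i in np.argsort(errors)[:pop // 2]]  # selection
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.choice(len(elite), 2, replace=False)
            mask = rng.random(elite[a].size) < 0.5            # uniform crossover
            child = np.where(mask, elite[a], elite[b])
            hit = rng.random(child.size) < 0.1                # sparse Gaussian mutation
            child = child + hit * rng.normal(0.0, 0.1, child.size)
            children.append(child)
        population = elite + children
    errors = np.array([fitness_mse(t, X, Y, n_hid, n_out) for t in population])
    return population[int(np.argmin(errors))]

# Toy usage: predict a lagged sine as a stand-in for per-stage growth series.
t = np.linspace(0, 4 * np.pi, 80)
X = np.sin(t).reshape(-1, 1)
Y = np.sin(t + 0.3).reshape(-1, 1)
best = ga_optimize(X, Y)
err = fitness_mse(best, X, Y, 6, 1)
```

In the paper's setting, one such model would be trained per growth period (six in total), each on that stage's physiological and environmental training samples.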
Figure 1. Flowchart of the improved Elman neural network.
Figure 2. ElmanNN structure diagram.
Number of hidden layer nodes and total error in the neural network simulation.
| Number of hidden layer nodes | Total error | Number of hidden layer nodes | Total error |
|---|---|---|---|
| 4 | 9.26e-04 | 5 | 4.24e-05 |
| 6 | 8.45e-05 | 7 | 9.31e-04 |
| 8 | 6.21e-04 | 9 | 2.45e-04 |
| 10 | 4.57e-04 | 11 | 3.56e-05 |
| 12 | 7.62e-05 | 13 | 2.73e-04 |
Comparison of the errors of each algorithm.
| Types of neural networks | Average mean square error | Average mean absolute error |
|---|---|---|
| BP neural network | 0.00712 | 0.05423 |
| ElmanNN | 0.01045 | 0.08440 |
| Improved ElmanNN | 0.00613 | 0.02603 |