
Modeling Soil Temperature for Different Days Using Novel Quadruplet Loss-Guided LSTM.

Xuezhi Wang1,2, Wenhui Li1,2, Qingliang Li2,3, Xiaoning Li1,3.   

Abstract

Soil temperature (Ts), a key variable in geoscience studies, has generated growing interest among researchers. Many factors affect the spatiotemporal variation of Ts, which poses immense challenges for Ts estimation. To enrich the information processed by the loss function and achieve better estimation performance, this paper designs a new long short-term memory model guided by a quadruplet loss function as an intelligent tool for data processing (QL-LSTM). The model combines the traditional squared-error loss function with distance metric learning between the sample features, allowing it to analyze the samples at a finer granularity and thus optimize estimation accuracy. We applied meteorological data from the Laegern and Fluehli stations at 5, 10, and 15 cm depths on the 1st, 5th, and 15th day separately to verify the performance of the proposed soil temperature estimation model. The input variables of the proposed model include radiation, air temperature, vapor pressure deficit, wind speed, air pressure, and past Ts data. The performance of the model was tested by several error evaluation indices: root mean square error (RMSE), mean absolute error (MAE), Nash-Sutcliffe model efficiency coefficient (NS), Willmott Index of Agreement (WI), and Legates and McCabe index (LMI). As the test results at different soil depths show, our model generally outperformed four existing advanced estimation models, namely, backpropagation neural networks, extreme learning machines, support vector regression, and LSTM. Furthermore, the proposed model achieved its best performance at the 15 cm soil depth on the 1st day at the Laegern station, with higher WI (0.998), NS (0.995), and LMI (0.938) values and lower RMSE (0.312) and MAE (0.239) values. Consequently, the QL-LSTM model is recommended for estimating daily Ts profiles on the 1st, 5th, and 15th days.
Copyright © 2022 Xuezhi Wang et al.


Year:  2022        PMID: 35222636      PMCID: PMC8872672          DOI: 10.1155/2022/9016823

Source DB:  PubMed          Journal:  Comput Intell Neurosci


1. Introduction

Soil temperature (Ts) is a main physical variable of the land surface with a direct influence on the atmosphere [1]. It has drawn attention from researchers in relevant fields, including geoscience and forestry applications [2, 3]. In principle, all interactions in terrestrial ecosystems are accompanied by Ts variations, since they involve energy exchanges. Ts is an essential factor for growing crops: it can facilitate the development of the root system by impacting microbial activity, soil decomposition, and the fluidity of soil water [4]. In addition, the death of animals and plants produces plenty of carbon substrates and a high volume of greenhouse gases in the soil; the consequent increase in Ts expedites carbon dioxide emission to the atmosphere [5]. Therefore, accurate Ts monitoring is crucial for agricultural management and atmospheric environment forecasting. However, Ts in most areas is still measured using traditional sensors, and the data cannot be collected at different depths [6]. The study of Ts estimation can therefore help solve problems in different fields. The essential environmental factors have a great influence on the accuracy of Ts estimation. At present, Ts is mainly predicted by methods based on physical models and by data-driven methods. The physical approach estimates Ts with a heat-conduction model [7] but is greatly affected by the related physical parameters and the scale problem [8]. Data-driven methods can explore the internal relationship between Ts and the surrounding environmental factors for Ts estimation, and several predictive models based on machine learning have been used to estimate Ts [9-14]. For example, an ANN is composed of a complex network structure that imitates the structure and function of the brain's neural network, and it has powerful data-processing capabilities.
Bilgili applied the multilayer perceptron (MLP) model to adequately describe Ts distribution at a monthly temporal scale from meteorological data [15]. Kisi et al. used three machine learning models to estimate monthly Ts at soil depths of 5 cm and 10 cm, respectively, and verified that radial basis neural networks predicted better than generalized regression neural networks and multilayer perceptron models [9], although generalized regression neural networks performed better at greater depths (50 cm and 100 cm). Kisi et al. also applied ANN-based models to predict long-term Ts at a monthly temporal scale and found that genetic programming generated the best performance with the meteorological data [16]. Zeynoddin et al. applied an MLP model to describe daily Ts distribution at three soil depths (5, 10, and 20 cm) from past measurements of Ts [17]. Samadianfard et al. processed meteorological data such as Ta, W, RH, Rs, sunshine hours (Sh), and air pressure (Ap) and integrated ANN-based models separately to predict Ts at a monthly temporal scale [18]. Mehdizadeh et al. noted that machine learning models combined with time series models performed better than predictive models based on a single machine learning method or a single time series method for predicting Ts at a daily temporal scale [19]. Moazenzadeh et al. proposed an SVR with krill herd algorithm (SVR-KHA) method for modeling Ts estimation at different depths (5, 10, 20, 30, 50, and 100 cm), which achieved the best performance compared to SVR and SVR with the firefly algorithm (SVR-FA) [11]. Delbari et al. proposed an SVR-based model to compute daily Ts at three depths (5, 30, and 100 cm) in Iran [12]. The ELM network, featuring a single hidden layer, can improve the learning speed and accuracy of the algorithm and can model Ts accurately. Nahvi et al.
used an improved ELM model for daily Ts estimation based on a self-adaptive evolutionary method and verified that the improved predictive model can estimate Ts adequately [20]. Sanikhani et al. tested data from the Mersin station, and the results showed that ELM had the best predictive performance among the compared models [14]. Feng et al. tested Loess Plateau data with ELM, random forests (RF), and ANN-based models and showed that ELM performed better for half-hourly Ts estimation at different soil depths [13]. As a recurrent neural network containing complex gated modules, LSTM is used in this paper to solve long-term dependence problems; it can effectively alleviate gradient vanishing through the extraction of required features by its gate-control units. The LSTM network [21] can learn long-term and short-term behaviors and has seen application in vast areas. By integrating LSTM and SVR, Guo et al. significantly improved the prediction accuracy of abnormal passenger-flow fluctuations [22]. In hydrology, Zhang et al. designed a novel LSTM model with a dropout scheme to estimate the depth of the water table [23]. In the atmospheric field, Qing et al. estimated solar irradiance based on the LSTM network [24]; the results showed the method could avoid overfitting. Li et al. designed a new GANs-LSTM model and noted that it could serve as an alternative method to estimate Ts [25]. This article focuses on the following issues. First, we select the environmental factors that will affect Ts estimation. Ts memory can help the predictive model "remember" a warm or cold condition after the anomaly is forgotten by the atmospheric forcing. In addition, recent literature reviews have revealed that the input for prediction models is either the past measurements of Ts or other meteorological information (Ta, W, RH, Rs, Sh, and Ap).
Assuming the prediction models are constructed using input combinations of past Ts and other meteorological information, how does the prediction model perform? The second question concerns the construction of the loss function in LSTM. The predictive model for Ts estimation is a regression problem that involves predicting a real-valued quantity. The loss function, which expresses the degree of difference between predicted and observed Ts, is crucial for optimizing the predictive model by updating its weights. Recently, most studies of loss functions for regression predictive models have focused mainly on distance metric learning between predicted and real values [26-29]. However, distance metric learning between the sample features (environmental factors) is usually ignored, even though it has already been successfully applied to image processing [30-32]. To enrich the information processed by the loss function and further improve estimation performance, how can we construct a novel loss function by combining forms of distance metric learning? The last question concerns timescale evaluation for Ts estimation. In previous studies, evaluations of short-term Ts estimation (half-hourly, hourly, and daily timescales) do not consider the timeliness of long-term Ts estimation, while evaluations of long-term Ts estimation (monthly timescale) do not include Ts information at a small timescale. An ideal decision-support tool for Ts estimation should provide a multifarious decision-making basis. How can we design a prediction scheme, evaluated at the same timescale, that provides both a short-term and a long-term decision-making basis? This paper proposes a novel quadruplet loss function based on the LSTM network that combines the traditional squared-error loss function with distance metric learning between the sample features, called QL-LSTM.
The traditional squared-error loss function is usually applied to predictive tasks with great accuracy. Its current limitation, however, involves the special variation of Ts under different predictors. As shown in Figure 1, we made labels according to Ts values: Ts data in the same range are given the same label (Ts data in the range of 8–12°C are labeled "1", Ts data in the range of 12–16°C are labeled "2", and Ts data in the range of 16–22°C are labeled "3"). Meanwhile, Ta data are labeled in the same way as the Ts data. In Figure 1(a), we notice that Ts data with the same label are almost within a stable range. However, in the red ellipse in Figure 1(b), we observe that similar Ta values may have different labels (Ts data with similar Ta values may vary considerably). Data with this feature prevent predictive models from accurately exploring the internal relationship between Ts and the surrounding environmental factors. To address this problem, the idea of triplet loss [33] is considered in this paper. Triplet-loss optimization pulls the anchor and positive points together and pushes the negative points away, realizing a similarity calculation over the samples. This approach can enrich the processing information of the loss function, overcome the disadvantage of the traditional squared-error loss function, and further improve estimation performance.
Figure 1

Variations of the daily air temperature (a) and soil temperature (b) at Laegern station (located in Switzerland) from 1 January 2003 to 9 April 2003 (100 days).

The main three contributions of this research are summarized as follows: (1) To the best of our knowledge, the proposed method, which combines the traditional squared-error loss function with distance metric learning between the sample features, is a new approach for Ts estimation. (2) A daily-scale prediction scheme was designed to provide a multifarious decision-making basis and was used to estimate Ts on the next 1st, 5th, and 15th day. To this end, we input the meteorological and past Ts data into the estimation model. (3) Results showed that our QL-LSTM method outperformed the existing advanced methods in most cases.

2. Data and Methods

2.1. The Framework of Soil Temperature Estimation

The corresponding meteorological and Ts data used as input to our QL-LSTM model were first obtained from FLUXNET. Several other advanced data-driven models (SVR, BPNN, ELM, and LSTM) were also considered for Ts estimation. The traditional squared-error loss function and distance metric learning between the sample features were integrated into our model for accurate exploration of the internal relationship between Ts and the surrounding environmental factors. Finally, the models' performance was compared using five evaluation indicators (RMSE, MAE, NS, WI, and LMI). Figure 2 shows the flow chart of soil temperature estimation.
Figure 2

The flow chart of soil temperature estimation.

2.2. Long Short-Term Memory (LSTM) Network

LSTM can process and learn long-term dependence problems. Due to the characteristics of the LSTM network, we use it to explore the internal relationship between Ts and the surrounding predictors. LSTM controls the transmission state through gating, remembers what needs to be remembered, and forgets unimportant information. Figure 3 shows the internal structure of an LSTM cell, and the LSTM is computed as follows:

i(t) = σ(Wi · [h(t − 1), x(t)] + bi)
f(t) = σ(Wf · [h(t − 1), x(t)] + bf)
o(t) = σ(Wo · [h(t − 1), x(t)] + bo)
c̃(t) = tanh(Wc · [h(t − 1), x(t)] + bc)
c(t) = f(t) ⊙ c(t − 1) + i(t) ⊙ c̃(t)
h(t) = o(t) ⊙ tanh(c(t))

where x(t) is the input data and h(t) is the current output value; i(t), f(t), and o(t) denote the input gate, forget gate, and output gate; c(t) represents the unit status at the current moment; σ(·) and tanh(·) are the activation functions; W and b denote the weight matrices and bias terms.
Figure 3

The internal structure of an LSTM cell.
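The gate computations of Section 2.2 can be sketched in a few lines of NumPy. This is a minimal single-step illustration of a standard LSTM cell; the stacked-gate weight layout and the random initialization are our own conventions, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step. W maps the concatenation [h(t-1), x(t)] to the
    stacked pre-activations of the input, forget, and output gates and the
    candidate cell state; b is the matching bias vector."""
    n = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x_t]) + b
    i = sigmoid(z[0 * n:1 * n])        # input gate i(t)
    f = sigmoid(z[1 * n:2 * n])        # forget gate f(t)
    o = sigmoid(z[2 * n:3 * n])        # output gate o(t)
    g = np.tanh(z[3 * n:4 * n])        # candidate cell state
    c = f * c_prev + i * g             # unit status c(t)
    h = o * np.tanh(c)                 # current output h(t)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W = rng.normal(size=(4 * n_hidden, n_hidden + n_in))
b = np.zeros(4 * n_hidden)
h, c = lstm_cell_step(rng.normal(size=n_in),
                      np.zeros(n_hidden), np.zeros(n_hidden), W, b)
```

Because h(t) = o(t) ⊙ tanh(c(t)) with o(t) in (0, 1), every entry of h stays strictly inside (−1, 1).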

2.3. Triplet Loss

Triplet loss is a significant "learning criterion" for optimizing predictive models, applied to adjust their weight parameters; it involves an anchor example, a positive example, and a negative example. The similarity calculation of the samples is realized through triplet-loss learning, which makes the anchor-to-positive distance smaller than the anchor-to-negative distance. Figure 4 shows a visual representation of triplet loss.
Figure 4

A visual representation of triplet loss.

Equation (2) expresses the objective function of triplet loss as follows:

L = Σ [ ‖f(xa) − f(xp)‖² − ‖f(xa) − f(xn)‖² + α ]+    (2)

where f(xa), f(xp), and f(xn) are the feature expressions of the anchor, positive, and negative examples obtained by training the parameters on the triplet; α represents the minimum interval between the anchor-to-positive distance and the anchor-to-negative distance; [·]+ denotes max(·, 0) and defines the degree of loss.
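A direct NumPy transcription of the triplet objective, with a toy triplet to show the hinge behavior (the feature values here are illustrative only):

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """Hinge form of the triplet loss: penalize triplets where the
    anchor-to-positive distance is not at least alpha smaller than the
    anchor-to-negative distance."""
    d_pos = np.sum((f_a - f_p) ** 2, axis=-1)   # anchor-to-positive
    d_neg = np.sum((f_a - f_n) ** 2, axis=-1)   # anchor-to-negative
    return np.maximum(d_pos - d_neg + alpha, 0.0).sum()

a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])
n = np.array([[2.0, 0.0]])
loss_ok = triplet_loss(a, p, n)    # margin satisfied, so loss is 0.0
loss_bad = triplet_loss(a, n, p)   # positive/negative swapped, loss ≈ 4.19
```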

2.4. QL-LSTM Model

The previous analysis shows that an LSTM with the traditional squared-error loss function cannot accurately discover the special relationship between Ts and the surrounding predictors. To address this problem, inspired by the study of triplet loss, we combined a predictive model with distance metric learning between the sample features. As far as we know, a method based on distance metric learning between the sample features has never been used to estimate Ts. It must be noted that distance metric learning between sample features was first proposed in the field of image processing; there is no existing description of sample similarity for Ts estimation. In this paper, a clustering method is used to label the samples, so that distance metric learning between the sample features can be applied to Ts estimation. The framework of our QL-LSTM is shown in Figure 5. First, for its efficiency and scalability in clustering data, the Ts data were quantized by the Birch clustering method [34]. In the quantization step, any Ts data quantized to the same label are defined as similar samples (positive); in contrast, any Ts data quantized to different labels are defined as dissimilar samples (negative). It is worth observing that the number of labels should be neither too large nor too small [35]. Hence, the Calinski Harabasz Score (CH) and Y_Silhouette_score (S) are used to evaluate the quality of the clustering [36]. The larger the value of CH or S, the better the quality of the clustering result. Second, the labeled data are input into the predictive model (the LSTM network). Finally, the weights of the predictive model are updated to reduce the loss based on our quadruplet loss function.
Figure 5

The framework of QL-LSTM.
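The label-generation step can be sketched with scikit-learn's Birch clustering and the two cluster-quality scores named above; the synthetic temperature series and the candidate cluster counts are illustrative assumptions, not the paper's data.

```python
import numpy as np
from sklearn.cluster import Birch
from sklearn.metrics import calinski_harabasz_score, silhouette_score

# Synthetic stand-in for a Ts series with three temperature regimes;
# the paper clusters the observed Ts values themselves.
rng = np.random.default_rng(0)
ts = np.concatenate([rng.normal(5, 1, 100),
                     rng.normal(12, 1, 100),
                     rng.normal(19, 1, 100)]).reshape(-1, 1)

def cluster_quality(x, n_clusters):
    """Label the data with Birch, then score the partition with the
    Calinski-Harabasz and silhouette indices (larger is better)."""
    labels = Birch(n_clusters=n_clusters).fit_predict(x)
    return calinski_harabasz_score(x, labels), silhouette_score(x, labels)

ch3, s3 = cluster_quality(ts, 3)   # matches the three generating regimes
ch2, s2 = cluster_quality(ts, 2)   # under-clustered
```

Scanning `n_clusters` and keeping the value that maximizes both scores mirrors the paper's selection of the label count C.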

We set X = {(xi, li)} as the input data, where li represents the Ts label "i" and xi represents the labeled environmental factors. Assume C is the total number of labels, where li ∈ [1, 2, 3, …, C]. Then, we project an instance xi onto the estimated Ts by fLSTM(xi; θ), where fLSTM is an LSTM network parameterized by θ. Let {Xi} be the environmental factors in the i-th labeled samples, and let N represent the total number of samples. We evaluate the similarity between samples through cluster analysis and expect the output of the model to be closer to the true value.

2.4.1. Hard Sample Mining

Hard sample mining generally refers to hard negative mining. Adding hard negative samples to model training can improve the effectiveness of learning, so training should mine hard negatives as much as possible [37, 38]. For each fixed anchor, the farthest positive sample and the nearest negative sample in a training batch are used to train the network, enhancing its generalization ability so that it learns better representations. Inspired by TriHard loss, we first define xi as the test sample: P (P = {xj | lj = li, j ≠ i}, |P| = Ni − 1) is the collection of samples with the same label, and N (N = {xk | lk ≠ li}) represents the collection of the other samples. (xi, yi, P, N) is the quadruplet data set we defined: P is the positive set, N is the negative set, |P| and |N| are the numbers of positive and negative sample pairs, and these tuples form the training sample pairs. The query sample is represented as xi; when S+ satisfies formula (3), {xi, xj} is the pair we need:

S+ = 〈fLSTM(xi; θ), fLSTM(xj; θ)〉 < μ    (3)

where 〈·, ·〉 represents the calculation of an n × n similarity matrix, S+ is the element of that matrix at (xi, xj), and the hyperparameter μ controls the number of hard positive samples. The condition for selecting a hard negative pair takes the same form:

S− = 〈fLSTM(xi; θ), fLSTM(xk; θ)〉 > μ    (4)
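The mining step can be illustrated with a TriHard-style rule, which the text cites as inspiration: for each anchor, take the least-similar same-label sample as the hardest positive and the most-similar different-label sample as the hardest negative. The cosine-similarity matrix and the per-anchor argmin/argmax rule below are our assumptions, not the paper's exact thresholding.

```python
import numpy as np

def trihard_mine(feats, labels):
    """For each anchor row, return the index of its hardest positive
    (least-similar same-label sample) and hardest negative (most-similar
    different-label sample), assuming every label occurs at least twice."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T                                  # n x n similarity matrix
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)                  # exclude the anchor itself
    diff = labels[:, None] != labels[None, :]
    hard_pos = np.where(same, sim, np.inf).argmin(axis=1)
    hard_neg = np.where(diff, sim, -np.inf).argmax(axis=1)
    return hard_pos, hard_neg

feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
hard_pos, hard_neg = trihard_mine(feats, labels)
```

For the first anchor, the hardest positive is its only same-label neighbor (index 1), and the hardest negative is the different-label sample tilted toward it (index 3).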

2.4.2. Optimization Objective

For each test sample xi, we use the margin m to make it as close to the positive set P as possible and as far away from the negative set N as possible. All the nontrivial positive points in P are pulled together by minimizing

LP = Σ_{xj ∈ P} [ ‖f(xi) − f(xj)‖ − m ]+    (5)

where f(xi) and f(xj) denote the estimated Ts of samples xi and xj, respectively, and ‖f(xi) − f(xj)‖ is the Euclidean distance between them. Similarly, all nontrivial negative points in N are pushed out of the boundary τ by minimizing

LN = Σ_{xk ∈ N} [ τ − ‖f(xi) − f(xk)‖ ]+    (6)

Meanwhile, we applied the squared-error loss function to the LSTM model for Ts estimation:

LSE = Σ (yi − f(xi))²    (7)

In the QL-LSTM, the three minimization objectives are put into the model and optimized at the same time. We incorporate stochastic gradient descent and minibatch training into the QL-LSTM to optimize the estimation model. xi is a sample of the minibatch, obtained by sampling the labels of the training samples randomly, and serves as an anchor. We express the QL-LSTM loss of each minibatch as

L = (1/N) Σ_{i=1}^{N} (LSE + LP + LN)    (8)

where N denotes the batch size. Figure 6 shows the learning procedure of our QL-LSTM model.
Figure 6

The learning procedure of QL-LSTM.
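A sketch of the combined minibatch objective, using assumed hinge forms for the pull and push terms (margin m for positives, boundary tau for negatives) plus the squared error; the parameter values and the toy batch are illustrative only.

```python
import numpy as np

def quadruplet_loss(pred, target, f_anchor, f_pos, f_neg, m=0.05, tau=0.1):
    """Combined objective sketch: squared error on the predicted Ts, a
    hinge pulling positives within margin m, and a hinge pushing
    negatives beyond the boundary tau (hinge forms are assumed)."""
    se = np.mean((pred - target) ** 2)
    d_pos = np.linalg.norm(f_anchor - f_pos, axis=1)
    d_neg = np.linalg.norm(f_anchor - f_neg, axis=1)
    pull = np.mean(np.maximum(d_pos - m, 0.0))    # positives inside margin
    push = np.mean(np.maximum(tau - d_neg, 0.0))  # negatives past boundary
    return se + pull + push

# A clean batch: tight positives and distant negatives, so only the
# squared-error term contributes to the total loss.
loss = quadruplet_loss(pred=np.array([1.0, 2.0]),
                       target=np.array([1.0, 2.5]),
                       f_anchor=np.array([[0.0, 0.0]]),
                       f_pos=np.array([[0.0, 0.0]]),
                       f_neg=np.array([[1.0, 0.0]]))
```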

2.5. Model Training and Testing

The input of our model is the corresponding meteorological data (Ta, W, Ap, Rs, VPD, and Ts) from the Laegern and Fluehli stations in Switzerland. We downloaded the data from FLUXNET at https://fluxnet.fluxdata.org/, with a total of 3,287 patterns from 2006 to 2014. The training dataset had 2,465 patterns, and the rest were used as the testing dataset. We compared our QL-LSTM model with the other advanced methods (SVR, BPNN, ELM, and LSTM) and calculated several evaluation criteria covering both model fitting degree and estimation accuracy, as follows:

RMSE = sqrt( (1/N) Σ (yi − ŷi)² )
MAE = (1/N) Σ |yi − ŷi|
NS = 1 − Σ (yi − ŷi)² / Σ (yi − ȳ)²
WI = 1 − Σ (yi − ŷi)² / Σ ( |ŷi − ȳ| + |yi − ȳ| )²
LMI = 1 − Σ |yi − ŷi| / Σ |yi − ȳ|

where N is the number of the whole data, yi denotes the observed value, ŷi is the predicted value, and ȳ is the average of the observed values.
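The five indices can be computed in a few lines; the formulas below follow the standard definitions of RMSE, MAE, NS, WI, and LMI, and the sample arrays are illustrative.

```python
import numpy as np

def evaluate(y, y_hat):
    """RMSE, MAE, Nash-Sutcliffe (NS), Willmott (WI), and
    Legates-McCabe (LMI) indices in their standard forms."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    err = y - y_hat
    y_bar = y.mean()
    return {
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAE": np.mean(np.abs(err)),
        "NS": 1 - np.sum(err ** 2) / np.sum((y - y_bar) ** 2),
        "WI": 1 - np.sum(err ** 2)
              / np.sum((np.abs(y_hat - y_bar) + np.abs(y - y_bar)) ** 2),
        "LMI": 1 - np.sum(np.abs(err)) / np.sum(np.abs(y - y_bar)),
    }

m = evaluate([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

A perfect prediction yields NS = WI = LMI = 1 and RMSE = MAE = 0; all five degrade together as the errors grow.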

2.6. Experiments

The half-hourly data were obtained from two meteorological stations in an ecological nature reserve located in Switzerland, namely, Laegern and Fluehli. The corresponding meteorological data (Ta, W, Ap, Rs, and VPD) and past Ts data were input into the models. Meanwhile, the input variables were normalized to eliminate the dimensional influence between indicators, using the formula

x_norm = (x − xmin) / (xmax − xmin)

where the minimum value of the sample data is represented by xmin and the maximum value by xmax. Moreover, we investigated the influence of the surrounding environmental factors on the model prediction and found that the skewness of Rs at the two stations is low, i.e., its distribution is close to normal. We conducted a statistical analysis of the data from the two stations. Table 1 lists the details of the variables: minimum value (xmin), maximum value (xmax), average value (xmean), standard deviation (z_sd), skewness (z_s), and variation coefficient (z_v). We aggregated the half-hourly records into daily data to verify the performance of the model. The results in Table 1 show that Ap had the highest negative skewness and a near-normal distribution, with similar characteristics at both stations, and that Ts at 5 cm depth behaves similarly at the two stations. Meanwhile, the z values showed the biggest difference between the two stations. Ts at the 5 cm, 10 cm, and 15 cm depths ranges over −1.888–26.876°C, −0.181–22.193°C, and 0.16–19.394°C, respectively. In summary, the values of z_sd, z_s, and z_v change very slightly.
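The min-max normalization applied to the input variables is one line of NumPy; the sample values are the Ts extremes and mean at 5 cm depth for Laegern from Table 1.

```python
import numpy as np

def min_max_normalize(x):
    """Min-max scaling: maps a variable linearly onto [0, 1] to remove
    dimensional effects between input indicators."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min)

# Ts at 5 cm depth for Laegern (Table 1): minimum, mean, maximum.
scaled = min_max_normalize([-1.888, 10.104, 26.876])
```

The minimum maps to 0, the maximum to 1, and every other value falls strictly in between.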
Table 1

Statistical results of the applied data for Laegern and Fluehli stations.

Station   Variable         xmin      xmax      xmean     z_sd     z_s      z_v
Laegern   Ta (°C)          −14.509   23.646    7.857     7.084    −0.118   0.901
          Rs (W/m²)        175.545   379.458   305.081   35.164   −0.418   0.115
          VPD (hPa)        0.541     5.937     3.271     2.334    1.424    0.713
          W (m/s)          0.668     8.025     2.237     1.005    1.411    0.449
          Ap (kPa)         89.876    95.163    93.237    0.714    −0.630   0.007
          Ts −5 cm (°C)    −1.888    26.876    10.104    6.061    0.103    0.599
          Ts −10 cm (°C)   −0.181    22.193    9.726     5.435    −0.031   0.558
          Ts −15 cm (°C)   0.16      19.394    9.010     5.025    −0.068   0.557

Fluehli   Ta (°C)          −14.448   22.877    7.708     6.906    −0.100   0.895
          Rs (W/m²)        194.734   377.444   306.015   32.805   −0.327   0.107
          VPD (hPa)        0.416     9.662     2.129     1.543    1.410    0.724
          W (m/s)          0.342     4.636     1.476     0.619    0.894    0.419
          Ap (kPa)         82.308    87.164    85.493    0.711    −0.843   0.008
          Ts −5 cm (°C)    −0.35     21.822    8.729     6.338    0.075    0.726
          Ts −10 cm (°C)   −0.044    21.727    8.836     6.242    0.071    0.706
          Ts −15 cm (°C)   0.432     20.826    8.813     6.023    0.062    0.683

3. Results and Discussion

To test the superiority of our QL-LSTM model for Ts estimation, we compared our test results (obtained using scikit-learn) with those of other advanced models (SVR, BPNN, ELM, and LSTM). We chose default parameters for the SVR model. For the BPNN model, the squared error was used as the loss function and Adam as the optimizer; the batch size was 400, the number of iterations 500, the learning rate 5.0e-4, and the number of nodes 128. The elm function was used to build the ELM model, the sigmoid function served as the activation function in the hidden layer, and the number of nodes was the same as in BPNN. Furthermore, we set the hyperparameters of the LSTM to be the same as those of QL-LSTM. As can be seen from Tables 2 and 3, different hyperparameter values generate different predictive results. When the batch size is set to 400, the iterations to 500, numQL−LSTM to 128, and the learning rate to 1.0e-3, the QL-LSTM model has the best performance.
Table 2

Predictive performance with different numQL−LSTM and learning rates at Laegern station.

Learning rate   numQL−LSTM   RMSE    MAE     NS       WI      LMI
1.0e-5          16           4.735   4.059   0.1799   0.325   0.093
                32           2.970   2.508   0.677    0.845   0.439
                64           1.346   1.061   0.933    0.980   0.763
                128          1.365   1.073   0.931    0.982   0.760
                256          1.341   1.049   0.934    0.983   0.765

1.0e-4          16           1.230   0.962   0.944    0.986   0.785
                32           1.175   0.915   0.949    0.987   0.795
                64           1.167   0.903   0.950    0.987   0.798
                128          1.159   0.884   0.950    0.987   0.802
                256          1.148   0.871   0.951    0.987   0.805

5.0e-4          16           1.129   0.855   0.953    0.988   0.808
                32           1.076   0.809   0.957    0.989   0.819
                64           1.001   0.757   0.963    0.990   0.830
                128          0.817   0.629   0.975    0.993   0.859
                256          0.852   0.657   0.973    0.993   0.853

1.0e-3          16           1.069   0.802   0.958    0.989   0.820
                32           0.950   0.716   0.966    0.991   0.839
                64           0.880   0.676   0.971    0.992   0.848
                128          0.817   0.625   0.975    0.993   0.860
                256          0.825   0.634   0.975    0.993   0.858

5.0e-3          16           0.871   0.675   0.972    0.993   0.849
                32           0.858   0.661   0.973    0.993   0.852
                64           0.868   0.661   0.972    0.993   0.852
                128          0.852   0.648   0.973    0.993   0.855
                256          0.854   0.649   0.973    0.993   0.855
Table 3

Predictive performance with different numbers of batch size and iterations at Laegern station.

Batch   Iteration   RMSE    MAE     NS      WI      LMI
100     100         0.860   0.666   0.972   0.993   0.851
        200         0.873   0.676   0.972   0.993   0.848
        500         0.880   0.667   0.971   0.992   0.850
        800         1.073   0.825   0.957   0.989   0.815

200     100         0.849   0.654   0.973   0.993   0.853
        200         0.817   0.625   0.975   0.993   0.860
        500         0.843   0.642   0.973   0.993   0.856
        800         0.916   0.692   0.969   0.992   0.845

300     100         0.973   0.739   0.965   0.991   0.834
        200         0.831   0.644   0.974   0.993   0.856
        500         0.817   0.626   0.975   0.993   0.860
        800         0.869   0.664   0.972   0.993   0.851

400     100         1.038   0.784   0.960   0.990   0.824
        200         0.851   0.654   0.973   0.993   0.853
        500         0.809   0.622   0.976   0.993   0.860
        800         0.817   0.628   0.975   0.993   0.859

500     100         1.076   0.812   0.957   0.989   0.818
        200         0.868   0.666   0.972   0.993   0.851
        500         0.809   0.623   0.976   0.993   0.860
        800         0.811   0.626   0.975   0.993   0.860

3.1. Evaluation for the Hyperparameters in Quadruplet Loss Function

The quadruplet loss function has five main hyperparameters: the total number of labels C, the hyperparameter μ in equations (3) and (4), and τ and m in equations (5) and (6). When evaluating these hyperparameters, we set numQL−LSTM to 128, the learning rate to 1.0e-3, the iteration time to 500, and the batch size to 400. We first select the best C based on the Calinski Harabasz Score and Y_Silhouette_score. Figure 7 shows both scores for different numbers of labels; both achieve their best result when C is 25. Then, we gradually tune the hyperparameters μ, τ, and m. Figure 8 shows the results of the estimation model with different μ, τ, and m at the Laegern meteorological station. When we set μ to 5.0e-3, τ to 1.0e-3, and m to 5.0e-5, our QL-LSTM model achieves the best estimation performance (RMSE = 0.789, MAE = 0.605, NS = 0.977, WI = 0.994, and LMI = 0.865). This is probably because the smaller the hyperparameters we set, the fewer hard samples are computed, whereas larger hyperparameters cause more redundant samples to be computed.
Figure 7

The Calinski Harabasz Score and Y_Silhouette_score with different numbers of labels.

Figure 8

The estimation results with different μ, τ, and m at Laegern meteorological station.

3.2. The Impact of Different Inputs on the Performance of the Predictive Model

In this part, we analyzed the environmental factors that may affect our QL-LSTM model for Ts estimation. Considering that the interaction between different environmental factors would have an impact on Ts estimation, we combined the meteorological variables accordingly and input them into the submodels we set as follows:

Input I1: Ta(d − 1)
Input I2: Ta(d − 1) + Rs(d − 1)
Input I3: Ta(d − 1) + Rs(d − 1) + VPD(d − 1)
Input I4: Ta(d − 1) + Rs(d − 1) + VPD(d − 1) + W(d − 1)
Input I5: Ta(d − 1) + Rs(d − 1) + VPD(d − 1) + W(d − 1) + Ap(d − 1)
Output: Ts(d)

Then, considering that past Ts will continue to influence future Ts estimation, we carried out lag processing for the past Ts over different days, as follows:

Input I6: Ts(d − 1)
Input I7: Ts(d − 1) + Ts(d − 2)
Input I8: Ts(d − 1) + Ts(d − 2) + Ts(d − 3)
Input I9: Ts(d − 1) + Ts(d − 2) + Ts(d − 3) + Ts(d − 4)
Input I10: Ts(d − 1) + Ts(d − 2) + Ts(d − 3) + Ts(d − 4) + Ts(d − 5)
Output: Ts(d)

We input the combinations specified above into QL-LSTM to predict Ts(d) at the 5 cm depth of the Laegern station. For our model, we set the hyperparameters μ to 5.0e-3, τ to 1.0e-3, m to 5.0e-5, C to 25, numQL−LSTM to 128, the learning rate to 1.0e-3, the iteration time to 500, and the batch size to 400; the results are presented in Table 4. Clearly, QL-LSTM(I3) and QL-LSTM(I8) are better than the other submodels in their respective groups. Meanwhile, we can conclude that W(d − 1), Ap(d − 1), Ts(d − 4), and Ts(d − 5) all degrade the performance of the predictive model. In addition, by comparing the estimation results between meteorological-variable input and past-Ts input, we found that our model with past Ts could achieve greater accuracy than the one with meteorological variables. The reason may be that the predictive model with past-Ts input has a stronger memory for the Ts variable.
The estimation of future Ts should make the best use of its continuity; in this way, we can make a reliable Ts estimation that not only continues its historical tendency but also conforms to its actual behavior. Hence, we constructed the predictive model QL-LSTM(I11) by combining the environmental factors (Ta(d − 1), Rs(d − 1), VPD(d − 1)) with the past Ts (Ts(d − 1), Ts(d − 2), Ts(d − 3)), which was also considered in estimating Ts(d) at the 5 cm depth of the Laegern station. Experiment results show that it achieves the best estimation performance (RMSE = 0.789, MAE = 0.605, NS = 0.977, WI = 0.994, and LMI = 0.865). Hence, the final input for the predictive models is the environmental factors (Ta(d − 1), Rs(d − 1), VPD(d − 1)) and the past Ts (Ts(d − 1), Ts(d − 2), Ts(d − 3)).
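Building the final input combination (environmental factors on day d − 1 plus three days of lagged Ts) from raw daily series reduces to stacking lagged columns; the array names and the toy series below are assumptions for illustration.

```python
import numpy as np

def build_lagged_inputs(ta, rs, vpd, ts):
    """Stack Ta(d-1), Rs(d-1), VPD(d-1), Ts(d-1), Ts(d-2), Ts(d-3) as
    features with target Ts(d). Each argument is a 1-D daily series."""
    d = np.arange(3, len(ts))                 # first usable day needs 3 lags
    X = np.column_stack([ta[d - 1], rs[d - 1], vpd[d - 1],
                         ts[d - 1], ts[d - 2], ts[d - 3]])
    y = ts[d]
    return X, y

# Toy series where each value equals its day index, so lags are easy to read.
days = np.arange(10, dtype=float)
X, y = build_lagged_inputs(days, days, days, days)
```

With ten days of data and three lags, seven (X, y) training rows remain; the first row holds the day-2 values for the three meteorological features and Ts, plus Ts from days 1 and 0.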
Table 4

Predictive performance of QL-LSTM at 5 cm depth for the Laegern station.

Method         RMSE    MAE     NS      WI      LMI
QL-LSTM(I1)    1.469   1.143   0.921   0.980   0.744
QL-LSTM(I2)    1.444   1.113   0.923   0.981   0.751
QL-LSTM(I3)    1.221   0.937   0.945   0.986   0.790
QL-LSTM(I4)    1.396   1.078   0.928   0.982   0.759
QL-LSTM(I5)    1.454   1.143   0.922   0.980   0.744
QL-LSTM(I6)    1.143   0.866   0.952   0.987   0.806
QL-LSTM(I7)    1.095   0.823   0.956   0.988   0.815
QL-LSTM(I8)    1.077   0.815   0.957   0.989   0.817
QL-LSTM(I9)    1.096   0.834   0.956   0.988   0.813
QL-LSTM(I10)   1.115   0.842   0.954   0.988   0.811
The three methods (QL-LSTM(I3), QL-LSTM(I8), and QL-LSTM(I11)) were used to test the data of the Laegern station. Figure 9 shows the linear relationship between the predicted and observed values. The QL-LSTM(I11) model gives the best predictive performance, with y = 0.9899x + 0.3022 and a higher R² (0.9792) than the others. In the frequency diagram (Figure 10) of the models (QL-LSTM(I3), QL-LSTM(I8), and QL-LSTM(I11)), QL-LSTM(I11) also has a higher frequency (91%) than the others. Therefore, we can conclude that a predictive model combining the environmental factors and the past Ts normally outperforms the other two (driven by either past measurements of Ts or other meteorological information alone) in Ts estimation.
Figure 9

The scatterplots of the predictive model testing results (the values of estimated and observed) for the Laegern station. (a) QL-LSTM(I11), (b) QL-LSTM(I8) model, and (c) QL-LSTM(I3) model.

Figure 10

The frequency plot of the predictive models (absolute estimation error) for the Laegern station. (a) QL-LSTM(I11), (b) QL-LSTM(I8) model, and (c) QL-LSTM(I3) model.

3.3. Comparison with Different Models

In this part, our QL-LSTM model was compared with several advanced models, including SVR, BPNN, ELM, and LSTM. The Ta, Rs, and VPD data on day "d − 1" and the Ts data on different days acted as input to the predictive models, and the output was the predicted Ts on days "d", "d + 5", and "d + 15". Time steps were in days. The testing results of the five models at 5, 10, and 15 cm depth on the 1st, 5th, and 15th days at the Laegern station are shown in Table 5. Our QL-LSTM model performs better than the existing advanced models at the 5 cm depth on the 1st day. Specifically, its RMSE of 0.789 is a reduction of 13% relative to LSTM, 22% relative to ELM, 28% relative to BPNN, and 22% relative to SVR. The MAE of QL-LSTM amounts to 0.605, while the others reach 0.813 (SVR), 0.872 (BPNN), 0.824 (ELM), and 0.821 (LSTM). Meanwhile, the QL-LSTM model achieved higher values of NS, WI, and LMI. Hence, our model clearly had the best performance in this case. For the results at 5 cm depth on the 15th day, LSTM achieved a higher WI (0.892) than the other models, but it is similar to the WI (0.891) of our model. For the 10 and 15 cm depth results on the 1st, 5th, and 15th days, the performance of our QL-LSTM model remains stable, although it has slightly lower WI values (0.952 and 0.933) than the LSTM model (0.954 and 0.934) in individual cases. It can be seen that the predictive performance improves as soil depth increases (from 5 cm to 15 cm) but decreases as the forecast horizon lengthens (from the 1st to the 15th day); the latter is caused by systematic errors in long-term estimation [39].
Table 5

The predictive performance with different models at the Laegern station.

Depth (cm)  Day     Method    RMSE    MAE     NS        WI      LMI
5           d       SVR       1.007   0.813   −1315.5   0.412   −882.6
                    BPNN      1.101   0.872   0.930     0.982   0.838
                    ELM       1.016   0.824   0.925     0.983   0.849
                    LSTM      0.910   0.821   0.926     0.983   0.850
                    QL-LSTM   0.789   0.605   0.977     0.994   0.865
            d + 5   SVR       2.665   2.133   −1205.3   0.419   −845.1
                    BPNN      2.812   2.230   0.700     0.919   0.535
                    ELM       2.832   2.261   0.693     0.916   0.528
                    LSTM      2.643   2.113   0.730     0.926   0.560
                    QL-LSTM   2.436   1.908   0.782     0.937   0.573
            d + 15  SVR       3.322   2.680   −1215.2   0.422   −846.9
                    BPNN      3.271   2.657   0.601     0.886   0.438
                    ELM       3.403   2.723   0.572     0.881   0.424
                    LSTM      3.278   2.652   0.601     0.892   0.439
                    QL-LSTM   3.078   2.428   0.651     0.891   0.455

10          d       SVR       1.011   0.850   −1291.5   0.414   −870.2
                    BPNN      0.986   0.831   0.923     0.983   0.837
                    ELM       1.090   0.862   0.916     0.981   0.830
                    LSTM      0.980   0.824   0.923     0.983   0.839
                    QL-LSTM   0.761   0.605   0.975     0.993   0.854
            d + 5   SVR       2.173   1.752   −1211.9   0.417   −842.1
                    BPNN      2.188   1.751   0.774     0.935   0.608
                    ELM       2.291   1.833   0.762     0.938   0.596
                    LSTM      2.181   1.745   0.781     0.954   0.615
                    QL-LSTM   1.973   1.545   0.833     0.952   0.628
            d + 15  SVR       2.794   2.220   −1244.5   0.431   −850.8
                    BPNN      2.831   2.263   0.652     0.909   0.493
                    ELM       2.920   2.300   0.632     0.899   0.484
                    LSTM      2.763   2.131   0.667     0.907   0.509
                    QL-LSTM   2.538   1.975   0.724     0.917   0.524

15          d       SVR       0.531   0.450   −1304.5   0.424   −873.5
                    BPNN      0.525   0.442   0.942     0.987   0.921
                    ELM       0.637   0.488   0.940     0.987   0.913
                    LSTM      0.512   0.436   0.944     0.988   0.927
                    QL-LSTM   0.312   0.239   0.995     0.998   0.938
            d + 5   SVR       1.761   1.400   −1262.3   0.426   −857.9
                    BPNN      1.773   1.441   0.826     0.957   0.667
                    ELM       1.921   1.532   0.800     0.950   0.643
                    LSTM      1.742   1.408   0.831     0.958   0.675
                    QL-LSTM   1.533   1.203   0.882     0.968   0.687
            d + 15  SVR       2.396   1.937   −1266.3   0.431   −857.4
                    BPNN      2.406   1.926   0.707     0.933   0.537
                    ELM       2.508   2.008   0.674     0.925   0.518
                    LSTM      2.401   1.918   0.707     0.934   0.539
                    QL-LSTM   2.189   1.719   0.760     0.933   0.553
The same strategy was applied at the Fluehli station to further verify the performance of the models, with the results shown in Table 6. The QL-LSTM model again generally performs best. However, at 5 cm depth on the 15th day the BPNN model performs slightly better (RMSE = 2.801, MAE = 2.099, NS = 0.769, WI = 0.937, LMI = 0.584), as it does at 15 cm depth on the 5th day (RMSE = 1.832, MAE = 1.352, NS = 0.897, WI = 0.973, LMI = 0.726). Our method probably falls short in these cases because the randomly initialized weights of the LSTM model can converge to a nonoptimal solution; moreover, since the novel quadruplet loss is applied on top of the LSTM model, it can only improve estimation performance over LSTM to a certain extent. All in all, the tests on data from different regions show that the QL-LSTM model usually performs best for T prediction across the different depths and lead times.
Table 6

The predictive performance with different models at the Fluehli station.

Depth (cm)  Day     Method    RMSE    MAE     NS        WI      LMI
5           d       SVR       0.691   0.549   −1311.5   0.429   −868.5
                    BPNN      0.718   0.550   0.942     0.918   0.916
                    ELM       0.693   0.538   0.942     0.988   0.918
                    LSTM      0.723   0.549   0.941     0.987   0.916
                    QL-LSTM   0.492   0.352   0.992     0.998   0.930
            d + 5   SVR       2.316   1.745   −1266.6   0.431   −851.7
                    BPNN      2.297   1.716   0.820     0.955   0.687
                    ELM       2.463   1.880   0.799     0.949   0.654
                    LSTM      2.291   1.718   0.821     0.956   0.686
                    QL-LSTM   2.084   1.526   0.872     0.966   0.698
            d + 15  SVR       3.023   2.316   −1257.5   0.433   −847.4
                    BPNN      2.801   2.099   0.769     0.937   0.584
                    ELM       3.143   2.400   0.694     0.918   0.534
                    LSTM      2.971   2.353   0.723     0.922   0.557
                    QL-LSTM   2.815   2.165   0.766     0.933   0.570

10          d       SVR       0.628   0.518   −1310.1   0.428   −870.4
                    BPNN      0.635   0.508   0.944     0.988   0.926
                    ELM       0.621   0.510   0.944     0.988   0.926
                    LSTM      0.610   0.500   0.944     0.988   0.928
                    QL-LSTM   0.381   0.286   0.995     0.998   0.941
            d + 5   SVR       2.043   1.529   −1274.5   0.430   −856.9
                    BPNN      2.021   1.510   0.846     0.963   0.721
                    ELM       2.153   1.600   0.830     0.958   0.697
                    LSTM      1.777   1.506   0.901     0.973   0.732
                    QL-LSTM   1.794   1.307   0.899     0.973   0.732
            d + 15  SVR       2.813   2.143   −1271.7   0.432   −854.43
                    BPNN      2.792   2.112   0.738     0.933   0.594
                    ELM       2.926   2.300   0.717     0.926   0.558
                    LSTM      2.802   2.108   0.736     0.932   0.597
                    QL-LSTM   2.601   1.906   0.789     0.948   0.612

15          d       SVR       0.631   0.506   −1291.9   0.430   −862.9
                    BPNN      0.642   0.500   0.943     0.988   0.926
                    ELM       0.407   0.468   0.944     0.988   0.930
                    LSTM      0.618   0.465   0.944     0.988   0.931
                    QL-LSTM   0.409   0.290   0.994     0.998   0.941
            d + 5   SVR       2.028   1.551   −1269.3   0.429   −853.7
                    BPNN      1.832   1.352   0.897     0.973   0.726
                    ELM       2.173   1.638   0.830     0.958   0.691
                    LSTM      2.042   1.555   0.846     0.962   0.715
                    QL-LSTM   1.835   1.354   0.896     0.972   0.725
            d + 15  SVR       2.771   2.156   −1257.7   0.433   −848.4
                    BPNN      2.762   2.134   0.748     0.935   0.595
                    ELM       2.802   2.200   0.741     0.931   0.579
                    LSTM      2.795   2.112   0.742     0.933   0.597
                    QL-LSTM   2.552   1.931   0.805     0.949   0.608

4. Conclusions

Soil temperature (T) is a key physical variable of the land surface that affects many processes, such as the growth and yield of crops, so accurate T prediction is important. This paper proposed the QL-LSTM model and compared it with state-of-the-art predictive models, using meteorological data and past T from the Laegern and Fluehli stations (Switzerland) for daily T estimation at 5, 10, and 15 cm depth on the 1st, 5th, and 15th days. The experimental results showed that the QL-LSTM model outperformed the existing advanced models for T estimation in most cases. To enrich the information processed by the loss function and further improve estimation performance, we designed a novel quadruplet loss function that combines the traditional squared-error loss with distance metric learning between sample features: similar samples are pulled together while dissimilar samples are pushed apart, which improves estimation performance to a certain extent. However, the many hyperparameters in our method may make the estimation sensitive and limit its generalization to other estimation tasks. In future work, a parametric adaptive method for the loss function will be explored.
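The combination described above (squared-error regression loss plus a quadruplet metric-learning term) can be sketched as follows. This is a hypothetical illustration of the general idea: the margin values, the weight `alpha`, and the anchor/positive/negative pairing scheme are our own assumptions, not the paper's exact formulation.

```python
import numpy as np

def quadruplet_guided_loss(pred, target, f_a, f_p, f_n1, f_n2,
                           margin1=1.0, margin2=0.5, alpha=0.1):
    """Illustrative sketch (not the paper's exact loss): a squared-error
    regression term plus a quadruplet term that pulls anchor/positive
    features together and pushes two negatives away, margin-style."""
    mse = np.mean((pred - target) ** 2)
    d_ap = np.sum((f_a - f_p) ** 2, axis=1)    # anchor-positive distance
    d_an = np.sum((f_a - f_n1) ** 2, axis=1)   # anchor-negative distance
    d_nn = np.sum((f_n1 - f_n2) ** 2, axis=1)  # negative-negative distance
    quad = (np.maximum(0.0, d_ap - d_an + margin1).mean()
            + np.maximum(0.0, d_ap - d_nn + margin2).mean())
    return mse + alpha * quad                  # weighted combination
```

Minimizing the hinge terms drives similar-sample distances below dissimilar-sample distances by at least the margins, which is the "zoom similar, push dissimilar" behavior the loss is meant to provide on top of the usual squared-error fit.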