Yunfei Yan ¹, Peng Sun ¹, Jieyong Zhang ¹, Yutang Ma ¹, Liang Zhao ¹, Yueyi Qin ².
Abstract
With the widespread adoption of service-oriented architectures (SOA), services with the same functionality but different Quality of Service (QoS) are proliferating, which challenges users' ability to build high-quality services. It is often costly for users to evaluate the QoS of all feasible services; it is therefore necessary to investigate QoS prediction algorithms that help users find services meeting their needs. In this paper, we propose a QoS prediction algorithm called the MFDK model, which fills in sparse historical QoS values with a non-negative matrix factorization algorithm and predicts future QoS values with a deep neural network. In addition, the model uses a Kalman filter to correct its predictions with real-time QoS observations, reducing prediction error. Through extensive simulation experiments on the WS-DREAM dataset, we validate that the MFDK model achieves better prediction accuracy than the baseline models and maintains good prediction results under different tensor densities and observation densities. We further demonstrate the rationality of our proposed model and its prediction performance through model ablation and parameter tuning experiments.
Keywords: Quality of Service; deep learning; service computing; service recommendation
Year: 2022 PMID: 35957206 PMCID: PMC9371177 DOI: 10.3390/s22155651
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
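As context for the abstract's correction step: MFDK refines a learned QoS prediction with a live observation via Kalman filtering. The following is a minimal one-dimensional sketch of that idea only, not the authors' implementation; the noise variances `q_var` and `r_var` are hypothetical tuning knobs.

```python
import numpy as np

def kalman_correct(pred, obs, p_prior, q_var, r_var):
    """One scalar Kalman step: blend a model-predicted QoS value with a
    real-time observation. q_var is an assumed process-noise variance,
    r_var an assumed observation-noise variance (illustrative settings)."""
    p = p_prior + q_var                  # predict: inflate uncertainty
    k = p / (p + r_var)                  # Kalman gain
    corrected = pred + k * (obs - pred)  # move the estimate toward the observation
    p_post = (1.0 - k) * p               # posterior variance
    return corrected, p_post

# Toy usage: a DNN predicts a 1.8 s response time; a live probe observes 2.1 s.
val, var = 1.8, 1.0
val, var = kalman_correct(val, 2.1, var, q_var=0.01, r_var=0.25)
print(round(val, 3))  # corrected estimate lies between prediction and observation
```

The gain `k` decides how far the estimate moves toward the observation: a small `r_var` (trusted probes) pulls the corrected value close to the observed QoS, a large one keeps it near the model prediction.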
Comparison of methods and results in related work.
| Algorithm Category | Approach | Method | Prediction Accuracy of QoS | Reference |
|---|---|---|---|---|
| Static QoS prediction algorithm | Neighborhood-based CF algorithm | A CF-based approach for mining the similarity of users. | Outperforms common collaborative filtering and average prediction algorithms in terms of response time, availability, and latency | Shao et al. [ |
| | | WSRec: an improved CF-based approach combining the traditional user-based and item-based CF methods. | Better than the UMEAN, IMEAN, UPCC, and IPCC algorithms in terms of response time and failure rate | Zheng et al. [ |
| | Model-based CF algorithm | JDNMFL: a method based on a combination of matrix factorization and neural networks, including multi-source feature extraction and feature-interaction learning. | Better than the UPCC, IPCC, UIPCC, PMF, and FM algorithms in terms of response time and throughput | Xia et al. [ |
| | | NDMF: a method that integrates user neighborhoods, selected collaboratively, into an enhanced matrix factorization model via a deep neural network. | Outperforms the 12 baseline models in the article in terms of response time and throughput | Zou et al. [ |
| | | AMF: a method that combines probabilistic matrix factorization and neural attention networks for QoS prediction. | Outperforms the 8 baseline models in the article in terms of Normalized Discounted Cumulative Gain (NDCG) and Mean Average Precision (MAP) | Nguyen et al. [ |
| Dynamic QoS prediction algorithm | Feature engineering-based algorithm | A method that combines truncated singular value decomposition (SVD) and a classical ARIMA model. | Better than the UPCC, IPCC, and SerRec algorithms in terms of response time and throughput | Yan et al. [ |
| | | A method that combines Kalman filtering and a classical ARIMA model; personalized QoS prediction is then achieved with a modified neighborhood-based CF algorithm. | Better than the ARIMA and WSRec algorithms in terms of response time and throughput | Hu et al. [ |
| | | A method that combines time-series clustering, minimum description length, and dynamic time warping similarity; the most appropriate QoS prediction scheme is then provided to the user via multi-cloud. | Better than the UPCC, IPCC, combined UPCC and IPCC, and LASSO algorithms in terms of response time and throughput | Keshavarzi et al. [ |
| | Deep learning-based QoS prediction algorithm | TWQP: a two-stage QoS prediction method that performs predictions in the historical time slice and the current time slice, respectively. | Outperforms the 6 baseline models in the article in terms of response time and throughput | Jin et al. [ |
| | | MulA-LMRBF: a method that feeds in historical QoS data using a phase-space reconstruction method, then performs dynamic multi-step prediction with an RBF neural network improved by the Levenberg–Marquardt algorithm. | Outperforms the 5 baseline models in the article in terms of response time and throughput | Zhang et al. [ |
| | | DeepTSQP: a deep neural network with gated recurrent units (GRU) that learns and mines temporal features among users and services. | Outperforms the 9 baseline models in the article in terms of response time and throughput | Zou et al. [ |
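Several baselines recurring in the table above (UPCC, IPCC, UIPCC, WSRec) build on neighborhood-based CF. As a minimal sketch of the user-based (UPCC-style) variant only, assuming a user × service QoS matrix with `NaN` marking unobserved entries; the function name and top-k policy are illustrative, and each user is assumed to have at least one observation:

```python
import numpy as np

def upcc_predict(R, u, s, k=2):
    """Predict user u's QoS on service s from a user x service matrix R
    (NaN = missing): Pearson similarity over co-observed services, then a
    similarity-weighted, mean-centered aggregation of the top-k neighbors."""
    mask = ~np.isnan(R)
    means = np.array([R[i][mask[i]].mean() for i in range(R.shape[0])])
    sims = np.full(R.shape[0], -np.inf)
    for v in range(R.shape[0]):
        if v == u or not mask[v, s]:
            continue                         # neighbor must have observed s
        common = mask[u] & mask[v]           # services both users observed
        if common.sum() < 2:
            continue
        a = R[u, common] - means[u]
        b = R[v, common] - means[v]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom > 0:
            sims[v] = float(a @ b) / denom   # Pearson-style similarity
    top = [v for v in np.argsort(sims)[::-1][:k] if np.isfinite(sims[v])]
    if not top:
        return means[u]                      # fall back to the user mean
    w = sims[top]
    return means[u] + w @ (R[top, s] - means[top]) / np.abs(w).sum()

R = np.array([[1.2, 2.0, np.nan],
              [1.0, 2.1, 3.0],
              [5.0, 5.5, 6.1]])
print(upcc_predict(R, u=0, s=2))  # estimate user 0's unseen QoS on service 2
```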
Figure 1. Schematic diagram of Tucker decomposition.
Figure 2. User–service–time tensor.
Figure 3. General structure of the MFDK model.
Figure 4. Schematic representation of tensor expansion into fiber patterns.
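Figures 1, 2, and 4 concern the user–service–time QoS tensor and its Tucker decomposition. A small sketch using the `tensorly` library (an assumed tool choice; the paper does not name its implementation), with illustrative ranks:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Toy user x service x time QoS tensor (values e.g. response times in seconds).
X = tl.tensor(np.random.rand(8, 6, 4))

# Tucker factorizes X into a small core G plus one factor matrix per mode,
# X ~= G x_1 U_user x_2 U_service x_3 U_time; the ranks here are illustrative.
core, factors = tucker(X, rank=[3, 3, 2])
U_user, U_service, U_time = factors

X_hat = tl.tucker_to_tensor((core, factors))  # low-rank reconstruction of X
print(X_hat.shape)                            # (8, 6, 4)
```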
Figure 5. An LSTM unit structure.
Figure 6. Bidirectional long short-term memory (BiLSTM) neural network structure.
Figure 7. CNN-BiLSTM neural network structure.
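The CNN-BiLSTM combination that Figure 7 depicts can be sketched roughly as follows; this is an assumed PyTorch rendering with illustrative layer sizes, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Rough sketch of a CNN-BiLSTM block: a 1-D convolution extracts local
    patterns from a QoS time series, a bidirectional LSTM models temporal
    dependencies in both directions, and a linear head emits the next-step
    QoS value. Layer sizes are assumptions, not the paper's configuration."""
    def __init__(self, n_features=1, conv_channels=16, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(conv_channels, hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # 2x hidden: forward + backward

    def forward(self, x):                             # x: (batch, time, features)
        z = torch.relu(self.conv(x.transpose(1, 2)))  # convolve over time axis
        out, _ = self.bilstm(z.transpose(1, 2))       # (batch, time, 2*hidden)
        return self.head(out[:, -1])                  # predict from last step

model = CNNBiLSTM()
y = model(torch.randn(4, 10, 1))  # 4 series of 10 time steps -> shape (4, 1)
```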
Results of model comparison experiments (RMSE and MAE at five tensor densities).
| Forecasting Methodology | RMSE (0.1) | MAE (0.1) | RMSE (0.15) | MAE (0.15) | RMSE (0.2) | MAE (0.2) | RMSE (0.25) | MAE (0.25) | RMSE (0.3) | MAE (0.3) |
|---|---|---|---|---|---|---|---|---|---|---|
| ARIMA [ | 2.9209 | 1.0471 | 2.8388 | 1.0225 | 2.7578 | 0.9866 | 2.6186 | 0.9376 | 2.5119 | 0.9008 |
| TCN [ | 3.0182 | 1.1188 | 2.9754 | 1.0966 | 2.9146 | 1.0773 | 2.9422 | 1.0682 | 2.8556 | 1.0502 |
| WSPred [ | 1.7878 | 0.7684 | 1.7737 | 0.7563 | 1.7864 | 0.7653 | 1.7708 | 0.7512 | 1.7921 | 0.7638 |
| CLUS [ | 2.2625 | 0.8858 | 2.2494 | 0.8557 | 2.2168 | 0.8296 | 2.1782 | 0.8082 | 2.1434 | 0.7926 |
| PMF [ | 2.2441 | 0.9336 | 2.0951 | 0.8951 | 1.9961 | 0.8667 | 1.9271 | 0.8448 | 1.8773 | 0.8271 |
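For reference, the two metrics in the table are the standard root-mean-square error and mean absolute error over the held-out QoS entries; a minimal sketch:

```python
import numpy as np

def rmse_mae(y_true, y_pred):
    """RMSE and MAE as reported in the table above (observed entries only)."""
    err = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(np.abs(err)))

print(rmse_mae([1.0, 2.0, 4.0], [1.1, 1.8, 4.5]))  # -> (0.3162..., 0.2666...)
```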
Figure 8. Results of model ablation experiments. (a) Variation of RMSE at different tensor densities in the ablation experiment; (b) variation of MAE at different tensor densities in the ablation experiment.
Figure 9. Experimental results on the variation of the sparsity of the observations. (a) MAE variation at different tensor densities as the observations vary; (b) RMSE variation at different tensor densities as the observations vary.
Figure 10. Experimental results of Kalman filter parameter variations. (a) Variation of MAE at a tensor density of 0.1; (b) variation of MAE at a tensor density of 0.3; (c) variation of MAE at a tensor density of 0.5; (d) variation of RMSE at a tensor density of 0.1; (e) variation of RMSE at a tensor density of 0.3; (f) variation of RMSE at a tensor density of 0.5.