Liqiang Song, Huaiguang Wang, Zhiyong Shi.
Abstract
This research reviews the data compression methods applied to the data collected when monitoring the condition of equipment within an edge-computing framework. Condition monitoring of mechanical equipment gathers large amounts of signal data: signals from running machines are continuously transmitted for processing, so the data must be compressed and handled efficiently. This transmission consumes substantial resources, since it requires the allocation of large bandwidth and storage capacity. To address this problem, this article examines equipment condition monitoring based on edge computing. First, the signal is pre-processed at the edge so that fault characteristics can be identified quickly. Second, signals whose fault characteristics are difficult to identify are compressed to save transmission resources. Then, the different types of signal data collected during mechanical-equipment condition monitoring are compressed by various compression methods and uploaded to the cloud. Finally, the cloud platform, with its powerful processing capability, processes the data, improving the effective volume of data transmission. By examining and analyzing the condition-monitoring and signal-compression methods for mechanical equipment, the future development trend is elaborated to provide references and ideas for contemporary research on data monitoring and data-compression algorithms. Consequently, the manuscript presents the different compression methods in detail and clarifies which data-compression methods are suited to the signal compression of equipment based on edge computing.
Year: 2022 PMID: 36254227 PMCID: PMC9569220 DOI: 10.1155/2022/9489306
Source DB: PubMed Journal: Appl Bionics Biomech ISSN: 1176-2322 Impact factor: 1.664
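The abstract describes a four-step pipeline: pre-process at the edge, handle quickly identifiable faults locally, compress the remaining signals, and upload them to the cloud. A minimal Python sketch of that flow follows; the RMS feature, the threshold value, and the use of `zlib` are illustrative assumptions, not methods from the paper:

```python
import json
import math
import zlib


def edge_process(samples, rms_threshold=1.0):
    """Toy edge node: try a cheap local fault check first; otherwise
    compress the raw signal losslessly for upload to the cloud.
    The RMS feature and threshold are placeholders for illustration."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    if rms > rms_threshold:
        # Fault characteristics identified quickly at the edge -- no upload needed.
        return {"verdict": "fault-suspected", "rms": rms}
    # Hard-to-classify signal: compress before transmission to save bandwidth.
    raw = json.dumps(samples).encode()
    payload = zlib.compress(raw)
    return {"verdict": "upload", "bytes": len(payload), "ratio": len(raw) / len(payload)}
```

In a real deployment the compressed payload would be sent to a cloud platform for the heavyweight analysis; here the function only reports the achieved compression ratio.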
Figure 1. Signal analysis process of equipment monitoring.
Lossless compression methods.
| Compression algorithm model | Compression algorithm type | Algorithm title | Advantages | Disadvantages |
|---|---|---|---|---|
| Single-algorithm compression model | Statistics-based compression algorithms | Huffman | Fast calculation; the greater the frequency difference, the better the compression effect | Slow decoding, easily influenced by file size |
| Single-algorithm compression model | Statistics-based compression algorithms | Arithmetic coding | Good compression effect | Complex calculation process |
| Single-algorithm compression model | Statistics-based compression algorithms | RLE | Simple algorithm with a good compression effect when repeated characters are abundant | Poor, or even counterproductive, compression when repeated characters are scarce |
| Single-algorithm compression model | Statistics-based compression algorithms | Asymmetric numeral systems (ANS) | Compression ratio close to arithmetic coding; compression speed close to Huffman coding | rANS requires shift decomposition; tANS requires table construction |
| Single-algorithm compression model | Statistics-based compression algorithms | Finite state entropy (FSE) | ANS-based algorithm with high compression performance | Requires building a finite-state-entropy table |
| Single-algorithm compression model | Dictionary-based compression algorithms | LZ77 | High compression efficiency; very fast decompression (e.g., LZO) | Poor compression when repeated characters are far apart |
| Single-algorithm compression model | Dictionary-based compression algorithms | LZ78 | Needs no search buffer, saving memory | Dictionaries must be created and managed; complex to implement |
| Single-algorithm compression model | Dictionary-based compression algorithms | LZW | Simple method with a good compression effect; removes the second field of LZ78 encoding | Dictionary updates reduce the compression ratio (CR) |
| Hybrid-algorithm compression model | Improved compression-based algorithms | MTF + single algorithm | Improves the ordering of the data and outputs ordering indices, yielding a high CR | Suited to finite data only; hard to handle larger data volumes |
| Hybrid-algorithm compression model | Improved compression-based algorithms | BWT + single algorithm | BWT exploits the sequential arrangement of the data for a better compression effect | Includes a sorting step that consumes memory and increases compression time |
| Hybrid-algorithm compression model | Improved compression-based algorithms | XOR incremental encoding + single algorithm | Incremental encoding narrows the range of variation in the original data, reducing the number of binary bits required | Compression degrades when adjacent data vary greatly |
| Hybrid-algorithm compression model | Hybrid of different single algorithms | RLE + Huffman | Combines the effects of both methods for a higher CR and faster compression | Limited by the constraints both methods place on the dataset |
| Hybrid-algorithm compression model | Hybrid of different single algorithms | RLE + LZW | Better data compression; no duplicate characters encoded in the dictionary | Increased risk of error codes during coding |
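The RLE rows in the table are easy to make concrete. The following minimal run-length codec (a sketch, not any surveyed implementation) exhibits both table entries: long runs shrink dramatically, while data without repeats grows by a factor of two:

```python
def rle_encode(data: bytes) -> bytes:
    """Run-length encoding: emit (count, byte) pairs.
    Each count is capped at 255 so it fits in a single byte."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out += bytes([j - i, data[i]])
        i = j
    return bytes(out)


def rle_decode(data: bytes) -> bytes:
    """Inverse of rle_encode: expand each (count, byte) pair."""
    out = bytearray()
    for k in range(0, len(data), 2):
        out += bytes([data[k + 1]]) * data[k]
    return bytes(out)
```

For example, a run of 100 identical bytes encodes to 2 bytes, whereas `b"abc"` encodes to 6 bytes, i.e. the "opposite effect" the table warns about.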
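The Huffman row notes that symbols with larger frequency differences compress better. A compact code-table builder (a generic textbook construction, not tied to any paper in the table) makes that visible: more frequent symbols receive shorter codes, and the resulting code is prefix-free:

```python
import heapq
from collections import Counter


def huffman_codes(data: bytes) -> dict:
    """Build a Huffman code table (symbol -> bit string) from byte frequencies.
    Degenerate case: a single distinct symbol yields the empty code."""
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    next_id = len(heap)  # unique tiebreaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two rarest subtrees, prefixing their codes with 0/1.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]
```

Running it on `b"aaaabbc"` assigns `a` a 1-bit code and the rarer symbols 2-bit codes, which is exactly why skewed frequency distributions compress well under Huffman coding.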
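The "XOR incremental encoding + single algorithm" row can also be sketched briefly. XOR-ing each sample with its predecessor leaves small residues whenever the signal varies slowly, so a back-end entropy coder has fewer significant bits to pack; the helper names below are hypothetical:

```python
def xor_delta(values):
    """XOR each integer sample with its predecessor; slowly varying
    signals then yield many small residues with few significant bits."""
    prev = 0
    out = []
    for v in values:
        out.append(v ^ prev)
        prev = v
    return out


def xor_undelta(residues):
    """Invert xor_delta by re-accumulating the XOR chain."""
    prev = 0
    out = []
    for r in residues:
        prev ^= r
        out.append(prev)
    return out
```

For `[100, 101, 103, 102]` the residues after the first sample are just 1, 2, and 1, while a large jump between adjacent samples produces a large residue, which is the weakness noted in the table.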
Lossy compression methods.
| Compression algorithm model | Compression algorithm type | Algorithm title | Advantages | Disadvantages |
|---|---|---|---|---|
| Single-algorithm compression model | Quantization-based compression algorithms | Scalar quantization | Simple method and fast processing | Distorts the data |
| Single-algorithm compression model | Quantization-based compression algorithms | Vector quantization | Better than scalar quantization | Limited distortion |
| Single-algorithm compression model | Compression algorithms based on signal decomposition (threshold processing) | EMD | Fast signal decomposition and fast compression | IMFs are prone to overlap (mode mixing), resulting in signal distortion |
| Single-algorithm compression model | Compression algorithms based on signal decomposition (threshold processing) | Intrinsic time-scale decomposition | Faster than EMD, with high time–frequency resolution and a good compression effect | Linear interpolation tends to distort intersection positions |
| Single-algorithm compression model | Compression algorithms based on signal decomposition (threshold processing) | VMD | High decomposition accuracy, better separation of IMFs, accurate reduction of redundant information | Suffers boundary effects; parameter choices strongly affect the results |
| Single-algorithm compression model | Compression algorithms based on signal decomposition (threshold processing) | LMD | Better preserves the transient-change information of the original signal | Still has endpoint effects and lacks fast algorithms |
| Single-algorithm compression model | Compression algorithms based on sparse dictionary transformations (complete and overcomplete dictionaries) | STFT | Fast time–frequency conversion | Low resolution at high frequencies, causing signal loss during compression |
| Single-algorithm compression model | Compression algorithms based on sparse dictionary transformations (complete and overcomplete dictionaries) | DCT | Fast processing and a good sparse-transformation effect | The transformation does not suit all signals |
| Single-algorithm compression model | Compression algorithms based on sparse dictionary transformations (complete and overcomplete dictionaries) | DWT | Better time–frequency resolution; the LSWT algorithm does not consume memory | Excessive decomposition layers waste computational resources |
| Single-algorithm compression model | Compression algorithms based on sparse dictionary transformations (complete and overcomplete dictionaries) | Best orthogonal basis | Obtains a near-optimal signal representation | Sparsity is less effective when the signal cannot be represented by orthogonal components |
| Single-algorithm compression model | Compression algorithms based on sparse dictionary transformations (complete and overcomplete dictionaries) | Orthogonal matching pursuit | Fast convergence to a good sparse representation | The vertical projection of the processed signal is non-orthogonal, and the number of iterations grows |
| Single-algorithm compression model | Compression algorithms based on sparse dictionary transformations (complete and overcomplete dictionaries) | Generalized morphological component analysis | Adapts to different input signal types, improving calculation speed and signal-separation accuracy | Parameters must be set in advance, and their values directly affect the processing results |
| Single-algorithm compression model | Neural-network-based algorithms | RNN | Very strong nonlinear mapping; high compression ratio (CR) | Requires model training and substantial computational resources |
| Hybrid-algorithm compression model | Lossy single algorithm + lossless single algorithm | Signal decomposition + Huffman | Further increases the CR; introducing the lossless stage causes no secondary data loss | Increases algorithm complexity and data compression time |
| Hybrid-algorithm compression model | Lossy single algorithm + lossless single algorithm | Sparse dictionary transform + RLE | Further increases the CR; introducing the lossless stage causes no secondary data loss | Increases algorithm complexity and data compression time |
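The scalar-quantization row is the simplest lossy method in the table: each sample is snapped to the nearest multiple of a step size, which is fast but discards information. A one-line sketch (the step size is an arbitrary illustration):

```python
def quantize(x, step=0.5):
    """Uniform scalar quantization: snap each sample to the nearest
    multiple of `step`; the per-sample error is bounded by step/2.
    This is the source of the distortion noted in the table."""
    return [round(v / step) * step for v in x]
```

The distortion is bounded but irreversible, which is why the table lists it as the method's disadvantage despite its speed.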
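The "sparse dictionary transformation + threshold processing" idea behind the DCT row can be sketched end to end: transform the signal, zero all but the largest-magnitude coefficients (the lossy step), and invert. The naive O(n²) orthonormal DCT below is for illustration only; a real system would use a fast transform:

```python
import math


def dct(x):
    """Naive orthonormal DCT-II."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out


def idct(c):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    n = len(c)
    out = []
    for i in range(n):
        s = c[0] / math.sqrt(n)
        s += sum(math.sqrt(2 / n) * c[k] * math.cos(math.pi * (i + 0.5) * k / n)
                 for k in range(1, n))
        out.append(s)
    return out


def compress_lossy(x, keep=4):
    """Keep only the `keep` largest-magnitude DCT coefficients and zero
    the rest (threshold processing); the zeroed energy is the loss."""
    c = dct(x)
    order = sorted(range(len(c)), key=lambda k: -abs(c[k]))
    kept = set(order[:keep])
    return [ck if k in kept else 0.0 for k, ck in enumerate(c)]
```

Because the transform is orthonormal, the reconstruction error equals the energy of the discarded coefficients, so keeping more coefficients never increases the error; keeping all of them recovers the signal exactly.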