Alan Torres-Alvarado¹, Luis Alberto Morales-Rosales², Ignacio Algredo-Badillo¹, Francisco López-Huerta³, Mariana Lobato-Baez⁴, Juan Carlos López-Pimentel⁵.
Abstract
The latest generation of communication networks, such as software-defined vehicular networks (SDVNs) and vehicular ad-hoc networks (VANETs), should evaluate their communication channels to adapt their behavior. The quality of communication in data networks depends on the behavior of the transmission channel selected to send the information. Transmission channels can be affected by diverse problems, ranging from physical phenomena (e.g., weather, cosmic rays) to interference or faults inherent to data spectra. In particular, if the channel has good transmission quality, we can maximize the use of the available bandwidth. Otherwise, fault-tolerant schemes that resolve errors or failures must be included, although these schemes consume more energy and degrade the transmission speed because lost packets have to be requested again (recovery). In this sense, one of the open problems in communications is how to design and implement an efficient, low-power mechanism capable of sensing the quality of the channel and automatically making the adjustments needed to select the channel over which to transmit. In this work, we present a trade-off analysis based on hardware implementations that identify whether a channel has low or high quality, implementing four machine learning algorithms: Decision Trees, Multi-Layer Perceptron, Logistic Regression, and Support Vector Machines. We obtained the best trade-off, with an accuracy of 95.01% and an efficiency of 9.83 Mbps/LUT (LookUp Table), with a hardware implementation of a Decision Tree algorithm with a depth of five.
Keywords: FPGA; channel quality classification; hardware implementation; machine learning
Year: 2022 PMID: 35408115 PMCID: PMC9003435 DOI: 10.3390/s22072497
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
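The abstract describes a binary channel-quality classifier, with the best hardware trade-off obtained by a depth-5 decision tree. A minimal software-side sketch of that classifier is shown below; scikit-learn, the toy labeling rule, and the synthetic data are assumptions for illustration, since the paper's dataset is not part of this excerpt (only two of its 15 input attributes are synthesized here).

```python
# Hypothetical sketch: binary channel-quality classification with a
# depth-limited decision tree, mirroring the paper's best trade-off.
# The data and labeling rule are synthetic, for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
# Two of the paper's input attributes, synthesized:
rsrp = rng.uniform(-120, -70, n)   # reference signal received power (dBm)
rsrq = rng.uniform(-20, -3, n)     # reference signal received quality (dB)
X = np.column_stack([rsrp, rsrq])
# Toy rule (assumption): stronger signal -> High Quality (1)
y = ((rsrp > -95) & (rsrq > -12)).astype(int)

clf = DecisionTreeClassifier(criterion="entropy", max_depth=5, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

Because the toy labels are axis-aligned in the two features, a depth-5 tree separates them almost perfectly; the real dataset's 15 attributes would of course behave differently.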
Figure 1An operating environment of communication networks based on SDR.
Figure 2Proposed radio system based on SDR and the ML/DL Co-processor.
Figure 3System model to implement in V2V and V2I Communications our proposal as part of an On-Board System.
Figure 4Multiplier Adder Module 1 and its Components. (a) Common Multipliers and Adders. (b) Multiplier Adder Module 1.
Figure 5Block Diagram and Module of the Sigmoid Function.
Figure 6Comparator Module of MLP and Logistic Regression.
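Figure 5 shows a dedicated sigmoid module, but its internals are not detailed in this excerpt. One common hardware-friendly choice (an assumption here, not necessarily the authors' design) is a piecewise-linear approximation such as PLAN, whose slopes are powers of two so the multiplications reduce to shifts and adds:

```python
# Sketch of a hardware-friendly sigmoid: the PLAN piecewise-linear
# approximation. Whether the paper's sigmoid module uses this exact
# scheme is an assumption; it illustrates why such a block is cheap
# to realize on an FPGA (shifts and adds only, no exponentials).
import math

def plan_sigmoid(x: float) -> float:
    """Piecewise-linear approximation of 1 / (1 + exp(-x))."""
    ax = abs(x)
    if ax >= 5.0:
        y = 1.0
    elif ax >= 2.375:
        y = 0.03125 * ax + 0.84375   # slope 1/32: 5-bit right shift
    elif ax >= 1.0:
        y = 0.125 * ax + 0.625       # slope 1/8: 3-bit right shift
    else:
        y = 0.25 * ax + 0.5          # slope 1/4: 2-bit right shift
    return y if x >= 0 else 1.0 - y

# The approximation stays within ~0.02 of the exact sigmoid:
err = max(abs(plan_sigmoid(v / 100) - 1 / (1 + math.exp(-v / 100)))
          for v in range(-800, 801))
print(err)
```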
Inputs and Outputs Attributes of the ML models and the Proposed CQI Label.
| (a) Inputs and Outputs Attributes | |
|---|---|
| Inputs | rsrp, rsrq, macStats_phr, macStats_totalBytesSdusDl, macStats_totalTbsUl, macStats_totalPduDl, macStats_totalPrbUl, macStats_totalPduUl, macStats_totalPrbDl, macStats_totalTbsDl, pdcpStats_pktRx, pdcpStats_pktRxSn, pdcpStats_pktTxSn, pdcpStats_pktRxBytes, pdcpStats_pktTxW |
| Outputs | Low Quality, High Quality |

| (b) Proposed CQI Label | | | | |
|---|---|---|---|---|
| CQI Index | Proposed Label | Modulation | Code Rate × 1024 | Efficiency |
| 0 | Out of Range | – | – | – |
| 1 | Low Quality | QPSK | 78 | 0.1523 |
| 2 | Low Quality | QPSK | 120 | 0.2344 |
| 3 | Low Quality | QPSK | 193 | 0.3770 |
| 4 | Low Quality | QPSK | 308 | 0.6016 |
| 5 | Low Quality | QPSK | 449 | 0.8770 |
| 6 | Low Quality | QPSK | 602 | 1.1758 |
| 7 | High Quality | 16QAM | 378 | 1.4766 |
| 8 | High Quality | 16QAM | 490 | 1.9141 |
| 9 | High Quality | 16QAM | 616 | 2.4063 |
| 10 | High Quality | 64QAM | 466 | 2.7305 |
| 11 | High Quality | 64QAM | 567 | 3.3223 |
| 12 | High Quality | 64QAM | 666 | 3.9023 |
| 13 | High Quality | 64QAM | 772 | 4.5234 |
| 14 | High Quality | 64QAM | 873 | 5.1152 |
| 15 | High Quality | 64QAM | 948 | 5.5547 |
Hyper-parameters Used for the Developed Machine Learning Algorithms.
| Algorithm | Considered Hyperparameters | Obtained Hyperparameters | Configuration |
|---|---|---|---|
| Decision Tree | criterion: {entropy, gini}, max_depth: {1:15}, min_samples_split: {2:4}, min_samples_leaf: {1:3}, min_weight_fraction_leaf: {0, 0.1, 0.01, 0.001} | criterion: {entropy}, max_depth: {6}, min_samples_split: {2}, min_samples_leaf: {3}, min_weight_fraction_leaf: {0} | Implemented Depths: five and six, Designs based on comparators |
| MLP | neurons: {1:10}, activation_function: {softmax, softplus, softsign, relu, tanh, sigmoid, hard_sigmoid, linear}, optimizer: {SGD, RMSprop, adagrad, adadelta}, batch_size: {10, 20, 40, 60, 80, 100}, learning_rate: {0.001, 0.01, 0.1, 0.2, 0.3}, weight_init: {uniform, lecun_uniform, normal, zero, glorot_normal, glorot_uniform, he_normal, he_uniform}, dropout: {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9} | neurons: {5}, activation_function: {relu}, optimizer: {RMSprop}, batch_size: {5}, learning_rate: {0.01}, weight_init: {glorot_normal}, dropout: {0} | MLP based on three dense layers, with a neuron configuration of [5, 5, 1] |
| Logistic Regression | penalty: {l1, l2, elasticnet, none}, tol: {0.0001, 0.001, 0.01, 0.1}, C: { 1 × 10 | penalty: {l2}, tol: {0.0001}, C: {10000}, solver: {newton_cg}, max_iteration: {1000} | Logistic Regression model based on the modules: Multiplier - Adder 1, Sigmoid, and Comparator |
| SVM | kernel: {poly, rbf, sigmoid, linear}, C: {1 × 10 | kernel: {rbf}, C: {464.1588}, gamma: {scale} | SVM model based on the modules Multiplier-Adder 1 and Comparators |
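The "Considered Hyperparameters" column describes a grid search; the sketch below reproduces it in outline for the Decision Tree using scikit-learn's `GridSearchCV` (an assumption about tooling, since the paper's search procedure is not detailed in this excerpt, and the data here is a synthetic placeholder):

```python
# Grid search over the Decision Tree hyperparameter ranges listed in
# the table. Data is a synthetic placeholder, for illustration only.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))              # placeholder features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # placeholder binary labels

# Grid mirroring the "Considered Hyperparameters" column:
grid = {
    "criterion": ["entropy", "gini"],
    "max_depth": list(range(1, 16)),
    "min_samples_split": [2, 3, 4],
    "min_samples_leaf": [1, 2, 3],
    "min_weight_fraction_leaf": [0, 0.1, 0.01, 0.001],
}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

On the paper's dataset this search reportedly selected entropy, a depth of 6, `min_samples_split` 2, `min_samples_leaf` 3, and `min_weight_fraction_leaf` 0; on the synthetic data above the winner will differ.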
Figure 7Decision Tree with a Depth of Five.
Figure 8Neural Network Hardware Architecture.
Figure 9Multiplier Adder Module 1 and its Components: (a) Components of the Neurons of the Second and Third Layer; (b) Multiplier-Adder 2.
Figure 10Neurons of the Feed-Forward Neural Network. (a) Neuron of the First Layer, 15-bit input and ReLU activation function; (b) Neuron of the Second Layer, 5-bit input and ReLU activation function; (c) Neuron of the Third Layer, 5-bit input and Sigmoid activation function.
Figure 11Logistic Regression Hardware Architecture.
Figure 12Support Vector Machine Hardware Architecture.
Resource Comparison of the Implemented Machine Learning Algorithms.
| Machine Learning Technique | Latency (Cycles) | LUT | FF | DSP | Minimum Period (ns) | Max. Frequency (MHz) | Throughput (Mbps) | Efficiency (Mbps/LUT) | Balanced Accuracy (%) | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Decision Tree (depth 6) | 25 | 890 | 1389 | 0 | 3.125 | 320 | 2355.20 | 2.6462 | 95.2626 | 94.88 | 95.64 |
| Decision Tree (depth 5) | 23 | 320 | 812 | 0 | 3.093 | 323.3107 | 3148.7650 | 9.8398 | 95.0180 | 94.12 | 96.91 |
| Multi-Layer Perceptron | 317 | 36,825 | 79,465 | 439 | 6.081 | 164.44 | 249.004 | 0.0067 | 94.0295 | 97.48 | 90.57 |
| Logistic Regression | 21 | 5886 | 1986 | 100 | 31.427 | 31.8197 | 727.3090 | 0.1235 | 93.0713 | 93.14 | 92.99 |
| Support Vector Machine | 25 | 4865 | 1922 | 75 | 16.993 | 58.8477 | 1129.8770 | 0.2322 | 92.0657 | 92.99 | 91.13 |
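The Efficiency column is simply throughput divided by LUT count; recomputing it from the table's Throughput and LUT values confirms the depth-5 decision tree as the best trade-off:

```python
# Efficiency (Mbps/LUT) = Throughput (Mbps) / LUTs, values from the
# resource-comparison table above.
designs = {
    "DT depth 6": (2355.20, 890),
    "DT depth 5": (3148.7650, 320),
    "MLP":        (249.004, 36825),
    "LogReg":     (727.3090, 5886),
    "SVM":        (1129.8770, 4865),
}
eff = {name: tput / lut for name, (tput, lut) in designs.items()}
best = max(eff, key=eff.get)
print(best, round(eff[best], 4))  # → DT depth 5 9.8399
```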
Figure 13Trade-Off between Accuracy and Efficiency.
Resource Comparison Among Different Works.
| Work | Algorithm | Evaluation | Platform | Latency | LUT | FF | BRAM | DSP | Freq. (MHz) | Throughput | Efficiency (Mbps/LUT) | Objective |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| This Work | Decision Tree (depth 5) | Acc: 95.01% | Virtex-7 | 23 CC | 320 | 812 | 0 | 0 | 323.3107 | 3148.7 Mbps | 9.83 | Low and High Quality Classification |
| Lin Z. et al. [ | Decision Tree | Acc: 89.10% | VCU1525 | 0.36 ms | 63,079 | - | 486 | 202 | 308 | - | - | Online Learning |
| Choudhury et al. [ | TM Decision Tree | Acc: ±68% | Virtex Ultrascale+ | 24 ms | 894,000 | 6488 | 228 | 200 | 62 | - | - | Dynamic Training |
| Novickis R. et al. [ | FFNNs | MAE 0.0232 | Zynq-7000 | 10.22 us | 8958 | - | 8 | 14 | 100 | 1.52 Sam/s | - | Estimating Forces of an |
| Kachris C. et al. [ | Logistic Regression | Accuracy: 90% | Zynq FPGA SoC. | - | 44177 | 47841 | 42 | 160 | 667 | - | - | Framework for |
| Batista G. et al. [ | SVM | Accuracy: 98% | Cyclone IV EP4CE115F29C7 | 1.648 us | 1314 | - | - | - | 50 | 1.64 MOPS | - | Speech Recognition |
| Wu R. et al. [ | SVM | Effective Utilization Rate: 94.82% | Xilinx ZYNQ 7Z020 | 274.50–40632 ns | 12090 | - | 8 | - | 100 | - | - | Matrix Computing |