Mohamad Alameh, Yahya Abbass, Ali Ibrahim, Maurizio Valle.
Abstract
Embedding machine learning methods into the data decoding units may enable the extraction of complex information, making tactile sensing systems intelligent. This paper presents and compares implementations of a convolutional neural network model for tactile data decoding on various hardware platforms. Experimental results show a comparable classification accuracy of 90.88% for Model 3, outperforming similar state-of-the-art solutions in terms of inference time. The proposed implementation achieves an inference time of 1.2 ms while consuming around 900 μJ. Such an embedded implementation of intelligent tactile data decoding algorithms enables tactile sensing systems in different application domains such as robotics and prosthetic devices.
Keywords: convolutional neural network; embedding intelligence; tactile sensing systems
Year: 2020 PMID: 31963622 PMCID: PMC7019580 DOI: 10.3390/mi11010103
Source DB: PubMed Journal: Micromachines (Basel) ISSN: 2072-666X Impact factor: 2.891
Figure 1. Block diagram of the tactile sensing system.
Figure 2. Examples of visual (top) vs. pressure (middle) vs. tactile (bottom) images of common objects.
Figure 3. Architecture of the tested model. BaN, Batch Normalization.
Figure 4. Visual representation of the training, test, and validation split using cross-validation.
Figure 5. Example of a resized image for the sticky tape object; the red canvas, shown for illustration, marks the original image size (28 × 50).
Distribution of the number of parameters across the models' layers.
| Layer | Model 1 (28 × 50) | Model 2 (26 × 47) | Model 3 (28 × 40) | Model 4 (28 × 32) | Model 5 (24 × 32) |
|---|---|---|---|---|---|
| Conv1 | 208 | 208 | 208 | 208 | 208 |
| BaN1 | 16 | 16 | 16 | 16 | 16 |
| Conv2 | 1168 | 1168 | 1168 | 1168 | 1168 |
| BaN2 | 32 | 32 | 32 | 32 | 32 |
| Conv3 | 4640 | 4640 | 4640 | 4640 | 4640 |
| BaN3 | 64 | 64 | 64 | 64 | 64 |
| FC | 19,734 | 16,918 | 14,102 | 11,286 | 8470 |
| Total | 25,862 | 23,046 | 20,230 | 17,414 | 14,598 |
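The per-layer counts above follow the standard parameter formulas for convolutional and BatchNorm layers. A minimal sketch that reproduces them — note the kernel sizes and filter counts (5 × 5 then 3 × 3 kernels with 8, 16, and 32 filters) are assumptions inferred from the counts, not stated in this extract:

```python
def conv_params(kernel, c_in, c_out):
    """Trainable parameters of a 2D conv layer: (k*k*c_in + 1) per filter (weights + bias)."""
    return (kernel * kernel * c_in + 1) * c_out

def batchnorm_params(channels):
    """Trainable BatchNorm parameters: one scale (gamma) and one shift (beta) per channel."""
    return 2 * channels

# Layer shapes below are assumptions consistent with the table:
# Conv1: 5x5 kernel, 1 -> 8 channels; Conv2: 3x3, 8 -> 16; Conv3: 3x3, 16 -> 32.
print(conv_params(5, 1, 8))    # Conv1 -> 208
print(batchnorm_params(8))     # BaN1  -> 16
print(conv_params(3, 8, 16))   # Conv2 -> 1168
print(batchnorm_params(16))    # BaN2  -> 32
print(conv_params(3, 16, 32))  # Conv3 -> 4640
print(batchnorm_params(32))    # BaN3  -> 64
```

Only the fully connected layer changes between the five models, since a smaller input image leaves fewer features for the FC layer after the convolutional stack.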
Figure 6. Learning accuracy for the three configurations of the TactNet4 model: (a) training; (b) validation.
Figure 7. Comparison of the performance, number of trainable parameters, and FLOPs in the convolutional layers.
Figure 8. Implementation flow.
Accuracy results for 10 runs on Model 2, Fold 4.
| Trials | Accuracy (%) |
|---|---|
| 1 | 96.36 |
| 2 | 92.73 |
| 3 | 94.55 |
| 4 | 91.82 |
| 5 | 97.27 |
| 6 | 93.64 |
| 7 | 92.73 |
| 8 | 95.45 |
| 9 | 96.36 |
| 10 | 92.73 |
| Average ± Stdev | 94.36 ± 1.90 |
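The reported average and spread can be reproduced directly from the ten trial accuracies; a quick check using the sample standard deviation (n − 1 denominator):

```python
from statistics import mean, stdev  # stdev uses the sample (n-1) formula

trials = [96.36, 92.73, 94.55, 91.82, 97.27,
          93.64, 92.73, 95.45, 96.36, 92.73]

avg = mean(trials)
sd = stdev(trials)
print(f"{avg:.2f} ± {sd:.3f}%")  # 94.36 ± 1.904%
```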
Comparison of inference time (ms) between models.
| Hardware | Software | Model 1 | Model 2 | Model 3 |
|---|---|---|---|---|
| Jetson TX2 | TensorRT | 5.5597 | 5.2905 | 5.919 |
| Jetson TX2 | TF | 6.2943 | 5.4691 | 5.946 |
| Jetson TX2 | TFLite | 1.3384 | 1.2181 | 1.2445 |
| Core i7 | MATLAB | 3.245 | 2.6139 | 2.4715 |
| Movidius NCS2 | OpenVINO | 1.9 | 1.9 | 1.86 |
| Raspberry Pi4 | TFLite | 1.615 | 1.473 | 1.21 |
Power consumption.
| Hardware | Software | Static Current (mA) | Total Current (mA) | Voltage (V) | Static Power (mW) | Total Power (mW) | Dynamic Power (mW) |
|---|---|---|---|---|---|---|---|
| Jetson TX2 | TensorRT | 8 | 16 | 19.072 | 152 | 305 | 153 |
| Jetson TX2 | TF | 8 | 16 | 19.072 | 152 | 305 | 153 |
| Movidius NCS2 | OpenVINO | - | 160 | 5 | - | 800 | 800 |
| Raspberry Pi4 | TFLite | 560 | 700 | 5 | 2800 | 3500 | 700 |
Energy consumption (μJ).
| Hardware | Software | Model 1 | Model 2 | Model 3 |
|---|---|---|---|---|
| Jetson TX2 | TensorRT | 850.6341 | 809.4465 | 905.607 |
| Jetson TX2 | TF | 963.0279 | 836.7723 | 909.738 |
| Movidius NCS2 | OpenVINO | 1520 | 1520 | 1488 |
| Raspberry Pi4 | TFLite | 1130.5 | 1031.1 | 847 |
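The energy figures are consistent with energy = dynamic power × inference time, where mW × ms yields μJ. A quick cross-check combining values from the power and timing tables above:

```python
def energy_uj(dynamic_power_mw, inference_time_ms):
    """Energy in microjoules: mW * ms = uJ."""
    return dynamic_power_mw * inference_time_ms

# Cross-check against reported energy values (dynamic power from the power
# table, inference time from the timing table):
print(energy_uj(153, 5.5597))  # Jetson TX2 / TensorRT, Model 1: reported 850.6341
print(energy_uj(800, 1.9))     # Movidius NCS2 / OpenVINO, Model 1: reported 1520
print(energy_uj(700, 1.21))    # Raspberry Pi4 / TFLite, Model 3: reported 847
```

This also explains why the Jetson TX2 is the most energy-efficient platform despite not having the lowest power draw: its dynamic power (153 mW) is far below the Movidius (800 mW) and Raspberry Pi (700 mW), which outweighs its longer TensorRT inference times.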