Ghazal Rouhafzay, Ana-Maria Cretu.
Abstract
Drawing inspiration from human haptic exploration of objects, this work proposes a novel framework for robotic tactile object recognition in which visual information, in the form of a set of visually interesting points, guides the process of tactile data acquisition. Neuroscience research confirms that, for object recognition, humans integrate cutaneous data sensed in response to surface changes with data from joints, muscles, and bones (kinesthetic cues). Psychological studies further demonstrate that humans tend to follow object contours to perceive global shape, which leads to object recognition. In line with these findings, a series of contours is determined around a set of 24 virtual objects, from which bimodal tactile data (kinesthetic and cutaneous) are obtained sequentially, with the size of the sensor surface adapted to each object's geometry. A virtual Force Sensing Resistor (FSR) array is employed to capture cutaneous cues. Two methods for sequential data classification are then implemented: Convolutional Neural Networks (CNN) and conventional classifiers, namely support vector machines (SVM) and k-nearest neighbors (kNN). For the conventional classifiers, the contourlet transform is exploited to extract features from tactile images. For the CNN approach, two networks are trained for cutaneous and kinesthetic data, and a novel hybrid decision-making strategy is proposed for object recognition. The proposed framework is tested both for contours determined blindly (randomly determined contours of objects) and for contours determined using a model of visual attention. The trained classifiers are evaluated on 4560 new sequential tactile samples; the CNN trained on tactile data from object contours selected by the model of visual attention yields the highest accuracy among the implemented approaches, 98.97%.
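The abstract mentions a hybrid decision-making strategy that combines the outputs of the cutaneous CNN and the kinesthetic CNN, but does not specify the fusion rule. A minimal NumPy sketch of one common late-fusion choice, weighted averaging of the two networks' class-probability vectors, is shown below; the function name, the weight `w`, and the averaging rule itself are illustrative assumptions, not the paper's method:

```python
import numpy as np

def fuse_predictions(p_cutaneous, p_kinesthetic, w=0.5):
    """Late-fuse class probabilities from two modality-specific classifiers.

    Assumption: weighted averaging is used here only as a plausible
    stand-in for the paper's (unspecified) hybrid decision strategy.
    Returns the fused class index and the fused probability vector.
    """
    p_cut = np.asarray(p_cutaneous, dtype=float)
    p_kin = np.asarray(p_kinesthetic, dtype=float)
    fused = w * p_cut + (1.0 - w) * p_kin  # convex combination keeps probabilities normalized
    return int(np.argmax(fused)), fused

# Toy 3-class example: each modality outputs a softmax-like vector
cls, fused = fuse_predictions([0.2, 0.7, 0.1], [0.6, 0.3, 0.1])
```

With equal weights, the fused vector here is [0.4, 0.5, 0.1], so the second class wins even though the kinesthetic stream alone would have picked the first.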
Keywords: Convolutional Neural Network; Haptic exploration; tactile object recognition; visual attention; visuo-haptic interaction
Year: 2019 PMID: 30934907 PMCID: PMC6480322 DOI: 10.3390/s19071534
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1Framework for sequential tactile object recognition.
Figure 2Adaptive modification of tactile sensor size.
Figure 3Adaptive modification of tactile sensor size.
Figure 4The nine channels contributing to the model of visual attention.
Figure 5(a) Examples of blind contour following paths for model of Plane. (b) Examples of guided contour following paths by visually interesting points for model of Plane.
Figure 6(a) Example of six consecutive frames of the tactile video captured from the model of Plane. (b) Example of sequence of normal vectors.
Figure 7The two convolutional neural network (CNN) structures and the decision on output class.
Figure 8Process of object classification using conventional classifiers.
Figure 9Objects used for experiments.
Classification accuracy of test data.
| Classifier | Contour Following Guided by Visually Interesting Points | Blind Contour Following |
|---|---|---|
| CNNs | 98.97% | 83.29% |
| kNN | 86.07% | 73.11% |
| SVM | 88.44% | 77.21% |
Figure 10Confusion matrices.