Abstract

This article presents an artificial-intelligence approach to object recognition by touch using a tactile sensor with structural characteristics similar to those of the human hand. The recognition system uses a tactile glove in which small tactile sensors are distributed across the glove so that its contact points resemble those of the human hand. When the glove grasps an object, the sensor readings form a tactile image. These images were used to compare the recognition performance of two techniques: the Bag of Words (BoW) and the Convolutional Neural Network (CNN). In the experiment, the tactile glove grasped 20 different objects. For the BoW technique, the Scale Invariant Feature Transform (SIFT) was used for feature extraction, and the resulting features were classified with K-Nearest Neighbors (KNN). The results show that the accuracies of BoW and CNN are 70.80% and 97.20%, respectively, so the CNN achieved about 26.40 percentage points higher accuracy. These results clearly show that the CNN outperforms BoW and is therefore the more suitable technique for tactile glove recognition, which can be applied to the recognition systems of humanoid robots.
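For readers unfamiliar with the baseline pipeline, the following is a minimal sketch of a BoW approach of the kind the abstract describes: SIFT descriptors are extracted from each tactile image, clustered into a visual vocabulary, each image is encoded as a word-occurrence histogram, and a KNN classifier assigns the object label. The vocabulary size, the number of neighbours, and the image format are illustrative assumptions, not the paper's actual settings.

```python
# Sketch of a Bag-of-Words (SIFT) + KNN classifier for tactile images.
# VOCAB_SIZE and K are assumed values; the paper's exact configuration
# is not given in the abstract.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

VOCAB_SIZE = 100   # assumed number of visual words
K = 5              # assumed number of neighbours for KNN

sift = cv2.SIFT_create()

def sift_descriptors(image):
    """Extract SIFT descriptors from one grayscale tactile image."""
    _, descriptors = sift.detectAndCompute(image, None)
    return descriptors if descriptors is not None else np.empty((0, 128), np.float32)

def bow_histogram(descriptors, vocabulary):
    """Encode an image as a normalized histogram of visual-word counts."""
    hist = np.zeros(VOCAB_SIZE, dtype=np.float32)
    if len(descriptors):
        for word in vocabulary.predict(descriptors):
            hist[word] += 1
        hist /= hist.sum()
    return hist

def train_bow_knn(train_images, train_labels):
    """Cluster descriptors into a vocabulary, then fit KNN on BoW histograms."""
    all_desc = np.vstack([sift_descriptors(img) for img in train_images])
    vocabulary = KMeans(n_clusters=VOCAB_SIZE, n_init=10).fit(all_desc)
    features = np.array([bow_histogram(sift_descriptors(img), vocabulary)
                         for img in train_images])
    knn = KNeighborsClassifier(n_neighbors=K).fit(features, train_labels)
    return vocabulary, knn

def predict(image, vocabulary, knn):
    """Classify a new tactile image into one of the 20 object classes."""
    feature = bow_histogram(sift_descriptors(image), vocabulary)
    return knn.predict(feature.reshape(1, -1))[0]
```

The CNN approach replaces this hand-engineered feature/vocabulary stage with learned convolutional features trained end to end on the tactile images, which is consistent with the higher accuracy reported in the abstract.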
