Abstract

Existing ROS packages for object detection are based on traditional template-matching techniques. They can identify only a few classes and cannot be adapted to detect new ones. These methods also fail under changes in lighting, shape, size, and orientation. Deep learning-based object detection overcomes these drawbacks and is more accurate than traditional methods. Therefore, few-shot object detection is attempted with the help of the TensorFlow object detection API. The aim of this research is to integrate few-shot object detection into a robotic application. This detection method learns to detect objects from only a few examples per class, which removes the need for large datasets. In addition, a ROS package is developed that can readily use object detection models from the TensorFlow object detection API, so the integration can be deployed in any robotic application built on the ROS framework.

Keywords: Mobile robots, Robot Operating System (ROS), Gazebo, Raspberry Pi, Arduino, Transfer learning, Few-shot object detection, TensorFlow object detection API
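The abstract describes wrapping a TensorFlow object detection API model in a ROS node so that any ROS-based robot can consume the detections. The sketch below illustrates one plausible way such a wrapper could look; it is not the paper's actual package. The topic names, the `SAVED_MODEL_PATH`, and the JSON output message are assumptions made for illustration, and the inference call follows the standard pattern for a model exported from the TensorFlow object detection API.

```python
#!/usr/bin/env python3
# Minimal sketch of a ROS node wrapping a TensorFlow Object Detection API model
# (e.g. one fine-tuned from a few examples per class). Paths and topic names
# below are illustrative assumptions, not the paper's actual configuration.
import json

import numpy as np
import rospy
import tensorflow as tf
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from std_msgs.msg import String

SAVED_MODEL_PATH = "/path/to/exported_model/saved_model"  # assumed export location
SCORE_THRESHOLD = 0.5


class FewShotDetectorNode:
    def __init__(self):
        self.bridge = CvBridge()
        # Load the exported detection model once at start-up.
        self.detect_fn = tf.saved_model.load(SAVED_MODEL_PATH)
        self.pub = rospy.Publisher("detections", String, queue_size=1)
        rospy.Subscriber("camera/image_raw", Image, self.image_callback, queue_size=1)

    def image_callback(self, msg):
        # Convert the ROS image to a NumPy array and add a batch dimension.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="rgb8")
        input_tensor = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)
        detections = self.detect_fn(input_tensor)

        boxes = detections["detection_boxes"][0].numpy()
        scores = detections["detection_scores"][0].numpy()
        classes = detections["detection_classes"][0].numpy().astype(int)

        # Keep only confident detections and publish them as a JSON string.
        results = [
            {"class_id": int(c), "score": float(s), "box": b.tolist()}
            for b, s, c in zip(boxes, scores, classes)
            if s >= SCORE_THRESHOLD
        ]
        self.pub.publish(String(data=json.dumps(results)))


if __name__ == "__main__":
    rospy.init_node("few_shot_detector")
    FewShotDetectorNode()
    rospy.spin()
```

In this sketch the node publishes detections as a JSON-encoded `std_msgs/String` for simplicity; a production package would more likely use a structured message type such as `vision_msgs/Detection2DArray`.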
