Abstract

This paper proposes a vision-based autonomous move-to-grasp approach for a compact mobile manipulator operating in low, confined environments. Visual information about a specified object, marked with a radial symbol and an overhead colour block, is extracted from two CMOS cameras on an embedded platform. The motion of the mobile platform and the posture of the manipulator are adjusted continuously by vision-based control, driving the mobile manipulator toward the object. Once the mobile manipulator is sufficiently close to the object, only the manipulator moves to grasp it, using incremental motion in which the centre of the end-effector's tip follows a Bezier curve. The effectiveness of the proposed approach is verified by experiments.
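The incremental motion along a Bezier curve can be sketched as below. This is a minimal illustration under assumptions, not the authors' implementation: the cubic degree, control-point values, step count, and function names are all hypothetical.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    s = 1.0 - t
    return (s**3) * p0 + 3 * (s**2) * t * p1 + 3 * s * (t**2) * p2 + (t**3) * p3

def incremental_waypoints(p0, p1, p2, p3, steps=20):
    """Sample waypoints so the end-effector tip advances along the curve."""
    return [cubic_bezier(p0, p1, p2, p3, t) for t in np.linspace(0.0, 1.0, steps + 1)]

# Hypothetical grasp approach: tip starts at p0, object sits at p3,
# and p1, p2 shape the approach direction (values are illustrative).
p0 = np.array([0.00, 0.10, 0.30])   # current tip position (m)
p1 = np.array([0.10, 0.10, 0.25])
p2 = np.array([0.20, 0.02, 0.10])
p3 = np.array([0.25, 0.00, 0.05])   # object position (m)
path = incremental_waypoints(p0, p1, p2, p3, steps=10)
```

Each sampled waypoint would then be issued as an incremental target to the manipulator controller, so the tip centre tracks the curve from the current pose to the grasp point.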

Highlights

  • The development of autonomous mobile robots operating in unstructured and natural environments has been studied extensively in robotics research

  • Once the object is detected by recognizing the radial-symbol feature in the image from CMOS camera 1, the mobile manipulator is still relatively far from the object, and the extracted information is used to guide the motion of the mobile platform

  • Based on the four corners extracted from the image of CMOS camera 2, the position of the centre point P of the colour block relative to the camera frame OcXcYcZc is obtained from the physical size of the rectangular colour block, the camera model [22] (see Eq. (2)), and the shape-constraint-based pose measurement proposed by Xu et al. in [18]
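A greatly simplified version of recovering the block centre from its four image corners can be sketched as follows. This stands in for, and is not, the shape-constraint method of [18]: it assumes a roughly fronto-parallel block and a pinhole model, and the intrinsic values are illustrative assumptions.

```python
import numpy as np

# Assumed intrinsics for CMOS camera 2 (illustrative values only).
FX = FY = 600.0          # focal lengths in pixels
CX, CY = 320.0, 240.0    # principal point in pixels

def block_centre_position(corners_px, block_width_m):
    """Estimate the colour-block centre P in the camera frame OcXcYcZc.

    corners_px: 4x2 array of image corners (u, v), ordered so that
    corners 0-1 and 3-2 are the two edges of known physical width.
    Simplification: the block is assumed roughly fronto-parallel, so
    depth follows from the apparent width via Z = f * W / w.
    """
    corners = np.asarray(corners_px, dtype=float)
    u_c, v_c = corners.mean(axis=0)
    # Apparent width in pixels: mean length of the two width edges.
    w_px = 0.5 * (np.linalg.norm(corners[1] - corners[0]) +
                  np.linalg.norm(corners[2] - corners[3]))
    z = FX * block_width_m / w_px
    # Back-project the image centre to the camera frame.
    x = (u_c - CX) * z / FX
    y = (v_c - CY) * z / FY
    return np.array([x, y, z])
```

The returned (X, Y, Z) would then be transformed from the camera frame into the manipulator base frame for grasp planning.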


Summary

Introduction

The development of autonomous mobile robots operating in unstructured and natural environments has been studied extensively in robotics research. Yamamoto and Yun present an algorithm that controls the mobile platform so that the manipulator is maintained at a configuration maximizing the manipulability measure, and simulation and experimental results verify its effectiveness [2]. Seelinger et al. [10] develop a high-precision visual control method, mobile camera-space manipulation, for unmanned planetary exploration rovers. It achieves a high level of positioning precision and is robust to model errors and measurement uncertainties. With the increasing complexity of tasks and environments, miniaturized mobile robots with manipulation capability are required in low, confined environments. In this case, image capture through CMOS cameras combined with embedded vision-based control provides a better solution.
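The manipulability measure mentioned above is commonly Yoshikawa's w = sqrt(det(J J^T)), computed from the manipulator Jacobian J. A minimal sketch for a two-link planar arm follows; the link lengths and joint angles are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def manipulability(J):
    """Yoshikawa manipulability measure w = sqrt(det(J J^T))."""
    det = np.linalg.det(J @ J.T)
    return float(np.sqrt(max(det, 0.0)))  # clamp tiny negative round-off

def planar_2link_jacobian(theta1, theta2, l1=0.3, l2=0.25):
    """Position Jacobian of a 2-link planar arm (illustrative geometry)."""
    s1, c1 = np.sin(theta1), np.cos(theta1)
    s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])
```

For this arm, w = l1 * l2 * |sin(theta2)|: it peaks when the elbow is bent 90 degrees and vanishes at the fully stretched singular configuration, which is the quantity a platform-positioning controller like [2] tries to keep large.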

Problem Description
The Autonomous Move‐to‐Grasp Approach Based on Embedded Vision
Information Extraction Based on a Radial Symbol
Information Extraction Based on Rectangle Colour Block
Motion Control of Mobile Platform and Posture Adjustment of the Manipulator
Vision‐Based Manipulator Grasp Control
Experiments
Findings
Conclusions