Abstract
This paper presents a software architecture for robots that autonomously provide manipulation services in human environments. In an unstructured human environment, a service robot often needs to perform tasks without human intervention and without prior knowledge of the tasks and environments. Autonomous task execution requires varied processes such as perceiving the environment, representing knowledge, reasoning over that knowledge, and planning tasks and motions. While developing each of these processes is important, integrating them into a working system for deployment is equally important, as a robotic system brings tangible outcomes only when it works in the real world. However, such an architecture has rarely been realized in the literature, owing to the difficulties of full integration, deployment, and understanding high-level goals without human intervention. In this work, we propose a software architecture that integrates the components necessary for a real robot to perform tasks without human intervention. Our architecture is composed of deep learning based perception, symbolic reasoning, AI task planning, and geometric motion planning. We implement a deep neural network that produces information about the environment, which is then stored in a knowledge base. We implement a reasoner that processes the knowledge and passes the result to the task planner. We present our implementation of the symbolic task planner, which generates a sequence of motion predicates. We implement an interface that computes the geometric information necessary for motion planning to execute the symbolic task plans. We describe the deployment of the architecture through the results of lab tests and a public demonstration. The architecture is developed on the Robot Operating System (ROS) and is thus compatible with any ROS-based robot capable of object manipulation and mobile navigation. We deploy the architecture on two different robot platforms to demonstrate this compatibility.
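The abstract describes a pipeline in which perception output is stored in a knowledge base, a reasoner derives new facts, and a symbolic task planner emits a sequence of motion predicates. The following is a minimal, hypothetical sketch of that data flow; all class names, predicate names, and rules are illustrative assumptions, not the paper's actual implementation or API.

```python
# Hypothetical sketch of the perception -> knowledge base -> reasoner ->
# task planner flow described in the abstract. All names and rules are
# illustrative assumptions, not the paper's actual implementation.

class KnowledgeBase:
    """Stores symbolic facts, e.g. those asserted by a perception module."""
    def __init__(self):
        self.facts = set()

    def assert_fact(self, fact):
        self.facts.add(fact)

    def query(self, predicate):
        return [f for f in self.facts if f[0] == predicate]


def reason(kb):
    """Toy reasoning rule: an object is graspable if it is detected
    and not occluded."""
    occluded = {f[1] for f in kb.query("occluded")}
    for _, obj in kb.query("detected"):
        if obj not in occluded:
            kb.assert_fact(("graspable", obj))


def plan_task(kb, target):
    """Toy task planner: emit a sequence of motion predicates that a
    geometric motion planner could then ground and execute."""
    if ("graspable", target) not in kb.facts:
        return []
    return [("move_to", target), ("grasp", target), ("place", target, "goal")]


kb = KnowledgeBase()
# Facts a perception network might assert after processing a camera frame.
kb.assert_fact(("detected", "cup"))
kb.assert_fact(("detected", "plate"))
kb.assert_fact(("occluded", "plate"))

reason(kb)
plan = plan_task(kb, "cup")
print(plan)  # [('move_to', 'cup'), ('grasp', 'cup'), ('place', 'cup', 'goal')]
```

In the actual architecture, each motion predicate in the plan would be handed to an interface that computes the geometric information (poses, trajectories) needed for motion planning.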
Highlights
There have long been extensive research efforts on robotic manipulation of objects.
Since our goal is to develop an architecture that does not depend on particular hardware platforms, MoveIt is appropriate as it only needs a model of the robot used (i.e., a URDF).
We develop the proposed software architecture to provide manipulation services where the service domain is not tied to a particular environment.
Summary
There have long been extensive research efforts on robotic manipulation of objects. The areas of research include hand and gripper design [1], grasp planning [2], task planning [3], motion planning [4], control [5], and perception [6]. Object manipulation has become one of the most successful applications in robotics; examples are Pickit [7] for perception, MoveIt for motion planning [8], and GraspIt for grasp planning [9]. The success of the DARPA Robotics Challenge (DRC) 2015 shows a bright future for robots that can work in human environments. Several reports from the challenge [10]–[12] indicate that interventions by human operators were necessary and caused major problems (e.g., falls, resets, large delays, task failures). Atkeson et al. [10] conclude that greater autonomy, rather than human-moderated operation, is expected in the future.