Abstract

Advancements in robotics have led to an ever-growing repertoire of software capabilities (e.g., recognition, mapping, and object manipulation). However, as robotic capabilities grow, so does the complexity of operating and interacting with such robots, whether through speech, gesture, scripting, or programming. Language-based communication can offer users the ability to work with physically and computationally complex robots without diminishing the robots' inherent capabilities. However, it remains an open question how to build common ground between natural language and goal-directed robot actions, particularly in a way that scales with the growth of robot capabilities. We examine the use of semantic frames, a concept from linguistics that describes scenes being acted out, as a conceptual stepping stone between natural language and robot action. We examine the scalability of this approach through the development of RoboFrameNet, a generic language-to-action pipeline for ROS (the Robot Operating System) that abstracts verbs and their dependents into semantic frames, then grounds these frames into actions. We demonstrate the framework through experiments on the PR2 and Turtlebot robot platforms and consider the future scalability of the approach.
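
As a rough illustration of the frame-based pipeline the abstract describes, the sketch below shows how a verb and its dependents might be abstracted into a semantic frame and then grounded into a robot action. All names here (SemanticFrame, FRAME_LEXICON, parse_to_frame, ground_frame) and the tiny lexicon are hypothetical and are not drawn from the actual RoboFrameNet implementation; a real system would dispatch the grounded frame to a ROS action server rather than return a string.

```python
# Hypothetical sketch of a verb -> semantic frame -> robot action pipeline.
# Names and the lexicon are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class SemanticFrame:
    """A scene description: a frame name plus its labeled frame elements."""
    name: str
    elements: dict = field(default_factory=dict)  # e.g. {"goal": "the kitchen"}

# A tiny stand-in for a FrameNet-style verb lexicon: each verb evokes a frame
# and maps its syntactic dependents onto frame-element roles.
FRAME_LEXICON = {
    "go":    ("self_motion",  {"to": "goal"}),
    "drive": ("self_motion",  {"to": "goal"}),
    "grab":  ("manipulation", {"object": "theme"}),
}

def parse_to_frame(verb: str, dependents: dict) -> SemanticFrame:
    """Abstract a verb and its dependents into a semantic frame."""
    frame_name, role_map = FRAME_LEXICON[verb]
    elements = {role_map[dep]: value
                for dep, value in dependents.items() if dep in role_map}
    return SemanticFrame(frame_name, elements)

def ground_frame(frame: SemanticFrame) -> str:
    """Ground a frame into a (stub) robot action."""
    if frame.name == "self_motion":
        return f"navigate_to({frame.elements.get('goal')})"
    if frame.name == "manipulation":
        return f"pick_up({frame.elements.get('theme')})"
    raise ValueError(f"No action grounding for frame {frame.name!r}")

if __name__ == "__main__":
    frame = parse_to_frame("go", {"to": "the kitchen"})
    print(frame)                # SemanticFrame(name='self_motion', elements={'goal': 'the kitchen'})
    print(ground_frame(frame))  # navigate_to(the kitchen)
```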
