The challenge in robotics is to enable robots to transition from visual perception and language understanding to performing tasks such as grasping and assembling objects, bridging the gap from “seeing” and “hearing” to “doing”. In this work, we propose Ground4Act, a two-stage approach for collaborative pushing and grasping in clutter using a visual-language model. In the grounding stage, Ground4Act extracts target features from multi-modal data via visual grounding. In the action stage, it embeds a collaborative pushing and grasping framework that generates each action's position and direction. Specifically, we propose a DQN-based reinforcement learning pushing policy that uses RGBD images as the state space to determine the pixel-level coordinates and direction of the push action. In addition, a least squares-based linear fitting grasping policy takes the target mask from the grounding stage as input to achieve efficient grasping. Simulations and real-world experiments demonstrate Ground4Act's superior performance. The simulation suite, source code, and trained models will be made publicly available.
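To make the grasping policy concrete, the following is a minimal sketch (not the authors' released code) of how a least squares-based linear fit over a target mask could yield a grasp position and angle: the grasp center is taken as the mask centroid and the grasp angle perpendicular to the fitted object axis. The function name, the perpendicular-grasp convention, and the axis-selection guard are illustrative assumptions.

```python
import numpy as np

def grasp_from_mask(mask: np.ndarray):
    """Hypothetical helper: mask is an HxW boolean array marking the target object."""
    ys, xs = np.nonzero(mask)                  # pixel coordinates of the target
    cx, cy = xs.mean(), ys.mean()              # grasp center = mask centroid (assumption)

    # Least-squares line fit over the mask pixels; swap axes when the object
    # is near-vertical so the fit stays well conditioned.
    if xs.std() > ys.std():
        a, _ = np.polyfit(xs, ys, deg=1)       # y = a*x + b
        axis_angle = np.arctan(a)              # object's principal direction
    else:
        a, _ = np.polyfit(ys, xs, deg=1)       # x = a*y + b
        axis_angle = np.pi / 2 - np.arctan(a)

    # Grasp perpendicular to the fitted object axis (illustrative choice).
    grasp_angle = axis_angle + np.pi / 2
    return (cx, cy), grasp_angle

# Example: a thin diagonal bar as the target mask
mask = np.zeros((100, 100), dtype=bool)
for i in range(20, 80):
    mask[i, i - 2:i + 3] = True
center, angle = grasp_from_mask(mask)
print(center, np.degrees(angle))
```

In this sketch the pixel-level grasp pose would still need to be projected into the robot frame using the depth channel and camera calibration, which the abstract leaves to the full paper.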