Animals have evolved to adapt to complex and uncertain environments, acquiring locomotion skills for diverse surroundings. To endow a quadruped robot with similar animal-like locomotion ability, this paper proposes a learning algorithm that combines deep reinforcement learning (DRL) with a rhythm controller based on cosine oscillators. Two cosine oscillators are assigned to each leg, one at the hip joint and one at the knee joint, so that eight oscillators together form the controller that generates the robot's locomotion rhythm during movement. The coupling between the cosine oscillators is realized through phase differences, which is simpler and easier to implement than directly modeling the complex coupling relationships between different joints. DRL is used to learn the controller parameters. In the reward function design, the challenge of terrain adaptation is addressed without relying on complex camera-based vision processing; instead, only proprioceptive information is used, and a state estimator is introduced to obtain the robot's posture and, ultimately, the foot-end coordinates. Experiments are carried out in CoppeliaSim under flat, uphill, and downhill conditions. The results show that the robot successfully accomplishes all of these tasks and, with the designed reward function, keeps its pitch, yaw, and roll angles very small, indicating that the robot walks stably. When the robot is then transferred to a new, previously unencountered scene, it still fulfills the task, demonstrating the effectiveness and robustness of the proposed method.
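The oscillator structure described above (one cosine oscillator per hip and knee joint, eight in total, coupled only through fixed phase differences) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the frequency, amplitudes, trot-gait leg phases, and hip-knee phase offset below are all assumed values chosen for illustration.

```python
import math

# Assumed parameters (illustrative only, not values from the paper).
FREQ_HZ = 1.5          # assumed stepping frequency
HIP_AMP = 0.4          # assumed hip swing amplitude (rad)
KNEE_AMP = 0.6         # assumed knee swing amplitude (rad)
# Assumed trot gait: diagonal leg pairs in phase, opposite pairs in antiphase.
LEG_PHASE = {"FL": 0.0, "RR": 0.0, "FR": math.pi, "RL": math.pi}
KNEE_OFFSET = math.pi / 2  # assumed hip-knee phase difference within one leg

def joint_targets(t: float) -> dict:
    """Return target angles for all eight joints (4 hips + 4 knees) at time t.

    Each joint follows a cosine oscillator; inter-joint coupling is expressed
    purely as a phase difference added to the common oscillator argument.
    """
    w = 2.0 * math.pi * FREQ_HZ
    targets = {}
    for leg, phi in LEG_PHASE.items():
        targets[f"{leg}_hip"] = HIP_AMP * math.cos(w * t + phi)
        targets[f"{leg}_knee"] = KNEE_AMP * math.cos(w * t + phi + KNEE_OFFSET)
    return targets
```

In the paper's framework, parameters like these (amplitudes, frequency, phase differences) are what the DRL agent would tune, while the oscillators themselves keep the joint trajectories smooth and rhythmic.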