Abstract
Most successful dynamic walkers today either use decoupled forward and sideways models or use locally defined Euler-angle formulations to model the combined 3D dynamics. While decoupling may lead to stability conflicts, local parametrizations of the orientation dynamics trade off dynamism and shorten the stabilizable range of motion. Through a rigorous empirical study of the 3D pendulum stabilization problem, we show that Euler-parametrization-based orientation control in 3D requires greater input to stabilize on average, not just in large-error situations. To resolve these issues, we present novel geometric bipedal robot models and design suitable geometric controllers by extending commonly used nonlinear control design techniques to non-Euclidean Lie-group manifolds. The dynamics and control actions thus obtained are very compact, singularity-free, and, more importantly, they naturally capture the inherent coupling between the rotational degrees of freedom (a code sketch of such a singularity-free tracking error appears below).

The presented models are the fully actuated Reaction Mass Biped (RMB) and the geometric Cassie robot model, both evolving on SO(3) product manifolds. The RMB model uniquely allows for modeling the torso with variable inertia. Additionally, in the Cassie robot model the dynamics are augmented to capture the transverse-plane dynamics generated while riding a pair of Hovershoes. Further, the RMB dynamics are also discretized using variational principles. Finally, suitable geometric variational integrators are developed for numerical integration of the RMB dynamics that preserve its manifold structure and conservation properties over long time scales. On the control design front, for fully actuated models like the RMB, we define geometric motion plans for walking on straight and curved paths, along with suitable trajectory-tracking controllers that afford almost-global stability guarantees. Motivated by these promising theoretical results, we leverage the geometric model of Cassie to plan and control highly dynamic behaviors, such as turning in place, on a 20-degree-of-freedom bipedal robot, Cassie, standing on a pair of Hovershoes.

Learning for Dynamic Legged Robots

Deep learning has been widely used to develop smooth control policies for robotic systems. In the field of bipedal locomotion, gait libraries are a powerful tool to update gait parameters step by step via interpolation and thereby render the bipedal system approximately neutrally stable. A discrete set of optimal gaits in the gait library is generated offline through nonlinear trajectory optimization using the full-order hybrid robot model, satisfying all the associated unilateral ground-contact and friction constraints, joint limits, and motor limits. However, each gait is only locally stable around its specific gait-parameter choice. Moreover, the interpolation time complexity grows linearly, and the gait-library space complexity grows exponentially, with the number of gait parameters (see the interpolation sketch below). Considering these factors, we combine model-based gait-library design and deep learning to yield a near constant-time and constant-memory policy for fast, stable, and robust bipedal robot locomotion. To achieve this, we design a custom network, called Gait-Net, using an autoencoder-based architecture to jointly learn both a gait-parameter-to-gait mapping and a gait-parameter reconstruction mapping. The reconstruction mapping is used to assess the quality of the learned gait.
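As an illustration of the geometric control machinery described above, the following minimal sketch computes the attitude and rate tracking errors defined directly on SO(3), widely used in the geometric control literature, together with a simple geometric PD torque. The function names, gains, and the PD form are illustrative assumptions, not the exact controllers developed in this work.

```python
import numpy as np

def vee(W):
    """Recover the vector in R^3 from a skew-symmetric matrix in so(3)."""
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def geometric_pd_torque(R, omega, R_d, omega_d, kR=8.0, kw=2.0):
    """Attitude/rate tracking errors on SO(3) and a geometric PD torque.

    R, R_d    : 3x3 rotation matrices (current / desired attitude)
    omega(_d) : body-frame angular velocities
    The errors live on the manifold itself, so no Euler-angle
    singularities or chart boundaries limit the range of motion.
    """
    e_R = 0.5 * vee(R_d.T @ R - R.T @ R_d)  # attitude error vector
    e_w = omega - R.T @ R_d @ omega_d       # rate error in the body frame
    return -kR * e_R - kw * e_w             # feedforward terms omitted
```

Because e_R vanishes at R = R_d and only at a measure-zero set of additional critical points inherent to the topology of SO(3), controllers built on such errors can admit almost-global, rather than merely local, stability guarantees.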
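To make the gait-library complexity trade-off concrete, here is a hedged sketch of multilinear interpolation over a regular grid of precomputed gaits; the function name, grid layout, and gait-vector size are assumptions for illustration. The library must store a gait at every grid corner, which is why its memory footprint grows exponentially with the number of gait parameters and motivates a learned constant-memory policy.

```python
import itertools
import numpy as np

def interpolate_gait(library, grid, query):
    """Multilinear interpolation over a regular grid of optimized gaits.

    library : dict mapping a grid-index tuple to a gait vector
    grid    : one sorted 1-D array of values per gait parameter
    query   : desired gait-parameter vector
    """
    idx, frac = [], []
    for g, q in zip(grid, query):
        i = int(np.clip(np.searchsorted(g, q) - 1, 0, len(g) - 2))
        idx.append(i)
        frac.append((q - g[i]) / (g[i + 1] - g[i]))  # position inside the cell
    gait = 0.0
    for corner in itertools.product((0, 1), repeat=len(grid)):
        w = np.prod([f if c else 1.0 - f for c, f in zip(corner, frac)])
        gait = gait + w * library[tuple(i + c for i, c in zip(idx, corner))]
    return gait

# Example: two gait parameters (say, step length and step width).
grid = [np.linspace(0.0, 0.4, 5), np.linspace(0.1, 0.3, 5)]
library = {(i, j): np.random.randn(140)  # 140 = stand-in gait-vector size
           for i in range(5) for j in range(5)}
blended = interpolate_gait(library, grid, np.array([0.23, 0.18]))
```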
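The autoencoder-style structure described above might be sketched as follows; GaitNetSketch, the layer sizes, and the 140-dimensional gait output are hypothetical placeholders, not the actual Gait-Net architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaitNetSketch(nn.Module):
    """Jointly learns a gait-parameter-to-gait mapping and a
    gait-parameter reconstruction mapping from a shared encoding."""

    def __init__(self, n_params=4, latent=32, gait_dim=140):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_params, 64), nn.ReLU(),
            nn.Linear(64, latent), nn.ReLU())
        self.gait_head = nn.Linear(latent, gait_dim)   # predicted gait
        self.recon_head = nn.Linear(latent, n_params)  # reconstructed params

    def forward(self, p):
        z = self.encoder(p)
        return self.gait_head(z), self.recon_head(z)

def gaitnet_loss(gait_pred, gait_true, p_recon, p, w_recon=0.1):
    """Fit gaits from the offline library while keeping the parameter
    reconstruction faithful; at run time, a large reconstruction error
    flags a gait-parameter request the network cannot serve well."""
    return F.mse_loss(gait_pred, gait_true) + w_recon * F.mse_loss(p_recon, p)
```

Inference is a fixed number of matrix multiplies, which is what yields near constant-time, constant-memory behavior regardless of how many gaits the offline library contained.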
The reconstruction quality can also be provided to the high-level planner to search for alternate plans that yield better gait predictions, ensuring stable and sustained locomotion. We validated Gait-Net's performance on a high-fidelity physics simulator custom-built for the bipedal robot Cassie.

A popular application of gait libraries is walking on discrete terrain, where the robot must constantly modulate its step length to step accurately on discrete footholds. In such scenarios, it is also very important to sense and estimate the distance to the next valid foothold a priori, so that the gait library can be leveraged for step-length modulation. A perception module can be developed for this task, but it must be very fast at detection and accurate at localization. The latest deep-learning-fueled advances in computer vision make this possible. However, such neural network models require large amounts of data, and generating a large dataset for every possible locomotion task is impractical. Alternatively, a graphics simulator capable of generating photo-realistic images can be used to rapidly produce synthetic datasets with the desired diversity of visual features to mimic real-world situations. We take the latter approach. A convolutional neural network, called SL-CNN, is designed to predict step length from a synthetic dataset of monocular images rendered from the robot's point of view. Further, the network architecture is customized to minimize the worst-case prediction error, keeping in mind the safety-critical nature of the task (a sketch of such a worst-case training objective follows below). Finally, the visual simulator and estimator thus developed are integrated with the physical model of the robot and the gait-library-based controller to realize autonomous planar walking in simulation.
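One plausible way to bias training toward the worst case, consistent with the safety-critical goal stated above, is a smooth maximum over per-sample step-length errors; the function and the log-sum-exp form here are an assumption for illustration, not necessarily the loss used for SL-CNN.

```python
import torch

def soft_worst_case_loss(pred, target, beta=10.0):
    """Smooth upper bound on the maximum absolute step-length error.

    As beta grows, (1/beta) * logsumexp(beta * err) approaches
    max_i |pred_i - target_i| (to within log(N)/beta), so gradient
    descent concentrates on the worst prediction in the batch rather
    than the batch average; a mis-stepped foothold means a fall, so
    the tail of the error distribution matters more than its mean.
    """
    err = (pred - target).abs()
    return torch.logsumexp(beta * err, dim=0) / beta
```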