Abstract
Robots have become widespread in daily life in recent years. Unlike conventional robots built from rigid materials, soft robots are made of stretchable, flexible materials, enabling lifelike movements that are difficult for rigid robots to achieve. Previous studies have controlled soft robots with periodic signals, which produces only repetitive motions and makes it challenging to generate motions adapted to the environment. To address this issue, control policies can be learned through deep reinforcement learning, enabling soft robots to select appropriate actions based on observations and improving their adaptability to environmental changes. In addition, because mobile robots have limited onboard resources, reducing battery consumption through low-power control is essential. Spiking neural networks (SNNs) running on neuromorphic chips enable such low-power control of soft robots. In this study, we investigated learning methods for SNNs aimed at controlling soft robots. Experiments were conducted on a caterpillar-like soft robot model based on previous studies, and the effectiveness of the learning method was evaluated.
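The abstract does not specify the neuron model used in the SNN controller. As illustrative background only, the following is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the unit most commonly used in SNNs, assuming a simple Euler discretization; the parameter values (`tau`, `v_th`, `i_in`) are hypothetical and not taken from the paper.

```python
import numpy as np

def lif_step(v, i_in, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    v      : membrane potential(s), shape (n,)
    i_in   : input current(s)
    tau    : membrane time constant (illustrative value)
    v_th   : spike threshold; the potential resets to v_reset on a spike
    """
    # Leaky integration: dv/dt = (-v + i_in) / tau
    v = v + dt / tau * (-v + i_in)
    # Emit a binary spike wherever the threshold is crossed, then reset.
    spike = v >= v_th
    v = np.where(spike, v_reset, v)
    return v, spike

# Drive a single neuron with a constant suprathreshold current
# and count the resulting spikes over 200 time steps.
v = np.zeros(1)
spikes = 0
for _ in range(200):
    v, s = lif_step(v, i_in=1.5)
    spikes += int(s[0])
```

In an SNN-based controller, layers of such neurons would map observation-driven input currents to spike trains that are decoded into motor commands; the sparse, event-driven spikes are what make neuromorphic hardware energy-efficient.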