Robots have become widespread in daily life in recent years. Unlike conventional robots made of rigid materials, soft robots are built from stretchable, flexible materials, enabling lifelike movements that are difficult for rigid robots to achieve. Previous studies have controlled soft robots with periodic signals, which produce only repetitive motions and make it difficult to generate motions adapted to the environment. To address this issue, control policies can be learned through deep reinforcement learning, allowing a soft robot to select appropriate actions based on its observations and improving its adaptability to environmental changes. In addition, because mobile robots carry limited onboard resources, battery consumption must be conserved and low-power control achieved. Spiking neural networks (SNNs) running on neuromorphic chips make such low-power control of soft robots possible. In this study, we investigated learning methods for SNNs aimed at controlling soft robots. Experiments were conducted on a caterpillar-like soft robot model based on previous studies, and the effectiveness of the learning method was evaluated.
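The SNN controllers mentioned above are composed of spiking neurons. As a minimal sketch (not the authors' implementation), the following shows a leaky integrate-and-fire (LIF) neuron, a common building block of SNNs; the function name `lif_neuron` and the parameter values (`decay`, `threshold`) are illustrative assumptions.

```python
def lif_neuron(inputs, decay=0.9, threshold=1.0):
    """Simulate one LIF neuron over a sequence of input currents.

    The membrane potential leaks by `decay` each step and accumulates
    the input; when it crosses `threshold`, the neuron emits a spike (1)
    and the potential resets to zero. Otherwise it emits 0.
    """
    v = 0.0
    spikes = []
    for current in inputs:
        v = decay * v + current   # leaky integration of input current
        if v >= threshold:
            spikes.append(1)      # spike event
            v = 0.0               # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Example: a sub-threshold input spikes only after accumulation.
print(lif_neuron([0.6, 0.6, 0.6, 0.0]))  # → [0, 1, 0, 0]
```

Because information is carried by sparse binary spike events rather than dense activations, such networks map naturally onto event-driven neuromorphic hardware, which is the source of the power savings discussed above.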