Abstract

Recently introduced discrete-time deep reinforcement learning (DRL) techniques have led to significant advances in online games, robotics, and related domains. Inspired by these developments, we propose Quantile Critic with Spiking Actor and Normalized Ensemble (QC_SANE), an approach for continuous control problems that uses a quantile loss to train the critic and a spiking neural network (NN) to train an ensemble of actors. The NN performs internal normalization using the scaled exponential linear unit (SELU) activation function, which ensures robustness. An empirical study on multi-joint dynamics with contact (MuJoCo) environments shows improved training and test results over the state-of-the-art approach, the population coded spiking actor network (PopSAN).
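For concreteness, the sketch below (in PyTorch) illustrates the two ingredients the abstract names: a generic quantile Huber loss of the kind used to train distributional critics, and a SELU-activated network body. The function name, tensor shapes, kappa parameter, and layer sizes are illustrative assumptions, not the paper's exact implementation.

    import torch
    import torch.nn as nn

    def quantile_huber_loss(pred_quantiles, targets, taus, kappa=1.0):
        # pred_quantiles: (batch, n_quantiles) critic estimates at fractions `taus`
        # targets:        (batch, n_targets)   Bellman target samples
        # taus:           (n_quantiles,)       quantile fractions in (0, 1)
        td = targets.unsqueeze(2) - pred_quantiles.unsqueeze(1)  # pairwise TD errors
        huber = torch.where(td.abs() <= kappa,
                            0.5 * td.pow(2),
                            kappa * (td.abs() - 0.5 * kappa))
        # Asymmetric weighting: overestimation and underestimation are
        # penalized differently at each quantile fraction
        weight = (taus.view(1, 1, -1) - (td.detach() < 0).float()).abs()
        return (weight * huber / kappa).mean()

    # SELU self-normalizes activations across layers; a hypothetical
    # actor body with made-up observation/action dimensions:
    actor = nn.Sequential(nn.Linear(17, 256), nn.SELU(),
                          nn.Linear(256, 6), nn.Tanh())

A typical call would pass evenly spaced fractions, e.g. taus = (torch.arange(32) + 0.5) / 32. Note that in QC_SANE the actor is a spiking network rather than the plain MLP shown here; the SELU layer only illustrates the self-normalization property the abstract refers to.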
