Abstract
The purpose of this research is to acquire an adaptive control policy for an airship in a dynamic, continuous environment using reinforcement learning combined with evolutionary construction. Because the airship has large inertia and must sense a large amount of information from the continuous environment to behave appropriately, the state space for reinforcement learning becomes huge. To reduce the state space and segment it suitably, we propose combining CMAC-based Q-learning with evolutionary construction of its state space layers. Simulations showed that the acquired state space segmentation enabled the airship to learn effectively.
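The CMAC (tile-coding) function approximator underlying the proposed Q-learning can be sketched as follows. This is a minimal one-dimensional illustration with assumed parameters (tiling count, tile count, learning rate), not the paper's implementation; the evolutionary layer-construction step is omitted.

```python
import numpy as np

class CMAC:
    """Tile-coding (CMAC) approximator over a 1-D continuous state.

    Each of `n_tilings` overlapping grids is offset slightly, so a state
    activates one tile per tiling; the approximated value is the sum of
    the weights of the active tiles. Coarse tiles give generalization,
    while the offsets give finer effective resolution.
    """

    def __init__(self, n_tilings=8, n_tiles=10, state_low=0.0, state_high=1.0):
        self.n_tilings = n_tilings
        self.n_tiles = n_tiles
        self.low = state_low
        self.width = (state_high - state_low) / n_tiles
        # one weight table per tiling (extra tile absorbs the offset overhang)
        self.w = np.zeros((n_tilings, n_tiles + 1))

    def active_tiles(self, s):
        """Return the index of the active tile in each tiling."""
        indices = []
        for t in range(self.n_tilings):
            offset = t * self.width / self.n_tilings
            i = int((s - self.low + offset) / self.width)
            indices.append(min(i, self.n_tiles))
        return indices

    def value(self, s):
        """Approximated value: sum of active-tile weights."""
        return sum(self.w[t, i] for t, i in enumerate(self.active_tiles(s)))

    def update(self, s, target, alpha=0.1):
        """Move the approximated value toward `target` (one gradient step),
        e.g. toward a Q-learning target r + gamma * max_a Q(s', a)."""
        error = target - self.value(s)
        for t, i in enumerate(self.active_tiles(s)):
            self.w[t, i] += alpha / self.n_tilings * error
```

In Q-learning, one such approximator would be kept per discrete action, and the update target would be the usual bootstrapped return; the evolutionary construction in the paper then adapts how these layers segment the state space.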