Abstract

For a planar tracking problem, a multiply interpretable, improved Proximal Policy Optimization (PPO) algorithm with a few-shot technique, named F-GBQ-PPO, is proposed. Compared with standard PPO, the main improvements of F-GBQ-PPO are increased interpretability and reduced consumption of real interaction samples. To improve the interpretability of the tracking policy, three levels of interpretability are studied: perceptual, logical, and mathematical. Specifically, these are realized by introducing a guided policy based on the Apollonius circle, a hybrid exploration policy based on biological motions, and an update of external parameters based on a quantum genetic algorithm. In addition, to cope with the potential shortage of real interaction samples in practical applications, a few-shot technique is built into the algorithm, which generates fake samples through a multi-dimensional Gaussian process. By mixing fake samples with real ones in a fixed proportion, the demand for real samples is reduced.
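
The following is a minimal sketch, not the authors' implementation, of the few-shot idea described above: fit a Gaussian-process model of the transition dynamics on a small set of real samples, draw synthetic ("fake") transitions from it, and mix them with real ones at a fixed ratio before a PPO update. All names and parameters here (fit_dynamics_gp, make_batch, mix_ratio, the RBF kernel choice) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def fit_dynamics_gp(states, actions, next_states):
    """Fit a multi-output GP mapping (state, action) -> next_state on real samples."""
    X = np.hstack([states, actions])
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X, next_states)
    return gp


def make_batch(gp, real_states, real_actions, real_next, mix_ratio=0.5, rng=None):
    """Return a training batch in which roughly `mix_ratio` of transitions are GP samples."""
    rng = rng or np.random.default_rng(0)
    state_dim = real_states.shape[1]
    n_fake = int(len(real_states) * mix_ratio / (1.0 - mix_ratio))

    # Perturb real (state, action) pairs to obtain query points for fake transitions.
    idx = rng.integers(0, len(real_states), size=n_fake)
    X_query = np.hstack([real_states[idx], real_actions[idx]])
    X_query = X_query + rng.normal(scale=0.01, size=X_query.shape)

    # Draw next-state samples from the GP posterior; shape (n_fake, state_dim, 1).
    fake_next = gp.sample_y(X_query, n_samples=1, random_state=0)[..., 0]

    # Concatenate real and fake transitions into one mixed batch.
    states = np.vstack([real_states, X_query[:, :state_dim]])
    actions = np.vstack([real_actions, X_query[:, state_dim:]])
    next_states = np.vstack([real_next, fake_next])
    return states, actions, next_states
```

With mix_ratio set to 0.5 the batch contains as many GP-generated transitions as real ones; raising the ratio further reduces the number of real interactions needed per update, at the cost of relying more heavily on the learned dynamics model.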
