Abstract

Recent deterministic learning methods have achieved locally-accurate identification of unknown system dynamics. However, this locally-accurate identification means that the neural networks capture dynamics knowledge only along the system trajectory. To capture a broader knowledge region, this article investigates the knowledge fusion problem of deterministic learning, that is, the integration of knowledge regions learned along different individual trajectories. Specifically, two knowledge fusion schemes are systematically introduced: an online fusion scheme and an offline fusion scheme. The online scheme can be viewed as an extension of distributed cooperative learning control to cooperative neural identification for sampled-data systems. By designing an auxiliary information transmission strategy that enables each neural network to receive information learned from other tasks while learning its own, it is proven that the weights of all localized RBF networks converge exponentially to their common true/ideal values. The offline scheme can be regarded as a knowledge distillation strategy, in which the fused network is obtained by offline training on the knowledge learned from all individual system trajectories via deterministic learning. A novel weight fusion algorithm with low computational complexity is proposed, based on the least-squares solution under subspace constraints. Simulation studies show that the proposed fusion schemes successfully integrate the knowledge regions of different individual trajectories while maintaining learning performance, thereby greatly expanding the knowledge region acquired through deterministic learning.
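To make the offline idea concrete, the following is a minimal sketch (not the paper's algorithm, which additionally imposes subspace constraints) of fusing RBF-network weights from several trajectories by solving one least-squares problem over the stacked per-trajectory regressor data. All names and shapes here are illustrative assumptions.

```python
import numpy as np

def fuse_weights(regressors, targets):
    """Hypothetical unconstrained fusion step.

    regressors: list of (n_i, p) RBF activation matrices Phi_i,
                one per individual trajectory.
    targets:    list of (n_i,) sampled dynamics outputs y_i.
    Returns the fused weight vector w minimizing
    sum_i ||Phi_i @ w - y_i||^2 over all trajectories jointly.
    """
    Phi = np.vstack(regressors)        # stack data from all trajectories
    y = np.concatenate(targets)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

# Toy usage: two "trajectories" sampled (noiselessly) from the same
# underlying weight vector, so the fused solution recovers it exactly.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
Phi1 = rng.normal(size=(20, 3))
Phi2 = rng.normal(size=(15, 3))
w_fused = fuse_weights([Phi1, Phi2], [Phi1 @ w_true, Phi2 @ w_true])
```

The point of the sketch is that fusing over stacked data couples regions excited by different trajectories; the paper's contribution is a low-complexity variant that restricts this solve to a subspace rather than computing it over all data directly.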
