Abstract

Neural network-based controllers are widely used in robotic control systems. A controller with more neurons typically achieves better performance, but an excessive number of neurons makes the model computationally intensive, resulting in slow dynamic responses in real-world environments. This paper reports a network compression method that integrates knowledge distillation to develop concise neural network-based controllers that balance control performance against computational cost. The method first trains a full-size teacher model, which is then pruned to yield a concise network with minimal loss of performance. In this study, this concise network serves as the prototype of a student model, which is further trained through knowledge distillation. The proposed compression method was applied to three classical networks, and the resulting compact controllers were tested on a robot manipulator to demonstrate their efficacy and potential. Experimental results from a comparative study confirm that the student models, despite having fewer neurons, achieve control performance similar to that of the teacher models for intelligent dynamic control while converging faster.
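The abstract outlines a three-step pipeline: train a full-size teacher, prune it into a concise student prototype, then train the student by distillation. The sketch below is a rough PyTorch illustration of that pipeline, not the paper's implementation; the layer widths, the 50% structured L2 pruning ratio, the MSE-based distillation loss, and the loss weighting alpha are all assumptions introduced for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

# Step 1: a full-size teacher controller (widths here are hypothetical).
teacher = nn.Sequential(
    nn.Linear(12, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 6),
)
# ... train `teacher` on the control task first ...

# Step 2: structured magnitude pruning zeroes whole output units (rows of
# each weight matrix), i.e. whole neurons; the surviving width suggests the
# size of the concise student prototype. The 50% ratio is an assumption.
for module in teacher.modules():
    if isinstance(module, nn.Linear) and module.out_features > 6:
        prune.ln_structured(module, name="weight", amount=0.5, n=2, dim=0)

# Step 3: build the compact student at the reduced width and train it to
# imitate the (frozen) teacher while still fitting the task targets.
student = nn.Sequential(
    nn.Linear(12, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 6),
)

def distillation_loss(s_out, t_out, target, alpha=0.5):
    # alpha blends imitation of the teacher with the original task loss;
    # the MSE form and the 0.5 weighting are illustrative choices.
    return alpha * F.mse_loss(s_out, t_out) + (1 - alpha) * F.mse_loss(s_out, target)

# One illustrative training step on synthetic data.
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
state = torch.randn(64, 12)    # e.g. joint positions/velocities (synthetic)
target = torch.randn(64, 6)    # e.g. reference torques (synthetic)
with torch.no_grad():
    t_out = teacher(state)     # frozen teacher provides soft targets
loss = distillation_loss(student(state), t_out, target)
opt.zero_grad(); loss.backward(); opt.step()
```

In a setup like this, distillation lets the smaller student learn from the teacher's outputs rather than from the raw task signal alone, which is consistent with the faster convergence the abstract reports.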
