Abstract
Continuous action space decision-making problems are widespread in industrial manufacturing. However, existing reinforcement learning (RL) methods rely on large numbers of training samples to solve such problems, which is often unacceptable when data are limited or expensive to obtain, as in low-volume manufacturing. This paper proposes a new RL method based on a Fourier Q operator network (FQON). The input of FQON is the expected state function and its output is the Q-value function, and both functions take the RL action as the independent variable. The infinite-dimensional mapping between the function domains is established by a set of parameters that can be used with different discretizations, so the mapping complexity is fixed regardless of the action space resolution. By exploiting the fast computation afforded by the Fourier kernel operator, the mapping complexity is greatly reduced, enabling FQON to make decisions in a continuous action space with a small number of training samples. Taking machining deformation control of an aero-engine casing as a case study, experimental results show that the FQON-based RL method controls deformation well with limited training samples.
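For intuition, the sketch below shows one way such a discretization-invariant operator layer can be realized in PyTorch, in the spirit of Fourier neural operators: a spectral convolution maps a discretized expected-state function over the action grid to a Q-value function over the same grid. All class names, the two-layer depth, and hyperparameters (`width`, `modes`) are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of a Fourier-operator Q-network (assumed architecture,
# not the authors' exact FQON design).
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """1D Fourier kernel layer: FFT -> keep `modes` low frequencies ->
    per-mode linear channel mixing -> inverse FFT. The learned weights
    do not depend on the grid resolution."""
    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_ch * out_ch)
        self.weight = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, in_ch, n_grid)
        x_ft = torch.fft.rfft(x)               # (batch, in_ch, n_grid//2+1)
        out_ft = torch.zeros(x.size(0), self.weight.size(1), x_ft.size(-1),
                             dtype=torch.cfloat, device=x.device)
        m = min(self.modes, x_ft.size(-1))
        # Mix channels mode-by-mode on the retained low frequencies only.
        out_ft[:, :, :m] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :m], self.weight[:, :, :m])
        return torch.fft.irfft(out_ft, n=x.size(-1))

class FQONSketch(nn.Module):
    """Maps an expected-state function to a Q-value function, both
    discretized over the same action grid."""
    def __init__(self, width=32, modes=12):
        super().__init__()
        self.lift = nn.Conv1d(1, width, 1)     # pointwise lifting
        self.spec1 = SpectralConv1d(width, width, modes)
        self.spec2 = SpectralConv1d(width, width, modes)
        self.skip1 = nn.Conv1d(width, width, 1)
        self.skip2 = nn.Conv1d(width, width, 1)
        self.proj = nn.Conv1d(width, 1, 1)     # pointwise projection to Q

    def forward(self, s_fn):                   # s_fn: (batch, 1, n_actions)
        h = self.lift(s_fn)
        h = torch.relu(self.spec1(h) + self.skip1(h))
        h = torch.relu(self.spec2(h) + self.skip2(h))
        return self.proj(h)                    # (batch, 1, n_actions)

# The same parameters evaluate on any action-grid resolution:
net = FQONSketch()
q_coarse = net(torch.randn(4, 1, 64))          # 64-point action grid
q_fine = net(torch.randn(4, 1, 256))           # 256-point action grid
greedy = q_fine.squeeze(1).argmax(dim=-1)      # argmax_a Q(s, a) per sample
```

Because the spectral weights are defined on a fixed number of retained Fourier modes, the same parameter set evaluates on action grids of any resolution, which is the discretization-invariance property the abstract refers to.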