Abstract
Response prediction is a fundamental yet challenging task in aeronautical engineering, requiring careful selection of sensor positions correlated with the target responses to achieve precise predictions. Unfortunately, for large-scale structures, the rigorous selection of reliable sensor candidates for multi-target responses remains largely unexplored. In this paper, we propose a flexible and generalized framework, in the spirit of divide-and-conquer, for selecting the sensors most relevant to the multi-target responses and predicting those responses, referred to as the Fast-aware Multi-Target Response Prediction (FMTRP) approach. Specifically, first, a multi-task learning module is designed to predict responses at multiple points simultaneously. We also devise adaptive mechanisms that reweight the loss terms and prioritize the more challenging prediction tasks. Second, to ensure ease of interpretation, we introduce a hybrid penalty that selects sensors at the group-sparsity, individual-sparsity, and element-sparsity levels. Finally, because the substantial number of candidate sensors poses a significant computational burden, we develop a more efficient search strategy and supporting computations that make the proposed approach applicable in practice, leading to substantial runtime improvements. Extensive experiments on aircraft standard-model response datasets and large-airliner test-flight datasets validate the effectiveness of the proposed approach in identifying sensor locations and simultaneously predicting responses at multiple points. Compared to state-of-the-art methods, the proposed approach achieves an accuracy of over 99% under sinusoidal excitation and exhibits the shortest runtime (3.514 s).
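The abstract does not give the exact form of the hybrid penalty, but a common way to combine group-, individual-, and element-level sparsity over a sensor-weight matrix is a sparse-group-lasso-style sum of norms. The sketch below is a hypothetical illustration under that assumption; the function name, group structure, and regularization weights (`lam_g`, `lam_i`, `lam_e`) are illustrative, not taken from the paper.

```python
import numpy as np

def hybrid_sparsity_penalty(W, groups, lam_g=1.0, lam_i=1.0, lam_e=1.0):
    """Hypothetical hybrid penalty mixing three sparsity levels.

    W      : (n_channels, n_tasks) coefficient matrix; row i holds channel
             i's weights across all target-response prediction tasks.
    groups : list of row-index lists, each grouping the channels that belong
             to one candidate sensor location.
    """
    # Group sparsity: L2 (Frobenius) norm of each sensor's whole weight
    # block -- driving a block to zero deselects that sensor location.
    group_term = sum(np.linalg.norm(W[g, :]) for g in groups)
    # Individual sparsity: L2 norm of each row, i.e. one channel shared
    # across all tasks.
    indiv_term = np.sum(np.linalg.norm(W, axis=1))
    # Element sparsity: plain L1 over all entries, allowing task-specific
    # pruning of single weights.
    elem_term = np.sum(np.abs(W))
    return lam_g * group_term + lam_i * indiv_term + lam_e * elem_term
```

With all weights set to 1, a matrix `W = [[3, 4], [0, 0]]` and singleton groups `[[0], [1]]` gives a penalty of 5 (group) + 5 (individual) + 7 (element) = 17, showing how the zero row contributes nothing at any level.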