Unmanned autonomous systems (UASs), including drones and robots, are widely employed across many fields. Despite significant advances in AI-enhanced intelligent systems, their interpretability and comprehensive quantitative evaluation remain notably deficient. The existing literature has focused primarily on constructing evaluation frameworks and methods but has often overlooked the rationality and reliability of these methods. To address these challenges, this paper proposes a data-driven optimization method for evaluating unmanned autonomous systems. By optimizing the weights of existing indicators according to the distribution characteristics of the data, the method improves the stability and reliability of assessment outcomes. Furthermore, interpretability techniques such as Local Interpretable Model-agnostic Explanations (LIME) and Partial Dependence Plots (PDPs) are employed to verify the effectiveness of the designed evaluation indicators, thereby ensuring the robustness of the evaluation system. Experimental results validate the effectiveness of the proposed approach.
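The abstract does not specify how LIME and PDPs are applied to the evaluation indicators; the following is a minimal illustrative sketch, not the authors' implementation. It assumes a tabular set of indicator scores and an aggregated evaluation outcome, uses a RandomForestRegressor as a surrogate evaluation model, and relies on scikit-learn and the `lime` package; the indicator names and synthetic data are hypothetical.

```python
# Hypothetical sketch: checking evaluation indicators with PDP and LIME.
# Assumes indicator scores X and an aggregated evaluation score y;
# "indicator_1" etc. are illustrative names, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["indicator_1", "indicator_2", "indicator_3"]
X = rng.uniform(size=(200, 3))                      # synthetic indicator scores
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]   # synthetic overall score

# Surrogate model mapping indicator scores to the aggregated evaluation result.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# PDP: average marginal effect of each indicator on the predicted score.
for i, name in enumerate(feature_names):
    pd_result = partial_dependence(model, X, features=[i])
    effect = pd_result["average"][0]
    print(f"{name}: PDP range {effect.min():.3f} .. {effect.max():.3f}")

# LIME: local attribution of one system's score to its individual indicators.
explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
explanation = explainer.explain_instance(X[0], model.predict, num_features=3)
print(explanation.as_list())
```

In this kind of check, an indicator whose partial dependence is flat or whose LIME attributions are consistently negligible would be a candidate for re-weighting or removal, which is consistent with the paper's stated goal of verifying indicator effectiveness.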