Abstract

Systems Engineering (SE) based test and evaluation (T&E) approaches have proven crucial for the successful realization of most modern‐day complex systems and system‐of‐systems solutions. The design, test, and evaluation of engineered systems with many technologically advanced and complex components hinges on well‐structured integration and life cycle evolution process models, clearly defined requirements, and controllability, observability, and stability (COS) of components that transcend to the system level. However, the integration of machine learning (ML), deep learning (DL), and artificially intelligent (AI) components invalidates some SE foundations, as DL algorithms are primarily data‐driven with opaque decision‐making constructs. As a result, current SE T&E approaches, although necessary, have become insufficient to evaluate the adoption of ML/DL/AI methods in engineered systems. This paper proposes two new approaches—explainable AI (xAI) and counterfactual testing and evaluation (cT&E)—as additions to the SE tool set for T&E of intelligent engineered systems. An example of SE T&E considerations is provided based on a conceptual aircraft control system implementation, comparing a classical controller with a DL‐based reinforcement learning controller.
