Abstract

Trusted Space Autonomy is challenging in that space systems are complex artifacts deployed in high-stakes environments with complicated operational settings. Thus far, these challenges have been met using the full arsenal of tools: formal methods, informal methods, testing, runtime techniques, and operations processes. Using examples from previous deployments of autonomy (e.g., the Remote Agent Experiment on Deep Space One, Autonomous Sciencecraft on Earth Observing One, WATCH on MER, IPEX, AEGIS on MER, MSL, and M2020, and the M2020 Onboard Planner), we discuss how each of these approaches has been used to enable successful deployment of autonomy. We next focus on the relatively limited use of formal methods, both prior to deployment and at runtime. From the needs perspective, formal methods may represent the best chance for reliable autonomy: testing, informal methods, and operations accommodations do not scale well with the increasing complexity of autonomous systems, as the number of test cases explodes and the human effort required for informal methods becomes infeasible. From the practice perspective, however, the application of formal methods has been limited by the difficulty of eliciting formal specifications, the challenge of representing complex constraints such as metric time and resources, and the significant expertise required to apply formal methods properly to complex, critical applications. We discuss some of these challenges, as well as the opportunity to extend formal and informal methods into runtime validation systems.

Keywords: Verification and validation, Flight software, Space autonomy, Artificial intelligence
