Abstract

Future autonomous systems will employ complex sensing, computation, and communication components for perception, planning, control, and coordination, and will need to operate in highly dynamic and uncertain environments while providing safety and security assurance. To realize this vision, we must better understand and address the challenges posed by the unknowns: unexpected disturbances from component faults, environmental interference, and malicious attacks, as well as the inherent uncertainties in system inputs, model inaccuracies, and machine learning techniques (particularly those based on neural networks). In this work, we discuss these challenges, propose approaches for addressing them, and present some initial results. In particular, we introduce a cross-layer framework for modeling and mitigating execution uncertainties (e.g., timing violations, soft errors) with the weakly-hard paradigm, quantitative and formal methods for ensuring safe and time-predictable application of neural networks in both perception and decision making, and safety-assured adaptation strategies for dynamic environments.
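
For reference on the weakly-hard paradigm mentioned above: a task satisfies an (m, K) weakly-hard constraint if at most m deadlines are missed in any window of K consecutive activations. The following minimal Python sketch is illustrative only, not taken from the paper; the function name and interface are our own assumptions. It checks such a constraint over a recorded sequence of deadline outcomes:

    # Illustrative sketch (not from the paper): checking an (m, K) weakly-hard
    # constraint, i.e., at most m deadline misses in any K consecutive jobs.

    def satisfies_weakly_hard(misses, m, K):
        """Return True if the miss sequence satisfies the (m, K) constraint.

        misses: list of booleans, True where the corresponding job missed
        its deadline.
        """
        if len(misses) < K:
            # Not enough jobs yet to fill a full window of size K.
            return sum(misses) <= m
        # Slide a window of K consecutive jobs and count misses in each.
        return all(sum(misses[i:i + K]) <= m
                   for i in range(len(misses) - K + 1))

    # Example: at most 1 miss allowed in any 3 consecutive jobs.
    print(satisfies_weakly_hard([False, True, False, False, True, False], m=1, K=3))  # True
    print(satisfies_weakly_hard([True, True, False, False], m=1, K=3))                # False

Such constraints let a designer bound how often execution uncertainties (e.g., deadline misses caused by timing violations or soft errors) may occur while still reasoning about system-level safety.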
