Abstract

Machine Learning (ML) algorithms and Artificial Intelligence (AI) are now regarded as very useful for data-driven applications, including resilient multi-domain operations. However, ML algorithms and AI systems can be controlled, evaded, biased, and misled through flawed learning models and input data, so they need robust security features and trust. Furthermore, ML algorithms and AI systems face additional challenges when the training and evaluation data are sparse or small (labeled or unlabeled) or very large. It is therefore important to design, evaluate, and test ML algorithms and AI systems that produce reliable, robust, trustworthy, explainable, and fair/unbiased outcomes so that they are acceptable and dependable in mission-critical multi-domain operations. ML algorithms rely on data and operate on the principle of "Garbage In, Garbage Out": if the input data to the learning model are corrupted or compromised, the outcomes of the ML/AI system will not be optimal, reliable, or trustworthy.
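
To make the "Garbage In, Garbage Out" point concrete, the sketch below is an illustrative assumption rather than anything from the paper: the synthetic dataset, the logistic-regression model, and the noise levels are chosen only for demonstration. It corrupts an increasing fraction of training labels and reports how the test accuracy of a simple classifier degrades as the input data become more compromised.

```python
# Hypothetical illustration (not from the paper): "Garbage In, Garbage Out"
# shown by flipping a fraction of training labels and measuring the drop
# in test accuracy of a simple classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data (assumed stand-in for real mission data)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_label_noise(flip_fraction: float) -> float:
    """Train on labels with a given fraction flipped; evaluate on clean test data."""
    y_noisy = y_train.copy()
    n_flip = int(flip_fraction * len(y_noisy))
    idx = rng.choice(len(y_noisy), size=n_flip, replace=False)
    y_noisy[idx] = 1 - y_noisy[idx]  # corrupt the training labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    return model.score(X_test, y_test)

for frac in (0.0, 0.2, 0.4):
    print(f"label noise {frac:.0%}: test accuracy {accuracy_with_label_noise(frac):.3f}")
```

As the flipped fraction grows, accuracy falls toward chance, which is the behavior the abstract warns about when learning models consume corrupted or compromised inputs.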
