Abstract

Much like cars, AI technologies must undergo rigorous testing to ensure their safety and reliability. However, just as a 16-wheel truck's brakes differ from those of a standard hatchback, AI models may require distinct analyses depending on their risk, size, application domain, and other factors. Prior research has attempted to address this by identifying areas of concern for AI/ML applications and building tools to simulate the effects of adversarial actors. However, the variety of existing frameworks poses challenges: inconsistent terminology, focus, complexity, and interoperability issues hinder effective threat discovery. In this paper, we present a meta-analysis of 14 AI threat modeling frameworks, distilling them into a streamlined set of questions for AI/ML threat analysis. We then review this question library, incorporating feedback from 10 experts to refine the questions. The refined set of questions allows practitioners to seamlessly integrate threat analysis into the comprehensive manual evaluation of a wide range of AI/ML applications.
