Abstract

It is our pleasure to welcome you to the 10th ACM Workshop on Artificial Intelligence and Security (AISec 2017). AISec, co-located with CCS annually for ten consecutive years, is the premier meeting place for researchers interested in the intersection of security, privacy, AI, and machine learning. Its role as a venue has been to merge practical security problems with advances in AI and machine learning. In doing so, researchers have also developed theory and analytics unique to this domain and have explored diverse topics such as learning in game-theoretic adversarial environments, privacy-preserving learning, and applications to spam and intrusion detection.

AISec 2017 received 36 submissions, of which 11 (30%) were selected for publication and presentation as full papers. We also accepted 3 short papers: two-page papers presented in a lightning round (10 minutes) at the workshop. Submissions arrived from researchers in 15 countries and from a wide variety of academic and corporate institutions. The accepted papers were organized into the following thematic groups: Deep Learning, concerning the analysis of the security properties of deep neural networks against test-time evasion and training-time poisoning attacks; Authentication and Intrusion Detection, related to systems that use machine learning to solve a particular security problem; Defense against Poisoning, related to countermeasures that mitigate the impact of training-time poisoning attacks; and Malware, concerning automatic malware detection and classification.

The keynote address is given by Aylin Caliskan of Princeton University, USA, whose talk is entitled "Beyond Big Data: What Can We Learn from AI Models?" In this talk, Dr. Caliskan discusses how to use machine learning and natural language processing in novel ways to interpret big data, develop privacy and security attacks, and gain insights about humans and society through these methods. She discusses how to analyze machine learning models' internal representations to investigate how the artificial intelligence perceives the world, and to uncover facts about society and the use of language that have implications for privacy, security, and fairness in machine learning.
