Abstract

In the rapidly evolving landscape of artificial intelligence (AI) and cybersecurity, the increasing adoption of large language models has introduced both opportunities and challenges. Large generative AI models, such as GPT-3.5 and GPT-4 as used in ChatGPT, have shown promising potential in various domains, including cybersecurity, software engineering, and human-computer interaction. Alongside their benefits, however, these models raise concerns regarding transparency, interpretability, and ethical considerations. Furthermore, AI-driven cybersecurity has emerged as a critical defense against sophisticated cyber threats, but it faces issues related to accuracy, false positives, and the need for data-efficient techniques. The integration of AI in cybersecurity has also introduced new attack vectors and vulnerabilities that require comprehensive solutions. To address these multifaceted challenges, a research survey is warranted to analyze the state-of-the-art understanding of the use of generative AI in cybersecurity, addressing issues identified through statistical analysis, new attack vectors and vulnerabilities that have emerged, innovative solutions that may exist, and current approaches to promoting responsible and secure AI practices.
