Robustness to adversarial attacks is an increasingly critical requirement for machine learning systems, and adversarial training has emerged as one of the most prominent defenses. This paper provides a comprehensive overview of adversarial training and its role in hardening machine learning models. We first review its foundational principles, underlying mechanisms, and theoretical underpinnings. We then survey state-of-the-art methods for generating adversarial examples and the training procedures built on them. Drawing on recent advances and empirical findings, we assess how effectively adversarial training improves model robustness across diverse domains and applications. Finally, we discuss open challenges and research directions, with the aim of guiding future work on the security and dependability of machine learning systems deployed in real-world settings. By clarifying the mechanics and implications of adversarial training, this paper contributes to the understanding and application of techniques for defending against adversarial threats.
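To make the core idea concrete, the following is a minimal, self-contained sketch of adversarial training: adversarial examples are generated with a fast gradient sign method (FGSM) style perturbation, and the model is updated on those perturbed inputs. The model (logistic regression), the synthetic dataset, and all hyperparameter values (`lr`, `eps`, iteration count) are illustrative assumptions, not a method from any specific paper surveyed here.

```python
import numpy as np

# Illustrative sketch only: FGSM-style adversarial training of a
# logistic-regression classifier on synthetic 2-D data. All settings
# (dataset, lr, eps, iteration count) are assumed for demonstration.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable labels

w = np.zeros(2)
b = 0.0
lr, eps = 0.1, 0.1  # learning rate and L-infinity perturbation budget

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # Inner step (attack): for logistic regression the gradient of the
    # loss w.r.t. the input is (p - y) * w, so the FGSM perturbation is
    # eps times its elementwise sign.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)

    # Outer step (defense): update the parameters on the adversarial
    # examples rather than the clean inputs (min-max training).
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * np.mean(p_adv - y)

clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5))
```

Training on `X_adv` instead of `X` is the defining feature of adversarial training: each update minimizes the loss on inputs that a bounded attacker has already maximized, approximating the min-max objective discussed above.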