In the rapidly evolving field of artificial intelligence, machine learning (ML) and deep learning (DL) algorithms have emerged as powerful tools for solving complex problems across many domains, including cyber security. As these algorithms become more prevalent, however, they also face new security challenges. Among the most significant is the threat of zero-day attacks, which exploit previously unknown vulnerabilities in the algorithms themselves or in the data they process. This paper provides a comprehensive overview of zero-day attacks on ML/DL algorithms, covering their types, causes, effects, and potential countermeasures.

The paper first introduces and defines the concept of zero-day attacks on ML/DL systems. It then reviews the existing research, organised into three main categories: data poisoning attacks, adversarial input attacks, and model stealing attacks. Each attack type poses distinct challenges and calls for specific countermeasures.

The paper also discusses the potential impacts and risks of these attacks across application domains. In facial expression recognition, an adversarial input attack could cause emotions to be misclassified, with serious implications for user experience and system integrity. In object classification, a data poisoning attack could cause the algorithm to misidentify critical objects, potentially endangering human lives in applications such as autonomous driving. In satellite intersection recognition, a model stealing attack could compromise national security by exposing sensitive information.

Finally, the paper presents possible protection methods against zero-day attacks on ML/DL algorithms: anomaly detection techniques that flag unusual patterns in the data or in the algorithm’s behaviour, model verification and validation methods that establish the algorithm’s correctness and robustness, federated learning approaches that protect the privacy of the training data, and differential privacy techniques that add calibrated noise to the data or to the algorithm’s outputs to prevent information leakage. The paper concludes by highlighting open issues and future directions for research in this area, emphasizing the need for ongoing efforts to secure ML/DL algorithms against zero-day attacks.
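To make the first attack category concrete, the following is a minimal sketch of a label-flipping data poisoning attack, in which an adversary who can tamper with a fraction of the training set flips those labels to a chosen target class. The function name `flip_labels`, the poisoning fraction, and the toy label vector are illustrative assumptions, not artefacts from the surveyed work.

```python
import numpy as np

def flip_labels(y, fraction=0.1, target_label=1, rng=None):
    """Simulate a label-flipping data poisoning attack: flip the labels of a
    randomly chosen fraction of training records to a target class, which can
    degrade or bias the model trained on the poisoned set."""
    rng = rng or np.random.default_rng()
    y_poisoned = y.copy()
    n_poison = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y_poisoned[idx] = target_label
    return y_poisoned, idx

# Toy usage on a hypothetical binary-labelled training set
y_train = np.array([0, 1, 0, 0, 1, 1, 0, 1, 0, 1])
y_bad, poisoned_idx = flip_labels(y_train, fraction=0.2, target_label=1)
print(poisoned_idx, y_bad)
```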
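Adversarial input attacks, the second category, can be illustrated with the well-known fast gradient sign method (FGSM). The sketch below applies FGSM to a simple logistic-regression classifier so that it stays self-contained; the function name `fgsm_perturb`, the toy weights, and the perturbation budget are assumptions for illustration only and do not reproduce any specific attack from the literature reviewed in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    """Craft an adversarial input for a logistic-regression classifier using
    the fast gradient sign method: step the input in the sign of the loss
    gradient, bounded by an epsilon budget."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w          # gradient of binary cross-entropy w.r.t. x
    return x + epsilon * np.sign(grad_x)

# Toy usage: a clean point near the decision boundary gets pushed across it
w, b = np.array([1.0, -1.0]), 0.0
x_clean = np.array([0.2, 0.1])    # classified as the positive class
x_adv = fgsm_perturb(x_clean, y=1, w=w, b=b, epsilon=0.3)
print(sigmoid(w @ x_clean + b), sigmoid(w @ x_adv + b))
```

The same idea scales to deep networks, where the gradient with respect to the input is obtained by backpropagation rather than the closed form used here.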
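On the defence side, the differential privacy technique mentioned above can be sketched with the standard Laplace mechanism, which adds noise calibrated to a query's sensitivity before releasing its result. The helper name `laplace_mechanism` and the toy count query are hypothetical choices for this sketch, not a prescribed implementation.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a query answer with epsilon-differential privacy by adding
    Laplace noise scaled to sensitivity / epsilon (smaller epsilon means
    more noise and stronger privacy)."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Toy usage: privately release how many training records match a condition.
records = np.array([1, 0, 1, 1, 0, 1, 1, 0])   # hypothetical binary attribute
true_count = records.sum()                      # sensitivity of a count is 1
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(true_count, round(noisy_count, 2))
```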