Purpose
Advances in machine learning (ML) have made significant contributions to the development of intelligent and autonomous systems, leading to concerns about the resilience of such systems against cyberattacks. This paper reports findings from a quantitative analysis of the ML security literature to assess current research trends in the field.

Design/methodology/approach
The study presents a statistical analysis of literature published between 2000 and 2023, quantifying research contributions by authors, countries and the interdisciplinary work of organizations. It reviews existing surveys and compares publication activity on attacks against ML with that on ML security. Furthermore, an in-depth study of keywords, citations and collaboration is presented to enable deeper analysis of this literature.

Findings
Trends identified between 2021 and 2022 show an increased focus on adversarial ML, with 40% more publications compared to 2020–2022 and more than 90% of publications appearing in journals. The paper also identifies trends with respect to citations, keywords, annual publications, co-author citations and geographical collaboration, with China and the USA emerging as the countries with the highest publication counts and Biggio B. as the researcher with the greatest collaborative strength (143 co-authors), indicating significant cross-pollination of ideas and knowledge. Keyword analysis highlights deep learning and computer vision as the most common domains for adversarial attacks, owing to the ease of perturbing images and the difficulty of identifying issues in deep learning models because of their complex architectures.

Originality/value
The study identifies research trends, author contributions and open research challenges that can facilitate further research in this domain.