Harnessing AI for Enhanced Cybersecurity
In the era of Artificial Intelligence (AI), it is crucial to understand its impact on cybersecurity. This chapter introduces data-driven security, using data analysis and AI to predict, identify, and neutralize security threats, beginning with an introduction to AI, Machine Learning (ML), and cybersecurity and a survey of current trends in AI/ML applications for cybersecurity. We then discuss workflows for gathering information, analysing data, and applying ML techniques for AI security. Later in the chapter, we examine common pitfalls in designing an AI security workflow and how to avoid them. The chapter also addresses security concerns in contemporary AI systems, emphasizing privacy and ethical considerations while balancing them against technological benefits. Moreover, we discuss how AI/ML could secure the aviation, tourism, and hospitality sectors. Finally, the conclusions provide valuable insights and recommend further exploration and integration with modern technologies.
- Research Article
3
- 10.47941/jmlp.2162
- Aug 2, 2024
- Journal of Modern Law and Policy
Purpose: The general objective of this study was to explore Intellectual Property Rights in the era of Artificial Intelligence. Methodology: The study adopted a desktop research methodology. Desk research refers to secondary data, or data that can be collected without fieldwork. Desk research basically involves collecting data from existing resources and is hence often considered a low-cost technique compared with field research, since the main costs are executives' time, telephone charges, and directories. Thus, the study relied on already published studies, reports, and statistics. This secondary data was easily accessed through online journals and libraries. Findings: The findings reveal that there exists a contextual and methodological gap relating to Intellectual Property Rights in the era of Artificial Intelligence. A preliminary empirical review revealed that the era of Artificial Intelligence (AI) has significantly transformed the landscape of Intellectual Property Rights (IPR), presenting both opportunities and challenges. It highlighted that traditional IP laws are increasingly inadequate to address the complexities introduced by AI-generated content, necessitating a rethinking of existing frameworks. The study emphasized the need to recognize AI's role in the creation of new works and inventions and the importance of developing balanced approaches to protect both human and AI contributions. Ethical considerations, such as accountability, transparency, and fairness, were also deemed crucial in ensuring responsible AI use. Overall, the study called for a comprehensive and proactive approach to integrate AI into IPR, ensuring robust protections while fostering innovation. Unique Contribution to Theory, Practice and Policy: The Technological Determinism Theory, Innovation Diffusion Theory, and Legal Realism Theory may be used to anchor future studies on Intellectual Property Rights in the era of Artificial Intelligence.
The study recommended revising existing IP laws to explicitly include AI-generated content and inventions, clarifying criteria for authorship and inventorship. It suggested expanding theoretical frameworks to accommodate AI contributions, emphasizing the collaborative nature of human and AI creativity. Practical measures, such as enhanced cybersecurity and legal safeguards for AI-generated trade secrets, were advised. Policy-wise, the study advocated for international cooperation to harmonize IP laws concerning AI. Developing ethical guidelines for responsible AI use and implementing education programs to inform stakeholders about AI and IP implications were also recommended. These measures aimed to create a balanced IP framework supporting innovation while protecting the rights of all stakeholders.
- Research Article
- 10.21776/ub.jtg.012.02.4
- Dec 28, 2025
- Transformasi Global
This study analyzes how the United Nations (UN) constructs the issue of artificial intelligence (AI) as part of international security through a constructivist approach. In recent years, AI has developed rapidly and raised global concerns regarding technological misuse, disinformation, and potential threats to political stability and human rights. This phenomenon has driven the emergence of various global governance initiatives, including discussions at the UN Summit of the Future 2025. Using qualitative methods and constructivist discourse analysis, this research examines official documents, speeches by the Secretary-General, and reports from institutions such as UNIDIR, WEF, and OECD. The analysis is conducted through three stages: description, interpretation, and explanation. The findings reveal that the concept of “AI security” does not arise naturally from the nature of the technology itself but is shaped through social processes and normative discourses among global actors. The UN acts as a norm entrepreneur promoting the values of responsible AI and collective security, while states interpret these norms according to their own identities and interests. Thus, AI security is a social construct reflecting the interaction between ideas, interests, and identities within the international system. This study contributes to the strengthening of non-traditional security studies by demonstrating how technological issues can be understood as arenas for the formation of global norms and state political identities.
- Book Chapter
3
- 10.1007/978-3-030-90633-7_96
- Jan 1, 2022
For more than 30 years, companies' needs for management solutions have evolved. Professionals place high demands on how to manage their business as effectively as possible. ERP (Enterprise Resource Planning) is still at the center of projects and reflections on the modernization of software equipment. This management tool has undergone many changes and evolutions. In this context of evolution, it is now possible to add many intelligent functionalities to an ERP by drawing on the major fields of AI (Artificial Intelligence), in particular ML (Machine Learning). The aim of this paper is to highlight this connection between ERP and AI through the application of ML techniques in ERP industrial processes in accordance with Industry 4.0, with the help of cloud computing and the Internet of Things (IoT), and also to propose a classification of ML techniques in some smart industrial process applications in ERP. Keywords: ERP, Artificial intelligence, Machine learning, Cloud, IoT, Industry 4.0
- Single Report
1
- 10.62311/nesx/rrv225
- Mar 19, 2025
Abstract: The rapid adoption of artificial intelligence (AI) in cloud and edge computing environments has transformed industries by enabling large-scale automation, real-time analytics, and intelligent decision-making. However, the increasing reliance on AI-powered infrastructures introduces significant cybersecurity challenges, including adversarial attacks, data privacy risks, and vulnerabilities in AI model supply chains. This research explores advanced cybersecurity frameworks tailored to protect AI-driven cloud and edge computing environments. It investigates AI-specific security threats, such as adversarial machine learning, model poisoning, and API exploitation, while analyzing AI-powered cybersecurity techniques for threat detection, anomaly prediction, and zero-trust security. The study also examines the role of cryptographic solutions, including homomorphic encryption, federated learning security, and post-quantum cryptography, in safeguarding AI models and data integrity. By integrating AI with cutting-edge cybersecurity strategies, this research aims to enhance resilience, compliance, and trust in AI-driven infrastructures. Future advancements in AI security, blockchain-based authentication, and quantum-enhanced cryptographic solutions will be critical in securing next-generation AI applications in cloud and edge environments. Keywords: AI security, adversarial machine learning, cloud computing security, edge computing security, zero-trust AI, homomorphic encryption, federated learning security, post-quantum cryptography, blockchain for AI security, AI-driven threat detection, model poisoning attacks, anomaly prediction, cyber resilience, decentralized AI security, secure multi-party computation (SMPC).
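The model-poisoning attacks this study investigates can be illustrated with a toy sketch (entirely hypothetical data and model, not drawn from the study): a one-dimensional classifier learns the midpoint between class means, and a handful of attacker-injected mislabeled points visibly shifts that decision boundary.

```python
# Toy illustration of model poisoning (hypothetical data): a threshold
# classifier learns the midpoint between the two class means; a few
# mislabeled points injected into the training set shift that boundary
# so that genuinely malicious samples fall on the "benign" side.

def fit_threshold(samples):
    """Learn a 1-D decision threshold: midpoint of the two class means."""
    benign = [x for x, label in samples if label == 0]
    malicious = [x for x, label in samples if label == 1]
    return (sum(benign) / len(benign) + sum(malicious) / len(malicious)) / 2

clean = [(1.0, 0), (2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1), (9.0, 1)]
threshold_clean = fit_threshold(clean)   # midpoint of means 2.0 and 8.0 -> 5.0

# Attacker poisons the training set: extreme points falsely labeled benign (0)
poisoned = clean + [(20.0, 0), (22.0, 0)]
threshold_poisoned = fit_threshold(poisoned)

print(threshold_clean)     # 5.0
print(threshold_poisoned)  # boundary pushed up; the sample at 7.0 now reads as benign
```

Real poisoning attacks target far richer models, but the mechanism sketched here, corrupting training data to move the learned boundary, is the same one the defenses discussed in this study aim to detect.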
- Research Article
1
- 10.3390/ohbm3040007
- Sep 28, 2022
- Journal of Otorhinolaryngology, Hearing and Balance Medicine
The application of machine learning (ML) techniques to otolaryngology remains a topic of interest and prevalence in the literature, though no previous articles have summarized the current state of ML application to the management and diagnosis of lateral skull base (LSB) tumors. Accordingly, we present a systematic overview of previous applications of ML techniques to the management of LSB tumors. Independent searches were conducted on PubMed and Web of Science between August 2020 and February 2021 to identify literature pertaining to the use of ML techniques in LSB tumor surgery written in the English language. All articles were assessed with regard to their application task, ML methodology, and outcomes. A total of 32 articles were examined. The number of articles involving applications of ML techniques to LSB tumor surgery has increased significantly since the first article relevant to this field was published in 1994. The most commonly employed ML category was tree-based algorithms. Most articles fell into the category of surgical management (13; 40.6%), followed by disease classification (8; 25%). Overall, the application of ML techniques to the management of LSB tumors has evolved rapidly over the past two decades, and the anticipated growth in the future could significantly augment the surgical outcomes and management of LSB tumors.
- Single Book
- 10.62311/nesx/97891
- Mar 14, 2025
Abstract: As Artificial Intelligence (AI) advances, so do the risks associated with deepfakes, misinformation, and algorithmic bias, posing significant threats to security, privacy, democracy, and societal trust. "Securing AI: Combating Deepfakes, Misinformation, and Bias with Trustworthy Systems" provides a comprehensive analysis of AI security vulnerabilities, adversarial machine learning, AI-driven misinformation, and bias in automated decision-making. The book explores how AI-generated synthetic media, data poisoning attacks, and biased algorithms are being weaponized for cyber fraud, political manipulation, and unethical automation. It delves into defensive strategies, AI forensic tools, cryptographic AI verification, and fairness-aware machine learning techniques to combat these emerging threats. Additionally, the book examines global AI regulations, governance frameworks, and ethical deployment standards that ensure transparency, accountability, and security in AI-driven ecosystems. Through real-world case studies, technical insights, and policy recommendations, this book serves as an essential resource for AI researchers, cybersecurity professionals, policymakers, and technology leaders aiming to develop trustworthy AI systems that resist adversarial manipulation, misinformation campaigns, and algorithmic bias while fostering fair, transparent, and secure AI adoption. 
Keywords: AI security, adversarial machine learning, deepfake detection, AI-generated misinformation, synthetic media, bias mitigation, AI ethics, AI governance, trustworthy AI, explainable AI (XAI), fairness-aware machine learning, cryptographic AI, federated learning security, digital forensics, algorithmic bias, data poisoning attacks, model robustness, cybersecurity in AI, misinformation detection, deep learning security, AI regulatory policies, zero-trust AI, blockchain-based content verification, ethical AI deployment, secure AI frameworks, AI transparency, AI-driven cyber threats, fake news detection, AI fraud prevention.
- Book Chapter
2
- 10.1201/9781003185246-7
- May 25, 2021
This chapter discusses the application of machine learning techniques in the healthcare sector for the prediction of epidemic disease outbreaks. Prediction is an important element in the decision-making processes for responding to and controlling any epidemic disease outbreak. Recently, a large number of countries across the globe have experienced coronavirus infectious disease outbreaks. Machine learning techniques can be very helpful for predicting and managing such epidemic outbreaks. In the new era of artificial intelligence, there exist huge opportunities for technology to assist in monitoring and controlling these types of epidemics. With the growth of big data in the healthcare and biomedical sectors, it is now viable to exploit machine learning techniques and accurate disease prediction models to markedly improve epidemic prediction, which will ultimately strengthen prevention and control capabilities. This chapter examines the variety of machine learning models that have been developed to predict epidemic diseases and shows how machine learning techniques can help public health practitioners predict and detect disease spread, improve epidemic management, and reduce the impact of outbreaks. It also highlights how machine learning techniques can be used to contain the current outbreak of COVID-19, which spread across the globe within a short period of three to four months.
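The trajectory-forecasting task described above is often grounded in compartmental epidemic dynamics. A minimal discrete-time SIR simulation (with illustrative parameters, not fitted to any real outbreak) sketches the kind of curve such prediction models learn to anticipate:

```python
# Minimal discrete-time SIR epidemic simulation (illustrative parameters).
# S: susceptible, I: infected, R: recovered; beta is the transmission rate
# and gamma the recovery rate. Real forecasting models fit such dynamics
# (or learn them directly) from reported case data.

def simulate_sir(population, initial_infected, beta, gamma, days):
    s, i, r = population - initial_infected, initial_infected, 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir(population=1_000_000, initial_infected=10,
                       beta=0.3, gamma=0.1, days=160)
peak_day = max(range(len(history)), key=lambda d: history[d][1])
print(f"peak infections on day {peak_day}: {history[peak_day][1]:.0f}")
```

With beta/gamma = 3 (a basic reproduction number of 3), the infected curve rises, peaks once enough of the population has been infected, and then declines, which is exactly the turning point that timely prediction tries to locate for public health planning.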
- Research Article
3
- 10.1108/bij-04-2024-0353
- Apr 14, 2025
- Benchmarking: An International Journal
Purpose: As artificial intelligence (AI) and machine learning (ML) technologies continue to revolutionize various industries, understanding their impact on job embeddedness becomes crucial. This study examines the role of AI and ML technologies in job embeddedness, identifying key trends and proposing future research directions. It seeks to understand how these technologies influence employee attachment within organizations. Design/methodology/approach: This study uses bibliometric analysis to assess 890 articles published from 2001 to 2023 on job embeddedness and its relationship with AI and ML. The Scopus database is examined using the VOSviewer and Biblioshiny applications to determine themes and research gaps. The study visualizes the intellectual landscape of this area. Findings: This study documents the growing interest in AI, ML, and job embeddedness, highlighting the complex relationship between AI adoption and employees' attachment, links, and fit. It also identifies emerging themes such as AI-enabled talent management, remote-work implications, and ethical considerations in AI-driven workplaces. Research limitations/implications: The scope of the Scopus database, the time period, and metadata correctness all place restrictions on this investigation of how AI and ML affect job embeddedness. Nonetheless, the results underscore the need for empirical study of the effects of AI and ML and provide researchers with useful insights. The study also highlights how technological improvements influence employee attitudes and actions. Originality/value: This study presents a comprehensive overview of job embeddedness and AI/ML technologies, utilizing bibliometric techniques to evaluate research publications. It reveals key trends, identifies gaps, and suggests future directions in this field.
- Research Article
- 10.3233/jifs-189940
- Jan 1, 2021
- Journal of Intelligent & Fuzzy Systems
The advent of the era of artificial intelligence makes it possible for administrative subjects to use intelligent machines and systems in administrative activities. Among these, administrative discretion, the core of administrative law, raises particular concerns when artificial intelligence is involved. In the era of weak artificial intelligence, intelligent administrative discretion has been widely used in all aspects of administrative law enforcement, but administrative subjects have sometimes been negligent in exercising discretion. Looking ahead to the era of strong artificial intelligence, AI machines or systems may acquire the ability and power to exercise administrative discretion independently, but they cannot become the true subject of administrative discretion. Intelligent administrative discretion is conducive to administrative efficiency and helps guarantee the fairness of administrative behavior, but it also faces legal risks such as unfair discretionary outcomes, opaque algorithm settings, and the weakening of government functions. Only by strengthening the legal basis, protecting the rights of the counterparty, improving the accuracy of the algorithms, and elevating the status of the administrative subject can administrative discretionary behavior in the context of artificial intelligence be effectively regulated.
- Research Article
- 10.12732/ijam.v38i9s.870
- Nov 4, 2025
- International Journal of Applied Mathematics
The emergence of 6G networks heralds an era of unprecedented connectivity, speed, and complexity, paving the way for revolutionary advancements such as holographic communications, autonomous systems, and the Internet of Everything (IoE). However, with these enhanced capabilities comes a host of critical security challenges, including sophisticated cyber threats, privacy vulnerabilities, and the protection of billions of interconnected devices. Artificial Intelligence (AI) and Machine Learning (ML) are revolutionizing the security landscape of 6G networks, emerging as powerful catalysts for transformation. These technologies enable proactive threat detection, adaptive defense mechanisms, and real-time response strategies, fostering a resilient and intelligent security framework. AI-driven models can dynamically detect anomalies, autonomously refine protective measures, and facilitate self-optimizing security operations, ensuring a robust and future-ready defense system. Meanwhile, ML techniques facilitate predictive analytics, continuous learning, and advanced encryption strategies tailored to the ever-changing threat environment. Furthermore, AI plays a pivotal role in securing network slices, IoT ecosystems, and edge infrastructures, offering robust and scalable protection within a highly virtualized and decentralized network architecture [1,3]. Key innovations such as federated learning, behavioural analytics, and post-quantum cryptography serve as critical enablers for enhancing privacy, resilience, and trust in 6G environments. Despite their vast potential, challenges such as model robustness, explainability, and seamless integration with legacy systems must be addressed to fully harness the power of AI and ML in securing next-generation networks. By providing a comprehensive exploration of AI- and ML-driven security solutions, this study aims to foster trust, inspire innovation, and lay the foundation for secure and reliable 6G adoption worldwide [14,15].
Problem Statement: As 6G networks evolve, they introduce new security challenges due to their complex architecture, massive connectivity, and ultra-low latency requirements. Traditional security mechanisms are inadequate to combat advanced cyber threats, including AI-powered attacks and vulnerabilities arising from edge computing and decentralized networks. Furthermore, the advent of quantum computing threatens existing cryptographic standards, rendering them obsolete and exposing 6G networks to unprecedented security risks. Therefore, there is a critical need to leverage AI, ML, and quantum-resistant security solutions to enhance threat detection, automate response mechanisms, and secure communications against both classical and quantum cyber threats.
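The anomaly-detection capability this study attributes to AI-driven 6G security can be sketched, in a deliberately simplified statistical form with hypothetical traffic numbers, as flagging observations that deviate sharply from a learned baseline; production systems replace this with adaptive learned models:

```python
# Simplified anomaly detection over network telemetry (hypothetical data):
# flag any observation more than z_threshold standard deviations from the
# baseline mean. Real 6G security stacks use richer, continuously
# learning models, but the core idea of scoring deviation from a
# learned baseline is the same.
import statistics

def detect_anomalies(baseline, observations, z_threshold=3.0):
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observations if abs(x - mean) / stdev > z_threshold]

# Baseline: packets/sec under normal load; observations include a spike.
baseline = [100, 102, 98, 101, 99, 103, 97, 100, 101, 99]
observations = [100, 104, 450, 98]   # 450 might indicate a DDoS burst
print(detect_anomalies(baseline, observations))  # -> [450]
```

The appeal of learned models over this fixed-threshold sketch is precisely the adaptivity the study emphasizes: baselines in 6G traffic shift with load, slicing, and mobility, so the "normal" profile must itself be re-estimated continuously.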
- Conference Article
8
- 10.1109/dasc-picom-cbdcom-cyberscitech49142.2020.00064
- Aug 1, 2020
Nowadays, STEM (science, technology, engineering, and mathematics) has never been taken so seriously, and Artificial Intelligence (AI) currently plays an important role in STEM. Under the 2020 COVID-19 pandemic crisis, coronavirus disease spread across the world we live in. Every government sought advice from scientists before making its strategic plan. Most countries collected data from hospitals (as well as care homes and other parts of society), carried out data analysis, and built AI models to predict potential development patterns in order to shape government strategy. AI security becomes essential: if a security attack corrupts the pattern, the model no longer gives a true prediction, which could result in thousands of lives lost. The potential consequences of such an inaccurate forecast could be even worse. Therefore, taking security into account during forecast AI modelling, with step-by-step data governance, is significant. Cyber security should be applied throughout this kind of prediction process using AI deep learning technology, and some in-depth discussion follows. AI security impact is a principal concern worldwide, and it is also significant for both natural science and social science researchers to consider in the future. In particular, because many services run on online devices, security defenses are essential, and results should be governed with proper data security. AI security strategy should be a top priority influencing governments and their citizens around the world; it will help governments' strategy makers to strike a reasonable balance between technology, society, and politics. In this paper, strategy-related challenges of AI and security are discussed, along with suggestions on the AI cyber security and politics trade-off, from the initial planning stage to near-future further development.
- Research Article
- 10.18178/ijml.2023.13.4.1144
- Jan 1, 2023
- International Journal of Machine Learning
With the rapid development of network technology and the digital economy, the wave of the era of artificial intelligence has swept the world. Facing the era of big data and artificial intelligence, data-oriented technologies undoubtedly represent the practical research trend. The precise analysis enabled by big data and artificial intelligence can therefore provide effective and accurate knowledge and decision-making references for all sectors. In order to effectively and appropriately evaluate the potential risk to soil and groundwater posed by the gas station industry, this study focuses on the potential risk factors affecting soil and groundwater pollution. In the past, our team evaluated the risk factors affecting the remediation cost of soil and groundwater pollution for possible pollution sources such as gas stations. This study proceeds with the existing industrial database for in-depth discussion, uses machine learning technology to evaluate the key factors of pollution risk for soil and groundwater, and compares the differences, applicability, and relative importance of three machine learning techniques (neural networks, random forests, and support vector machines). The performance indicators reveal that the random forest algorithm outperforms the support vector machine and the artificial neural network. The relative importance of parameters differs across the machine learning models; for the random forest model, the five dominant parameters are location, number of gas monitoring wells, age of gas station, number of gasoline nozzles, and number of fuel dispensers.
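The relative-importance comparison reported above can be sketched with permutation importance (a generic technique, not necessarily the study's exact method) on a toy model with hypothetical features: shuffle one feature column at a time and measure how much the model's accuracy drops; features whose shuffling hurts most matter most.

```python
# Permutation importance sketch (hypothetical data and model): shuffle one
# feature at a time and measure the accuracy drop of a fixed predictor.
# Features whose shuffling costs the most accuracy rank as most important,
# analogous to the variable-importance rankings of a random forest.
import random

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    rng = random.Random(seed)
    column = [row[feature] for row in X]
    rng.shuffle(column)                      # break the feature's link to labels
    X_shuffled = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, column)]
    return accuracy(model, X, y) - accuracy(model, X_shuffled, y)

# Toy data: the label depends only on feature 0; features 1 and 2 are noise.
rng = random.Random(42)
X = [[rng.random(), rng.random(), rng.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]
model = lambda row: 1 if row[0] > 0.5 else 0   # stand-in for a trained model

drops = [permutation_importance(model, X, y, f) for f in range(3)]
print(drops)   # large drop for feature 0; exactly 0.0 for the noise features
```

Shuffling a feature the model ignores changes nothing, so its importance is exactly zero, while shuffling the decisive feature roughly halves accuracy; real studies apply the same logic to fitted random forests over dozens of site parameters.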
- Research Article
235
- 10.1631/fitee.1800573
- Dec 1, 2018
- Frontiers of Information Technology & Electronic Engineering
There is a wide range of interdisciplinary intersections between cyber security and artificial intelligence (AI). On one hand, AI technologies, such as deep learning, can be introduced into cyber security to construct smart models for implementing malware classification, intrusion detection, and threat intelligence sensing. On the other hand, AI models face various cyber threats, which can disturb their samples, learning, and decision-making. Thus, AI models need specific cyber security defense and protection technologies to combat adversarial machine learning, preserve privacy in machine learning, secure federated learning, etc. Based on these two aspects, we review the intersection of AI and cyber security. First, we summarize existing research efforts on combating cyber attacks using AI, including traditional machine learning methods and existing deep learning solutions. Then, we analyze the counterattacks from which AI itself may suffer, dissect their characteristics, and classify the corresponding defense methods. Finally, from the aspects of constructing encrypted neural networks and realizing secure federated deep learning, we expound on existing research on how to build a secure AI system.
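The adversarial attacks this review classifies can be illustrated with a fast-gradient-sign-style perturbation against a simple logistic classifier (hand-set weights and inputs, purely illustrative): stepping each input feature in the direction that increases the loss flips the model's decision while changing the input only slightly.

```python
# FGSM-style adversarial perturbation against a logistic classifier
# (hand-set weights, illustrative only). For logistic loss the gradient
# w.r.t. the input x is (sigmoid(w.x + b) - y) * w, so stepping each
# feature by eps * sign(gradient) maximally increases the loss under an
# L-infinity budget of eps.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    grad = [(sigmoid(z) - y) * wi for wi in w]          # dL/dx for logistic loss
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

w, b = [1.0, -2.0, 0.5], 0.0          # "trained" weights (illustrative)
x, y = [2.0, 0.5, 1.0], 1             # correctly classified positive sample

x_adv = fgsm(w, b, x, y, eps=0.6)
print(predict(w, b, x))      # > 0.5: classified positive
print(predict(w, b, x_adv))  # < 0.5: the bounded perturbation flips the decision
```

Defenses surveyed in reviews like this one (adversarial training, input sanitization, certified robustness) all aim to keep the decision stable under exactly this kind of bounded perturbation.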
- Book Chapter
- 10.62311/nesx/97991
- Feb 27, 2025
Abstract: As Artificial Intelligence (AI) becomes increasingly integrated into digital ecosystems, ensuring security and trust in AI-driven systems is paramount. This chapter explores the growing challenges posed by deepfakes, misinformation, and algorithmic bias, which threaten public trust, democratic integrity, and ethical AI adoption. Deepfake technology enables the manipulation of media, leading to fraud, identity theft, and political disinformation, while AI-driven misinformation amplifies fake news and biased narratives through social media algorithms. Additionally, algorithmic bias in hiring, law enforcement, and finance raises concerns about discrimination and fairness in AI decision-making. To counter these threats, AI security strategies—including deepfake detection, fact-checking AI models, fairness-aware algorithms, and cybersecurity measures—are being developed to ensure responsible AI governance. This chapter examines real-world applications, case studies from Google, IBM, Facebook, and OpenAI, and the role of regulations, AI ethics, and transparency in mitigating AI-related risks. Looking forward, the future of AI governance requires a collaborative approach between industry, academia, and policymakers to develop trustworthy, fair, and secure AI systems that benefit society while minimizing risks. Keywords: AI security, trust in AI, deepfakes, misinformation, algorithmic bias, AI ethics, fairness in AI, AI governance, AI transparency, adversarial attacks, explainable AI, cybersecurity, AI-driven misinformation, AI regulations, AI fairness, AI-driven trust.
- Research Article
- 10.1080/09747338.2025.2561550
- Jul 3, 2025
- IETE Journal of Education
In the past five decades, the teaching of Engineering Mathematics has continually evolved. In the 1970s and 1980s, when the undergraduate degree was five years in duration, mathematics occupied a prime position in the engineering curriculum. When undergraduate programs were shortened to four years, the axe fell on mathematics courses. With the growth of jobs in the Information Technology sector and the wide availability of tools for automating engineering tasks, some managers question the very need for teaching engineering mathematics courses, stating that engineering jobs do not require this knowledge since powerful software tools and automated flows are used in practice. Presently, most universities include four courses on engineering mathematics. Some universities have customized the math courses for different departments, while others have integrated elements of engineering mathematics into departmental core courses with the intention of teaching the subject in a contextual setting. The availability of mathematical software such as MATLAB, Octave, and SCILAB has added a new dimension to engineering education; it has now become possible to make mathematics courses more interesting by including computational experiments. Artificial Intelligence (AI), Machine Learning (ML), and Data Science (DS) have the potential to take engineering math education to a new level. In this paper, our goal is to explore how the teaching of engineering mathematics is evolving in the era of AI, ML, and DS. We consider both the merits and demerits of using AI, ML, and DS tools in the math curriculum and provide some recommendations.