Abstract
Within the dynamic realm of contemporary warfare, Artificial Intelligence (AI) has emerged as a transformative force reshaping the ways and means used to strategize, execute, and assess military operations. The use of AI spans functions and capabilities such as intelligence analysis, decision-making support for target engagement, weapon autonomy, and effects analytics. Concurrently, AI enhances, for example, the effectiveness of military plans and capabilities, with the potential to reduce risks to civilians, civilian objects, and military personnel. In this rapidly evolving arena, military Cyber Operations have gained unprecedented prominence due to their intrinsically digital and cross-domain nature and their speed, becoming a clear option for achieving military goals and a mature alternative to conventional operations. Nonetheless, they require continuous assessment, must cope with different types of uncertainty produced by characteristics such as anonymity, and can imply psychological impact. Hence, such military operations demand meticulous planning, sophisticated execution, and a deep understanding of their technical, military-legal, ethical, and strategic implications and consequences. This represents a direct call for building solutions that align the potential of AI with the responsible and safe conduct of military operations in the military cyber domain: building trustworthy AI-based military Cyber Operations. While incipient efforts exist in this domain to tackle important dimensions of such an approach, a direct and unified approach that brings them together as a single commitment and artefact is lacking. To address this knowledge gap, this research aims to bridge the above-mentioned dimensions by proposing a working definition and framework for building trustworthy AI-based military Cyber Operations, using the Design Science Research methodology.