Purpose: Rules of Engagement (ROE) are rules or directives that define the circumstances, conditions, degree, and manner in which armed forces may apply force or take actions that could be considered provocative. ROE do not prescribe how results are to be achieved; rather, they indicate which judgments are unacceptable. With this in mind, the purpose of this study is to propose an Instructional Systems Design (ISD) configured to reflect ethics in AI's learning of ROE for future warfare.

Method: This study uses the development research method to develop and propose an ISD. ISD refers to the systematic design of guidelines broken down into smaller units of teaching or learning. The guidelines created for this ISD set out the composition and application of ROE, and the AI learns them through deep learning. The AI then makes decisions based on this learning in hypothetical dilemma situations in which the application of ROE is required. Finally, human experts review and supplement the neural network's learning results, and feeding these results back into the ISD progressively refines the AI's learning and application of ROE.

Results: This study finds that ROE are also essential for AI and for AI-equipped military robot systems. In this process, the AI performs the task of making judgments about applying ROE, the principles of action in specific situations. To do so, the AI's deep learning first collects the necessary information and makes decisions based on it. Next, the results of this learning are applied to new hypothetical dilemma situations. Finally, human experts continuously evaluate the results and provide feedback. This series of processes can be presented as a model of ISD oriented toward the moral development of AI.

Conclusion: The AI's learning of ROE converges on the learning of moral values and focuses on the cognitive aspect of morality. It should therefore be possible to refine the cognitive moral judgment of the deep-learning system by applying the learning hierarchy of the taxonomy of educational objectives and the logical test of the validity of moral judgments oriented toward social justice. The neural network's moral development can then be advanced by having human experts modify and complement its results and feeding those corrections back into the ISD.
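The cycle described in the Method and Results sections (learn the ROE guidelines, apply them in a hypothetical dilemma, have human experts review the outcome, and feed corrections back into the ISD) can be illustrated with a minimal sketch. This is not the study's implementation: all class and function names below are hypothetical, and the deep-learning stage is stood in for by a trivial rule lookup purely so the example stays self-contained and runnable.

```python
# Minimal sketch, assuming a simplified stand-in for the deep-learning step.
# All names here are hypothetical illustrations, not artifacts of the study.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RoeGuideline:
    """One ROE guideline: a situation paired with the judgment it rules out."""
    situation: str
    unacceptable_judgment: str

@dataclass
class DilemmaScenario:
    """A hypothetical dilemma in which application of the ROE is requested."""
    description: str

@dataclass
class TrainingCorpus:
    """The ISD's evolving body of guidelines the AI learns from."""
    guidelines: list = field(default_factory=list)

    def add(self, guideline: RoeGuideline) -> None:
        self.guidelines.append(guideline)

def train_model(corpus: TrainingCorpus):
    """Stand-in for the deep-learning step: fit a decision rule to the guidelines."""
    rules = {g.situation: g.unacceptable_judgment for g in corpus.guidelines}

    def decide(scenario: DilemmaScenario) -> str:
        # Apply the first guideline whose situation matches the scenario.
        for situation, forbidden in rules.items():
            if situation in scenario.description:
                return f"refrain: {forbidden}"
        return "no applicable ROE found; defer to human judgment"

    return decide

def expert_review(scenario: DilemmaScenario, decision: str) -> Optional[RoeGuideline]:
    """Human experts evaluate the decision and may supply a corrective guideline.

    Placeholder only: in the study this review comes from domain experts, not code.
    """
    if "defer" in decision:
        return RoeGuideline(scenario.description, "use of disproportionate force")
    return None

# One ISD feedback cycle, repeated: learn, apply, review, feed back.
corpus = TrainingCorpus()
corpus.add(RoeGuideline("civilians present",
                        "engaging without positive identification"))

for _ in range(2):  # repeated cycles progressively refine the learned ROE
    decide = train_model(corpus)
    scenario = DilemmaScenario("checkpoint incident, unclear intent")
    decision = decide(scenario)
    correction = expert_review(scenario, decision)
    if correction is not None:
        corpus.add(correction)  # expert feedback updates the ISD corpus
```

The design point the sketch tries to show is that the expert review is not a one-off evaluation but an input to the next training cycle, which is what the abstract means by feeding the results back into the ISD.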