Abstract

This Special Issue presents recent research results in the fast-growing area of intelligent decision technologies (IDT) in dynamic environments. Intelligent decision making in dynamic environments differs from conventional decision making in several respects. First, owing to situational changes in the operational field, time is an inherent dimension of the decision-making process, and it affects decision making in several ways, including speed, the temporal relations between occurring events, and the order of decision-making actions. Second, multiple alternative decision-making strategies can be applied, depending on the assessment of the current situation; moreover, the ongoing decision-making process may be dynamically changed, modified, and optimized in response to unfolding external situations on the ground as well as to the system's internal resource situation. Third, IDT processes in dynamic environments should maintain the truth of the logical decision-making process and resolve conflicts.

IDT in dynamic environments requires the construction and continuous updating of a complex situational picture, which may involve a large number of entities inter-related by structural, spatial, temporal, causal, and other domain-specific relations. A variety of applications benefit from, and provide a sounding board for, these technologies, including the management of natural and human-caused disasters, tactical warfare, intelligent transportation systems, physical and cyber-security management, and others.

One of the major problems of intelligent decision making in such dynamic environments is the gap between the 'raw' sensor data and the information used by decision makers. The input to the information-gathering process is therefore defined by the available sensors, and the desired output is defined by the precondition constraints of standard operating procedures. The first paper of this issue, by Gerhard Wickler and Stephen Potter, addresses the information-gathering process as a three-phase procedure that decomposes the overall problem into phases requiring different types of knowledge and information-processing capabilities. The first phase, data validation, aims to remove incorrect information from the input data, thereby creating a consistent view of the current situation. The second phase, data abstraction and aggregation, applies mathematical models to reduce the amount of data, remove noise from the data, and derive features that are closer to the terminology of the user. The third phase, information interpretation, uses a belief-revision and rule-based approach to make the information actionable for the decision maker.

One of the critical tasks of securing the effective and purposeful behaviour of large-scale distributed systems
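
To make the three-phase structure described above concrete, the following Python sketch shows one way such an information-gathering pipeline could be organized. It is an illustrative assumption only: the function names, the rule representation, and the data values are hypothetical and are not drawn from Wickler and Potter's implementation.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    sensor_id: str
    value: float

def validate(readings, lower, upper):
    """Phase 1 (data validation): drop readings outside plausible bounds
    to obtain a consistent view of the current situation."""
    return [r for r in readings if lower <= r.value <= upper]

def abstract_and_aggregate(readings):
    """Phase 2 (abstraction and aggregation): reduce data volume and noise,
    here by averaging per sensor, yielding features closer to the user's terms."""
    by_sensor = {}
    for r in readings:
        by_sensor.setdefault(r.sensor_id, []).append(r.value)
    return {sid: mean(vals) for sid, vals in by_sensor.items()}

def interpret(features, rules):
    """Phase 3 (information interpretation): apply simple rules to turn
    features into statements that are actionable for the decision maker."""
    return [(name, conclusion) for name, condition, conclusion in rules
            if condition(features)]

if __name__ == "__main__":
    # Hypothetical temperature readings; 999.0 is an implausible outlier.
    raw = [Reading("t1", 21.5), Reading("t1", 999.0), Reading("t2", 48.2)]
    valid = validate(raw, lower=-40.0, upper=85.0)
    features = abstract_and_aggregate(valid)
    rules = [("overheat", lambda f: f.get("t2", 0.0) > 45.0,
              "Cooling required in zone 2")]
    print(interpret(features, rules))
```

Each phase consumes the output of the previous one, so the decision maker only ever sees validated, aggregated, and rule-interpreted information rather than the raw sensor stream.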
