Abstract

Situation-awareness-based decision-making (SABDM) models, constructed with cognitive maps and goal-directed task analysis techniques, have been used successfully in decision support systems for safety-critical and mission-critical environments such as air traffic control and electrical energy distribution. Reinforcement learning (RL) and other machine learning techniques can automate the adjustment of the situational-awareness mental model's parameters, reducing expert effort in initial configuration and long-term maintenance while preserving the mental model's structure and the SABDM model's cognitive and explainability characteristics. Real-world models must evolve to cope with changing environmental conditions. This study evaluates reinforcement learning as an online adaptive technique for adjusting the situational-awareness mental model's parameters under evolving conditions. We conducted experiments on real-world public datasets, comparing the performance of the SABDM model with RL-based adaptation (SABDM/RL) against other adaptive machine learning methods under distinct concept-drift conditions, and measured both overall and dynamic performance to assess how well each technique adapts to evolving environments. The experiments show that SABDM/RL, supported by concept-drift detection techniques, performs comparably to modern online adaptive machine learning classifiers while retaining the mental-model strengths of situation-awareness-based systems.
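To make the core idea concrete, the toy sketch below illustrates RL-driven parameter adjustment under concept drift: a minimal epsilon-greedy agent tunes a single "mental model" parameter (a decision threshold) on a synthetic stream whose true class boundary shifts abruptly midway. This is an illustration only, not the paper's actual SABDM/RL method; all names, values, and the reward scheme are assumptions made for the example.

```python
import random

random.seed(42)

def make_stream(n, boundary):
    """Synthetic binary stream: feature x in [0, 1], label 1 iff x > boundary."""
    for _ in range(n):
        x = random.random()
        yield x, int(x > boundary)

class ThresholdAgent:
    """Toy epsilon-greedy agent that adjusts one model parameter
    (a classification threshold) from a reward signal. Hypothetical
    stand-in for RL-based mental-model parameter adaptation."""

    def __init__(self, threshold=0.5, step=0.05, epsilon=0.1):
        self.threshold = threshold
        self.step = step
        self.epsilon = epsilon
        # action-value estimates for the three adjustments: down, stay, up
        self.q = {-1: 0.0, 0: 0.0, 1: 0.0}

    def predict(self, x):
        return int(x > self.threshold)

    def update(self, x, y):
        # epsilon-greedy choice among the parameter-adjustment actions
        if random.random() < self.epsilon:
            action = random.choice([-1, 0, 1])
        else:
            action = max(self.q, key=self.q.get)
        self.threshold = min(1.0, max(0.0, self.threshold + action * self.step))
        # reward: +1 for a correct prediction after the adjustment, -1 otherwise
        reward = 1.0 if self.predict(x) == y else -1.0
        self.q[action] += 0.1 * (reward - self.q[action])

# phase 1: true boundary at 0.3; phase 2 (abrupt concept drift): boundary at 0.7
stream = list(make_stream(2000, 0.3)) + list(make_stream(2000, 0.7))
agent = ThresholdAgent()
correct = 0
for x, y in stream:
    correct += int(agent.predict(x) == y)
    agent.update(x, y)
accuracy = correct / len(stream)
```

The agent never changes the model's structure (it remains a single interpretable threshold); only the parameter value is adapted online, which mirrors the abstract's point that RL adjusts parameters without affecting the mental model's structure or explainability.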
