Abstract

The inability to evaluate the performance of critical adaptive systems may have a catastrophic impact on individuals and society at large, given society's increasing dependence on these systems. However, no evaluation methodology currently exists that adequately assesses the safety and security of critical adaptive systems. This research therefore aims to develop an evaluation methodology capable of evaluating critical adaptive systems by reviewing their dynamic architectures in real time. The methodology relies on a tool and a catalogue of assurance case argument patterns, which support the automated construction and evaluation of assurance cases to determine the performance of critical adaptive systems. The methodology's capability to automatically evaluate an adaptive system is validated on an illustrative example using a tool prototype. The results show that the methodology, using the tool and the argument pattern catalogue, can automatically construct and review many, though not all, aspects of an assurance case. Consequently, even though the methodology is largely automated to enable runtime evaluation, several actions still require human interaction to ensure the safe and secure operation of critical adaptive systems.
