Abstract. Self-adaptation equips a software system with a feedback loop that resolves uncertainties during operation and adapts the system when necessary to deal with them. Most self-adaptation approaches today use decision-making mechanisms that select for execution the adaptation option with the best estimated benefit, expressed in terms of a set of adaptation goals. A few approaches also consider the estimated (one-off) cost of executing the candidate adaptation options. We argue that, besides benefit and cost, decision-making in self-adaptive systems should also consider the estimated risk to which the system or its users would be exposed if an adaptation option were selected for execution. Balancing all three concerns when evaluating the options for adaptation to mitigate uncertainty is essential for satisfying stakeholders’ concerns and for ensuring the safety and public acceptance of self-adaptive systems. In this paper, we present a reference model for decision-making in self-adaptation that treats the estimated benefit, cost, and risk as core concerns of each adaptation option. Leveraging this model, we then present an ISO/IEC/IEEE 42010-compatible architectural viewpoint that aims to support software architects in designing robust decision-making mechanisms for self-adaptive systems. We demonstrate the applicability, usefulness, and understandability of the viewpoint through a case study in which participants with experience in the engineering of self-adaptive systems performed a set of design tasks in DeltaIoT, an Internet-of-Things exemplar for research on self-adaptive systems.