Abstract

Recent trends such as the Internet of Things and pervasive computing call for novel engineering approaches able to support the specification and scalable runtime execution of adaptive behaviour for large collections of interacting devices. Aggregate Computing is one such approach, formally grounded in the field calculus, which enables programming device aggregates from a global standpoint, through the functional composition of self-organisation patterns that is automatically turned into repeated local computations and gossip-like interactions. However, the logically decentralised and open nature of such algorithms and systems presumes fundamental cooperation among the devices involved: a faulty device or a targeted attack may significantly compromise the outcome of the computation and hence the algorithms built on top of it. For this reason, in this paper, we take the first steps towards attack-resistant aggregate computations. We propose trust as a framework to detect, weigh, or isolate voluntary and involuntary misbehaviours, with the goal of mitigating their influence on the overall computation. On top of this, we consider recommendations, which improve reactivity and stability by sharing individual trust perceptions among devices. To better understand the fragility of aggregate systems in the face of attacks, and to investigate the extent of the mitigation afforded by the adoption of trust mechanisms, we consider the paradigmatic case of the gradient algorithm. Experiments are carried out to analyse the sensitivity of the adopted trust framework to malevolent actions and to study the impact of different factors on the error incurred by trust-based gradients under attack. Finally, a case study of the spatial channel algorithm shows how the protection afforded by attack-resistant gradients can be effectively propagated to higher-level building blocks.

Highlights

• We consider the problem of security in the context of Aggregate Computing.
• We propose trust as a framework to build attack-resistant aggregate computations.
• We analyse the proposed approach for the case of the self-healing gradient algorithm.
• We describe a field-calculus implementation of trust and recommendations.
• We show that protection can be extended downstream to higher-level building blocks.
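The sketch below illustrates the core idea summarised in the abstract: a self-healing gradient where each device ignores contributions from neighbours it does not trust. It is a minimal, hypothetical example, not the paper's field-calculus implementation; the graph, the lying device "mal", the trust values, and the threshold are assumptions made only for illustration. In the paper, trust is estimated at runtime from direct observations and recommendations, whereas here it is taken as a given input.

```python
# Illustrative sketch only (not the paper's code): a synchronous self-healing
# gradient in which contributions from distrusted neighbours are isolated.
# Topology, the compromised device "mal", and trust values are hypothetical.

import math

# Undirected neighbourhood graph: node -> {neighbour: link distance}
NEIGHBOURS = {
    "src": {"a": 1.0},
    "a":   {"src": 1.0, "b": 1.0, "mal": 1.0},
    "b":   {"a": 1.0, "mal": 1.0},
    "mal": {"a": 1.0, "b": 1.0},           # compromised device
}
SOURCES = {"src"}
TRUST_THRESHOLD = 0.5                       # neighbours below this are ignored

def advertised(node, estimates):
    """Value a device gossips to its neighbours; 'mal' lies and always claims 0."""
    return 0.0 if node == "mal" else estimates[node]

def gradient_round(estimates, trust):
    """One round: each device keeps the best estimate among trusted neighbours."""
    new = {}
    for node, nbrs in NEIGHBOURS.items():
        if node in SOURCES:
            new[node] = 0.0
            continue
        candidates = [
            advertised(nbr, estimates) + dist
            for nbr, dist in nbrs.items()
            if trust[node][nbr] >= TRUST_THRESHOLD   # isolate distrusted neighbours
        ]
        new[node] = min(candidates, default=math.inf)
    return new

def run(trust, rounds=10):
    estimates = {n: (0.0 if n in SOURCES else math.inf) for n in NEIGHBOURS}
    for _ in range(rounds):
        estimates = gradient_round(estimates, trust)
    return estimates

# Without trust, 'b' is dragged to distance 1.0 by the lying neighbour.
full_trust = {n: {nbr: 1.0 for nbr in nbrs} for n, nbrs in NEIGHBOURS.items()}
print(run(full_trust))

# With low trust assigned to 'mal', the correct distances (a=1.0, b=2.0) are recovered.
aware_trust = {n: {nbr: (0.1 if nbr == "mal" else 1.0) for nbr in nbrs}
               for n, nbrs in NEIGHBOURS.items()}
print(run(aware_trust))
```

The sketch only covers the "isolate" strategy; pondering (weighting) contributions by trust, and updating trust from observations and recommendations, are further mechanisms discussed in the full paper.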
