Abstract

One of the most interesting applications of data analysis in industry is the real-time detection of anomalies during production. The Industrial IoT paradigm provides all the components needed to build predictive systems such as anomaly detectors. Here, the goal is to discover patterns in a given dataset that do not resemble the “normal” behavior, in order to identify faults, malfunctions, or the effects of poor maintenance. Complex neural networks implementing deep learning algorithms are commonly used for anomaly detection. Where to place the deep learning algorithm is one of the main problems: such algorithms require both high computational power and high data-transfer bandwidth, raising serious questions about system scalability. Processing data in the edge domain (i.e., close to the machine) usually reduces data transfer but requires expensive physical assets to be deployed. Cloud computing is usually cheaper, but transferring data to the Cloud is expensive. In this paper, a test methodology for comparing the two architectures for anomaly detection systems is proposed. A real use case is described to demonstrate its feasibility. The experimental results show that, by means of the proposed methodology, edge and Cloud solutions implementing deep learning algorithms for industrial applications can be easily evaluated. In detail, for the considered use case (with a Siemens controller and the Microsoft Azure platform), the trade-off between scalability, communication delay, and bandwidth usage has been studied. The results show that the full-cloud architecture can outperform the edge-cloud architecture when Cloud computation power is scaled up.
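To illustrate the kind of workload being placed either at the edge or in the Cloud, the sketch below shows a common deep-learning approach to anomaly detection: a dense autoencoder trained on “normal” sensor data, with samples flagged as anomalous when their reconstruction error exceeds a threshold learned from normal data. This is only an assumed, minimal example; the paper does not specify the network architecture, feature set, or threshold used in the actual use case.

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Small dense autoencoder; the latent size is an illustrative choice."""
    def __init__(self, n_features: int, latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, normal_data, epochs=50, lr=1e-3):
    # Train the autoencoder to reconstruct normal samples only.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(normal_data), normal_data)
        loss.backward()
        opt.step()
    return model

def fit_threshold(model, normal_data, quantile=0.99):
    # Threshold = high quantile of reconstruction error on normal data.
    with torch.no_grad():
        err = ((model(normal_data) - normal_data) ** 2).mean(dim=1)
    return torch.quantile(err, quantile).item()

def is_anomalous(model, sample, threshold):
    # Flag samples whose reconstruction error exceeds the learned threshold.
    with torch.no_grad():
        err = ((model(sample) - sample) ** 2).mean(dim=1)
    return err > threshold

In an edge deployment, inference like is_anomalous would run on hardware close to the machine and only alarms would be transmitted; in a full-cloud deployment, the raw sensor data would be streamed to the Cloud, where the same model runs on scalable compute. The methodology proposed in the paper compares these two placements in terms of scalability, communication delay, and bandwidth usage.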

