Abstract
Anomaly detection predicated upon multiple distributed hybrid sensors frequently uses hybrid approaches, integrating techniques derived from statistical analysis, probability, data mining, machine learning, deep learning, and signal denoising. Many of these methods are based on the analysis of irregularities, data continuity, correlation, and data consistency, aiming to discern anomalous patterns from normal behavior. By leveraging these techniques, information fusion aims to enhance situational awareness, detect potential threats or abnormalities, and improve decision-making processes in complex environments. It addresses uncertainties by integrating data from diverse sources, thereby enhancing performance and reducing dependency on individual sensors. This study examines applications based on single and multiple sensor data, revealing common strategies, identifying strengths and weaknesses, and outlining potential solutions for detecting and diagnosing anomalies by analyzing small, large, and complex data derived from homogeneous or heterogeneous systems. Information fusion techniques are evaluated for their performance across various levels of algorithmic complexity. This in-depth bibliographic study involved searching top indexing databases such as Web of Science and Scopus. The inclusion criteria were articles published between 2012 and 2024. The search used the following keywords: “sensor malfunction,” “sensor anomaly,” “sensor failure,” “sensor fusion,” and “anomaly data mining.” Publications that did not strictly focus on analytical processing for anomaly detection, diagnosis, and prognosis in sensor data were excluded. In conclusion, the practice of information fusion promotes transparency by elucidating the process of combining information, thereby enabling the inclusion of a multitude of perspectives and aligning with established best practices in the field. Data deviation remains the primary criterion for detecting anomalies, relying mostly on deep learning and, to a large extent, hybrid techniques. Nevertheless, state-of-the-art algorithms based on neural networks still require further contextual interpretation and analysis. Breaches of functional safety and of the safety of the intended functionality can lead to decision-making errors, physical harm, and erosion of trust in autonomous systems. This risk stems from the lack of interpretability of AI approaches, which makes it challenging to predict and understand the system's behavior under various conditions.