Abstract

This paper proposes a hybrid neural architecture that boosts the efficacy of intelligent data analysis in smart sensor devices, which are typically resource-constrained and application-specific. The proposed concept integrates prior knowledge with learning from examples, allowing sensor devices to execute machine learning successfully on compact underlying hardware even when the volume of training data is highly limited. The architecture comprises two interacting functional modules arranged in a homogeneous, multiple-layer structure. The first module, referred to as the knowledge sub-network, implements knowledge in Conjunctive Normal Form through a three-layer structure composed of a novel type of learnable unit, called the L-neuron. The second module, referred to as the conventional neural sub-network, is a fully connected, conventional three-layer, feed-forward neural network. We show that the proposed hybrid structure successfully combines knowledge and learning, providing high recognition performance even for very limited training datasets, while also benefiting from an abundance of data, as purely neural structures do. In addition, since the proposed L-neurons can learn (through classical backpropagation), we show that the architecture is also capable of repairing its knowledge.
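
To make the idea concrete, the knowledge sub-network's clause units can be sketched with soft-logic semantics: a differentiable OR over gated literals per clause, and a soft AND (product t-norm) across clauses. This is a minimal illustrative sketch, not the paper's actual L-neuron formulation; the function names, the sigmoid polarity gating, and the choice of t-norm are all assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def soft_clause(inputs, weights):
    """Hypothetical L-neuron-like unit for one CNF clause.

    Each trainable weight gates the polarity of its literal: a large
    positive weight selects the input itself, a large negative weight
    selects its negation. The soft OR is 1 - prod(1 - literal_i).
    """
    literals = []
    for x, w in zip(inputs, weights):
        s = sigmoid(w)                     # gate: ~1 -> use x, ~0 -> use (1 - x)
        literals.append(s * x + (1.0 - s) * (1.0 - x))
    prod = 1.0
    for lit in literals:
        prod *= (1.0 - lit)
    return 1.0 - prod                      # soft disjunction (OR)

def soft_cnf(inputs, clause_weights):
    """Soft conjunction (product t-norm) of soft clauses: a CNF formula."""
    out = 1.0
    for w in clause_weights:
        out *= soft_clause(inputs, w)
    return out
```

With weights [5, 5] a clause behaves like A OR B: `soft_clause([1, 0], [5, 5])` is close to 1 and `soft_clause([0, 0], [5, 5])` is close to 0. Because every operation is differentiable, such units could in principle be trained with classical backpropagation, which is the property the abstract attributes to L-neurons.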

Highlights

  • In recent years, remarkable improvement has been shown in both the capabilities and efficiency of intelligent systems [1], yet state-of-the-art models continue to grow in size. Not only are intelligent systems capable of achieving state-of-the-art performance on multiple complex games, as shown by AlphaZero [2]; they are also capable of solving extremely complex real-world problems such as protein folding

  • Implementing large neural networks on resource-limited devices is infeasible, so if machine learning is to be considered as a problem-solving strategy for smart sensors, one needs to look for network complexity reduction concepts that preserve a sufficient capacity for handling real-world problems

  • It has been proven that every logical formula can be transformed into a set of clauses connected by conjunctions, which yields its Conjunctive Normal Form (CNF)
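
As an illustration of this transformation, the formula (A ∧ B) ∨ C is converted to CNF by distributing the disjunction over the conjunction, giving (A ∨ C) ∧ (B ∨ C). A minimal Python check of this equivalence (illustrative only, not from the paper):

```python
from itertools import product

# (A and B) or C  is logically equivalent to its CNF:
# (A or C) and (B or C)

def original(a, b, c):
    return (a and b) or c

def cnf(a, b, c):
    return (a or c) and (b or c)

# Exhaustive truth-table check over all 2^3 assignments
# confirms the two forms are logically equivalent.
assert all(original(*v) == cnf(*v) for v in product([False, True], repeat=3))
```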


Introduction

Remarkable improvement has been shown in both the capabilities and efficiency of intelligent systems [1], yet state-of-the-art models continue to grow in size. Not only are intelligent systems capable of achieving state-of-the-art performance on multiple complex games, as shown by AlphaZero [2]; they are also capable of solving extremely complex real-world problems such as protein folding, as demonstrated in the Critical Assessment of protein Structure Prediction (CASP) challenge [4], providing an invaluable tool for modern bioinformatics research. These performance improvements are achieved at the expense of increases in model size, as in the case of the GPT (Generative Pre-trained Transformer) family of models, which went from 1.5 billion parameters in 2019 [5] to 175 billion parameters in 2020 [6]. These large models, while still feasible to train thanks to algorithmic and technological advances, require ever-increasing amounts of input examples, which may be unavailable, especially when application-specific tasks, typical for smart sensors, are considered. Implementing large neural networks on resource-limited devices is infeasible, so if machine learning is to be considered as a problem-solving strategy for smart sensors, one needs to look for network complexity reduction concepts that preserve a sufficient capacity for handling real-world problems.
