Abstract

In machine learning, obtaining a good model requires training the network on a large data set. This is often a long process, and any change to the input data set requires re-training the entire network. In particular, extending an existing model with new output (decision) classes is problematic, because it forces a complete re-training on all of the data. To improve this process, a new neural network architecture is proposed that allows an existing model to be easily extended with new classes without re-training the entire network; moreover, the time needed to train the added sub-model is much shorter than the time needed to re-train the whole network. The presented architecture is designed for data with at least two decision classes.
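The abstract does not specify how the extension mechanism is realized; the sketch below is only one illustrative way to obtain the stated property, assuming a shared feature extractor that stays frozen and an independent sub-model (binary head) per decision class, so that adding a class means training only the new head. All names (`ExtensibleClassifier`, `add_class`, `feature_dim`) are hypothetical and not taken from the paper.

```python
# Hypothetical sketch, not the paper's architecture: a frozen shared backbone
# plus one small binary sub-model per class. Adding a class trains only the
# new sub-model; the backbone and existing heads remain untouched.
import torch
import torch.nn as nn


class ExtensibleClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feature_dim: int):
        super().__init__()
        self.backbone = backbone      # shared feature extractor, frozen after initial training
        self.heads = nn.ModuleList()  # one binary head per decision class
        self.feature_dim = feature_dim

    def add_class(self) -> nn.Module:
        """Attach a new sub-model (binary head) for an additional class."""
        head = nn.Sequential(
            nn.Linear(self.feature_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )
        self.heads.append(head)
        return head  # only this head's parameters are optimized

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():         # keep shared features fixed
            feats = self.backbone(x)
        # One score per class; argmax over heads gives the predicted class.
        return torch.cat([head(feats) for head in self.heads], dim=1)
```

Under this assumed design, extending the model amounts to `new_head = model.add_class()` followed by an optimizer built only over `new_head.parameters()`, which is why the sub-model training is much cheaper than re-training the full network.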
