Abstract

Incorporating prior knowledge, for example knowledge of a physical system, into neural network training allows the training to be tailored to specific problems. The literature shows that using prior knowledge in neural network training enhances predictive performance. However, research to date has focused on parametric optimization rather than structure optimization. We present a new framework for optimizing the structure of a neural network using prior knowledge. The number of hidden units is optimized via a line search and cross-validation on the empirical error, eliminating the dependence of prior-knowledge guided neural networks on a particular data-set/model-structure pairing. In addition to using prior knowledge in the model training step, we propose incorporating the prior errors into the cross-validation performance index to improve generalization. Results demonstrate that the proposed training framework enhances the model's prediction accuracy and prior-knowledge consistency on both convex data sets with a unique minimum and non-convex multi-modal data sets. These results yield a new understanding of physics-guided neural networks in terms of their structural and parametric optimization.
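The structure-selection idea described above, a line search over the number of hidden units scored by a cross-validation index that combines empirical error with a prior-knowledge penalty, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn's `MLPRegressor` as the base network, non-negativity of the target as the stand-in physical prior, and a hypothetical weight `lam` mixing the two error terms.

```python
# Hedged sketch of prior-knowledge guided structure selection.
# Assumptions (not from the paper): sklearn MLPRegressor base model,
# non-negativity as the physical prior, weight `lam` on the prior term.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(200, 1))
y = (X[:, 0] - 1.0) ** 2 + 0.05 * rng.normal(size=200)  # non-negative target

def cv_index(n_hidden, lam=1.0, k=3):
    """CV performance index: empirical MSE plus prior-violation penalty."""
    kf = KFold(n_splits=k, shuffle=True, random_state=0)
    scores = []
    for tr, va in kf.split(X):
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000,
                           random_state=0).fit(X[tr], y[tr])
        pred = net.predict(X[va])
        mse = np.mean((pred - y[va]) ** 2)
        # Prior error: squared magnitude of negative (physically invalid) outputs.
        prior_pen = np.mean(np.clip(-pred, 0.0, None) ** 2)
        scores.append(mse + lam * prior_pen)
    return float(np.mean(scores))

# Line search over candidate hidden-layer widths.
candidates = [2, 4, 8, 16]
best_width = min(candidates, key=cv_index)
```

Setting `lam = 0` recovers ordinary empirical-error cross-validation, so the weight directly controls how strongly prior consistency influences the chosen structure.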
