Abstract

Features in a deep neural network are only as robust as those present in the training data. This robustness concerns not only the types of features and how they apply to various classes, known or unknown, but also how those features apply across different octaves, or scales. Neural networks trained at a single octave have been shown to be invariant to other octaves, while networks trained on large, robust datasets operate optimally only at the octaves that resonate best with the learned features. Even so, features that were present in the data may be discarded. Not knowing which octave a trained neural network is best suited to can lead to sub-optimal results at prediction time due to poor preprocessing. Recent work has shown good results in quantifying how the learned features in a neural network apply to objects. In this work, we build on that feature-applicability work, using it to quantify which octaves the features in a trained neural network resonate best with.
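
One way to make the idea concrete: sweep a trained network over octave rescalings (powers of two) of an input and score its response at each scale. The sketch below is a minimal, hypothetical probe assuming PyTorch/torchvision; the resonance score (mean magnitude of the penultimate feature activations) and the ResNet-18 backbone are illustrative stand-ins, not the feature-applicability measure this work uses.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models


def octave_resonance(model, image, octaves=(-2, -1, 0, 1, 2)):
    """Rescale `image` by 2**k for each octave k and record a simple
    resonance proxy: the mean magnitude of the penultimate activations.
    (This proxy is an assumption for illustration, not the paper's metric.)
    """
    # Drop the classifier head so we read out penultimate features.
    backbone = torch.nn.Sequential(*list(model.children())[:-1])
    scores = {}
    with torch.no_grad():
        for k in octaves:
            resized = F.interpolate(image, scale_factor=2.0 ** k,
                                    mode="bilinear", align_corners=False)
            feats = backbone(resized)
            scores[k] = feats.abs().mean().item()
    return scores


# Example: a random stand-in image; in practice, probe real validation data.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 256, 256)
for k, s in sorted(octave_resonance(model, image).items()):
    print(f"octave 2^{k:+d}: mean activation {s:.4f}")
```

Under this kind of probe, the octave whose score peaks is the scale the learned features resonate with best, which in turn suggests how inputs should be resized during preprocessing.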
