Abstract
Neural networks have been shown to be extremely effective tools for machine learning in broad contexts, including natural language processing, image processing, and adversarial game play such as chess and Go. Despite their success in a wide variety of contexts, the design of a neural network (which is inescapably tied to its performance) is often a matter of iterated re-engineering to find an architecture that performs best on a small set of predetermined metrics. As a result, the parsimony of such systems remains vague at best. Over the past few years it has become apparent that the mathematics of topology can be used to understand some theoretical fundamentals of neural networks. Roughly speaking, topology is the field of mathematics that studies spaces considered to be "the same" up to continuous stretching (i.e., homeomorphism); for this reason it is known as "rubber sheet geometry." The main reason topology provides tools for studying neural networks is that a deep neural network can be viewed as an iterated sequence of continuous transformations between spaces. In this work we develop an algorithm for automatically generating a trainable deep neural network from the geometric and topological properties of the training data. Our approach first finds a dense ellipsoidal covering of the training data set that is consistent with the classification information. We then find an (approximately) minimum sub-cover that models the classification information. A neural network is constructed that approximates the structure of the minimum sub-cover and encodes logical statements representative of the data geometry. We show empirically that after training, the neural network retains the imprinted geometric information, making each module of the network geometrically interpretable. Theoretical and experimental results characterize this approach across a variety of data sets.
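Selecting the minimum sub-cover described above is an instance of the minimum set-cover problem, which is NP-hard but admits a standard greedy approximation (repeatedly pick the set covering the most uncovered points). As an illustrative sketch only, with axis-aligned ellipsoids and toy data that are assumptions of this example rather than the paper's actual implementation, the greedy sub-cover step could look like:

```python
import numpy as np

def in_ellipsoid(x, center, radii):
    """Test whether point x lies inside an axis-aligned ellipsoid."""
    return np.sum(((x - center) / radii) ** 2) <= 1.0

def greedy_subcover(points, ellipsoids):
    """Greedily choose ellipsoids until every point is covered.

    `ellipsoids` is a list of (center, radii) pairs assumed to jointly
    cover `points`; the greedy rule yields a logarithmic-factor
    approximation to the minimum sub-cover.
    """
    covered = np.zeros(len(points), dtype=bool)
    chosen = []
    while not covered.all():
        best, best_gain = None, -1
        for i, (c, r) in enumerate(ellipsoids):
            if i in chosen:
                continue
            # number of still-uncovered points this ellipsoid would cover
            gain = sum(1 for j, p in enumerate(points)
                       if not covered[j] and in_ellipsoid(p, c, r))
            if gain > best_gain:
                best, best_gain = i, gain
        if best_gain <= 0:
            break  # remaining points lie outside every candidate ellipsoid
        chosen.append(best)
        for j, p in enumerate(points):
            if in_ellipsoid(p, *ellipsoids[best]):
                covered[j] = True
    return chosen

# toy example: 1-D points covered by three "ellipsoids" (intervals);
# the third interval alone covers everything, so greedy picks only it
pts = np.array([[0.0], [1.0], [2.0], [3.0]])
ells = [(np.array([0.5]), np.array([0.6])),   # covers points 0, 1
        (np.array([2.5]), np.array([0.6])),   # covers points 2, 3
        (np.array([1.5]), np.array([3.0]))]   # covers all four
print(greedy_subcover(pts, ells))  # → [2]
```

The greedy choice does not guarantee a true minimum, but it is the usual practical surrogate when an exact minimum cover is intractable.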
In addition, we show how the proposed approach, when combined with methods from topological data analysis (specifically homology), can be used to quantify the likelihood that any neural network classifier will perform well on a given binary classification data set before any network engineering takes place. All results are illustrated on multiple data sets.
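In the simplest simplicial setting, the homology computation underlying such an estimate reduces to ranks of boundary matrices over the field GF(2). The following sketch (function names and the toy complex are illustrative assumptions, not the paper's code) computes the Betti numbers b0 and b1, which count connected components and one-dimensional loops:

```python
import numpy as np
from itertools import combinations

def boundary_matrix(k_simplices, km1_simplices):
    """Boundary operator from k-simplices to (k-1)-simplices over GF(2)."""
    idx = {s: i for i, s in enumerate(km1_simplices)}
    D = np.zeros((len(km1_simplices), len(k_simplices)), dtype=np.uint8)
    for j, s in enumerate(k_simplices):
        for face in combinations(s, len(s) - 1):  # each codim-1 face
            D[idx[face], j] = 1
    return D

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination with XOR."""
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def betti_numbers(vertices, edges, triangles):
    """b0 (components) and b1 (loops) of a 2-dimensional simplicial complex."""
    d1 = boundary_matrix(edges, vertices)
    d2 = (boundary_matrix(triangles, edges) if triangles
          else np.zeros((len(edges), 0), dtype=np.uint8))
    r1, r2 = rank_gf2(d1), rank_gf2(d2)
    b0 = len(vertices) - r1          # dim ker(d0) - rank(d1)
    b1 = len(edges) - r1 - r2        # dim ker(d1) - rank(d2)
    return b0, b1

# hollow triangle (a topological circle): one component, one loop
verts = [(0,), (1,), (2,)]
edges = [(0, 1), (0, 2), (1, 2)]
print(betti_numbers(verts, edges, []))  # → (1, 1)
```

A nontrivial b1 on (a complex built from) one class of a binary data set signals loops that a classifier's decision regions must accommodate, which is the kind of topological obstruction the abstract's difficulty measure is meant to detect.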