Abstract

Convolutional Neural Networks (CNNs) have recently proven highly effective across many areas of computer vision, especially facial recognition, where most CNN models achieve near-perfect performance on standard benchmarks. A major drawback of these architectures is that they require large amounts of processing power and memory to deliver this performance. This paper compares CNNs with a recent advancement in deep learning, neural ordinary differential equations, in which the forward pass of the network is treated as the solution of a differential equation. We show that this approach can achieve performance comparable to a residual network with far fewer parameters, and that models built on ODENet (a neural network parameterised by an ordinary differential equation) require considerably less memory and training time than comparable CNNs. To benchmark the approach, both types of model are evaluated on the standard face datasets Faces95 and Faces96.
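
The sketch below is a minimal, illustrative PyTorch implementation of the idea described above: a residual stack is replaced by a block that integrates a learned derivative dz/dt = f(z, t; θ) from t = 0 to t = 1. All class names, layer sizes, and the fixed-step Euler solver are assumptions for illustration only; the ODENet evaluated in the paper would more typically rely on an adaptive solver with the adjoint method (e.g. via the torchdiffeq library of Chen et al.).

```python
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """Defines the state derivative dz/dt = f(z, t; theta) as a small conv block."""

    def __init__(self, channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, t, z):
        return self.net(z)


class ODEBlock(nn.Module):
    """Replaces a stack of residual blocks by integrating dz/dt = f(z, t)
    from t = 0 to t = 1 with a fixed-step Euler solver (an adaptive solver
    with the adjoint method would be used in a full ODENet)."""

    def __init__(self, func: nn.Module, steps: int = 10):
        super().__init__()
        self.func = func
        self.steps = steps

    def forward(self, z):
        dt = 1.0 / self.steps
        t = torch.tensor(0.0)
        for _ in range(self.steps):
            # Euler update: z_{k+1} = z_k + dt * f(z_k, t_k)
            z = z + dt * self.func(t, z)
            t = t + dt
        return z


class ODENetClassifier(nn.Module):
    """Hypothetical classifier head for a face-identification dataset."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1)
        self.ode = ODEBlock(ODEFunc(64))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        return self.head(self.ode(self.stem(x)))


if __name__ == "__main__":
    model = ODENetClassifier(num_classes=72)  # e.g. 72 identity classes
    x = torch.randn(4, 3, 64, 64)             # batch of face crops (size illustrative)
    print(model(x).shape)                      # torch.Size([4, 72])
```

Note that the ODE function f is applied repeatedly by the solver but its parameters are shared across all integration steps, which is why such a model can match a deep residual network while holding far fewer parameters.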
