Abstract

Deep neural networks, which typically use a fixed activation function at each neuron, have achieved breakthrough performance on many tasks. A fixed activation function, however, is not the optimal choice for every data distribution. To address this, this work improves deep neural networks with a novel and efficient activation scheme called "Mutual Activation" (MAC), in which a non-static activation function is learned adaptively during training. Furthermore, the proposed activation neuron, combined with maxout, acts as a potent higher-order function approximator that breaks through the convexity limitation of maxout alone (a maximum over affine pieces can represent only convex piecewise-linear functions). Experimental results on object recognition benchmarks demonstrate the effectiveness of the proposed activation scheme.
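The abstract does not describe MAC's exact formulation. As a rough sketch of the general idea only, the following hypothetical PyTorch code pairs a learnable (non-static) activation with a standard maxout unit (Goodfellow et al., 2013); the class names, the mixture parameterization, and the per-channel layout are all assumptions, not the paper's method.

```python
import torch
import torch.nn as nn


class LearnableActivation(nn.Module):
    """Hypothetical adaptive activation: a per-channel learnable blend of
    basis functions whose coefficients are trained jointly with the network
    (the paper's actual MAC formulation is not given in the abstract)."""

    def __init__(self, num_channels: int):
        super().__init__()
        # Mixing coefficients, one pair per channel, updated by backprop.
        self.alpha = nn.Parameter(torch.ones(num_channels))
        self.beta = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumes conv-style input of shape (batch, channels, height, width).
        a = self.alpha.view(1, -1, 1, 1)
        b = self.beta.view(1, -1, 1, 1)
        # Blend a ReLU branch and a linear branch with learned weights,
        # so the effective nonlinearity changes over the course of training.
        return a * torch.relu(x) + b * x


class MaxoutUnit(nn.Module):
    """Standard maxout: elementwise maximum over k affine pieces."""

    def __init__(self, in_features: int, out_features: int, k: int = 2):
        super().__init__()
        self.k = k
        self.linear = nn.Linear(in_features, out_features * k)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.linear(x)                    # (batch, out_features * k)
        z = z.view(x.size(0), -1, self.k)     # (batch, out_features, k)
        return z.max(dim=-1).values           # max over the k pieces
```

Because maxout takes a maximum of affine pieces, on its own it can realize only convex piecewise-linear functions of its input; composing it with a learned nonlinearity, as sketched above, is one way such a combination could escape that constraint.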
