Abstract

The Rectified Linear Unit (ReLU) is one of the key factors behind the success of deep learning models. It has been shown that deep networks can be trained efficiently with ReLU without pre-training. In this paper, we compare and analyze several ReLU variants in fully-connected deep neural networks. We test ReLU, LReLU, ELU, SELU, mReLU, and vReLU on two popular datasets: MNIST and Fashion-MNIST. We find that vReLU, a symmetric ReLU variant, shows promising results in most experiments. Fully-connected networks (FCNs) with vReLU activation achieve higher accuracy: relative to ReLU, vReLU reduces the test error rate by 39.9% on MNIST and by 6.3% on Fashion-MNIST.
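As a reference for the activations compared in the abstract, below is a minimal NumPy sketch. It uses the standard definitions of ReLU, LReLU, ELU, and SELU; the vReLU form is an assumption based on its description as a "symmetric ReLU variant" (a V-shaped absolute-value function), and mReLU is omitted because its definition is not given here.

```python
import numpy as np

# Hedged sketch of the activation functions named in the abstract.
# ReLU, LReLU, ELU, and SELU follow their standard definitions; the
# vReLU below is an *assumed* symmetric (V-shaped) variant, and mReLU
# is left out since the abstract does not define it.

def relu(x):
    return np.maximum(0.0, x)

def lrelu(x, alpha=0.01):
    # Leaky ReLU: small linear slope for negative inputs.
    return np.where(x >= 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Exponential Linear Unit: smooth saturation for negative inputs.
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # Scaled ELU with the self-normalizing constants.
    return scale * np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

def vrelu(x):
    # Assumed symmetric (V-shaped) variant: |x|, i.e. identity for
    # x >= 0 and negation for x < 0.
    return np.abs(x)

if __name__ == "__main__":
    x = np.linspace(-3.0, 3.0, 7)
    for name, fn in [("ReLU", relu), ("LReLU", lrelu), ("ELU", elu),
                     ("SELU", selu), ("vReLU", vrelu)]:
        print(f"{name:6s}", np.round(fn(x), 3))
```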
