Abstract

The choice of activation function is important in Convolutional Neural Networks (CNNs). The Rectified Linear Unit (ReLU) has been the default choice in most CNNs. Recently, a series of non-monotonic activation functions has gradually become a new standard for improving CNN performance. Inspired by them, this paper first proposes a novel non-monotonic activation function called the Power Function Linear Unit (PFLU). The negative part of PFLU is non-monotonic and approaches zero as the negative input decreases, which maintains sparsity of the negative part while introducing negative activation values and non-zero derivatives there. The positive part of PFLU is not an identity mapping but approaches the identity mapping as the positive input increases, which introduces non-linearity into the positive part. This paper then proposes a faster variant, FPFLU. A wide range of classification experiments shows that PFLU tends to outperform current state-of-the-art non-monotonic activation functions, and that FPFLU runs faster than most non-monotonic activation functions.
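The abstract does not state a closed-form expression for PFLU, but the behaviour it describes can be sketched concretely. The short PyTorch example below uses the assumed form f(x) = x · (1 + x/√(1 + x²)) / 2, chosen only because it matches the stated properties (a non-monotonic negative part that decays toward zero and a positive part that approaches the identity); the class name PFLULike and the formula itself are illustrative assumptions, not definitions quoted from the paper.

```python
# Minimal sketch of a PFLU-style activation, assuming the form
# f(x) = x * (1 + x / sqrt(1 + x^2)) / 2 (not quoted from the abstract):
#   - for large positive x the gating factor tends to 1, so f(x) approaches x;
#   - for large negative x the gating factor tends to 0, so f(x) approaches 0
#     from below, giving a non-monotonic negative part with non-zero gradients.
import torch
import torch.nn as nn


class PFLULike(nn.Module):
    """Assumed PFLU-style activation: x * (1 + x / sqrt(1 + x^2)) / 2."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * (1.0 + x / torch.sqrt(1.0 + x * x)) / 2.0


if __name__ == "__main__":
    act = PFLULike()
    x = torch.tensor([-5.0, -1.0, 0.0, 1.0, 5.0])
    # Small negative outputs for negative inputs; near-identity for large positive inputs.
    print(act(x))
```

As a smooth, self-gated function, a module like this could be dropped into a CNN in place of ReLU (e.g. after each convolution), which is the usage pattern the abstract's experiments imply.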
