Abstract

We propose two novel activation functions for time series anomaly detection, both of which are able to reduce the validation loss. The approach builds on an existing activation function from Deep Learning, a field that has been studied intensively in the search for the most suitable activation for a neural network. To evaluate the two functions, we introduced them into an LSTM (Long Short-Term Memory) Autoencoder architecture and observed the network's behavior. The key point of our proposal is a learnable parameter, which gives the network more flexibility in its weight updates; this makes it more powerful than a predefined parameter, whose fixed value imposes a constraint. We compared our proposal against popular activation functions such as ReLU (Rectified Linear Unit), the hyperbolic tangent (tanh), and the TaLU activation function. A further novelty of this paper is that it takes the piecewise behavior of an activation function into account in order to improve the performance of a neural network in Deep Learning.
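The abstract does not give the exact form of the proposed functions, but the core idea of a learnable activation parameter can be sketched with a hypothetical PReLU-style piecewise activation: the negative-side slope `a` is updated by gradient descent along with the weights, rather than being fixed in advance.

```python
import math

# Hypothetical piecewise activation with a learnable parameter `a`
# (illustrative only; not the paper's actual functions):
#   f(x) = x       if x > 0
#   f(x) = a * x   otherwise
def forward(xs, a):
    return [x if x > 0 else a * x for x in xs]

def grad_a(xs, upstream):
    # dL/da accumulates only over negative-side inputs,
    # weighted by the upstream gradient dL/dy.
    return sum((0.0 if x > 0 else x) * g for x, g in zip(xs, upstream))

# Toy gradient descent: `a` adapts to fit targets that a fixed slope
# (e.g. ReLU's hard zero) could not match.
xs = [-2.0, -1.0, 0.5, 3.0]
targets = [-1.0, -0.5, 0.5, 3.0]  # reachable with a = 0.5
a, lr = 0.1, 0.05
for _ in range(200):
    ys = forward(xs, a)
    # d(MSE)/dy for each output
    upstream = [2 * (y - t) / len(xs) for y, t in zip(ys, targets)]
    a -= lr * grad_a(xs, upstream)

print(round(a, 3))  # converges toward 0.5
```

This is the sense in which a learnable parameter avoids the constraint of a predefined one: the slope is fitted by the loss rather than chosen beforehand.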
