Abstract

Slow feature analysis (SFA) is an unsupervised learning method that extracts latent variables from a time-series dataset based on the principle of temporal slowness. Neural networks, owing to their ability to model complex nonlinearities, can extract slow features (SFs) from data obtained from a complicated process. Siamese neural networks are well suited to this task: they process two samples at a time, and this pairing naturally accommodates the temporal differences on which slowness is defined. For supervised learning applications, the extracted SFs should also help predict the outputs. In this article, we present two approaches that extract SFs using Siamese neural networks. Output relevance is brought into feature extraction as a regularization term in the objective function of the Siamese neural network, and this regularization improves the predictive performance of the model. The proposed approaches are implemented on three datasets to demonstrate their effectiveness.
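The combined objective described above can be illustrated with a minimal sketch. The example below is hypothetical and not from the paper: a linear feature extractor with shared weights `W` stands in for the twin subnetworks of a Siamese network, the slowness loss penalizes squared differences of features on consecutive sample pairs, and an output-relevance term (weighted by an assumed hyperparameter `lam`) regularizes the features toward predicting `y`. A unit-norm constraint on `W` rules out the trivial constant feature.

```python
import numpy as np

# Hypothetical sketch: linear "Siamese" feature extraction with a slowness
# loss plus output-relevance regularization. All names and hyperparameters
# (W, v, lam, lr) are illustrative assumptions, not the paper's notation.

rng = np.random.default_rng(0)

# Toy time series: a slowly varying latent mixed with fast noise; the
# output y depends only on the slow latent.
T = 500
t = np.arange(T)
slow = np.sin(2 * np.pi * t / T)                    # slow latent variable
X = np.column_stack([slow + 0.05 * rng.standard_normal(T),
                     rng.standard_normal(T)])       # observed inputs (T, 2)
y = 2.0 * slow                                      # output relevant to the SF

W = rng.standard_normal((2, 1)) * 0.1   # shared encoder weights (both twins)
v = rng.standard_normal(1) * 0.1        # readout used by the relevance term
lam = 0.5                               # weight of the regularization term
lr = 0.05

for _ in range(500):
    s = X @ W                       # features for all samples, shape (T, 1)
    ds = s[1:] - s[:-1]             # Siamese pairing: consecutive samples
    pred = s[:, 0] * v[0]
    # Loss = mean(ds**2) + lam * mean((pred - y)**2); gradients by hand.
    dL_ds = np.zeros_like(s)
    dL_ds[1:] += 2 * ds / (T - 1)   # slowness term, sample t
    dL_ds[:-1] -= 2 * ds / (T - 1)  # slowness term, sample t+1
    dL_ds[:, 0] += lam * 2 * (pred - y) * v[0] / T  # relevance term
    gW = X.T @ dL_ds
    gv = np.array([lam * 2 * np.mean((pred - y) * s[:, 0])])
    W -= lr * gW
    W /= np.linalg.norm(W)          # unit norm: exclude the trivial solution
    v -= lr * gv
```

With the relevance term active, the learned feature aligns with the slow latent rather than the noise direction, so `X @ W` correlates strongly with `slow` and the second weight shrinks toward zero.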
