Abstract

Most stochastic gradient descent algorithms can optimize neural networks that are sub-differentiable in their parameters; however, this requires the neural network's activation function to exhibit a degree of continuity, which limits the model's uniform approximation capacity to continuous functions. This paper focuses on the case where the discontinuities arise from distinct sub-patterns, each defined on a different part of the input space. We propose a new discontinuous deep neural network model, trainable via a decoupled two-step procedure that avoids passing gradient updates through the network's only, strategically placed, discontinuous unit. We provide approximation guarantees for our architecture in the space of bounded continuous functions and universal approximation guarantees in the space of piecewise continuous functions, which we introduce herein. We present a novel semi-supervised two-step training procedure for our discontinuous deep learning model, tailored to its structure, and we provide theoretical support for its effectiveness. The performance of our model, trained with the proposed procedure, is evaluated experimentally on both real-world financial datasets and synthetic datasets.
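As a rough illustration of the architecture and the decoupled two-step idea described in the abstract, the sketch below uses hypothetical names (PCNNSketch, train_two_step, part_logit, f0, f1) and a simplified two-part setup; it is an assumption-laden toy, not the authors' implementation. The hard thresholding is the single discontinuous unit, and because the partition network is fitted in its own step on region labels, no gradient update ever passes through that threshold.

```python
# Minimal sketch of a PCNN-style model with a decoupled two-step fit.
# Hypothetical names and a simplified 2-part setup; not the authors' code.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, width=64):
    return nn.Sequential(nn.Linear(in_dim, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, out_dim))

class PCNNSketch(nn.Module):
    """Two continuous sub-networks glued by one discontinuous indicator."""
    def __init__(self, in_dim):
        super().__init__()
        self.part_logit = mlp(in_dim, 1)   # region score ("deep zero-set" stand-in)
        self.f0 = mlp(in_dim, 1)           # sub-pattern on region 0
        self.f1 = mlp(in_dim, 1)           # sub-pattern on region 1

    def forward(self, x):
        # Hard threshold = the single discontinuous unit; it is never
        # differentiated through, because it is trained in a separate step.
        ind = (self.part_logit(x) > 0).float()
        return (1.0 - ind) * self.f0(x) + ind * self.f1(x)

def train_two_step(model, x, y, part_labels, epochs=200, lr=1e-3):
    """part_labels: float tensor in {0, 1} of shape (N, 1) marking the region of each sample."""
    # Step 1: fit the region classifier on the (possibly partial) part labels.
    opt = torch.optim.Adam(model.part_logit.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad(); bce(model.part_logit(x), part_labels).backward(); opt.step()
    # Step 2: freeze the partition, then fit each sub-network only on its region.
    with torch.no_grad():
        ind = (model.part_logit(x) > 0).float()
    for sub, mask in ((model.f0, ind == 0), (model.f1, ind == 1)):
        m = mask.squeeze(-1)
        if m.sum() == 0:
            continue
        opt = torch.optim.Adam(sub.parameters(), lr=lr)
        mse = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad(); mse(sub(x[m]), y[m]).backward(); opt.step()
```

The paper's semi-supervised procedure for learning the deep zero-sets is more involved than this toy; the point here is only how decoupling keeps the discontinuous unit out of gradient-based training.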

Highlights

  • Since their introduction in [1], neural networks have led to numerous advances across various scientific areas

  • As is the case, for instance, in many signal processing or mathematical finance situations [23, 24], the uniform limit theorem from classical topology [25] guarantees that the worst-case approximation error of f by feedforward neural networks (FFNNs) cannot be controlled, although the average error incurred by approximating f by FFNNs can be [11, 26, 27] (see the sketch after this list)

  • It illustrates the challenge of learning a piecewise continuous function with two parts by an FFNN with ReLU activation, 2 hidden layers, and 100 neurons in each layer, and by a comparable PCNN model
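The second highlight rests on the elementary fact that a uniform limit of continuous functions is continuous. A hedged version of the standard jump-discontinuity argument (with illustrative symbols x_0, a, b, delta; the paper's precise negative result is proved in Appendix A) reads:

```latex
% Why the worst-case error cannot be controlled at a jump (standard argument).
\[
\lim_{x \uparrow x_0} f(x) = a, \qquad
\lim_{x \downarrow x_0} f(x) = b, \qquad
|a - b| = \delta > 0 .
\]
% For any continuous g (e.g., an FFNN with a continuous activation function):
\[
\sup_{x} \, |f(x) - g(x)|
  \;\ge\; \max\bigl\{ |a - g(x_0)| , \; |b - g(x_0)| \bigr\}
  \;\ge\; \frac{\delta}{2} .
\]
% The uniform error is therefore bounded below by \delta/2 for every continuous
% approximant, while the average (L^p) error can still be made small.
```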


Summary

Introduction

Since their introduction in [1], neural networks have led to numerous advances across various scientific areas. [31] considers an extreme learning machine approach that randomizes all but the network's final linear layer, which reduces the training task to a classical linear regression problem. The approximation results provided for this approach are strictly weaker than the known guarantees for classical feedforward networks with a continuous activation function, as derived for example in [32]. The paper illustrates the challenge of learning a piecewise continuous function with two parts (in grey and orange) by an FFNN with ReLU activation, 2 hidden layers, and 100 neurons in each layer (in purple), and by a comparable PCNN model (green). The model's performance is benchmarked against comparable deep neural models trained using conventional training algorithms
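Since the extreme learning machine baseline of [31] is mentioned only in passing, a minimal sketch may help: all hidden weights are drawn at random and frozen, so only the final linear layer is fitted, by (here ridge-regularized) least squares. The names and settings (elm_fit, a single ReLU hidden layer, the ridge parameter) are illustrative assumptions, not the configuration used in [31].

```python
# Minimal extreme-learning-machine sketch: only the final linear layer is trained.
# Illustrative, assumed settings (single random ReLU hidden layer, ridge readout).
import numpy as np

def elm_fit(x, y, width=500, ridge=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((x.shape[1], width))   # frozen random weights
    b = rng.standard_normal(width)                 # frozen random biases
    H = np.maximum(x @ W + b, 0.0)                 # random ReLU features
    # Closed-form ridge regression for the only trained layer.
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(width), H.T @ y)
    return W, b, beta

def elm_predict(x, W, b, beta):
    return np.maximum(x @ W + b, 0.0) @ beta
```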

Preliminaries
The PCNN Model
Set-Valued Analysis
Partitions
Universal Approximation Guarantees
Gold Standard
PCNNs are Universal Approximators of Piecewise Continuous Functions
The Space of Piecewise Continuous Functions
Approximating Piecewise Continuous Functions by PCNNs
The Training Meta-Algorithm
Initializing and Training the Deep Zero-Sets
Numerical Experiments
Implementation Details
Learning Discontinuous Target Functions
Mild Discontinuities
Beating an Expert Partition
Ablation within A Controlled Environment
Parsing Oscillations from Discontinuities
Learning from Noisy Data
Learning From Few Training Samples
Conclusion
A Proof of the Negative Result for FFNNs
B Proof of Supporting Results
C Proofs of Main Results