Abstract

We introduce a new regularization method for Artificial Neural Networks (ANNs) based on the Kernel Flow (KF) algorithm. The algorithm was introduced in Owhadi and Yoo (2019) as a method for kernel selection in regression/kriging, based on minimizing the loss of accuracy incurred by halving the number of interpolation points in random batches of the dataset. Writing $f_\theta(x) = \big(f^{(n)}_{\theta_n} \circ f^{(n-1)}_{\theta_{n-1}} \circ \cdots \circ f^{(1)}_{\theta_1}\big)(x)$ for the functional representation of the compositional structure of the ANN (where $\theta_i$ are the weights and biases of layer $i$), the inner-layer outputs $h^{(i)}(x) = \big(f^{(i)}_{\theta_i} \circ f^{(i-1)}_{\theta_{i-1}} \circ \cdots \circ f^{(1)}_{\theta_1}\big)(x)$ define a hierarchy of feature maps and a hierarchy of kernels $k^{(i)}(x, x') = \exp\big(-\gamma_i \, \|h^{(i)}(x) - h^{(i)}(x')\|_2^2\big)$. When combined with a batch of the dataset, these kernels produce KF losses $e_2^{(i)}$ (defined as the $L^2$ regression error incurred by using a random half of the batch to predict the other half) that depend on the parameters of the inner layers $\theta_1, \ldots, \theta_i$ (and on $\gamma_i$). The proposed method simply consists of aggregating (as a weighted sum) a subset of these KF losses with a classical output loss (e.g., cross-entropy). We test the proposed method on Convolutional Neural Networks (CNNs) and Wide Residual Networks (WRNs) without altering their structure or their output classifier, and report reduced test errors, decreased generalization gaps, and increased robustness to distribution shift, without a significant increase in computational complexity relative to standard CNN and WRN training (with Dropout and Batch Normalization).
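The KF loss described above can be sketched numerically: fit a kernel regressor on a random half of the batch using the RBF kernel on inner-layer features, then measure the $L^2$ prediction error on the other half. This is a minimal illustration, not the paper's implementation; the function names, the ridge term `reg`, and the array shapes are assumptions for the sketch.

```python
import numpy as np

def rbf_kernel(H1, H2, gamma):
    # k(x, x') = exp(-gamma * ||h(x) - h(x')||_2^2) on feature maps H1, H2
    sq = np.sum(H1**2, 1)[:, None] + np.sum(H2**2, 1)[None, :] - 2 * H1 @ H2.T
    return np.exp(-gamma * sq)

def kf_loss(H, Y, gamma, reg=1e-6, rng=None):
    """H: (n, d) inner-layer outputs h^(i)(x); Y: (n, c) one-hot labels.

    Returns the L2 error of predicting a random half of the batch
    from the other half via kernel regression (a small ridge term
    `reg` is an assumption added for numerical stability).
    """
    rng = rng or np.random.default_rng(0)
    n = H.shape[0]
    perm = rng.permutation(n)
    tr, te = perm[: n // 2], perm[n // 2 :]
    K_tr = rbf_kernel(H[tr], H[tr], gamma) + reg * np.eye(len(tr))
    K_te = rbf_kernel(H[te], H[tr], gamma)
    Y_pred = K_te @ np.linalg.solve(K_tr, Y[tr])  # regress from one half
    return np.mean((Y[te] - Y_pred) ** 2)         # error on the other half
```

In training, a weighted sum of such losses over a subset of layers would be added to the usual cross-entropy term, so gradients flow into the inner layers both through the classifier and through the kernels they induce.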
We suspect that these results might be explained by the fact that, while conventional training only employs a linear functional (a generalized moment) of the empirical distribution defined by the dataset and can be prone to becoming trapped in the Neural Tangent Kernel regime (under over-parameterization), the proposed loss function (defined as a nonlinear functional of the empirical distribution) effectively trains the underlying kernel defined by the CNN, beyond simply regressing the data with that kernel.
