Abstract
Deep neural networks (DNNs) have proven successful as high-dimensional function approximators in many applications; however, training DNNs can be challenging in general. DNN training is commonly phrased as a stochastic optimization problem whose challenges include nonconvexity, nonsmoothness, insufficient regularization, and complicated data distributions. Hence, the performance of DNNs on a given task depends crucially on tuning hyperparameters, especially learning rates and regularization parameters. In the absence of theoretical guidelines or prior experience on similar tasks, this requires solving a series of repeated training problems, which can be time-consuming and computationally demanding. This can limit the applicability of DNNs to problems with nonstandard, complex, and scarce datasets, e.g., those arising in many scientific applications. To remedy the challenges of DNN training, we propose slimTrain, a stochastic optimization method for training DNNs with reduced sensitivity to the choice of hyperparameters and fast initial convergence. The central idea of slimTrain is to exploit the separability inherent in many DNN architectures; that is, we separate the DNN into a nonlinear feature extractor followed by a linear model. This separability allows us to leverage recent advances made for solving large-scale, linear, ill-posed inverse problems. Crucially, for the linear weights, slimTrain does not require a learning rate and automatically adapts the regularization parameter. In our numerical experiments on function approximation tasks arising in surrogate modeling and dimensionality reduction, slimTrain outperforms existing DNN training methods with their recommended hyperparameter settings and reduces the sensitivity of DNN training to the remaining hyperparameters. Since our method operates on mini-batches, its computational overhead per iteration is modest, and savings can be realized by reducing the number of iterations (due to faster initial convergence) or the number of training problems that need to be solved to identify effective hyperparameters.
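To make the separable structure concrete, the following is a minimal PyTorch sketch of the general idea, not the authors' exact slimTrain algorithm: on each mini-batch, the linear output weights are eliminated by solving a small Tikhonov-regularized least-squares problem in closed form (no learning rate needed), while the feature-extractor weights are updated with a standard stochastic optimizer. The toy data, network sizes, and the fixed regularization parameter alpha are illustrative assumptions; slimTrain itself adapts the regularization parameter automatically.

# Hypothetical sketch of separable DNN training (illustration only, not the
# authors' slimTrain implementation). The DNN is split into a nonlinear
# feature extractor and a linear output layer; the linear weights W are
# computed in closed form per mini-batch, in the spirit of variable projection.
import torch

torch.manual_seed(0)

# Toy regression data, y = sin(x), standing in for a surrogate-modeling task.
x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(x)

feature_extractor = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
)
opt = torch.optim.Adam(feature_extractor.parameters(), lr=1e-3)
alpha = 1e-3  # Tikhonov parameter; fixed here, whereas slimTrain adapts it

for epoch in range(200):
    perm = torch.randperm(x.shape[0])
    for i in range(0, x.shape[0], 64):
        idx = perm[i:i + 64]
        Z = feature_extractor(x[idx])  # nonlinear features for this batch
        # Eliminate the linear weights: solve (Z^T Z + alpha I) W = Z^T y.
        # No learning rate is involved in this step.
        A = Z.T @ Z + alpha * torch.eye(Z.shape[1])
        W = torch.linalg.solve(A, Z.T @ y[idx])
        # Backpropagate through the solve (variable-projection style) to
        # update only the feature-extractor weights.
        loss = torch.mean((Z @ W - y[idx]) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()

The closed-form solve is cheap because it acts only on the (small) linear output weights; the stochastic optimizer handles the remaining nonlinear weights as usual.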