Abstract

Training Deep Learning (DL) models requires adjusting a series of hyperparameters. Although several tools exist to automatically choose the best hyperparameter configuration, the user remains the main actor in taking the final decision. To decide whether the training should continue or a different configuration should be tried, the user needs to analyze, online, which hyperparameters are most adequate to the training dataset, observing metrics such as accuracy and loss values. Provenance naturally represents data derivation relationships (i.e., transformations, parameter values, etc.), which provide important support for this data analysis. Most existing provenance solutions define their own proprietary data representations to support DL users in choosing the best hyperparameter configuration, which makes data analysis and interoperability difficult. We present Keras-Prov and its extension, named Keras-Prov++, which provides an analytical dashboard to support online hyperparameter fine-tuning. Unlike current mainstream solutions, Keras-Prov automatically captures the provenance data of DL applications using the W3C PROV recommendation, allowing for online hyperparameter analysis that helps the user decide whether to change hyperparameter values after observing the performance of the models on a validation set. We provide an experimental evaluation of Keras-Prov++ using AlexNet and a real case study, named DenseED, which acts as a surrogate model for solving equations. During the online analysis, users identify scenarios that suggest reducing the number of epochs to avoid unnecessary executions and fine-tuning the learning rate to improve model accuracy.
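To make the idea of W3C PROV-based capture during training concrete, the sketch below shows one way such instrumentation could look. It is not Keras-Prov's actual API: it is a minimal illustration that combines a standard Keras callback with the open-source `prov` Python package, recording each epoch as a PROV activity that uses a hyperparameter entity and generates a metrics entity. The names `ProvCallback`, the `ex:` namespace, and the output file `training_prov.json` are assumptions made for this example.

```python
# Illustrative sketch only; Keras-Prov's real capture mechanism may differ.
import tensorflow as tf
from prov.model import ProvDocument


class ProvCallback(tf.keras.callbacks.Callback):
    """Records each epoch as a W3C PROV activity that uses the
    hyperparameter entity and generates an entity holding the metrics."""

    def __init__(self, hyperparams):
        super().__init__()
        self.doc = ProvDocument()
        self.doc.add_namespace("ex", "http://example.org/dl-prov#")  # assumed namespace
        # The hyperparameter configuration becomes a PROV entity.
        self.hp = self.doc.entity(
            "ex:hyperparameters",
            {f"ex:{k}": str(v) for k, v in hyperparams.items()},
        )

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # One PROV activity per epoch, linked to the configuration it used.
        act = self.doc.activity(f"ex:epoch-{epoch}")
        act.used(self.hp)
        # Observed metrics (accuracy, loss, ...) as a generated entity.
        metrics = self.doc.entity(
            f"ex:metrics-epoch-{epoch}",
            {f"ex:{k}": str(v) for k, v in logs.items()},
        )
        metrics.wasGeneratedBy(act)
        # Persist incrementally so a dashboard can query the data online.
        with open("training_prov.json", "w") as f:
            f.write(self.doc.serialize(indent=2))
```

Under these assumptions, passing an instance to `model.fit(..., callbacks=[ProvCallback({"learning_rate": 1e-3, "batch_size": 64})])` would expose per-epoch accuracy and loss as standard PROV data, which is the kind of interoperable representation the paper argues an analytical dashboard can build on.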
