Hyperparameter optimization (HO) is essential for determining the extent to which a specific configuration of hyperparameters contributes to the performance of a machine learning task. The hardware and MLlib library of Apache Spark have the potential to improve big data processing performance when tuning is combined with the exploitation of hyperparameters. To the best of our knowledge, most existing studies employ a black-box approach that yields misleading results because it ignores the interior dynamics of big data processing. They suffer from one or more drawbacks, including high computational cost, a large search space, and sensitivity to the dimension of multi-objective functions. To address these issues, this work proposes a new model-free reinforcement learning method for the multi-objective optimization of Apache Spark (MFRLMO), leveraging reinforcement learning (RL) agents to uncover the internal dynamics of Apache Spark during HO. To bridge the gap between multi-objective optimization and the interior constraints of Apache Spark, our method runs a large number of iterations to update each cell of the RL grid. The proposed model-free learning mechanism achieves a tradeoff among three objective functions: time, memory, and accuracy. To this end, the optimal values of the hyperparameters are obtained via an ensemble technique that analyzes the individual results yielded by each objective function. The experimental results show that the number of cores does not have a direct effect on $speedup$. Further, although the grid size affects the time elapsed between two adjoining iterations, its contribution to the computational burden is negligible. The dispersion and risk values of model-free RL differ when the size of the data is small. On average, MFRLMO produced $speedup$ 37% better than that of its competitors. Finally, our approach is highly competitive in terms of converging to high accuracy when optimizing Convolutional Neural Networks (CNNs).
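The abstract does not spell out the update rule or the ensemble step; the following is a minimal sketch, assuming a tabular, model-free value update over a discretized grid of Spark hyperparameter configurations, one table per objective, combined by a simple ensemble. The hyperparameter names, reward shapes, and the `evaluate()` stub are illustrative assumptions, not the paper's actual MFRLMO algorithm.

```python
# Illustrative sketch only: tabular, model-free RL over a grid of candidate
# Spark hyperparameter configurations, with one value table per objective
# (time, memory, accuracy) and a simple ensemble over the per-objective results.
import random

# Hypothetical discretized search grid for two Spark hyperparameters.
CORES = [2, 4, 8, 16]          # spark.executor.cores candidates (assumed)
MEMORY_GB = [2, 4, 8]          # spark.executor.memory candidates (assumed)
CONFIGS = [(c, m) for c in CORES for m in MEMORY_GB]
OBJECTIVES = ["time", "memory", "accuracy"]

ALPHA, EPSILON, EPISODES = 0.1, 0.2, 200

# One value table (the RL "grid") per objective function.
Q = {obj: {cfg: 0.0 for cfg in CONFIGS} for obj in OBJECTIVES}

def evaluate(cfg):
    """Stub standing in for running the Spark/MLlib workload under a
    configuration and measuring the three objectives (toy reward models)."""
    cores, mem = cfg
    return {
        "time": -1.0 / cores + random.gauss(0, 0.05),   # toy: more cores -> faster
        "memory": -mem / 8.0 + random.gauss(0, 0.05),   # toy: penalize memory footprint
        "accuracy": 0.9 + random.gauss(0, 0.02),        # toy: roughly constant accuracy
    }

for _ in range(EPISODES):
    # Epsilon-greedy selection on the ensemble (mean of per-objective values).
    if random.random() < EPSILON:
        cfg = random.choice(CONFIGS)
    else:
        cfg = max(CONFIGS,
                  key=lambda c: sum(Q[o][c] for o in OBJECTIVES) / len(OBJECTIVES))
    rewards = evaluate(cfg)
    # Model-free update of the visited cell in each objective's grid.
    for obj in OBJECTIVES:
        Q[obj][cfg] += ALPHA * (rewards[obj] - Q[obj][cfg])

# Ensemble step: each objective nominates its best cell; the nominations are
# then analyzed jointly to pick the final hyperparameter values.
best_per_objective = {obj: max(CONFIGS, key=Q[obj].get) for obj in OBJECTIVES}
print(best_per_objective)
```

In this sketch the "interior dynamics" of Spark would enter through the measured rewards returned by `evaluate()`, while the ensemble step mirrors the abstract's idea of deriving the final hyperparameter values from the individual results of each objective function.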