There is no single best hyperparameter optimization (HPO) algorithm: different algorithms suit different HPO tasks under different constraints. To accelerate HPO, it is necessary to run trials in parallel, use distributed training, and stop unpromising trials early. A library-based approach is recommended for supporting a hyperparameter optimization service, and among open-source HPO libraries, Ray Tune is by far the best (a usage sketch appears at the end of this section).

Hyperparameter optimization is the process of identifying the set of hyperparameters that yields an optimal model. By optimal, we mean the model that minimizes a predefined loss function on a given data set. It is a repeated model-training process, except that the neural network is trained with a different set of hyperparameters each time; through this process, the optimal set of hyperparameters is identified.

In grid search, users specify a finite set of candidate values for each hyperparameter and then draw trial configurations from the Cartesian product of these sets. Once the grid is constructed, each configuration in it is trained and evaluated. Grid search suffers when the number of hyperparameters grows or the search space becomes larger, because the required number of evaluations grows exponentially. Another problem with grid search is its inefficiency: since it treats every candidate configuration equally, it spends substantial compute on suboptimal regions of the search space without concentrating effort on the promising ones.
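To make the Cartesian-product mechanics concrete, here is a minimal grid-search sketch in plain Python. The search space and the evaluate function are hypothetical stand-ins for a real training-and-validation run:

```python
from itertools import product

# Hypothetical search space: a small, finite value set per hyperparameter.
search_space = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [32, 64, 128],
    "num_layers": [2, 4],
}

def evaluate(params):
    # Placeholder for training a model and returning its validation loss.
    return (params["learning_rate"] - 0.01) ** 2 + params["num_layers"] * 0.01

names = list(search_space)
best_params, best_loss = None, float("inf")
# The Cartesian product enumerates every combination: 3 * 3 * 2 = 18 trials here.
for values in product(*(search_space[n] for n in names)):
    params = dict(zip(names, values))
    loss = evaluate(params)
    if loss < best_loss:
        best_params, best_loss = params, loss

print(best_params, best_loss)
```

The exponential blow-up is visible directly in the loop: adding one more hyperparameter with k candidate values multiplies the trial count by k, and every combination is evaluated regardless of how poorly its neighbors performed.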
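As referenced earlier, the same grid search can be expressed with Ray Tune, which also parallelizes trials across available resources and can terminate unpromising trials early via a scheduler. This sketch assumes the Ray 1.x-style tune.run and tune.report API (newer Ray releases favor tune.Tuner and session.report), and the training function is a placeholder rather than a real model:

```python
from ray import tune
from ray.tune.schedulers import ASHAScheduler

def train_model(config):
    # Stand-in for a real training loop; the quadratic is a placeholder loss.
    for step in range(10):
        loss = (config["lr"] - 0.01) ** 2 + 1.0 / (step + 1)
        tune.report(loss=loss)  # reporting lets the scheduler stop weak trials early

analysis = tune.run(
    train_model,
    config={
        "lr": tune.grid_search([0.001, 0.01, 0.1]),
        "batch_size": tune.grid_search([32, 64]),
    },
    scheduler=ASHAScheduler(metric="loss", mode="min"),
)
print(analysis.get_best_config(metric="loss", mode="min"))
```

Ray Tune launches each grid configuration as a separate trial and schedules the trials onto the available CPUs and GPUs, while the ASHA scheduler stops trials whose reported loss lags behind, addressing both the parallelization and the early-stopping requirements noted at the start of this section.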