To relieve the computational cost of design evaluations using expensive finite element (FE) simulations, surrogate models have been widely applied in computer-aided engineering design. Machine learning algorithms (MLAs) have been implemented as surrogate models owing to their capability to learn the complex interrelations between design variables and responses from large datasets. An MLA regression model typically contains model parameters and hyperparameters. The model parameters are obtained by fitting the training data, whereas the hyperparameters, which govern the model structure and the training process, are assigned by the user before training. Systematic studies on the effect of hyperparameters on the accuracy and robustness of surrogate models are still lacking. In this work, we propose a hyperparameter optimization framework to deepen our understanding of this effect. Based on the sequential model-based optimization method, the Pareto front is generated by iteratively running the optimum acquisition and updating the surrogate model. The optimum acquisition works by repeatedly shrinking the design space. Using the acquired optimum, the surrogate model, which maps the hyperparameter combinations (inputs) generated by Latin hypercube sampling from the design space to the modeling accuracy on the structural response (outputs), is updated. The updated model is then used for the next iteration of optimum acquisition until the termination criterion is met. Four frequently used MLAs, namely Gaussian Process Regression (GPR), Support Vector Machine (SVM), Random Forest Regression (RFR), and Artificial Neural Network (ANN), are tested on four benchmark examples of structural design optimization. For each MLA model, the accuracy and robustness before and after hyperparameter optimization (HOpt) are compared.
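The iterative loop described above (Latin hypercube sampling, optimum acquisition by design-space shrinking, surrogate update, termination check) can be sketched as follows. This is a minimal illustration under assumed details: the toy objective `toy_accuracy`, the shrink factor, and all function names are placeholders, not the paper's implementation.

```python
# Minimal sketch of a sequential HOpt loop: LHS-sample the hyperparameter
# space, keep the best point found, and shrink the design space around it.
# toy_accuracy is a stand-in for evaluating surrogate-model accuracy.
import random


def lhs_sample(bounds, n, rng):
    """Latin hypercube sample of n points within per-dimension bounds."""
    dims = len(bounds)
    # One stratified permutation per dimension ensures each of the n
    # strata is hit exactly once along every axis.
    strata = [rng.sample(range(n), n) for _ in range(dims)]
    samples = []
    for i in range(n):
        point = []
        for d, (lo, hi) in enumerate(bounds):
            u = (strata[d][i] + rng.random()) / n  # uniform within stratum
            point.append(lo + u * (hi - lo))
        samples.append(tuple(point))
    return samples


def toy_accuracy(hp):
    """Illustrative objective: model accuracy as a function of two
    hyperparameters, peaking at (0.3, 0.7)."""
    x, y = hp
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2)


def hopt(bounds, iters=10, n_per_iter=20, shrink=0.5, seed=0):
    """Run the iterative acquisition: evaluate an LHS batch, then shrink
    the design space around the incumbent optimum until iters is reached
    (a simple termination criterion)."""
    rng = random.Random(seed)
    best_hp, best_acc = None, float("-inf")
    for _ in range(iters):
        for hp in lhs_sample(bounds, n_per_iter, rng):
            acc = toy_accuracy(hp)
            if acc > best_acc:
                best_hp, best_acc = hp, acc
        # Shrink each dimension around the current best, clamped to the
        # previous bounds (the "design space shrinking" acquisition step).
        bounds = [
            (max(lo, c - shrink * (hi - lo) / 2),
             min(hi, c + shrink * (hi - lo) / 2))
            for (lo, hi), c in zip(bounds, best_hp)
        ]
    return best_hp, best_acc


best_hp, best_acc = hopt([(0.0, 1.0), (0.0, 1.0)])
```

In a full framework, `toy_accuracy` would be replaced by training the MLA with the sampled hyperparameters and measuring its accuracy on the structural response, and the loop would additionally refit the surrogate of the accuracy landscape at each iteration.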
The results show that HOpt generally improves the performance of the MLA models, with a dependency on model complexity: for complex problems, characterized by high-dimensional mixed-variable design spaces, HOpt yields unstable improvements in accuracy and robustness. We also investigated the additional computational cost incurred by HOpt. The training cost is closely related to the MLA architecture: after HOpt, the training cost of ANN and RFR increases more than that of GPR and SVM. In summary, this study informs the selection of HOpt methods for different types of design problems based on their complexity (e.g., design domain continuity and the number of design variables).