Abstract
Deep learning-based side-channel analysis is an efficient and suitable technique for profiling side-channel attacks. To obtain good performance, the training stage must be analyzed in depth, and optimizing the relevant hyperparameters is a vital part of that process. Hyperparameters tied to the neural network architecture are commonly tuned, whereas hyperparameters that influence the training process itself have received less systematic analysis. One such hyperparameter, the optimizer, has a considerable impact on attack performance and is the primary focus of our research. Our results show that while the popular optimizers Adam and RMSprop can deliver satisfactory outcomes, they also tend to overfit; short training phases, simple profiling models, and explicit regularization are therefore needed to avoid this problem. In contrast, SGD-type optimizers perform satisfactorily only when momentum is used, which results in slower convergence but less overfitting. Finally, our results indicate that Adagrad is the better choice for larger training datasets or bigger profiling models.
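The contrast the abstract draws between momentum-based SGD and Adagrad comes down to their update rules. The sketch below is purely illustrative, not the paper's experimental setup: it implements both rules in NumPy and runs them on a toy quadratic objective (a stand-in for a profiling model's loss); the learning rates, momentum coefficient, and step counts are arbitrary choices, not values from the paper.

```python
import numpy as np

def sgd_momentum(grad_fn, w, lr=0.1, beta=0.9, steps=200):
    """SGD with momentum: v <- beta*v + g, then w <- w - lr*v."""
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        v = beta * v + g
        w = w - lr * v
    return w

def adagrad(grad_fn, w, lr=0.5, eps=1e-8, steps=200):
    """Adagrad: the per-parameter step shrinks as squared gradients accumulate."""
    s = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        s = s + g * g
        w = w - lr * g / (np.sqrt(s) + eps)
    return w

# Toy convex objective f(w) = ||w||^2 / 2, so grad f(w) = w.
grad = lambda w: w
w0 = np.array([5.0, -3.0])

w_sgd = sgd_momentum(grad, w0.copy())
w_ada = adagrad(grad, w0.copy())
print("SGD+momentum:", w_sgd)
print("Adagrad:     ", w_ada)
```

On this toy problem, momentum drives the iterate toward the minimum geometrically, while Adagrad's accumulated gradient history makes its steps progressively more conservative, which is consistent with the slower-but-steadier behavior the abstract attributes to these optimizer families.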