Abstract

In recent years, the emerging technology of machine learning has been leveraged to mount powerful Side Channel Analysis (SCA) attacks. By means of Deep Learning (DL) SCA attacks, countermeasures previously considered strong, such as masking, have failed to provide adequate security levels. This creates the need to take attacks based on artificial neural networks into account during the design of cryptographic implementations. To make matters worse, such neural networks may be pre-trained so as to successfully attack multiple implementations of a given cipher, even ones that were not used during the training phase. To this end, this work proposes the use of two existing, low-overhead hiding countermeasure techniques, which add noise to the computation of the cryptographic algorithm, and evaluates their resilience against multiple pre-trained DL-based SCA networks published in the literature. We show that networks pre-trained on power traces from an unprotected cipher implementation can compromise a single hiding countermeasure, but not the combination of the two. The same holds when the model has been pre-trained on a cipher implementation protected by a single hiding countermeasure. Thus, the combination of the two countermeasures offers increased protection against pre-trained networks at low overhead.
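The abstract does not specify the exact hiding techniques used; as a point of reference, one common low-overhead hiding countermeasure of the kind described (adding noise to the computation) is random-delay insertion, which desynchronizes the power traces a DL-SCA model relies on. The sketch below is purely illustrative and assumes hypothetical names (`SBOX`, `protected_sbox_lookup`); it is not the paper's implementation.

```python
import secrets

# Hypothetical stand-in for a cipher's S-box; identity table for illustration only.
SBOX = list(range(256))

def random_delay(max_dummy_ops: int = 8) -> None:
    """Hiding countermeasure sketch: perform a random number of dummy
    operations so the sensitive operation shifts in time between runs,
    misaligning the traces a pre-trained DL-SCA network expects."""
    acc = 0
    for _ in range(secrets.randbelow(max_dummy_ops + 1)):
        acc ^= secrets.randbelow(256)  # dummy work, result discarded

def protected_sbox_lookup(byte: int) -> int:
    random_delay()       # random jitter before the leaky operation
    out = SBOX[byte]     # the operation whose power leakage is targeted
    random_delay()       # jitter after it as well
    return out
```

In practice such delays would be implemented in hardware or constant-leakage assembly; the point here is only that the timing of the leaky operation varies per execution while the functional result is unchanged.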
