Abstract

The Rowhammer attack, a DRAM-based attack, exploits weak memory cells to alter their content. Such attacks can be launched at the user level without requiring access permission to the victim memory cells. Leveraging Rowhammer, the bit-flip-based adversarial weight attack (BFA) was developed to target deep neural network (DNN) models. Once BFA attackers acquire a DNN model, they adapt existing DNN adversarial attack techniques to locate vulnerable bits in the target model; by flipping a subset of them using Rowhammer, they can crash the model within 30 trials. In this paper, we propose a lightweight and easy-to-deploy bit-level defense mechanism, Randomized Rotated and Nonlinear Encoding (RREC), which provides both robustness and fault tolerance against BFA. Since flipping the most significant bit (MSB) of quantized data is particularly damaging, we introduce randomized rotation to obfuscate the bit order of model data and efficiently hide the truly vulnerable bits among less vulnerable ones. Furthermore, RREC reduces the average bit-flip distance by more than 3x through nonlinear encoding, which decreases the bit-flip distance for the majority of bits (including the vulnerable ones). Theoretically, RREC reduces the impact of a single-bit BFA to 1/24 of the baseline. Experimentally, RREC tolerates more than 17x as many flipped bits as the baseline model, and 4.8x and 5.7x more bits than the existing BFA defenses (4B QAT and WR, respectively), with only 0.01x to 0.08x additional runtime latency. Moreover, we evaluate RREC against a newly emerged attack, Targeted-BFA, and it improves the defense rate from 5% to 95%.
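To make the rotation idea concrete, below is a minimal illustrative sketch (not the paper's actual implementation) of a randomized circular bit rotation applied to an 8-bit quantized weight. The function names (rotate_encode, rotate_decode), the per-weight secret offset, and the 8-bit width are assumptions for illustration only; the point is that a secret rotation offset obfuscates which stored bit position corresponds to the MSB, so a single Rowhammer flip is less likely to hit the most damaging bit.

```python
# Hypothetical sketch of randomized bit rotation for one 8-bit quantized weight.
# This is NOT the authors' implementation; it only illustrates the concept.
import random

BITS = 8  # assumed 8-bit quantized weights


def rotate_encode(value: int, offset: int) -> int:
    """Circularly rotate the 8-bit pattern of `value` left by `offset` bits."""
    offset %= BITS
    value &= (1 << BITS) - 1
    return ((value << offset) | (value >> (BITS - offset))) & ((1 << BITS) - 1)


def rotate_decode(encoded: int, offset: int) -> int:
    """Invert the rotation to recover the original bit pattern."""
    return rotate_encode(encoded, (BITS - offset) % BITS)


# Example: the secret offset is drawn at deployment time and, in this sketch,
# is assumed to be kept outside the attacker-accessible weight memory.
offset = random.randrange(BITS)
weight = 0b1000_0001          # MSB set: the bit a BFA attacker would target
stored = rotate_encode(weight, offset)
assert rotate_decode(stored, offset) == weight
```

In this sketch, the attacker who flips a fixed physical bit position of `stored` hits a different logical bit of `weight` depending on the secret offset, which is the obfuscation effect the abstract describes.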
