Abstract

Recent work has shown that graph neural networks (GNNs) can be used to recover high-level word operators and their boundaries in gate-level netlists. Unlike formal methods, however, a GNN does not prove functional equivalence. This raises the question of whether functionality-preserving structural transformations, such as circuit rewriting, can degrade the performance of GNN-based operator detection. In this work, we explore this problem by applying simple rewrites to benchmark circuits and showing that they do indeed degrade detection performance. We propose new features and learning strategies that alleviate this degradation. Finally, we develop a rewriting framework that explicitly maximizes the degradation of GNN accuracy, in effect a fine-grained form of adversarial circuit rewriting. This has applications in circuit obfuscation, and in further improving reverse engineering by providing synthetic training data. We show that this methodology outperforms non-adversarial rewriting at the same area overhead.
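To make the premise concrete, the sketch below (not from the paper; the netlist representation, gate names, and rewrite choice are purely illustrative) applies a De Morgan rewrite to a toy netlist and verifies by exhaustive simulation that the Boolean function is preserved, even though the graph a GNN would consume has changed.

```python
from itertools import product

# Hypothetical netlist encoding: each gate is (output, op, inputs).
# Names and structure are illustrative, not the paper's benchmarks.
ORIGINAL = [
    ("n1", "AND", ("a", "b")),
    ("y",  "XOR", ("n1", "c")),
]

# Functionality-preserving rewrite of the AND gate via De Morgan:
# AND(a, b) == NOT(OR(NOT(a), NOT(b))). One node becomes four, so the
# graph structure (the GNN's input) changes while the function does not.
REWRITTEN = [
    ("t1", "NOT", ("a",)),
    ("t2", "NOT", ("b",)),
    ("t3", "OR",  ("t1", "t2")),
    ("n1", "NOT", ("t3",)),
    ("y",  "XOR", ("n1", "c")),
]

OPS = {
    "AND": lambda v: all(v),
    "OR":  lambda v: any(v),
    "NOT": lambda v: not v[0],
    "XOR": lambda v: v[0] != v[1],
}

def evaluate(netlist, assignment):
    """Evaluate a topologically ordered netlist for one input assignment."""
    values = dict(assignment)
    for out, op, ins in netlist:
        values[out] = OPS[op]([values[i] for i in ins])
    return values["y"]

# Exhaustive simulation over the primary inputs confirms equivalence.
inputs = ("a", "b", "c")
for bits in product([False, True], repeat=len(inputs)):
    assignment = dict(zip(inputs, bits))
    assert evaluate(ORIGINAL, assignment) == evaluate(REWRITTEN, assignment)
print("original and rewritten netlists are functionally equivalent")
```

Exhaustive simulation only scales to toy examples; real flows would rely on equivalence checking, but the point stands: such rewrites are invisible to formal verification yet alter exactly the structural features a GNN learns from.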
