Recent work has shown that graph neural networks (GNNs) can recover high-level word operators and their boundaries in gate-level netlists. Unlike formal methods, however, a GNN does not prove functional equivalence, which raises the question of whether functionality-preserving structural transforms, such as circuit rewrites, can degrade the performance of GNN-based operator detection. In this work, we explore this problem by applying simple rewrites to benchmark circuits and show that they do indeed degrade detection performance. We propose new features and learning strategies that alleviate this degradation. Finally, we develop a rewriting framework that aims to maximize the GNN's accuracy degradation, amounting to a form of fine-grained adversarial circuit rewriting. This has applications in circuit obfuscation, and in further improving reverse engineering by providing synthetic training data. We show that this methodology outperforms non-adversarial rewriting at the same area overhead.