Abstract

Vision transformer (ViT) and its variants have achieved remarkable success in various tasks. The key characteristic of these ViT models is that they adopt different strategies for aggregating spatial patch information within the artificial neural networks (ANNs). However, there is still no unified representation of the different ViT architectures that would allow systematic understanding and assessment of model representation performance. Moreover, how similar these well-performing ViT ANNs are to real biological neural networks (BNNs) is largely unexplored. To answer these fundamental questions, we, for the first time, propose a unified and biologically plausible relational graph representation of ViT models. Specifically, the proposed relational graph representation consists of two key subgraphs: an aggregation graph and an affine graph. The former treats ViT tokens as nodes and describes their spatial interaction, while the latter regards network channels as nodes and reflects the information communication between channels. Using this unified relational graph representation, we found that: 1) model performance was closely related to graph measures; 2) the proposed relational graph representation of ViT has high similarity with real BNNs; and 3) model performance improved further when a superior model was used to constrain the aggregation graph during training.
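To make the two subgraphs concrete, the following is a minimal illustrative sketch (not the paper's implementation) of how one might derive them: an aggregation graph thresholded from a token-to-token attention matrix, and an affine graph thresholded from a channel-to-channel weight matrix, plus a simple graph measure of the kind that could be correlated with model performance. The function names and the threshold value are hypothetical choices for illustration.

```python
import numpy as np

def aggregation_graph(attn, thresh=0.1):
    """Tokens as nodes: add edge (i, j) when the attention weight that
    token i assigns to token j exceeds a threshold (hypothetical rule).
    `attn` is a (num_tokens x num_tokens) attention matrix."""
    adj = (attn > thresh).astype(int)
    np.fill_diagonal(adj, 0)  # ignore self-loops
    return adj

def affine_graph(weight, thresh=0.1):
    """Channels as nodes: add edge between input channel j and output
    channel i when |W[i, j]| exceeds a threshold (hypothetical rule)."""
    return (np.abs(weight) > thresh).astype(int)

def average_degree(adj):
    """One simple graph measure that could be related to performance."""
    return adj.sum() / adj.shape[0]
```

For example, a 2-token attention matrix `[[0.5, 0.5], [0.2, 0.8]]` thresholded at 0.3 yields a single off-diagonal edge, giving an average degree of 0.5.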
