The detection of Artificial Intelligence-Generated (AIG) images plays an important role in verifying the authenticity and originality of digital images. However, recent advances in state-of-the-art image generation methods have made it significantly harder to distinguish AIG images from natural photographs (NPs). To address this issue, we present a novel approach based on deep trace representations and dual-branch interactive feature fusion. First, a global feature extraction module built on attention-based MobileViT (AT-MobileViT) is designed to learn deep representations of global trace information. In addition, multiple enhanced residual blocks are applied to extract discriminative multi-scale features. A low-level feature extraction module incorporating a channel-spatial attention (CSA) block is then employed to strengthen the learning of trace representations. To capture complementary information between the two branches, a dual-branch interactive feature fusion module is introduced that reshapes feature vectors into interactive matrices. Experiments on both seen and unseen images demonstrate the superior performance and robustness of the proposed method.
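The abstract does not specify the exact form of the dual-branch interactive feature fusion. A minimal sketch of one plausible reading, assuming each branch outputs a 1-D feature vector that is reshaped into a 2-D interaction matrix and the two matrices modulate each other element-wise (the matrix shape `(h, w)` and the sigmoid gating are assumptions, not the paper's stated design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def interactive_fusion(f_global, f_lowlevel, shape):
    """Hypothetical dual-branch interactive fusion: reshape each
    feature vector into an (h, w) interaction matrix, let each
    matrix gate the other, and flatten back into one fused vector.
    The gating and shape are illustrative assumptions."""
    h, w = shape
    m_g = f_global.reshape(h, w)    # global-branch interactive matrix
    m_l = f_lowlevel.reshape(h, w)  # low-level-branch interactive matrix
    # Cross-branch interaction: each branch is modulated by the other,
    # so complementary information flows in both directions.
    i_g = m_g * sigmoid(m_l)
    i_l = m_l * sigmoid(m_g)
    # Concatenate the interacted matrices into the fused representation.
    return np.concatenate([i_g.ravel(), i_l.ravel()])

# Example usage with two 12-dimensional branch features:
f1 = np.random.randn(12)
f2 = np.random.randn(12)
fused = interactive_fusion(f1, f2, (3, 4))  # fused.shape == (24,)
```

In practice such a module would sit between the two extraction branches and the classifier head, with the reshape dimensions chosen to match the branch output sizes.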