Neuronal morphology can be described by diverse feature representations, such as hand-crafted morphometrics and deep features, which are complementary to each other and can jointly improve classification performance. However, existing classification methods either rely on a single feature representation or simply concatenate different features without fully exploiting their complementarity, which limits their performance. In this paper, we propose a multi-level feature fusion network that fully exploits diverse feature representations and their complementarity to effectively describe neuronal morphology and improve classification performance. Specifically, we devise a Multi-Level Fusion Module (MLFM) and incorporate it into each feature extraction block, enabling interaction between different features and effective fusion at multiple levels. The MLFM comprises a channel-attention-based Feature Enhancement Module (FEM), which enhances robust morphological feature representations, and a cross-attention-based Feature Interaction Module (FIM), which mines and propagates complementary information across different representations. In this way, our fusion network ultimately yields a neuronal morphology descriptor that characterizes neurons more effectively than any single representation. Experimental results show that our method effectively describes neuronal morphology and classifies 10 neuron types on the NeuronMorpho-10 dataset with an accuracy of 95.18%, outperforming other approaches. Moreover, our method also performs well on the NeuronMorpho-12 and NeuronMorpho-17 datasets, demonstrating good generalization.
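To make the described architecture concrete, the following is a minimal sketch, not the authors' released code, of how an MLFM stage could combine a channel-attention FEM with a cross-attention FIM to fuse a deep-feature branch with a hand-crafted morphometrics branch. Module names follow the abstract; all layer sizes, the squeeze-and-excitation form of the channel attention, and the residual fusion choice are illustrative assumptions.

```python
# Sketch of one MLFM fusion stage (assumed design, not the paper's implementation).
import torch
import torch.nn as nn


class FEM(nn.Module):
    """Channel attention (squeeze-and-excitation style) to enhance one feature branch."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, channels); pool over tokens, then reweight channels.
        w = self.fc(x.mean(dim=1))          # (batch, channels)
        return x * w.unsqueeze(1)


class FIM(nn.Module):
    """Cross-attention: one branch queries the other to pull in complementary information."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, query_feat: torch.Tensor, context_feat: torch.Tensor) -> torch.Tensor:
        out, _ = self.attn(query_feat, context_feat, context_feat)
        return self.norm(query_feat + out)  # residual fusion (assumption)


class MLFM(nn.Module):
    """One fusion stage: enhance each branch, then exchange information between branches."""

    def __init__(self, channels: int):
        super().__init__()
        self.fem_deep = FEM(channels)
        self.fem_morph = FEM(channels)
        self.fim_deep = FIM(channels)
        self.fim_morph = FIM(channels)

    def forward(self, deep_feat: torch.Tensor, morph_feat: torch.Tensor):
        deep_feat = self.fem_deep(deep_feat)
        morph_feat = self.fem_morph(morph_feat)
        fused_deep = self.fim_deep(deep_feat, morph_feat)    # deep features attend to morphometrics
        fused_morph = self.fim_morph(morph_feat, deep_feat)  # morphometrics attend to deep features
        return fused_deep, fused_morph


if __name__ == "__main__":
    mlfm = MLFM(channels=64)
    deep = torch.randn(2, 32, 64)    # e.g. deep features from a neuron encoder (hypothetical shapes)
    morph = torch.randn(2, 16, 64)   # e.g. projected hand-crafted morphometrics
    d, m = mlfm(deep, morph)
    print(d.shape, m.shape)          # torch.Size([2, 32, 64]) torch.Size([2, 16, 64])
```

In this reading of the abstract, one such module would be inserted after each feature extraction block, so fusion happens at multiple levels of the network rather than only once before the classifier.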