Fundus multi-lesion classification is the task of classifying retinal fundus images in which one or more diseases may be present or absent. Current approaches face two main challenges: they fail to extract the similar morphological features shared by images of different lesions, and they cannot handle the large feature variation that the same lesion exhibits across severity grades. This paper proposes a multi-disease recognition network model, Fundus-DANet, based on dilated convolution. It contains two sub-modules that address these issues: the interclass learning module (ILM) and the dilated-convolution convolutional block attention module (DA-CBAM). The DA-CBAM combines a convolutional block attention module (CBAM) with dilated convolution to extract and fuse multiscale information from images. The ILM uses a channel attention mechanism to map features into a lower-dimensional space, facilitating the exploration of latent relationships among categories. The results demonstrate that this model outperforms previous models in classifying multi-lesion fundus images on the OIA-ODIR dataset, achieving 93% accuracy.
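To make the DA-CBAM idea concrete, the following is a minimal PyTorch sketch of a block that fuses parallel dilated convolutions and refines the result with CBAM-style channel and spatial attention. It is an illustrative reconstruction under stated assumptions, not the authors' implementation: the dilation rates, channel counts, reduction ratio, and class names (`DilatedCBAMBlock`, etc.) are hypothetical.

```python
# Illustrative sketch of a dilated-convolution + CBAM block.
# All hyperparameters (dilations, reduction ratio, kernel size) are assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """CBAM channel attention: shared MLP over global average- and max-pooled features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w


class SpatialAttention(nn.Module):
    """CBAM spatial attention: convolution over channel-wise average and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)     # channel-wise average
        mx, _ = x.max(dim=1, keepdim=True)    # channel-wise max
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class DilatedCBAMBlock(nn.Module):
    """Parallel dilated convolutions capture multiscale context; CBAM refines the fused result."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        # Padding equal to the dilation rate keeps the spatial size unchanged for 3x3 kernels.
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in dilations]
        )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)
        self.ca = ChannelAttention(out_ch)
        self.sa = SpatialAttention()

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        fused = self.fuse(feats)
        return self.sa(self.ca(fused))


if __name__ == "__main__":
    x = torch.randn(2, 64, 56, 56)       # a batch of intermediate feature maps
    block = DilatedCBAMBlock(64, 128)
    print(block(x).shape)                # torch.Size([2, 128, 56, 56])
```

In this sketch, each dilation rate widens the receptive field without extra parameters or downsampling, which is the usual motivation for dilated convolution when lesions of very different sizes must be captured in one pass.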