Multi-label disease classification of chest X-ray images is an important research direction in computer vision and medical image processing. It aims to apply image processing techniques and deep learning algorithms to automatically analyze X-ray images and determine whether specific pathologies or structural abnormalities are present. We present MMPDenseNet, a network designed for multi-label chest disease classification. First, the network employs the adaptive activation function Meta-ACON to enhance feature representation. Second, it incorporates a multi-head self-attention mechanism, combining the convolutional backbone with Transformer-style attention to strengthen the extraction of both local and global features. Finally, it integrates a pyramid squeeze attention module to capture spatial information and enrich the feature space. Experiments yielded an average AUC of 0.898, an average improvement of 0.6% over the baseline model. Compared with the original network, MMPDenseNet considerably improves the classification accuracy across multiple chest diseases, suggesting substantial value for clinical applications.
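As a rough illustration of the first component, the following is a minimal NumPy sketch of the Meta-ACON activation (the ACON-C form with a learned switching factor beta generated from globally pooled features). The parameter names (`p1`, `p2`, `w1`, `w2`) and the two-layer beta generator are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def meta_acon(x, p1, p2, w1, w2):
    """Meta-ACON sketch: ACON-C activation with beta produced by a
    small two-layer network on channel-wise pooled features.

    x  : feature map of shape (N, C, H, W)
    p1 : per-channel parameter, shape (C,)
    p2 : per-channel parameter, shape (C,)
    w1 : weights of the beta generator, shape (C, r)
    w2 : weights of the beta generator, shape (r, C)
    All names here are illustrative, not the paper's exact code.
    """
    # beta is generated per sample and per channel from pooled features
    pooled = x.mean(axis=(2, 3))           # (N, C) global average pool
    beta = sigmoid(pooled @ w1 @ w2)       # (N, C) switching factor
    beta = beta[:, :, None, None]          # broadcast over H, W
    p1 = p1[None, :, None, None]
    p2 = p2[None, :, None, None]
    d = (p1 - p2) * x
    # smooth maximum between p1*x and p2*x, controlled by beta:
    # large beta -> close to max(p1*x, p2*x); beta -> 0 gives a linear mix
    return d * sigmoid(beta * d) + p2 * x

# Example: with p1 = 1, p2 = 0 and zero generator weights (beta = 0.5),
# the activation reduces to x * sigmoid(0.5 * x), a Swish-like curve.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4, 3, 3))
out = meta_acon(x, np.ones(4), np.zeros(4), np.zeros((4, 2)), np.zeros((2, 4)))
```
The adaptive beta is what makes the activation "meta": the network learns, per channel and per input, how strongly to switch between the two linear branches.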