Person re-identification (Re-ID) has important practical value in intelligent video analysis. Owing to illumination changes, occlusion, and pose variation, person Re-ID remains a challenging problem. Recent Re-ID methods built on ResNet-50 have achieved high accuracy, yet pose variation still causes performance degradation. To address this issue, a Pose-Invariant Convolutional Baseline (PICB), embedded with the proposed Pooling Fusion Block (PFB), is put forward as a new baseline for the person Re-ID task. On the basis of PICB, an end-to-end network named Appearance-Enhanced Feature Learning Network (AEFLN) is proposed to simultaneously learn diverse body features and discriminative part features. Specifically, a novel Diversity Body Feature Learning (DBFL) strategy is presented to learn diverse body features, which alleviates the potential local-minima problem caused by optimizing the model with randomly initialized parameters in the PFB. In addition, uniform part-level feature extractors are applied to learn part features, compensating for the body features' lack of distinguishable local information. In the testing phase, body features and part features are integrated to represent an enhanced appearance feature for each person image. Comprehensive experiments demonstrate that our method outperforms state-of-the-art results on several publicly available datasets, including Market-1501, CUHK03, and DukeMTMC-reID. For instance, we achieve 74.8% (+11.1%) Rank-1 accuracy and 76.5% (+19.0%) mAP on the CUHK03 dataset.
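The abstract states that, at test time, a pooled body feature and uniformly extracted part features are integrated into one appearance descriptor per image. The internals of the PFB and the part extractors are not specified here, so the following NumPy sketch is only a plausible illustration: it assumes the PFB combines global average and max pooling, that part features come from average pooling over six uniform horizontal stripes, and that the concatenated descriptor is L2-normalized for retrieval. All of these specifics are assumptions, not the paper's definitions.

```python
import numpy as np

def pooling_fusion(feat_map):
    # Hypothetical PFB: the abstract names a Pooling Fusion Block but does
    # not define it; here we ASSUME it sums global average and max pooling
    # over a C x H x W feature map, yielding a C-dim body descriptor.
    avg = feat_map.mean(axis=(1, 2))
    mx = feat_map.max(axis=(1, 2))
    return avg + mx

def enhanced_appearance_feature(body_map, part_maps):
    # Test-time integration described in the abstract: concatenate the
    # body feature with part-level features (pooling choice is assumed).
    body = pooling_fusion(body_map)
    parts = [pm.mean(axis=(1, 2)) for pm in part_maps]
    feat = np.concatenate([body] + parts)
    return feat / (np.linalg.norm(feat) + 1e-12)  # L2-normalize for retrieval

rng = np.random.default_rng(0)
body_map = rng.normal(size=(2048, 24, 8))  # ResNet-50-like final feature map
# Six uniform horizontal stripes, as in common part-based Re-ID pipelines:
part_maps = [body_map[:, i * 4:(i + 1) * 4, :] for i in range(6)]
f = enhanced_appearance_feature(body_map, part_maps)
print(f.shape)  # (14336,) = 2048 body dims + 6 * 2048 part dims
```

Matching between a query and gallery image would then reduce to a dot product (cosine similarity) between their normalized descriptors.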