Although particle swarm optimization (PSO) has been successfully applied to many optimization problems, its performance still degrades on complicated problems, especially those with many interacting variables and many wide, flat local basins. To alleviate this issue, this paper proposes a differential elite learning particle swarm optimization (DELPSO) that differentiates the two guiding exemplars as much as possible when directing the update of each particle. Specifically, in this optimizer, particles in the current swarm are divided into two groups, namely the elite group and the non-elite group, based on their fitness. Then, particles in the non-elite group are updated by learning from those in the elite group, while particles in the elite group are not updated and directly enter the next generation. To balance fast convergence and high diversity at the particle level, each particle in the non-elite group learns from two different elites in the elite group. In this way, both the learning effectiveness and the learning diversity of particles are expected to improve substantially. To reduce the sensitivity of the proposed DELPSO to the newly introduced parameters, dynamic parameter adjustment strategies are further designed. With these two main components, the proposed DELPSO is expected to balance search intensification and diversification well, exploring and exploiting the solution space properly to obtain promising performance. Extensive experiments conducted on the widely used CEC 2017 benchmark set with three different dimension sizes demonstrate that the proposed DELPSO achieves highly competitive, and often much better, performance than state-of-the-art PSO variants.
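The core update described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): the swarm is split by fitness into an elite and a non-elite group, each non-elite particle draws two distinct elites as guiding exemplars, and elites pass to the next generation unchanged. The function name `delpso_step`, the elite fraction, and the inertia/acceleration coefficients are all assumptions for illustration; the paper's actual velocity rule and dynamic parameter schedules may differ.

```python
import numpy as np

def delpso_step(positions, velocities, fitness,
                elite_frac=0.2, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One illustrative DELPSO-style generation (minimization).

    Non-elite particles learn from two *different* elites; elite
    particles are left untouched and enter the next generation directly.
    Coefficients here are conventional PSO defaults, not the paper's.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = positions.shape
    order = np.argsort(fitness)                # ascending: best first
    n_elite = max(2, int(elite_frac * n))      # need >= 2 distinct elites
    elite_idx = order[:n_elite]

    for i in order[n_elite:]:                  # update non-elites only
        # draw two distinct elites so the two guiding exemplars differ
        e1, e2 = rng.choice(elite_idx, size=2, replace=False)
        if fitness[e2] < fitness[e1]:          # let e1 be the fitter elite
            e1, e2 = e2, e1
        r1, r2 = rng.random(d), rng.random(d)
        velocities[i] = (w * velocities[i]
                         + c1 * r1 * (positions[e1] - positions[i])
                         + c2 * r2 * (positions[e2] - positions[i]))
        positions[i] += velocities[i]
    return positions, velocities
```

Because elite particles are never moved, the best fitness in the swarm is non-increasing across generations in this sketch, which is one way the scheme preserves convergence while the two-exemplar learning keeps non-elite updates diverse.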