Deep reinforcement learning (DRL) algorithms have made remarkable achievements in various fields, but they are vulnerable to changes in environment dynamics. This vulnerability can lead to poor generalization, low performance, and catastrophic failures in unseen environments, which severely hinders the application of DRL in real-world scenarios. The robustness via adversary populations (RAP) algorithm addresses this issue by introducing a population of adversaries that perturb the protagonist. However, RAP's low data utilization efficiency and lack of population diversity greatly limit its generalization performance. This article proposes robust adversary populations with volume diversity measure (RAP-Vol) to address these drawbacks. In the proposed joint adversarial training framework, the training data are used to update all adversaries rather than only a single adversary, yielding higher data utilization efficiency and faster convergence. In the proposed population diversity iterative improvement mechanism, the vectors representing adversaries span a high-dimensional region, and the squared volume of this region is used to measure, and thereby enhance, population diversity. Ablation experiments verify the effectiveness of the proposed method in improving robustness against variations in environment dynamics. The influence of various factors, such as adversary population size and diversity weight, on robustness is also investigated.
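As a rough illustration of the squared-volume diversity measure described above, the sketch below computes the determinant of the Gram matrix formed by the adversary vectors, which equals the squared volume of the parallelotope they span. The function name, the use of NumPy, and the choice of flattened parameter vectors as the adversary representation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def squared_volume_diversity(adversary_vectors):
    """Squared volume of the parallelotope spanned by adversary vectors.

    adversary_vectors: array of shape (n_adversaries, dim), one vector per
    adversary (e.g., flattened policy parameters; a hypothetical choice).
    Returns det(A A^T), the squared spanned volume, which is zero when the
    vectors are linearly dependent and grows as they become more mutually
    independent.
    """
    A = np.asarray(adversary_vectors, dtype=np.float64)
    gram = A @ A.T                 # (n, n) Gram matrix of pairwise inner products
    return np.linalg.det(gram)     # squared volume of the spanned region

# Example: three adversaries represented as vectors in R^4.
rng = np.random.default_rng(0)
vecs = rng.normal(size=(3, 4))
print(squared_volume_diversity(vecs))  # larger value => more diverse population
```

Using the squared volume rather than the volume itself keeps the quantity a polynomial (determinant) in the inner products, avoiding a square root and remaining well defined even when the vectors are nearly dependent; a term of this form could then be added to the training objective, weighted by the diversity weight mentioned above.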