Crowdsourcing provides a cost-effective way to acquire labeled training samples for machine learning by employing workers on the Internet. A common approach to improving label quality is to apply a truth inference method that infers an integrated label for each sample from the multiple noisy labels provided by different crowd workers. Although integrated labels are of significantly higher quality than the original noisy ones, truth inference cannot completely eliminate the noise that inevitably remains in them. To further improve label quality, this paper proposes a novel label noise correction method for crowdsourcing based on dynamic resampling (DRNC). DRNC first divides the dataset with inferred labels into a clean set and a noisy set using a filter. The clean and noisy sets are then resampled in a certain proportion to train multiple heterogeneous classifiers, which together form an ensemble classifier. The ensemble classifier then re-partitions the dataset into a new sub-clean set and a sub-noisy set. This process repeats for multiple rounds, producing multiple sub-clean sets. Finally, classifiers trained on these sub-clean sets jointly correct the wrong labels in the dataset by voting. Experimental results on 25 simulated and 4 real-world datasets consistently show that, compared with four state-of-the-art crowdsourcing noise correction methods, DRNC improves both label quality and the quality of learned models by 1 to 10 percentage points on average.
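The pipeline sketched above (filter split, proportional resampling, heterogeneous ensemble, iterative re-partitioning, and a final vote over models trained on sub-clean sets) can be illustrated with the following minimal sketch. This is not the paper's implementation: the synthetic data, the choice of classifiers, the resampling proportion, and the number of rounds are all assumptions made for illustration only.

```python
# Illustrative sketch of a DRNC-style noise correction loop.
# All classifier choices, proportions, and round counts are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic data: 2-D points, true label = side of the line x0 + x1 = 0.
X = rng.normal(size=(600, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)
# Simulated "integrated" labels: flip 20% to mimic residual crowd noise.
y = y_true.copy()
flip = rng.random(len(y)) < 0.2
y[flip] ^= 1

def split_clean_noisy(X, y, clfs):
    """Mark a sample noisy when the ensemble majority disagrees with its label."""
    votes = np.mean([c.predict(X) for c in clfs], axis=0)
    pred = (votes > 0.5).astype(int)
    return pred == y  # True = clean, False = noisy

clean = np.ones(len(y), dtype=bool)  # initial filter: trust all labels at first
sub_clean_models = []
for round_ in range(3):
    # Resample the clean and noisy sets in a fixed (assumed) proportion.
    idx_clean = np.flatnonzero(clean)
    idx_noisy = np.flatnonzero(~clean)
    noisy_part = (rng.choice(idx_noisy, size=len(idx_noisy) // 2, replace=True)
                  if len(idx_noisy) else np.array([], dtype=int))
    sample = np.concatenate([
        rng.choice(idx_clean, size=len(idx_clean), replace=True),
        noisy_part,
    ])
    # Heterogeneous classifiers form this round's ensemble.
    clfs = [DecisionTreeClassifier(max_depth=3, random_state=round_),
            LogisticRegression(),
            KNeighborsClassifier(n_neighbors=5)]
    for c in clfs:
        c.fit(X[sample], y[sample])
    # The ensemble re-partitions the dataset into sub-clean / sub-noisy sets.
    clean = split_clean_noisy(X, y, clfs)
    # Keep one model trained on this round's sub-clean set for the final vote.
    sub_clean_models.append(LogisticRegression().fit(X[clean], y[clean]))

# Final correction: models trained on the sub-clean sets vote on every label.
votes = np.mean([m.predict(X) for m in sub_clean_models], axis=0)
y_corrected = (votes > 0.5).astype(int)
print("noise rate before:", (y != y_true).mean(),
      "after:", (y_corrected != y_true).mean())
```

On this easy synthetic task the voted labels should carry noticeably less noise than the simulated integrated labels; the sketch is only meant to show how the rounds and the final vote fit together, not to reproduce the reported 1 to 10 percentage-point gains.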