Distributed learning has been proposed as a promising technique to reduce the heavy data transmissions of centralized machine learning. By allowing participants to train the model locally, it removes the need to upload raw data to a centralized cloud server, which also reduces the risk of privacy leakage. However, existing studies have shown that an adversary can still derive the raw data by analyzing the exchanged machine learning models. To tackle this challenge, state-of-the-art solutions mainly rely on differential privacy and encryption techniques (e.g., homomorphic encryption). However, differential privacy degrades data utility and leads to inaccurate learning, while encryption-based approaches are not applicable to all machine learning algorithms because of their limited supported operations and excessive computation cost. In this work, we propose a novel scheme that resolves these privacy issues through anonymous authentication. Unlike the two types of existing solutions, this approach generalizes to all machine learning algorithms without reducing data utility, while still guaranteeing privacy preservation. In addition, it can be integrated with detection schemes against data poisoning attacks and free-rider attacks, making it more practical for distributed learning. To this end, we first design a pairing-based certificateless signature scheme. Building on this signature scheme, we then propose an anonymous and efficient authentication protocol that supports dynamic batch verification. The proposed protocol guarantees the desired security properties while remaining computationally efficient. A formal security proof and analysis demonstrate the achieved security properties, including confidentiality, anonymity, mutual authentication, unlinkability, unforgeability, forward security, backward security, and non-repudiation.
In addition, the performance analysis shows that our protocol significantly reduces the time cost of batch verification, achieving high computational efficiency.
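To make the batch-verification idea concrete, the following is a minimal, self-contained sketch in Python. It is not the paper's pairing-based certificateless scheme: it uses Schnorr-style signatures over a deliberately tiny prime-order group (toy parameters `p`, `q`, `g` are assumptions chosen only for readability, with no real security). It illustrates the core efficiency point of batch verification, namely that n individual verification equations can be collapsed into one randomized combined check.

```python
import hashlib
import secrets

# Toy group parameters (illustrative only; a real deployment would use a
# pairing-friendly elliptic-curve group): p is a safe prime and g generates
# the subgroup of prime order q = (p - 1) / 2.
p, q, g = 1019, 509, 4

def H(r: int, m: bytes) -> int:
    """Challenge hash c = H(r || m), reduced mod the group order q."""
    return int.from_bytes(hashlib.sha256(str(r).encode() + m).digest(), "big") % q

def keygen():
    """Return (secret key x, public key y = g^x mod p)."""
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def sign(x: int, m: bytes):
    """Schnorr-style signature (r, s) on message m under secret key x."""
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    s = (k + H(r, m) * x) % q
    return r, s

def verify(y: int, m: bytes, sig) -> bool:
    """Individual check: g^s == r * y^c (mod p)."""
    r, s = sig
    return pow(g, s, p) == (r * pow(y, H(r, m), p)) % p

def batch_verify(items) -> bool:
    """Verify many (pk, msg, sig) tuples with one combined equation.
    Per-signature random weights d_i keep an invalid signature from
    cancelling against the others (except with probability ~1/q)."""
    lhs_exp, rhs = 0, 1
    for y, m, (r, s) in items:
        d = secrets.randbelow(q - 1) + 1          # random weight d_i
        lhs_exp = (lhs_exp + d * s) % q           # accumulate sum d_i * s_i
        rhs = (rhs * pow(r, d, p) * pow(y, d * H(r, m), p)) % p
    # Single exponentiation check replaces n separate verifications.
    return pow(g, lhs_exp, p) == rhs
```

The savings come from the final combined equation: correctness follows because g^(Σ d_i·s_i) = Π (g^(s_i))^(d_i) = Π (r_i · y_i^(c_i))^(d_i), so all valid signatures pass together. In the pairing setting the analogous trick replaces n expensive pairing evaluations with a small constant number, which is the source of the batch-verification speedup claimed above.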