Training data in federated learning (FL) frameworks can contain label noise, since the data must be stored and annotated on clients' devices. If trained on such corrupted data, the models learn incorrect knowledge from the noisy labels, which severely degrades their performance. Although several FL schemes have been designed to combat label noise, they suffer performance degradation when clients' devices have only limited local training samples. To this end, a new scheme called federated label-noise learning (FedLNL) is developed in this paper. The key problem in FedLNL is how to estimate a noise transition matrix (NTM) accurately when local training samples are limited. If a gradient-based update method is used to update the local NTM on each client's device, it can produce excessively large gradients for the local NTM, resulting in a high estimation error. To tackle this issue, FedLNL adopts an alternating update method for the local NTM and the local classifier, in which the local NTM is updated via Bayesian inference. This alternating update makes the loss functions of existing NTM-based schemes inapplicable to FedLNL. To enable federated optimization in FedLNL, a new regularizer on the classifier parameters, called the local diversity product regularizer, is designed for its loss function. Experimental results show that FedLNL improves the test accuracy of the trained model by up to 25.98% compared with state-of-the-art FL schemes that tackle label-noise issues.
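For context, NTM-based label-noise learning typically relates the classifier's clean-label posterior to the observed noisy-label posterior through a transition matrix. A standard forward-correction formulation (a general illustration from the label-noise literature, not necessarily the exact loss used in FedLNL) is

\[
p(\tilde{y}=j \mid x) \;=\; \sum_{i=1}^{C} T_{ij}\, p(y=i \mid x),
\qquad
\mathcal{L} \;=\; -\sum_{n=1}^{N} \log p(\tilde{y}_n \mid x_n),
\]

where $T \in [0,1]^{C \times C}$ is the NTM with entries $T_{ij} = p(\tilde{y}=j \mid y=i)$, $p(y=i \mid x)$ is the classifier's predicted clean-label posterior, and $\tilde{y}_n$ denotes the (possibly noisy) observed label of sample $x_n$. Accurate estimation of $T$ is what becomes difficult when each client holds only a few samples, motivating the Bayesian alternating update described above.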