Federated learning (FL) enables a large number of distributed, independent participants to collaboratively train a model without sharing data. A malicious adversary can poison its local model through backdoor poisoning attacks and exploit the fact that the server cannot inspect the original training data, so the poisoned model is aggregated directly. Such an attack is especially powerful in AIoT-FL networks, which generate large amounts of data in real time. In this paper, we design a sybil-based backdoor poisoning attack (SBPA) against this vulnerability. Malicious participants inject backdoor triggers into the distributed Big Data to covertly complete the data poisoning. After subsequent rounds of iterative aggregation, the joint model activates the backdoor at test time, misclassifying the backdoor images. In addition, malicious participants exploit the fact that system devices are easily disconnected to create sybil nodes that join the aggregation, thereby increasing the probability that the poisoned local models are aggregated. The goal is to make the final global model misclassify backdoor images while maintaining high classification accuracy on non-backdoor samples. We conduct extensive experiments on multiple datasets and show that SBPA is more robust than state-of-the-art attacks on various metrics under both i.i.d. and non-i.i.d. data distributions.
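As a rough illustration of the trigger-injection step described above (a minimal sketch of generic pixel-pattern backdoor poisoning, not the paper's actual implementation), the snippet below stamps a small square trigger onto a fraction of a participant's local images and relabels them with the attacker's target class. The function name `apply_backdoor_trigger`, the array shapes, and all parameter values are assumptions chosen for illustration.

```python
import numpy as np

def apply_backdoor_trigger(images, labels, target_label, poison_fraction=0.1,
                           trigger_size=3, trigger_value=1.0, seed=0):
    """Stamp a square trigger onto a fraction of images and relabel them.

    images: float array of shape (N, H, W, C) with pixel values in [0, 1].
    labels: int array of shape (N,).
    Returns poisoned copies of both arrays.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    # Choose which local samples to poison (hypothetical 10% by default).
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Stamp a bright square in the bottom-right corner of each chosen image;
    # this fixed pattern is the backdoor trigger the global model will learn.
    images[idx, -trigger_size:, -trigger_size:, :] = trigger_value

    # Relabel poisoned samples so training associates the trigger
    # with the attacker-chosen target class.
    labels[idx] = target_label
    return images, labels

# Example usage on CIFAR-10-like data (shapes assumed for illustration):
# x_local: (5000, 32, 32, 3) floats in [0, 1]; y_local: (5000,) ints in [0, 9].
x_local = np.random.rand(5000, 32, 32, 3).astype(np.float32)
y_local = np.random.randint(0, 10, size=5000)
x_poisoned, y_poisoned = apply_backdoor_trigger(x_local, y_local, target_label=0)
```

Because only a small fraction of samples carry the trigger, a model trained on the poisoned set can retain high accuracy on clean inputs while misclassifying any triggered input as the target class, which is the dual objective the abstract describes.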