Federated learning (FL) has emerged as an effective framework for training models across isolated data sources while preserving data privacy. A common approach in FL is to train local and global models jointly: the global model (server) informs the local models, and the local models (clients) update the global model. Most existing work assumes that clients hold labeled datasets and the server holds no data, i.e., a supervised learning (SL) setting. In practice, however, clients often lack the expertise and incentive to label their data, while the server may hold only a small labeled set. How to make effective use of server-labeled and client-unlabeled data is a central question in semi-supervised learning (SSL), and client data heterogeneity is pervasive in FL. The scarcity of high-quality labels and non-IID client data, especially in autonomous driving, degrade model performance across domains and compound each other. To address this Semi-Supervised Federated Learning (SSFL) problem, we propose a new FL algorithm, FedDSL. Our method employs self-ensemble learning and complementary negative learning to make clients' unsupervised training on unlabeled data more accurate and efficient, and it coordinates model training on both the server and client sides. In contrast to earlier research that retains some subset of labels at each client, our method is the first to implement SSFL for clients with 0% labeled, non-IID data. Our contributions include demonstrating the effectiveness of self-ensemble learning, in which the confidence score vector of the current model alone is used for data filtering, and of initiating negative learning early, as shown by the data-filtering performance in the first training rounds. We rigorously validate our approach on two major autonomous driving datasets, BDD100K and Cityscapes, and achieve state-of-the-art results; each detection task is evaluated with mean average precision (mAP@0.5). Remarkably, FedDSL performs nearly as well as fully supervised centralized training, despite using only 25% of the labels, all held by the global model.
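
To make the client-side mechanism concrete, the sketch below illustrates how confidence-based data filtering and complementary negative learning could be combined on unlabeled client data. This is a minimal illustration under our own assumptions, not the paper's exact procedure: the thresholds `POS_THRESH` and `NEG_THRESH` and the function `ssl_client_loss` are hypothetical names introduced here for exposition.

```python
# Minimal PyTorch sketch (assumed, not the exact FedDSL procedure): combine
# confident pseudo-labeling on the current model's confidence score vector
# with complementary negative learning on unlikely classes.
import torch
import torch.nn.functional as F

POS_THRESH = 0.9   # assumed confidence threshold for accepting a pseudo-label
NEG_THRESH = 0.05  # assumed threshold below which a class is used as a negative label


def ssl_client_loss(model, unlabeled_batch):
    """Loss on an unlabeled client batch: positive pseudo-labels + negative learning."""
    logits = model(unlabeled_batch)            # (N, C) class scores
    probs = F.softmax(logits, dim=1)           # confidence score vector per sample
    max_prob, pseudo_label = probs.max(dim=1)

    # Data filtering: keep only samples the current model is confident about.
    keep = max_prob >= POS_THRESH
    pos_loss = (
        F.cross_entropy(logits[keep], pseudo_label[keep])
        if keep.any() else logits.new_zeros(())
    )

    # Complementary negative learning: for classes the model deems very unlikely,
    # push their probability further down.
    neg_mask = probs < NEG_THRESH              # (N, C) complementary labels
    neg_loss = -(torch.log(1.0 - probs + 1e-7) * neg_mask.float()).sum() / neg_mask.sum().clamp(min=1)

    return pos_loss + neg_loss
```

In this sketch, negative learning contributes a useful training signal even in early rounds, when few samples pass the positive confidence filter; the positive branch gradually dominates as the model's confidence improves.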