Abstract

In frequency hopping communications (FHC), nodes transmit on pseudo-randomly assigned carrier frequencies and time slots, which provides strong protection against eavesdropping and interference. The same property, however, complicates the sharing of time and frequency resources, collision avoidance, and service differentiation. This paper presents an FHC protocol for distributed wireless networks: a prioritised, distributed multiple access control mechanism based on the ratio of empty channels metric. Priority in channel access is provided by assigning different preset empty-channel-ratio thresholds to the different traffic classes, and frames whose frequency-spread segments suffer partial collisions are also taken into account. An analytical model of the protocol is developed and evaluated in terms of collision probability, transmission probability, and frame service time, and the model is validated by extensive simulations that agree with the theoretical results.

Today's most advanced machine learning workloads, such as deep neural networks, are commonly trained on cloud platforms to exploit elastic scaling. Federated learning (FL) has been proposed as a distributed machine learning approach that meets the requirements of such applications while keeping data private: all participants collaborate to train a model, but no one ever shares their data. Each user trains a local model on its own data and sends only the updated model to an FL server, which aggregates the updates into a global model; the process is repeated over multiple rounds until the global model is obtained. This style of training reduces the network overhead of transferring data to a centralised server and safeguards users' personal information. In this work, we examine the feasibility of applying FL across the many devices of a dispersed network. We analyse the performance of the FL model against a centralised baseline, comparing accuracy and training time over a range of parameter settings, and show that the federated models can reach accuracy comparable to that of centrally trained models.
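
The abstract gives only the high-level idea of the channel-access rule. The Python sketch below is a minimal, illustrative interpretation: each node senses the hopping channels in the current slot, computes the ratio of empty channels, and transmits only if that ratio reaches the preset threshold of its traffic class. The threshold values, the assumption that a lower threshold means higher priority, and all names are ours, not taken from the paper.

    import random

    # Illustrative per-class thresholds on the ratio of empty channels.
    # Assumption (not stated in the abstract): a lower threshold lets a
    # class transmit under heavier load, i.e. gives it higher priority.
    CLASS_THRESHOLDS = {
        "high": 0.2,
        "medium": 0.5,
        "low": 0.8,
    }

    def empty_channel_ratio(channel_busy_flags):
        """Fraction of sensed hopping channels that are currently idle."""
        idle = sum(1 for busy in channel_busy_flags if not busy)
        return idle / len(channel_busy_flags)

    def may_transmit(traffic_class, channel_busy_flags):
        """Distributed access decision: transmit in this slot only if the
        measured empty-channel ratio reaches the class's preset threshold."""
        return empty_channel_ratio(channel_busy_flags) >= CLASS_THRESHOLDS[traffic_class]

    def pick_hop_channel(num_channels, slot_index, node_seed):
        """Pseudo-random carrier selection for the current time slot."""
        return random.Random(node_seed * 1_000_003 + slot_index).randrange(num_channels)

    # Example: 10 channels, 4 sensed busy -> empty-channel ratio 0.6.
    busy = [True, False, False, True, False, True, False, False, True, False]
    print(may_transmit("high", busy))   # True  (0.6 >= 0.2)
    print(may_transmit("low", busy))    # False (0.6 <  0.8)
    print(pick_hop_channel(10, slot_index=42, node_seed=7))

The protocol described in the paper additionally handles partial collisions between the frequency-spread segments of a frame, which this sketch does not model.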
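
The analytical model itself is not given in the abstract. As a rough illustration of how transmission probability and collision probability interact under pseudo-random hopping (and not the paper's model), suppose n nodes share M channels and each node transmits in a slot with probability \tau on a uniformly chosen channel; a tagged transmission then collides whenever at least one other node picks the same channel:

    p_c = 1 - (1 - \tau / M)^{n-1}

The paper's model goes further, accounting for partial collisions between frequency-spread segments and deriving the frame service time.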
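
The abstract also does not state which aggregation rule the FL server uses. The sketch below illustrates the widely used federated averaging (FedAvg) pattern on a toy one-parameter regression problem, under our own assumptions: each client runs a few gradient steps on its private data, and the server averages the returned weights, weighted by client data size. All data, hyperparameters, and names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic setup: each client holds private (x, y) samples from the same
    # underlying linear model y = 3*x + noise. Sizes and values are illustrative.
    true_w = 3.0
    clients = []
    for _ in range(5):
        x = rng.uniform(-1, 1, size=200)
        y = true_w * x + 0.1 * rng.standard_normal(200)
        clients.append((x, y))

    def local_train(w, x, y, epochs=5, lr=0.3):
        """Client-side update: a few gradient steps on the client's own data,
        starting from the current global weight. Raw data never leaves the client."""
        for _ in range(epochs):
            grad = 2 * np.mean((w * x - y) * x)   # gradient of the local MSE loss
            w -= lr * grad
        return w

    # FedAvg-style rounds: the server sends the global weight to every client,
    # collects their locally updated weights, and averages them (weighted by
    # each client's data size) to form the next global weight.
    global_w = 0.0
    for round_idx in range(10):
        updates, sizes = [], []
        for x, y in clients:
            updates.append(local_train(global_w, x, y))
            sizes.append(len(x))
        global_w = np.average(updates, weights=sizes)

    print(f"global weight after 10 rounds: {global_w:.3f} (target {true_w})")

The number of rounds, local epochs, and learning rate in this sketch are the kinds of parameters whose settings the abstract says are varied when comparing the federated model's accuracy and training time against a centrally trained baseline.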
