Abstract
The main goal of fountain codes is to reduce reliance on a feedback channel. Their deployment, however, is limited by multi-hop transmission: over multiple hops, fountain codes raise the problem of packet overflow, which wastes energy. With a clustered architecture and a classification technique, the number of encoded packets generated can be significantly reduced and residual energy preserved. In this paper, we consider a distributed estimation scheme comprising sensor members and a fusion centre. To reduce the number of useless encoded packets and the number of transmissions, we determine the number of encoded packets required to recover the data. We adopt fountain codes for data encoding; the resulting packets are then assembled at the Cluster Head (CH), and each CH produces a final estimate using classification. We demonstrate that combining fountain codes with trained machine learning models yields an accurate estimate while preserving residual energy.
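To make the encoding step concrete, the sketch below shows a minimal LT-style fountain encoder in Python: each encoded packet XORs a randomly chosen subset of source blocks, so a cluster head can recover the data from any sufficiently large set of packets without per-packet feedback. This is an illustrative assumption-laden sketch, not the paper's exact scheme; in particular, the uniform degree draw stands in for a proper degree distribution such as the robust soliton.

```python
import random

def lt_encode(source_blocks, num_encoded, seed=0):
    """Generate `num_encoded` fountain-coded packets by XOR-ing randomly
    chosen source blocks (illustrative LT-style encoder; the uniform
    degree draw is a simplification of a real degree distribution)."""
    rng = random.Random(seed)
    k = len(source_blocks)
    encoded = []
    for _ in range(num_encoded):
        # Draw a degree: how many source blocks this packet combines.
        degree = rng.randint(1, k)
        chosen = rng.sample(range(k), degree)
        # XOR the chosen source blocks byte-wise to form the payload.
        payload = bytes(source_blocks[chosen[0]])
        for idx in chosen[1:]:
            payload = bytes(a ^ b for a, b in zip(payload, source_blocks[idx]))
        # The receiver needs the neighbour set (or the seed) to decode.
        encoded.append((chosen, payload))
    return encoded

# Hypothetical usage: a sensor splits its reading into k = 4 blocks and
# sends a few extra packets so the CH can decode despite erasures.
blocks = [b"ab", b"cd", b"ef", b"gh"]
for neighbours, payload in lt_encode(blocks, num_encoded=6):
    print(neighbours, payload)
```

In practice the number of encoded packets sent would be set to the decoding threshold estimated in the paper, so that few useless packets are generated and transmitted over multiple hops.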