Abstract

Random linear network coding (RLNC) is a promising network coding solution when the network topology is not fully known to all nodes. In practice, however, nodes often possess partial knowledge of the topology. Motivated by this, we investigate the performance of RLNC and derive several upper bounds on its failure probability when the network is constrained by different types of partial topology information. These upper bounds not only improve upon existing ones in the literature, but also show that partial topology information benefits the performance analysis of RLNC. Moreover, as expected, the more topology information that can be utilized, the tighter the resulting bounds. The bounds are compared on two classical networks for demonstration. To gain a deeper understanding of the performance of RLNC, we also investigate its asymptotic behavior as the field size goes to infinity.
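To illustrate the failure-probability notion studied in the paper, the following is a minimal Monte Carlo sketch, assuming a simplified model in which a sink receives packets whose global coding vectors are drawn uniformly at random over a prime field GF(p); decoding fails when these vectors do not have full rank. The function names, the parameters `h`, `n`, and the choice of field sizes are illustrative assumptions, not part of the paper, but the trend it exhibits matches the abstract's point that behavior improves as the field size grows.

```python
import random


def rank_mod_p(matrix, p):
    """Rank of an integer matrix over the prime field GF(p),
    via Gaussian elimination using Fermat inverses."""
    m = [row[:] for row in matrix]
    rows, cols = len(m), len(m[0])
    rank, pivot_row = 0, 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] % p != 0), None)
        if pivot is None:
            continue
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]
        inv = pow(m[pivot_row][col], p - 2, p)  # inverse exists since p is prime
        m[pivot_row] = [(x * inv) % p for x in m[pivot_row]]
        # Eliminate this column from all other rows.
        for r in range(rows):
            if r != pivot_row and m[r][col] % p:
                factor = m[r][col]
                m[r] = [(a - factor * b) % p for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        rank += 1
        if pivot_row == rows:
            break
    return rank


def estimate_failure_probability(h, n, p, trials=20000):
    """Estimate the probability that h received coding vectors of length n,
    drawn uniformly over GF(p), fail to span an h-dimensional space."""
    failures = 0
    for _ in range(trials):
        coding_matrix = [[random.randrange(p) for _ in range(n)] for _ in range(h)]
        if rank_mod_p(coding_matrix, p) < h:
            failures += 1
    return failures / trials


if __name__ == "__main__":
    h, n = 3, 3  # hypothetical source dimension and number of received packets
    for p in (2, 3, 5, 7, 11):
        print(f"GF({p}): estimated failure probability = "
              f"{estimate_failure_probability(h, n, p):.4f}")
```

In this toy setting the estimated failure probability drops quickly as p increases, consistent with the asymptotic regime (field size tending to infinity) examined in the paper; the actual bounds in the paper additionally account for the network topology information available at the nodes.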
