Abstract

Random Linear Network Coding (RLNC) has been shown to offer an efficient communication scheme, leveraging a notable robustness against packet losses. However, it suffers from high computational complexity, and several novel approaches following the same idea have recently been proposed. One such solution is Tunable Sparse Network Coding (TSNC), where only a few packets are combined in each transmission. The number of data packets to be combined in each transmission can be set from a density parameter/distribution, which could eventually be adapted. In this work we present an analytical model that accurately captures the performance of Sparse Network Coding (SNC). We exploit an absorbing Markov process whose states are defined by the number of useful packets received by the decoder, i.e. the rank of the decoding matrix, and the number of non-zero columns in that matrix. The model is validated by means of a thorough simulation campaign, and the difference between model and simulation is negligible, with a mean square error below $4 \cdot 10^{-4}$ in the worst case. We also include in the comparison some of the more general bounds that have recently been used, showing that their accuracy is rather poor. The proposed model enables a more precise assessment of the behavior of sparse network coding techniques. The final results show that the proposed analytical model can be exploited by TSNC techniques, allowing the encoder to select the best density as the transmission evolves.
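
To make this state definition concrete, the following is a minimal simulation sketch, not taken from the paper: it assumes GF(2) arithmetic, a generation of n packets and a fixed density p (each coding coefficient is non-zero with probability p, an illustrative choice), and it tracks the two quantities that define a state of the model, the decoding-matrix rank and the number of covered (non-zero) columns, as sparse coded packets arrive.

```python
# Illustrative sketch only: GF(2) coefficients, Bernoulli(p) density and the
# parameter values below are assumptions, not the paper's exact setting.
import numpy as np

def gf2_rank(rows):
    """Rank over GF(2) of a list of 0/1 vectors, via Gaussian elimination."""
    rows = [r.copy() for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] ^= rows[rank]        # eliminate this column everywhere else
        rank += 1
    return rank

def state_trajectory(n=32, p=0.1, seed=0):
    """Yield (rank, covered columns) after each received sparse coded packet."""
    rng = np.random.default_rng(seed)
    received, covered, rank = [], np.zeros(n, dtype=bool), 0
    while rank < n:
        v = (rng.random(n) < p).astype(np.uint8)   # sparse coding vector, density p
        if not v.any():
            continue                               # an all-zero vector carries nothing
        received.append(v)
        covered |= v.astype(bool)
        rank = gf2_rank(received)
        yield rank, int(covered.sum())

# Example: print the (rank, covered) states visited until the decoder finishes.
for state in state_trajectory():
    print(state)
```

Each yielded pair corresponds to one transient state of the absorbing process described above; absorption is reached when the rank equals n.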

Highlights

  • Network Coding (NC) techniques foster a new communication paradigm, where packets are no longer immutable entities and nodes across the network can retransmit, discard or recode them

  • Afterwards, a Tunable Sparse Network Coding (TSNC) scheme was introduced by Feizi et al. in [2], which, in a nutshell, proposed tuning the density of the coded packets as they are generated by the source during the transmission

  • We include a second dimension, the covered packets, i.e. the non-zero columns of the decoding matrix. The model is based on an Absorbing Markov Chain and precisely mimics the probability of generating new information when sparse coding schemes are used (a minimal sketch of this machinery follows the list)

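The sketch below illustrates the Absorbing Markov Chain machinery on the simpler, well-known dense RLNC case, where the state is the rank alone and a packet received at rank i is innovative with probability 1 - q^(i-n); the sparse model of this work adds the covered-columns dimension, and its transition probabilities are not reproduced here. The expected number of transmissions to decode is obtained from the fundamental matrix N = (I - Q)^{-1}, where Q collects the transitions among transient states.

```python
# Hedged illustration: dense RLNC baseline, not the paper's sparse transition model.
import numpy as np

def expected_transmissions_dense_rlnc(n=32, q=2):
    """Expected number of packets to reach full rank, via the fundamental matrix."""
    # Transient states are ranks 0..n-1; rank n is the absorbing (decoded) state.
    Q = np.zeros((n, n))
    for i in range(n):
        p_innovative = 1.0 - q ** (i - n)   # exact for dense RLNC over GF(q)
        Q[i, i] = 1.0 - p_innovative        # dependent packet: stay at rank i
        if i + 1 < n:
            Q[i, i + 1] = p_innovative      # innovative packet: advance to rank i+1
    # Fundamental matrix N = (I - Q)^-1; summing its first row gives the
    # expected number of transmissions when starting from rank 0.
    N = np.linalg.inv(np.eye(n) - Q)
    return N[0].sum()

print(expected_transmissions_dense_rlnc(n=32, q=2))   # about 33.6 for q = 2, n = 32
```

The same fundamental-matrix step applies unchanged once the two-dimensional transition probabilities of the sparse model are plugged in.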

Summary

INTRODUCTION

Network Coding (NC) techniques foster a new communication paradigm, where packets are no longer immutable entities and nodes across the network can retransmit, discard or recode them. Sparse coding, however, could lead to a greater probability of transmitting linearly dependent combinations, increasing the corresponding network overhead and jeopardizing the performance. In this sense, based on the observation that the probability of generating linearly dependent packets grows as the transmission evolves, TSNC was proposed to tune the density throughout the transmission. Blomer et al. highlight in [41] the complexity of the problem we tackle in this work, establishing upper and lower bounds for the number of linear dependencies within a random sparse matrix. Although they do not explicitly exploit this result for coding purposes, their conclusions are still valid for our sparse coding model.
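
This effect can be reproduced empirically. The following Monte Carlo sketch, under the same illustrative assumptions as before (GF(2), generation size n, per-coefficient density p; none of these values come from the paper), estimates how often a freshly generated sparse coded packet is linearly dependent on the packets the decoder already holds, as a function of the current rank.

```python
# Illustrative Monte Carlo only; parameters and helper names are assumptions.
import random

def insert_into_basis(v, basis):
    """Insert bitmask v into a GF(2) row basis; return True if v was innovative."""
    for b in basis:              # basis kept sorted, highest leading bit first
        v = min(v, v ^ b)        # equals v ^ b exactly when v has b's leading bit set
    if v == 0:
        return False             # linearly dependent on the packets already received
    basis.append(v)
    basis.sort(reverse=True)
    return True

def sparse_vector(n, p, rng):
    """Random GF(2) coding vector: each coefficient is non-zero with probability p."""
    while True:
        v = sum(1 << i for i in range(n) if rng.random() < p)
        if v:
            return v             # discard the useless all-zero vector

def dependence_profile(n=32, p=0.1, runs=500, seed=1):
    """Estimate P(new packet is non-innovative) at each rank 0..n-1."""
    rng = random.Random(seed)
    dependent, transmitted = [0] * n, [0] * n
    for _ in range(runs):
        basis = []
        while len(basis) < n:
            rank = len(basis)
            transmitted[rank] += 1
            if not insert_into_basis(sparse_vector(n, p, rng), basis):
                dependent[rank] += 1
    return [d / t for d, t in zip(dependent, transmitted)]

for rank, prob in enumerate(dependence_profile()):
    print(f"rank {rank:2d}: P(linearly dependent) ~ {prob:.3f}")
```

For small densities the estimated probability stays negligible during most of the transmission and rises sharply as the rank approaches n, which matches the observation above and motivates tuning the density as the transmission evolves.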

TABLE I: $m$ and $c_0$ for different $q$ values (columns: $q$, $m_{\text{odd}}$, $m_{\text{even}}$)
Impact of errors
SIMULATION AND MODEL VALIDATION
Findings
CONCLUSIONS AND FUTURE WORK