Abstract

A complete generation of packets coded with Random Linear Network Coding (RLNC) can be quickly decoded on a multicore system by scheduling the involved matrix block operations in parallel with an offline (pre-recorded) directed acyclic graph (DAG). Waiting for a complete generation of packets can be avoided with progressive RLNC decoding, which commences decoding (and can decode some packets) before all packets of a generation have been received. This article develops and evaluates a novel progressive RLNC decoding strategy based on the principle of DAG scheduling of parallel matrix block operations. The novel strategy involves helper matrices for conducting the Gauss-Jordan elimination based on rows of blocks of matrix elements. The matrix block computations are dynamically scheduled by an online DAG that permits branching, e.g., to skip unnecessary matrix block operations. The throughput and delay of the novel progressive RLNC decoding strategy are evaluated with experiments on two heterogeneous multicore processor boards. The novel progressive RLNC decoding achieves throughput levels on par with state-of-the-art non-progressive (full-generation) RLNC decoding and achieves three times higher throughput than the fastest (highest-throughput) known progressive RLNC decoder for small generation sizes and short data packets. Moreover, our progressive RLNC decoding greatly reduces receiver delays for moderate to large generation sizes; the delay reductions are particularly pronounced when a low-delay RLNC version is employed (e.g., a reduction to one tenth of the non-progressive decoding delay for a generation size of 256 packets).
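
The progressive-decoding principle can be illustrated with a simplified sketch: each arriving coded packet is reduced against the pivot rows found so far and, if it is innovative, contributes a new pivot via Gauss-Jordan elimination. The Python sketch below is a minimal, unblocked illustration over GF(2); the decoder described in the article instead operates on rows of blocks of finite-field elements and schedules the block operations with an online DAG, so the class and method names here are purely illustrative assumptions.

    # Minimal sketch of progressive RLNC decoding via Gauss-Jordan elimination,
    # assuming arithmetic over GF(2) (XOR); the article's decoder works on blocks
    # of finite-field elements and schedules the block operations with a DAG.
    class ProgressiveGF2Decoder:
        def __init__(self, generation_size):
            self.n = generation_size
            self.rows = {}  # pivot column -> (coefficient vector, payload)

        def receive(self, coeffs, payload):
            """Process one coded packet; returns True if it was innovative."""
            coeffs, payload = list(coeffs), bytearray(payload)
            # Forward-eliminate against every pivot row found so far.
            for pivot, (pc, pp) in self.rows.items():
                if coeffs[pivot]:
                    coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                    payload = bytearray(a ^ b for a, b in zip(payload, pp))
            if 1 not in coeffs:          # linearly dependent packet
                return False
            pivot = coeffs.index(1)
            # Back-substitute so the decoding matrix stays in reduced form.
            for p, (pc, pp) in self.rows.items():
                if pc[pivot]:
                    self.rows[p] = ([a ^ b for a, b in zip(pc, coeffs)],
                                    bytearray(a ^ b for a, b in zip(pp, payload)))
            self.rows[pivot] = (coeffs, payload)
            return True

        def decoded(self):
            """Source packets recovered so far (rows reduced to unit vectors)."""
            return {p: bytes(pl) for p, (c, pl) in self.rows.items() if sum(c) == 1}

A source packet is delivered as soon as its row reduces to a unit vector, which is what allows some packets to be decoded before the full generation has arrived.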

Highlights

  • Random linear network coding (RLNC) can significantly enhance the communication over unreliable complex networks, such as body area networks [1], caching networks [2]–[4], cellular networks [5], the Internet of Things (IoT) [6]–[8], radio access networks [9], vehicular networks [10], wireless sensor networks [11], and general wireless networks [12]–[16]

  • We introduce a progressive Random Linear Network Coding (RLNC) decoding strategy based on the principle of directed acyclic graph (DAG) scheduling of parallel matrix block operations

  • Our performance evaluations on heterogeneous multicore processor boards indicate that our online DAG approach achieves decoding throughput comparable to the state-of-the-art non-progressive RLNC decoding methodology


Summary

INTRODUCTION

Random linear network coding (RLNC) can significantly enhance the communication over unreliable complex networks, such as body area networks [1], caching networks [2]–[4], cellular networks [5], the Internet of Things (IoT) [6]–[8], radio access networks [9], vehicular networks [10], wireless sensor networks [11], and general wireless networks [12]–[16]. Directed acyclic graph (DAG) scheduling of parallel matrix block operations from the field of high-performance computing [19], [20] has been adapted for high-throughput RLNC encoding and decoding of a complete generation of source symbols (source data packets) [21]. To the best of our knowledge, the highly efficient DAG scheduling of parallel matrix block operations has not yet been studied in the context of progressive RLNC decoding. We introduce a progressive RLNC decoding strategy based on the principle of DAG scheduling of parallel matrix block operations.
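
The scheduling principle itself can be sketched independently of the finite-field arithmetic: each matrix block operation becomes a task, edges record which block results a task depends on, and worker threads execute every task whose predecessors have completed. The Python sketch below is a simplified, generic task-graph executor under these assumptions; the run_dag helper and the toy task names are illustrative and not the implementation used in the article.

    # Minimal sketch of DAG scheduling: tasks (e.g., block multiply/eliminate)
    # are dispatched to a thread pool as soon as all their predecessors finish.
    import queue
    from concurrent.futures import ThreadPoolExecutor

    def run_dag(tasks, deps, workers=4):
        """tasks: name -> callable; deps: name -> set of prerequisite names."""
        remaining = {t: len(deps.get(t, ())) for t in tasks}
        dependents = {t: [] for t in tasks}
        for t, pre in deps.items():
            for p in pre:
                dependents[p].append(t)
        finished = queue.Queue()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            def wrap(name):
                tasks[name]()            # execute the block operation
                finished.put(name)
            for t, count in remaining.items():   # seed tasks with no prerequisites
                if count == 0:
                    pool.submit(wrap, t)
            for _ in range(len(tasks)):          # release dependents as tasks finish
                done = finished.get()
                for d in dependents[done]:
                    remaining[d] -= 1
                    if remaining[d] == 0:
                        pool.submit(wrap, d)

    # Toy dependency graph: C and D both need A; E needs C and D.
    ops = {n: (lambda n=n: print("block op", n)) for n in "ACDE"}
    run_dag(ops, {"C": {"A"}, "D": {"A"}, "E": {"C", "D"}})

An offline DAG of this kind can be pre-recorded when the full generation is available, whereas progressive decoding requires the graph to be built and branched online as coded packets arrive.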

