In the cloud-to-thing continuum paradigm, efficient transmission of learning-driven data is critical for real-time decision-making and continuous learning. Fountain codes have gained popularity as application-layer Forward Error Correction (FEC) schemes because of their ability to recover lost data packets, and the Gaussian Elimination (GE) algorithms at the core of Fountain decoding can be optimized to improve transmission efficiency. We introduce techniques that leverage GE algorithms to enhance the transmission of learning-driven data within the cloud-to-thing continuum. A fast matrix-element architecture is developed to improve computational speed. In addition, a pipelined normalization component based on binary trees is introduced to enable parallel processing, and a pipelined elimination component using binary trees is presented to further increase transmission speed and efficiency. These techniques are implemented on Field Programmable Gate Array (FPGA) hardware and evaluated against existing methods, demonstrating significant gains in transmission efficiency. By managing increasing Internet traffic loads and optimizing the flow of learning-driven data, these approaches enable more efficient communication between cloud-based AI systems and edge devices, ultimately supporting real-time decision-making and continuous learning across the full cloud-to-thing continuum.
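To make the role of GE in Fountain decoding concrete, the sketch below shows a minimal Gauss-Jordan elimination over GF(2) in C: each received packet contributes a row of a binary mixing matrix plus an XOR-combined payload, and recovering the source symbols amounts to reducing that matrix to the identity using row XORs. This is only an illustrative software model under assumed parameters (symbol size, matrix dimensions, and the name `ge_decode` are placeholders); it does not reproduce the paper's FPGA pipeline or its binary-tree normalization and elimination components.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define K 4              /* number of source symbols (small, for illustration) */
#define SYM_BYTES 4      /* payload bytes per symbol (assumed) */

/* Solve A * x = y over GF(2), where row i of A records which source symbols
 * were XORed into received packet y[i].  Every row operation is a plain XOR,
 * which is what makes GE amenable to pipelined hardware implementations. */
static int ge_decode(uint8_t A[K][K], uint8_t y[K][SYM_BYTES],
                     uint8_t x[K][SYM_BYTES])
{
    for (int col = 0; col < K; ++col) {
        /* pivot search: find a row at or below the diagonal with a 1 here */
        int pivot = -1;
        for (int r = col; r < K; ++r)
            if (A[r][col]) { pivot = r; break; }
        if (pivot < 0) return -1;          /* rank deficient: need more packets */

        /* swap the pivot row into place (coefficients and payload) */
        if (pivot != col) {
            for (int c = 0; c < K; ++c) {
                uint8_t t = A[col][c]; A[col][c] = A[pivot][c]; A[pivot][c] = t;
            }
            for (int b = 0; b < SYM_BYTES; ++b) {
                uint8_t t = y[col][b]; y[col][b] = y[pivot][b]; y[pivot][b] = t;
            }
        }

        /* eliminate this column from every other row with a single XOR */
        for (int r = 0; r < K; ++r) {
            if (r != col && A[r][col]) {
                for (int c = 0; c < K; ++c)        A[r][c] ^= A[col][c];
                for (int b = 0; b < SYM_BYTES; ++b) y[r][b] ^= y[col][b];
            }
        }
    }
    /* A is now the identity, so y holds the recovered source symbols */
    memcpy(x, y, sizeof(uint8_t) * K * SYM_BYTES);
    return 0;
}

int main(void)
{
    /* toy example: 4 received packets, each an XOR of some of 4 source symbols */
    uint8_t A[K][K] = {
        {1,0,1,0},
        {0,1,1,0},
        {0,0,1,1},
        {1,1,0,1},
    };
    uint8_t y[K][SYM_BYTES] = {
        {0xAA,0x00,0x00,0x01}, {0x55,0x00,0x00,0x02},
        {0x0F,0x00,0x00,0x03}, {0xF0,0x00,0x00,0x04},
    };
    uint8_t x[K][SYM_BYTES];

    if (ge_decode(A, y, x) == 0)
        for (int i = 0; i < K; ++i)
            printf("symbol %d: %02x %02x %02x %02x\n",
                   i, x[i][0], x[i][1], x[i][2], x[i][3]);
    return 0;
}
```

In this software model the per-row XORs run sequentially; the hardware techniques summarized above target exactly these steps, pipelining the normalization and elimination stages with binary-tree structures so that many row operations proceed in parallel.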