Abstract

Real-time transmission of seafloor video imposes severe challenges on underwater acoustic networks, which suffer from frequent transmission errors. In this work, we propose an error-resilient coding method based on convolutional neural networks and multiple description coding to combat packet losses during underwater video transmission. By exploiting inter-frame motion information, our convolutional neural network propagates regions of interest across frames, providing extra protection within the multiple description coding framework. To achieve a good tradeoff between coding efficiency and error resiliency, video sequences are split into two kinds of descriptions that are encoded under a bit-rate constraint. Simulation experiments on underwater video datasets verify the effectiveness of our approach at different packet loss rates in comparison with state-of-the-art video coding schemes.
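For readers unfamiliar with multiple description coding, the sketch below illustrates the general idea behind the second step described above: splitting a video sequence into two independently decodable descriptions (here, a simple even/odd temporal split, a common MDC baseline) and concealing a lost description from the one that arrives. The split strategy, concealment rule, and function names are illustrative assumptions only; the paper's actual description formation, ROI propagation, and rate allocation are not specified in the abstract.

```python
# Minimal sketch of two-description coding with an even/odd temporal split.
# Illustrative assumption, not the paper's actual scheme: the abstract does not
# specify how descriptions are formed, protected, or rate-allocated.
import numpy as np

def split_descriptions(frames):
    """Split a frame sequence into two descriptions (even- and odd-indexed frames)."""
    return frames[0::2], frames[1::2]

def merge_descriptions(even_desc, odd_desc):
    """Interleave both received descriptions back into the original frame order."""
    frames = []
    for e, o in zip(even_desc, odd_desc):
        frames.extend([e, o])
    return frames

def conceal_from_single_description(desc, total_frames):
    """If one description is lost, approximate each missing frame by repeating
    the nearest received frame (simple temporal error concealment)."""
    return [desc[min(i // 2, len(desc) - 1)] for i in range(total_frames)]

if __name__ == "__main__":
    # Synthetic "video": 8 frames of 4x4 pixels with increasing intensity.
    video = [np.full((4, 4), i, dtype=np.uint8) for i in range(8)]
    even_d, odd_d = split_descriptions(video)

    # Both descriptions received: exact reconstruction.
    assert all((a == b).all() for a, b in zip(merge_descriptions(even_d, odd_d), video))

    # Odd description lost in the channel: conceal using the even one.
    concealed = conceal_from_single_description(even_d, total_frames=len(video))
    print("Concealed frame means:", [int(f.mean()) for f in concealed])
```

Because each description can be decoded on its own, losing one description degrades quality gracefully instead of breaking the stream, which is the error-resiliency property the abstract trades off against coding efficiency.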
