Abstract

Deep joint source-channel coding (DJSCC) has received extensive attention in the communications community. However, its high computational cost and storage requirements prevent DJSCC models from being deployed effectively on embedded systems and mobile devices. Recently, convolutional neural network (CNN) compression via low-rank decomposition has achieved remarkable performance. In this paper, we conduct a comparative study of low-rank decomposition methods for reducing the computational complexity and storage requirements of Rate-Adaptive DJSCC, covering CANDECOMP/PARAFAC (CP) decomposition, Tucker (TK) decomposition, and tensor-train (TT) decomposition. We evaluate the compression ratio, speedup ratio, and Peak Signal-to-Noise Ratio (PSNR) performance loss of the CP, TK, and TT decompositions combined with fine-tuning and pruning. The experimental results show that, compared with TT decomposition, CP decomposition with fine-tuning incurs less PSNR degradation at the expense of a lower compression and speedup ratio.
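
To make concrete what compressing a CNN layer via low-rank decomposition involves, the sketch below applies a Tucker-2 (truncated HOSVD) factorization to a convolutional kernel and reports the resulting parameter compression ratio. This is not code from the paper: it is a minimal NumPy illustration, and the layer dimensions (256 x 128 x 3 x 3) and ranks (64, 32) are hypothetical.

    import numpy as np

    def unfold(tensor, mode):
        # Mode-n unfolding: move the given mode to the front and flatten the rest.
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    def tucker2_decompose(weight, rank_out, rank_in):
        # Tucker-2 (truncated HOSVD) of a conv kernel of shape (C_out, C_in, kH, kW),
        # factorizing only the output- and input-channel modes.
        U, _, _ = np.linalg.svd(unfold(weight, 0), full_matrices=False)
        V, _, _ = np.linalg.svd(unfold(weight, 1), full_matrices=False)
        U, V = U[:, :rank_out], V[:, :rank_in]
        # Core tensor: contract the channel modes with the truncated factors.
        core = np.einsum('oihw,or,iq->rqhw', weight, U, V)
        return U, V, core

    # Hypothetical layer size and ranks, chosen only for illustration.
    C_out, C_in, kH, kW = 256, 128, 3, 3
    W = np.random.randn(C_out, C_in, kH, kW)
    U, V, core = tucker2_decompose(W, rank_out=64, rank_in=32)

    # Reconstruction and bookkeeping: the three small factors replace the full kernel.
    W_approx = np.einsum('rqhw,or,iq->oihw', core, U, V)
    compression = W.size / (U.size + V.size + core.size)
    rel_error = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
    print(f"compression ratio: {compression:.1f}x, relative error: {rel_error:.3f}")

In a deployed network, the factorized kernel is typically realized as a 1x1 convolution (input-channel projection), a small rank_out x rank_in x kH x kW convolution (the core), and another 1x1 convolution (output-channel expansion), which is where the speedup comes from.
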
