We propose a novel method that addresses the most common degradations in real video sequences: noise, blur, flicker, and low contrast. The method relies heavily on the Discrete Cosine Transform (DCT) for both denoising and deblurring, which keeps it computationally efficient, and it incorporates classical strategies for tonal stabilization and low-light enhancement. To the best of our knowledge, this is the first unified framework that tackles all of these problems simultaneously. Compared to state-of-the-art learning-based methods for denoising and deblurring, our approach achieves better results while offering full interpretability, a smaller memory footprint, and lighter computational requirements, making it well suited for integration into mobile device processing chains.
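To give a flavor of what DCT-based denoising looks like in practice, here is a minimal, generic sketch of one classical building block: hard-thresholding the DCT coefficients of an image patch. This is an illustration of the general technique only, not the authors' pipeline; the patch size, threshold value, and test signal below are assumptions chosen for the example.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise_patch(patch, threshold):
    """Denoise one image patch by hard-thresholding its DCT coefficients.

    The DCT concentrates the energy of natural image content in a few
    large coefficients, while white noise spreads evenly across all of
    them; zeroing the small coefficients therefore removes mostly noise.
    """
    coeffs = dctn(patch, norm="ortho")        # 2-D DCT of the patch
    coeffs[np.abs(coeffs) < threshold] = 0.0  # drop low-energy (mostly noise) coefficients
    return idctn(coeffs, norm="ortho")        # inverse DCT back to pixel values

# Toy usage (hypothetical data): an 8x8 smooth gradient with added Gaussian noise.
rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 64).reshape(8, 8)
noisy = clean + 0.05 * rng.standard_normal((8, 8))
denoised = dct_denoise_patch(noisy, threshold=0.1)
```

In classical patch-based schemes this operation is applied to overlapping patches whose results are averaged; the orthogonal DCT also makes the transform perfectly invertible, so the only information lost is what the threshold removes.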