Abstract

In this paper, we introduce a new variational model for color image restoration, called DIP-VBTV, which combines two priors: a deep image prior (DIP), which assumes that the restored image can be generated by a neural network, and a Vector Bundle Total Variation (VBTV), which generalizes the Vectorial Total Variation (VTV) to vector bundles. VBTV is determined by a geometric triplet: a Riemannian metric on the base manifold, a covariant derivative, and a metric on the vector bundle. Whereas the VTV prior encourages restored images to be piece-wise constant, the VBTV prior encourages them to be piece-wise parallel with respect to a covariant derivative. For well-chosen geometric triplets, we show that minimizing VBTV encourages the solutions of the restoration model to share some visual content with the clean image. We then show experimentally that DIP-VBTV benefits from this property by outperforming DIP-VTV and state-of-the-art unsupervised methods, which demonstrates the relevance of combining the DIP and VBTV priors.
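
To make the structure of the model concrete, the following is a minimal sketch of the kind of energy such a combination minimizes; the notation (degradation operator A, observation f_0, network f_θ with fixed input z, weight λ > 0, covariant derivative ∇^E on a vector bundle E over a Riemannian base (M, g)) is generic and chosen here for illustration rather than taken verbatim from the paper:

\[
\min_{\theta}\; \big\| A\, f_{\theta}(z) - f_{0} \big\|_{2}^{2} \;+\; \lambda\, \mathrm{VBTV}\!\big(f_{\theta}(z)\big),
\qquad
\mathrm{VBTV}(u) \;=\; \int_{M} \big\| \nabla^{E} u \big\|\, d\mathrm{vol}_{g}.
\]

Under this reading, a section u satisfying \(\nabla^{E} u = 0\) (a parallel section) incurs zero VBTV cost, which is the sense in which the prior favors piece-wise parallel rather than piece-wise constant solutions.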
