Abstract

Even though the statistical theory of linear inverse problems is a well-studied topic, certain relevant cases remain open. Among these is the estimation of functions of bounded variation ($BV$), that is, $L^1$ functions on a $d$-dimensional domain whose weak first derivatives are finite Radon measures. The estimation of $BV$ functions is relevant in many applications, since it involves minimal smoothness assumptions and yields simplified, interpretable, cartoon-like reconstructions. In this paper, we propose a novel technique for estimating $BV$ functions in an inverse problem setting and provide theoretical guarantees by showing that the proposed estimator is minimax optimal up to logarithmic factors with respect to the $L^q$-risk, for any $q\in[1,\infty)$. To the best of our knowledge, this is the first convergence result for $BV$ functions in inverse problems in dimension $d\geq 2$, and it extends the results of Donoho (1995, Appl. Comput. Harmon. Anal., 2, 101–126) in $d=1$. Furthermore, our analysis reveals a novel regime for large $q$ in which the minimax rate is slower than $n^{-1/(d+2\beta+2)}$, where $\beta$ denotes the degree of ill-posedness; this slower rate arises from the low smoothness of $BV$ functions. The proposed estimator combines variational regularization techniques with the wavelet-vaguelette decomposition of operators.