We present a general kernel-based framework for learning operators between Banach spaces, along with a priori error analysis and comprehensive numerical comparisons with popular neural network (NN) approaches such as Deep Operator Networks (DeepONet) [46] and the Fourier Neural Operator (FNO) [45]. We consider the setting where the input/output spaces of the target operator G†:U→V are reproducing kernel Hilbert spaces (RKHS), the data comes in the form of partial observations ϕ(ui), φ(vi) of input/output functions vi=G†(ui) (i=1,…,N), and the measurement operators ϕ:U→Rn and φ:V→Rm are linear. Writing ψ:Rn→U and χ:Rm→V for the optimal recovery maps associated with ϕ and φ, we approximate G† with G¯=χ∘f¯∘ϕ, where f¯ is an optimal recovery approximation of f†:=φ∘G†∘ψ:Rn→Rm. We show that, even when using vanilla kernels (e.g., linear or Matérn), our approach is competitive in terms of cost-accuracy trade-off and either matches or beats the performance of NN methods on a majority of benchmarks. Additionally, our framework offers several advantages inherited from kernel methods: simplicity, interpretability, convergence guarantees, a priori error estimates, and Bayesian uncertainty quantification. As such, it can serve as a natural benchmark for operator learning.
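To make the pipeline G¯=χ∘f¯∘ϕ concrete, the following is a minimal sketch on a toy problem, not the paper's benchmarks: the target operator G† is taken to be the antiderivative map, the measurement operators ϕ and φ are point evaluations at equispaced sensors, and f¯ is kernel ridge regression in Rn with a linear kernel (one of the "vanilla" kernels mentioned above). The sensor count, mode count, and regularization are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 24      # number of sensors for both phi and varphi (toy choice)
N = 200     # number of training input/output pairs
x = np.linspace(0.0, 1.0, n)
modes = np.arange(1, 5)  # inputs built from 4 sine modes (toy choice)

def random_pair():
    # u(x) = sum_k a_k sin(k*pi*x);  v = G†(u)(x) = ∫_0^x u(s) ds
    #      = sum_k a_k (1 - cos(k*pi*x)) / (k*pi)
    a = rng.normal(size=modes.size)
    sin_basis = np.sin(np.pi * np.outer(modes, x))                       # (4, n)
    int_basis = (1 - np.cos(np.pi * np.outer(modes, x))) / (np.pi * modes[:, None])
    return a @ sin_basis, a @ int_basis                                  # phi(u), varphi(v)

pairs = [random_pair() for _ in range(N)]
U = np.stack([p[0] for p in pairs])   # (N, n): observed inputs phi(u_i)
V = np.stack([p[1] for p in pairs])   # (N, n): observed outputs varphi(v_i)

# f̄: kernel ridge regression R^n -> R^n with a linear kernel,
# fit componentwise across the n output sensors
lam = 1e-6
K = U @ U.T + lam * np.eye(N)         # Gram matrix + small ridge
alpha = np.linalg.solve(K, V)         # (N, n), one weight column per output sensor

# evaluate on a fresh input/output pair
u_test, v_test = random_pair()
v_pred = (u_test @ U.T) @ alpha       # k(u_test, U) @ alpha
rel_err = np.linalg.norm(v_pred - v_test) / np.linalg.norm(v_test)
print(f"relative test error: {rel_err:.2e}")
```

Because this toy G† is linear and the inputs live in a low-dimensional subspace seen by the sensors, the linear-kernel regression recovers the map essentially exactly; on nonlinear operators one would swap in a nonlinear kernel (e.g., Matérn) and compose the prediction with the recovery map χ to obtain a full function-valued output.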