As an extension of the standard paradigm of statistical learning theory, we introduce the concept of r-learnability, 0 < r ≤ 1, a notion closely related to that of nonexact oracle inequalities (see Lecué and Mendelson (2012) [7]). r-learnability enables so-called fast learning rates (together with corresponding sample-complexity bounds) to be established, at the cost of multiplying the approximation error term in the learning error estimate by an extra factor of (1+r). We establish a new, general r-learning bound (a nonexact oracle inequality) that yields fast learning rates in probability (up to at most a logarithmic factor) for proper learning in the general agnostic setting, assuming essentially only a uniformly bounded squared loss function and a hypothesis class of finite VC-dimension (that is, finite pseudo-dimension).
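To fix ideas, a nonexact oracle inequality of the kind described above typically takes the following schematic form. This is an illustrative sketch under assumed notation, not the exact statement of our bound: here $\hat f_n$ denotes the hypothesis learned from $n$ samples, $\mathcal{F}$ the hypothesis class, $R$ the risk, $d$ the pseudo-dimension, and $C$ a constant.

```latex
\[
  R(\hat f_n) \;\le\; (1+r)\,\inf_{f \in \mathcal{F}} R(f)
  \;+\; C\,\frac{d \log n}{r\, n}
  \qquad \text{with high probability.}
\]
```

By contrast, in an exact oracle inequality the approximation term $\inf_{f \in \mathcal{F}} R(f)$ appears with factor exactly 1, but the remainder term then typically decays only at the slow rate $n^{-1/2}$; relaxing the factor to $(1+r)$ is what permits the fast $\tilde{O}(1/n)$ rate.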