The body of this paper is written in terms of very general and abstract ideas which have been popular in pure mathematical work on the theory of probability for the last two or three decades. It seems to us that these ideas, so fruitful in pure mathematics, have something to contribute to mathematical statistics also, and this paper is an attempt to illustrate the sort of contribution we have in mind. The purpose of generality here is not to solve immediate practical problems, but rather to capture the logical essence of an important concept (sufficient statistic), and in particular to disentangle that concept from such ideas as Euclidean space, dimensionality, partial differentiation, and the distinction between continuous and discrete distributions, which seem to us extraneous.

In accordance with these principles the center of the stage is occupied by a completely abstract sample space, that is, a set $X$ of objects $x$, to be thought of as possible outcomes of an experimental program, distributed according to an unknown one of a certain set of probability measures. Perhaps the most familiar concrete example in statistics is the one in which $X$ is $n$ dimensional Cartesian space, the points of which represent $n$ independent observations of a normally distributed random variable with unknown parameters, and in which the probability measures considered are those induced by the various common normal distributions of the individual observations.

A statistic is defined, as usual, to be a function $T$ of the outcome, whose values, however, are not necessarily real numbers but may themselves be abstract entities. Thus, in the concrete example, the entire set of $n$ observations, or, less trivially, the sequence of all sample moments about the origin are statistics with values in an $n$ dimensional and in an infinite dimensional space, respectively. Another illuminating and very general example of a statistic may be obtained as follows. Suppose that the outcomes of two not necessarily statistically independent programs are thought of as one united outcome; then the outcome $T$ of the first program alone is a statistic relative to the united program.

A technical measure-theoretic result, known as the Radon-Nikodym theorem, is important in the study of statistics such as $T$. It is, for example, essential to the very definition of the basic concept of conditional probability of a subset $E$ of $X$ given a value $y$ of $T$. The statistic $T$ is called sufficient for the given set $\mathcal{M}$ of probability measures if (somewhat loosely speaking) the conditional probability of a subset $E$ of $X$ given a value $y$ of $T$ is the same for every probability measure in $\mathcal{M}$. It is, for instance, well known that the sample mean and variance together form a sufficient statistic for the measures described in the concrete example. The theory of sufficiency is in an especially satisfactory state for the case in which the set $\mathcal{M}$ of probability measures satisfies a certain condition described by the technical term dominated. A set $\mathcal{M}$ of probability measures is called dominated if each measure in the set may be expressed as the indefinite integral of a density function with respect to a fixed measure which is not itself necessarily in the set. It is easy to verify that both classical extremes, commonly referred to as the discrete and continuous cases, are dominated.
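To fix ideas, the definitions just described may be sketched in symbols; the notation here ($Y$ for the range of $T$, $p_E$, $\lambda$, $f_\mu$) is introduced only for illustration, and the measure-theoretic qualifications (measurability, $\sigma$-finiteness, identification of functions agreeing almost everywhere) are suppressed in this sketch. For a statistic $T$ carrying $X$ into a space $Y$, the conditional probability $p_E(y)$ of $E$ given the value $y$ of $T$ is any function satisfying
$$
\mu\bigl(E \cap T^{-1}(B)\bigr) \;=\; \int_B p_E(y)\, d(\mu T^{-1})(y) \qquad \text{for every measurable } B \subset Y,
$$
and its existence is furnished by the Radon-Nikodym theorem. The statistic $T$ is sufficient for $\mathcal{M}$ when, for each $E$, a single $p_E$ may be chosen which serves simultaneously for every $\mu$ in $\mathcal{M}$; and $\mathcal{M}$ is dominated when there is a fixed measure $\lambda$ with
$$
\mu(E) \;=\; \int_E f_\mu \, d\lambda \qquad \text{for every } \mu \text{ in } \mathcal{M},
$$
that is, when every $\mu$ in the set possesses a density $f_\mu = d\mu/d\lambda$.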
One possible formulation of the principal result concerning sufficiency for dominated sets is a direct generalization to the abstract case of the well-known Fisher-Neyman result: $T$ is sufficient if and only if the densities can be written as products of two factors, the first of which depends on the outcome through $T$ only and the second of which is independent of the unknown measure. Another way of phrasing this result is to say that $T$ is sufficient if and only if the likelihood ratio of every pair of measures in $\mathcal{M}$ depends on the outcome through $T$ only. The latter formulation makes sense even in the not necessarily dominated case, but unfortunately it is not true in that case. The situation can be patched up somewhat by introducing a weaker notion called pairwise sufficiency. In ordinary statistical parlance one often speaks of a statistic sufficient for some of several parameters. The abstract results mentioned above can undoubtedly be extended to treat this concept.
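In the same illustrative notation ($\lambda$, $f_\mu$ as above; the symbols $g_\mu$, $h$, $m$, $\sigma^2$, and $\bar{x}$ below are likewise ours), and again with the precise measure-theoretic qualifications suppressed, the factorization and likelihood-ratio formulations for a dominated set $\mathcal{M}$ read: $T$ is sufficient if and only if the densities can be written
$$
f_\mu(x) \;=\; g_\mu\bigl(T(x)\bigr)\, h(x),
$$
where the first factor depends on the outcome only through $T$ and the second factor $h$ is the same for every $\mu$ in $\mathcal{M}$; equivalently, for every pair $\mu_1, \mu_2$ in $\mathcal{M}$ the likelihood ratio $f_{\mu_1}(x)/f_{\mu_2}(x)$ depends on $x$ only through $T(x)$. The criterion may be checked directly in the concrete normal example: writing $m$ and $\sigma^2$ for the unknown mean and variance and $\bar{x}$ for the sample mean, the joint density of the $n$ observations is
$$
(2\pi\sigma^2)^{-n/2} \exp\Bigl\{-\tfrac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - m)^2\Bigr\}
\;=\; (2\pi\sigma^2)^{-n/2} \exp\Bigl\{-\tfrac{1}{2\sigma^2}\bigl[n(\bar{x} - m)^2 + \sum_{i=1}^{n}(x_i - \bar{x})^2\bigr]\Bigr\},
$$
which depends on the observations only through $\bar{x}$ and $\sum_{i=1}^{n}(x_i - \bar{x})^2$ (here $h(x) \equiv 1$), so that the sample mean and variance together do indeed form a sufficient statistic, as asserted earlier.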