This report presents the results of a study of mathematical models relating to the usage of information systems. For each of four models, the papers developed during the study provide three types of analyses: reviews of the literature relevant to the model, analytical studies, and tests of the models with data drawn from specific operational situations.

(1) The Cobb-Douglas model: $x_0 = a x_1^{b} x_2^{(1-b)}$. This classic production model, normally interpreted as describing the relationship between production, labor, and capital, is applied to a number of information-related contexts, specifically the performance of libraries, both public and academic, and the use of information resources by the nation's industry. The results not only confirm the utility of the Cobb-Douglas model in evaluating the use of information resources but also demonstrate the extent to which those resources are currently being used at significantly less than optimum levels.

(2) Mixture of Poissons: $x_0 = \sum_{i=0}^{n} i \sum_{j=0}^{p} n_j e^{-m_j} (m_j)^{i} / i!$, where $x_0$ is the usage and $(n_j, m_j)$, $j = 0$ to $p$, are the $p + 1$ components of the distribution. This model of heterogeneity is applied to the usage of library materials and of thesaurus terms. In each case, both the applicability and the analytical value of the model are demonstrated.

(3) Inverse effects of distance: $x = a e^{-md}$ if $c(d) = rd$; $x = a d^{-m}$ if $c(d) = r \log(d)$. These two models reflect different inverse effects of distance, the choice depending upon the cost of transportation: if the cost $c(d)$ is linear in distance, the usage is inverse exponential; if it is logarithmic, the usage is inverse power. The literature discussing the relationship between the usage of facilities and the distance from them is reviewed. The models are tested with data on the usage of the Los Angeles Public Library, both the Central Library and the branches, based on a survey of 3662 users.

(4) Weighted entropy: $S(x_1, x_2, \ldots, x_n) = -\sum_{i=1}^{n} r(x_i)\, p(x_i) \log(p(x_i))$. This generalization of the "entropy measure of information" is designed to accommodate the effects of "relevancy", as measured by $r(x)$, upon the performance of information retrieval systems. The relevant literature is reviewed and the application to retrieval systems is considered.
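As a minimal numerical sketch of the Cobb-Douglas form above (not drawn from the report's data), the function and parameter values below are hypothetical and chosen only to illustrate how the model is evaluated:

```python
# Hypothetical illustration of the Cobb-Douglas form x0 = a * x1**b * x2**(1 - b).
# The parameter values and input levels are invented for demonstration only.

def cobb_douglas(x1: float, x2: float, a: float = 1.0, b: float = 0.5) -> float:
    """Return output x0 for inputs x1 (e.g. labor) and x2 (e.g. capital)."""
    return a * x1**b * x2**(1 - b)

if __name__ == "__main__":
    # Doubling both inputs doubles the output (constant returns to scale),
    # a defining property of this form, since the exponents sum to 1.
    print(cobb_douglas(10, 40))   # 20.0
    print(cobb_douglas(20, 80))   # 40.0
```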
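The following sketch assumes the reconstructed reading of the mixture-of-Poissons formula, $x_0 = \sum_i i \sum_j n_j e^{-m_j} (m_j)^i / i!$; the component weights and means are hypothetical, and the function names are introduced here only for illustration:

```python
import math

def mixture_poisson_pmf(i: int, components: list[tuple[float, float]]) -> float:
    """Probability of observing usage i under a mixture of Poissons.

    components is a list of (n_j, m_j) pairs, the weight and mean of each
    component; the weights are assumed to sum to 1.
    """
    return sum(n_j * math.exp(-m_j) * m_j**i / math.factorial(i)
               for n_j, m_j in components)

def expected_usage(components: list[tuple[float, float]], n: int = 100) -> float:
    """Approximate x0 = sum_i i * P(i), truncating the sum at i = n."""
    return sum(i * mixture_poisson_pmf(i, components) for i in range(n + 1))

if __name__ == "__main__":
    # Two hypothetical components: light users (mean 0.5) and heavy users (mean 6).
    components = [(0.8, 0.5), (0.2, 6.0)]
    print(expected_usage(components))   # ~1.6, the weighted sum of the means
```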
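A sketch contrasting the two distance-decay forms, assuming the readings $x = a e^{-md}$ (linear transport cost) and $x = a d^{-m}$ (logarithmic transport cost); all parameter values are hypothetical:

```python
import math

def usage_exponential(d: float, a: float = 100.0, m: float = 0.3) -> float:
    """Inverse-exponential decay, associated with linear cost c(d) = r*d."""
    return a * math.exp(-m * d)

def usage_power(d: float, a: float = 100.0, m: float = 1.2) -> float:
    """Inverse-power decay, associated with logarithmic cost c(d) = r*log(d)."""
    return a * d**(-m)

if __name__ == "__main__":
    # The power form falls off faster at short distances but has a heavier tail.
    for d in (1, 2, 5, 10):
        print(d, round(usage_exponential(d), 1), round(usage_power(d), 1))
```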
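Finally, a sketch of the weighted-entropy measure, assuming the reconstructed form $S = -\sum_i r(x_i)\, p(x_i) \log(p(x_i))$; the probabilities and relevance weights below are hypothetical:

```python
import math

def weighted_entropy(p: list[float], r: list[float]) -> float:
    """Weighted entropy S = -sum_i r_i * p_i * log(p_i).

    p: probabilities of the items x_i (assumed to sum to 1)
    r: relevance weights r(x_i) for those items
    Terms with p_i == 0 contribute nothing, by the usual convention.
    """
    return -sum(r_i * p_i * math.log(p_i) for p_i, r_i in zip(p, r) if p_i > 0)

if __name__ == "__main__":
    p = [0.5, 0.3, 0.2]
    # With all weights equal to 1, the measure reduces to ordinary Shannon entropy.
    print(weighted_entropy(p, [1.0, 1.0, 1.0]))
    # Raising the weight of an item raises its contribution to the measure.
    print(weighted_entropy(p, [0.2, 0.8, 1.5]))
```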