The information contents of 143 biomedical journal articles were quantified by standardized criteria, emphasizing quantitative measurements and estimated labour investments. A hundredfold variability in article information contents was uncovered, yielding a Poisson distribution with a median (peak) value at about one-half of the sample mean. Two-thirds of the articles thus had information contents below that of the average scientific article, testifying to the somewhat excessive fragmentation of the primary scientific literature. The information contents of an article depended on three factors: (1) the number of pages, which rarely exceeded the upper limit set by the standard article format (7–8 pages); (2) the number of figures and tables per page, which likewise saturated at the standard-format value (one per page); and (3) the density of information packed into each figure and table, for which no upper limit was observed. The latter factor could therefore account for virtually all information content in excess of the standard article format. Differences in the information density of figures and tables were apparently not perceived by the peer reviewer, who tended to overestimate low-content articles and underestimate high-content articles. Furthermore, a model evaluation of the article authors indicated that evaluation by content quantification and evaluation by straight article counting may give different results. Since neither peer review nor publication counts satisfactorily detected differences in the information contents of scientific articles, objective content quantification would seem to be required for an exact and fair evaluation of scientific productivity.
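The claim that a majority of articles fall below the sample mean is a generic property of right-skewed distributions, where the median lies well below the mean. A minimal sketch of this effect, using a lognormal distribution purely as an assumed stand-in for the skewed spread described above (the parameters and sample size are illustrative, not the paper's data):

```python
import random
import statistics

# Simulate a right-skewed spread of per-article "information contents".
# The lognormal shape and sigma = 1.0 are assumptions for illustration only.
random.seed(0)
contents = [random.lognormvariate(0.0, 1.0) for _ in range(10_000)]

mean = statistics.fmean(contents)
median = statistics.median(contents)
below_mean = sum(c < mean for c in contents) / len(contents)

# In a right-skewed sample the median sits below the mean, so well over
# half of the articles have contents below the sample average.
print(f"median/mean ratio:    {median / mean:.2f}")
print(f"fraction below mean:  {below_mean:.2f}")
```

For this assumed lognormal shape, the median comes out noticeably below the mean and roughly two-thirds of the simulated articles fall below the sample average, consistent with the pattern the abstract reports.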