The Nature of the Firm

Economic theory has suffered in the past from a failure to state clearly its assumptions. Economists in building up a theory have often omitted to examine the foundations on which it was erected. This examination is, however, essential not only to prevent the misunderstanding and needless controversy which arise from a lack of knowledge of the assumptions on which a theory is based, but also because of the extreme importance for economics of good judgement in choosing between rival sets of assumptions. For instance, it is suggested that the use of the word “firm” in economics may be different from the use of the term by the “plain man.” [1] Since there is apparently a trend in economic theory towards starting analysis with the individual firm and not with the industry, [2] it is all the more necessary not only that a clear definition of the word “firm” should be given but that its difference from a firm in the “real world,” if it exists, should be made clear. Mrs. Robinson has said that “the two questions to be asked of a set of assumptions in economics are: Are they tractable? and: Do they correspond with the real world?” [3] Though, as Mrs. Robinson points out, “more often one set will be manageable and the other realistic,” yet there may well be branches of theory where assumptions may be both manageable and realistic. It is hoped to show in the following paper that a definition of a firm may be obtained which is not only realistic in that it corresponds to what is meant by a firm in the real world, but is tractable by two of the most powerful instruments of economic analysis developed by Marshall, the idea of the margin and that of substitution, together giving the idea of substitution at the margin.

Some Models for Estimating Technical and Scale Inefficiencies in Data Envelopment Analysis

In management contexts, mathematical programming is usually used to evaluate a collection of possible alternative courses of action en route to selecting one which is best. In this capacity, mathematical programming serves as a planning aid to management. Data Envelopment Analysis reverses this role and employs mathematical programming to obtain ex post facto evaluations of the relative efficiency of management accomplishments, however they may have been planned or executed. Mathematical programming is thereby extended for use as a tool for control and evaluation of past accomplishments as well as a tool to aid in planning future activities. The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs. A separation into technical and scale efficiencies is accomplished by the methods developed in this paper without altering the latter conditions for use of DEA directly on observational data. Technical inefficiencies are identified with failures to achieve best possible output levels and/or usage of excessive amounts of inputs. Methods for identifying and correcting the magnitudes of these inefficiencies, as supplied in prior work, are illustrated. In the present paper, a new separate variable is introduced which makes it possible to determine whether operations were conducted in regions of increasing, constant or decreasing returns to scale (in multiple input and multiple output situations). The results are discussed and related not only to classical (single output) economics but also to more modern versions of economics which are identified with “contestable market theories.”
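
The decomposition the abstract describes can be made concrete with a small linear program. The sketch below is not the paper's own formulation but a standard input-oriented envelopment model: solving it under constant returns to scale gives an overall (CCR) efficiency score, adding the convexity constraint sum(lam) = 1 gives a pure technical score, and the ratio of the two is the scale component. The toy data, the function name dea_efficiency, and the use of SciPy's solver are all illustrative assumptions.

```python
# Minimal input-oriented DEA sketch using scipy.optimize.linprog.
# X: (m inputs x n units), Y: (s outputs x n units). Illustrative data only.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o, vrs=False):
    """Efficiency of unit o: min theta  s.t.  X @ lam <= theta * X[:, o],
    Y @ lam >= Y[:, o], lam >= 0, and sum(lam) == 1 when vrs=True."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # decision vars: [theta, lam]
    A_ub = np.vstack([
        np.c_[-X[:, [o]], X],                   # inputs:  X @ lam - theta * x_o <= 0
        np.c_[np.zeros((s, 1)), -Y],            # outputs: -Y @ lam <= -y_o
    ])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    A_eq = np.c_[np.zeros((1, 1)), np.ones((1, n))] if vrs else None
    b_eq = np.array([1.0]) if vrs else None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return res.fun

# Toy example: 1 input, 1 output, 4 units.
X = np.array([[2.0, 4.0, 6.0, 3.0]])
Y = np.array([[1.0, 3.0, 4.0, 1.5]])
for o in range(X.shape[1]):
    ccr = dea_efficiency(X, Y, o)               # overall (CCR) efficiency
    bcc = dea_efficiency(X, Y, o, vrs=True)     # pure technical efficiency
    print(f"unit {o}: CCR={ccr:.3f}  BCC={bcc:.3f}  scale={ccr / bcc:.3f}")
```

A gap between the two scores flags scale inefficiency; the new variable the paper introduces goes further and classifies the operating region as one of increasing, constant, or decreasing returns to scale.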

Institutions, Institutional Change and Economic Performance

Examines the role that institutions, defined as the humanly devised constraints that shape human interaction, play in economic performance, how those institutions change, and how a model of dynamic institutions explains the differential performance of economies through time. Institutions are separate from organizations, which are assemblages of people directed toward strategic operation within institutional constraints. Institutions affect the economy by influencing, together with technology, transaction and production costs. They do this by reducing uncertainty in human interaction, albeit not always efficiently. Entrepreneurs accomplish incremental changes in institutions by perceiving opportunities to do better through altering the institutional framework of political and economic organizations. Importantly, the ability to perceive these opportunities depends on both the completeness of information and the mental constructs used to process that information. Thus, institutions and entrepreneurs stand in a symbiotic relationship in which each gives feedback to the other. Neoclassical economics suggests that inefficient institutions ought to be rapidly replaced. This symbiotic relationship helps explain why that theoretical consequence is often not observed: while the relationship allows growth, it also allows inefficient institutions to persist. The author identifies changes in relative prices and prevailing ideas as the sources of institutional alteration. Transaction costs, however, may keep relative price changes from being fully exploited. Transaction costs are influenced by institutions, and institutional development is accordingly path-dependent.

The concept of a linguistic variable and its application to approximate reasoning—I

By a linguistic variable we mean a variable whose values are words or sentences in a natural or artificial language. For example, Age is a linguistic variable if its values are linguistic rather than numerical, i.e., young, not young, very young, quite young, old, not very old and not very young, etc., rather than 20, 21, 22, 23, .... In more specific terms, a linguistic variable is characterized by a quintuple (L, T(L), U, G, M) in which L is the name of the variable; T(L) is the term-set of L, that is, the collection of its linguistic values; U is a universe of discourse; G is a syntactic rule which generates the terms in T(L); and M is a semantic rule which associates with each linguistic value X its meaning, M(X), where M(X) denotes a fuzzy subset of U. The meaning of a linguistic value X is characterized by a compatibility function, c: U → [0,1], which associates with each u in U its compatibility with X. Thus, the compatibility of age 27 with young might be 0.7, while that of 35 might be 0.2. The function of the semantic rule is to relate the compatibilities of the so-called primary terms in a composite linguistic value (e.g., young and old in not very young and not very old) to the compatibility of the composite value. To this end, the hedges such as very, quite, extremely, etc., as well as the connectives and and or, are treated as nonlinear operators which modify the meaning of their operands in a specified fashion. The concept of a linguistic variable provides a means of approximate characterization of phenomena which are too complex or too ill-defined to be amenable to description in conventional quantitative terms. In particular, treating Truth as a linguistic variable with values such as true, very true, completely true, not very true, untrue, etc., leads to what is called fuzzy logic. By providing a basis for approximate reasoning, that is, a mode of reasoning which is neither exact nor very inexact, such logic may offer a more realistic framework for human reasoning than the traditional two-valued logic. It is shown that probabilities, too, can be treated as linguistic variables with values such as likely, very likely, unlikely, etc. Computation with linguistic probabilities requires the solution of nonlinear programs and leads to results which are imprecise to the same degree as the underlying probabilities. The main applications of the linguistic approach lie in the realm of humanistic systems, especially in the fields of artificial intelligence, linguistics, human decision processes, pattern recognition, psychology, law, medical diagnosis, information retrieval, economics and related areas.
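
The operator view of hedges and connectives lends itself to a short sketch. The membership curves for young and old below are invented for illustration, and the operator definitions (very as squaring, not as complementation, and as pointwise min) follow conventions common in this literature; they may differ in detail from the paper's own semantic rules.

```python
# Sketch of a linguistic variable Age with primary terms "young" and "old".
# The compatibility curves are illustrative assumptions; hedges and
# connectives act as operators on compatibility functions c: U -> [0, 1].

def young(u):                      # compatibility of age u with "young"
    return max(0.0, min(1.0, (40.0 - u) / 20.0))   # 1 below 20, 0 above 40

def old(u):                        # compatibility of age u with "old"
    return max(0.0, min(1.0, (u - 50.0) / 20.0))   # 0 below 50, 1 above 70

very = lambda c: c ** 2            # concentration hedge: "very X"
not_ = lambda c: 1.0 - c           # complement: "not X"
and_ = min                         # connective "and" as pointwise min
or_  = max                         # connective "or" as pointwise max

# Composite linguistic value: "not very young and not very old"
def compat(u):
    return and_(not_(very(young(u))), not_(very(old(u))))

for age in (20, 27, 35, 45, 60):
    print(f"age {age}: young={young(age):.2f}  composite={compat(age):.2f}")
```

The composite value is computed purely from the compatibilities of the primary terms, which is exactly the role the abstract assigns to the semantic rule M.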

Macroeconomics and Reality

Existing strategies for econometric analysis related to macroeconomics are subject to a number of serious objections, some recently formulated, some old. These objections are summarized in this paper, and it is argued that taken together they make it unlikely that macroeconomic models are in fact overidentified, as the existing statistical theory usually assumes. The implications of this conclusion are explored, and an example of econometric work in a non-standard style, taking account of the objections to the standard style, is presented. The study of the business cycle, fluctuations in aggregate measures of economic activity and prices over periods from one to ten years or so, constitutes or motivates a large part of what we call macroeconomics. Most economists would agree that there are many macroeconomic variables whose cyclical fluctuations are of interest, and would agree further that fluctuations in these series are interrelated. It would seem to follow almost tautologically that statistical models involving large numbers of macroeconomic variables ought to be the arena within which macroeconomic theories confront reality and thereby each other. Instead, though large-scale statistical macroeconomic models exist and are by some criteria successful, a deep vein of skepticism about the value of these models runs through that part of the economics profession not actively engaged in constructing or using them. It is still rare for empirical research in macroeconomics to be planned and executed within the framework of one of the large models. In this lecture I intend to discuss some aspects of this situation, attempting both to offer some explanations and to suggest some means for improvement. I will argue that the style in which their builders construct claims for a connection between these models and reality, the style in which identification is achieved for these models, is inappropriate, to the point at which claims for identification in these models cannot be taken seriously. This is a venerable assertion; and there are some good old reasons for believing it; [2] but there are also some reasons which have been more recently put forth. After developing the conclusion that the identification claimed for existing large-scale models is incredible, I will discuss what ought to be done in consequence. The line of argument is: large-scale models do perform useful forecasting and policy-analysis functions despite their incredible identification; the restrictions imposed in the usual style of identification are neither essential to constructing a model which can perform these functions nor innocuous; an alternative style of identification is available and practical. Finally we will look at some empirical work based on an alternative style of macroeconometrics. A six-variable dynamic system is estimated without using ...

[1] Research for this paper was supported by NSF Grant Soc-76-02482. Lars Hansen executed the computations. The paper has benefited from comments by many people, especially Thomas J. Sargent.
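
The alternative style referred to here centers on an unrestricted vector autoregression: each variable is regressed on lags of every variable, with no exclusion restrictions doing identification work. A minimal sketch under assumed toy data; the simulated series, lag length, and names are illustrative stand-ins, not the paper's six-variable system.

```python
# Minimal unrestricted VAR(p) estimated equation-by-equation with OLS.
# Simulated data stands in for macro series; no identifying restrictions
# (exclusions of variables or lags) are imposed on any equation.
import numpy as np

rng = np.random.default_rng(0)
T, k, p = 200, 6, 4                              # sample size, variables, lags
Y = rng.standard_normal((T, k)).cumsum(axis=0)   # toy persistent series

# Build the regressor matrix: a constant plus p lags of all k variables.
rows = []
for t in range(p, T):
    lags = Y[t - p:t][::-1].ravel()              # [y_{t-1}, ..., y_{t-p}] stacked
    rows.append(np.r_[1.0, lags])
X = np.array(rows)                               # (T-p) x (1 + k*p)
Z = Y[p:]                                        # left-hand-side observations

# OLS for all k equations at once: B holds one column of coefficients per
# equation, and every equation shares the same unrestricted regressor set.
B, *_ = np.linalg.lstsq(X, Z, rcond=None)
resid = Z - X @ B
sigma = resid.T @ resid / (len(Z) - X.shape[1])  # innovation covariance
print("coef matrix:", B.shape, "innovation cov:", sigma.shape)
```

Because every equation contains the full lag set, nothing in the estimation stage rests on the doubtful exclusion restrictions the lecture criticizes; structure, if wanted, is imposed only afterward in interpreting the innovations.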
