Abstract

Jeff Biddle's recent book, Progress through Regression: The Life Story of the Empirical Cobb-Douglas Production Function, is a clearly told story of a theory and its implementation, from its first proposal as a log-linear empirical relation linking outputs to inputs by Charles Cobb and Paul Douglas (1928), through a plethora of highly critical, constructive, and supportive reactions, to its acceptance as a substantive production function relationship in a wide range of research areas. The developments, and the continued criticisms and responses, are carefully discussed, building on extensive archival research.

The book is divided into three parts. Part 1 has three chapters surveying Douglas's published accounts of his production research commencing in 1927. Chapter 1 concerns time series; chapter 2, cross sections; and chapter 3, the many challenges he faced. The name Cobb-Douglas derives from Douglas's first published time-series study with Cobb (Cobb and Douglas 1928), after which Cobb ceased to participate. Peter Lloyd (2001: 4) describes the earlier history of the equation, noting that Johann von Thünen (attributed to 1863) developed what was “a linearly homogeneous Cobb-Douglas production function written in intensive form.” However, the Cobb-Douglas relation was not called a production function until the late 1930s. Moreover, although Francis Galton (1886) introduced the concept of regression for a conditional expectation—in his case for reversion to the mean in heights of sons given heights of their fathers—it was not until 1940 that the word was used to describe empirical production functions. Ordinary least squares (OLS) is not necessarily a regression: in teaching elementary econometrics, I have students write their signature on a data graph and then fit an OLS line to it, to demonstrate that a fitted line need not be a conditional expectation.

By the time “production function regressions” had become the standard description of his research, Douglas had essentially ceased undertaking empirical work, first because of war service and then because in 1948 he was elected to the US Senate, serving for eighteen years. However, as part 2 describes, other econometricians took up the research baton, especially in agricultural economics, and indeed regression estimation of Cobb-Douglas production functions became ubiquitous. Calculating technical progress and its contribution to analyzing economic growth also became a major application; both are still ongoing. Part 3 concludes with the author's views on the success of Douglas's enterprise, why it happened, and how it might be evaluated.

The various advances and debates are set in their contexts with brief intellectual histories of the key protagonists. There is a wealth of detailed information about numerous applied studies across many areas of manufacturing and agriculture, numerous countries, micro and macro, time series, cross sections, and panels. Naturally, issues of data collection, measurement of variables like output, labor, and capital, and later technical progress, as well as index numbers, all need to be addressed.
The history also follows some of the many relevant developments in econometrics, focusing on those either initiated by or rapidly applied to estimating Cobb-Douglas production functions or their implications. To quote Professor Biddle:

The economists in my narrative who sought to estimate production functions faced a variety of challenges, which in many cases were specific forms of generic challenges that hindered empirical research in economics in the mid-twentieth century. The procedure required measures of inputs and outputs, measures that often had to be constructed from imperfect and incomplete statistical data. Throughout the period I examine, linear regression was the statistical method used to estimate production functions, but the decision to use linear regression necessitated a number of subsidiary decisions, such as the form to be taken by the estimating equation. Theory gave, at best, uncertain guidance to the economist making these decisions; this, combined with the quality of the available data, ensured that any decision made was vulnerable to criticism. (4)

One might say that little has changed. Indeed, the author highlights that many of the issues Paul Douglas faced in the mid-twentieth century remain, including accurately measuring both inputs and outputs, especially “quality” or technology adjusted; calculating relevant price indexes; and choosing appropriate methods of estimation for cross-section, time-series, and panel data sets, to which we must now add how to handle nonstationary observational time series confronting stationary economic theory, an issue discussed in the next section.

Part 1 provides careful and usually very detailed discussions of Douglas's many published papers, first from time-series and then from cross-section observations. There are useful discussions of the difficulties early empirical researchers faced from the lack of good, or sometimes any, data, especially of the effort involved in obtaining measures of capital and prices. At more than 120 pages, even a summary of Biddle's text in part 1 would be overly long, although the general theme is one of findings by Douglas being heavily debated, indeed often attacked, and Douglas responding to his critics partly by countering the most easily rebutted criticisms, modifying his approach somewhat to finesse others, and ignoring what he could not yet counter. Douglas tended to claim concordance across his empirical findings, though Biddle considers this rather a stretch. Often the debates concerned the implications of his findings for the then extant economic theory, concerning “equilibrium,” and marginal productivity theory and its relation to income distribution, although criticisms of lack of identification (particularly for cross-section studies) and of parameter estimation biases also abounded. Some critics, like David Durand (1937), also questioned the constancy over time of the coefficients of labor and capital, a problem I return to in section 3.

Biddle remarks that “the original time-series regression of the 1928 Cobb-Douglas paper arose out of Douglas's prior research interests and represented cutting-edge work in empirical economics” (4). Mostly agreed, except in one important aspect not discussed by Biddle: Douglas, and many of his critics, seemed to show no awareness of the potential problems posed by unit-root nonstationarity leading to nonsense regressions in processes with stochastic trends.
Before objecting that this point is anachronistic, recall that Louis Bachelier (1900) introduced random walks for speculative prices and, contemporaneously, Reginald Hooker (1901) sought to deal with stochastic trends and regime shifts (see, e.g., Hendry and Mary Morgan 1995). Udny Yule (1926) provided an understanding of nonsense regressions in static formulations as due to unit roots in time-series variables, and Bradford Smith (1926) proposed nesting specifications in levels and differences, which would serendipitously have solved the nonsense regressions problem (see Terence Mills [2011] for the rediscovery of this “lost” contribution). All of these publications predate Douglas's first study, which nevertheless just postulated static relations. While linking outputs to inputs would hardly be “nonsense,” failing to address possible dynamics could lead to poor estimates and underestimated standard errors, as illustrated in section 5 below. Since the 1980s, there have been massive advances in understanding and modeling unit-root nonstationarity through cointegration: Peter Phillips (1986) clarified the nonsense regressions problem, closely followed by the introduction of cointegration (see Rob Engle and Sir Clive Granger [1987] and Søren Johansen [1988]). Had Smith (1926) been followed up, static and dynamic representations could have been integrated, closing the circle by their link back to equilibrium correction.
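To make Yule's point concrete, here is a minimal simulation sketch (my illustration in Python with statsmodels, not anything from the book): regressing one random walk on an independent one in levels typically produces an apparently “significant” slope and a sizable R², both of which vanish in differences, as Smith's (1926) nesting would reveal. The seed and sample length are arbitrary choices.

```python
# A minimal sketch of the nonsense-regression problem: two independent
# random walks, unrelated by construction, regressed on each other in
# levels versus in differences.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1926)        # seed chosen for Yule's date
T = 157                                  # same length as an 1861-2017 sample

y = np.cumsum(rng.standard_normal(T))    # random walk 1
x = np.cumsum(rng.standard_normal(T))    # independent random walk 2

levels = sm.OLS(y, sm.add_constant(x)).fit()
print(f"levels:      t = {levels.tvalues[1]:5.1f}, R^2 = {levels.rsquared:.2f}")

# Differencing removes the unit roots, so the apparent "relation" vanishes.
diffs = sm.OLS(np.diff(y), sm.add_constant(np.diff(x))).fit()
print(f"differences: t = {diffs.tvalues[1]:5.1f}, R^2 = {diffs.rsquared:.2f}")
```

Exact numbers vary with the seed, but the levels t-ratio is routinely far outside conventional critical values despite the true relation being nonexistent.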
Biddle does note John Maurice Clark's (1928) criticism that the “Cobb-Douglas equation offered a good account of the ‘normal’ or long-run relationship between labor, capital, and output, but did a poor job of representing the impact of cyclical fluctuations in labor and capital utilization, which were governed by a ‘different law’” (31). However, Clark suggested cyclical adjustments rather than including dynamics, even though the major successes in discovering the key features affecting the properties of economic time-series data, by Yule (1927) (autoregressive processes) and Eugen Slutsky (1937) (moving averages: the Russian version was 1927), had appeared the previous year. In a similar vein, but rather later, Victor Smith (1945: 562) argued that “the statistical [production] function represents relationships that prevail in a dynamic, disequilibrium economy” but did not suggest adding dynamics to the relationship to account for that, perhaps not realizing that such an extended Cobb-Douglas equation could be solved for the “long run,” as I do in section 5 below.

Frank Knight emphasized the conflict between static theory and dynamic (historical) data, but wrongly deduced that statistical methods could not quantify theoretical concepts, an argument Douglas rightly dismissed. Biddle further illustrates the (still ongoing) battle between the “supremacy” of theory versus evidence using Douglas's joint papers with Martin Bronfenbrenner and with Grace Gunn. The former tried to link the estimated coefficients with neoclassical economics, whereas the latter sought better statistical methodology.

An interesting aside for this reviewer is that in the 1930s and 1940s about half of Douglas's coauthors were female, at a time when that was relatively rare, especially in economics, where half would still be a high proportion today in most English-speaking countries.

The Cobb-Douglas story was also one of evolving technicality, which Biddle describes as the rhetoric of “mechanical rules” displacing the rhetoric of, and trust in, “expert judgment,” using as a case study confluence analysis (proposed by Ragnar Frisch [1934]), whose “bunch maps” mainly relied on expert judgment. That issue concerned the direction of minimization, namely, which variable to treat as dependent given measurement errors in all regressors. Shortly after this discussion (in footnote 28), Biddle derives the parameter biases in the Gunn and Douglas study and finds they made the correct decision about the direction of minimization. A formal analysis would have been useful, both to show that those earlier results could be reproduced—as Hendry and Morgan (1995) did for a number of historical empirical studies—and to allow modern misspecification tests to be calculated: see Deborah Mayo (2010) on passing severe testing as an essential basis for valid inference, applied in section 5. The analysis of errors in variables versus errors in equations by Tjalling Koopmans (1937) greatly clarified that particular debate.

While the complete list of criticisms of production function estimation is vast, debates about empirical evidence and its role in economics have been ever present since Henry Moore (1911): essentially all key “economic functions” like consumption (as shown by Jim Thomas [1989]), investment (Dale Jorgenson [1967], and the debates over q following James Tobin [1969]), Phillips curves (as criticized by Robert Lucas [1976]), money demand, and so forth, have been subject to great debates. Notwithstanding numerous criticisms, the Cobb-Douglas program continued—as did the empirical endeavors for other relationships.

In his 1947 presidential address to the American Economic Association, Douglas claimed a “scientific schizophrenia” between marginal productivity theory under perfect competition in economic theory and the presence of wage bargaining theory in labor economics. However, “the importance of studying the general relationships that exist at the level of society as a whole because of the conditioning influence they exert on individual industries, shows a consistency through the years on an important methodological issue,” although at odds with the individualistic approaches then dominant at Cowles and Chicago (22–23). This leads to part 2.

Part 2, almost two hundred pages long, considers the diffusion of Cobb-Douglas regressions. Chapter 4 describes three case studies looking at why the popularity of Cobb-Douglas regressions grew after World War II. The first is its entry into econometrics textbooks by Gerhard Tintner (1952) (who had written about deriving production functions from farm records in Tintner 1944) and Lawrence Klein (1953); the second describes how the regression was adopted by researchers in agricultural economics (discussed below); and the third is the constant elasticity of substitution (CES) production function proposal by Kenneth Arrow et al. (1961), building on an earlier paper by Robert Solow (1956).
This enabled estimating the elasticity of substitution between capital and labor—which was constrained to unity in Cobb-Douglas and to zero in Wassily Leontief (1951) functions—and so allowed testing the Cobb-Douglas specification. En route, Hendrik Houthakker (1956) (who references Horst Mendershausen's highly critical 1938 article on Douglas's production function research) had established that aggregating microlevel Leontief production functions could lead to macro Cobb-Douglas. Biddle suggests CES may have caught on following John Hicks's (1939) and Roy Allen's (1963) mathematical representation of the neoclassical theory of value and distribution, in which the elasticity of substitution was a key parameter, highlighting the willingness of practitioners to use assumptions of perfect competition, optimization, and equilibrium to sustain estimates of CES.

Biddle also discusses two Phelps Brown (1957) critiques of Douglas's time-series regressions: first, that it was “improbable at first sight that one unchanging production function should fit a growing, changing economy over a run of years” and, second, that the good fit was due to “constant growth trends” of output, capital, and labor leading to serious multicollinearity (548–49). Not only does the second contradict the first, it is surprising coming from an economist who had witnessed two world wars and the Great Depression, so we will focus on the first. (Douglas's cross-section results were also criticized by Phelps Brown, but he mainly reiterated earlier critiques by Jan Tinbergen [1942] and Jacob Marschak and William Andrews Jr. [1944].) Technical change has certainly not proceeded at a constant rate with constant impacts over the last 120 years, nor has financial innovation, nor have major shocks been absent.

The second main source of nonstationarity, namely, shifts of data distributions, has also been important historically, and remains so. Location shifts are abrupt changes in the means of distributions (e.g., in the levels of a nontrending time series), such as the effects of major wars on unemployment, oil-crisis jumps in oil prices, Great Depressions or Great Recessions, and pandemic lockdowns on output. Location-shift nonstationarity was slowly unraveled from its precursor in Hooker 1901 and another “lost paper” by Smith (1929): it is remarkable that, in discussing forecasting the outcome of 1929 earlier that year, Smith foresaw shifts as the main imponderable. Econometricians and statisticians have since developed a plethora of methods for detecting shifts and parameter changes, but fewer for handling location shifts, although see Jennifer Castle et al. 2015. A location shift can be represented by a step indicator, the first difference of which is an impulse indicator, whereas its cumulation is a trend indicator, several of which are applied in section 5 below; a construction sketch follows.
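As a concrete illustration of that indicator algebra (my sketch, with an arbitrary sample length and shift date; it is not the Autometrics implementation used in section 5):

```python
# A minimal sketch of the three indicator types for a sample of length T:
# a step indicator, its first difference (an impulse indicator), and its
# cumulation (a trend indicator).
import numpy as np

T = 157                                   # sample length, e.g., 1861-2017
j = 80                                    # hypothetical shift date (index)

step = (np.arange(T) >= j).astype(float)  # 0,...,0,1,1,...: a location shift
impulse = np.diff(step, prepend=0.0)      # 1 only at t = j: its first difference
trend = np.cumsum(step)                   # 0,...,0,1,2,3,...: its cumulation

# Saturation pairs one such indicator with every feasible date j, yielding
# more candidate variables than observations for the selection algorithm.
trend_set = np.column_stack([np.cumsum(np.arange(T) >= jj) for jj in range(1, T)])
```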
Chapter 5 discusses the take-up of the Cobb-Douglas production function in agricultural economics, which kept associated research work alive while Douglas was in the Senate. This led to a refocusing on resource allocation by individual farmers, augmented by some experimental data, rather than on tests of marginal productivity and perfect competition. Many of the practitioners were trained in statistics, especially of a Fisherian variety, with Earl Heady as a key player and communicator. Research often aimed to improve farmers' resource allocation efficiency, so many studies used individual farm data. Nevertheless, Zvi Griliches (1957) demonstrated potentially substantial specification biases from omitting relevant variables and aggregating across heterogeneous inputs, with possible solutions. Irving Hoch (1962) sought to avoid such biases by “converting his theoretical model into a statistical model” (199), using panel data methods that allowed limited “change over time and also differences across firms” (197), an approach extended by Yair Mundlak (1978). Biddle shows that such research fueled an explosion in collecting panel data sets, an example where theory developments led to data improvements seeking to avoid biases from unmeasured variables.

The use of experimental data was believed at the time to avoid almost all of the statistical issues raised by critics of Douglas's methods of estimating production functions (210), perhaps unaware of the problem of hidden dependence biasing estimated standard errors downward, discovered by Fairchild Smith (1938).

Biddle concludes part 2 in chapter 6, which discusses the uses to which the regression was put by economists seeking to measure and explain economic growth, an area where the Cobb-Douglas production function played an important research role. This chapter has three sections: beginning with efforts over 1920–50 to measure and analyze growth; moving on to the overlapping period 1945–60 and concerns with interpreting the “residual” (i.e., from production function regressions) as total factor productivity; then considering growth accounting using productivity indexes.

Solow (1957) had proposed an important role for Cobb-Douglas production function research in measuring growth via technical progress, the first of several studies by him. That paper was also concerned to relax some of the assumptions of the Roy Harrod (1959)–Evsey Domar (1961) model. The residual was seen as “disembodied” progress, but the resulting estimated minimal effect of capital per worker on growth was questioned. Research from Scandinavia seeking to explain trend developments is illustrated, as are models where technical progress is embodied in each vintage of capital, as in Solow 1962. Although formal models are not discussed, Grayham Mizon (1974, 1977) estimated vintage-capital Cobb-Douglas production functions. Since humans are needed to invent, build, and run computers, embodied labor skills are also crucial, requiring education, research, and public health as well as learning by doing. Debates abounded, especially concerning issues of identifying disembodied and embodied progress.

The role of the Cobb-Douglas production function in growth accounting is also described, contrasted with productivity indexes. Carl Christ, discussing Irving Siegel (1961: 42), is quoted as arguing that “shifts of production functions are what productivity indexes are really about,” and Biddle seems to agree with Robert Barro (1999) that index numbers are better than Cobb-Douglas production functions for estimating growth—but they lost out because econometrics was seen as “superior” in allowing “inference.” A sketch of the residual calculation under Cobb-Douglas assumptions follows.
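For readers unfamiliar with the “residual,” here is a minimal growth-accounting sketch under Cobb-Douglas assumptions (my illustration; the series names and the capital share are placeholders, not estimates from the book):

```python
# With g, k, l the logs of output, capital, and labor, and alpha the capital
# share, the Solow (1957) "residual" is the part of output growth left
# unexplained by input growth: dg - alpha*dk - (1 - alpha)*dl.
import numpy as np

def solow_residual(g: np.ndarray, k: np.ndarray, l: np.ndarray,
                   alpha: float = 0.34) -> np.ndarray:
    """Total factor productivity growth implied by a Cobb-Douglas technology."""
    return np.diff(g) - alpha * np.diff(k) - (1.0 - alpha) * np.diff(l)
```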
There are a number of discussions concerning the many problems confronting empirical researchers measuring and collecting aggregate data on output, capital, and labor. Good data are manifestly essential to empirical research, a theme that recurs in the many debates described by Biddle, highlighted by Griliches (1985) in his response to Oscar Morgenstern (1950) questioning the accuracy of economic observations. Econometricians are often passive players in data collection. As I complained in Hendry 1980, billions of dollars are spent on space exploration gleaning a few observations, but little on furthering economic understanding through better data, though Douglas and his coauthors had to expend considerable effort in adapting and “cleaning” what was available. Arthur Bowley and Josiah Stamp are mentioned by Biddle as cited by Douglas, but their contributions to data creation as invaluable precursors are not described. Geoff Tily (2009) discusses the ideas of Bowley (1895, 1913) and Alfred Flux (1924, 1929), who both pioneered the census of production to create a measure from the supply side and estimated the national income, together with Stamp (1916) (see also Bowley and Stamp 1927). Even earlier, Alfred Marshall (1890) had considered an aggregate idea of national income, leading to the modern measure of gross domestic product (GDP). Colin Clark's (1932, 1937) efforts greatly advanced the creation of national income accounts somewhat before Simon Kuznets (1937, 1946): Alexander Millmow (2021) provides an excellent discussion of Clark's contributions. During World War II, there were other major developments in the UK in the provision of macroeconomic data, where James Meade and Richard Stone, working with John Maynard Keynes, created national income accounts to help guide resource decisions.

The construction of appropriate price indices was key to these steps. The first widely adopted aggregate price indices based on weighted averages of price relatives for bundles of goods were proposed by Étienne Laspeyres (1871) and Hermann Paasche (1875), using different baselines (a sketch of both follows below). Much later, Irving Fisher (1921) and François Divisia (1926), still predating Cobb-Douglas, and then Leo Törnqvist (1936) and Erwin Diewert (1976, 1978), inter alia, all made proposals to improve indices, including chaining rather than intermittently changing the base period. Price indices and implicit deflators are essential for calculating “real” aggregates and, incidentally, camouflage changes in their components and weights, so that models using an aggregate can look constant even when all the components in the index are changing (see Hendry 1996). Aggregation is often seen as a drawback, but this benefit is important, as is the variance reduction from modeling linear aggregates in logs (see Hendry 1995). While the interpretations of coefficients in aggregate relations may not directly relate to economic theories, these benefits would have accrued to aggregate Cobb-Douglas estimates.
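As a minimal sketch of those two baselines (with made-up two-good data; Fisher's [1921] “ideal” index is their geometric mean):

```python
# Laspeyres (1871) and Paasche (1875) indices: weighted averages of price
# relatives using base-period versus current-period quantities as weights.
import numpy as np

def laspeyres(p0, p1, q0):
    """Base-period quantities q0 weight the price change p0 -> p1."""
    return np.sum(p1 * q0) / np.sum(p0 * q0)

def paasche(p0, p1, q1):
    """Current-period quantities q1 weight the price change."""
    return np.sum(p1 * q1) / np.sum(p0 * q1)

def fisher(p0, p1, q0, q1):
    """Fisher's 'ideal' index: the geometric mean of the two."""
    return np.sqrt(laspeyres(p0, p1, q0) * paasche(p0, p1, q1))

# Two goods whose prices and consumption baskets both shift:
p0, p1 = np.array([1.0, 2.0]), np.array([1.5, 2.2])
q0, q1 = np.array([10.0, 5.0]), np.array([8.0, 7.0])
print(laspeyres(p0, p1, q0), paasche(p0, p1, q1), fisher(p0, p1, q0, q1))
```

The two indices generally differ whenever relative prices and quantities move together, which is precisely why chaining and “ideal” compromises were later proposed.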
Computation is not explicitly discussed, yet it is equally essential to progress beyond simple OLS applied to a few variables on a small sample of data. Louis Bean (1929) reports that it took eight hours to calculate a four-variable regression fitted to thirty data points. Both “Student” (1908) and Yule (1926) had to undertake their simulation studies manually. Consequently, empirical research was laborious, both in collecting appropriate data and in calculating estimates of postulated relationships between them. Indeed, I think the approach to empirical research of fitting theory-based equations derives from that earlier epoch when computation was time intensive. Increased computer power and associated improved software were essential to the expansion of empirical modeling postwar, though many modern publications still fail to reference the software used (see Klein 1987, Doornik and Hendry 1999, and Charles Renfro 2009 for histories of econometric computing).

The aim of this section is to illustrate for aggregate United Kingdom data, using modern computer power and software (that I think Douglas would have loved to use), what estimates of a Cobb-Douglas regression look like when taking account of dynamics, cointegration, and multiple shifts in trends, combined with rigorous misspecification testing. However, to slightly mimic the historical progress, I begin with the simplest Cobb-Douglas regression. Output, $G_t$, is measured by real GDP; the total capital stock in the UK, $K_t$, is calculated by cumulating gross investment and assuming a rate of scrapping and obsolescence; and $L_t$ is measured by total full-time employment. Lowercase denotes logs. The role of $K_t$ should be measured by the flow of “quality adjusted” capital inputs, taking account of scrapping and the differing efficiencies of different technology vintages, but such data are not available. There are only limited historical data on hours worked, which fell greatly over 1860–2017, or on paid holidays, which rose considerably, and even less on “human capital” from embodied skills and knowledge, which rose considerably with increased education and greater female labor force participation. Thus, all forms of technical progress, improved knowledge, and increased education must be “picked up” by catchall trends, representing our ignorance as “not-explained-elsewhere.”

Figure 1 shows the three-dimensional relationship over time between $(g-l)_t$ and $(k-l)_t$, emphasizing the large distortions in the interwar and postwar periods, but not showing any long-run departure overall from constant returns to scale, represented by the straight line. As might be anticipated given such a graph, estimating a static Cobb-Douglas specification with a constant trend is not very successful and delivers:

$$
\widehat{(g-l)}_t = \underset{(0.14)}{4.5} + \underset{(0.03)}{0.78}\,(k-l)_t + \underset{(0.0004)}{0.002}\,t \tag{1}
$$
$$
\hat{\sigma} = 0.058 \quad R^2 = 0.991 \quad F_{\mathrm{ar}}(2,153) = 376.9^{**} \quad F_{\mathrm{arch}}(1,156) = 273.4^{**}
$$
$$
\chi^2_{\mathrm{nd}}(2) = 50.9^{**} \quad F_{\mathrm{het}}(7,153) = 7.4^{**} \quad F_{\mathrm{reset}}(2,153) = 27.72^{**} \quad T = 1861\text{–}2017
$$

In (1), coefficient standard errors are in parentheses, $\hat{\sigma}$ is the residual standard deviation, $R^2$ the squared multiple correlation, $F_{\mathrm{ar}}$ tests residual autocorrelation (proposed by Leslie Godfrey [1978]), Engle's (1982) $F_{\mathrm{arch}}$ tests autoregressive conditional heteroskedasticity, Halbert White's (1980) $F_{\mathrm{het}}$ tests residual heteroskedasticity, $\chi^2_{\mathrm{nd}}(2)$ tests nonnormality (see Doornik and Hansen 2008), and James Ramsey's (1969) $F_{\mathrm{reset}}$ tests nonlinearity. The double asterisks denote significance at 1 percent or less: all the misspecification tests strongly reject their nulls. Moreover, the coefficient of $(k-l)_t$ at 0.78 is highly discrepant from the share of capital in GDP. A high $R^2$ combined with massive residual autocorrelation were the signs Granger and Paul Newbold (1974) took to indicate “spurious” (aka nonsense) relations. The $\pm 2\hat{\sigma}$ value of almost 12 percent of GDP suggests much remains to be explained.
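For comparison, here is a minimal sketch of how such a static regression and its misspecification battery might be coded today (my Python/statsmodels analogue, not the PcGive computations behind (1); g_l and k_l are assumed pandas Series of log output and log capital per worker, and statsmodels' Jarque-Bera test stands in for the Doornik-Hansen normality test):

```python
# Static Cobb-Douglas regression with a linear trend, plus tests analogous
# to F_ar, F_arch, the normality test, F_het, and F_reset reported in (1).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import (acorr_breusch_godfrey, het_arch,
                                          het_white, linear_reset)
from statsmodels.stats.stattools import jarque_bera

def static_cobb_douglas(g_l: pd.Series, k_l: pd.Series):
    """OLS of (g - l)_t on a constant, (k - l)_t, and a linear trend."""
    X = sm.add_constant(pd.DataFrame({"k_l": k_l.values,
                                      "trend": np.arange(len(k_l))}))
    res = sm.OLS(g_l.values, X).fit()
    tests = {
        "F_ar (Godfrey 1978)":     acorr_breusch_godfrey(res, nlags=2),
        "F_arch (Engle 1982)":     het_arch(res.resid, nlags=1),
        "normality (Jarque-Bera)": jarque_bera(res.resid),
        "F_het (White 1980)":      het_white(res.resid, np.asarray(X)),
        "F_reset (Ramsey 1969)":   linear_reset(res, power=3, use_f=True),
    }
    return res, tests
```

Such a battery takes seconds where Bean's (1929) calculation took eight hours, underlining how computation shaped what earlier researchers could feasibly test.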
To deal jointly with the lack of dynamic adjustments to the many large shocks and the nonconstant rate of all the unmodeled sources of technical change, I selected a model from a general formulation with one lag of $(g-l)$ and $(k-l)$, using trend-indicator saturation (TIS), which allows for a potential trend shift at every point in time (see Walker et al. 2019 for a description and an application to health care management), selected by the machine-learning software Autometrics, which can handle more candidate variables than observations (see Doornik and Hendry 2021, which also explains the misspecification tests reported above and their key role in model selection). The economics variables were initially retained while the trend indicators were selected at 0.01 percent. Combining neighboring equal-magnitude, opposite-sign trend indicators, correcting for two large outliers, and then selecting the economics variables at 1 percent delivered equation (2).

In (2), $\tau_{\{t \le abcd\}}$ denotes a trend indicator that equals $t$ up to time $abcd$ and is zero thereafter, $1_{\{abcd\}}$ is an impulse indicator equal to unity at time $abcd$ and zero otherwise, and $t_{\mathrm{ur}}$ is the PcGive unit-root t-test in Anindya Banerjee and Hendry (1992), with critical values in Neil Ericsson and James MacKinnon (2002), which strongly rejects the presence of a unit root, so the variables cointegrate. No misspecification test rejects, and the long-run solved coefficient of $(k-l)$ is 0.34, consistent with a labor share of about two-thirds (a sketch of the long-run solution appears at the end of this section). The $R^2 = 0.999$ simply reflects the trending behavior, and is 0.5 when $\Delta(g-l)$ is the dependent variable.

The trend indicators show that otherwise unexplained growth was about 0.5 percent per annum lower prior to World War II than after, but fell again after both 1955 and 2006, the latter signaling the start of the stagnant productivity period immediately before the Great Recession and beyond. Figure 2a records the fitted and actual values from (2); and figure 2b, the derived indicator adjustment path, which makes clear the differing rates of growth over time not captured by the economics variables.

Is (2) a production function? No matter how efficient producers might be, they almost certainly cannot adapt instantaneously to sudden, especially unanticipated, changes. The estimated adjustment to the implicit long-run solution after any disequilibrium is almost 40 percent per annum, which seems reasonable given the potential need to order, obtain, and install capital equipment and to hire (or fire) workers with the appropriate skills. Firms and farmers clearly use inventories of inputs and outputs to smooth production, but pandemic and supply-chain shocks have highlighted their limitations: the ellipse around the data on $(g-l)$ for 2018–20 (a period for which I lack capital data) shows the great crash in output from the lockdowns imposed in response to the COVID-19 pandemic. That huge fall, and the complications of the UK furlough scheme for measuring labor inputs, will need an impulse indicator added to (2) when the sample is extended—correcting the estimated Cobb-Douglas production function rather than impugning it.
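To sketch the “long-run solution” logic used above (in illustrative ADL(1,1) notation; equation (2) itself also contains the trend and impulse indicators), consider

$$
(g-l)_t = \beta_0 + \beta_1 (k-l)_t + \beta_2 (k-l)_{t-1} + \lambda (g-l)_{t-1} + \varepsilon_t .
$$

Setting all variables to constant steady-state values and solving gives

$$
(g-l)^{*} = \frac{\beta_0}{1-\lambda} + \frac{\beta_1 + \beta_2}{1-\lambda}\,(k-l)^{*},
$$

where $1-\lambda$ is the per-annum speed of adjustment to any disequilibrium. The estimates discussed above correspond to $1-\lambda \approx 0.4$ and a long-run coefficient $(\beta_1+\beta_2)/(1-\lambda) \approx 0.34$.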
In chapter 7, his final chapter, Biddle considers whether the Cobb-Douglas regression was “successful” in the sense that it became widely used and generally accepted. To quote: “In introducing the Cobb-Douglas regression, Douglas was motivated by a broader idea: that stable, quantifiable relationships between the inputs to and outputs of production processes existed and could be discovered through the application of regression [sic] analysis to statistical data, and that knowledge of such relationships was relevant to important questions of economic theory and policy” (299).

New empirical ventures can but commence from contemporaneous knowledge, unaware of major unknowns that may thwart their efforts and occasionally cause laughter in retrospect at what they found. This applies in even the “hardest” of the natural sciences—Lord Kelvin initially underestimated the age of the Earth by more than 99 percent, in conflict with geological evidence, because at the time no one knew about radioactivity. In Hendry (1987: 29–30), I proposed four prescriptions for successful empirical econometric research: (a) think brilliantly; (b) failing that, be infinitely creative; (c) if neither of those, be outstandingly lucky; and (d) otherwise stick to being a theorist. I would now amend (c) from “if neither of those” to “and.” While Paul Douglas had both (a) and (b), he was also lucky: much has been discovered about the properties of economic time series unknown at the time he commenced his pioneering research, but these discoveries happened not to vitiate his findings. He faced serious and pertinent criticisms, many of which he simply finessed or ignored, as he could not rebut them, but he persevered nonetheless, more in the spirit of Imre Lakatos (1974) than Karl Popper (1963).

So why have Cobb-Douglas-type regressions become ubiquitous? Biddle argues that finding sensible empirical estimates in line with economists' views was necessary for success, but not sufficient; he suggests four other enabling factors, which then became mutually reinforcing. Biddle acknowledges that other explanations are possible. Innovations perhaps usually attract followers, and their success or failure may occur for different reasons: there are X-rays but not N-rays, so the latter failed (see, e.g., en.wikipedia.org/wiki/N-ray). Even more usually, new ideas attract critics. For example, Charles Darwin's theory of evolution by natural selection was violently attacked, but prevailed over time as it matched so much evidence about the world—which in turn attracted some powerful supporters, like Thomas Huxley. Alternatively, some brilliant ideas and innovations languish for many years before being seen as important—Gregor Mendel comes to mind. There is also Max Planck's view that “science advances one funeral at a time” (or perhaps retirement), as more senior academics and scientists are naturally conservative, having invested much of their career in earlier ideas.

Had Douglas always found “uninterpretable” results like (1), or, even worse, the unrestricted version,

$$
\hat{g}_t = \underset{(1.0)}{0.35} + \underset{(0.09)}{0.56}\,l_t + \underset{(0.05)}{0.97}\,k_t - \underset{(0.002)}{0.004}\,t \tag{3}
$$

the enterprise might have foundered early on. The existence of a relation between inputs and outputs was perhaps the most fundamental requirement for the success of any enterprise seeking to discover its form. That granted, the Cobb-Douglas production function lives on.
