Do Computers Follow Rules Once Followed by Workers?
Bjorn Westergard

In his 2014 paper "Polanyi's Paradox and the Shape of Employment Growth," economist David Autor puts forward a very general historical thesis (emphasis added):

    When a computer processes a company's payroll, alphabetizes a list of names, or tabulates the age distribution of residents in each U.S. Census enumeration district, it is "simulating" a work process that would, in a previous era, have been done by humans using nearly identical procedures. The principle of computer simulation of workplace tasks has not fundamentally changed since the dawn of the computer era. But its cost has. … This remarkable cost decline creates strong economic incentives for firms to substitute ever-cheaper computing power for relatively expensive human labor, with attendant effects on employers' demand for employees.

How could a historian of computing adjudicate this claim? How can we determine whether the procedures used by humans and computers are similar, let alone "nearly identical"? Bound up with this framing of the issue is Autor's assertion that workers' inability to articulate the rules they follow when carrying out a task is an impediment to writing software to automate it, and his suggestion that this impediment might be overcome with machine learning techniques, which putatively infer these "tacit rules" from a wealth of examples. Underwriting this view is a theory—henceforth, "the ALM theory"—first laid out by Autor, Levy, and Murnane in "The Skill Content of Recent Technological Change" (2003) and The New Division of Labor (2004), which builds upon Michael Polanyi's epistemology and its attendant conceptions of rule following.
The ALM theory was developed in response to an economic literature that argued that adoption of computer technology—at the level of the industry, firm, or worksite—increases demand for the labor of those with a postsecondary education at the expense of those without. It was thought that in the race between education (supplying computer-complementary skills) and technology (creating demand for them), technology had prevailed and would continue to prevail, driving up the wage premia of more educated workers.1 This "canonical model" of "skills-biased technical change" employed a binary classification scheme of "more- and less-skilled workers, often operationalized as college- and non-college-educated workers." As the 1990s wore on, economists found slowing growth in the college wage premium and nonmonotonic inequality growth difficult to account for in this framework. Subtler distinctions needed to be drawn.2 For these, economists pursuing the "task approach" looked to databases of job descriptions, such as the Department of Labor's Dictionary of Occupational Titles and its successor O*NET, to "[measure] the tasks performed in jobs rather than the educational credentials of workers performing those jobs."3 They would conclude, contrary to the existing skill-biased technical change literature, that beginning in the late 1970s, computerization had issued in "job polarization," or "the simultaneous growth of high-education, high-wage and low-education, low-wage jobs."4 The task approach drops the assumption that educational attainment determines work activity in favor of two production functions: one characterizing how labor and computer capital inputs combine to perform tasks, another characterizing how task performances combine to produce outputs (i.e., goods, services).
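The two-level structure can be given a simple formal sketch. The specification below is illustrative, in the spirit of the Cobb-Douglas model Autor, Levy, and Murnane use in their 2003 paper; the symbols here (Q for output, L_R and L_N for routine and nonroutine labor, C for computer capital) are chosen for exposition:

```latex
% Routine labor L_R and computer capital C are perfect substitutes
% in performing routine tasks; nonroutine labor L_N performs
% nonroutine tasks. Task performances then combine (here,
% Cobb-Douglas) to produce output Q:
Q = (L_R + C)^{1-\beta} \, L_N^{\beta}, \qquad 0 < \beta < 1
```

On this sketch, a fall in the price of computer capital induces firms to substitute C for L_R in routine tasks, while raising the marginal product of (and thus the demand for) nonroutine labor L_N—the formal counterpart of the ALM hypothesis described below.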
The firm is taken to be a locus of task assignment and execution in which managers play a key role in "organizing tasks into jobs."5 The heart of the ALM theory, which is meant to provide an interpretation of the data collected using the "task approach," is the "ALM hypothesis":6

    (1) that computer capital substitutes for workers in carrying out a limited and well-defined set of cognitive and manual activities, those that can be accomplished by following explicit rules (what we term "routine tasks"); and (2) that computer capital complements workers in carrying out problem-solving and complex communication activities ("nonroutine tasks").

In addition to being "routine" or "nonroutine," tasks are also either "manual" or "cognitive." Example classifications include record keeping, calculation, and repetitive customer service (routine cognitive); medical diagnosis, legal writing, and managing others (nonroutine cognitive); picking/sorting and repetitive assembly (routine manual); and janitorial work, truck driving, and removing paper clips from documents7 (nonroutine manual).