Abstract

It is an exciting time to work in operations management. Advances in theory and methods, including behavioral operations, dynamic modeling, experimental methods, and field studies, provide new insights into challenging operational contexts. Yet the world of operations continues to change rapidly, creating new and difficult challenges for scholars. Increasingly, operations management requires theory, models and empirical methods to address the cross-functional, interdisciplinary character of modern operational systems and the complex nonlinear dynamics these systems generate. The OM research community has a long tradition of dynamic modeling, going back at least to the pioneering work of Forrester (1958) and Holt et al. (1960). These innovators recognized that even core processes in organizations, such as production and supply chain management, involve critical feedbacks with other organizational functions and with other organizations and actors, including customers, suppliers, workers, competitors, financial markets, and others. They recognized that these interactions and feedbacks often involve significant time delays, nonlinearities, information distortions, and behavioral responses that cause dysfunctional, suboptimal behavior and slow learning and process improvement. The challenge, however, has been to develop, articulate and test parsimonious theories to explain the behavior of complex systems, to test policies for improvement, to implement these in real organizations, and to assess their impact over time.

Forrester's insight was to use ideas from control theory to map and explain industrial problems (Forrester, 1958, 1961; Richardson, 1991, traces the history of feedback control and systems theory from the Greeks through the development of nonlinear dynamics). Forrester's first system dynamics model explained persistent oscillations of production and sales in a manufacturing supply chain. Forrester's model integrated aspects of operations that had not previously been considered: limited information flow across organizations and functions within organizations; delays in gathering information, making decisions, and implementing those decisions and realizing their impact; and the behavioral, sometimes suboptimal decision rules managers used to make inventory and production decisions at each level of the supply chain. Forrester (1961) also integrated advertising and consumer behavior into the model, expanding the boundary of analysis beyond the conventional inventory theory of the time. Forrester's goals were broader than explaining an important operations issue; rather, he created a general approach to the dynamic modeling of any management system, indeed of any dynamic system, along with the conceptual and software tools to develop, test, and improve behavioral, dynamic models of human systems and to implement the recommendations arising from them.

Soon after the publication of Industrial Dynamics, these concepts were applied to a variety of contexts, first in management, and soon after to ecological, urban, and societal problems, among others. By the late 1960s the breadth of the field led to a name change, from industrial dynamics to system dynamics (SD), and the growth of a vibrant field of study, taught around the world (see, e.g., http://systemdynamics.org). There are many conceptual overlaps and synergies between OM/OR and SD; these can be traced to the origins and stated goals of both fields (see Lane, 1997; Größler et al., 2008).
Here we focus on the methodological elements of SD that are most distinctive and relevant to the OM community. First, system dynamics models are structural, behavioral representations of systems. The behavior of a system arises from its structure. That structure consists of the feedback loops, stocks and flows, and nonlinearities created by the interaction of the physical and institutional structure of a system with the decision-making processes of the agents acting within it (Forrester, 1961; Sterman, 2000). The physical and institutional structure of a model includes the stock and flow structures of people, material, money, information, and so forth that characterize the system. The decision processes of the agents refer to the decision rules that determine the behavior of the actors in the system. The behavioral assumptions of a simulation model describe the way in which people respond to different situations, for example, the way managers perceive inventory, forecast demand, assess the delivery time for materials, and use these perceptions and expectations to schedule production, hire workers, adjust prices, and so on. Accurately portraying the physical and institutional structure of a system is relatively straightforward. In contrast, discovering and representing the decision rules of the actors is subtle and challenging. To be useful, simulation models must mimic the behavior of the real decision makers so that they respond realistically, even when they deviate from optimality, and those decision rules must be globally robust so that the simulated actors behave appropriately, not only for conditions observed in the past but also for circumstances never yet encountered. SD models therefore have much in common with models in the behavioral operations tradition (see Bendoly et al., 2010a,b for a partial review): in both communities, decision makers are boundedly rational, rely on heuristics, and are often influenced by emotion and stressors that affect physiological arousal.

Second, SD models capture disequilibrium. Since different decision processes govern the inflows to and outflows from the stocks that characterize the state of the system, disequilibrium is the rule rather than the exception (Sterman, 2000). For example, the rate at which customers arrive at a hospital emergency department, or place orders for new products, differs from the rate at which patients are treated or orders are fulfilled, leading to queues and delays in medical treatment or to wait lists of unsatisfied customers. The reactions of actors to these imbalances create feedbacks, both negative and positive, that then alter the rates of flow. If the negative feedbacks are strong and swift, the system may quickly settle to an equilibrium. If, however, there are long delays in the negative feedbacks, the system may oscillate; if there are positive feedbacks, the system may become locally unstable (for example, if a wait list triggers fear of shortages people may order more, lengthening the wait list still further; see Sterman and Dogan in this issue). Modelers should not presume that a system has an equilibrium or that any equilibria are stable. Instead, SD modelers represent the processes through which decision makers respond to situations in which the states of the system differ from their goals. Model analysis then reveals whether these decision rules, interacting with one another and with the physical structure, result in stable or unstable behavior. Equilibria, and the ability of a system to reach them, are emergent properties of the dynamic system, not something to be assumed.
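To make the disequilibrium point concrete, the following minimal sketch in Python (all numbers and the simple capacity adjustment rule are purely illustrative assumptions, not drawn from any of the cited models) simulates a queue whose inflow and outflow are governed by different processes. When the negative feedback that adjusts service capacity acts quickly, a surge in arrivals is worked off; when capacity adjustment is delayed, the same surge produces a much larger backlog and oscillation.

```python
# Illustrative sketch: a queue (e.g., waiting patients or unfilled orders) with
# a delayed negative feedback that adjusts service capacity. Parameters assumed.
def simulate_queue(capacity_adjustment_delay, weeks=80, dt=0.25):
    capacity = 100.0        # customers that can be served per week
    queue = 50.0            # customers currently waiting
    desired_queue = 50.0    # target backlog management tries to maintain
    peak_queue = queue
    t = 0.0
    while t < weeks:
        arrivals = 100.0 if t < 10.0 else 120.0     # sustained 20% surge at week 10
        # cannot serve more customers than are present this step
        treated = min(capacity, queue / dt + arrivals)
        queue += dt * (arrivals - treated)
        # management wants to serve arrivals and work off excess queue over ~4 weeks,
        # but capacity only adjusts toward that goal after a delay
        desired_capacity = arrivals + (queue - desired_queue) / 4.0
        capacity += dt * (desired_capacity - capacity) / capacity_adjustment_delay
        peak_queue = max(peak_queue, queue)
        t += dt
    return peak_queue, queue

for delay in (2.0, 16.0):
    peak, final = simulate_queue(delay)
    print(f"capacity adjustment delay = {delay:>4} weeks: "
          f"peak queue = {peak:.0f}, queue at week 80 = {final:.0f}")
```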
Third, SD stresses the importance of a broad model boundary. Research shows decisively that people's mental models have narrow boundaries, omitting most of the feedbacks and interactions that generate system behavior (see, e.g., Sterman, 2000, and the law of Prägnanz, a fundamental principle of Gestalt perception reinforcing our tendency to simplify the world; e.g., Sternberg, 2003). We tend to assume cause and effect are closely related in space and time, ignoring the distal and delayed impacts of decisions. The result is policy resistance: the tendency to implement policies that fail or, more insidiously, that work locally or in the short run, only to worsen performance elsewhere or later (Meadows, 1989; Sterman, 2000). Although the sensitivity of model results to uncertainty in parameter values is important, and system dynamics uses a wide range of tools to assess such uncertainty, both model behavior and policy recommendations are typically far more sensitive to the breadth of the model boundary than to uncertainty in parametric assumptions. SD modelers are therefore also trained to challenge the boundary of models, both mental and formal, to consider feedbacks far removed from the symptoms of a problem in space and time. For example, models of traffic flow with exogenous trip origination, destination and departure times typically show that expanding highway capacity (adding lane-miles, optimizing traffic light timing, etc.) will relieve congestion. Expanding the model boundary to include endogenous changes in the number and type of trips, trip timing, transport mode choice, and settlement patterns will show that expanding highway capacity is ineffective as people respond to lower initial congestion levels by taking more trips, driving instead of using mass transit, and moving farther from their jobs (Sterman, 2000, Chapter 5); a stylized simulation of this induced-demand feedback is sketched below.

Fourth, SD models are developed and tested through grounded methods. SD and operations management modelers strive to capture the interactions among the elements of a system as they exist in the real world. The resulting models should reflect operational thinking (Richmond, 1993), that is, they should capture the physical structure of the system, the institutional structure that governs information flows and incentives, and the behavioral decision rules of the actors. These must all be tested empirically. Grounded methods, in this context, refers to empirical methods spanning the spectrum from ethnographic work for theory development, to experimental studies, to formal econometric estimation of model parameters and confidence intervals, hypothesis testing, and other statistical tests. The application of these methodological principles often results in complex models with dozens of interactions and significant time delays that integrate multiple data sources of different kinds (e.g., panel datasets, archival data, interviews, surveys, participant observation, laboratory experiments, and so on). The result is both a better theory of the structure of the system and a formal model. Usually that model cannot be solved in closed form and so must be simulated. Simulation enables rigorous tests of the ability of the theory to explain the problematic phenomenon and can be used to evaluate and rank policy options, carry out wide-ranging parametric and structural sensitivity tests, and optimize performance.
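Returning to the traffic example above, the following sketch (all values are illustrative assumptions, and the simple behavioral rule for trip-making is ours, not taken from the cited sources) contrasts the two model boundaries. With exogenous trip demand a capacity expansion permanently relieves congestion; once trip-making responds endogenously to congestion, the relief is largely eroded.

```python
# Illustrative induced-demand sketch: congestion relief from a capacity expansion
# under exogenous vs. endogenous trip demand. Parameters are assumptions.
def simulate_traffic(endogenous_demand, years=30, dt=0.25):
    capacity = 100.0               # trips per hour the network carries at free flow
    trips = 110.0                  # current peak-hour trips
    tolerated_congestion = 1.2     # congestion level drivers are willing to live with
    t = 0.0
    while t < years:
        if abs(t - 5.0) < dt / 2:
            capacity *= 1.5        # one-time 50% expansion of highway capacity at year 5
        congestion = trips / capacity
        if endogenous_demand:
            # drivers add trips, switch from transit, and move farther out until
            # congestion creeps back up to the level they tolerate
            trips += dt * 0.3 * trips * (tolerated_congestion - congestion)
        t += dt
    return trips / capacity

for label, endogenous in (("exogenous demand", False), ("endogenous demand", True)):
    print(f"{label}: congestion index after 30 years = {simulate_traffic(endogenous):.2f}")
```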
Much of the leading-edge research in operations management is evolving along similar lines. Increasingly, OM scholars are expanding their models to include behavioral decision making, explicit consideration of dynamics, broader model boundaries encompassing multiple decision makers and organizations (e.g., supply chain coordination; interactions of operations, marketing and pricing), and performance criteria beyond profit maximization (e.g., working conditions and environmental sustainability). With this special issue we highlight relevant developments in system dynamics and empirical studies in operations management, focusing on the increasing alignment between them and complementarities that may lead to mutual benefit in new research. In the next sections we single out those areas of collaboration informed by the articles in this special issue.

As discussed above, Forrester (1958, 1961) developed the first integrated supply chain model, showing how limited information and bounded rationality interact with the physics of production and distribution to explain the persistent oscillation in supply chains and the amplification of disturbances up the chain, phenomena that continue to vex operations managers today. Sterman (1989) used an experimental setting (the Beer Distribution Game) to estimate empirically a simple, behaviorally grounded decision rule, showing how "misperceptions of feedback" (mental models with narrow boundaries and short time horizons, specifically the failure to recognize feedbacks, time delays, accumulations and nonlinearities) led to the oscillations and amplification seen in real supply chains, thus articulating an endogenous behavioral theory of the causes of the bullwhip effect. Later experimental studies, including Croson and Donohue (2006), Wu and Katok (2006), Croson et al. (2014), Paich and Sterman (1993), and Diehl and Sterman (1995), to name just a few, have demonstrated how dysfunctional behavior arises endogenously through the interplay of human decision-making heuristics with systems characterized by feedbacks, accumulations, time delays, limited information and other structural features of supply chains. Others have explored the interactions between feedback and behavioral response to empirically examine the evolution of trust, or its breakdown, among supply chain players, for example, Autry and Golicic's (2010) analysis of relationship-performance spirals.

In this issue, three papers expand on this experimental tradition. The paper by Sterman and Dogan uses a laboratory experiment with the beer game to explore the causes of hoarding (endogenous accumulation of excessive safety stock) and phantom ordering (endogenous accumulation of excessive on-order inventory) often seen in real supply chains as managers seek to defend themselves against erratic customer demand and poor supplier performance. The authors analyze the data collected in the experiment of Croson et al. (2014), which showed significant oscillation and amplification in a supply chain even when customer demand was completely constant and that fact was common knowledge. They use online questionnaire responses, econometric estimation, and analysis of outlier behavior to generalize the ordering heuristic used in such work since Sterman (1989) to show when endogenous hoarding and phantom ordering are likely to emerge.
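The ordering heuristic at the core of this experimental tradition can be sketched compactly. The single-stage simulation below is a stylized illustration of an anchoring-and-adjustment rule: order the demand forecast, correct the inventory gap, and account for a fraction beta of the supply line of unfilled orders. The parameter values, the demand step, and the one-stage setup are our assumptions for illustration, not the estimates reported in the cited studies; underweighting the supply line (low beta) produces the over-ordering and amplification discussed above.

```python
# Stylized anchoring-and-adjustment ordering rule for one supply chain stage.
# All parameter values are illustrative assumptions.
def simulate_stage(beta, weeks=40, delay=3):
    """beta is the weight the decision maker places on the supply line."""
    theta, alpha_s, desired_inventory = 0.25, 0.4, 12.0
    forecast, inventory, backlog = 4.0, 12.0, 0.0
    pipeline = [4.0] * delay            # orders in transit; pipeline[0] arrives this week
    orders_placed = []
    for week in range(weeks):
        demand = 4.0 if week < 4 else 8.0            # one-time step in customer demand
        supply_line = sum(pipeline)                  # everything ordered but not yet received
        inventory += pipeline.pop(0)                 # receive this week's delivery
        shipped = min(inventory, demand + backlog)   # fill new and backlogged orders
        inventory -= shipped
        backlog += demand - shipped
        forecast = theta * demand + (1 - theta) * forecast   # adaptive expectations
        order = max(0.0,
                    forecast
                    + alpha_s * (desired_inventory - (inventory - backlog))
                    + alpha_s * beta * (delay * forecast - supply_line))
        pipeline.append(order)
        orders_placed.append(order)
    return orders_placed

for beta in (1.0, 0.2):
    orders = simulate_stage(beta)
    print(f"beta={beta}: peak order placed = {max(orders):.1f} "
          f"(customer demand never exceeds 8.0)")
```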
Similarly, Weinhardt, Hendijani, Harman, Steel and Gonzalez use a lab experiment to explain people's difficulties with the stock management problem (Sterman, 1989). Prior work (e.g., Booth Sweeney and Sterman, 2000; Cronin et al., 2009) demonstrates that even highly educated elites with substantial STEM education do not understand the basic principles of accumulation (stocks and flows). This result has been broadly replicated. The challenge now is to understand why such "stock-flow failure" occurs and how it can be overcome (a simple numerical illustration of the principle appears below). Weinhardt et al. build on recent empirical work examining the importance of alternative cognitive styles in managing complex systems (e.g., Bendoly, 2014; Moritz et al., 2013). The authors draw on the psychology literature to consider how different cognitive styles, including global- vs. local-thinking and analytical orientation (both measured by a cognitive reasoning test), affect performance in widely used stock-and-flow tasks. They find that subjects scoring higher in analytical thinking exhibit better performance, while global vs. local orientation has no impact. These results provide some guidance for educational and other interventions that may improve understanding of accumulations. They also note that performance remained rather low even among those scoring well in analytical thinking, consistent with prior work suggesting that the failure to understand accumulation processes is deeply embedded in human cognition, similar to the problems people have in understanding probability even after extensive schooling.

The paper by Liu, Mak and Rapoport examines the evolution of coordination in a complex system, specifically a traffic network. The cover story for their experiment is traffic flowing through a road network (a directed graph). Such networks exist not only in transportation systems but also in many common operations contexts, such as jobs flowing through a factory or orders flowing among different suppliers in a supply web. The work is grounded in the observation that many such networks benefit from coordination, both because an individual's use of particular arcs imposes congestion externalities on others and because shared use of arc capacity creates opportunities for collective gains. The experiment considers not only how different conditions affect steady-state performance but also the behavioral dynamics of learning and improvement as participants respond to outcome feedback, which then alters their behavior in the next round and thus the state of the system they and other actors experience. The authors find that the existence of an intermediate equilibrium choice greatly benefits performance. They also find evidence suggesting that "strategic teaching" by farsighted players helps shift group decision making towards the socially efficient equilibrium (consistent with the group-wise system-thinking effects found in Bendoly, 2014).

While these studies help illuminate the behavioral operations literature, it should be noted that the role of system dynamics studies in supply chains is not limited to the consideration of decision failures by planners. Recent studies such as Sawhney (2006) argue that flexibility built into a supply network can foster a virtuous cycle of improved supply chain relationships. Related work by Holweg et al. (2005) and Cooke and Rohleder (2006) uses case-data-driven feedback simulations to explore the benefits of flexibility. Choi et al. (2012) examine a "decoupling points" strategy proposed for use in conjunction with postponement tactics to accommodate certain types of variability in demand and production.
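As a simple numerical illustration of the accumulation principle behind the stock-and-flow tasks discussed above (the inflow and outflow series here are invented for illustration), note that a stock keeps rising as long as its inflow exceeds its outflow: it peaks when the net flow crosses zero, not when the inflow peaks, which is precisely the distinction participants in these tasks commonly miss.

```python
# Made-up inflow/outflow series illustrating when a stock peaks.
inflow  = [4, 8, 12, 10, 6, 2, 2, 2]   # units entering per period
outflow = [5, 5, 5, 5, 5, 5, 5, 5]     # units leaving per period
stock = 20
history = []
for i, o in zip(inflow, outflow):
    stock += i - o                      # the stock accumulates the net flow
    history.append(stock)

print("stock trajectory:", history)
print("inflow peaks in period", inflow.index(max(inflow)) + 1,
      "but the stock peaks in period", history.index(max(history)) + 1)
```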
Project management, a core area of operations management, remains troubled. Despite decades of research and the proliferation of widely used tools and approaches to project management (e.g., Gantt, PERT, CPM, PRINCE2, spiral, adaptive, agile, lean, etc.), projects are routinely LEW (Late, Expensive and Wrong); that is, they are delivered late, run over budget, suffer low quality, and fail to meet customer requirements. LEW afflicts projects large and small, software and construction, standard and unique, private sector and public. System dynamics has long been applied to understand and improve project management, beginning with the path-breaking work in the dispute between Ingalls Shipbuilding and the US Navy (Cooper, 1980; Sterman, 2000, Chapter 2; Lyneis and Ford, 2007, review the extensive SD literature on project management). System dynamics models of projects provide endogenous explanations for LEW dynamics, including the impact of common disruptions such as late customer changes; delays in design or construction approvals; labor and materials bottlenecks; inadequate coordination and communication between supplier and customer and across phases of the project; and others. Project dynamics are conditioned not only by the "physics," such as delays in discovering rework as prototypes are built and testing is carried out, but, importantly, by behavioral processes such as the "liar's club," in which known defects are concealed from others and from management (Ford and Sterman, 2003). These phenomena and the many feedbacks in projects often amplify apparently small and innocent scope or schedule changes, causing large ripple effects that lead to delay and disruption. SD models capturing a wide array of such behaviorally grounded feedbacks are now widely used both in dispute resolution and in proactive management to improve project performance and avoid disputes (e.g., Godlewski et al., 2012). In the operations management literature, the roles of exogenous shocks (Bendoly and Cotteleer, 2008) and feedback mechanisms (Bendoly and Swink, 2007; Bendoly et al., 2010a,b) have similarly been shown to have short- and long-term implications for resource allocation in projects (see also Bendoly et al., 2014).

In this issue, Parvan, Rahmandad and Haghani expand our understanding of project dynamics by empirically estimating, for the first time, the strength of critical feedback processes conditioning project outcomes using a sample of design and construction projects. They use half their sample to estimate the parameters that govern the feedback interactions between the design and construction phases of their project model, including error rates, productivity, and delays in discovering errors that lead to rework. They find that error rates are typically higher in the design phase than in construction and that design errors take longer to discover. They also find that these factors explain up to 20% of project cost variability and that the estimated model does a good job of predicting cost and schedule outcomes outside the estimation sample. Their findings provide clear managerial guidelines for effort allocation, a rigorous representation of project risk, and a method that can be applied in other project domains.
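At the heart of these project models is the rework cycle: work is done at some rate, a fraction of it contains errors, and those errors remain undiscovered for a time before flowing back into the backlog of work to do. The sketch below illustrates that structure; the parameter values are purely illustrative assumptions, not the estimates reported by Parvan, Rahmandad and Haghani, and the single-phase setup omits the design-construction interactions they study.

```python
# Minimal single-phase rework cycle. All parameters are illustrative assumptions.
def project_rework(error_rate=0.3, discovery_delay=8.0, work_rate=5.0,
                   scope=100.0, dt=0.25, max_time=200.0):
    work_to_do = scope
    undiscovered_rework = 0.0
    work_done = 0.0
    t = 0.0
    while work_done < 0.99 * scope and t < max_time:
        progress = min(work_rate, work_to_do / dt)          # tasks worked per week
        done_correctly = progress * (1.0 - error_rate)
        done_with_errors = progress * error_rate
        rework_discovered = undiscovered_rework / discovery_delay
        work_to_do += dt * (rework_discovered - progress)    # discovered errors return
        undiscovered_rework += dt * (done_with_errors - rework_discovered)
        work_done += dt * done_correctly
        t += dt
    return t

naive_estimate = 100.0 / 5.0     # scope / work rate, ignoring rework entirely
print(f"naive schedule: {naive_estimate:.0f} weeks, "
      f"with the rework cycle: {project_rework():.0f} weeks")
```

Even this stripped-down version shows why schedules based only on scope and productivity understate duration: undiscovered rework keeps resurfacing long after the work would otherwise appear to be done.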
Service operations and the challenge of continuous improvement have long been core concerns of operations management scholars and practitioners. A number of studies examine the motivating role of workloads (e.g., Schultz et al., 1998, 1999). In some cases authors have been able to empirically identify strong nonlinear relationships between workload shifts and performance over an extended timeframe of activity (e.g., Linderman et al., 2006, in quality improvement contexts; Bendoly, 2013, in revenue management). Still others have focused on the role learning opportunities can play in evolving performance capabilities over time. In a healthcare context, Kc et al. (2013) provide such an example. At the intersection of these findings is research showing the complex interplay between evolving capabilities and workload, illustrating the dynamics of optimal performance over time and the influence of management (Bendoly and Prietula, 2008).

Similarly, system dynamics scholars have explored the dynamics of service delivery, quality, and process improvement in settings where stress, interruptions and other conditions affect the motivation of front-line workers and the productivity and quality of their work. The literature is large: Levin et al. (1976), Oliva (2001), Oliva and Sterman (2001, 2010), and Martinez-Moyano et al. (2014) explore the determinants of service quality. A major line of work on "capability traps" (beginning with Repenning and Sterman, 2001, 2002) explores self-reinforcing dynamics whereby short-run pressure for output leads to longer hours, corner-cutting, and cuts in maintenance, safety, training, and the investment in process improvement needed to build the capabilities required for long-run success. Capability traps arise in diverse settings, including high-hazard operations such as the oil and chemical industries (Repenning and Sterman, 2001, 2002), energy efficiency and sustainability (Sterman, 2015; Lyneis and Sterman, forthcoming), product development (Repenning, 2001), aviation safety (Rudolph and Repenning, 2002), organizational growth (Perlow et al., 2002), health care (Rudolph et al., 2009; Homer and Hirsch, 2006), corporate strategy (Gary, 2010; Rahmandad, 2012), and others.

In this issue, Chuang and Oliva work in this tradition to explore the impact of labor availability on retailer inventory record inaccuracy (IRI). They show how poor record quality can trigger a reinforcing feedback that might cause progressive deterioration in product availability, sales, and resources for improvement. Their empirical study estimates the strength of these effects; the results inform the development of a dynamic model, which they use to assess the impact of different operational errors and policies for improvement on record accuracy and organizational performance. The last section of the paper explores the strength of the feedback created by the extra work IRI generates and the resulting possibility of a death spiral.
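The reinforcing logic of the capability trap, and of the death spiral Chuang and Oliva examine, can be sketched in a few lines. In the illustrative fragment below, the functional forms and all parameter values are our assumptions rather than results from any of the cited studies: a performance shortfall shifts effort away from improvement, and when that response is strong enough, an initial erosion of capability is never recovered and the decline feeds on itself.

```python
# Illustrative capability trap: pressure cuts improvement effort, eroding capability.
def capability_trap(pressure_sensitivity, years=20, dt=0.1):
    capability = 100.0               # index of process capability
    target_output = 100.0
    t = 0.0
    while t < years:
        output = capability          # output tracks capability in this toy model
        pressure = max(0.0, (target_output - output) / target_output)
        # time spent on improvement falls as performance pressure rises
        improvement_effort = max(0.0, 0.2 * (1.0 - pressure_sensitivity * pressure))
        # capability is built by improvement effort and decays without it
        capability += dt * (50.0 * improvement_effort - 0.1 * capability)
        if abs(t - 2.0) < dt / 2:
            capability *= 0.8        # a one-time 20% erosion of capability at year 2
        t += dt
    return capability

for sensitivity in (0.0, 1.5):
    print(f"pressure sensitivity {sensitivity}: "
          f"capability after 20 years = {capability_trap(sensitivity):.0f}")
```

In the first run the shock is worked off; in the second, the same shock tips the system into continuing decline, the signature of the trap.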
Finally, Morrison builds on the capability trap literature to explore the tension between innovation by front-line workers and process improvement. Through an extensive ethnographic study, Morrison examines the process improvement initiative of a manufacturing firm. As is standard in improvement programs, the firm sought to engage front-line employees in the improvement process. However, the improvement program increased time pressure on workers and created bottlenecks for the engineers and other specialized personnel needed to implement the improvement suggestions. Faced with these pressures, the employees improvised well-intentioned workarounds to address the resource shortages. Many of these, because they were successful, eased the pressure for improvement and masked the underlying process problems that impeded sustainable productivity gains. Morrison uses his detailed field work to develop a parsimonious model integrating resource allocation, the demands of the improvement program, and the workarounds created by employees. In a series of elegant experiments he uses the model to uncover the sources of resource shortages, the role of workarounds as a solution to front-line problems, and their unintended effect of masking weaknesses in the underlying system. Like previous work in the capability trap literature, Morrison's model shows how the well-intended actions of different actors can interact with each other and with the physical operating system to trap the system in low performance and thwart improvement.

This special issue necessarily spans a small subset of the work at the intersection of operations management and system dynamics. We believe there is much to gain for members of both fields in additional work to explore the complementarities and synergies illustrated by the papers here. Yet this special issue also has a broader, and possibly even more important, objective: to expand the way we think and theorize about operations management. Most empirical research in operations has taken a variance-based theory approach to understanding operations management. This approach "examines relationships between contextual variables, the use of practices and the associated performance outcomes" (Sousa and Voss, 2008, p. 698). OM scholars have drawn on contingency theory to better understand how context influences the performance benefits and implementation of well-established practices (e.g., Zhang et al., 2012; Sousa, 2003). This "comparative statics" approach (Pettigrew et al., 2001) suppresses the dynamics that are central to the performance of operations and persistently vexing to practitioners and theorists alike. Static analyses help us understand what works but not how things work. In contrast, the methodological principles outlined above and illustrated by the papers in this special issue focus on the development of process-based theories that explicitly incorporate the physical and institutional structure of operational systems and the behavioral decision rules of the actors in those systems, that adopt a broad boundary incorporating multiple feedbacks, time delays, nonlinearities and accumulation processes, and that use grounded methods to develop and test the models. The resulting theories provide empirically tested, robust models with which to design policies for improvement, interactive virtual worlds to catalyze learning among the actors needed to implement those policies, and methods to assess the results. It is our hope that the articles in this special issue motivate additional collaboration between scholars and practitioners in operations management and system dynamics.
