Abstract

In the 50 years since the 1961 paper by Little on proving the queuing theorem L = λW, it has become clear that there are two distinct approaches to its proof: sample path theory and steady state (stationary) analysis. Furthermore, an important generalization, usually denoted H = λG, has appeared. It permits weighting the usual integer valued items in the queue by a cost (or other continuous attribute) unique to the item. A more restrictive theoretical analysis is the development of a distributional form of Little's Law. The application of Little's Law in real-world practice has increased far beyond that which anyone might have anticipated in 1961. The largest areas are in operations management and in the queues inside computers. In “Little's Law as Viewed on Its 50th Anniversary,” J. D. C. Little describes this evolution of theory and growth of applications. For the latter, he provides a number of illustrative case studies, including such topics as hospital emergency departments, lean manufacturing, and, in computers, the big banks of servers used for contemporary applications such as medical record keeping and Facebook. Prediction markets are widely recognized as an effective tool to aggregate information from selfish individuals into a common belief. One of the longest-running prediction markets is the Iowa Electronic Market, which allows betting real money on election outcomes. Various studies have shown that the information generated by these markets often serves as a better prediction of the actual outcome than polling data. Due to increased popularity of these markets, several mechanisms for implementing these markets have been developed in recent literature. These seemingly different market mechanisms try to achieve one or more of the desirable properties of bounding the loss of market organizer, risk minimization, truthful learning of beliefs, etc. In “A Unified Framework for Dynamic Prediction Market Design,” S. Agrawal, E. Delage, M. Peters, Z. Wang, and Y. Ye present a convex optimization framework that unifies these seemingly unrelated models for centrally organizing prediction markets. The unified framework facilitates a better understanding of the trade-off between various existing mechanisms and provides an effective tool for designing new mechanisms with desirable properties. As a result, the authors develop the first proper, truthful, risk-controlled, loss-bounded (independent of the number of states) mechanism; none of the previously proposed mechanisms possessed all these properties simultaneously. Lifting is a well-established technique to derive valid inequalities for a mixed-integer linear program from inequalities that are valid for a subproblem involving a subset of the original variables. A lifting procedure is “sequence independent” if the coefficient assigned to each additional variable is independent of the others. Sequence independent liftings are useful in practice because they can be computed efficiently. Recent research has focused on the problem of generating cuts for the relaxation of a mixed-integer program obtained by dropping all integrality constraints of the nonbasic variables relative to an optimal basis of the continuous relaxation. In “A Geometric Perspective on Lifting,” M. Conforti, G. Cornuéjols, and G. Zambelli address the question of lifting the coefficients of the nonbasic integer variables in these cuts. 
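
As a quick numerical illustration of the identity (a minimal sketch, not taken from the paper; the M/M/1 rates below are arbitrary), the following simulation estimates the time-average number in system L, the arrival rate λ, and the mean time in system W separately and checks that L ≈ λW:

# Minimal check of Little's Law (L = lambda * W) on a simulated M/M/1 queue.
# The arrival and service rates are arbitrary; any stable queue would do.
import random

random.seed(1)
lam, mu, n = 0.8, 1.0, 200_000        # arrival rate, service rate, number of customers

arrivals, t = [], 0.0
for _ in range(n):
    t += random.expovariate(lam)
    arrivals.append(t)

departures, free_at = [], 0.0
for a in arrivals:                     # FIFO, single server
    start = max(a, free_at)
    free_at = start + random.expovariate(mu)
    departures.append(free_at)

# W: average time a customer spends in the system
W = sum(d - a for a, d in zip(arrivals, departures)) / n

# L: time-average number of customers in the system, from the arrival/departure events
events = sorted([(a, 1) for a in arrivals] + [(d, -1) for d in departures])
area, last_time, in_system = 0.0, 0.0, 0
for time, change in events:
    area += in_system * (time - last_time)
    in_system, last_time = in_system + change, time
L = area / last_time

lam_hat = n / arrivals[-1]             # observed arrival rate
print(f"L = {L:.3f}   lambda * W = {lam_hat * W:.3f}")

With these rates the theoretical M/M/1 value is L = ρ/(1 − ρ) = 4, and both estimates come out near it.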

Prediction markets are widely recognized as an effective tool to aggregate information from selfish individuals into a common belief. One of the longest-running prediction markets is the Iowa Electronic Market, which allows betting real money on election outcomes. Various studies have shown that the information generated by these markets often serves as a better prediction of the actual outcome than polling data. Owing to the increased popularity of these markets, several mechanisms for implementing them have been developed in the recent literature. These seemingly different market mechanisms try to achieve one or more desirable properties, such as bounding the loss of the market organizer, risk minimization, and truthful learning of beliefs. In "A Unified Framework for Dynamic Prediction Market Design," S. Agrawal, E. Delage, M. Peters, Z. Wang, and Y. Ye present a convex optimization framework that unifies these seemingly unrelated models for centrally organized prediction markets. The unified framework facilitates a better understanding of the trade-offs between existing mechanisms and provides an effective tool for designing new mechanisms with desirable properties. As a result, the authors develop the first proper, truthful, risk-controlled, loss-bounded (independent of the number of states) mechanism; none of the previously proposed mechanisms possessed all these properties simultaneously.

Lifting is a well-established technique for deriving valid inequalities for a mixed-integer linear program from inequalities that are valid for a subproblem involving a subset of the original variables. A lifting procedure is "sequence independent" if the coefficient assigned to each additional variable is independent of the others. Sequence-independent liftings are useful in practice because they can be computed efficiently. Recent research has focused on the problem of generating cuts for the relaxation of a mixed-integer program obtained by dropping the integrality constraints on the nonbasic variables relative to an optimal basis of the continuous relaxation. In "A Geometric Perspective on Lifting," M. Conforti, G. Cornuéjols, and G. Zambelli address the question of lifting the coefficients of the nonbasic integer variables in these cuts. They give a geometric characterization of the best lifting coefficient for a single variable and provide conditions under which this gives rise to a sequence-independent lifting. Finally, they exhibit families of cuts satisfying these conditions.

In recent years, with a surge in competition in industrial markets, firms find themselves under pressure to respond to ever faster changes in demand and supply. As a result, flexible procurement strategies and tools, such as option contracts and spot purchases, are increasingly employed to supplement existing contracts. Optimally structuring and pricing such contracts in the presence of spot trading and the natural information asymmetry between a buyer and a seller about the buyer's preferences is a complex problem. In "Sourcing Flexibility, Spot Trading, and Procurement Contract Structure," P. P. Pei, D. Simchi-Levi, and T. I. Tunca analyze this problem by jointly endogenizing the determination of three major dimensions of contract design: (i) sales contracts versus options contracts; (ii) flat-price versus volume-dependent contracts; and (iii) volume discounts versus volume premia. They show that the relative magnitudes of the buyer's and the seller's discount rates, as well as the seller's production costs and capacity, are critical factors in determining the contract structure. The authors demonstrate that three major contract structures commonly emerge in equilibrium, namely flat-price sales contracts, sales contracts with volume discounts, and options contracts with volume discounts and premia, and they identify the conditions under which each contract structure is optimal. Finally, they show the effects of market and industry parameters, such as production costs, spot price variability, and the bid-ask spread for spot purchases, on contract design, characteristics, and efficiency.

The valuation of the storage capacity of a liquefied natural gas terminal is an important practical problem. Exact valuation of this storage capacity as a real option is computationally intractable. In "Valuation of Storage at a Liquefied Natural Gas Terminal," G. Lai, M. X. Wang, S. Kekre, A. Scheller-Wolf, and N. Secomandi develop a computationally tractable heuristic for the strategic valuation of this storage real option and an upper bound to benchmark their heuristic. Their computational results show that the heuristic is near optimal, and they provide managerial insights on the drivers of the value of this real option. Beyond liquefied natural gas, their methods and insights have potential relevance for valuing the real option to store other commodities, or the inputs used in producing a commodity, in facilities located downstream from a production or transportation stage.

There is often parameter uncertainty in the constraints of optimization problems. It is natural to formulate such a problem as a joint chance constrained program (JCCP), which requires that all constraints be satisfied simultaneously with a given high probability. In "Sequential Convex Approximations to Joint Chance Constrained Programs: A Monte Carlo Approach," L. J. Hong, Y. Yang, and L. Zhang propose an iterative algorithm based on convex approximations and a Monte Carlo method to solve JCCPs. Their algorithm has many desirable properties and works well on practical problems.
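
To make the notion of a joint chance constraint concrete, the generic Monte Carlo feasibility check below (not the authors' sequential convex approximation algorithm; all data are made up) estimates whether a fixed candidate solution satisfies two random constraints jointly with the required probability:

# Generic Monte Carlo check of a joint chance constraint
#   P( xi_1 . x <= b_1  and  xi_2 . x <= b_2 ) >= 1 - epsilon
# for a fixed candidate solution x.  The numbers are hypothetical and this is only
# an illustration of what a JCCP requires, not the algorithm from the paper.
import random

random.seed(0)
x = [1.0, 2.0]                 # candidate solution to be checked
b = [6.0, 8.0]
epsilon = 0.05
n_samples = 100_000

satisfied = 0
for _ in range(n_samples):
    # two rows of random constraint coefficients (normally distributed)
    xi = [[random.gauss(1.0, 0.2), random.gauss(1.5, 0.3)],
          [random.gauss(2.0, 0.4), random.gauss(1.0, 0.2)]]
    ok = all(sum(c * v for c, v in zip(row, x)) <= bi for row, bi in zip(xi, b))
    satisfied += ok

p_hat = satisfied / n_samples
print(f"estimated joint satisfaction probability: {p_hat:.3f}")
print("candidate is feasible for the JCCP" if p_hat >= 1 - epsilon else "candidate is infeasible")

The difficulty the paper addresses is the optimization itself: the set of solutions meeting such a joint probabilistic requirement is generally nonconvex, which is why convex approximations are needed.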

Decision trees are powerful tools for modeling risk management and homeland security problems, such as dealing with the aftermath of a hazardous occurrence or a terrorist attack. In decision trees, system design selections are made among different available alternatives at decision nodes, whereas safety features are activated at event nodes that lead to one of several outcomes with specified probabilities of occurrence. Each specific sequence of cascading decisions and events initiating at the root node (e.g., a hazardous occurrence) culminates in a leaf node with an accompanying loss value. In "Selecting Optimal Alternatives and Risk Reduction Strategies in Decision Trees," H. D. Sherali, E. Dalkiran, and T. S. Glickman develop a novel decision tree optimization (DTO) approach that restructures the system by judiciously composing the operational components and allocating available resources to mitigate failure probabilities and consequence losses so as to minimize overall risk. They also develop a specialized branch-and-bound algorithm and establish its convergence to a global optimum. The efficiency of the algorithm is demonstrated on a hypothetical gas-line rupture case and several randomly generated DTO problem instances.

In "Heavy-Traffic Analysis of a Multiple-Phase Network with Discriminatory Processor Sharing," I. M. Verloop, U. Ayesta, and R. Núñez-Queija analyze a generalization of the discriminatory processor sharing (DPS) queue in a heavy-traffic setting. Customers present in the system are served simultaneously at rates controlled by a vector of weights. The authors assume that customers have phase-type distributed service requirements and allow customers to have different weights in the various phases of their service. They establish a state-space collapse for the queue length vector in heavy traffic: in the limit, the queue length vector is the product of an exponentially distributed random variable and a deterministic vector.

In "Accounting for Parameter Uncertainty in Large-Scale Stochastic Simulations with Correlated Inputs," B. Biller and C. G. Corlu consider a stochastic simulation with correlated inputs. They develop a Bayesian model to represent both parameter uncertainty and stochastic uncertainty in the estimation of mean performance measures and confidence intervals. The authors demonstrate the effectiveness of the Bayesian model on an inventory simulation model with correlated demands.

Latency problems are characterized by their focus on minimizing the waiting time of all clients. In "Charlemagne's Challenge: The Periodic Latency Problem," S. Coene, F. C. R. Spieksma, and G. J. Woeginger investigate periodic latency problems. The key property of a periodic latency problem is that each client has to be visited regularly over an infinite horizon. More specifically, there is a server traveling at unit speed, and there is a set of clients with given positions. The server must visit the clients over and over again, subject to the constraint that successive visits to each client fall within a given amount of time (the client's time bound). Practical problems from diverse areas such as real-time task scheduling, preventive maintenance, and human resources can be seen as periodic latency problems. The authors investigate two main problems. In the first, the goal is to find a repeatable route for the server that visits as many clients as possible without violating their time bounds. In the second, the goal is to minimize the number of servers needed to serve all clients. Depending on the topology of the underlying network, polynomial-time algorithms and hardness results are derived for these two problems. The results draw sharp separation lines between easy and hard cases.

The key to reducing maintenance costs is to establish efficient replacement strategies that prevent unexpected failures while maximizing system (asset) utilization. Traditionally, replacement policies have relied on probabilistic assessments that are either based on reliability information or on stationary Markovian degradation processes in which, given a system's degradation state, the transition to any future state is dictated by a fixed set of probabilities. In "Structured Replacement Policies for Components with Complex Degradation Processes and Dedicated Sensors," A. H. Elwany, N. Z. Gebraeel, and L. M. Maillart propose a new single-unit replacement decision model that utilizes real-time condition monitoring information communicated by sensor technology. Condition monitoring information is used to develop degradation signals that act as a proxy for the underlying physical degradation process. The amplitude/level of the degradation signal is modeled as a continuous-time, continuous-state stochastic process with a predetermined functional form chosen based on the underlying physics of failure. Real-time signals are used to update the predictive distribution of the degradation signal using Bayesian techniques. The predictive distributions of the degradation signal are integrated within a Markov decision process model to derive an optimal sensor-based replacement policy for a single-unit system, which balances the cost of failure, the cost of preventive replacement, and the cost of observing sensor data. Real-world vibration monitoring data from a rotating machinery application are used to study the performance of the replacement model.
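
As a rough, self-contained illustration of sensor-driven Bayesian updating (all numbers are hypothetical, and this is not the signal model or policy from the paper), suppose the degradation signal grows by roughly normal increments with an unknown drift. A conjugate normal prior on the drift can then be updated in closed form as readings arrive, and the updated drift yields a crude estimate of the remaining time until a failure threshold is reached:

# Sketch: conjugate Bayesian updating of a degradation-signal drift.
# Signal increments per inspection are assumed N(theta, sigma^2) with sigma known,
# and theta has a normal prior.  Purely illustrative; not the model in the paper.
import random

random.seed(7)
sigma = 0.05                       # known increment noise per inspection
mu0, tau0 = 0.10, 0.04             # prior on the drift theta: N(mu0, tau0^2)
threshold, level = 5.0, 1.2        # failure threshold and current signal level

# simulated sensor readings: observed signal increments at the last few inspections
increments = [random.gauss(0.15, sigma) for _ in range(12)]

# normal-normal conjugate update of the drift
n = len(increments)
post_prec = 1.0 / tau0**2 + n / sigma**2
post_var = 1.0 / post_prec
post_mean = (mu0 / tau0**2 + sum(increments) / sigma**2) * post_var

# crude remaining-life estimate: inspections until the mean path hits the threshold
remaining = (threshold - level) / post_mean
print(f"posterior drift: N({post_mean:.3f}, {post_var:.5f})")
print(f"estimated inspections until the failure threshold: {remaining:.1f}")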

As California moves to craft its own climate policy, a number of issues related to the design of an effective cap-and-trade program arise. In "Economic and Emissions Implications of Load-Based, Source-Based, and First-Seller Emissions Trading Programs Under California AB32," Y. Chen, A. L. Liu, and B. F. Hobbs examine the economic and emission implications of three proposals considered by the California government to regulate greenhouse gas emissions from the electric sector: load-based, source-based, and first-seller. Although the programs differ in their point of regulation, the authors show that they lead to identical outcomes under some mild assumptions. This result suggests that, when choosing a particular program design, policy makers should focus on ease of implementation and possible integration with a national cap-and-trade program rather than on cost savings.

In "Mixed 0-1 Linear Programs Under Objective Uncertainty: A Completely Positive Representation," K. Natarajan, C. P. Teo, and Z. Zheng show that many mixed 0-1 integer programming problems with a random objective function can be reformulated as completely positive programs under a distributionally robust model in which only a few moments of the random coefficients are assumed to be known. Using this approach, probabilistic information on the behavior of the optimal solution vector of the stochastic problem can be inferred by solving a convex program. The efficacy of the approach is demonstrated on order statistics and project management problems.

Almost all organizations, big or small, are constantly faced with the challenge of allocating resources. It is a great challenge because the allocation decision usually involves implicit trade-offs among multiple criteria. More importantly, too often the allocation decision must be made before a broad consensus or repeated observations on the relative importance of the criteria have been established in a corporation, market, or economy. In "Efficient Resource Allocation via Efficiency Bootstraps: An Application to R&D Project Budgeting," C.-M. Chen and J. Zhu develop an allocation approach for the resource allocation problem in which only one observation of potential resource recipients' performance is given. In their approach, special attention is given to the mean-variance trade-off in the overall portfolio performance in a multicriteria context. Through their approach, the decision maker can obtain a mean-variance optimal allocation portfolio using a single sample of multiple input-output criteria.

When facing shortages, inventory models generally give the retailer one of two choices: either lose demand or promise to fill demand sometime in the future. In "A Periodic-Review Base-Stock Inventory System with Sales Rejection," Y. Xu, A. Bisi, and M. Dada show that it can be optimal not to fill demand when the backlog is sufficiently high. Such is the case when the cost of delay is so high that it is more economical simply to turn the customer away.
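
A toy simulation conveys the flavor of such a policy (the base-stock level and rejection threshold below are fixed, hypothetical choices, not the optimized policy from the paper): unmet demand is backlogged only while the backlog stays below a cap, and anything beyond the cap is turned away at a lost-margin penalty.

# Toy periodic-review, order-up-to-S inventory simulation with sales rejection:
# unmet demand is backlogged only while the backlog is below a cap; the rest is rejected.
# All parameters are hypothetical illustrations, not the paper's optimal policy.
import random

random.seed(3)
S, supply_cap, backlog_cap = 20, 18, 8      # base-stock level, per-period supply limit, max backlog
h, b, r = 1.0, 2.0, 15.0                    # holding cost, backlog cost, penalty per rejected unit
periods = 2000

inventory, total_cost, total_demand, rejected = S, 0.0, 0, 0
for _ in range(periods):
    demand = random.randint(5, 30)
    total_demand += demand
    inventory -= demand                      # serve from stock, possibly running a backlog
    if inventory < -backlog_cap:             # sales rejection: never carry more than the cap
        turned_away = -backlog_cap - inventory
        rejected += turned_away
        inventory = -backlog_cap
        total_cost += r * turned_away
    total_cost += h * max(inventory, 0) + b * max(-inventory, 0)
    inventory += min(S - inventory, supply_cap)   # replenish toward S, limited by supply

print(f"average cost per period: {total_cost / periods:.2f}")
print(f"fraction of demand rejected: {rejected / total_demand:.3f}")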

Cooperation among perfectly rational players may be difficult or even impossible to achieve, as shown by the celebrated Prisoner's Dilemma. An important lesson from the theory of repeated games is that cooperation is possible among rational long-lived players, provided that all information is public. Each player can be incentivized to cooperate by the threat of long-run retaliation by the rest of society when a defection is publicly observed. However, in modern economies, information is decentralized and communication between agents takes the form of distributed protocols. In "Fault Reporting in Partially Known Networks and Folk Theorems," T. Tomala considers a model where players may communicate with a limited set of agents and are not aware of the full structure of the communication network. He shows that cooperation can be achieved by a Nash equilibrium of the repeated game if it is possible to construct a distributed protocol that identifies faulty behavior. Necessary and sufficient conditions on the structure of the network for the existence of such protocols are given, and explicit protocol constructions are provided.

One of the hard problems faced by decision analysts is describing how people trade off multiple conflicting objectives. How should someone compare alternatives that affect health and wealth? How should a government judge alternatives that affect economic growth, jobs, and the environment? During the last 40 years, utility independence has been the main concept used by decision analysts to make these problems tractable. But sometimes it is not possible to find such independencies. In "One-Switch Independence for Multiattribute Utility Functions," A. E. Abbas and D. E. Bell describe a new condition that allows dependencies to exist while still allowing the analyst to make progress. They recognize that sometimes the dependency between objectives can be described in a simple "one-switch" way. For example, the house you buy will depend on your wealth, but for any pair of houses, one will be preferred when you are wealthier and the other when you are poorer. The government may prefer one environmental program when the economy is strong but another when it is weak. The authors exploit this relationship to derive new methods for analyzing preferences.

In the last decade, queueing systems with flexible servers have generated a lot of interest in the operations research community. As a result, there is now a significant body of research addressing the question of how servers should be assigned dynamically to tasks as the state of the system evolves. The objective is to utilize each server's training and abilities to achieve optimal system performance (e.g., to maximize throughput or minimize holding costs). Previous work on the optimal assignment of servers to tasks has assumed that when multiple servers are assigned to the same task, their combined service rate is additive. However, this is a restrictive assumption because it does not take into account the fact that server collaboration may be synergistic (e.g., due to factors such as complementarity of skills or motivation) or not (e.g., due to bad team dynamics or a lack of space or tools). In "Queueing Systems with Synergistic Servers," S. Andradóttir, H. Ayhan, and D. G. Down focus on the optimal assignment of servers to tasks when server collaboration is synergistic. They show that when the servers are generalists, that is, when each server has the same service rate for all tasks, synergistic servers should collaborate at all times. However, when the servers are heterogeneous with respect to the tasks they are trained for, there is a trade-off between taking advantage of server synergy on the one hand and of each server's training and abilities on the other.

The ability to optimize inventory levels is a source of competitive advantage or a strategic necessity for virtually all companies. As supply chains become more complex, supply chain models must handle acyclic network structures (caused, for example, by common components or shipping methods) and general cost functions caused by phenomena such as review periods or stochastic lead times. In "Optimizing Strategic Safety Stock Placement in General Acyclic Networks," S. Humair and S. P. Willems present a dynamic program to solve the inventory placement problem in general acyclic networks with generalized cost functions.
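
For readers unfamiliar with this class of models, the sketch below solves a stripped-down version of the problem: safety stock placement on a small serial line under classical guaranteed-service assumptions (normal demand, a fixed service-level factor), by dynamic programming over integer outbound service times. The stage data are hypothetical, and the sketch does not attempt the general acyclic networks or general cost functions treated in the paper.

# Guaranteed-service safety stock placement on a small serial supply chain,
# solved by dynamic programming over integer outbound service times.
# Textbook serial-line setting with hypothetical data; the paper treats general
# acyclic networks and general cost functions, which this sketch does not.
import math

T = [3, 2, 4, 1]            # processing time at each stage, upstream (index 0) to downstream
h = [1.0, 2.0, 3.0, 5.0]    # per-unit holding cost of safety stock held at each stage
sigma, z = 10.0, 1.645      # std. dev. of per-period demand, safety factor (~95% service)

def safety_cost(stage, net_time):
    # cost of safety stock covering `net_time` periods of demand variability at `stage`
    return h[stage] * z * sigma * math.sqrt(net_time)

max_quote = [sum(T[: j + 1]) for j in range(len(T))]   # largest service time stage j can quote

# f[j][s] = min total safety stock cost of stages 0..j when stage j quotes service time s
f = []
for j in range(len(T)):
    row = {}
    for s in range(max_quote[j] + 1):
        if j == 0:
            # most upstream stage: raw material always available (inbound service time 0)
            row[s] = safety_cost(0, T[0] - s)
        else:
            best = math.inf
            for si in range(max_quote[j - 1] + 1):     # service time quoted by the upstream stage
                net = si + T[j] - s                    # net replenishment time at stage j
                if net >= 0:
                    best = min(best, f[j - 1][si] + safety_cost(j, net))
            row[s] = best
    f.append(row)

print(f"minimum safety stock cost (final stage quotes 0): {f[-1][0]:.1f}")

Each stage's safety stock covers its net replenishment time (inbound service time plus processing time minus the service time it quotes downstream); the dynamic program chooses the quoted service times to minimize total safety stock holding cost.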
