This issue includes a Special Cluster of seven papers on the topic of High-Throughput Optimization. The concept for the cluster arose from a session that I organized on this topic for the INFORMS Annual Meeting in Pittsburgh in 2006. A call for papers for a Special Issue of the INFORMS Journal on Computing was issued in 2007, but before the due date for the papers arrived, I was appointed Editor-in-Chief of JOC. For this reason, the day-to-day handling of the papers devolved to a number of the JOC Area Editors: Karen Aardal, Bob Fourer, Harvey Greenberg, and John Hooker. It is these four who should be recognized as the Guest Editors for this Special Cluster.

The motivation for the Special Cluster is the observation that parallel computing is no longer limited to very expensive high-end computers, making it the special preserve of well-funded agencies and companies. Most new personal computers have two or even four cores. Most computers are connected via the Internet to many other computers. The opportunity to take advantage of massive and inexpensive parallel computing is here. At the same time, there are numerous difficult and complex problems in optimization that could benefit from high-throughput computing. High-throughput optimization is the result. This is generally described as systems for solving optimization problems that require large amounts of computing resources over lengthy time periods.

Bussieck, Ferris, and Meeraus open the Special Cluster with the paper "Grid-Enabled Optimization with GAMS," which describes how the GAMS modeling system has been extended to allow optimization to take place on a loosely coupled grid of heterogeneous computing resources.
Michel, See, and Van Hentenryck then describe a system for constraint programming that exploits parallelism transparently, without changes to the sequential code, in "Transparent Parallelization of Constraint Programming." When a model cannot be automatically decomposed for a parallel solution, specialized algorithms are needed for particular applications. Xu, Ralphs, Ladanyi, and Saltzman describe a framework for parallelizing branch-and-bound algorithms for solving mixed-integer programs in "Computational Experience with a Software Framework for Parallel Integer Programming." Ferris, Maravelias, and Sundaramoorthy describe a decomposition approach for the MIP problem that arises in "Simultaneous Batching and Scheduling Using Dynamic Decomposition on a Grid." Regis and Shoemaker describe the parallelization of a global optimization method in "Parallel Stochastic Global Optimization Using Radial Basis Functions." Zhang, Shi, Meyer, Nazareth, and D'Souza show how a high-throughput optimization setup greatly reduces solution time in "Solving Beam-Angle Selection and Dose Optimization Simultaneously via High-Throughput Computing." Finally, Linderoth, Margot, and Thain show how to use high-throughput optimization to improve the solutions to a gambling problem in "Improving Bounds on the Football Pool Problem by Integer Programming and High-Throughput Computing."

Solution techniques that make use of a search tree, such as branch and bound and many techniques of constraint programming, are natural candidates for parallelization. It is no surprise that six of the seven papers in the Special Cluster involve the parallelization of tree search of some kind. It is also notable that four of the papers make use of the open-source Condor framework for managing resources for high-throughput computing.

The era of high-throughput optimization is here now.
The hardware is cheap and accessible, and as shown in this Special Cluster, effective algorithms and software tools are available as well. I expect to see breakthroughs in the solution of some very difficult optimization problems in the next few years as these new tools come into play.

This issue is rounded out by five regular papers on a variety of topics including simulation, heuristic search, and cut generation. The JOC Constraint Programming and Optimization Area is truly at the interface of operations research