The availability and power of parallel and distributed computers are having a significant impact on how computationally expensive problems are solved in all areas of numerical computation, and are likely to have an even larger impact in the future. This paper presents a view of how the consideration of parallelism is affecting, and is likely to affect, one important field within numerical computation: nonlinear optimization. It does not attempt to survey the research that has been done in parallel nonlinear optimization. Rather, it presents a set of examples, drawn mainly from our own research, that illustrate many of the limitations, opportunities, and challenges inherent in incorporating parallelism into the field. These examples include parallel methods for unconstrained optimization problems with a small to moderate number of variables, parallel methods for large block bordered systems of nonlinear equations, and parallel methods for small-scale and large-scale global optimization problems. Our overall conclusions are mixed. For most generic optimization problems with a small to moderate number of variables, the consideration of parallelism does not appear to be leading to major algorithmic innovations. For many classes of large-scale problems, however, it appears to be creating opportunities for the development of interesting new methods that may be advantageous on parallel, and sometimes even on sequential, computers. In addition, a number of large-scale parallel optimization algorithms exhibit irregular, coarse-grained structure, which leads to interesting computer science challenges in areas such as dynamic scheduling and load balancing.
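For concreteness, the block bordered systems mentioned above can be written in the following generic form; the notation here is illustrative and is not necessarily the formulation used later in the paper. The variables are partitioned into blocks $x_1, \dots, x_q$ together with a small set of coupling variables $y$, and the equations take the form
\[
F_i(x_i, y) = 0, \quad i = 1, \dots, q, \qquad G(x_1, \dots, x_q, y) = 0,
\]
so that the Jacobian has the arrow-shaped sparsity pattern
\[
J \;=\;
\begin{pmatrix}
A_1 &        &     & B_1    \\
    & \ddots &     & \vdots \\
    &        & A_q & B_q    \\
C_1 & \cdots & C_q & D
\end{pmatrix},
\]
where $A_i = \partial F_i / \partial x_i$, $B_i = \partial F_i / \partial y$, $C_i = \partial G / \partial x_i$, and $D = \partial G / \partial y$. It is the $q$ independent diagonal blocks of this structure that supply the natural source of coarse-grained parallelism.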