As energies have increased exponentially with time, so have the size and complexity of accelerators and their control systems. Neural networks (NNs) may offer the kinds of improvements in computation and control needed to maintain acceptable functionality. For control, their associative characteristics could provide signal conversion or data translation. Because they can perform general computations such as least-squares fitting, they can close feedback loops autonomously, providing intelligent control at the point of action rather than at a central location that requires transfers, conversions, handshaking and other costly repetitions such as input protection. Both computation and control can be integrated on a single chip, a printed circuit or an optical equivalent, which is also inherently faster through fully parallel operation. For these reasons one expects lower costs and better results. Such systems could be optimized by integrating sensor and signal-processing functions. Distributed networks of such hardware could communicate and provide global monitoring and multiprocessing in various ways, e.g. via token, slotted or parallel rings (or Steiner trees), for compatibility with existing systems. Problems and advantages of this approach, such as an optimal, real-time Turing machine, are discussed. Simple examples are simulated, and implemented in hardware with discrete elements, to demonstrate some basic characteristics of learning and parallelism. Future “microprocessors” are predicted and requested on this basis.
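As a minimal illustration of the least-squares claim (a sketch, not taken from this work; all names and parameters are hypothetical), a single linear neuron updated with the delta (Widrow–Hoff) rule converges to the least-squares fit of its input/target pairs, effectively closing a feedback loop on its own error signal:

```python
# Sketch: one linear "neuron" trained by the delta rule settles at the
# least-squares solution. Illustrative only; names are hypothetical.
import random

random.seed(0)

# Synthetic sensor readings: target = 2*x + 1 (noise-free, so the
# least-squares fixed point is exactly w = 2, b = 1).
data = [(x / 10.0, 2.0 * (x / 10.0) + 1.0) for x in range(20)]

w, b = 0.0, 0.0      # weight and bias of the neuron
rate = 0.1           # learning rate (small enough for stability here)

for epoch in range(2000):        # repeated on-line updates
    for x, target in data:
        y = w * x + b            # neuron output
        err = target - y         # local feedback error signal
        w += rate * err * x      # delta-rule weight update
        b += rate * err          # delta-rule bias update

print(round(w, 3), round(b, 3))  # converges to 2.0 1.0
```

The update uses only locally available quantities (input, output, error), which is what makes control "at the point of action" plausible for such elements.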