1 Requirement

This paper describes an approach to multilayer perceptron (MLP) learning that is optimized for hardware implementation. Experimental results to date are promising, and it is the express aim of this paper to present these results concisely; a detailed mathematical analysis will follow in a subsequent, longer publication.

Error backpropagation (Rumelhart et al. 1986) has achieved remarkable success as an algorithm for solving hard classification problems with MLP networks. It is, however, not readily amenable to VLSI integration, and the distinction it draws between hidden and output nodes renders it hostile to analog circuit forms. Use of the mathematical chain rule, to calculate the effect of a weight connecting to a hidden unit on the errors {δ} in the output layer, renders the error calculation scheme for hidden units different from, and more complicated than, that for output units.

The Virtual Targets learning scheme circumvents this problem by introducing an explicit desired state, or target, for each of the hidden units, which is updated continuously and stored along with the synapse weights. While this means that a target state must be stored for each input pattern and hidden node, it simplifies and renders homogeneous the process of weight evolution for all neurons. Furthermore, since a target state is already stored for each output neuron, the scheme essentially removes the distinction during learning between hidden and output nodes. Analog integrated circuits based on the virtual targets strategy will therefore be flexible in architectural terms, as all units will be configurable as either output or hidden layer neurons.

The fundamental idea of adapting the internal representation, either as well as or instead of the weights, is not itself new (Rohwer 1990; Grossman et al. 1990; Krogh et al. 1990). However, these pieces of work were not optimized for hardware implementation.
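The scheme described above can be sketched in a few lines of NumPy. This is an illustrative assumption, not the paper's exact algorithm (the detailed mathematics is deferred to the longer publication): the sigmoid nonlinearity, the learning rates, the XOR toy problem, and the specific rule for moving each stored hidden target down the output-error gradient are all choices made here for the sketch. What it does show is the key structural point: every unit, hidden or output, evolves its weights by the same local delta rule toward a stored target.

```python
import numpy as np

# Sketch of "virtual targets" MLP learning (illustrative assumptions, see text).
# Each hidden unit keeps an explicit stored target per training pattern, so
# hidden and output weight updates use one homogeneous local rule; the stored
# hidden targets are themselves adapted to reduce the output error.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy problem (an assumption): XOR, 2 inputs -> 2 hidden units -> 1 output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 0.5, (2, 2))          # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (2, 1))          # hidden -> output weights
T_hidden = rng.uniform(0.2, 0.8, (4, 2))   # one stored target per (pattern, hidden unit)

eta_w, eta_t = 0.5, 0.5                    # learning rates (assumed values)
for epoch in range(5000):
    for p in range(len(X)):
        h = sigmoid(X[p] @ W1)             # actual hidden state
        o = sigmoid(T_hidden[p] @ W2)      # output driven by the stored target state

        # Homogeneous delta rule: every unit's weights move it toward its target.
        d_out = (Y[p] - o) * o * (1 - o)
        d_hid = (T_hidden[p] - h) * h * (1 - h)
        W2 += eta_w * np.outer(T_hidden[p], d_out)
        W1 += eta_w * np.outer(X[p], d_hid)

        # Adapt the stored hidden targets to reduce the output error,
        # then keep them inside the sigmoid's open range.
        T_hidden[p] += eta_t * (W2 @ d_out)
        T_hidden[p] = np.clip(T_hidden[p], 0.01, 0.99)

preds = sigmoid(sigmoid(X @ W1) @ W2)      # feedforward pass with learned weights
print(np.round(preds.ravel(), 2))
```

Note that because the target states are stored per pattern alongside the weights, the update for a hidden unit is computed from purely local quantities, which is what makes the rule attractive for analog VLSI.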
The fundamental difference is that simplicity of implementation has been made the primary goal of the work described in this paper, to produce a system optimized for analog VLSI. There are also several important differences in detail between the work described in this paper and these earlier, similar schemes.

Neural Computation 4, 366-381 (1992) © 1992 Massachusetts Institute of Technology