In the last few years, anatomical and physiological studies have provided new insights into the organization of the parieto-frontal network underlying visually guided arm-reaching movements in at least three domains. (1) Network architecture. It has been shown that the different classes of neurons encoding information relevant to reaching are not confined to individual cortical areas, but are common to different areas, which are generally linked by reciprocal association connections. (2) Representation of information. There is evidence suggesting that reach-related populations of neurons do not encode the relevant parameters within purely sensory or purely motor "reference frames", but rather combine them within hybrid dimensions. (3) Visuomotor transformation. It has been proposed that the computation of motor commands for reaching occurs through the simultaneous recruitment of discrete populations of neurons sharing similar properties in different cortical areas, rather than through a serial process from vision to movement that engages different areas at different times.

The goal of this paper was to link experimental (neurophysiological and neuroanatomical) and computational aspects within an integrated framework, illustrating how different neuronal populations in the parieto-frontal network perform a collective and distributed computation for reaching. In this framework, all dynamic (tuning, combinatorial, computational) properties of units are determined by their location relative to three main functional axes of the network: the visual-to-somatic, position-direction, and sensory-motor axes.

The visual-to-somatic axis is defined by gradients of activity symmetrical with respect to the central sulcus and distributed over both frontal and parietal cortices. At least four sets of reach-related signals (retinal, gaze, arm position/movement direction, muscle output) are represented along this axis. This architecture defines informational domains where neurons combine different inputs.

The position-direction axis is identified by the regular distribution of information over large populations of neurons processing both positional and directional signals (concerning the arm, gaze, visual stimuli, etc.). The activity of gaze- and arm-related neurons can therefore represent virtual three-dimensional (3D) pathways for gaze shifts or hand movements; such virtual 3D pathways are defined by a combination of directional and positional information (a toy numerical sketch of such a joint code appears at the end of this section).

The sensory-motor axis is defined by neurons displaying different temporal relationships with the different reach-related signals, such as target presentation, preparation of the intended arm movement, and movement onset. These properties reflect the computation performed by local networks, which are formed by two types of processing units: matching units and condition units. Matching units relate different neural representations of virtual 3D pathways for gaze or hand, and can predict motor commands and their sensory consequences. Depending on the units involved, different matching operations can be learned in the network, resulting in the acquisition of different visuomotor transformations, such as those underlying reaching to foveated targets, reaching to extrafoveal targets, and visual tracking of the hand movement trajectory. Condition units link these matching operations to reinforcement contingencies and can therefore shape the collective neural recruitment along the three axes of the network.
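The matching/condition scheme can be caricatured in a few lines of code. The sketch below is purely illustrative and is not the model developed here: it assumes a fixed linear map A standing in for the arm-plus-optics transformation, matching weights W that learn to invert it with a delta rule driven by visual tracking of self-generated movements, and a scalar gate on the weight update standing in for a condition unit that ties learning to reinforcement contingencies. All symbols and parameters (A, W, lr, gate) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed, invertible linear "plant": a motor command u displaces the
# seen hand image by A @ u (a stand-in for arm plus optics).
A = np.array([[0.8, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.1, 0.0, 0.7]])

W = np.zeros((3, 3))  # matching weights: seen displacement -> command
lr = 0.05

# Learning phase ("visual tracking of hand movement"): issue random
# commands, observe the resulting displacement of the hand image, and
# adjust W by a delta rule so that it inverts the plant.  `gate` plays
# the role of a condition unit: updates occur only to the extent that
# the trial is reinforced (here every tracking trial counts as rewarded).
for _ in range(10_000):
    u = rng.normal(size=3)     # random motor command (babbling)
    v = A @ u                  # seen hand displacement
    gate = 1.0                 # condition-unit / reward gate
    W += gate * lr * np.outer(u - W @ v, v)

# Use phase: a retinal target error is mapped directly onto the
# command that brings the hand onto the target.
hand, target = rng.normal(size=3), rng.normal(size=3)
command = W @ (target - hand)
print(np.round(hand + A @ command - target, 3))  # ~ [0. 0. 0.]
```

Switching the gate off in unrewarded contexts would freeze the corresponding matching operation, which is one simple way reinforcement contingencies could select which visuomotor transformation the network recruits.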
In behavioral terms, this condition-gated shaping results in a progressive match of retinal, gaze, arm, and muscle signals suitable for moving the hand toward the target.
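Finally, the joint position-direction code invoked above can be illustrated in the same spirit. The sketch below is not derived from the data reviewed here: it assumes Georgopoulos-style cosine tuning for 3D movement direction, gain-modulated by a linear function of hand position (a "gain field"), and reads the population out with a population vector. A virtual 3D pathway then appears as the same directional readout evaluated at successive positions along the reach; all names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # population size (hypothetical)

# Each unit: a preferred 3D movement direction (cosine tuning) and a
# linear positional gain vector (a "gain field" on hand position).
pref_dir = rng.normal(size=(N, 3))
pref_dir /= np.linalg.norm(pref_dir, axis=1, keepdims=True)
gain_w = rng.normal(scale=0.2, size=(N, 3))
baseline = rng.uniform(5.0, 15.0, size=N)

def rates(position, direction):
    """Firing rates for a movement in `direction` starting at `position`:
    cosine directional tuning multiplied by a positional gain."""
    direction = direction / np.linalg.norm(direction)
    cosine = pref_dir @ direction          # directional term, shape (N,)
    gain = 1.0 + gain_w @ position         # positional gain field
    return np.maximum(baseline * gain * (1.0 + cosine), 0.0)

def population_vector(r):
    """Read out movement direction as the rate-weighted sum of
    preferred directions (population-vector readout)."""
    s = ((r - r.mean())[:, None] * pref_dir).sum(axis=0)
    return s / np.linalg.norm(s)

# A "virtual 3D pathway": the same directional code evaluated at
# successive hand positions on the way to the target.
start, target = np.zeros(3), np.array([0.2, 0.3, 0.1])
for t in np.linspace(0.0, 1.0, 5):
    pos = start + t * (target - start)
    print(round(t, 2), np.round(population_vector(rates(pos, target - start)), 2))
```

Because position enters only multiplicatively, the same population carries both where the hand is and where it is heading, a simple instance of the hybrid, neither purely sensory nor purely motor coding referred to above.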