The proposed work introduces a neural control strategy for guiding adaptation in spiking neural structures that act as nonlinear controllers in a group of bio-inspired robots competing to reach targets in a virtual environment. The neural structures embedded in each agent are inspired by a specific part of the insect brain, the Central Complex, which is devoted to detecting, learning and memorizing visual features for targeted motor control. A reduced-order model of a spiking neuron is used as the basic building block of the neural controller. The control methodology employs bio-inspired, correlation-based learning mechanisms such as spike-timing-dependent plasticity (STDP), augmented with a reward/punishment mechanism experimentally observed in insects. The reference signal for the overall multi-agent control system is imposed by a global reward, which guides motor learning so as to direct each agent towards specific visual targets. The neural controllers within the agents start from identical conditions: the learning strategy induces each robot to show anticipatory targeting actions in response to specific visual stimuli. The overall control structure also makes the robots refractory or more sensitive to particular visual stimuli, so that they show distinct preferences in future choices. This leads to environmentally induced, targeted motor control, even without direct communication among the agents, and allows the robots to adapt in real time while running. Experiments carried out in a dynamic simulation environment show the suitability of the proposed approach. Specific performance indexes, such as Shannon's entropy, are adopted to quantitatively analyze diversity and specialization within the group.
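The learning rule is only described qualitatively above. As a rough Python sketch of the kind of mechanism named there (pair-based STDP whose effect is gated by a global reward/punishment signal through an eligibility trace), the following snippet may help fix ideas; all function names, parameter values and the eligibility-trace formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical parameters for a pair-based, reward-modulated STDP rule.
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU_PLUS = TAU_MINUS = 20.0     # STDP time constants (ms)
TAU_ELIG = 500.0                # eligibility-trace decay (ms)
ETA = 0.5                       # learning rate applied to rewarded traces

def stdp_window(dt):
    """Pair-based STDP kernel: dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)    # pre before post: potentiate
    return -A_MINUS * np.exp(dt / TAU_MINUS)      # post before pre: depress

def update_weight(w, elig, dt_pairs, reward, dt_step=1.0):
    """One simulation step of reward-modulated STDP for a single synapse.

    dt_pairs: (t_post - t_pre) intervals observed during this step.
    reward:   global reward/punishment signal shared by the whole group.
    """
    # Correlation-based changes accumulate in an eligibility trace...
    elig *= np.exp(-dt_step / TAU_ELIG)
    for dt in dt_pairs:
        elig += stdp_window(dt)
    # ...and are consolidated into the weight only when reward arrives.
    w = np.clip(w + ETA * reward * elig, 0.0, 1.0)
    return w, elig
```

Likewise, a minimal sketch of a Shannon-entropy index of group diversity, assuming the index is computed over the distribution of target choices made by the agents (low entropy indicating specialization):

```python
import math

def group_entropy(choice_counts):
    """Shannon entropy (bits) of the group's target-choice distribution.

    choice_counts: e.g. {"target_A": 12, "target_B": 3} -- how often the
    agents selected each target over an experiment.
    """
    total = sum(choice_counts.values())
    probs = [c / total for c in choice_counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```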