Brain-inspired machine intelligence research seeks to develop computational models that emulate the information processing and adaptability of biological neuronal systems. This pursuit has led to spiking neural networks, a class of models that offers a promising route around the biological implausibility and energy inefficiency inherent in modern deep neural networks. In this work, we address the challenge of designing neurobiologically motivated schemes for adjusting the synapses of spiking networks and propose contrastive signal-dependent plasticity, a process that generalizes ideas from self-supervised learning to facilitate local adaptation in architectures of event-based neuronal layers that operate in parallel. Our experimental simulations demonstrate a consistent advantage over other biologically plausible approaches when training recurrent spiking networks, crucially sidestepping the need for additional structure such as feedback synapses.