This study presents an algorithm for binaural speech dereverberation based on the supervised learning of short-term binaural cues. The proposed system combined a delay-and-sum beamformer with a neural network-based post-filter that attenuated reverberant components in individual time-frequency units. A multi-conditional training procedure was used to simulate the uncertainty of short-term binaural cues caused by room reverberation by mixing the direct part of head-related impulse responses (HRIRs) with diffuse noise. Despite being trained with only anechoic HRIRs, the proposed dereverberation algorithm was tested in a variety of reverberant environments and achieved considerable improvements relative to a coherence-based approach in terms of three objective metrics reflecting speech quality and speech intelligibility. Moreover, a systematic evaluation showed that the proposed system generalized well to a wide range of acoustic conditions, including measured binaural room impulse responses reflecting different reverberation times, azimuth positions spanning the entire frontal hemifield, various source-receiver distances, and different artificial heads.
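
As an illustration only, and not the authors' implementation, the following Python sketch outlines the general processing chain summarized above: a two-channel delay-and-sum beamformer followed by a post-filter that scales individual time-frequency units. All function names, parameters, and the placeholder mask are hypothetical; in the proposed system the per-unit gains would be predicted by the trained neural network from short-term binaural cues rather than by the toy threshold rule used here.

# Minimal sketch (assumptions: known interaural delay, scipy/numpy available).
import numpy as np
from scipy.signal import stft, istft

fs = 16000      # assumed sampling rate
n_fft = 512     # assumed STFT frame length
hop = 256       # assumed hop size

def delay_and_sum(left, right, tdoa_samples):
    """Align the two ear signals by an assumed-known time difference of arrival
    and average them, reinforcing the direct sound relative to diffuse reverberation."""
    right_aligned = np.roll(right, int(round(tdoa_samples)))
    return 0.5 * (left + right_aligned)

def apply_tf_mask(signal, mask_fn):
    """Transform the beamformed signal to the time-frequency domain, scale each
    unit by a gain in [0, 1], and resynthesize. `mask_fn` stands in for the post-filter."""
    _, _, spec = stft(signal, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    gains = mask_fn(np.abs(spec))                      # shape: (freq bins, frames)
    _, enhanced = istft(spec * gains, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    return enhanced

def toy_mask(mag):
    """Placeholder post-filter: attenuate low-energy units, keep strong ones."""
    thresh = 0.1 * mag.max()
    return np.clip(mag / (thresh + 1e-12), 0.0, 1.0)

if __name__ == "__main__":
    # Synthetic two-channel input standing in for a reverberant binaural recording.
    n = fs  # one second of signal
    rng = np.random.default_rng(0)
    left = rng.standard_normal(n)
    right = np.roll(left, 8) + 0.1 * rng.standard_normal(n)   # crude interaural delay
    beamformed = delay_and_sum(left, right, tdoa_samples=-8)  # compensate the 8-sample delay
    enhanced = apply_tf_mask(beamformed, toy_mask)
    print(enhanced.shape)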