With the development of deep hashing, several end-to-end deep architectures have been proposed for fast image retrieval. However, learning to hash is essentially a mixed-integer nonlinear optimisation problem that is NP-hard, which makes the standard back-propagation algorithm infeasible. We propose a novel pairwise loss function with an additional binary constraint, trained via a Siamese network, to improve the representational ability of hash codes. In contrast to previous works, we force the output of each hash node to be close to −1 or +1, which yields more compact and discriminative hash codes. Extensive experimental results demonstrate that the proposed method outperforms existing state-of-the-art hash-function learning methods by large margins.
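To make the idea concrete, a minimal NumPy sketch of one plausible form of such a loss is given below: a contrastive pairwise term over a Siamese pair of relaxed (real-valued) hash codes, plus a quantization penalty that pushes every output towards −1 or +1. The margin, the penalty weight `lam`, and the exact functional form are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def pairwise_hash_loss(h1, h2, similar, margin=2.0, lam=0.1):
    """Contrastive pairwise loss with a binary (+/-1) quantization penalty.

    h1, h2 : real-valued network outputs (relaxed hash codes)
    similar: 1 if the pair shares a label, 0 otherwise
    margin and lam are assumed hyperparameters for this sketch.
    """
    d = np.linalg.norm(h1 - h2)                  # Euclidean distance between codes
    if similar:
        pair_term = d ** 2                       # pull similar pairs together
    else:
        pair_term = max(0.0, margin - d) ** 2    # push dissimilar pairs apart
    # Quantization penalty: encourage each output to lie near -1 or +1,
    # so thresholding with sign() loses little information.
    quant = np.sum((np.abs(h1) - 1) ** 2) + np.sum((np.abs(h2) - 1) ** 2)
    return pair_term + lam * quant

# Toy example: a similar pair with near-binary codes incurs a low loss,
# while a dissimilar pair with identical codes is penalised by the margin.
a = np.array([0.9, -0.95, 1.0, -1.0])
b = np.array([1.0, -0.9, 0.95, -1.0])
print(pairwise_hash_loss(a, b, similar=1))   # small loss
print(pairwise_hash_loss(a, a, similar=0))   # large loss (margin violated)
```

At retrieval time the relaxed codes would be binarised with `np.sign`, and the quantization term keeps that step from discarding much of the learned structure.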