Neural network (NN) computing involves a large number of multiply-and-accumulate (MAC) operations. The performance of NN accelerators built on the traditional von Neumann architecture is limited by tremendous off-chip memory accesses. Resistive random-access memory (ReRAM)-based crossbars can naturally perform matrix–vector multiplication (MVM) operations and are thus well suited to NN accelerators. In existing ReRAM-based NN accelerators, the synaptic weights, represented by the conductances of ReRAM cells, are mainly based on binary coding. However, the imperfect fabrication process combined with stochastic filament-based switching leads to resistance variations in ReRAMs, which can significantly alter the weights in binary synapses and degrade the NN accuracy. The accuracy deteriorates further when multilevel cells (MLCs) are used to reduce hardware overhead. In this article, a novel unary coding of synaptic weights is proposed to overcome the resistance variations of MLCs and achieve reliable ReRAM-based neuromorphic computing. A variation-aware optimal mapping scheme is also proposed in compliance with the unary coding to guarantee high accuracy by leveraging a unique feature of unary coding: the existence of multiple ways to represent the same value. The optimal mapping yields very small weight errors under the resistance variations of MLCs. Our simulation results show that under resistance variations, the proposed method achieves less than 0.08% and 3.43% accuracy loss on CIFAR10 and ImageNet, respectively, compared to the ideal accuracy. With each synaptic weight represented by four 2-b MLCs, the proposed method improves the accuracy over the traditional binary coding scheme by 83.39% and 87.6% for CIFAR10 and ImageNet, respectively.
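The key property exploited above can be sketched in a few lines: with unary coding, a weight is the sum of several MLC levels, so the same value has many representations, and a variation-aware mapping can pick the least error-prone one. The following is a minimal illustration only; the four-cell/2-b configuration matches the abstract, but the per-level variation figures and the variance-minimizing selection rule are assumptions for demonstration, not the paper's actual mapping algorithm.

```python
from itertools import product

def unary_representations(w, cells=4, levels=4):
    """All tuples of per-cell levels (each 0..levels-1) whose sum equals w.

    With four 2-b MLCs, a weight in [0, 12] typically has many such tuples,
    unlike binary coding, where each value has exactly one representation.
    """
    return [t for t in product(range(levels), repeat=cells) if sum(t) == w]

# Assumed per-level conductance standard deviations (illustrative numbers
# only): higher resistance levels of filament-based ReRAM often vary more.
SIGMA = {0: 0.00, 1: 0.05, 2: 0.10, 3: 0.20}

def variation_aware_mapping(w):
    """Pick the representation with the smallest total level variance.

    A stand-in for the paper's variation-aware optimal mapping: among all
    unary representations of w, choose the one least affected by variation.
    """
    return min(unary_representations(w), key=lambda t: sum(SIGMA[l] ** 2 for l in t))

print(unary_representations(2))   # several tuples, e.g. (0, 0, 0, 2), (0, 1, 1, 0), ...
print(variation_aware_mapping(6))  # prefers mid levels over a single noisy level 3
```

Under these assumed variances, weight 6 maps to two cells at level 2 and two at level 1 rather than two cells at level 3, because the lower levels contribute less aggregate variation.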