The resistive cross-point array architecture has been proposed for on-chip implementation of the weighted-sum and weight-update operations in neuro-inspired learning algorithms. However, several limiting factors potentially hamper the learning accuracy, including nonlinearity and device variations in the weight update, and read noise, limited ON/OFF weight ratio, and array parasitics in the weighted sum. Using unsupervised sparse coding as a case-study algorithm, this paper employs device-algorithm co-design methodologies to quantify and mitigate the impact of these non-ideal properties on accuracy. Our analysis shows that the realistic properties in the weight update are tolerable, while those in the weighted sum are detrimental to the accuracy. With realistic synaptic behaviors calibrated from experimental data, our study shows that the recognition accuracy on MNIST handwritten digits degrades from ∼96 to ∼30 percent. The strategies to mitigate this accuracy loss include 1) redundant cells to alleviate the impact of device variations; 2) a dummy column to eliminate the off-state current; and 3) a selector and larger wire width to reduce the IR drop along interconnects. The selector also reduces the leakage power during weight update. With the properties improved by these strategies, the accuracy recovers to ∼95 percent, enabling reliable integration of realistic synaptic devices in neuromorphic systems.
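
The sketch below is a minimal, illustrative model of the crossbar weighted sum with two of the non-idealities discussed above (limited ON/OFF ratio and read noise) and the dummy-column correction. All parameter values (`G_ON`, `G_OFF`, `READ_NOISE_SIGMA`, `V_READ`) and function names are hypothetical choices for illustration, not values or code from the paper; the full device-algorithm co-design simulation is not reproduced here.

```python
import numpy as np

# Hypothetical parameters for illustration only (not taken from the paper)
G_ON, G_OFF = 1e-6, 1e-8          # on/off conductances (S), ON/OFF ratio = 100
READ_NOISE_SIGMA = 0.02           # relative std. dev. of read noise per cell
V_READ = 0.1                      # read voltage (V)

rng = np.random.default_rng(0)

def weights_to_conductance(W):
    """Map normalized weights in [0, 1] to cell conductances."""
    return G_OFF + W * (G_ON - G_OFF)

def crossbar_weighted_sum(W, x):
    """Column currents of the array for a binary input (read-voltage) pattern x."""
    G = weights_to_conductance(W)
    I_cell = G * (V_READ * x)[:, None]                     # per-cell read currents
    I_cell *= 1 + READ_NOISE_SIGMA * rng.standard_normal(I_cell.shape)
    return I_cell.sum(axis=0)                              # Kirchhoff summation per column

def dummy_column_current(x, n_rows):
    """Current of a reference column whose cells are all kept in the off state."""
    I = np.full(n_rows, G_OFF) * V_READ * x
    I *= 1 + READ_NOISE_SIGMA * rng.standard_normal(n_rows)
    return I.sum()

# Example: 64 input rows, 16 output columns
W = rng.random((64, 16))                                   # ideal normalized weights
x = rng.integers(0, 2, 64).astype(float)                   # binary input pattern

I_col = crossbar_weighted_sum(W, x)
I_ref = dummy_column_current(x, W.shape[0])

raw       = I_col / (V_READ * G_ON)                        # off-state current leaks into the sum
corrected = (I_col - I_ref) / (V_READ * (G_ON - G_OFF))    # dummy column cancels the G_OFF term

print("ideal           :", (W.T @ x)[:4])
print("raw             :", raw[:4])
print("dummy-corrected :", corrected[:4])
```

In this toy model, subtracting the dummy-column current removes the common G_OFF contribution from every column, so the remaining error comes only from read noise; this illustrates why the dummy column mitigates the limited ON/OFF ratio in the weighted sum.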