Memristive crossbars have become a popular means of realizing unsupervised and supervised learning techniques. In previous neuromorphic architectures with leaky integrate-and-fire neurons, the crossbar itself has been separated from the neuron capacitors to preserve mathematical rigor. In this paper, we sought to design a simplified sparse-coding circuit without this restriction, resulting in a fast circuit that approximated a sparse coding operation with minimal loss in accuracy. We showed that connecting the neurons directly to the crossbar resulted in a more energy-efficient sparse coding architecture and eliminated the need to prenormalize receptive fields. This paper provides derivations for the design of such a network, named the simple spiking locally competitive algorithm, as well as CMOS designs and results on the CIFAR-10 and MNIST data sets. Compared to a nonspiking, nonapproximate model that scored 33% accuracy on CIFAR-10 with a single-layer classifier, this hardware scored 32%. When paired with a state-of-the-art deep learning classifier, the nonspiking model achieved 82% accuracy and our simplified, spiking model achieved 80% while compressing the input data by 92%. Compared to a previously proposed spiking model, our proposed hardware consumed 99% less energy to do the same work at 21× the throughput. Accuracy was maintained with online learning up to a write variance of 3%, which is suitable for the often-reported 4-bit resolution required by neuromorphic algorithms; with offline learning, up to a write variance of 27%; and up to a read variance of 40%. The proposed architecture's accuracy, throughput, and significantly lower energy usage demonstrate the utility of these innovations.
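
As context for the derivations referenced above, a standard formulation of the nonspiking locally competitive algorithm from the sparse coding literature is sketched below; the symbols \Phi, x, u, a, and T_{\lambda} follow the usual convention in that literature and are assumptions here, not notation taken from this paper:

\[
\tau\,\dot{u}(t) = \Phi^{\top}x - u(t) - \left(\Phi^{\top}\Phi - I\right)a(t),
\qquad a(t) = T_{\lambda}\!\left(u(t)\right),
\]

where the columns of \Phi hold the receptive fields (the dictionary), x is the input, u collects the neuron internal potentials, a is the resulting sparse code, and T_{\lambda} is a thresholding function with sparsity parameter \lambda. The spiking architecture described in this paper approximates dynamics of this form directly in hardware.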