Artificial intelligence (AI) and machine learning (ML) have become ubiquitous, extracting insight from images, sounds, and other ever-growing data sets. While the algorithms underlying deep neural networks (DNNs) were developed in the 1980s, their practical use has been enabled by logic technology scaling and by hardware acceleration based on GPUs, an architecture well suited to the massively parallel multiply-accumulate operations in neural networks. GPUs, however, are not designed specifically for neural networks, which highlights the potential for even more efficient hardware acceleration of large data sets and complex algorithms within the available time and power budgets. In the race to leverage data, further CMOS-based accelerator designs are indeed projected to deliver performance gains over today's state-of-the-art GPUs.1 But to enable a truly disruptive advantage with analog compute, the material for the DNN building block, i.e., the synaptic weight, has to be identified. Only then will we be able to emulate the very nature of DNNs by performing forward and backward operations in-memory, with a projected efficiency gain of several orders of magnitude.2

The mapping of synaptic weights onto device arrays for analog in-memory computing of the multiply-accumulate and update functions imposes strict requirements on the material set, associated not only with electrical properties but also with compatibility with CMOS processing. Existing memory elements such as phase-change memory (PCM) and resistive random access memory (ReRAM) are strong candidates owing to their non-volatile nature and process maturity, but they suffer from drift, programming asymmetry, and stochasticity.3 Such drawbacks can be addressed at both the synaptic-cell design and material levels to improve accuracy for DNN acceleration. In addition, new material sets, such as thin-film ferroelectrics, are being considered for such applications but remain to be tested at dimensions relevant to large-scale integration for multi-bit operation. In contrast, the electrochemical random access memory (ECRAM) has been designed to meet the requirements of deep learning acceleration. Ionic motion through a solid-state electrolyte programs the resistance of a host matrix in which redox reactions take place, and sense electrodes decouple the read and write cycles of the cell.4 One example of ECRAM relies on the intercalation of Li ions in a WO3 channel, with LiPON as the electrolyte.5

We show that such cells can exhibit upwards of 1,000 discrete levels with near-ideal potentiation and depression symmetry, over a resistance range spanning several orders of magnitude and compatible with insertion into large arrays. ECRAM cells are shown to scale down to 100 × 100 nm and can be programmed with 5 ns write pulses. While we predict floating-point accuracy on the MNIST data set when ECRAM is used within a restricted dynamic range for deep learning, key trade-offs need to be addressed to further validate ECRAM as a candidate synaptic element for analog compute.
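Before turning to those trade-offs, the behavior described above can be made concrete with a minimal sketch. The Python toy model below is an illustration only, not measured ECRAM data: the conductance range, level count, asymmetry factor, and pulse scaling are assumed for the example. It maps a weight matrix onto a conductance array, computes the forward multiply-accumulate as a column current sum, and applies pulse-based potentiation and depression with a finite number of discrete levels.

```python
import numpy as np

class AnalogSynapseArray:
    """Toy model of a crossbar of analog synaptic cells (ECRAM-like).

    Illustrative assumptions: n_levels discrete conductance states between
    g_min and g_max, and an asymmetry factor that shrinks depression steps
    relative to potentiation steps (0.0 = ideal symmetry).
    """

    def __init__(self, rows, cols, n_levels=1000,
                 g_min=1e-7, g_max=1e-6, asymmetry=0.0):
        self.g_min, self.g_max = g_min, g_max
        self.step = (g_max - g_min) / n_levels               # conductance change per pulse
        self.asymmetry = asymmetry
        self.g = np.full((rows, cols), (g_min + g_max) / 2)  # start all cells mid-range

    def forward(self, v_in):
        """Analog multiply-accumulate: column currents I = G^T V (Kirchhoff sum)."""
        return self.g.T @ v_in

    def update(self, n_pulses):
        """Apply signed write-pulse counts per cell (positive = potentiation)."""
        dg = n_pulses * self.step
        dg[n_pulses < 0] *= (1.0 - self.asymmetry)           # non-ideal depression, if any
        self.g = np.clip(self.g + dg, self.g_min, self.g_max)


# Example: one forward pass and a rank-one (outer-product) weight update
arr = AnalogSynapseArray(rows=4, cols=3, asymmetry=0.05)
x = np.array([0.2, 0.0, 0.5, 0.3])                           # input voltages (a.u.)
print("column currents:", arr.forward(x))
delta = np.array([1.0, -2.0, 0.5])                           # backpropagated errors (a.u.)
pulses = np.rint(np.outer(x, delta) * 50)                    # pulse counts per cell
arr.update(pulses)
print("updated conductances:\n", arr.g)
```

Running the same loop with the asymmetry factor set to a few percent versus zero is a simple way to see why near-ideal potentiation and depression symmetry matters for training accuracy.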
First, the cell complexity, beyond the existence of a third terminal, will depend on the amplitude and variation of the open-circuit voltage associated with the electrochemical potential of the ionic species in the channel and reservoir materials. Second, capacitive transients, observed after the write stimulus is applied to the cell, need to be mitigated so as not to dominate learning operations.6 Finally, while energy materials offer unique properties for ECRAM, their patterning and adoption in a CMOS environment remain challenging.
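Returning to the second point, the impact of a capacitive transient can be pictured with an equally simple model. In the sketch below, the programmed conductance, transient amplitude, and decay time constant are hypothetical numbers chosen for illustration, not measurements: a read taken too soon after a write pulse reports a value dominated by the transient rather than by the programmed state, which is why such transients must decay, or be compensated, before the weight is sensed during learning.

```python
import numpy as np

# Hypothetical post-write read model: the sensed conductance is the programmed
# value plus a capacitive transient that decays exponentially after the pulse.
g_programmed = 5e-7    # programmed channel conductance (S), illustrative
dg_transient = 2e-6    # transient amplitude right after the write (S), illustrative
tau = 100e-9           # transient decay time constant (s), illustrative

def sensed_conductance(t_after_write):
    """Conductance seen by the read circuit at a delay t_after_write (s)."""
    return g_programmed + dg_transient * np.exp(-t_after_write / tau)

for t in (10e-9, 100e-9, 1e-6):
    g = sensed_conductance(t)
    err = 100 * (g - g_programmed) / g_programmed
    print(f"read {t*1e9:7.1f} ns after write: {g:.2e} S ({err:5.1f}% error)")
```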