Deep learning-based image compression exploits the representational power of the autoencoder to achieve higher reconstruction quality than traditional image compression at the same bit rate, which better meets user needs. Designing a high-performance processor that increases the inference speed and efficiency of the deep learning image compression (DIC) network is therefore important for deploying this technology widely on mobile devices. To the best of our knowledge, no dedicated processor accelerates DIC with low power consumption, and general-purpose network accelerators based on field-programmable gate arrays (FPGAs) cannot process compression networks directly, so this paper proposes a processor suited to DIC. First, we analyze the image compression algorithm and quantize the network data to 16-bit fixed point using dynamic hierarchical quantization. Then, we design an operation module, the computational core of the processor, composed of convolution, sampling, and normalization units that pipeline the inference of each network layer. To achieve high-throughput inference, a processing-element group (PEG) array with local buffers is developed for the convolution computation. Because encoding and decoding share common components, the sampling and normalization units support both codec paths and are time-multiplexed during image compression. Driven by a control signal, the operation module reorders the data flow through the three units so that it performs either encoding or decoding. With these design methods, the DIC network is deployed on the Xilinx Zynq ZCU104 development board, achieving high-throughput image compression at six different bit rates. Experimental results show that the processor runs at 200 MHz and achieves 283.4 GOPS on the 16-bit fixed-point DIC network.
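
The quantization step can be illustrated with a short sketch. The paper's exact bit-allocation rule is not reproduced here; the per-layer policy below (deriving the fractional-bit count from each layer's maximum magnitude, a common form of dynamic fixed-point quantization) and the helper name `quantize_layer` are illustrative assumptions only.

```python
import numpy as np

def quantize_layer(tensor, word_bits=16):
    """Quantize one layer's data to signed fixed point.

    Assumed per-layer ("dynamic hierarchical") scheme: pick the number
    of fractional bits so the largest magnitude in the layer still fits
    in the integer range of a signed `word_bits`-bit word.
    """
    max_abs = np.max(np.abs(tensor))
    # Integer bits needed for max_abs, plus one sign bit.
    int_bits = int(np.ceil(np.log2(max_abs))) + 1 if max_abs > 0 else 1
    frac_bits = word_bits - int_bits
    scale = 2.0 ** frac_bits
    q = np.clip(np.round(tensor * scale),
                -2 ** (word_bits - 1), 2 ** (word_bits - 1) - 1)
    return q.astype(np.int16), frac_bits

# Example: quantize the weights of one convolution layer, then
# dequantize to check the fixed-point approximation error.
weights = np.random.randn(64, 3, 3, 3).astype(np.float32)
q_weights, frac_bits = quantize_layer(weights)
recovered = q_weights.astype(np.float32) / 2.0 ** frac_bits
```

Because each layer receives its own fractional-bit count, layers with small dynamic range keep more precision than a single network-wide fixed-point format would allow, which is the usual motivation for per-layer (hierarchical) schemes.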