Compute-in-memory (CIM) has attracted growing interest as a well-suited hardware accelerator for convolutional neural networks (CNNs), owing to its low power consumption and high inference accuracy. This work presents a novel time-domain CIM (TD-CIM) structure featuring: 1) a capacitor-charging scheme based on compact 8T cells that performs multiply-and-accumulate (MAC) operations on bit-serial inputs in the time domain; 2) a new replicated bit-line time-domain converter (RBL-TDC) that quantizes the MAC results with high accuracy; and 3) a 16 Kb TD-CIM macro fabricated in a 22 nm FD-SOI process using foundry-provided compact 8T-SRAM cells, which achieves a normalized energy efficiency of 5816.5 TOPS/W, a normalized area efficiency of 64 TOPS/mm2, and 14-bit output precision for MAC operations with 8-bit weights, 8-bit bit-serial inputs, and 64 accumulations per cycle. This work also obtains an inference accuracy of 92.57% on the VGG-16 network with the CIFAR-10 dataset over PVT variations.
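As a software illustration only (not the macro's circuit behavior), the sketch below models a hypothetical bit-serial MAC with 8-bit unsigned inputs and weights and 64 accumulations: each cycle applies one input bit to all 64 weights, so the per-cycle partial sum is at most 64 x 255 = 16320 < 2^14, consistent with the stated 14-bit output precision; shifting and combining the per-bit partial sums recovers the full dot product.

```python
import random

def bit_serial_mac(inputs, weights):
    """Compute sum(inputs[i] * weights[i]) one input bit per cycle.

    Hypothetical model of a bit-serial MAC: 8-bit unsigned inputs,
    8-bit unsigned weights, 64 accumulations per cycle.
    """
    assert len(inputs) == len(weights) == 64
    total = 0
    for bit in range(8):  # 8-bit serial inputs, LSB first
        # Per-cycle partial sum: at most 64 * 255 = 16320 < 2**14,
        # so 14-bit output precision per cycle suffices.
        partial = sum(((x >> bit) & 1) * w for x, w in zip(inputs, weights))
        assert partial < 1 << 14
        total += partial << bit  # weight this bit position and accumulate
    return total

# Check against a direct dot product on random 8-bit operands.
random.seed(0)
xs = [random.randrange(256) for _ in range(64)]
ws = [random.randrange(256) for _ in range(64)]
assert bit_serial_mac(xs, ws) == sum(x * w for x, w in zip(xs, ws))
```

In hardware, the per-cycle partial sum would be produced by the capacitor-charging array and quantized by the RBL-TDC; here the shift-and-add simply mimics that per-bit staging in software.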