In recent years, computing in memory (CIM) has been regarded as a promising candidate for low-power neural network accelerators, and implementations based on various memories, such as ReRAM and flash memory, have been proposed. However, a major obstacle looms over CIM: the limited frequency and high power consumption of DACs and ADCs have become the chief barriers to further improving the power efficiency of CIM accelerators. Migrating the DACs and ADCs to a more advanced process node to raise their frequency and lower their power consumption is largely infeasible, because the memory process and the CMOS logic process are tightly coupled in CIM accelerators. To address this problem, this paper proposes a CIM–digital heterogeneous neural network accelerating system with analog interconnection. The analog interconnection decouples the CMOS logic process from the memory process, allowing the ADCs and other units to be implemented in an independent CMOS process. Experimental results show that, compared with prior CIM accelerators, the proposed architecture increases the sampling rate 26.7-fold, to 100 MHz, with a corresponding 26.7-fold gain in performance, and improves power efficiency 5.8-fold.