The ever-growing data volume of the big data era drives designers to seek processor architectures that are more efficient in both performance and energy consumption. Among the various computing patterns in big data applications, string operations are a common and important part of data processing. However, because they are designed for generality, current general-purpose CPUs are inefficient in both performance and energy consumption when processing the simple, fixed computation patterns of discrete string operations. Moreover, moving massive amounts of data from memory to computing units through NoCs, cache hierarchies, and other memory-access data paths is time-consuming and, in particular, energy-consuming. Fortunately, emerging technologies such as the Hybrid Memory Cube (HMC) enable processing-in-memory (PIM) functionality without transferring massive data to remote processing units. In this paper, we propose PIM-WEAVER, a novel high-efficiency acceleration architecture for string processing based on the PIM mechanism, enabled by 3D integration technology that stacks logic and memory dies in a single package. PIM-WEAVER applies this technology by integrating string processing units into real-world HMC memory. In PIM-WEAVER, a general-purpose acceleration architecture for string operations is implemented within the memory cube, which reduces data-transfer latency and saves energy. We also propose a full-stack solution comprising a programming interface and an instruction-level control mechanism. Our comprehensive evaluation using typical string processing algorithms from big data applications shows that PIM-WEAVER achieves an average speedup of 14.74x over a high-performance Intel processor and reduces energy consumption by 82.1% on average, with negligible area overhead.