Most current processing-in-memory (PIM) architectures are application-specific, that is, they can only accelerate particular functions, e.g., matrix-vector dot products for neural network acceleration. However, practical applications usually involve a variety of functions. To accelerate these different functions, various accelerators and dedicated circuits have been proposed. In this work, by exploiting the similarities among several commonly used dedicated circuits, we adopt ferroelectric field-effect transistors (FeFETs) to build a reconfigurable multifunctional memory architecture named Re-FeMAT. Re-FeMAT is composed of multiple processing elements (PEs). Each PE not only serves as a nonvolatile memory array but can also perform logic operations (the PIM mode), convolutions (the binary convolutional neural network (BNN) and convolutional neural network (CNN) acceleration modes), and content search (the ternary content-addressable memory (TCAM) mode) without changing the circuit structure. Re-FeMAT can therefore support applications that require multiple functions. For example, by configuring different PEs to different working modes and using a simulated annealing algorithm or a tabu search algorithm to optimize the task-to-PE assignment, Re-FeMAT can accelerate few-shot learning applications end to end. Our simulation results based on a calibrated FeFET model show that the proposed Re-FeMAT architecture achieves better performance and power efficiency than the previous FeMAT architecture. Compared with FeFET-based single-function circuits, Re-FeMAT dissipates more power in some modes but still achieves a smaller power-delay product. Compared with a state-of-the-art FeFET-based multifunctional accelerator named attention-in-memory, Re-FeMAT achieves lower power, latency, and energy when accelerating a complete few-shot learning task.
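To make the task-to-PE assignment step concrete, the following is a minimal, illustrative sketch of a simulated annealing search over assignments of application tasks to PEs and working modes. The task names, mode labels, cost table, and helper functions are hypothetical placeholders and do not reflect Re-FeMAT's actual scheduler, cost model, or hardware parameters.

```python
import math
import random

# Hypothetical illustration: assign tasks to PEs (each configured in one mode)
# so that a made-up latency cost is minimized via simulated annealing.
TASKS = ["embedding_cnn", "logic_preproc", "tcam_lookup", "bnn_classifier"]
PE_MODES = ["CNN", "PIM", "TCAM", "BNN"]
NUM_PES = 4

# Assumed latency (arbitrary units) of running a task on a PE in a given mode;
# unlisted (task, mode) pairs get a large penalty.
COST = {
    ("embedding_cnn", "CNN"): 1.0, ("embedding_cnn", "BNN"): 3.0,
    ("logic_preproc", "PIM"): 1.0, ("logic_preproc", "CNN"): 4.0,
    ("tcam_lookup", "TCAM"): 1.0, ("tcam_lookup", "PIM"): 5.0,
    ("bnn_classifier", "BNN"): 1.0, ("bnn_classifier", "CNN"): 2.5,
}

def cost(assignment):
    """Total latency of an assignment: task -> (pe_index, mode)."""
    return sum(COST.get((task, mode), 10.0) for task, (_, mode) in assignment.items())

def random_assignment():
    return {t: (random.randrange(NUM_PES), random.choice(PE_MODES)) for t in TASKS}

def neighbor(assignment):
    """Perturb one task's PE index or mode to generate a candidate solution."""
    new = dict(assignment)
    task = random.choice(TASKS)
    new[task] = (random.randrange(NUM_PES), random.choice(PE_MODES))
    return new

def simulated_annealing(steps=5000, t_start=5.0, t_end=0.01):
    current = random_assignment()
    best = current
    for i in range(steps):
        temp = t_start * (t_end / t_start) ** (i / steps)  # geometric cooling schedule
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best, cost(best)

if __name__ == "__main__":
    assignment, total = simulated_annealing()
    print(f"best cost: {total}")
    for task, (pe, mode) in assignment.items():
        print(f"  {task} -> PE{pe} ({mode})")
```

A tabu search variant would follow the same outline but replace the temperature-based acceptance rule with a short-term memory of recently visited assignments that are temporarily forbidden.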