In recent years, integrated optical processing units (IOPUs) have demonstrated advantages in energy efficiency and computational speed for neural network inference. However, constrained by current optical integration technology, IOPUs face serious challenges in practicality and versatility. In this work, a scalable parallel photonic processing unit (SPPU) for accelerating various neural networks, based on high-speed phase modulation, is proposed and implemented on a silicon-on-insulator platform. The SPPU supports parallel processing and can switch among multiple computational paradigms simply and without latency to infer different neural network structures, making full use of the on-chip components. It adopts a scalable and process-friendly architecture, achieving a photonic-core energy efficiency of 0.83 TOPS/W, two to ten times higher than that of existing integrated solutions. In proof-of-concept experiments, a convolutional neural network (CNN), a residual CNN, and a recurrent neural network (RNN) are all implemented on the photonic processor to handle handwritten digit classification, signal modulation format recognition, and review sentiment recognition. The SPPU thus achieves multi-task parallel processing, offering a promising route toward maximizing the utility of on-chip components under the constraints of integration technology and making IOPUs more practical and universal.
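To illustrate why paradigm switching on one core is possible at all, the following minimal NumPy sketch (purely illustrative; the function names, shapes, and mapping are assumptions for exposition, not the authors' implementation) shows how all three demonstrated network types reduce to the same matrix-vector primitive, which a single photonic core could then execute under different dataflows.

import numpy as np

def matvec(W, x):
    # Stand-in for one pass through the photonic core (hypothetical abstraction).
    return W @ x

def conv2d_as_matvecs(image, kernel):
    # CNN paradigm: slide the kernel and issue one matvec per output pixel
    # (im2col-style mapping of convolution to matrix-vector products).
    k = kernel.shape[0]
    h, w = image.shape[0] - k + 1, image.shape[1] - k + 1
    w_row = kernel.reshape(1, -1)  # kernel flattened into a 1 x k^2 weight row
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = image[i:i+k, j:j+k].reshape(-1)
            out[i, j] = matvec(w_row, patch)[0]
    return out

def rnn_step(W_h, W_x, h, x):
    # RNN paradigm: the same matvec primitive, now with recurrent feedback.
    return np.tanh(matvec(W_h, h) + matvec(W_x, x))

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
ker = rng.standard_normal((3, 3))

fmap = conv2d_as_matvecs(img, ker)        # convolutional layer
res = fmap + img[1:-1, 1:-1]              # residual CNN: shortcut added around the conv

W_h, W_x = 0.5 * np.eye(4), 0.1 * rng.standard_normal((4, 4))
h = np.zeros(4)
for x_t in rng.standard_normal((5, 4)):   # 5-step input sequence
    h = rnn_step(W_h, W_x, h, x_t)

Because every layer type is expressed through the same primitive, switching between CNN, residual, and recurrent inference is a matter of rerouting inputs and outputs rather than reconfiguring the core, which is consistent with the latency-free paradigm switching claimed in the abstract.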