ABSTRACT The limited computational power of edge devices in distributed manufacturing systems makes it difficult to meet the real-time computing requirements of industrial big data. In addition, the large number of computational tasks incurs considerable energy costs. It is therefore crucial to tackle the multi-objective cloud-edge collaborative task offloading problem (MOCECTOP) effectively. This paper focuses on two optimisation objectives, total computation time delay and computational energy consumption, which correspond to improving work efficiency and reducing carbon emissions. The weights of these two objectives are hard to determine across different production stages. The challenge is to obtain, within a single training session, multiple models covering a range of possible objective weight pairs, and to achieve high-quality schedules for the MOCECTOP in real-time production line control. We address this challenge by proposing a hierarchical parameter sharing (HPS) multi-objective optimisation framework based on multi-agent deep reinforcement learning. The network model comprises a task selection agent and a computing node selection agent. The task selection agent prioritises tasks for computation based on their state features, while the computing node selection agent allocates an available computing device to the selected task. Parameter sharing and collaborative training are employed to solve the multi-objective problem while accounting for the differences in computational capability between the cloud and the edge. The HPS optimisation framework effectively fulfils the near real-time requirements for task computation in distributed manufacturing. Numerical experiments demonstrate that our HPS-based strategy quickly produces schedules superior to those of existing multi-objective solution methods.
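
To make the two-agent, parameter-sharing architecture described above more concrete, the following is a minimal sketch, assuming a PyTorch implementation in which the objective weight pair (delay vs. energy) is appended to the state so that a single trained network covers many weight settings. The class names, feature dimensions, and weight-conditioning scheme are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the paper's implementation): a weight-conditioned,
# hierarchical two-agent policy with a shared encoder. All names and
# dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class SharedEncoder(nn.Module):
    """Encoder shared by both agents (the parameter-sharing component)."""

    def __init__(self, state_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class HPSPolicy(nn.Module):
    """Task-selection and node-selection heads on top of the shared encoder.

    The objective weight pair (w_delay, w_energy) is concatenated to the
    state features so one training session yields a policy usable under
    different weight settings.
    """

    def __init__(self, state_dim: int, n_tasks: int, n_nodes: int,
                 hidden_dim: int = 128):
        super().__init__()
        # +2 for the two objective weights appended to the state features.
        self.encoder = SharedEncoder(state_dim + 2, hidden_dim)
        self.task_head = nn.Linear(hidden_dim, n_tasks)   # task selection agent
        self.node_head = nn.Linear(hidden_dim, n_nodes)   # node selection agent

    def forward(self, state: torch.Tensor, weights: torch.Tensor):
        h = self.encoder(torch.cat([state, weights], dim=-1))
        task_logits = self.task_head(h)   # which pending task to compute next
        node_logits = self.node_head(h)   # which cloud/edge node to run it on
        return task_logits, node_logits


# Usage: score actions for a batch of states under a chosen weight pair.
policy = HPSPolicy(state_dim=32, n_tasks=10, n_nodes=4)
state = torch.randn(8, 32)
weights = torch.tensor([[0.7, 0.3]]).expand(8, -1)  # delay vs. energy weights
task_logits, node_logits = policy(state, weights)
```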