In modern discrete flexible manufacturing systems, dynamic disturbances frequently occur in real time, and each job may contain several operations subject to a partial-no-wait constraint imposed by technological requirements. To address this, a hierarchical multiagent deep reinforcement learning (DRL)-based real-time scheduling method, named hierarchical multiagent proximal policy optimization (HMAPPO), is developed for the dynamic partial-no-wait multiobjective flexible job shop scheduling problem (DMOFJSP-PNW) with new job insertions and machine breakdowns. The proposed HMAPPO contains three proximal policy optimization (PPO)-based agents operating at different spatiotemporal scales: an objective agent, a job agent, and a machine agent. The objective agent acts as a higher-level controller that periodically determines the temporary objective to be optimized. The job agent and the machine agent are lower-level actuators that, at each rescheduling point, choose a job selection rule and a machine assignment rule, respectively, to achieve the current temporary objective. Five job selection rules and six machine assignment rules are designed to select an uncompleted job and to assign its next operation, together with the successor operations bound by the no-wait constraint, to the corresponding processing machines. A hierarchical PPO-based training algorithm is also developed. Extensive numerical experiments confirm the effectiveness and superiority of the proposed HMAPPO over other well-known dynamic scheduling methods.

Note to Practitioners—The motivation for this article stems from the need for real-time scheduling methods in modern discrete flexible manufacturing factories, such as aerospace product and steel manufacturing plants, where dynamic events occur frequently and each job may contain several operations subject to the no-wait constraint.
Traditional dynamic scheduling methods, such as metaheuristics or dispatching rules, either suffer from poor time efficiency or fail to ensure good solution quality across multiple objectives in the long run. Meanwhile, few previous studies have considered the partial-no-wait constraint among several operations of the same job, which widely exists in many industries. In this article, we propose a hierarchical multiagent deep reinforcement learning (DRL)-based real-time scheduling method, named HMAPPO, to address the dynamic partial-no-wait multiobjective flexible job shop scheduling problem (DMOFJSP-PNW) with new job insertions and machine breakdowns. The proposed HMAPPO uses three DRL-based agents to adaptively select temporary objectives and to choose the most suitable dispatching rules to achieve them at successive rescheduling points, so that rescheduling decisions can be made in real time and a good compromise among the different objectives can be obtained over the long-term schedule. Extensive experimental results demonstrate the effectiveness and superiority of the proposed HMAPPO. For industrial applications, this method can be extended to many other production scheduling problems, such as hybrid flow shops and open shops with different uncertainties and objectives.
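The hierarchical decision structure described above (an objective agent acting periodically on a coarser time scale, with job and machine agents choosing dispatching rules at every rescheduling point) can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the random stubs stand in for trained PPO policies, and all names (the objective set, the rule lists, the `period` parameter) are hypothetical.

```python
import random

# Hypothetical temporary objectives and dispatching-rule pools. The paper
# specifies five job selection rules and six machine assignment rules, but
# their definitions are abstracted away here.
OBJECTIVES = ["makespan", "total_tardiness", "total_machine_load"]
JOB_RULES = [f"job_rule_{i}" for i in range(1, 6)]        # 5 job selection rules
MACHINE_RULES = [f"machine_rule_{i}" for i in range(1, 7)]  # 6 machine assignment rules


def objective_agent(state):
    """Higher-level controller: periodically fixes the temporary objective.
    A trained PPO policy would map the shop state to an objective here."""
    return random.choice(OBJECTIVES)


def job_agent(state, objective):
    """Lower-level actuator: picks a job selection rule for this objective."""
    return random.choice(JOB_RULES)


def machine_agent(state, objective):
    """Lower-level actuator: picks a machine assignment rule."""
    return random.choice(MACHINE_RULES)


def schedule(rescheduling_points, period=4):
    """Run the hierarchical loop over a sequence of rescheduling points
    (e.g. triggered by job insertions or machine breakdowns). The objective
    agent acts only every `period` points; the job and machine agents act
    at every point to serve the current temporary objective."""
    decisions = []
    objective = None
    for t, state in enumerate(rescheduling_points):
        if t % period == 0:  # coarser temporal scale for the objective agent
            objective = objective_agent(state)
        decisions.append(
            (objective, job_agent(state, objective), machine_agent(state, objective))
        )
    return decisions
```

In the actual method, each stub would be a PPO policy network trained with the hierarchical training algorithm, and the chosen rules would be executed against the shop state to select an uncompleted job and assign its next no-wait operation chain to machines.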