Manufacturing companies have recently sought to increase resource utilization while keeping production lines flexible enough to respond to rapidly changing market environments and customer demands. In the capital-intensive high-tech manufacturing industry, which requires expensive facilities, re-entrant production lines are used to produce efficiently with limited resources. In such a production system, a part visits a specific station repeatedly during the production period. A re-entrant production line therefore requires an appropriate scheduling system, because parts with different processing requirements compete for the same station. In this study, a re-entrant production line was modeled as a manufacturing environment via discrete-event simulation, and an adaptive scheduling system was developed to improve its operational performance by applying deep reinforcement learning (DRL). To this end, a software architecture integrating DRL with the simulation was designed, and the states, actions, and rewards of the reinforcement learning (RL) agent were defined. Moreover, a discrete-event simulation control module was designed to collect data from the simulation model and to evaluate the policy network trained via DRL. Finally, the applicability and effectiveness of the developed scheduling system were verified through experiments on a hypothetical re-entrant production line.
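The state-action-reward formulation described above can be illustrated with a minimal sketch. The environment below is a toy re-entrant line, not the system developed in this study: every job visits a single shared station a fixed number of times before completion, the state is the vector of queue lengths per visit stage, the action selects which queue to serve, and the reward combines throughput with a work-in-process penalty. All class names, parameters, and the dispatching rule are illustrative assumptions.

```python
import random

class ReentrantLineEnv:
    """Toy re-entrant line: each job visits the shared station
    N_VISITS times before leaving the system. The state, action,
    and reward follow a common DRL scheduling formulation; all
    values here are illustrative, not those of the actual study."""
    N_VISITS = 3      # re-entrancy degree (visits per job)
    ARRIVAL_P = 0.5   # probability a new job arrives each step

    def reset(self):
        # queues[k] = jobs waiting for their (k+1)-th station visit
        self.queues = [0] * self.N_VISITS
        self.done_jobs = 0
        return self._state()

    def _state(self):
        return tuple(self.queues)

    def step(self, action):
        """action = index of the visit-stage queue to serve next."""
        reward = 0.0
        if self.queues[action] > 0:
            self.queues[action] -= 1
            if action + 1 < self.N_VISITS:
                self.queues[action + 1] += 1   # job re-enters the line
            else:
                self.done_jobs += 1
                reward += 1.0                  # throughput reward
        if random.random() < self.ARRIVAL_P:
            self.queues[0] += 1                # new job arrival
        reward -= 0.01 * sum(self.queues)      # WIP penalty
        return self._state(), reward

random.seed(0)
env = ReentrantLineEnv()
state = env.reset()
for _ in range(200):
    # placeholder dispatching rule: serve the longest queue;
    # a trained policy network would choose the action instead
    action = max(range(env.N_VISITS), key=lambda k: state[k])
    state, reward = env.step(action)
```

In the architecture described in the study, a simulation control module would advance the model between decision points and pass such state observations to the DRL agent, whose chosen action replaces the longest-queue rule used here.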