Abstract

The design and planning of railway alignments is a dominant task in railway construction. However, both manual and automated design methods struggle to incorporate self-learning and human experience. Moreover, many existing approaches require predefined numbers of horizontal or vertical points of intersection as input. To address these issues, this study employs deep reinforcement learning (DRL) to optimize mountainous railway alignments with the goal of minimizing construction costs. First, in the DRL model, the state of the railway alignment optimization environment is determined, and the action and reward function of the optimization agent are defined along with the corresponding alignment constraints. Second, we integrate a recent DRL algorithm, the deep deterministic policy gradient (DDPG), with optional human experience to obtain the final optimized railway alignment, and the influence of human experience is demonstrated through a sensitivity analysis. Finally, this methodology is applied to a real-world case study in a mountainous region. The results verify that the DRL approach can automatically explore and optimize the railway alignment while satisfying various alignment constraints, decreasing the construction cost by 17.65% compared with the manual alignment and by 7.98% compared with a method based on the distance transform.
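To make the state/action/reward formulation mentioned above concrete, the following is a minimal toy sketch of an alignment-optimization environment in the style described. Everything here is an illustrative assumption, not the paper's actual model: the synthetic terrain function, the cost coefficients, the 3% gradient limit, and the state definition (current position plus the vector to the goal) are all placeholders a DDPG agent could be trained against.

```python
import numpy as np

class AlignmentEnv:
    """Toy railway-alignment environment (illustrative sketch only).

    The terrain model, cost weights, and constraint values below are
    hypothetical stand-ins for the paper's real formulation.
    """
    MAX_GRADIENT = 0.03   # assumed maximum allowed track gradient (3%)
    STEP_LENGTH = 500.0   # horizontal distance advanced per action (m)

    def __init__(self, start_xyz, goal_xy):
        self.start = np.asarray(start_xyz, dtype=float)
        self.goal = np.asarray(goal_xy, dtype=float)
        self.reset()

    def terrain_elevation(self, x, y):
        # Smooth synthetic "mountainous" surface standing in for real DEM data.
        return 100.0 * np.sin(x / 2000.0) * np.cos(y / 1500.0)

    def reset(self):
        self.pos = self.start.copy()
        return self._state()

    def _state(self):
        # State: current (x, y, z) plus the 2-D vector to the goal.
        return np.concatenate([self.pos, self.goal - self.pos[:2]])

    def step(self, action):
        # Continuous action: (heading in radians, gradient of the next segment).
        heading, gradient = action
        # Enforce the gradient constraint by clipping the agent's action.
        gradient = float(np.clip(gradient, -self.MAX_GRADIENT, self.MAX_GRADIENT))
        dx = self.STEP_LENGTH * np.cos(heading)
        dy = self.STEP_LENGTH * np.sin(heading)
        dz = self.STEP_LENGTH * gradient
        self.pos = self.pos + np.array([dx, dy, dz])

        # Reward: negative construction-cost proxy. Earthwork cost grows with
        # the gap between track elevation and ground elevation.
        ground = self.terrain_elevation(self.pos[0], self.pos[1])
        earthwork = abs(self.pos[2] - ground)
        reward = -(1.0 * self.STEP_LENGTH + 50.0 * earthwork)

        done = np.linalg.norm(self.goal - self.pos[:2]) < self.STEP_LENGTH
        return self._state(), reward, done
```

A DDPG agent would map the 5-dimensional state to the 2-dimensional continuous action, with human experience optionally seeding the replay buffer with trajectories from an expert-drawn alignment.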
