Abstract

Quantitative description of perspective geometries is a challenging task due to the complexity of geometric shapes. In this paper, we address this gap by proposing a new methodology based on variational autoencoders (VAE) to derive low-dimensional, exploitable parameters of the perspective road geometry. First, road perspective images were generated from different alignment scenarios. Then, a VAE was built to create a regularized and exploitable latent space from the data. The latent space is a compressed representation of the perspective geometry, from which six latent parameters were derived. Without prior expert knowledge, four of the latent parameters were found to represent distinctive attributes of the geometry, such as visual curvature, slope, sight distance, and curve direction. The latent parameters provide quantitative measurements of what the design scheme looks like in perspective view. It was found that a road with a low accident rate has low values for codes 4 and 5, high values for code 3, and low variance for codes 3 and 6. The trained VAE model also ensured accurate generation of the perspective images by decoding the latent parameters. Overall, this research advances the understanding of road design by considering the driver's perception.
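To make the described pipeline concrete, the following is a minimal sketch of a VAE with a six-dimensional latent space of the kind the abstract refers to. The image size, layer widths, KL weight, and all names here are assumptions for illustration only; the paper's actual network architecture and training settings are not given in the abstract.

```python
# Minimal VAE sketch with a 6-dimensional latent space (illustrative only).
# Image size (128x128 grayscale), layer widths, and the KL weight `beta`
# are assumed values, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerspectiveVAE(nn.Module):
    def __init__(self, latent_dim: int = 6, img_size: int = 128):
        super().__init__()
        self.img_size = img_size
        flat = img_size * img_size  # flattened grayscale perspective image

        # Encoder: image -> mean and log-variance of the latent distribution
        self.enc = nn.Sequential(
            nn.Linear(flat, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(128, latent_dim)
        self.fc_logvar = nn.Linear(128, latent_dim)

        # Decoder: latent code -> reconstructed perspective image
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, flat), nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.enc(x.view(x.size(0), -1))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, so gradients flow through mu and logvar
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        recon = self.dec(z).view(-1, 1, self.img_size, self.img_size)
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar, beta: float = 1.0):
    # Reconstruction term plus the KL divergence that regularizes the latent space
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld
```

In such a setup, encoding a rendered perspective image would yield the six latent codes analogous to those discussed above, and decoding a code vector would regenerate the corresponding perspective view.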
