Abstract

The goal of pedestrian trajectory prediction is to predict a pedestrian's future trajectory from the historical one. Multimodal information in the historical trajectory, especially visual information and position coordinates, is conducive to perception and positioning. However, most current algorithms ignore the significance of this multimodal information. We formulate pedestrian trajectory prediction as a multimodal problem in which the historical trajectory is divided into image and coordinate information. Specifically, we apply a fully connected long short-term memory (FC-LSTM) network and a convolutional LSTM (ConvLSTM) to process the position coordinates and the visual information respectively, and then fuse the two streams with a multimodal fusion module. On top of the fused representation, an attention pyramid social interaction module adaptively reasons about the complex spatial and social relations between the target pedestrian and its neighbors. The proposed approach is validated on several experimental tasks, on which it achieves better accuracy than competing methods.
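The coordinate branch described above — encode the past (x, y) positions with an FC-LSTM, then fuse the result with a separately encoded visual feature — can be sketched roughly as follows. This is a minimal pure-Python illustration, not the authors' implementation: the `FCLSTMCell` class, its constant shared weight, the fake visual vector, and the concatenation-based fusion are all hypothetical stand-ins (the paper uses learned parameters, a ConvLSTM over image patches, and a dedicated fusion module).

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


class FCLSTMCell:
    """Toy fully connected LSTM cell.

    Illustrative only: a single constant weight stands in for the
    learned input and recurrent weight matrices, so all hidden units
    compute the same value here.
    """

    def __init__(self, input_size, hidden_size, w=0.1):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.w = w  # hypothetical shared weight, not a learned parameter

    def step(self, x, h, c):
        new_h, new_c = [], []
        for j in range(self.hidden_size):
            # Pre-activation: weighted sum of the input and previous hidden state.
            s = self.w * (sum(x) + sum(h))
            i = sigmoid(s)      # input gate
            f = sigmoid(s)      # forget gate
            o = sigmoid(s)      # output gate
            g = math.tanh(s)    # candidate cell state
            cj = f * c[j] + i * g
            new_c.append(cj)
            new_h.append(o * math.tanh(cj))
        return new_h, new_c


# Encode a short historical (x, y) trajectory.
cell = FCLSTMCell(input_size=2, hidden_size=4)
h = [0.0] * 4
c = [0.0] * 4
trajectory = [(1.0, 1.0), (1.5, 1.2), (2.0, 1.4)]
for x, y in trajectory:
    h, c = cell.step([x, y], h, c)

# Hypothetical visual feature vector (in the paper, a ConvLSTM output).
visual = [0.3, -0.1, 0.5, 0.2]

# Concatenation stands in for the paper's multimodal fusion module;
# the fused vector would then feed the social interaction module.
fused = h + visual
print(len(fused))  # 8
```

The sketch only shows the data flow of the two modalities; the attention pyramid social interaction module, which operates on the fused features of the target and its neighbors, is omitted.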


