Abstract

Artificial intelligence technology has been widely applied across many fields in recent years. For typhoons, trajectory prediction technology can reduce the loss of human life and property caused by typhoon movements. From a deep learning perspective, multimodal learning and multitask learning are applied to trajectory prediction, and a trajectory prediction model based on deep multimodal fusion and multitask generation (Trj-DMFMG) is proposed. The model consists of two main modules: a deep multimodal fusion module and a multitask generation module. The deep multimodal fusion module is composed of several multimodal fusion modules. First, the multimodal trajectory sequence is divided into multiple multimodal subtrajectories using a sliding window. Then, each multimodal fusion module trains on the different modal data and performs feature fusion through a long short-term memory network (LSTM) and a 3D convolutional neural network (3D CNN). Finally, the features generated by the multiple multimodal fusion modules are deeply fused. The multitask generation module first trains on the deep fusion features produced by the deep multimodal fusion module through an LSTM and then predicts longitude and latitude simultaneously. In this paper, real typhoon data from the Northwest Pacific Ocean are used for simulation experiments. A comprehensive comparison of the longitude and latitude prediction results shows that Trj-DMFMG achieves the best prediction performance and is more accurate and stable in long-term prediction.
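
To make the described pipeline concrete, the sketch below outlines one plausible layout of the model in PyTorch: per-window multimodal fusion blocks (an LSTM for the track sequence and a 3D CNN for gridded meteorological fields), deep fusion of the per-window features, and a shared LSTM feeding two heads that predict longitude and latitude jointly. This is a minimal illustration, not the authors' implementation; the layer sizes, window count, and tensor shapes are assumptions.

```python
# Minimal sketch of the Trj-DMFMG pipeline described in the abstract.
# All hyperparameters (hidden size, window count, grid channels) are illustrative.
import torch
import torch.nn as nn

class MultimodalFusionBlock(nn.Module):
    """Fuses one sliding-window subtrajectory: LSTM for the 1D track sequence,
    3D CNN for the gridded (e.g. meteorological field) modality."""
    def __init__(self, seq_feat=4, hidden=64, grid_channels=1):
        super().__init__()
        self.lstm = nn.LSTM(seq_feat, hidden, batch_first=True)
        self.cnn3d = nn.Sequential(
            nn.Conv3d(grid_channels, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(hidden + 8, hidden)

    def forward(self, seq, grid):
        # seq:  (batch, window_len, seq_feat)
        # grid: (batch, channels, window_len, height, width)
        _, (h, _) = self.lstm(seq)
        g = self.cnn3d(grid).flatten(1)
        return self.proj(torch.cat([h[-1], g], dim=-1))

class TrjDMFMG(nn.Module):
    """Deep multimodal fusion over several windows, followed by an LSTM-based
    multitask head that predicts longitude and latitude at the same time."""
    def __init__(self, n_windows=3, hidden=64):
        super().__init__()
        self.fusers = nn.ModuleList(
            [MultimodalFusionBlock(hidden=hidden) for _ in range(n_windows)]
        )
        self.deep_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.lon_head = nn.Linear(hidden, 1)
        self.lat_head = nn.Linear(hidden, 1)

    def forward(self, windows):
        # windows: list of (seq, grid) pairs, one per sliding-window subtrajectory
        fused = torch.stack(
            [f(s, g) for f, (s, g) in zip(self.fusers, windows)], dim=1
        )
        _, (h, _) = self.deep_lstm(fused)  # deep fusion across the windows
        return self.lon_head(h[-1]), self.lat_head(h[-1])
```

The two output heads share the deep fusion features, which is one common way to realize simultaneous longitude and latitude prediction as a multitask objective; the paper may combine or weight the two losses differently.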
