Addressing the complexity of multi-task trajectory prediction, this study introduces a novel Deep Multimodal Network (DMN) that integrates a shared feature extractor and a multi-task prediction module with translational encoders to capture both intra-modal and inter-modal dependencies. Unlike traditional models that focus on single-task forecasting, the DMN efficiently and simultaneously predicts multiple trajectory outputs: locations, travel times, and transportation modes. Evaluated on a real-world dataset against baseline models including LSTM and Seq2Seq, the DMN demonstrates superior performance, reducing location prediction error by 67% and travel time error by 69% while achieving 91.44% accuracy in travel mode prediction. Statistical tests confirm the significance of these improvements. Ablation studies further validate the critical role of modeling complex dependencies, highlighting the potential of the DMN to advance intelligent and sustainable transportation systems.
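To make the shared-extractor, multi-head structure concrete, the following is a minimal PyTorch sketch of such a multi-task trajectory model. It is not the authors' actual DMN: the module choices (a GRU as the shared extractor, linear task heads), all dimensions, and the equal-weight loss combination are illustrative assumptions, and the translational encoders and dependency-modeling components of the paper are omitted.

```python
import torch
import torch.nn as nn

class MultiTaskTrajectorySketch(nn.Module):
    """Illustrative multi-task network: one shared feature extractor feeding
    three task-specific heads (location, travel time, transport mode).
    All names and sizes are hypothetical, not the paper's architecture."""

    def __init__(self, input_dim=16, hidden_dim=64, num_modes=5):
        super().__init__()
        # Shared feature extractor over the input trajectory sequence.
        self.shared = nn.GRU(input_dim, hidden_dim, batch_first=True)
        # Task-specific prediction heads.
        self.loc_head = nn.Linear(hidden_dim, 2)           # next (lat, lon)
        self.time_head = nn.Linear(hidden_dim, 1)          # travel time
        self.mode_head = nn.Linear(hidden_dim, num_modes)  # mode logits

    def forward(self, x):
        _, h = self.shared(x)   # h: (num_layers, batch, hidden_dim)
        h = h.squeeze(0)
        return self.loc_head(h), self.time_head(h), self.mode_head(h)

# Joint training combines the per-task losses into a single objective
# (equal weighting here is an assumption, not the paper's scheme).
model = MultiTaskTrajectorySketch()
x = torch.randn(8, 20, 16)        # batch of 8 trajectories, 20 steps each
loc_true = torch.randn(8, 2)
time_true = torch.randn(8, 1)
mode_true = torch.randint(0, 5, (8,))

loc_pred, time_pred, mode_pred = model(x)
loss = (nn.functional.mse_loss(loc_pred, loc_true)
        + nn.functional.mse_loss(time_pred, time_true)
        + nn.functional.cross_entropy(mode_pred, mode_true))
loss.backward()
```

The design point the sketch illustrates is that all three tasks backpropagate through the same shared extractor, so representations learned for one task (e.g., mode classification) can benefit the others, which is the motivation for joint rather than single-task forecasting.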