Explaining legal judgments is crucial for transparency, fairness, and trustworthiness, as explanations provide the rationales behind decision-making. While previous work has focused on improving the accuracy of legal judgment prediction, the lack of explainability seriously limits the practical application of these methods. Many researchers have devoted effort to extracting or generating rationales as explanations for legal judgments, but have largely ignored the factual consistency of these rationales. Inconsistencies between rationales and fact descriptions severely limit their applicability. To address these issues, we investigate the Event Chain, an ordered sequence of events related to the criminal behavior, as an intermediate representation of the fact description, to better capture causal relationships among events and to focus on the events crucial to the judgment. Specifically, we propose a multi-task learning approach, dubbed LegalMind, that introduces the Event Chain as an auxiliary task and jointly models the Event Chain and the rationale in a unified decoder, improving factual consistency in rationale generation. Experimental results show that our model outperforms state-of-the-art methods, achieving improvements of 7.65% and 6.65% in AVG-BLEU over BART-C3VG on the CJO and LAIC2021 datasets, respectively. Furthermore, compared to BART-C3VG, our model improves factual consistency by 6.17% and 8.33% on the CJO and LAIC2021 datasets, respectively.
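The multi-task setup described above can be sketched, at a high level, as a weighted combination of the two generation losses produced by the shared decoder. This is a minimal illustrative sketch only: the function and parameter names (`multitask_loss`, `alpha`) are hypothetical, and the abstract does not specify LegalMind's actual loss formulation or weighting.

```python
def multitask_loss(rationale_loss: float, event_chain_loss: float,
                   alpha: float = 0.5) -> float:
    """Sketch of a joint objective for a unified decoder trained on
    two tasks: rationale generation (main) and event-chain generation
    (auxiliary). `alpha` is a hypothetical weight on the auxiliary
    task; the paper's actual formulation may differ."""
    return rationale_loss + alpha * event_chain_loss

# Example: per-task cross-entropy losses of 2.0 each with alpha = 0.5
# combine to a total training loss of 3.0.
total = multitask_loss(2.0, 2.0, alpha=0.5)
print(total)  # 3.0
```

In this kind of scheme, gradients from both tasks flow through the shared decoder, which is how the auxiliary event-chain task can regularize rationale generation toward the facts.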