In deep-learning-based spacecraft anomaly detection, a key difficulty is the insufficient extraction of temporal dependencies from telemetry data, which makes it hard to determine whether a change in the data distribution reflects a substantive anomaly or merely model underfitting. To address this issue, we design a Temporal Dependency Extraction Enhanced Autoencoder for multi-scale learning of telemetry data. First, the model incorporates Multi-Scale Temporal Dependency Extraction blocks, which integrate self-attention, autoregressive, and feed-forward networks to systematically capture the long-term dependencies, historical information, and complex patterns in telemetry data. Built on these blocks, the model reconstructs telemetry data accurately while remaining computationally efficient. Furthermore, we quantify anomalies with a metric based on the smoothed Manhattan distance and set anomaly thresholds with the Drift Streaming Peaks-over-Threshold strategy, yielding a comprehensive and precise anomaly-alerting framework. Finally, we validate our approach on a dataset from the Attitude Control System of a Geostationary Earth Orbit satellite. The experimental results show that our method not only detects anomalies earlier than traditional methods but also provides an in-depth quantitative analysis of anomaly characteristics.
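The anomaly-scoring side of this pipeline can be sketched as follows. This is an illustrative approximation only: the function names and the smoothing factor `alpha` are our own assumptions, and a static high-quantile threshold is used as a simplified stand-in for the Drift Streaming Peaks-over-Threshold strategy, which in practice fits a Generalized Pareto tail and adapts the threshold as the stream drifts.

```python
def smoothed_manhattan_score(x, x_hat, alpha=0.2):
    """Per-timestep anomaly score: Manhattan (L1) distance between each
    telemetry vector and its autoencoder reconstruction, smoothed with an
    exponential moving average. `alpha` is an assumed smoothing factor,
    not a value taken from the paper."""
    scores, s = [], None
    for xt, xh in zip(x, x_hat):
        d = sum(abs(a - b) for a, b in zip(xt, xh))  # Manhattan distance
        s = d if s is None else alpha * d + (1 - alpha) * s  # EWMA smoothing
        scores.append(s)
    return scores


def quantile_threshold(scores, q=0.99):
    """Simplified, static stand-in for DSPOT thresholding: alert when a
    score exceeds a high empirical quantile of a calibration window of
    scores. The quantile level `q` is an assumed setting."""
    s = sorted(scores)
    k = min(len(s) - 1, int(q * len(s)))
    return s[k]
```

In use, scores would be computed over a nominal calibration segment to set the threshold, and subsequent telemetry whose smoothed score exceeds it would raise an alert.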