Satellite computing is an emerging computational paradigm in the development of low Earth orbit (LEO) satellites. It aims to extend LEO satellites beyond their current role as transparent relays by enabling real-time onboard processing, thereby providing low-latency computational services to end users. In LEO constellations, large numbers of computationally capable satellites are deployed to offer enhanced computational resources. However, because the coverage areas of neighboring satellites overlap and each satellite has limited energy and computational resources, allocating terminal services to the most suitable satellite is challenging. The satellite service allocation (SSA) problem is NP-hard; nevertheless, since the quality of an allocation can be evaluated from its outcome, deep reinforcement learning (DRL) can be applied to learn improved solutions and partially address the SSA challenge. In this paper, we introduce a satellite computing capability model to quantify satellite computational resources. We then propose a DRL model that accounts for service demands and computational resources, resolves service allocation conflicts, and places each service on an appropriate server. Numerical results from simulation experiments demonstrate that the proposed method outperforms baseline approaches in service allocation and satellite resource utilization.
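To illustrate how SSA can be cast as a sequential decision problem amenable to DRL, the sketch below defines a toy environment in which each arriving service must be assigned to one of several visible satellites with limited compute and energy. All quantities (normalized capacities, demand ranges, reward shaping) and the greedy stand-in policy are illustrative assumptions, not the model proposed in the paper; a DQN or policy-gradient agent would be trained on the same (observation, action, reward) interface.

```python
# Minimal sketch of SSA framed as a DRL environment.
# Capacities, demands, and rewards are assumed values for illustration only.
import numpy as np

class SSAEnv:
    """Toy episodic environment: allocate arriving services to visible satellites."""

    def __init__(self, n_sats=5, n_services=20, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_sats = n_sats
        self.n_services = n_services

    def reset(self):
        # Remaining compute and energy per satellite (normalized, assumed units).
        self.compute = np.full(self.n_sats, 1.0)
        self.energy = np.full(self.n_sats, 1.0)
        self.t = 0
        return self._obs()

    def _obs(self):
        # Demand of the next service plus the resource state of every satellite.
        self.demand = self.rng.uniform(0.05, 0.2)
        return np.concatenate(([self.demand], self.compute, self.energy))

    def step(self, action):
        # Action: index of the satellite chosen to host the current service.
        feasible = (self.compute[action] >= self.demand and
                    self.energy[action] >= self.demand)
        if feasible:
            self.compute[action] -= self.demand
            self.energy[action] -= self.demand
            # Reward favors accepting the service on a lightly loaded satellite.
            reward = 1.0 + self.compute[action]
        else:
            reward = -1.0  # allocation conflict / rejected service
        self.t += 1
        done = self.t >= self.n_services
        return self._obs(), reward, done

# A learned policy would replace this greedy heuristic.
env = SSAEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    a = int(np.argmax(obs[1:1 + env.n_sats]))  # pick satellite with most compute left
    obs, r, done = env.step(a)
    total += r
print("episode reward:", round(total, 2))
```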