Recommendation systems play a pivotal role in improving product competitiveness. Traditional recommendation models predominantly rely on centralized feature processing, which leads to excessive resource consumption and low concurrency in real-time recommendation. This paper introduces a deep-learning-based recommendation model that incorporates edge computing and knowledge distillation to address these challenges. Recognizing that the accuracy of deep learning algorithms is closely tied to their complexity, the model employs knowledge distillation to compress the deep learning network. Teacher and student models were first constructed in the cloud, with the structurally complex teacher model built around passenger and product characteristics. The knowledge acquired by the teacher was then transferred to a student model with a simpler structure and weaker learning capacity, thereby compressing and accelerating the intelligent ranking model. The student model was subsequently partitioned, and part of its computation was offloaded to end devices in line with edge computing principles; this collaboration between the cloud and end devices realizes intelligent ranking of product listings. Finally, a random sample of passengers' travel records from the last five years was used to evaluate the accuracy and performance of the proposed model and to validate the intelligent ranking of remaining tickets. The results indicate that, on the one hand, the intelligent recommendation system based on knowledge distillation and edge computing meets the concurrency and timeliness requirements of existing remaining-ticket queries; on the other hand, it maintains a satisfactory level of accuracy while reducing the computing resource consumption and traffic load on the cloud, demonstrating its potential applicability in highly concurrent recommendation service scenarios.
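
To make the teacher-to-student transfer described above concrete, the following is a minimal PyTorch sketch of standard soft-target knowledge distillation. The network sizes, temperature, and loss weighting here are illustrative assumptions, not the paper's actual ranking architecture or configuration.

```python
# Minimal knowledge-distillation sketch (sizes and hyperparameters are illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TeacherRanker(nn.Module):
    """Structurally complex model trained in the cloud on passenger/product features."""
    def __init__(self, in_dim=64, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, num_classes),
        )
    def forward(self, x):
        return self.net(x)

class StudentRanker(nn.Module):
    """Simpler model whose layers could later be split between cloud and end devices."""
    def __init__(self, in_dim=64, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )
    def forward(self, x):
        return self.net(x)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target KL term (knowledge transfer) plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy training step on random data, standing in for passenger/product feature vectors.
teacher, student = TeacherRanker(), StudentRanker()
teacher.eval()  # the teacher is assumed to be already trained in the cloud
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(32, 64)               # batch of feature vectors
labels = torch.randint(0, 10, (32,))  # ranking targets

with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

After training, the smaller student network is the candidate for partitioning, with its early layers executed on end devices and the remainder in the cloud, as outlined in the abstract.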