Federated learning (FL) has emerged as a pivotal paradigm for training machine learning models across decentralized devices while preserving data privacy. In the healthcare domain, FL enables collaborative training among diverse medical devices and institutions, improving model robustness and generalizability without compromising patient privacy. In this paper, we propose DPS-GAT, a novel approach that integrates graph attention networks (GATs) with differentially private client selection and resource allocation strategies in FL. Our methodology addresses the challenges of data heterogeneity and limited communication resources inherent in medical applications. By employing graph neural networks (GNNs), we capture the relational structure among clients, optimizing the selection process and ensuring efficient resource distribution. Differential privacy mechanisms are incorporated to safeguard sensitive information throughout training. Extensive experiments on the open Regensburg pediatric appendicitis dataset demonstrate that our approach outperforms traditional FL methods in terms of model accuracy, privacy preservation, and resource efficiency. The ability of DPS-GAT to maintain a high and stable number of selected clients across training rounds and differential privacy budgets has significant practical implications: it indicates that FL systems can achieve strong privacy guarantees without sacrificing client engagement or model performance. This balance is essential for real-world applications where both privacy and performance are paramount. This study suggests a promising direction for more secure and efficient FL in medical applications, which could improve patient care through enhanced predictive models and collaborative data utilization.
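
To make the high-level idea concrete, the snippet below is a minimal illustrative sketch (not the DPS-GAT implementation itself) of how a GAT-style attention layer could score clients on a client similarity graph, with Gaussian noise added before top-k selection to approximate differentially private client selection. The names `ClientGATScorer`, `dp_select_clients`, the feature/adjacency inputs, and the noise scale `sigma` are all assumptions introduced for illustration.

```python
# Illustrative sketch only: GAT-style client scoring + noisy top-k selection.
# Assumes a dense adjacency matrix `adj` with self-loops (adj[i, i] = 1).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClientGATScorer(nn.Module):
    """Scores each client with a single-head graph-attention layer."""

    def __init__(self, in_dim: int, hid_dim: int = 16):
        super().__init__()
        self.W = nn.Linear(in_dim, hid_dim, bias=False)   # shared projection
        self.a = nn.Linear(2 * hid_dim, 1, bias=False)    # attention MLP
        self.out = nn.Linear(hid_dim, 1)                  # per-client utility head

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.W(x)                                      # (N, hid)
        n = h.size(0)
        # Pairwise attention logits e_ij = LeakyReLU(a([h_i || h_j]))
        hi = h.unsqueeze(1).expand(n, n, -1)
        hj = h.unsqueeze(0).expand(n, n, -1)
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))         # restrict to graph edges
        alpha = torch.softmax(e, dim=-1)                   # attention weights
        h_agg = alpha @ h                                  # neighborhood aggregation
        return self.out(h_agg).squeeze(-1)                 # (N,) client utility scores


def dp_select_clients(scores: torch.Tensor, k: int,
                      sensitivity: float = 1.0, sigma: float = 1.0) -> torch.Tensor:
    """Gaussian-mechanism-style noisy top-k selection (illustrative only)."""
    noisy = scores + torch.randn_like(scores) * sensitivity * sigma
    return torch.topk(noisy, k).indices


# Example usage with a toy client graph (5 clients, 8-dim feature vectors):
# feats = torch.randn(5, 8); adj = torch.ones(5, 5)
# chosen = dp_select_clients(ClientGATScorer(in_dim=8)(feats, adj), k=3)
```

In this sketch, a larger `sigma` gives stronger (though here unquantified) privacy at the cost of noisier selection; the paper's actual mechanism, budget accounting, and resource allocation strategy are described in the methodology sections.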