Temporal action detection, a critical task in video activity understanding, is typically divided into two stages: proposal generation and proposal classification. However, most existing methods overlook information transfer among proposals during classification, treating each proposal in isolation, which hampers accurate label prediction. In this paper, we propose a novel method for inferring semantic relationships both within and between action proposals and for guiding the fusion of action proposal features accordingly. Building on this approach, we introduce the Proposal Semantic Relationship Graph Network (PSRGN), an end-to-end model that leverages intra-proposal semantic relationship graphs to extract cross-scale temporal context and an inter-proposal semantic relationship graph to incorporate complementary information from neighboring proposals, significantly improving proposal feature quality and overall detection performance. This is the first method to apply graph structure learning to temporal action detection, adaptively constructing the inter-proposal semantic relationship graph. Extensive experiments on two benchmark datasets demonstrate the effectiveness of our approach, which achieves state-of-the-art results. Code and results are available at http://github.com/Riiick2011/PSRGN.
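
As a rough illustration of the adaptive inter-proposal graph idea, the following PyTorch sketch learns a soft adjacency matrix from pairwise similarity of projected proposal features and fuses each proposal with its semantically related neighbors via one message-passing step. The module name, layer sizes, and update rule here are our own illustrative assumptions, not the PSRGN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveProposalGraph(nn.Module):
    """Minimal sketch of graph structure learning over proposal features.

    Learns a soft adjacency matrix from pairwise similarity of projected
    proposal embeddings, then aggregates neighbor features with a single
    message-passing step. All names and dimensions are assumptions for
    illustration, not the authors' implementation.
    """

    def __init__(self, feat_dim: int, embed_dim: int = 128):
        super().__init__()
        self.query = nn.Linear(feat_dim, embed_dim)  # projects proposals for similarity
        self.key = nn.Linear(feat_dim, embed_dim)
        self.update = nn.Linear(feat_dim, feat_dim)  # transforms aggregated messages

    def forward(self, proposals: torch.Tensor) -> torch.Tensor:
        # proposals: (N, feat_dim) -- one feature vector per action proposal
        q, k = self.query(proposals), self.key(proposals)
        # Learned adjacency: scaled dot-product similarity, row-normalized,
        # so each proposal attends to its most semantically related peers
        adj = F.softmax(q @ k.t() / q.shape[-1] ** 0.5, dim=-1)  # (N, N)
        # One round of message passing fuses each proposal with its
        # neighbors; a residual connection preserves the original feature
        messages = adj @ proposals                                # (N, feat_dim)
        return proposals + F.relu(self.update(messages))


# Usage: refine 100 proposal features of dimension 512
graph = AdaptiveProposalGraph(feat_dim=512)
refined = graph(torch.randn(100, 512))  # (100, 512)
```

Because the adjacency is computed from the features themselves rather than fixed by temporal overlap, the graph structure adapts to each video, which is the key property the abstract attributes to the inter-proposal semantic relationship graph.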