Existing methods for single image super-resolution (SISR) typically model the blur kernel as spatially invariant across the entire image and are therefore susceptible to the adverse effects of textureless patches. Achieving better results requires spatially adaptive estimation of the degradation kernel. We explore the synergy of joint global and local degradation modeling for spatially adaptive blind SISR. Our model, the spatially adaptive network for blind super-resolution (SASR), employs a simple encoder to estimate a global degradation representation and a decoder to extract local degradation representations. The two representations are fused with a cross-attention mechanism and applied through spatially adaptive filtering to enhance local image details. Specifically, SASR introduces two novel components: (1) non-local degradation modeling with contrastive learning to learn global and local degradation representations, and (2) a non-local spatially adaptive filtering module (SAFM) that combines the global degradation and spatial-detail factors to preserve and enhance local details. We demonstrate that SASR efficiently estimates degradation representations and handles multiple types of degradation. Through locally adaptive adjustments, the local representations avoid the detrimental effect of super-resolving the entire image with a single kernel. Extensive experiments demonstrate, both quantitatively and qualitatively, that SASR not only performs favorably for degradation estimation but also achieves state-of-the-art blind SISR performance compared with alternative approaches.
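To make the fusion and filtering steps concrete, the following is a minimal PyTorch sketch, not the authors' released implementation: the module names, channel width (64), and per-pixel 3x3 filter size are illustrative assumptions. It shows a global degradation vector attending over per-pixel local degradation features via cross-attention, and the fused representation predicting per-pixel filters that are applied as spatially adaptive filtering.

```python
# Illustrative sketch (assumed shapes and names) of cross-attention fusion of global
# and local degradation representations followed by spatially adaptive filtering.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionFusion(nn.Module):
    """Fuse a global degradation vector (B, C) with local degradation maps (B, C, H, W)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.q = nn.Linear(channels, channels)      # query from the global vector
        self.k = nn.Conv2d(channels, channels, 1)   # keys from local features
        self.v = nn.Conv2d(channels, channels, 1)   # values from local features
        self.scale = channels ** -0.5

    def forward(self, global_deg: torch.Tensor, local_deg: torch.Tensor) -> torch.Tensor:
        q = self.q(global_deg).unsqueeze(1)                    # (B, 1, C)
        k = self.k(local_deg).flatten(2).transpose(1, 2)       # (B, HW, C)
        v = self.v(local_deg).flatten(2).transpose(1, 2)       # (B, HW, C)
        attn = torch.softmax((q @ k.transpose(1, 2)) * self.scale, dim=-1)  # (B, 1, HW)
        context = attn @ v                                     # (B, 1, C)
        # Broadcast the attended global context onto every spatial position.
        return local_deg + context.transpose(1, 2).unsqueeze(-1)


class SpatiallyAdaptiveFilter(nn.Module):
    """Predict a kxk filter per pixel from the fused degradation and apply it to image features."""

    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        self.predict = nn.Conv2d(channels, kernel_size * kernel_size, 1)

    def forward(self, feat: torch.Tensor, fused_deg: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        k = self.kernel_size
        kernels = torch.softmax(self.predict(fused_deg), dim=1)   # (B, k*k, H, W)
        patches = F.unfold(feat, k, padding=k // 2)               # (B, C*k*k, H*W)
        patches = patches.view(b, c, k * k, h, w)
        # Each pixel gets its own filter: weighted sum over its kxk neighbourhood.
        return (patches * kernels.unsqueeze(1)).sum(dim=2)


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)        # image features
    local_deg = torch.randn(2, 64, 32, 32)   # per-pixel degradation features
    global_deg = torch.randn(2, 64)          # image-level degradation vector
    fused = CrossAttentionFusion(64)(global_deg, local_deg)
    out = SpatiallyAdaptiveFilter(64, 3)(feat, fused)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

In this sketch, the per-pixel filters depend on the fused degradation at each location, which is one way to realize the locally adaptive adjustments described above instead of applying a single kernel to the whole image.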