Hyperspectral (HS) pansharpening seeks to integrate low spatial resolution HS (LRHS) images with their corresponding panchromatic (PAN) images to produce high spatial resolution HS (HRHS) images. Traditional pansharpening convolutional neural networks (CNNs) directly map LRHS and PAN images into HRHS images under fixed network parameters, which imply static pansharpening rules. However, real-world HS data often exhibit spatial variations, so the pansharpening rules should intuitively be dynamic. To resolve this dilemma, in this paper we develop dynamic HS pansharpening CNNs. We first specify the concepts of dynamic pansharpening and static pansharpening. Then, we propose a learn-to-learn oriented pansharpening CNN paradigm, which aims to learn a how-to-learn rule that produces spatially adaptive pansharpening rules, and which comprises three stages: preliminary fusion, scene-sensitive modulation, and spectral reconstruction. Finally, following this paradigm, we design two groups of dynamic pansharpening CNNs (DyPNNs), i.e., internal-connection-based and external-connection-based. They involve various spatial modulations, including spatial affine transform (AT), spatial dynamic convolution (DC), or improved spatial attention (SA), and thus comprise six specific DyPNNs: IC-AT-DyPNN, IC-DC-DyPNN, IC-SA-DyPNN, EC-AT-DyPNN, EC-DC-DyPNN, and EC-SA-DyPNN. Experimental results on several HS datasets verify the effectiveness of the proposed DyPNNs in terms of both spatial reconstruction and spectral fidelity.
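To make the scene-sensitive modulation stage concrete, the following is a minimal NumPy sketch of a spatially varying affine transform (AT) modulation: fused features are rescaled and shifted per pixel and per channel, so the effective fusion rule changes across the image. The function name and the fixed `gamma`/`beta` maps are illustrative stand-ins; in the actual networks these maps would be predicted by learned, PAN-conditioned modules not shown here.

```python
import numpy as np

def spatial_affine_modulation(features, gamma, beta):
    """Spatially adaptive affine transform (AT) modulation.

    features : (C, H, W) fused feature maps from the preliminary fusion stage
    gamma    : (C, H, W) per-pixel, per-channel scale map
    beta     : (C, H, W) per-pixel, per-channel shift map

    In a real DyPNN, gamma and beta would be produced by a learned
    scene-conditioned predictor; here they are given directly.
    """
    return gamma * features + beta

# Toy example: constant maps stand in for learned modulation parameters.
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
gamma = np.full((4, 8, 8), 1.5)   # stand-in for a learned scale map
beta = np.zeros((4, 8, 8))        # stand-in for a learned shift map
out = spatial_affine_modulation(feat, gamma, beta)
print(out.shape)  # (4, 8, 8)
```

Because `gamma` and `beta` vary over spatial positions in general, each pixel is effectively processed by its own affine rule, which is one simple way to realize the spatially adaptive behavior the paradigm calls for.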