Short-term convective storm forecasting (i.e., nowcasting) relies mainly on weather radar, which can resolve the 3D structure of convective storms. With the rapid development of numerical models, modern systems can also produce 3D reanalysis data that provide atmospheric background information for convective storms. Current deep learning nowcasting models use only 2D radar images and often require massive historical datasets for training, yet collecting long-term radar data to train a new model may not be operationally feasible. How to establish a nowcasting model from only a small dataset has therefore become an important issue. In addition, existing models do not effectively exploit state-of-the-art model reanalysis data. To tackle these problems, this article develops a pixel-wise convolutional recurrent neural network (Pixel-CRN) for precipitation nowcasting. It has three key designs: (1) Through a concise pixel-wise sampling and oversampling technique, Pixel-CRN can be trained on only a small dataset. (2) In spatial learning, Pixel-CRN embeds a spatial convolution subnet into the recurrent unit, allowing it to ingest raw 3D radar and model reanalysis data so that valuable atmospheric background information can be learned to assist nowcasting. (3) In spatiotemporal learning, following the information bottleneck principle, Pixel-CRN builds a heterogeneous encoder-decoder structure that squeezes the multi-channel 3D input into a latent space and recurrently generates 30 and 60 min nowcasts. Experimental results show that, compared with existing deep learning nowcasting methods, Pixel-CRN provides skillful results with a rather small training dataset.
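To make the architectural idea concrete, the following is a minimal sketch (not the authors' released code) of a pixel-wise convolutional recurrent model: a 3D-convolutional spatial subnet embedded in a recurrent unit squeezes multi-channel volumetric radar/reanalysis patches into a latent state, which is then recurrently decoded into two lead times. All module names, dimensions, and hyperparameters (e.g., `PatchEncoder`, `latent_dim`, patch sizes) are illustrative assumptions, written in PyTorch.

```python
# Hypothetical sketch of the pixel-wise conv-recurrent idea; names and sizes are assumptions.
import torch
import torch.nn as nn


class PatchEncoder(nn.Module):
    """Spatial subnet: squeeze a (C, D, H, W) 3D patch into a latent vector."""

    def __init__(self, in_channels: int, latent_dim: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # information-bottleneck-style squeeze
        )
        self.proj = nn.Linear(32, latent_dim)

    def forward(self, x):              # x: (B, C, D, H, W)
        h = self.conv(x).flatten(1)    # (B, 32)
        return self.proj(h)            # (B, latent_dim)


class PixelCRNSketch(nn.Module):
    """Encoder-decoder recurrence over past volumes for one target pixel."""

    def __init__(self, in_channels: int = 4, latent_dim: int = 64, horizons: int = 2):
        super().__init__()
        self.encoder = PatchEncoder(in_channels, latent_dim)
        self.rnn = nn.GRUCell(latent_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, 1)   # rain intensity at the pixel
        self.horizons = horizons                  # e.g. 30 and 60 min leads

    def forward(self, seq):            # seq: (B, T, C, D, H, W) past observations
        b, t = seq.shape[:2]
        state = seq.new_zeros(b, self.encoder.proj.out_features)
        for i in range(t):             # spatiotemporal encoding of past frames
            state = self.rnn(self.encoder(seq[:, i]), state)
        outputs = []
        for _ in range(self.horizons): # recurrent decoding of future lead times
            state = self.rnn(torch.zeros_like(state), state)
            outputs.append(self.decoder(state))
        return torch.stack(outputs, dim=1)        # (B, horizons, 1)


if __name__ == "__main__":
    # Toy batch: 8 pixel-centred patches, 6 past frames, 4 channels,
    # 10 vertical levels, 9x9 horizontal neighbourhood.
    model = PixelCRNSketch()
    x = torch.randn(8, 6, 4, 10, 9, 9)
    print(model(x).shape)              # torch.Size([8, 2, 1])
```

Because each training sample is a single pixel-centred patch rather than a full radar mosaic, pixel-wise sampling (with oversampling of rainy pixels) can multiply the number of usable examples from a short observation record, which is the motivation for design (1) above.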