Multi-person pose estimation is a challenging vision task whose accuracy can be seriously degraded by keypoint scale variation. Existing heatmap-based approaches try to reduce this effect by optimizing the backbone architecture or the loss function, but the heatmap representation itself remains inaccurate across different keypoint scales. We present a scale-sensitive heatmap algorithm that generates reasonable spatial and contextual features for the network to predict more precise coordinates by systematically considering the standard deviation, truncated radius, and shape of the Gaussian kernels. Specifically, the algorithm comprises three parts: an inter-person heatmap, a limited-area heatmap, and a shape-aware heatmap. The inter-person heatmap allocates a different standard deviation to each human instance, computed proportionally with a keypoint-based method; the limited-area heatmap defines a truncated radius that limits the influence area of each Gaussian kernel; and the shape-aware heatmap modifies the Gaussian kernels of joints with elongated, ellipse-like shapes. Our scale-sensitive heatmap algorithm outperforms the baseline by a considerable margin on the COCO and CrowdPose benchmark datasets. The code and pretrained models are available at https://github.com/ducongju/Scale-sensitive-Heatmap.
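The three ideas above can be illustrated with a minimal sketch of a truncated, anisotropic Gaussian kernel renderer. This is not the paper's implementation (see the linked repository for that); the function name, the per-instance sigma scaling, and all parameter choices here are illustrative assumptions. An instance-dependent `sigma` stands in for the inter-person heatmap, the hard `radius` cutoff for the limited-area heatmap, and unequal `sigma_x`/`sigma_y` for the shape-aware (ellipse-like) heatmap:

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma_x, sigma_y, radius):
    """Render one keypoint heatmap (illustrative sketch, not the paper's code).

    - sigma_x, sigma_y: per-instance std devs; scale them with person size
      to mimic the inter-person heatmap, make them unequal to mimic the
      shape-aware (ellipse-like) kernel.
    - radius: hard truncation of the Gaussian's influence area, mimicking
      the limited-area heatmap.
    """
    h, w = shape
    heat = np.zeros((h, w), dtype=np.float32)
    cx, cy = center
    # Restrict computation to the bounding box of the truncated kernel.
    x0, x1 = max(0, cx - radius), min(w, cx + radius + 1)
    y0, y1 = max(0, cy - radius), min(h, cy + radius + 1)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    g = np.exp(-((xs - cx) ** 2 / (2 * sigma_x ** 2)
                 + (ys - cy) ** 2 / (2 * sigma_y ** 2)))
    # Zero out responses beyond the truncated radius (limited-area heatmap).
    g[(xs - cx) ** 2 + (ys - cy) ** 2 > radius ** 2] = 0.0
    heat[y0:y1, x0:x1] = g
    return heat
```

For example, a small person instance might use `gaussian_heatmap((64, 64), (32, 32), 1.5, 1.5, 4)` while a larger one uses a proportionally bigger sigma and radius; an elongated joint could use `sigma_x=3.0, sigma_y=1.5` to produce an ellipse-shaped kernel.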