Most infrared and visible image fusion methods are designed on the assumption that visible images contain rich scene information and more details, such as edges and textures, than infrared images, while infrared images provide prominent thermal target information. Under poor illumination conditions, however, most regions of a visible image are dark, may contain considerable noise, and lack detail information compared with the corresponding infrared image. As a result, the fused images produced by such methods suffer from information loss, low contrast, and inconspicuous targets. To address this problem, we propose a novel fusion method. First, an improved rolling guidance filter, named RF-RGF, is proposed to decompose the source images into small-scale detail, large-scale detail, and base layers. Second, for the fusion of the small-scale detail layers, a new rule based on a nonlinear function is proposed to transfer more texture information from source images captured under poor illumination to the fused image. For the fusion of the large-scale detail layers, a novel rule based on the weighted sum of support values (WSSV) is constructed to retain details effectively. For the fusion of the base layers, a rule based on the visual saliency map (VSM) is adopted to ensure high contrast and a good overall appearance of the fused image. Moreover, BIMEF and morphological bright and dark details (MBD) are used to further enhance the contrast and details of the fused image, making targets more conspicuous. Specifically, BIMEF is applied to enhance the visible image before decomposition, and the MBD, obtained by two selective rules based on morphological top-hat and bottom-hat transformations (MTB), are used to enhance the base layer. Experimental results show that the proposed method outperforms other methods, including several state-of-the-art ones, especially in artifact suppression, information retention, contrast improvement, and target enhancement.
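To make the three-layer decomposition and the MBD step concrete, the following is a minimal Python/OpenCV sketch. It is not the paper's RF-RGF or its two selective MBD rules: the standard rolling guidance filter from opencv-contrib-python (cv2.ximgproc.rollingGuidanceFilter) is used as a stand-in, and the function names, filter parameters, structuring-element size, and file names are illustrative assumptions.

```python
import cv2
import numpy as np

def three_layer_decompose(img, sigma_s_fine=2.0, sigma_s_coarse=6.0,
                          sigma_r=25.0, iters=4):
    """Sketch of a three-layer split (not the paper's RF-RGF): two
    rolling-guidance passes at increasing spatial scales yield
    small-scale detail, large-scale detail, and base layers."""
    img = img.astype(np.float32)
    smooth_fine = cv2.ximgproc.rollingGuidanceFilter(
        img, d=-1, sigmaColor=sigma_r, sigmaSpace=sigma_s_fine,
        numOfIterations=iters)
    smooth_coarse = cv2.ximgproc.rollingGuidanceFilter(
        img, d=-1, sigmaColor=sigma_r, sigmaSpace=sigma_s_coarse,
        numOfIterations=iters)
    small_detail = img - smooth_fine            # fine textures and edges
    large_detail = smooth_fine - smooth_coarse  # larger structures
    base = smooth_coarse                        # low-frequency base layer
    return small_detail, large_detail, base

def morphological_bright_dark(img, ksize=9):
    """Bright and dark details via morphological top-hat and bottom-hat
    transforms; the paper's selective combination rules are not reproduced."""
    img = img.astype(np.float32)
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    bright = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, se)   # bright details
    dark = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, se)   # dark details
    return bright, dark

if __name__ == "__main__":
    # Illustrative usage with hypothetical file names.
    vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
    ir = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)
    d_small_v, d_large_v, base_v = three_layer_decompose(vis)
    d_small_i, d_large_i, base_i = three_layer_decompose(ir)
    bright, dark = morphological_bright_dark(vis)
```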