This paper presents an image fusion method based on a new class of wavelet: a nonseparable wavelet with compact support, linear phase, orthogonality, and dilation matrix $\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$. We first construct a 6 x 6 nonseparable wavelet filter bank. Using these filters, the source images are decomposed into nonseparable wavelet pyramids. The following fusion algorithm is then proposed. For the low-frequency part, the fused coefficients are the average of the low-frequency subimages from the two sensors. For every high-frequency subimage at each decomposition level, the absolute value of each pixel is taken to form a new subimage, and the variance over a 3 x 3 window in the new subimage is computed as an activity measure. If the variance of a 3 x 3 window in one new subimage is greater than the variance of the corresponding 3 x 3 window in the other new subimage, the center pixel of that window is selected as the pixel value of the fused subimage. The fused image is then reconstructed from the fused coefficients. The performance of the method is evaluated using entropy, root mean square error, and peak-to-peak signal-to-noise ratio. The experimental results show that the method produces good visual quality. Because the nonseparable wavelet transform can extract more detail from the source images, all the features of the source images are preserved in the fused image, and the fused image carries more information from the sources.
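The fusion rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the nonseparable wavelet decomposition has already produced a pair of corresponding subimages as NumPy arrays, and the function names (`fuse_highfreq`, `fuse_lowfreq`) are hypothetical.

```python
import numpy as np

def fuse_lowfreq(low_a, low_b):
    # Low-frequency rule from the paper: average the two subimages.
    return (low_a + low_b) / 2.0

def fuse_highfreq(sub_a, sub_b, win=3):
    """Variance-based selection rule for one pair of corresponding
    high-frequency subimages (3 x 3 window, as in the paper)."""
    abs_a, abs_b = np.abs(sub_a), np.abs(sub_b)  # new subimages of |pixel|
    r = win // 2
    # Pad so every pixel has a full win x win neighborhood.
    pad_a = np.pad(abs_a, r, mode="edge")
    pad_b = np.pad(abs_b, r, mode="edge")
    fused = np.empty_like(sub_a)
    rows, cols = sub_a.shape
    for i in range(rows):
        for j in range(cols):
            var_a = pad_a[i:i + win, j:j + win].var()
            var_b = pad_b[i:i + win, j:j + win].var()
            # Keep the center pixel of the subimage whose local
            # variance (activity measure) is larger.
            fused[i, j] = sub_a[i, j] if var_a > var_b else sub_b[i, j]
    return fused
```

In a full pipeline these rules would be applied to the low-frequency subimage pair and to every high-frequency subimage pair at each level before the inverse transform reconstructs the fused image.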