Medical image fusion, which extracts and combines complementary information from multiple medical images into a single image, has many applications in healthcare. The pulse coupled neural network (PCNN) is widely applied to image fusion because of its efficient coupling of neighboring neurons, but it suffers from network complexity and manual parameter settings. This paper proposes a novel parameter adaptive unit-linking PCNN (PAULPCNN) model, whose parameters are derived automatically from the external stimulus and whose structure is simpler than that of the original PCNN. Multi-scale decomposition-based methods, particularly those based on the non-subsampled shearlet transform (NSST), are widely employed for medical image fusion because they efficiently separate spatial details at different scales. Motivated by the advantages of the PCNN and multi-scale decomposition, a hybrid medical image fusion method based on the PAULPCNN model is introduced in the NSST domain. It merges the salient complementary details of a gray-scale medical image and the corresponding pseudo-color image acquired with a different modality to produce a more informative image for computer-aided diagnosis. The proposed method first employs NSST to decompose each source image into one low-pass and several high-pass sub-bands. The high-pass sub-bands are combined using the firing times of the proposed PAULPCNN model, whereas a new distance-weighted regional energy-based rule is applied to construct the fused low-pass sub-band; this rule weights neighboring pixels according to their distance from the central pixel when estimating regional energy. Finally, inverse NSST is applied to the fused sub-bands to construct the fused image. The effectiveness of the proposed technique is demonstrated by comparison with eleven state-of-the-art methods on thirty gray-scale and pseudo-color brain medical image pairs covering mild Alzheimer’s disease, Huntington’s disease, motor neuron disease, sagittal-plane, coronal-plane, transaxial-plane, and glioma cases, with eight objective metrics used for quantitative assessment. Experimental results demonstrate that the proposed method is competitive with, and in several cases outperforms, the state-of-the-art methods, providing fused images with better visual quality and higher objective scores that are more suitable for specialists in diagnosing brain diseases.
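
To illustrate the distance-weighted regional energy idea for low-pass fusion, the following Python sketch computes, for each pixel, a regional energy in which neighbors are weighted by their distance from the window center, and then selects the source coefficient with the larger energy. The inverse-distance weighting and window size are assumptions for illustration, not the paper's exact formulation, and the NSST decomposition itself is not shown here.

```python
import numpy as np
from scipy.ndimage import convolve

def distance_weight_kernel(size=3):
    """Build a window whose weights decay with Euclidean distance from the
    central pixel (hypothetical 1/(1+d) weighting; the paper's exact
    weighting function may differ)."""
    c = size // 2
    yy, xx = np.mgrid[:size, :size]
    dist = np.sqrt((yy - c) ** 2 + (xx - c) ** 2)
    w = 1.0 / (1.0 + dist)          # closer neighbors contribute more
    return w / w.sum()

def fuse_lowpass(LA, LB, size=3):
    """Fuse two low-pass sub-bands by keeping, at each pixel, the
    coefficient with the larger distance-weighted regional energy."""
    w = distance_weight_kernel(size)
    EA = convolve(LA ** 2, w, mode="reflect")   # regional energy of sub-band A
    EB = convolve(LB ** 2, w, mode="reflect")   # regional energy of sub-band B
    return np.where(EA >= EB, LA, LB)
```

In this sketch the hard coefficient selection is only one possible choice; a weighted-average rule driven by the same energies would follow the same structure.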