Medical image fusion enhances diagnostic precision and facilitates clinical decision-making by integrating information from multiple medical imaging modalities. However, the field remains challenging because the fused image, whether produced by spatial- or transform-domain algorithms, may suffer from drawbacks such as low contrast, blurring, noise, and over-smoothing. Moreover, some existing methods are restricted to specific image datasets. To address these issues, the present work introduces a new multi-modal medical image fusion approach that exploits the complementary advantages of multiple transforms. The method employs an adaptive image decomposition tool, Hilbert vibration decomposition (HVD), which decomposes an image into components of different energy. After the source images are properly decomposed, the desirable features of the decomposed components are passed through a guided filter (GF) for edge preservation, and a Laplacian pyramid then integrates the filtered parts using the choose-max rule. Because HVD offers better spatial resolution and, unlike other transforms, does not depend on fixed cut-off frequencies, the subjective outputs of the method on different publicly available medical image datasets are clear and superior to 20 previously published state-of-the-art results. Moreover, the obtained values of different objective evaluation metrics, such as information entropy (IE): 7.6943, 5.9737, mean: 110.6453, 54.6346, standard deviation (SD): 85.5376, 61.8129, average gradient (AG): 109.2818, 64.6451, spatial frequency (SF): 0.1475, 0.1100, and edge metric (QHK/S): 0.5400, 0.6511, demonstrate that the method is competitive with existing approaches. The algorithm's running time of only 0.161244 s also indicates high computational efficiency.
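
For illustration, the following is a minimal sketch of the described pipeline, assuming two pre-registered grayscale source images. The hvd_decompose function is a hypothetical placeholder (the actual HVD step is not reproduced here), while OpenCV's guidedFilter and pyrDown/pyrUp stand in for the guided-filter and Laplacian-pyramid stages; radius and eps are illustrative parameter choices, not values from the paper.

```python
import cv2
import numpy as np

def hvd_decompose(img):
    """Hypothetical stand-in for Hilbert vibration decomposition: returns a
    list of components whose sum reconstructs the image (approximated here
    with Gaussian band-splitting, for illustration only)."""
    components, residual = [], img.astype(np.float32)
    for sigma in (8, 2):
        low = cv2.GaussianBlur(residual, (0, 0), sigma)
        components.append(residual - low)   # higher-energy detail component
        residual = low
    components.append(residual)             # low-energy base component
    return components

def laplacian_pyramid(img, levels=4):
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)                # band-pass (Laplacian) level
        cur = down
    pyr.append(cur)                         # coarsest level
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        cur = cv2.pyrUp(cur, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return cur

def fuse(img_a, img_b, radius=8, eps=0.01):
    # 1) decompose each source into energy components (stand-in HVD)
    comps_a, comps_b = hvd_decompose(img_a), hvd_decompose(img_b)
    # 2) guided filtering of each component for edge preservation,
    #    using the corresponding source image as the guide
    ga = [cv2.ximgproc.guidedFilter(img_a.astype(np.float32), c, radius, eps) for c in comps_a]
    gb = [cv2.ximgproc.guidedFilter(img_b.astype(np.float32), c, radius, eps) for c in comps_b]
    fa, fb = sum(ga), sum(gb)
    # 3) Laplacian-pyramid fusion with the choose-max rule at each level
    pa, pb = laplacian_pyramid(fa), laplacian_pyramid(fb)
    fused_pyr = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa, pb)]
    return np.clip(reconstruct(fused_pyr), 0, 255).astype(np.uint8)
```

Usage would follow the standard pattern of loading two co-registered modalities (e.g., CT and MRI slices of equal size) with cv2.imread in grayscale and calling fuse(img_a, img_b); the choose-max rule keeps, at every pyramid level, the coefficient with the larger magnitude so that salient structures from both modalities survive in the fused result.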