The infrared and visible image fusion (IVIF) task aims to generate high-resolution fused images with richer texture details and more prominent salient information, thereby enhancing visual understanding for downstream tasks. However, current IVIF techniques typically exploit complementary information from the source images at a single resolution, neglecting valuable information available at other resolutions. To effectively exploit multi-resolution semantics, we propose a novel framework called IMQFusion, which improves fusion performance through intra-modality multi-resolution preservation and inter-modality multi-resolution query aggregation. Specifically, our method introduces an implicit neural representation (INR) with a robust continuous mapping that vertically reduces the semantic gap between adjacent paired resolutions, avoiding reliance on simple upsampling. In addition, we employ a bidirectional guided strategy to horizontally facilitate resolution preservation. Moreover, to effectively integrate semantics across both modalities and resolutions, we develop an efficient aggregation strategy built on a query mechanism, thereby enhancing inter-modality interaction among multi-resolution features. Extensive experiments demonstrate that, compared with state-of-the-art IVIF methods, the fused images generated by IMQFusion achieve a superior visual balance between salient information and texture details, as well as higher quantitative metrics. Furthermore, IMQFusion shows the potential to improve object detection and can be extended to medical image fusion.
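
To make the query-based inter-modality aggregation idea concrete, the following minimal PyTorch sketch shows one plausible way learnable queries could attend over multi-resolution infrared and visible features; it is an illustrative assumption, not the actual IMQFusion architecture, and all module names, dimensions, and the use of nn.MultiheadAttention are hypothetical choices.

    # Illustrative sketch only: query-based aggregation of multi-resolution,
    # multi-modality features via standard cross-attention (assumed design).
    import torch
    import torch.nn as nn

    class QueryAggregation(nn.Module):
        def __init__(self, dim=64, num_queries=64, num_heads=4):
            super().__init__()
            # Learnable queries that pool semantics across modalities and resolutions.
            self.queries = nn.Parameter(torch.randn(num_queries, dim))
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.proj = nn.Linear(dim, dim)

        def forward(self, ir_feats, vis_feats):
            # ir_feats / vis_feats: lists of feature maps [B, C, H_i, W_i] at several resolutions.
            tokens = []
            for f in ir_feats + vis_feats:
                b, c, h, w = f.shape
                tokens.append(f.flatten(2).transpose(1, 2))   # [B, H_i*W_i, C]
            kv = torch.cat(tokens, dim=1)                     # one sequence over modalities and resolutions
            q = self.queries.unsqueeze(0).expand(kv.size(0), -1, -1)
            agg, _ = self.attn(q, kv, kv)                     # queries aggregate cross-modal semantics
            return self.proj(agg)                             # [B, num_queries, dim]

    # Usage sketch with dummy multi-resolution features.
    ir = [torch.randn(1, 64, 32, 32), torch.randn(1, 64, 16, 16)]
    vis = [torch.randn(1, 64, 32, 32), torch.randn(1, 64, 16, 16)]
    fused_tokens = QueryAggregation()(ir, vis)                # torch.Size([1, 64, 64])

Under these assumptions, the aggregated tokens would subsequently be decoded into the fused image; the sketch is intended only to clarify how a query mechanism can mediate interaction among features of different modalities and resolutions.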