Real-time fusion of images acquired from multiple sensors is important in fields such as the military and aviation, both to reduce the uncertainty in the acquired images and to provide wider temporal and spatial coverage. Current approaches to multi-sensor image fusion have high computational complexity and are difficult to implement in hardware. This paper presents a method based on two-scale decomposition and modified Frei-Chen operators for fusing images acquired from infrared and visible image sensors, along with its corresponding hardware implementation. The proposed method achieves improvements of 48%, 15%, and 100% in total edge transfer, structural similarity, and night-vision contrast, respectively, over the latest publications known to the authors. The corresponding hardware architecture, synthesized using the Xilinx tool, consumes 4% of the resources of a Virtex-4 field-programmable gate array (FPGA, xc4vlx200). The architecture sustains a throughput of one output per clock cycle and can process 30 high-definition frames per second. It is also analyzed using the Synopsys Design Vision tool with the 90-nm UMC standard complementary metal-oxide-semiconductor (CMOS) cell library, where it is found to consume 251.6 mW of power and occupy an area equivalent to 580K NAND2 gates. The low hardware resource requirement and the support for parallelism and pipelining make the proposed algorithm suitable for low-power, real-time image fusion applications.
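The abstract does not reproduce the fusion pipeline itself, so the following Python sketch is only a minimal illustration of the two ingredients it names: a two-scale (base/detail) decomposition and a Frei-Chen edge measure used to weight the detail layers. The function names (`two_scale_decompose`, `frei_chen_edge_strength`, `fuse`), the box-filter base layer, the filter sizes, and the edge-strength-weighted fusion rule are assumptions made for illustration, not the authors' exact method or its "modified" operators.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def two_scale_decompose(img, size=31):
    """Split an image into a base (low-frequency) layer via an
    averaging filter and a detail layer as the residual.
    The box filter and its size are illustrative assumptions."""
    base = uniform_filter(img, size=size)
    return base, img - base

_s2 = np.sqrt(2.0)
# The four classical Frei-Chen edge-subspace masks, normalized by 1/(2*sqrt(2)).
FREI_CHEN_EDGE = [np.array(m, dtype=float) / (2.0 * _s2) for m in (
    [[1, _s2, 1], [0, 0, 0], [-1, -_s2, -1]],
    [[1, 0, -1], [_s2, 0, -_s2], [1, 0, -1]],
    [[0, -1, _s2], [1, 0, -1], [-_s2, 1, 0]],
    [[_s2, -1, 0], [-1, 0, 1], [0, 1, -_s2]],
)]

def frei_chen_edge_strength(img, eps=1e-8):
    """Project each 3x3 neighborhood onto the edge subspace and
    normalize by the window energy; the result lies in [0, 1]."""
    edge_energy = sum(convolve(img, g) ** 2 for g in FREI_CHEN_EDGE)
    window_energy = uniform_filter(img ** 2, size=3) * 9.0  # sum of squares
    return np.sqrt(edge_energy / (window_energy + eps))

def fuse(ir, vis, base_size=31):
    """Fuse IR and visible images (assumed rule): average the base
    layers, and blend the detail layers in proportion to each
    sensor's local Frei-Chen edge strength."""
    b_ir, d_ir = two_scale_decompose(ir, base_size)
    b_vis, d_vis = two_scale_decompose(vis, base_size)
    w_ir = frei_chen_edge_strength(ir)
    w_vis = frei_chen_edge_strength(vis)
    w = w_ir / (w_ir + w_vis + 1e-8)
    return 0.5 * (b_ir + b_vis) + w * d_ir + (1.0 - w) * d_vis
```

Because every pixel's output depends only on a fixed local neighborhood, each stage maps naturally onto line buffers and a fully pipelined FPGA datapath, which is consistent with the one-output-per-clock throughput claimed above.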