In interactive product design engineering, optimizing Human-Computer Interaction (HCI) is central to enhancing user experience and system efficiency, and cognitive load theory holds particular prominence in this field. Efficient evaluation of cognitive load not only facilitates the optimization of resource management and task allocation but is also key to creating a fluent interactive experience. With advances in Artificial Intelligence (AI) and physiological measurement technologies, new methodologies have emerged for measuring and understanding cognitive load. This research introduces a system for HCI training and assessment that, through digital memory paradigms, induces and verifies varying levels of cognitive load with precision, thereby providing data-driven insights for HCI design. Specifically, this study constructs an AI-assisted multimodal cognitive-load assessment framework that integrates physiological data collected via Functional Near-Infrared Spectroscopy (fNIRS) and eye-tracking technologies. By extracting and analyzing 29 physiological features, including channel features, graph-theoretic features, and features of eyelid and iris movement, we propose a multimodal recognition approach for classifying cognitive load across individuals. The experimental results not only verify the validity of the method but also reveal changes in physiological patterns at different levels of cognitive load, in particular the significant potential of fNIRS features in band I. These findings suggest that cognitive load can be monitored and predicted with greater precision through physiological signals, enabling HCI designs that alleviate user strain, prevent information overload, and enhance the naturalness and intuitiveness of interaction.
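The multimodal pipeline described above (per-modality feature extraction, feature-level fusion into a 29-dimensional vector, and load-level classification) can be sketched as follows. This is a minimal illustration, not the paper's actual method: the split of the 29 features across modalities, the nearest-centroid classifier, and the synthetic data are all assumptions made for the example.

```python
import numpy as np

# Hypothetical split of the 29 features across modalities (the exact
# per-modality counts are assumptions for this sketch):
N_FNIRS = 21  # channel + graph-theoretic features (assumed)
N_EYE = 8     # eyelid and iris movement features (assumed)


def fuse_features(fnirs_feats, eye_feats):
    """Feature-level fusion: concatenate the per-modality feature
    vectors into one 29-dimensional multimodal vector."""
    return np.concatenate([fnirs_feats, eye_feats])


class NearestCentroidClassifier:
    """Minimal stand-in classifier: assigns a sample to the
    cognitive-load level whose training centroid is closest."""

    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.stack(
            [X[y == c].mean(axis=0) for c in self.labels_]
        )
        return self

    def predict(self, X):
        # Euclidean distance from every sample to every class centroid.
        d = np.linalg.norm(
            X[:, None, :] - self.centroids_[None, :, :], axis=2
        )
        return self.labels_[d.argmin(axis=1)]


# Synthetic demo: three cognitive-load levels (0=low, 1=medium, 2=high),
# each shifting the feature means; real data would come from fNIRS and
# eye-tracking recordings.
rng = np.random.default_rng(0)
X, y = [], []
for level in (0, 1, 2):
    for _ in range(30):
        fnirs = rng.normal(loc=level, scale=0.5, size=N_FNIRS)
        eye = rng.normal(loc=level, scale=0.5, size=N_EYE)
        X.append(fuse_features(fnirs, eye))
        y.append(level)
X, y = np.array(X), np.array(y)

clf = NearestCentroidClassifier().fit(X, y)
acc = (clf.predict(X) == y).mean()
print(f"training accuracy on synthetic data: {acc:.2f}")
```

In practice the classifier would be replaced by whatever recognition model the framework uses and evaluated on held-out participants, since cross-individual generalization is the stated goal; the sketch only shows the fusion-then-classify structure.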