With the proliferation of AI technology, machine learning has become a cornerstone of AI systems, enabling pattern recognition and decision-making through large-scale data analysis. It spans several learning paradigms, including supervised, unsupervised, and reinforcement learning, each essential to the advancement of artificial intelligence. However, developing AI demands substantial computational resources, and specialized chips are central to meeting this demand, particularly for compute-intensive tasks such as deep learning. Dedicated accelerators such as GPUs and TPUs substantially improve the performance of AI systems, despite challenges related to cost and market concentration. AI systems also consume significant amounts of power, especially when training large-scale models. To address these challenges, this paper reviews the existing literature on modeling techniques that improve the efficiency of machine learning and reduce its energy consumption, covering efficient algorithm design, hardware optimization, and spatial modeling. These approaches can effectively mitigate the resource constraints that arise in machine learning scenarios, thereby supporting the continued development and application of AI technology.