Human action intent recognition increasingly demands computational accuracy, real-time responsiveness, and lightweight models. Model selection, data filtering, and experimental design are three critical factors in human intention recognition research. However, the performance of machine learning algorithms can vary with factors such as sensor location, the number of sensors used, channel selection, and dimensional combinations. Moreover, collecting adequate and balanced data in such scenarios is challenging. To address these issues, we present a comparative analysis of 12 commonly used machine learning algorithms for human action intention recognition. The synthetic minority oversampling technique (SMOTE) is applied to supplement the insufficient and imbalanced data. Traversing all possible combinations would require 686 experiments, a daunting task in terms of both cost and efficiency. To tackle this challenge, we employ an orthogonal experimental design based on the quasi-level method. Our analysis indicates that LightGBM outperforms the other algorithms in recognizing eight human daily activities. Furthermore, we conduct a range and variance analysis for LightGBM based on a comprehensively balanced multi-metric orthogonal experiment across various sensor combinations and dimensions. This approach yields the optimal combinations of position, channel, and dimension for different numbers of sensors. Notably, our experimental design reduces the number of required experiments to only 49.
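As background for the oversampling step mentioned above, the sketch below illustrates the core idea of SMOTE: a synthetic minority sample is generated by linearly interpolating between a minority sample and one of its k nearest minority-class neighbours. This is a minimal, self-contained illustration of the technique, not the implementation or data used in the paper; the function name, the toy points, and the parameter values are all hypothetical.

```python
import random

def smote_sketch(minority, n_new, k=3, seed=0):
    """Minimal SMOTE sketch: create n_new synthetic samples by
    interpolating between a minority sample and one of its k
    nearest minority-class neighbours (Euclidean distance)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x within the minority class, excluding x itself
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Hypothetical 2-D minority-class points for illustration only
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_samples = smote_sketch(minority, n_new=4)
print(len(new_samples))  # 4 synthetic samples
```

Because each synthetic point is a convex combination of two existing minority samples, it always lies within the region spanned by the minority class, which is what makes SMOTE preferable to naive duplication for balancing scarce activity classes.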