Twin support vector machine (TSVM) is a contemporary machine learning technique for tackling classification and regression problems. However, TSVM cannot differentiate between support vectors and noise because it neglects the positional information of the input samples, and it is therefore sensitive to noise. Additionally, it fails to account for uncertainties associated with the data, which limits its generalization capacity. To address these drawbacks, we propose a novel fuzzy hyperplane based intuitionistic fuzzy twin proximal support vector machine. The first significant feature of the proposed approach is that it assigns each data vector an intuitionistic fuzzy number that reflects its relevance. This efficiently reduces the impact of noise and outliers by incorporating membership and non-membership weights derived from the local neighborhood information of the data points. Second, all parameters of the model, including the offset term and the components of the normal vector, are fuzzy variables. The proposed fuzzy hyperplane captures the inherent ambiguity prevalent in real-world classification problems by representing the vagueness of the input data through fuzzy variables. The model's efficiency is enhanced by solving two systems of linear equations to obtain two non-parallel classifiers, rather than solving two quadratic programming problems as in standard TSVM. Utilizing non-linear kernel functions in the feature space enables the method to identify complex patterns and non-linear relationships within the datasets. To demonstrate the effectiveness of the proposed approach, extensive computational experiments have been conducted on eighteen benchmark datasets with both linear and non-linear kernels. In addition, rigorous statistical analysis, including the Friedman test and the post-hoc Nemenyi test, has been employed to assess the significance of the observed performance differences.
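The intuitionistic fuzzy weighting idea above can be illustrated with a minimal sketch. The formulas below (distance-to-centre membership, a non-membership term scaled by the fraction of opposite-class nearest neighbours, and a combined score) are a common scheme from the intuitionistic fuzzy SVM literature; they are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def intuitionistic_fuzzy_scores(X, y, k=3, delta=1e-4):
    """Illustrative intuitionistic fuzzy weighting (assumed scheme).
    Membership decays with the distance to the sample's own class
    centre; non-membership grows with the fraction of opposite-class
    points among the k nearest neighbours, so isolated noisy points
    receive a low overall weight."""
    scores = np.zeros(len(X))
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        centre = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centre, axis=1)
        mu = 1.0 - d / (d.max() + delta)            # membership degrees
        for j, i in enumerate(idx):
            dists = np.linalg.norm(X - X[i], axis=1)
            nn = np.argsort(dists)[1:k + 1]         # k nearest neighbours (self excluded)
            rho = np.mean(y[nn] != cls)             # opposite-class neighbour ratio
            nu = (1.0 - mu[j]) * rho                # non-membership degree
            # combine membership and non-membership into one sample weight
            if nu == 0.0:
                scores[i] = mu[j]
            elif mu[j] <= nu:
                scores[i] = 0.0                     # treated as noise
            else:
                scores[i] = (1.0 - nu) / (2.0 - mu[j] - nu)
    return scores
```

In this sketch, a mislabeled point surrounded by opposite-class neighbours gets a score near zero, while interior points of a class keep scores close to their membership value.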
Furthermore, we performed numerical experiments with linear, Gaussian, and polynomial kernels to classify electroencephalogram (EEG) signals. The experimental outcomes are analyzed in terms of average accuracy, processing time, and F-measure. The results demonstrate that the proposed method outperforms existing methods and achieves better generalization.
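The abstract states that the two non-parallel classifiers are obtained from two systems of linear equations rather than two quadratic programs. A minimal sketch of that idea in the least-squares twin-SVM style follows; the formulation and the regularization parameters `c1`, `c2` are illustrative assumptions, not necessarily the paper's exact (fuzzy hyperplane) model.

```python
import numpy as np

def twin_proximal_planes(A, B, c1=1.0, c2=1.0):
    """Sketch of a least-squares twin SVM (assumed formulation):
    each non-parallel hyperplane z = [w; b] solves one linear system.
    A: class +1 samples (rows), B: class -1 samples (rows)."""
    e1 = np.ones(A.shape[0])
    e2 = np.ones(B.shape[0])
    E = np.hstack([A, e1[:, None]])   # augmented positive-class matrix [A  e]
    F = np.hstack([B, e2[:, None]])   # augmented negative-class matrix [B  e]
    # Plane 1: near class +1, pushed away from class -1.
    # Zero gradient of (1/2)||E z||^2 + (c1/2)||F z + e2||^2 gives:
    z1 = np.linalg.solve(E.T @ E / c1 + F.T @ F, -F.T @ e2)
    # Plane 2: near class -1, pushed away from class +1.
    # Zero gradient of (1/2)||F z||^2 + (c2/2)||E z - e1||^2 gives:
    z2 = np.linalg.solve(F.T @ F / c2 + E.T @ E, E.T @ e1)
    return (z1[:-1], z1[-1]), (z2[:-1], z2[-1])

def predict(x, planes):
    """Assign x to the class whose hyperplane is nearer."""
    (w1, b1), (w2, b2) = planes
    d1 = abs(x @ w1 + b1) / np.linalg.norm(w1)
    d2 = abs(x @ w2 + b2) / np.linalg.norm(w2)
    return 1 if d1 <= d2 else -1
```

Because each plane comes from a single `solve` call on a small positive-definite system, training cost is dominated by two matrix factorizations instead of two QP solves, which is the efficiency gain the abstract refers to.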