Feature selection has been studied extensively in the literature, as it plays a significant role in both supervised and unsupervised machine learning tasks. Because the bulk of the features in high-dimensional data sets may be irrelevant, feature selection is key to removing unimportant variables and improving prediction and data-analysis performance. Many existing feature selection methods, however, become ineffective on contemporary datasets, where the number of features grows relative to the sample size. This paper introduces a novel supervised feature selection method for regression problems, called maximal Clique with Interaction Screening (ISClique). The ISClique algorithm proceeds in two steps. First, a filter step selects relevant features from the initial feature space and examines the interactions between them, using a new coefficient based on Kendall's tau and partial Kendall's tau. Second, a maximal-clique strategy is applied as a wrapper to the screened set to construct candidate feature subsets, and the subset that minimizes prediction error is selected. The proposed method thus combines the advantages of graph theory with feature screening. Moreover, because the criteria underlying ISClique accommodate variable heterogeneity, the method is equally suitable for classification tasks. The proposed hybrid approach is evaluated on various simulation scenarios and real datasets, and the experimental findings demonstrate its advantages over comparable methods.
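To make the two-step pipeline concrete, the following is a minimal sketch of a filter-then-clique procedure. The paper's actual coefficient combining Kendall's tau and partial Kendall's tau is not specified here, so the classical partial Kendall's tau formula is used as a stand-in for the screening step; the function name isclique_sketch, the thresholds screen_thresh and edge_thresh, and the ordinary-least-squares wrapper scored by cross-validated mean squared error are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import networkx as nx
from scipy.stats import kendalltau
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def partial_kendall(tau_xy, tau_xz, tau_yz):
    """Classical partial Kendall's tau of (x, y) given z (a stand-in
    for the paper's coefficient, which is not reproduced here)."""
    denom = np.sqrt((1.0 - tau_xz**2) * (1.0 - tau_yz**2))
    return (tau_xy - tau_xz * tau_yz) / denom if denom > 0 else 0.0

def isclique_sketch(X, y, screen_thresh=0.1, edge_thresh=0.1, cv=5):
    """Illustrative filter + maximal-clique wrapper; thresholds are assumptions."""
    n, p = X.shape

    # Step 1 (filter): keep features whose |Kendall tau| with the response
    # exceeds the screening threshold.
    tau_y = np.array([kendalltau(X[:, j], y)[0] for j in range(p)])
    kept = [j for j in range(p) if abs(tau_y[j]) > screen_thresh]

    # Step 2 (wrapper): build an interaction graph on the screened features.
    # Here an edge means each feature stays associated with y after partialling
    # out the other, measured by the classical partial Kendall's tau.
    G = nx.Graph()
    G.add_nodes_from(kept)
    for i, a in enumerate(kept):
        for b in kept[i + 1:]:
            tau_ab = kendalltau(X[:, a], X[:, b])[0]
            p_a = partial_kendall(tau_y[a], tau_ab, tau_y[b])
            p_b = partial_kendall(tau_y[b], tau_ab, tau_y[a])
            if min(abs(p_a), abs(p_b)) > edge_thresh:
                G.add_edge(a, b)

    # Enumerate maximal cliques as candidate subsets and keep the one with
    # the lowest cross-validated prediction error.
    best_subset, best_err = list(kept), np.inf
    for clique in nx.find_cliques(G):
        err = -cross_val_score(LinearRegression(), X[:, clique], y,
                               scoring="neg_mean_squared_error", cv=cv).mean()
        if err < best_err:
            best_subset, best_err = clique, err
    return sorted(best_subset), best_err
```

Using networkx's find_cliques keeps the wrapper search tractable: only maximal cliques of the interaction graph are scored, rather than all subsets of the screened features.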