We propose a network-based method that combines feature selection and model-based inference into a single procedure, identifying irrelevant, redundant, and dependent features in the data. By breaking the data into fractions, the network method reduces the number of calculations required to estimate probabilities under different model assumptions. We prove that, given sufficiently large data, the probability estimates within the network detect non-informative features with probability one. The method's accuracy in detecting complex relations between features, selecting informative features, and classifying data sets of different dimensions is assessed in experiments on both synthetic and real data. The results compare favorably with those of well-known and powerful feature selection algorithms, and the network method is further shown to handle complex relations between features that are intractable for other algorithms.