Practitioners use feature importance to rank and eliminate weak predictors during model development in an effort to simplify models and improve generality. Unfortunately, they also routinely conflate such feature importance measures with feature impact, the isolated effect of an explanatory variable on the response variable. This can lead to real-world consequences when importance is inappropriately interpreted as impact in applications such as medicine and business. The dominant approach to computing feature importance is to interrogate a fitted model, which works well for feature selection but gives distorted measures of feature impact. For example, the same method applied to the same data set can yield different feature importances depending on the model, leading us to conclude that impact should be computed directly from the data. While there are nonparametric feature selection algorithms, they typically provide feature rankings rather than direct measures of impact or importance, and they often focus on single-variable associations with the response. In this paper, we provide mathematical definitions of feature impact and importance, derived from partial dependence curves, that operate directly on the data. We develop two methods, StratImpact and StratImp, that estimate feature impact and importance from partial dependence measures using stratification of the explanatory variables. We show that features ranked by these definitions are competitive with, and often better than, existing feature selection techniques. We validate our approach by comparing it with contemporary methods on three real data sets and a testbed of simulated data.
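To make the general idea concrete, the sketch below illustrates one plausible way to derive an impact-like score from the data alone: stratify observations so that the remaining explanatory variables are roughly constant, estimate how the response moves with the feature of interest within each stratum, accumulate those local slopes into a data-derived partial dependence curve, and summarize the magnitude of that curve. This is a minimal illustration under stated assumptions, not the paper's exact estimator; the function name `data_impact`, the use of decision-tree leaves as strata, and the mean-absolute-deviation summary are all illustrative choices.

```python
# A minimal sketch of deriving a feature-impact score from data via
# stratification of the other explanatory variables. Illustrative only;
# all names and design choices here are assumptions, not the paper's method.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def data_impact(X, y, j, min_samples_leaf=20):
    """Impact-like score for feature j computed directly from the data."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    others = np.delete(np.arange(X.shape[1]), j)

    # 1. Stratify: leaves of a tree fit on the *other* features group
    #    observations whose remaining features are similar.
    tree = DecisionTreeRegressor(min_samples_leaf=min_samples_leaf)
    tree.fit(X[:, others], y)
    leaves = tree.apply(X[:, others])

    # 2. Within each stratum, estimate the local slope of y versus x_j.
    xs, slopes = [], []
    for leaf in np.unique(leaves):
        idx = np.where(leaves == leaf)[0]
        xj = X[idx, j]
        if xj.max() - xj.min() < 1e-12:
            continue  # no variation in x_j inside this stratum
        slopes.append(np.polyfit(xj, y[idx], deg=1)[0])  # least-squares slope
        xs.append(xj.mean())
    if not slopes:
        return 0.0

    # 3. Integrate the slopes over x_j to obtain a partial-dependence-like
    #    curve, then summarize its magnitude as the impact score.
    order = np.argsort(xs)
    xs, slopes = np.array(xs)[order], np.array(slopes)[order]
    curve = np.concatenate([[0.0], np.cumsum(slopes[:-1] * np.diff(xs))])
    curve -= curve.mean()
    return float(np.mean(np.abs(curve)))  # crude area-under-|PD| proxy

# Illustrative usage: rank features by this data-derived score.
# impacts = [data_impact(X, y, j) for j in range(X.shape[1])]
```

A feature importance analogue could then be obtained by normalizing such impact scores across features, though the precise weighting (for example, by the density of the feature's values) is a design decision left to the full method.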