This paper analyzes several underlying behaviors of feature selection techniques. A bound relating sample size to dimensionality is derived and verified empirically. The "S-curve" relationship between test error and the amount of training data is shown not to hold for all feature selection techniques.