Feature selection is the process of choosing a subset of the available features or attributes of a dataset in order to make the construction of a predictive model more efficient and accurate. In most cases, attributes are selected sequentially. In this paper we propose a new filtering strategy that selects attributes compositely rather than sequentially. The advantage of this approach is that it allows a substantial number of features that are highly relevant to their classes, yet statistically insignificant, to participate in the learning process of the classifier. Results show that the new approach is promising and performs as well as the traditional one, with higher accuracy reached as the number of infrequent features increases. This approach is useful when infrequent features must be part of the predictive model, since their inclusion, in turn, reinforces the subjectivity of the decision made by the classifier.