The alarming rate of biodiversity loss has drawn the attention of the scientific community to developing quantitative methods for characterizing the state of life variety, ecosystems, and habitats. One way to address these aspects involves acoustically monitoring bird populations, given their relation to ecosystem health and status. Automated computational tools can be adopted to enhance the processing and interpretation of birdsongs. The performance and reliability of such systems depend heavily on the input features extracted from the acoustic recordings. Numerous acoustical features for characterizing birdsongs have been reported in the literature; however, the determination of the most relevant ones remains elusive. Moreover, the literature shows a marked focus on classification or detection performance, providing limited detail on the set of features that contribute to high performance. This study investigates the problem of discerning relevant acoustical features for the automatic classification of eight Colombian bird species. We adopt different audio signal processing techniques, namely temporal, spectral, cepstral, and chroma analyses, to construct a heterogeneous set of features. Feature selection is implemented using principal component analysis, ReliefF feature ranking, and genetic algorithms. By considering nearest neighbor classifiers, support vector machines, neural networks, and Bayesian classifiers, 49 distinct machine learning models are tested. Selection schemes are fine-tuned to maximize classification performance. Our results demonstrate that effective machine learning classification of bird species can be achieved by refining a heterogeneous set of features through feature selection. Consistently, our work identifies Mel-frequency cepstral coefficients, spectral decrease, and the spectral rolloff point as the most recurrent features across the various feature selection schemes. By incorporating these features in a heterogeneous subset totaling a minimum of 19 features, classification performance above 95% can be achieved using a nearest neighbor classifier with the Manhattan distance. These findings are particularly notable as they are derived from processing acoustic recordings corrupted by field noise. The outcomes of this study yield potential acoustical features suitable for the characterization of birdsongs. In addition, our methodological design may serve as a reference for extending the assessment of new features and machine learning models, thereby contributing to the development of effective and feasible conservation tools.
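
The following is a minimal sketch of the kind of pipeline the abstract describes, not the paper's actual implementation: it assumes librosa for feature extraction and scikit-learn for the nearest neighbor classifier with the Manhattan distance. The directory layout (`data/<species>/<recording>.wav`), the averaging of frame-level features per recording, the 70/30 split, and the reduced feature vector (13 MFCCs, mean rolloff, mean spectral decrease rather than the full set of 19 or more features) are illustrative assumptions; spectral decrease is computed manually here because librosa has no built-in for it.

```python
import glob

import librosa
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def spectral_decrease(S):
    """Spectral decrease per frame from a magnitude spectrogram S (bins x frames)."""
    k = np.arange(1, S.shape[0])[:, None]        # bin offsets 1..K-1
    num = (S[1:, :] - S[0:1, :]) / k             # (S_k - S_1) / (k - 1)
    den = np.sum(S[1:, :], axis=0) + 1e-10       # avoid division by zero
    return np.sum(num, axis=0) / den


def extract_features(path):
    """Per-recording feature vector: mean MFCCs, mean rolloff, mean spectral decrease."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
    dec = spectral_decrease(np.abs(librosa.stft(y)))
    return np.concatenate([mfcc.mean(axis=1), rolloff.mean(axis=1), [dec.mean()]])


# Hypothetical layout: one folder per species, e.g. data/<species>/<recording>.wav
paths = sorted(glob.glob("data/*/*.wav"))
X = np.array([extract_features(p) for p in paths])
y = np.array([p.split("/")[-2] for p in paths])  # species label taken from folder name

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Nearest neighbor classifier using the Manhattan (L1) distance
knn = KNeighborsClassifier(n_neighbors=1, metric="manhattan")
knn.fit(X_tr, y_tr)
print(f"accuracy: {knn.score(X_te, y_te):.3f}")
```

In practice, feature scaling and a feature selection stage (e.g. ReliefF ranking or a genetic algorithm, as studied in the paper) would sit between extraction and classification, since distance-based classifiers are sensitive to feature ranges and to irrelevant dimensions.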