A score-level fusion technique for a multimodal biometric system is proposed to construct a robust human identification system. Feature fusion can be implemented in several ways; in this paper, the match scores of the face and iris traits are fused and re-classified at the Equal Error Rate (EER) threshold to improve on the performance of the individual unimodal systems in recognizing 80 subjects (40 subjects per face-iris dataset). The multimodal classification results are compared and evaluated comprehensively using four competitive feature extraction methods: Principal Component Analysis (PCA), Fourier Descriptors (FDs), Gray Level Co-occurrence Matrix (GLCM), and Local Binary Pattern (LBP). In addition, the low-resolution MMU1 iris database is included in this work as a further challenge to system robustness. The GLCM and LBP methods reached a 100% accuracy rate on the combined ORL-CASIA-V1 datasets, while the PCA and GLCM methods achieved 100% on the low-quality combined ORL-MMU1 datasets. These results provide evidence that a multimodal biometric system can improve on the overall performance of its unimodal components. Moreover, GLCM outperforms all the other feature extraction methods, achieving the highest accuracy rate on both the ORL-CASIA-V1 and ORL-MMU1 combined datasets.
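The score-level fusion described above can be sketched in a minimal form: per-modality match scores are min-max normalized, combined by a weighted sum, and thresholded at the EER operating point. This is an illustrative sketch only, not the paper's exact pipeline; the equal fusion weight, the toy score distributions, and the helper names (`fuse_scores`, `eer_threshold`) are assumptions for demonstration.

```python
# Illustrative sketch of score-level fusion of face and iris match scores
# (assumed weights and toy data; NOT the authors' exact implementation).
import numpy as np

def min_max_normalize(scores):
    """Scale a full set of match scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(face_scores, iris_scores, w_face=0.5):
    """Weighted-sum fusion of two normalized score sets (w_face is an assumption)."""
    return (w_face * min_max_normalize(face_scores)
            + (1.0 - w_face) * min_max_normalize(iris_scores))

def eer_threshold(genuine, impostor):
    """Find the threshold where FAR is closest to FRR (the EER operating point)."""
    best_t, best_gap = 0.0, np.inf
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best_t = abs(far - frr), t
    return best_t

# Toy score distributions (higher score = stronger match).
rng = np.random.default_rng(0)
genuine_face, impostor_face = rng.normal(0.8, 0.1, 50), rng.normal(0.3, 0.1, 50)
genuine_iris, impostor_iris = rng.normal(0.7, 0.1, 50), rng.normal(0.2, 0.1, 50)

# Normalize over the pooled genuine + impostor scores of each modality,
# fuse, then split back into genuine and impostor fused scores.
face_scores = np.concatenate([genuine_face, impostor_face])
iris_scores = np.concatenate([genuine_iris, impostor_iris])
fused = fuse_scores(face_scores, iris_scores)
fused_genuine, fused_impostor = fused[:50], fused[50:]

t = eer_threshold(fused_genuine, fused_impostor)  # accept if fused score >= t
```

Note that normalization is applied over the pooled genuine and impostor scores of each modality; normalizing the two groups separately would destroy the separation between them.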