In this paper, we investigate the relative noise robustness of dynamic and static spectral features in speech recognition. We find that the dynamic cepstrum is more robust to additive noise than its static counterpart, and that this result is consistent across different types of noise and over a wide range of noise levels. To exploit this unequal robustness, we propose a simple yet effective strategy of exponentially weighting the likelihoods contributed by the static and dynamic features during decoding. The optimal weights are discriminatively trained on a small amount of development data. The method is evaluated on two speaker-independent, connected-digit databases, one in English (Aurora 2) and the other in Cantonese (CUDIGIT). Across various types of noise at different signal-to-noise ratios (SNRs), the average relative word error rate reductions attained with the discriminatively trained weights are 36.6% for Aurora 2 and 41.9% for CUDIGIT. Noticeable improvement is observed even in the presence of channel distortion. The proposed approach is attractive for practical applications because 1) no noise estimation is required, 2) no model adaptation is required, 3) only a minor modification of the decoding process is needed, and 4) only a few feature weights need to be trained.
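The core of the proposed strategy is exponential weighting of the per-stream likelihoods, which in the log domain reduces to a weighted sum of the static and dynamic log-likelihoods. The following is a minimal sketch of this combination; the function names, the two-hypothesis setup, and the specific weight and score values are illustrative assumptions, not the paper's actual models or trained weights.

```python
def weighted_score(ll_static, ll_dynamic, w_static, w_dynamic):
    """Exponentially weighted stream combination:
    L = L_static ** w_static * L_dynamic ** w_dynamic,
    computed as a weighted sum of log-likelihoods."""
    return w_static * ll_static + w_dynamic * ll_dynamic

# Hypothetical two-hypothesis comparison (made-up log-likelihoods).
# The noise-corrupted static stream favors hypothesis A, while the
# more robust dynamic stream favors hypothesis B.
ll_A = (-1.0, -5.0)  # (static, dynamic) log-likelihoods for hypothesis A
ll_B = (-4.0, -3.0)  # (static, dynamic) log-likelihoods for hypothesis B

# Equal weights: the unreliable static stream dominates and A wins.
equal_A = weighted_score(*ll_A, 0.5, 0.5)   # -3.0
equal_B = weighted_score(*ll_B, 0.5, 0.5)   # -3.5

# Downweighting the static stream (weights trained on development
# data in the paper; 0.3/0.7 here is an arbitrary example): B wins.
tuned_A = weighted_score(*ll_A, 0.3, 0.7)   # -3.8
tuned_B = weighted_score(*ll_B, 0.3, 0.7)   # -3.3
```

Because the weighting only rescales the log-likelihoods that the decoder already computes for each stream, it requires just a minor change to the scoring step, consistent with point 3) above.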