Previous work has shown that a supervised-learning algorithm that estimates the ideal binary mask (IBM) can improve sentence intelligibility in noise for hearing-impaired (HI) listeners from scores below 30% to above 80% [Healy et al., J. Acoust. Soc. Am. 134 (2013)]. The algorithm generates a binary mask by using a deep neural network to classify time-frequency units as speech-dominant or noise-dominant. In the current study, these results are extended to consonant recognition, to examine the specific speech cues responsible for the observed performance improvements. Consonant recognition in speech-shaped noise or babble was examined in normal-hearing and HI listeners under three conditions: unprocessed, noise removed via the IBM, and noise removed via the classification-based algorithm. The IBM produced substantial improvements, averaging up to 45 percentage points. The algorithm also produced sizeable gains, averaging up to 34 percentage points. An information-transmission analysis of cues associated with manner of articulation, place of articulation, and voicing indicated general similarity in the cues transmitted by the IBM versus the algorithm. However, important differences were observed, which may guide future refinement of the algorithm. [Work supported by NIH.]
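
For readers unfamiliar with the IBM, the sketch below is a minimal illustration of how such an oracle mask is conventionally computed: each time-frequency unit is labeled speech-dominant when its local SNR exceeds a criterion. The STFT front end and the −6 dB local criterion here are illustrative assumptions; the abstract does not specify the study's actual time-frequency decomposition or threshold.

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(speech, noise, fs, lc_db=-6.0, nperseg=512):
    """Compute an IBM from the premixed speech and noise signals.

    Each time-frequency (T-F) unit is labeled 1 (speech-dominant)
    when its local SNR exceeds the local criterion lc_db, else 0
    (noise-dominant). The STFT decomposition and lc_db value are
    illustrative assumptions, not the study's actual parameters.
    """
    _, _, S = stft(speech, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)

    # Local SNR per T-F unit, in dB (small floor avoids log of zero).
    eps = 1e-12
    snr_db = 10.0 * np.log10((np.abs(S) ** 2 + eps) /
                             (np.abs(N) ** 2 + eps))

    # Retain only units whose local SNR exceeds the criterion.
    return (snr_db > lc_db).astype(float)

def apply_mask(mixture, mask, fs, nperseg=512):
    """Apply a binary mask to the noisy mixture and resynthesize."""
    _, _, M = stft(mixture, fs=fs, nperseg=nperseg)
    _, enhanced = istft(M * mask, fs=fs, nperseg=nperseg)
    return enhanced
```

In contrast to this oracle mask, which requires access to the premixed speech and noise, the classification-based algorithm described in the abstract estimates the same speech-dominant/noise-dominant labels from the noisy mixture alone.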