Face masks impede visual and acoustic cues that help make speech processing and language comprehension more efficient. Many studies report this phenomenon, but few have examined how listeners use semantic information to overcome the challenges posed by face masks, and fewer still have investigated this impact on bilinguals' processing of face-masked speech [Smiljanic, Keerstock, Meemann, and Ransom (2021). J. Acoust. Soc. Am. 149(6), 4013-4023; Truong, Beck, and Weber (2021). J. Acoust. Soc. Am. 149(1), 142-144]. This study therefore aims to determine how monolingual and bilingual listeners use semantic information to compensate for the loss of visual and acoustic information when the speaker is wearing a mask. A lexical priming experiment tested how monolingual listeners and early simultaneous bilingual listeners responded to videos of English word pairs. The prime-target pairs were strongly related, weakly related, or unrelated, and both words in a pair were either masked or unmasked. Analyses of reaction times showed an overall effect of masking in both groups and an effect of semantic association strength on the processing of both masked and unmasked speech. However, the listener groups did not differ, and subsequent analyses of difference values showed no effect of semantic context. These results illustrate the limited role of word-level semantic information in processing speech under adverse listening conditions. Results are discussed in light of semantic processing at the sentence level.