Psycholinguistic research on children's early language environments has revealed many potential challenges for language acquisition. One is that, in many cases, referents of linguistic expressions are hard to identify without prior knowledge of the language. Likewise, the speech signal itself varies substantially in clarity, with some productions being very clear and others being phonetically reduced, even to the point of uninterpretability. In this study, we sought to better characterize the language-learning environment of American English-learning toddlers by testing how well phonetic clarity and referential clarity align in infant-directed speech. Using an existing Human Simulation Paradigm (HSP) corpus with referential transparency measurements, and adding new measures of phonetic clarity, we found that the phonetic clarity of words' first mentions significantly predicted referential clarity (how easy it was to guess the intended referent from visual information alone) at that moment. Thus, when parents' speech was especially clear, the referential semantics were also clearer. This suggests that young children could use the phonetics of speech to identify globally valuable instances that support better referential hypotheses, by homing in on clearer instances and filtering out less-clear ones. Such multimodal "gems" offer special opportunities for early word learning.

RESEARCH HIGHLIGHTS:
- In parent-infant interaction, parents' referential intentions are sometimes clear and sometimes unclear; likewise, parents' pronunciation is sometimes clear and sometimes quite difficult to understand.
- We find that clearer referential instances go along with clearer phonetic instances, more so than expected by chance.
- Thus, there are globally valuable instances ("gems") from which children could learn about words' pronunciations and words' meanings at the same time.
- Homing in on clear phonetic instances and filtering out less-clear ones would help children identify these multimodal "gems" during word learning.