Iterations of sentences were recorded audio-visually from talkers while they participated in a speech-tracking task. Six female talkers produced iterations of conversational and clear speech under two experimental conditions: (a) while the talker was informed that only her visual–speech cues would be transmitted to the interlocutor and (b) while she was informed that only her auditory–speech cues would be transmitted to the interlocutor. In reality, both her auditory– and her visual–speech cues were recorded under each experimental condition. Target sentences were extracted from the recordings, edited, and presented in random order to a group of 48 subjects. The subjects completed a speech-recognition task under two perceptual modalities: auditory-only and visual-only. The subjects’ mean speech-recognition scores were used to determine the speech intelligibility of individual talkers for each experimental condition. The results revealed no differences between the speech intelligibility scores obtained while a talker intended to produce iterations of visual-clear speech and those obtained while she intended to produce iterations of auditory-clear speech. Hence, the findings failed to demonstrate that talkers modify their articulation patterns to compensate for the perceptual modality through which the interlocutor receives the speech information. [Work supported by an NSERC grant awarded to J-PG.]