The current study compared temporal and spectral acoustic contrast between vowel segments produced by speakers with dysarthria across three speech tasks: interactive, solo habitual, and solo clear. Nine speakers with dysarthria secondary to amyotrophic lateral sclerosis participated in the study. Each speaker was paired with a typical interlocutor over videoconferencing software. The speakers produced the vowels /i, ɪ, ɛ, æ/ in /h/-vowel-/d/ words. For the solo tasks, speakers read the stimuli aloud in both their habitual and clear speaking styles. For the interactive task, speakers produced a target stimulus that their interlocutor had to select from among the four possibilities. We measured the duration difference between long and short vowels and the F1/F2 Euclidean distance between adjacent vowels, and we determined how well the vowels could be classified on the basis of their acoustic characteristics. Temporal contrast between long and short vowels was greater in the interactive task than in either solo task. Spectral distance between adjacent vowel pairs was also greater for some pairs in the interactive task than in the habitual speech task. Finally, vowel classification accuracy was highest in the interactive task. Overall, we found evidence that individuals with dysarthria produced vowels with greater acoustic contrast in structured interactions than in solo tasks. Furthermore, the speech adjustments they made to the vowel segments differed from those observed in solo speech.
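To make the spectral contrast measure concrete, the sketch below computes the F1/F2 Euclidean distance between adjacent vowels in the /i, ɪ, ɛ, æ/ continuum. This is an illustration only, not the authors' analysis code; the formant values are hypothetical placeholders, not data from the study.

```python
import math

def f1f2_distance(vowel_a, vowel_b):
    """Euclidean distance between two vowels in the F1/F2 plane (Hz).

    Each vowel is an (F1, F2) pair of formant frequencies.
    """
    return math.hypot(vowel_a[0] - vowel_b[0], vowel_a[1] - vowel_b[1])

# Hypothetical mean (F1, F2) values in Hz, for illustration only.
formants = {
    "i": (300, 2300),
    "ɪ": (400, 2000),
    "ɛ": (550, 1850),
    "æ": (700, 1700),
}

# Distances between adjacent vowel pairs along the continuum.
for a, b in [("i", "ɪ"), ("ɪ", "ɛ"), ("ɛ", "æ")]:
    print(f"/{a}/-/{b}/: {f1f2_distance(formants[a], formants[b]):.1f} Hz")
```

A larger distance indicates that the two vowels are farther apart in the acoustic vowel space, i.e., more spectrally contrastive.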