Amyotrophic lateral sclerosis (ALS) is a progressive, ultimately fatal disease that causes worsening muscular weakness. Most people living with ALS (plwALS) experience dysarthria and eventually become unable to communicate using natural speech, yet many wish to use speech for as long as possible. Personalized automated speech recognition (ASR) technology, such as Google's Project Relate, is argued to recognize speech with dysarthria better than generic models, supporting mutual understanding through real-time captioning. This study examines how plwALS and their communication partners use Relate in everyday conversation over periods of up to 12 months, and how that use changes with any decline in speech over time. We video-recorded interactions between three plwALS and their communication partners. We assessed the accuracy of the ASR captions and how well the captions preserved meaning. Conversation analysis was used to identify participants' own organizational practices in accomplishing interaction, and thematic analysis was used to better understand participants' experiences of using ASR captions. All plwALS reported lower-than-expected ASR accuracy in conversation and felt that ASR captioning was useful only in certain contexts. All participants liked the concept of live captioning and hoped that future improvements in ASR accuracy would support their communication in everyday life. Training is needed on best practices for customizing and using ASR technology, and on its limitations in conversational settings. Support is also needed for those less confident with technology, and to reduce the misattribution of captioning errors to the speaker, which risks negative effects on psychological well-being.