Automatic speech recognition (ASR) systems deployed at sporting events generate subtitles that help people follow the action in noisy environments or when they are hearing impaired. Signal quality can be degraded by crowd noise, emotional outbursts, and other sounds of play. Players' accents from around the world, combined with domain-specific terminology and slang, pose further challenges for ASR systems. Our custom online supervised learning method gives ASR systems a mechanism to adapt to sporting domains. In this paper, we present the results of a novel language model expansion for the 2016 U.S. Open Tennis Championships. Words in the corpora are expanded with hypernyms (which give a target word broader meaning) and hyponyms (which give it more specific meaning) of a synset, a group of words that are synonyms. Candidate words are then filtered on the basis of relevancy predictors. This contextual evidence gives our language model an "understanding" of tennis vocabulary, decreasing the word error rate by 5.41% and increasing word confidence by 6.71%. Subtitling corrections by human annotators provide online learning for the language model. Transcribed videos from previous U.S. Open tournaments and from YouTube are used to evaluate all experiments.
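The expansion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy lexicon, the example tennis entries, and the relevancy-threshold filter (`filter_by_relevancy`) are all assumptions standing in for a WordNet-style resource and the paper's actual relevancy predictors.

```python
# Toy stand-in for a WordNet-style lexicon: each entry maps a word to its
# synset (synonyms), hypernyms (broader terms), and hyponyms (narrower terms).
# The entries below are illustrative only, not drawn from the paper's corpora.
LEXICON = {
    "serve": {
        "synset": {"serve", "service"},
        "hypernyms": {"stroke"},
        "hyponyms": {"ace", "fault"},
    },
    "ace": {
        "synset": {"ace", "service_ace"},
        "hypernyms": {"serve", "point"},
        "hyponyms": set(),
    },
}

def expand_word(word, lexicon):
    """Collect candidate vocabulary for a target word: the members of its
    synset plus the lemmas of its hypernyms and hyponyms."""
    entry = lexicon.get(word)
    if entry is None:
        return set()
    candidates = set(entry["synset"]) | set(entry["hypernyms"]) | set(entry["hyponyms"])
    candidates.discard(word)  # do not re-add the target word itself
    return candidates

def filter_by_relevancy(candidates, score, threshold=0.5):
    """Keep only candidates whose relevancy score clears a threshold.
    `score` stands in for the paper's relevancy predictors (hypothetical)."""
    return {w for w in candidates if score(w) >= threshold}
```

With a real resource such as NLTK's WordNet interface, `expand_word` would instead iterate over `wordnet.synsets(word)` and each synset's `hypernyms()` and `hyponyms()`; the filtering step would then prune candidates that are irrelevant to the tennis domain before they enter the language model.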