AI-mediated communication (AI-MC) mediates dialogue between people, most visibly by suggesting responses during digital conversations. Users see this when automated suggestions appear in text and email communications. This technology has the potential to operate at mass scale and eventually to compose entire conversations, possibly without either party knowing they are engaging in dialogue with AI. Because users adopt digital behaviors that promote efficiency, research must examine how this tool shapes social constructs. Understanding that gender bias exists both in the English language and in the development of AI leads to my inquiry into the role AI-MC plays in constructing social norms through language usage. I argue that the use of gender-biased language in AI predictive-text suggestions contributes to the social construction of gender in the creation, execution, and use of AI-MC. My research included searching for articles related to AI-MC, human-computer relationships, and historical concerns about bias in AI and language. Using the keywords gender, language, artificial intelligence, and AI-MC, I compiled research articles from the fields of human-computer learning, critical media studies, and psychology to conduct a discourse analysis. Informed by feminist STS scholarship on marginalization, gender, and language, and by the theoretical framework of social construction theory, I performed an ideological critique of the scholarship to articulate how language constructs reality, specifically gender, and to surface the underlying assumptions and consequences. Care is taken to recognize other underrepresented communities and cultural rhetorics in digital communication tools, adding nuance to how this approach can uncover social constructs and contribute to solutions that embrace diversity.
My argument also considers the impact this intervention could have during the formative years of youth, given the influence of language and symbols in forming identities and social practices. I discuss solutions such as gender-fair and gender-neutral language, while cautioning against solutions that still lean toward the masculine. AI-MC needs to be created with as little gendered terminology as possible, and with the automatic inclusion of gender-neutral terms whenever gendered terms are suggested (e.g., suggesting he, she, and they). Additionally, I recognize society's tendency to construct norms around “acceptable” language, for example through methods such as tone policing. I call for advancements that consider the impact of these actions on marginalized groups by developing technologies that respect diverse rhetorics. Embracing various forms of expression leads to a more authentic representation of communities and thus of society at large. Therefore, efforts to create AI-MC experiences that reinforce neither binary gender nor a single model of acceptable discourse are essential in future development of this tool and in fostering a more inclusive digital environment. The prominence of algorithms in our communicative experiences underscores the need for AI language design that embraces inclusivity and encourages positive relationship formation representative of all people, regardless of gender identification, sexual orientation, dis/ability, race, or other identities.
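The pronoun-inclusion proposal above (suggesting he, she, and they together) can be illustrated with a minimal sketch. The function name, data structure, and suggestion format here are hypothetical, not drawn from any real predictive-text system; this is only one way such a rule might be expressed.

```python
# Hypothetical sketch: when a predictive-text system suggests a gendered
# pronoun, automatically include the full set of pronoun options alongside
# it, as proposed above (e.g., suggesting he, she, and they together).

# Gendered pronouns mapped to an inclusive suggestion set (an illustrative
# subset; a real system would need to cover more forms and languages).
PRONOUN_SETS = {
    frozenset({"he", "she"}): ["he", "she", "they"],
    frozenset({"him", "her"}): ["him", "her", "them"],
    frozenset({"his", "hers"}): ["his", "hers", "theirs"],
}

def augment_suggestions(suggestions):
    """Expand any gendered-pronoun suggestion into a gender-inclusive set,
    preserving order and avoiding duplicates."""
    augmented = []
    for word in suggestions:
        replaced = False
        for gendered, inclusive in PRONOUN_SETS.items():
            if word.lower() in gendered:
                for option in inclusive:
                    if option not in augmented:
                        augmented.append(option)
                replaced = True
                break
        if not replaced and word not in augmented:
            augmented.append(word)
    return augmented

print(augment_suggestions(["he", "is"]))  # ['he', 'she', 'they', 'is']
```

The design choice here is deliberate: rather than replacing a gendered suggestion with a neutral one, the sketch presents all options side by side, leaving the choice of self-expression with the user instead of the system.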