Artificial Intelligence (AI) adoption has grown substantially across industries. This paper explores the escalating use of AI within the United States Department of Defense (DoD) and the implications of diversity, equity, and inclusion (DEI) for Service members and Civilians across the Department. More specifically, it examines DEI considerations within AI technologies as they affect individual, team, and Department readiness. The DoD's AI usage spans various strategic and operational capabilities; however, this paper focuses on two critical domains: healthcare and recruitment.

In healthcare, AI offers the promise of early disease detection, enhanced diagnostic capabilities, and streamlined administrative processes. However, biases stemming from homogeneous training data threaten the accuracy and reliability of these systems, jeopardizing Service member health and eroding trust in AI-assisted medical decision-making, and potentially in the DoD at large. In recruitment, while AI promises efficiency in identifying ideal candidates, its deployment can perpetuate biases, especially when the training data are not representative of all demographics. Despite efforts to design "unbiased" systems by excluding demographic data, such strategies may inadvertently overlook the unique challenges faced by marginalized communities, further entrenching existing disparities.

Both case studies underscore the importance of considering DEI in the development and deployment of AI systems. As the DoD continues to integrate AI into its operations, this paper's recommendations stress the necessity of continuous DEI assessment to ensure that AI serves as an asset rather than a liability. The authors recommend the following:

1. Data diversity and review
2. Continuous monitoring and calibration
3. Stakeholder engagement
4. Adoption of DEI requirements within Ethical AI Frameworks
5. Further research
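The point that excluding demographic fields does not guarantee an unbiased system can be illustrated with a minimal sketch. The example below is entirely hypothetical and uses synthetic data: a "blind" screening rule never sees the applicant's group, but it does use a feature (here labeled `region`) that happens to correlate with group membership, so selection rates still diverge between groups. The feature names, thresholds, and distributions are illustrative assumptions, not drawn from any DoD system.

```python
import random

random.seed(0)

def make_applicant(group):
    # Synthetic, hypothetical data: "region" is a proxy feature that
    # correlates with group membership; "score" is the legitimate
    # qualification signal, drawn identically for both groups.
    region = 1 if (group == "A") == (random.random() < 0.9) else 0
    score = random.gauss(60, 10)
    return {"group": group, "region": region, "score": score}

applicants = [make_applicant("A") for _ in range(500)] + \
             [make_applicant("B") for _ in range(500)]

def model_selects(applicant):
    # A demographically "blind" rule: it never reads the group field,
    # but it rewards the proxy feature, importing the correlation.
    return applicant["score"] + 10 * applicant["region"] > 65

def selection_rate(group):
    pool = [a for a in applicants if a["group"] == group]
    return sum(model_selects(a) for a in pool) / len(pool)

# Although the rule is group-blind, the proxy drives a gap in outcomes.
print(f"Group A selection rate: {selection_rate('A'):.2f}")
print(f"Group B selection rate: {selection_rate('B'):.2f}")
```

Auditing selection rates per subgroup, as the last two lines do, is one concrete form the recommended "continuous monitoring and calibration" can take: the disparity is only visible when outcomes are disaggregated by demographic group, which is why removing demographic data from the pipeline can hide bias rather than eliminate it.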