In recent years, a wave of artificial intelligence (AI) technologies has emerged that promises to solve problems ranging from shopping habits to mortgage approvals to critical systems operations. The rapid development of these systems has generated both excitement and apprehension about the roles they should play in modern society. This paper focuses on the critical infrastructure industry in general, and nuclear power generation in particular, and examines how these novel technologies can be leveraged in human-centered ways to maintain or enhance the high levels of reliability and resilience established in these industries. First, we discuss the broader aspects of cognitive systems and activities that are critical to understanding the human-AI space. We then explore different approaches to explainability in AI and notions of trust. Next, we discuss several human factors concepts and methods and how they can support the design of human-AI teams. We then review recent research related to nuclear power and evaluate the current industry and regulatory landscapes. Finally, we identify research gaps and offer recommendations for addressing them in the critical infrastructure space.