Artificial intelligence (AI) systems are increasingly popular in many applications. Nevertheless, AI technologies are still developing, and many issues remain to be addressed. Among those, the reliability of AI systems must be demonstrated so that the general public can use them with confidence. In this paper, we provide statistical perspectives on the reliability of AI systems, focusing on the time dimension: that is, whether the system can perform its designed functionality for the intended period of time. We introduce a so-called “SMART” statistical framework for AI reliability research, which comprises five components: Structure of the system, Metrics of reliability, Analysis of failure causes, Reliability assessment, and Test planning. We review traditional methods in reliability data analysis and software reliability, and discuss how those existing methods can be adapted for reliability modeling and assessment of AI systems. Unlike traditional reliability studies, AI reliability focuses on the software system, including the training data. Thus, we describe recent developments in the modeling and analysis of AI reliability for software systems. The paper outlines statistical research challenges in this area, including out-of-distribution detection, the effect of the training set, adversarial attacks, model accuracy, and uncertainty quantification, and discusses how those topics relate to AI reliability, with illustrative examples. The final element of SMART, test planning, is critical for demonstrating AI reliability. Therefore, we discuss data collection and test planning, highlighting methods for improving system design to achieve higher AI reliability. The paper closes with concluding remarks.