Over the past years there has been a surge of interest in the application of machine learning, and more broadly artificial intelligence, to scientific problems. This interest has been spurred by advances in computational capabilities, coupled with the development of a broad set of methods and machine learning frameworks for carrying out complex analyses of large datasets in ways that were not possible in the past. In this presentation, I will explore the application of machine learning techniques to atomic layer deposition (ALD), and in particular to process optimization. While most ALD processes are highly reproducible, optimizing both new and existing ALD processes in novel reactors is often time consuming and labor intensive. This motivated us to search for ways of accelerating this task using both ex-situ and in-situ experimental data that are readily available in many laboratories.

First, I will focus on the development of surrogate models that can predict saturation dose times from partially saturated growth profiles, a type of data that is part of the routine characterization of ALD processes in many labs. To develop this surrogate model, we first created a dataset of simulated growth profiles across one of our ALD reactors for a wide range of possible experimental conditions, including precursor pressure, surface reactivity, growth per cycle, flow conditions, and dose times. We then used this dataset to train deep neural networks to predict the dose time that would lead to complete saturation. By tailoring the complexity of the model, the number of growth profiles per condition, and the structure and depth of the neural network, we can explore the minimum number of experiments and data points required to accurately predict saturation for an unknown ALD process.
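The surrogate-model idea above can be illustrated with a minimal sketch: train a small network to map one underdosed growth profile and its dose time to the saturating dose time. The toy growth model, reactor geometry, sampling ranges, and network size below are all illustrative assumptions, not the actual simulation or architecture used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy growth model (an assumption, not the actual reactor simulation):
# first-order Langmuir adsorption with precursor depletion downstream,
# giving fractional coverage at normalized position x after dose time t.
def growth_profile(t, k, x):
    return 1.0 - np.exp(-k * t * np.exp(-x))

def saturation_dose(k, x_end=2.0, target=0.95):
    # Dose time needed to reach `target` coverage at the reactor outlet.
    return -np.log(1.0 - target) / (k * np.exp(-x_end))

x = np.linspace(0.0, 2.0, 8)             # 8 positions along the reactor
n = 2000
k = rng.uniform(0.5, 5.0, size=n)        # hidden surface reactivity
t_probe = rng.uniform(0.1, 1.0, size=n)  # one underdosed trial per sample

# Input: a single partially saturated profile plus its dose time;
# target: the dose time that saturates the process.
X = np.stack([np.append(growth_profile(t_probe[i], k[i], x), t_probe[i])
              for i in range(n)])
y = saturation_dose(k)[:, None]

Xm, Xs = X.mean(0), X.std(0)
ym, ys = y.mean(), y.std()
Xn, yn = (X - Xm) / Xs, (y - ym) / ys    # standardize inputs and target

# One-hidden-layer network trained with full-batch gradient descent.
h, lr = 32, 0.05
W1 = rng.normal(0.0, 0.3, (X.shape[1], h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.3, (h, 1));          b2 = np.zeros(1)
for _ in range(4000):
    z = np.tanh(Xn @ W1 + b1)
    err = z @ W2 + b2 - yn               # gradient of 0.5 * MSE
    dz = (err @ W2.T) * (1.0 - z ** 2)
    W2 -= lr * (z.T @ err) / n; b2 -= lr * err.mean(0)
    W1 -= lr * (Xn.T @ dz) / n; b1 -= lr * dz.mean(0)

# Predict the saturating dose for an unseen condition.
k_new, t_new = 2.0, 0.4
f = np.append(growth_profile(t_new, k_new, x), t_new)
t_sat_pred = ((np.tanh((f - Xm) / Xs @ W1 + b1) @ W2 + b2) * ys + ym).item()
print(f"predicted {t_sat_pred:.1f} s vs true {saturation_dose(k_new):.1f} s")
```

Even this toy version shows the key point of the abstract: a single underdosed profile and its dose time carry enough information to regress the saturating dose.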
Our results show that, for the optimization of a single precursor, neural networks can accurately predict saturation times using just a single growth profile and its corresponding dose time as input.

Then, I will focus on the use of in-situ techniques for the automatic optimization of ALD processes without a human in the loop. The advantage of in-situ techniques is that they provide a much faster response, reducing optimization times by more than two orders of magnitude. Using a realistic model of a single quartz crystal microbalance, we have developed both expert systems and Bayesian optimization-based models that are capable of exploring the process conditions and converging on optimal conditions leading to saturation. We then implemented the expert system model in one of our experimental reactors, experimentally validating this approach. This methodology can be extended to other in-situ techniques, such as spectroscopic ellipsometry, or to more sophisticated arrangements involving more than one probe to obtain spatially resolved data.

Finally, I will briefly discuss other potential applications of machine learning, for instance process optimization in spatial ALD configurations, or predicting the self-limited nature of novel processes.

This material is based upon work supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-06CH11357.
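The expert-system style dose search on an in-situ signal can be sketched as a simple feedback loop: increase the dose until the per-cycle QCM mass gain plateaus. The first-order QCM response, the noise level, the dose schedule, and the tolerance below are illustrative assumptions, not the actual Argonne implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated QCM response (an assumption): per-cycle mass gain vs precursor
# dose time follows first-order saturation, plus measurement noise.
def qcm_gain(t, m_sat=100.0, tau=0.8):
    return m_sat * (1.0 - np.exp(-t / tau)) + rng.normal(0.0, 0.3)

# Expert-system loop (a simplified sketch of the idea): grow the dose
# geometrically until the measured mass gain stops increasing within a
# relative tolerance, i.e. the process has saturated.
def find_saturating_dose(t0=0.05, growth=1.5, rel_tol=0.02, max_iter=30):
    t, prev = t0, qcm_gain(t0)
    for _ in range(max_iter):
        t_next = t * growth
        cur = qcm_gain(t_next)
        if abs(cur - prev) / max(abs(cur), 1e-12) < rel_tol:
            return t_next          # gain plateaued: this dose saturates
        t, prev = t_next, cur
    return t                        # fallback if no plateau is detected

t_sat = find_saturating_dose()
print(f"saturating dose ~ {t_sat:.2f} s")
```

Because each iteration needs only one in-situ measurement rather than a full ex-situ characterization run, a loop of this kind converges in a handful of cycles, which is the source of the speed-up described above; a Bayesian optimization variant would replace the fixed geometric schedule with a surrogate-guided choice of the next dose.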