As neural architecture search attracts increasing research attention, it has become apparent that it demands substantial computational resources, most of which are spent training and evaluating intermediate candidate solutions during the search phase. Although most researchers focus on developing more efficient search methods, the dominant computational cost in terms of execution time remains the evaluation of candidate architectures. Consequently, many works reduce the number of training epochs used for evaluations during the search phase. In this work, we study the effect of such reduced training in neural architecture search, focusing on whether the relative rankings of architectures are preserved when they are trained with different optimizers and for varying numbers of epochs. We find relatively high rank correlations (Kendall's tau-b > 0.7) between fully and partially trained, arbitrarily connected architectures, which are generated by mutating a simple convolutional architecture for the CIFAR-10 image recognition dataset. Furthermore, we observe similar behavior in networks sampled from the NASBench neural architecture dataset, which consist of a fixed outer skeleton and a variable cell module composition. Finally, we demonstrate the ability of genetic algorithms to find optimal solutions in noisy environments by simulating the previous findings with perturbed n-dimensional Rastrigin functions.
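
To make the ranking comparison concrete, the following minimal Python sketch computes Kendall's tau-b between the validation accuracies of the same set of candidate architectures after full training and after a reduced epoch budget; the accuracy values are illustrative placeholders, not results reported in this work.

    import numpy as np
    from scipy.stats import kendalltau

    # Hypothetical validation accuracies for the same candidate architectures,
    # evaluated once after full training and once after a reduced epoch budget.
    acc_full    = np.array([0.91, 0.88, 0.93, 0.85, 0.90])
    acc_reduced = np.array([0.84, 0.80, 0.86, 0.78, 0.83])

    # Kendall's tau-b measures how well the reduced-training ranking agrees
    # with the fully trained ranking (1.0 means identical ordering).
    tau, p_value = kendalltau(acc_full, acc_reduced)
    print(f"Kendall's tau-b: {tau:.3f} (p = {p_value:.3g})")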
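
The final simulation step can likewise be sketched as a simple genetic algorithm minimizing an n-dimensional Rastrigin function whose evaluations are perturbed by additive Gaussian noise, standing in for noisy fitness estimates from partially trained networks. The population size, noise level, and genetic operators below are illustrative assumptions, not the exact configuration used in this work.

    import numpy as np

    def perturbed_rastrigin(x, noise_std, rng):
        # n-dimensional Rastrigin value plus additive Gaussian noise,
        # mimicking a noisy (reduced-training) fitness evaluation.
        value = 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
        return value + rng.normal(0.0, noise_std)

    def genetic_algorithm(dim=5, pop_size=40, generations=200,
                          mutation_std=0.1, noise_std=0.5, seed=0):
        rng = np.random.default_rng(seed)
        pop = rng.uniform(-5.12, 5.12, size=(pop_size, dim))
        for _ in range(generations):
            fitness = np.array([perturbed_rastrigin(ind, noise_std, rng)
                                for ind in pop])
            # Binary tournament selection: the lower (noisy) fitness wins.
            parents_idx = [min(rng.choice(pop_size, 2, replace=False),
                               key=lambda i: fitness[i])
                           for _ in range(pop_size)]
            parents = pop[parents_idx]
            # Uniform crossover between shuffled parent pairs,
            # followed by Gaussian mutation.
            mates = parents[rng.permutation(pop_size)]
            mask = rng.random(parents.shape) < 0.5
            children = np.where(mask, parents, mates)
            children += rng.normal(0.0, mutation_std, size=children.shape)
            pop = np.clip(children, -5.12, 5.12)
        # Report the best individual under noise-free evaluation.
        return min(pop, key=lambda ind: perturbed_rastrigin(ind, 0.0, rng))

    best = genetic_algorithm()
    print("Best solution found:", np.round(best, 3))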