Parts manufactured by Additive Manufacturing (AM) techniques such as Directed Energy Deposition (DED) are generally qualified by post-process inspection. Although these techniques are reliable, they impose constraints in terms of cost and time, and defective build states remain unknown during the actual processing. With advances in sensing techniques and Deep Learning (DL), real-time Artificial Intelligence (AI) based monitoring systems are emerging as an alternative to post-process inspection. This article investigates how the manifolds learned in the embedding spaces of two convolutional generative models, an autoencoder and a Generative Adversarial Network (GAN), can be exploited to differentiate build conditions. A co-axially mounted CCD camera acquired melt pool morphology images corresponding to six build parameter sets covering the process map from the Lack of Fusion (LoF) to the conduction regime; these images constituted the dataset used to train and test the models. After training, the latent space of each network captures the commonalities and differences, i.e. the unique manifolds, of the melt pool morphologies corresponding to the six build conditions. The manifolds learned by the two trained Convolutional Neural Network (CNN) models were combined with a One-Class SVM to separate the ideal build quality from the other conditions. The One-Class SVM trained on the two latent spaces achieved an overall classification accuracy of ≈97%. These results demonstrate the potential and robustness of the proposed vision-based manifold-learning methodology for DED process monitoring.
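The latent-space anomaly-detection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the convolutional autoencoder/GAN encoder is replaced here by synthetic latent vectors (the latent dimension, cluster parameters, and `nu` value are assumptions), so only the One-Class SVM stage is shown, using scikit-learn's `OneClassSVM`.

```python
# Sketch of one-class classification on learned latent features.
# The paper fits a One-Class SVM on latent vectors produced by a
# convolutional autoencoder/GAN trained on melt-pool images; here the
# encoder is replaced by synthetic latents so the example is self-contained.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
latent_dim = 32  # assumed size of the encoder bottleneck

# Stand-in latent vectors: the "ideal" build condition clusters tightly,
# while defective conditions (e.g. Lack of Fusion) drift away in latent space.
z_ideal = rng.normal(0.0, 0.5, size=(500, latent_dim))
z_defect = rng.normal(3.0, 0.5, size=(100, latent_dim))

# Fit on ideal-condition latents only (one-class / novelty detection);
# nu bounds the fraction of training points treated as outliers.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(z_ideal)

# predict() returns +1 for inliers (ideal) and -1 for anomalous build states.
acc_ideal = (clf.predict(z_ideal) == 1).mean()
acc_defect = (clf.predict(z_defect) == -1).mean()
print(f"ideal recall: {acc_ideal:.2f}, defect detection: {acc_defect:.2f}")
```

In practice the `z_ideal` / `z_defect` arrays would come from encoding the CCD melt-pool images with the trained network, and the SVM is fitted only on the ideal-condition latents, so no defect labels are needed at training time.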