Tabular data, organized into rows and columns, is among the most widely used data formats. Existing methods for tabular data synthesis often face limitations in the size or complexity of the data they can handle. Deep generative models, by contrast, excel at modeling large, complex datasets. While these models have achieved remarkable success in generating image and audio data, their application to tabular data synthesis is relatively new, and a comprehensive comparison with existing methods is lacking. To fill this gap, this study systematically evaluates and compares the performance of deep generative models against existing methods for tabular data synthesis, and also investigates the efficacy of post-processing techniques. We aim to identify strengths and limitations and to provide insights for future research and practical applications. Our experiments show that the Synthetic Minority Oversampling Technique (SMOTE) and its variants outperform deep generative models, especially on small datasets. On large datasets, however, an ensemble of deep generative models combined with post-generation processing performs better than SMOTE alone. These results indicate that deep generative models hold promise as a valuable tool for generating tabular data. Nonetheless, further research is warranted to improve their performance and to gain a comprehensive understanding of their limitations.
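To make the baseline concrete, the core of SMOTE is simple interpolation: each synthetic sample lies on the line segment between a minority-class point and one of its k nearest minority-class neighbours. The sketch below is a minimal illustration of that idea, not the implementation used in this study; the function name and parameters are our own.

```python
import numpy as np

def smote_sample(X_min, k=3, n_new=5, rng=None):
    """Minimal SMOTE sketch (illustrative only).

    X_min : array of minority-class samples, shape (n, d)
    k     : number of nearest minority neighbours to interpolate toward
    n_new : number of synthetic samples to generate
    """
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        # pick a random minority-class point
        i = rng.integers(len(X_min))
        x = X_min[i]
        # Euclidean distances to all minority points
        d = np.linalg.norm(X_min - x, axis=1)
        # indices of the k nearest neighbours, skipping the point itself
        nn = np.argsort(d)[1:k + 1]
        neighbour = X_min[rng.choice(nn)]
        # synthetic point lies on the segment between x and the neighbour
        out.append(x + rng.random() * (neighbour - x))
    return np.array(out)
```

Because each synthetic point is a convex combination of two real minority samples, SMOTE never extrapolates outside the convex hull of the minority class, which helps explain its robustness on small datasets.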