Abstract
The advent of deep learning has opened a range of opportunities, one of which is the ability to address subjective factors in floor plan design and make predictions through spatial semantic maps. Nonetheless, the amount of available data grows exponentially on a daily basis; accordingly, this research investigates deep generative methods for floor plan design and the relationship between data volume and training time, output quality, and output diversity; in other words, how much data is required to rapidly train models that return optimal results. In our research, we used Pix2pix, a variation of the Conditional Generative Adversarial Network algorithm, and a dataset of approximately 80 thousand images to train 10 models and evaluate their performance through a series of computational metrics. The results show that the potential of this data-driven method depends not only on the diversity of the training set but also on the linearity of its distribution; consequently, high-dimensional datasets did not achieve good results. We also conclude that models trained on small datasets (800 images) can return excellent results if given the correct training configuration (hyperparameters), but the best baseline for this generative task lies in the mid-range, using around 20 to 30 thousand images with a linear distribution. Finally, we present standard guidelines for dataset design and discuss the impact of data curation throughout the entire process.