Abstract
Over the past decade, researchers have proposed many remarkable algorithms in the field of Generative AI, spanning structural design, training modes, and more. However, with the explosive growth in demand for data and annotation in generative models, and the growing emphasis on data-centric algorithms, improving model performance through better data efficiency has become another path for the future development of generative models. In this paper, we propose a new strategy, inspired by active learning, to improve the performance of generative models. A cognitive feature extractor of the generative model (analogous to the oracle in active learning) annotates the informativeness of the data. By guiding the generative model to focus on samples in low-feature-density regions, i.e., those with high informativeness, the model can gradually achieve “full cognition” of the data. We conduct extensive experiments to verify the effectiveness of our strategy and demonstrate its broad applicability to generative models and downstream data-driven tasks.
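As a rough illustration of the core idea, the snippet below sketches one common way to score "informativeness" as inverse local feature density: samples whose features lie far from their nearest neighbours (low-density regions) receive higher weight. This is a minimal sketch under our own assumptions (k-nearest-neighbour density estimation with Euclidean distance); the paper's actual cognitive feature extractor and weighting scheme are not specified in the abstract, and the function name `informativeness_weights` is hypothetical.

```python
import numpy as np

def informativeness_weights(features: np.ndarray, k: int = 5) -> np.ndarray:
    """Weight each sample inversely to its local feature density,
    estimated as the mean Euclidean distance to its k nearest
    neighbours (larger distance = lower density = more informative)."""
    # pairwise Euclidean distances between all feature vectors
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # mean distance to the k nearest neighbours, excluding self (distance 0)
    knn_mean = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    # normalise into a sampling distribution over the dataset
    return knn_mean / knn_mean.sum()

# toy example: a dense cluster of 20 points plus one distant outlier
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, size=(20, 2)),
                   np.array([[5.0, 5.0]])])
w = informativeness_weights(feats, k=3)
# the outlier sits in a low-density region, so it gets the largest weight
assert w.argmax() == 20
```

Such weights could then bias the training sampler of a generative model toward under-represented regions of feature space, which is the spirit of the strategy described above.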