In the field of artificial intelligence and natural language processing (NLP), natural language generation (NLG) has advanced significantly. Its primary aim is to automatically generate text that resembles human language. Traditional text generation has mainly focused on binary style transfer, limiting its scope to simple transformations between positive and negative tones or between modern and ancient styles. However, real-world scenarios demand accommodating a far more diverse range of styles, which introduces considerably greater complexity. Existing methods usually fail to capture the richness of these diverse styles, hindering their utility in practical applications. To address these limitations, we propose a multi-class conditioned text generation model. We overcome previous constraints by equipping a transformer-based decoder with adversarial networks and style-attention mechanisms to model the various styles present in multi-class text. Our experimental results show that the proposed model outperforms alternative approaches on multi-class text generation tasks in terms of diversity while preserving fluency. We expect that our study will help researchers not only train their models but also build simulated multi-class text datasets for further research.
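To illustrate the kind of style-attention mechanism the abstract refers to, the following is a minimal NumPy sketch, not the paper's actual implementation: a decoder hidden state attends over a small table of learned style embeddings, and the softmax-weighted mixture yields a blended style context vector that could condition the next generation step. All names, dimensions, and the number of style classes here are illustrative assumptions.

```python
import numpy as np

def style_attention(hidden, style_embeddings):
    """Blend style embeddings via attention from a decoder hidden state.

    hidden:           (d,)            current decoder hidden state
    style_embeddings: (num_styles, d) one learned vector per style class
    Returns the (d,) style context vector and the attention weights.
    """
    scores = style_embeddings @ hidden           # (num_styles,) dot-product scores
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    weights /= weights.sum()
    context = weights @ style_embeddings         # (d,) weighted style mixture
    return context, weights

# Hypothetical setup: 4 style classes, hidden size 8.
rng = np.random.default_rng(0)
styles = rng.normal(size=(4, 8))
h = rng.normal(size=8)
context, weights = style_attention(h, styles)
```

In a full model, `context` would typically be concatenated with (or added to) the decoder state before projecting to vocabulary logits, letting the attention weights softly select among the style classes rather than committing to a single hard label.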