Collaborative robots face barriers to widespread adoption because programming them to achieve human-like movement is complex. Learning from demonstration (LfD) has emerged as a promising solution: it allows robots to learn tasks directly from expert demonstrations, offering versatility and an intuitive programming approach. However, many existing LfD methods suffer from convergence failure and limited generalization ability. In this paper, we propose: (1) a generative adversarial network (GAN)-based model with a multilayer perceptron (MLP) architecture, coupled with a novel loss function designed to mitigate convergence issues; (2) an affine transformation-based method that improves the generalization performance of LfD tasks; (3) a data preprocessing method tailored to deployment on robotic platforms. We conduct experiments on a UR5 robotic platform tasked with reproducing handwritten digits. The results show that our method significantly accelerates trajectory generation, achieving a processing time of 23 ms, five times faster than movement primitives (MPs), while preserving the key features of the demonstrations and delivering strong convergence and generalization performance.
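To make the GAN-based model concrete, the following is a minimal sketch (not the authors' released code) of an MLP generator and discriminator for trajectory generation. The latent size, layer widths, and the assumption of 2-D trajectories resampled to 100 waypoints are illustrative choices, not values taken from the paper, and the novel loss function is not reproduced here.

```python
# Minimal sketch of an MLP-based GAN for trajectory generation.
# LATENT_DIM, TRAJ_POINTS, and layer sizes are assumed for illustration only.
import torch
import torch.nn as nn

LATENT_DIM = 32              # assumed latent vector size
TRAJ_POINTS = 100            # assumed number of resampled waypoints
TRAJ_DIM = 2 * TRAJ_POINTS   # flattened x/y coordinates


class Generator(nn.Module):
    """Maps a latent vector to a flattened 2-D trajectory."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, TRAJ_DIM), nn.Tanh(),  # trajectories scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)


class Discriminator(nn.Module):
    """Scores how demonstration-like a flattened trajectory is (raw logit)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TRAJ_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),  # use with nn.BCEWithLogitsLoss or a custom loss
        )

    def forward(self, traj):
        return self.net(traj)


# Example: sample a batch of candidate trajectories from the generator.
g = Generator()
z = torch.randn(8, LATENT_DIM)
fake_traj = g(z).view(8, TRAJ_POINTS, 2)
```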
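Likewise, the affine transformation-based generalization can be illustrated with a small sketch: a demonstrated trajectory is mapped by scaling, rotation, and translation so that its endpoints coincide with a new start/goal pair. The function `generalize_trajectory` and the 2-D formulation are assumptions for illustration; the transform used in the paper may differ in detail.

```python
# Minimal sketch of affine-transformation-based trajectory generalization.
# generalize_trajectory is a hypothetical helper, not the paper's API.
import numpy as np


def generalize_trajectory(demo, new_start, new_goal):
    """Affinely map `demo` (N x 2) so its endpoints become new_start/new_goal."""
    demo = np.asarray(demo, dtype=float)
    new_start = np.asarray(new_start, dtype=float)
    new_goal = np.asarray(new_goal, dtype=float)
    d_start, d_goal = demo[0], demo[-1]

    v_demo = d_goal - d_start
    v_new = new_goal - new_start

    # Uniform scale and rotation that align the demo's start->goal vector
    # with the new start->goal vector.
    scale = np.linalg.norm(v_new) / np.linalg.norm(v_demo)
    angle = np.arctan2(v_new[1], v_new[0]) - np.arctan2(v_demo[1], v_demo[0])
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])

    # Apply the affine map x -> scale * R (x - d_start) + new_start,
    # which preserves the shape of the demonstration.
    return (scale * (R @ (demo - d_start).T)).T + new_start


# Example: adapt a curved demonstration to a new start/goal pair.
t = np.linspace(0.0, 1.0, 50)
demo = np.stack([t, 0.1 * np.sin(np.pi * t)], axis=1)
adapted = generalize_trajectory(demo, new_start=[0.2, 0.3], new_goal=[0.6, 0.1])
```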