Continual learning is a promising technique that enables intelligent motor fault diagnosis models to extend to new diagnosable fault classes without costly retraining from scratch. However, existing continual learning methods have two main limitations. (1) They rely on manual detection of new faults, which is labor-intensive, untimely, and, more importantly, prone to mistaken diagnosis results. (2) They adopt traditional knowledge distillation to align the absolute responses of the old and new models, which alleviates catastrophic forgetting but restricts flexible learning from incremental datasets. To overcome these limitations, this paper proposes a novel self-driven continual learning framework for class-added motor fault diagnosis that spontaneously detects unseen faults and performs more flexible continual learning from incremental datasets. For automatic detection of unseen faults, after online samples are collected, adversarial training with exemplars of each seen class is conducted to measure class separability. Fault classes that the diagnosis model has never seen can thus be clearly distinguished from all seen classes; as a result, missed diagnoses and misdiagnoses are effectively avoided, and incremental samples of new fault types can be collected quickly. For the flexible continual learning strategy, a more flexible knowledge distillation is proposed that preserves the prediction propensity rather than the absolute response. This strategy not only maintains recognition performance on old classes but also loosens unnecessary constraints and increases the plasticity of the diagnosis model to learn new knowledge from incremental datasets, thus improving the accuracy of motor fault diagnosis during continual learning. The effectiveness of the proposed method is verified through fault simulation experiments on three-phase motors, and its superiority is demonstrated by comparison with several state-of-the-art diagnosis methods.
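The unseen-fault detection idea can be illustrated with a deliberately simplified sketch. The abstract does not specify the separability measure, so the `detect_unseen` helper below uses distance-to-nearest-exemplar as a hypothetical stand-in for the adversarially learned separability score; the function names, class labels, threshold, and feature vectors are illustrative assumptions, not the paper's implementation.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def detect_unseen(sample, exemplars_by_class, threshold):
    """Flag an online sample as an unseen fault when it is poorly separable
    from (here: far from) every seen class's exemplar set.

    Returns the best-matching seen class, or None when no seen class
    explains the sample, i.e., a new fault type should be collected.
    This distance rule is only a stand-in for the paper's adversarial
    separability measure.
    """
    nearest = {cls: min(euclidean(sample, ex) for ex in exemplars)
               for cls, exemplars in exemplars_by_class.items()}
    best_class = min(nearest, key=nearest.get)
    if nearest[best_class] > threshold:
        return None  # not separable into any seen class: candidate new fault
    return best_class
```

Under this rule, a sample near a seen class's exemplars is diagnosed normally, while a sample far from all exemplar sets is routed to the incremental dataset instead of being forced into a wrong seen class, which is how missed diagnoses and misdiagnoses are avoided.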
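The contrast between aligning absolute responses and preserving prediction propensity can be sketched as follows. `kd_absolute` is the classical distillation loss (KL divergence between temperature-softened outputs of the old and new models), which ties the new model to the old model's exact response. `kd_propensity` is a hypothetical propensity-preserving variant that penalizes only pairwise order inversions of the old model's class preferences; the abstract does not give the paper's actual loss, so this pairwise hinge form is an assumption chosen for illustration.

```python
import math

def softmax(logits, T=2.0):
    """Temperature-softened softmax (numerically stabilized)."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_absolute(old_logits, new_logits, T=2.0):
    """Classical KD: KL divergence between softened old and new outputs,
    forcing the new model to reproduce the old model's absolute response."""
    p = softmax(old_logits, T)
    q = softmax(new_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def kd_propensity(old_logits, new_logits, margin=0.0):
    """Hypothetical propensity-preserving KD: penalize only reversals of the
    old model's pairwise class ordering, leaving magnitudes unconstrained."""
    loss, pairs = 0.0, 0
    k = len(old_logits)
    for i in range(k):
        for j in range(k):
            if old_logits[i] > old_logits[j]:  # old model prefers i over j
                # hinge penalty only if the new model reverses that preference
                loss += max(0.0, margin - (new_logits[i] - new_logits[j]))
                pairs += 1
    return loss / max(pairs, 1)
```

For example, if the new model keeps the old ranking but rescales its logits, `kd_propensity` is zero while `kd_absolute` remains positive; the looser constraint is what leaves the model plasticity to fit the incremental classes.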