Shrinkage methods reduce the search space of Differentiable Architecture Search (DARTS) by progressively discarding candidate operations, which accelerates the search. However, their shrinkage strategy suffers from an overly fine task granularity: only the single least promising candidate is dropped per round of shrinkage, which is suboptimal in both performance and efficiency. In this study, we introduce the concept of Granular Computing (GrC) into the shrinkage paradigm and present Fast Progressive Differentiable Architecture Search (FP-DARTS), which effectively reduces the computational cost of each shrinkage round and thereby improves the efficiency and performance of the algorithm. FP-DARTS comprises three stages: adaptive granularity division and selection, granular-channel performance evaluation, and progressive shrinkage. In the first stage, to reorganize the task granularity, we cluster the candidate operations into granular-channels and adaptively select an appropriate task granularity; a dynamic clustering strategy avoids introducing additional computation. In the second stage, we train the architecture parameters to measure the potential of each granular-channel. In the third stage, to improve the stability of the shrinkage results, we introduce a channel annealing mechanism that smoothly discards unpromising granular-channels. Systematic experiments show that FP-DARTS achieves a test accuracy of 97.56% on CIFAR-10 in 0.04 GPU-days and 75.5% on ImageNet in 1.2 GPU-days. On the NAS-Bench-201 search space, it obtains test accuracies of 94.22%, 73.07%, and 46.23% on CIFAR-10, CIFAR-100, and ImageNet16-120, respectively. These results demonstrate that FP-DARTS attains higher search speed and competitive performance compared with state-of-the-art shrinkage and non-shrinkage methods.
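To make the three-stage pipeline concrete, the following is a minimal sketch of the progressive-shrinkage loop with granular-channels and channel annealing. It is not the paper's implementation: the clustering is a simple round-robin stand-in for the dynamic clustering strategy, the channel scores are random placeholders for the trained architecture parameters, and all function names (e.g., `cluster_into_channels`, `anneal_weights`) are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's code): illustrates progressive
# shrinkage over granular-channels with a channel-annealing schedule.
import numpy as np

rng = np.random.default_rng(0)

def cluster_into_channels(ops, k):
    """Stage 1 stand-in: group candidate operations into k granular-channels
    (round-robin here; the paper uses an adaptive, dynamic clustering strategy)."""
    return [ops[i::k] for i in range(k)]

def evaluate_channels(channels):
    """Stage 2 stand-in: return one 'potential' score per granular-channel.
    In FP-DARTS this would come from trained architecture parameters; here it
    is random for illustration only."""
    return rng.random(len(channels))

def anneal_weights(scores, temperature):
    """Stage 3 helper: softmax with a decaying temperature so weak channels are
    suppressed smoothly rather than dropped abruptly."""
    z = scores / max(temperature, 1e-8)
    e = np.exp(z - z.max())
    return e / e.sum()

ops = [f"op{i}" for i in range(8)]           # candidate operations on an edge
channels = cluster_into_channels(ops, k=4)   # stage 1: granularity division
temperature = 1.0
while len(channels) > 1:                     # stage 3: progressive shrinkage
    scores = evaluate_channels(channels)     # stage 2: channel potential
    weights = anneal_weights(scores, temperature)
    channels.pop(int(np.argmin(weights)))    # discard least promising channel
    temperature *= 0.7                       # anneal toward harder selection
print("surviving granular-channel:", channels[0])
```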