Neural Architecture Search (NAS) has pioneered various constructive principles that push forward the development of deep learning, recently achieving strong performance on diverse tasks. Existing NAS methods mainly focus on automatically discovering an architecture for a single specific task. However, these methods leave the latent capability of the architecture search mechanism largely underexplored, e.g., automatically discovering a unified architecture from diverse cross-task distributions. In this work, we propose a Cross-task Differentiable ARchiTecture Search (Cross-DARTS for short) framework that automatically discovers a unified architecture for different low-level vision tasks, further widening the capacity of NAS. Specifically, we establish a new model that bridges different low-level vision tasks from an architecture search perspective. By constructing training data that integrates multi-task distributions, Cross-DARTS is obtained through a differentiable search scheme. A multi-scale fusion cell with powerful contextual representation capacity is designed as the basic component of the search space for low-level vision. Consistently promising results on three vision tasks, namely noise removal, rain removal, and joint rain and haze removal, demonstrate the superiority of our approach.
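For intuition, the differentiable search scheme referenced above can be sketched as a DARTS-style continuous relaxation, in which each edge of a cell mixes candidate operations through a softmax over learnable architecture parameters. The following is a minimal illustrative PyTorch sketch under that assumption only; the MixedOp class, its candidate operation pool, and all names here are hypothetical and do not reflect the paper's actual multi-scale fusion cell or search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Weighted mixture of candidate operations on one edge of a cell.

    The candidate pool below is purely illustrative; a low-level vision
    cell would define its own (here unspecified) set of operations.
    """
    def __init__(self, channels, candidates=None):
        super().__init__()
        if candidates is None:
            # Hypothetical candidate pool: plain conv, dilated conv, skip.
            candidates = [
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.Conv2d(channels, channels, 3, padding=2, dilation=2),
                nn.Identity(),
            ]
        self.ops = nn.ModuleList(candidates)
        # One architecture parameter per candidate op (the "alpha" in DARTS).
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        # Continuous relaxation: a softmax over alphas mixes the candidate
        # outputs, making the discrete architecture choice differentiable.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```

In a DARTS-style bilevel setup, the network weights would be updated on training batches, here presumably drawn from the integrated multi-task distribution, while the alpha parameters are updated on held-out batches; discretizing by keeping the highest-weighted operation on each edge then yields the final unified architecture.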