Detecting rotated faces in images from uncontrolled environments is a challenging task. Deep convolutional neural networks have greatly improved detection performance, but these methods still do not fully exploit facial structure information, leaving faces at extreme rotation angles undetected. In this paper, we present a novel Multi-Task Collaboration Network (MTCNet) for rotation-invariant face detection that fully exploits facial landmarks to improve detection performance through collaboration between face detection and face alignment. Unlike previous methods that predict rotation angles in a single step, MTCNet employs a cascaded architecture with three stages that predict faces with gradually decreasing rotation-in-plane ranges in a coarse-to-fine manner. Accurate facial landmarks further facilitate face detection. We also introduce a new training loss that integrates the geometric angle into the penalization, which is more principled than coarsely measuring the differences among training samples. Our approach also exploits contextual information to distinguish challenging faces in unconstrained scenarios. Extensive experiments demonstrate the effectiveness of MTCNet on both multi-orientation and rotation datasets. Empirical results show that MTCNet achieves results competitive with state-of-the-art face detectors while remaining time-efficient.
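The geometric-angle penalty mentioned above is not specified in detail here; as a rough illustrative sketch (our own assumption, not the paper's exact formulation), a loss that respects the circular nature of in-plane rotation angles could be written as:

```python
def angular_difference(pred_deg: float, target_deg: float) -> float:
    """Smallest geometric angle between two in-plane rotations, in degrees.

    Wrapping at 360 degrees means e.g. 350 and 10 differ by 20, not 340,
    so near-identical rotations are not penalized as if they were opposite.
    """
    diff = abs(pred_deg - target_deg) % 360.0
    return min(diff, 360.0 - diff)


def angle_loss(pred_deg: float, target_deg: float) -> float:
    """Illustrative L1-style penalty on the geometric angular difference,
    normalized to [0, 1] (180 degrees is the largest possible error).
    This is a hypothetical stand-in, not MTCNet's published loss."""
    return angular_difference(pred_deg, target_deg) / 180.0
```

Compared with treating angle bins as unordered classes, such a penalty scales with how far the predicted rotation actually is from the ground truth on the circle.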