Unsupervised anomaly detection aims to identify low-probability-density samples in a set of inputs when only normal samples are available for model training. Inferring abnormal regions in an input image requires an understanding of the surrounding semantic context. This work presents a Semantic Context based Anomaly Detection Network (SCADN) for unsupervised anomaly detection, which learns the semantic context from the normal samples. To achieve this, we first generate multi-scale striped masks to remove parts of the normal samples, and then train a generative adversarial network to reconstruct the masked regions. Since the masks span multiple scales and stripe directions, diverse training examples are generated, enabling the network to capture rich semantic context. At test time, we obtain an error map for each sample by computing the difference between the reconstructed image and the input image, and infer abnormal samples from these error maps. Finally, we conduct extensive experiments on three public benchmark datasets and a newly collected dataset, LaceAD, and show that our method clearly outperforms current state-of-the-art methods.
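To illustrate the masking and scoring steps described above, the following is a minimal NumPy sketch. The function names, stripe widths, offsets, and the mean-absolute-error aggregation are our own illustrative assumptions, not the paper's exact implementation, and the generator network itself is omitted:

```python
import numpy as np

def striped_mask(height, width, stripe_width, direction="horizontal", offset=0):
    """Binary mask removing alternating stripes of the given width.
    1 = keep pixel, 0 = remove (to be reconstructed by the network)."""
    mask = np.ones((height, width), dtype=np.float32)
    if direction == "horizontal":
        for y in range(offset, height, 2 * stripe_width):
            mask[y:y + stripe_width, :] = 0.0
    else:  # vertical stripes
        for x in range(offset, width, 2 * stripe_width):
            mask[:, x:x + stripe_width] = 0.0
    return mask

def multi_scale_masks(height, width, stripe_widths=(4, 8, 16)):
    """Masks at several scales and both stripe directions, yielding
    diverse masked training examples from each normal sample."""
    masks = []
    for w in stripe_widths:
        for direction in ("horizontal", "vertical"):
            for offset in (0, w):  # complementary stripe phases
                masks.append(striped_mask(height, width, w, direction, offset))
    return masks

def anomaly_score(original, reconstructed):
    """Error map = pixel-wise difference between input and reconstruction;
    the image-level score aggregates the map (mean absolute error here)."""
    error_map = np.abs(original - reconstructed)
    return error_map, float(error_map.mean())

if __name__ == "__main__":
    # Toy usage: mask a stand-in "normal" image and score a reconstruction.
    image = np.random.rand(64, 64).astype(np.float32)
    masks = multi_scale_masks(64, 64)
    masked_inputs = [image * m for m in masks]  # zeroed regions are unseen by the model
    # A trained generator would reconstruct the unseen regions; here a
    # perturbed copy stands in to show how the error map is derived.
    fake_reconstruction = image + 0.01 * np.random.randn(64, 64).astype(np.float32)
    error_map, score = anomaly_score(image, fake_reconstruction)
    print(f"{len(masked_inputs)} masked examples, anomaly score = {score:.4f}")
```

In this sketch, anomalous inputs would yield larger reconstruction errors than normal ones because the generator is trained only on normal samples; the choice of threshold for flagging a sample is left to the evaluation protocol.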