Deep convolutional models often produce inadequate predictions for inputs that are foreign to the training distribution. Consequently, the problem of detecting outlier images has recently received considerable attention. Unlike most previous work, we address this problem in the dense prediction context. Our approach is based on two reasonable assumptions. First, we assume that the inlier dataset is related to some narrow application field (e.g. road driving). Second, we assume that there exists a general-purpose dataset which is much more diverse than the inlier dataset (e.g. ImageNet-1k). We consider pixels from the general-purpose dataset as noisy negative samples, since most (but not all) of them are outliers. We encourage the model to recognize borders between the known and the unknown by pasting jittered negative patches over inlier training images. Our experiments target two dense open-set recognition benchmarks (WildDash 1 and Fishyscapes) and one dense open-set recognition dataset (StreetHazards). Extensive performance evaluation indicates the competitive potential of the proposed approach.

• Training with noisy negative images greatly improves dense open-set recognition.
• Training with randomly pasted negatives improves generalization on mixed-content images.
• Shared features improve outlier detection and allow inference with a single forward pass.
• State-of-the-art results on dense open-set recognition benchmarks: WildDash 1, Fishyscapes and StreetHazards.
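The pasting of jittered negative patches described above can be sketched as a simple data-augmentation step. The snippet below is a minimal NumPy illustration, not the authors' implementation: the outlier label id, the scale-jitter range, and the function name are assumptions chosen for the example. It crops a negative image, rescales it by a random factor ("jitter"), pastes it at a random location in an inlier training image, and marks the pasted pixels with an outlier label for dense supervision.

```python
import numpy as np

OUTLIER_LABEL = 255  # hypothetical label id reserved for outlier pixels

def paste_negative_patch(image, labels, negative, rng,
                         scale_range=(0.5, 2.0)):
    """Paste a randomly scaled crop of a negative (noisy outlier) image
    onto an inlier training image; mark pasted pixels as outliers."""
    H, W, _ = image.shape
    # jitter: random rescaling of the negative patch (assumed range)
    s = rng.uniform(*scale_range)
    ph = min(H, max(1, int(negative.shape[0] * s)))
    pw = min(W, max(1, int(negative.shape[1] * s)))
    # nearest-neighbour resize via index sampling (keeps the sketch NumPy-only)
    rows = np.arange(ph) * negative.shape[0] // ph
    cols = np.arange(pw) * negative.shape[1] // pw
    patch = negative[rows][:, cols]
    # random paste location inside the inlier image
    y = rng.integers(0, H - ph + 1)
    x = rng.integers(0, W - pw + 1)
    out_img = image.copy()
    out_lab = labels.copy()
    out_img[y:y + ph, x:x + pw] = patch
    out_lab[y:y + ph, x:x + pw] = OUTLIER_LABEL
    return out_img, out_lab
```

Because the pasted pixels receive an explicit outlier label, the model sees sharp borders between inlier content and (noisy) negative content during training, which is the behaviour the abstract describes.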