Abstract

When deep neural networks are deployed in a highly automated vehicle for environment perception tasks in an unseen (target) domain that differs from the training (source) domain, the domain mismatch typically degrades performance. Domain adaptation methods aim at overcoming this mismatch. Many recently investigated methods for unsupervised domain adaptation jointly train a model on labeled source data and unlabeled target data. These methods assume that data from the target domain is available during source-domain training, which is not always the case in real applications. In this paper, we present an online style transfer method for continual domain adaptation that improves the performance of a given perception model on (multiple) unseen target domains. The approach is based on an image style transfer in the frequency domain and requires neither adapting the parameters of the given source-trained model to the target domain nor any considerable amount of memory for storing the frequency-domain representation of the source-domain style, which is particularly important given the hardware limitations of an autonomous vehicle.
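To make the core idea more concrete, the following is a minimal sketch of a frequency-domain style transfer, in the spirit of Fourier-based domain adaptation: the low-frequency amplitude spectrum of a target image (its "style") is replaced by a stored source-domain amplitude, while the phase (image content) is kept. The function name, the `beta` parameter, and all implementation details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transfer_style_fft(target_img: np.ndarray,
                       source_amplitude: np.ndarray,
                       beta: float = 0.01) -> np.ndarray:
    """Swap the low-frequency amplitude of a target image with a stored
    source-domain amplitude spectrum, keeping the target phase.

    target_img:       H x W x C float array (a target-domain image).
    source_amplitude: H x W x C amplitude spectrum representing the source
                      style (the only source-domain information stored).
    beta:             fraction of the spectrum around the center to swap
                      (hypothetical parameter for this sketch).
    """
    # Per-channel 2D FFT; shift the zero-frequency component to the center.
    fft_t = np.fft.fftshift(np.fft.fft2(target_img, axes=(0, 1)), axes=(0, 1))
    amp_t, phase_t = np.abs(fft_t), np.angle(fft_t)

    # Replace the central (low-frequency) amplitude block, which mostly
    # carries image "style"; the phase (content) stays untouched.
    h, w = target_img.shape[:2]
    bh, bw = int(beta * h), int(beta * w)
    ch, cw = h // 2, w // 2
    amp_t[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1] = \
        source_amplitude[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1]

    # Recombine amplitude and phase, undo the shift, and invert the FFT.
    fft_new = amp_t * np.exp(1j * phase_t)
    img = np.fft.ifft2(np.fft.ifftshift(fft_new, axes=(0, 1)), axes=(0, 1))
    return np.real(img)
```

Note how this supports the memory claim in the abstract: only a single amplitude spectrum of the source style needs to be stored on the vehicle, and the source-trained model itself is never modified.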
