Conventional scene text detection approaches essentially assume that training and test data are drawn from the same distribution and have achieved compelling results under this assumption. However, scene text detectors often suffer performance degradation in real-world applications, since the feature distribution of training images differs from that of test images collected in a new scene. To address this problem, we propose a novel method called Text Enhancement Network (TEN), based on adversarial learning, for cross-domain scene text detection. Specifically, we first design a Multi-adversarial Feature Alignment (MFA) module to maximally align features of the source and target data, from low-level texture to high-level semantics. Second, we develop the Text Attention Enhancement (TAE) module to re-weight the importance of text regions and enhance the corresponding features accordingly, improving robustness against noisy backgrounds. Additionally, we design a self-training strategy to further boost the performance of our TEN. We conduct extensive experiments on five benchmarks, and the results demonstrate the effectiveness of our TEN.
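The abstract does not specify how MFA is implemented; as a rough illustration of the multi-adversarial alignment idea, below is a minimal PyTorch sketch that attaches a gradient-reversal-based domain discriminator to each of several backbone feature levels. All module names, channel sizes, and the BCE loss choice are assumptions for illustration, not the authors' actual design.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, negated
    (and scaled) gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class DomainDiscriminator(nn.Module):
    """Per-level domain classifier: source (0) vs. target (1)."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, 1),
        )

    def forward(self, feat):
        return self.net(feat)


class MultiAdversarialAlignment(nn.Module):
    """Adversarially aligns source/target features at several backbone
    levels, from low-level texture maps to high-level semantic maps."""
    def __init__(self, level_channels=(256, 512, 1024)):  # hypothetical channels
        super().__init__()
        self.discriminators = nn.ModuleList(
            DomainDiscriminator(c) for c in level_channels
        )
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, feats, domain_label, lambd=1.0):
        """feats: one feature map per level; domain_label: 0.0 for a
        source batch, 1.0 for a target batch."""
        loss = 0.0
        for feat, disc in zip(feats, self.discriminators):
            logits = disc(grad_reverse(feat, lambd))
            target = torch.full_like(logits, domain_label)
            loss = loss + self.bce(logits, target)
        return loss / len(self.discriminators)


if __name__ == "__main__":
    # Toy usage with random tensors standing in for three pyramid levels.
    mfa = MultiAdversarialAlignment()
    shapes = [(256, 64), (512, 32), (1024, 16)]
    src = [torch.randn(2, c, s, s) for c, s in shapes]
    tgt = [torch.randn(2, c, s, s) for c, s in shapes]
    loss = mfa(src, 0.0) + mfa(tgt, 1.0)
    loss.backward()
    print(float(loss))
```

The gradient reversal layer lets a single backward pass train the discriminators to distinguish domains while pushing the feature extractor toward domain-invariant representations, which matches the abstract's stated goal of aligning source and target features at multiple levels.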