Abstract

Normalization techniques are known to provide faster model convergence and good generalization performance, and have achieved great success in computer vision and natural language processing. Recently, several deep neural network-based click-through rate (CTR) prediction models have applied such normalization techniques to their deep network components to stabilize model training. However, we observe that applying existing normalization techniques (e.g., Batch Normalization and Layer Normalization) to the feature embeddings of these models degrades model performance. In this study, we conjecture that existing normalization techniques easily ignore the significance of each feature embedding, leading to sub-optimal performance. To support our claim, we theoretically show that existing normalization techniques tend to equalize the norms of individual feature embeddings. To overcome this limitation, we propose a theory-inspired normalization technique, called Embedding Normalization, which not only stabilizes model training but also improves the performance of CTR prediction models by preserving the significance of each feature embedding. Through extensive experiments on various real-world CTR prediction datasets, we show that our proposed normalization technique leads to faster model convergence and achieves performance better than or comparable to that of other normalization techniques. In particular, our Embedding Normalization is effective not only in deep neural network-based CTR prediction models but also in shallow CTR prediction models that do not use deep neural network components.
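
The norm-equalizing behavior described above is easy to observe directly. The minimal PyTorch sketch below (the feature count, embedding dimension, and scaling factors are illustrative assumptions, not values from the paper) applies Layer Normalization to feature embeddings with very different norms and shows that the outputs all end up with nearly identical norms, which is the sense in which per-feature significance is lost. The proposed Embedding Normalization itself is not sketched here, since the abstract does not specify its formula.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

num_features, embed_dim = 4, 8  # illustrative sizes, not from the paper

# Feature embeddings with deliberately different norms,
# mimicking features of different significance.
scales = torch.tensor([[0.1], [1.0], [5.0], [10.0]])
embeddings = torch.randn(num_features, embed_dim) * scales

print("Original norms:       ", embeddings.norm(dim=1))

# Layer Normalization standardizes each embedding to zero mean and unit
# variance, so every feature ends up with (almost) the same norm ~ sqrt(d).
layer_norm = nn.LayerNorm(embed_dim, elementwise_affine=False)
print("Norms after LayerNorm:", layer_norm(embeddings).norm(dim=1))
```

Running this prints four widely different norms before normalization and four nearly identical norms (close to sqrt(8)) afterward, illustrating why a significance-preserving alternative is desirable for feature embeddings.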
