Abstract

Stochastic vector quantization methods have been extensively studied in supervised and unsupervised learning as algorithms that are online, data-driven, interpretable, robust, and fast to train and evaluate. Being prototype-based methods, they rely on a dissimilarity measure; when the mean value is used as the representative of a cluster, it is both necessary and sufficient that this measure belong to the family of Bregman divergences. In this work, we investigate the convergence properties of stochastic vector quantization (VQ) and its supervised counterpart, Learning Vector Quantization (LVQ), with Bregman divergences. Using the theory of stochastic approximation, we study the conditions on the initialization and on the Bregman divergence generating functions under which the algorithms converge to desired configurations. These results formally support the use of Bregman divergences, such as the Kullback-Leibler divergence, in vector quantization algorithms.
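
To make the setting concrete, the following is a minimal Python sketch (not the paper's algorithm) of online vector quantization with a Bregman divergence: the winning prototype is selected by the generalized Kullback-Leibler divergence and moved toward each sample with Robbins-Monro step sizes. All function names and parameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kl_divergence(x, w, eps=1e-12):
    """Generalized Kullback-Leibler divergence d_phi(x, w) generated by the
    negative-entropy function phi(z) = sum_i z_i log z_i."""
    x = np.clip(x, eps, None)
    w = np.clip(w, eps, None)
    return np.sum(x * np.log(x / w) - x + w)

def stochastic_bregman_vq(data, n_prototypes=3, n_epochs=20, seed=0):
    """Online (stochastic) VQ sketch: each sample is assigned to the prototype
    with the smallest Bregman divergence, and that prototype is moved toward
    the sample.  Step sizes follow the usual Robbins-Monro conditions
    (sum eta_t = inf, sum eta_t^2 < inf)."""
    rng = np.random.default_rng(seed)
    prototypes = data[rng.choice(len(data), n_prototypes, replace=False)].copy()
    t = 0
    for _ in range(n_epochs):
        for x in data[rng.permutation(len(data))]:
            t += 1
            eta = 1.0 / t                    # Robbins-Monro step size
            divs = [kl_divergence(x, w) for w in prototypes]
            k = int(np.argmin(divs))         # winner: Bregman-nearest prototype
            # The mean is the Bregman-optimal cluster representative, so the
            # stochastic update simply moves the winner toward the sample.
            prototypes[k] += eta * (x - prototypes[k])
    return prototypes

# Toy usage: quantize normalized histograms (points on the probability simplex).
data = np.abs(np.random.default_rng(1).normal(size=(200, 5)))
data /= data.sum(axis=1, keepdims=True)
print(stochastic_bregman_vq(data))
```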
