Abstract

Many machine learning methods for sequential data rely on vector representations of unitary entities (e.g., words in natural language processing, or k-mers in bioinformatics). Traditionally, these representations are constructed via optimization formulations arising from co-occurrence-based models. In this work, we propose a new method for embedding these entities based on the Distance Geometry Problem: finding object positions from a subset of their pairwise distances or inner products. Using the empirical Pointwise Mutual Information as a surrogate for the inner product, we discuss two Distance Geometry-based algorithms for obtaining word vector representations. The main advantage of these algorithms is their significantly lower computational complexity compared with state-of-the-art word embedding methods, which allows them to produce word vectors much faster. Furthermore, numerical experiments indicate that our word vectors perform well on text classification tasks in natural language processing as well as regression tasks in bioinformatics.
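
To make the core idea concrete, the following is a minimal Python sketch, not the paper's two algorithms (which are not reproduced in this abstract): build an empirical PMI matrix from co-occurrence counts, treat it as a surrogate Gram matrix of inner products, and recover vector positions from its top eigenpairs, in the spirit of classical Distance Geometry / multidimensional scaling. The function names (pmi_matrix, dgp_embed), the clipping of PMI at zero, the window size, and the toy corpus are all illustrative assumptions.

    import numpy as np
    from collections import Counter

    def pmi_matrix(sentences, window=2):
        # Empirical PMI from symmetric co-occurrence counts,
        # clipped at zero (an assumption; the paper may use raw PMI).
        vocab = sorted({w for s in sentences for w in s})
        idx = {w: i for i, w in enumerate(vocab)}
        pairs, marg, total = Counter(), Counter(), 0
        for s in sentences:
            for i, w in enumerate(s):
                lo, hi = max(0, i - window), min(len(s), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        pairs[(idx[w], idx[s[j]])] += 1
                        marg[idx[w]] += 1
                        total += 1
        n = len(vocab)
        pmi = np.zeros((n, n))
        for (a, b), c in pairs.items():
            val = np.log((c / total) / ((marg[a] / total) * (marg[b] / total)))
            pmi[a, b] = max(val, 0.0)
        return pmi, vocab

    def dgp_embed(gram, dim=2):
        # Recover positions whose inner products approximate the
        # surrogate Gram matrix, via its top eigenpairs
        # (a classical-MDS-style factorization step).
        sym = (gram + gram.T) / 2.0
        vals, vecs = np.linalg.eigh(sym)
        top = np.argsort(vals)[::-1][:dim]
        return vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))

    # Toy usage on an illustrative two-sentence corpus.
    sents = [["the", "cat", "sat"], ["the", "dog", "sat"]]
    pmi, vocab = pmi_matrix(sents, window=1)
    vectors = dgp_embed(pmi, dim=2)
    for word, vec in zip(vocab, vectors):
        print(word, vec.round(3))

The eigendecomposition step is what gives such approaches their speed advantage over iterative, optimization-based embedding methods: the cost is dominated by a single (possibly truncated) factorization of the PMI matrix rather than many passes over the corpus.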
