Abstract
Knowledge graph (KG) embedding techniques represent entities and relations as low-dimensional, continuous vectors, and thus enable machine learning models to be easily adapted to KG completion and querying tasks. However, learned dense vectors are inefficient for large-scale similarity computations. Learning-to-hash learns compact binary codes from high-dimensional input data and thus offers a promising way to improve efficiency by measuring Hamming distance instead of Euclidean distance or dot-product. Unfortunately, most learning-to-hash methods cannot be directly applied to KG structure encoding. In this paper, we introduce a novel framework for encoding incomplete KGs and graph queries in Hamming space. To preserve KG structure information from embeddings to hash codes and to address the ill-posed gradient issue in optimization, we utilize a continuation method with convergence guarantees to jointly encode queries and KG entities with geometric operations. The hashed embedding of a query can be utilized to discover target answers from incomplete KGs while greatly improving efficiency. We compared our model with state-of-the-art methods on real-world KGs. Experimental results show that our framework not only significantly speeds up the searching process, but also provides good results for unanswerable queries caused by incomplete information.
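To illustrate the efficiency argument in the abstract, the following minimal sketch (not the paper's method) shows how answer retrieval reduces to Hamming-distance search once entities and queries are hashed to binary codes; the entity names, code width, and query are hypothetical.

```python
# Toy illustration: nearest-neighbor search over binary hash codes
# in Hamming space, computed with XOR and a popcount.
# All entity names, codes, and the query below are hypothetical.

def hamming_distance(a: int, b: int) -> int:
    """Hamming distance between two binary codes stored as ints."""
    return bin(a ^ b).count("1")

# Hypothetical 8-bit hash codes for KG entities.
entity_codes = {
    "Paris": 0b10110010,
    "Lyon":  0b10110110,
    "Tokyo": 0b01001001,
}

# Hashed embedding of a query, e.g. capitalOf(France).
query_code = 0b10110011

# The nearest entity in Hamming space is the candidate answer.
best = min(entity_codes, key=lambda e: hamming_distance(query_code, entity_codes[e]))
print(best)  # -> Paris (distance 1)
```

XOR plus popcount is a handful of CPU instructions per comparison, which is why Hamming-space retrieval scales better than dense dot-product scoring over large entity sets.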