Background: Knowledge representation learning aims to map the entities and relations of a knowledge graph into a low-dimensional vector space. Existing work has mainly focused on the structural information of triples, or has introduced only one additional kind of information, which limits expressiveness and reduces representation quality. Objective: This study combines entity description information and textual relation information with triple structure information, and then uses a linear mapping to transform the structure vector and the text vector into a joint representation vector. Methods: A knowledge representation learning model, DRKRL, that fuses external information for semantic enhancement is proposed; it combines entity descriptions and textual relations with the triple structure. Entity descriptions are encoded with a bidirectional long short-term memory (Bi-LSTM) network and an attention mechanism. Textual relations between entities are encoded with a convolutional neural network, and an attention mechanism then extracts the most informative features as complementary information to the triple. Results: Link prediction and triple classification experiments were conducted on the FB15K, FB15K-237, WN18, WN18RR, and NELL-995 datasets. Theoretical analysis and experimental results show that the proposed DRKRL model achieves higher accuracy and efficiency than existing models. Conclusion: Combining entity description information and textual relation information with triple structure information improves model performance and effectively enhances knowledge representation learning.
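The abstract only sketches the three components it names (a Bi-LSTM with attention over entity descriptions, a CNN with attention over textual relations, and a linear mapping that fuses structure and text vectors). The following is a minimal PyTorch sketch of such an architecture, not the authors' implementation: all module names, dimensions, and the additive fusion are assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn

class DescriptionEncoder(nn.Module):
    """Hypothetical Bi-LSTM + attention encoder over entity-description word embeddings."""
    def __init__(self, emb_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)

    def forward(self, word_embs):                    # (batch, seq_len, emb_dim)
        h, _ = self.lstm(word_embs)                  # (batch, seq_len, 2*hidden_dim)
        weights = torch.softmax(self.attn(h), dim=1) # attention over time steps
        return (weights * h).sum(dim=1)              # (batch, 2*hidden_dim)

class RelationTextEncoder(nn.Module):
    """Hypothetical CNN + attention encoder over relation-mention word embeddings."""
    def __init__(self, emb_dim, num_filters, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size, padding=kernel_size // 2)
        self.attn = nn.Linear(num_filters, 1)

    def forward(self, word_embs):                    # (batch, seq_len, emb_dim)
        h = torch.relu(self.conv(word_embs.transpose(1, 2))).transpose(1, 2)
        weights = torch.softmax(self.attn(h), dim=1) # attention over convolved positions
        return (weights * h).sum(dim=1)              # (batch, num_filters)

class JointRepresentation(nn.Module):
    """Linear mapping that combines a structure vector with a text vector (assumed additive)."""
    def __init__(self, struct_dim, text_dim, out_dim):
        super().__init__()
        self.w_s = nn.Linear(struct_dim, out_dim, bias=False)
        self.w_t = nn.Linear(text_dim, out_dim, bias=False)

    def forward(self, struct_vec, text_vec):
        return self.w_s(struct_vec) + self.w_t(text_vec)
```

Under these assumptions, the joint vector produced by JointRepresentation would be scored with a standard translational or semantic-matching objective for link prediction and triple classification; the actual fusion and scoring functions are described in the full paper.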