Entity completion for industrial knowledge graph based on zero-shot learning


Similar Papers
  • Research Article
  • Citations: 37
  • 10.1109/jproc.2023.3279374
Zero-Shot and Few-Shot Learning With Knowledge Graphs: A Comprehensive Survey
  • Jun 1, 2023
  • Proceedings of the IEEE
  • Jiaoyan Chen + 7 more

Machine learning (ML), especially deep neural networks, has achieved great success, but many models rely on large numbers of labeled samples for supervision. As sufficient labeled training data are not always available, due to, e.g., continuously emerging prediction targets and costly sample annotation in real-world applications, ML with sample shortage is now being widely investigated. Among these studies, many utilize auxiliary information, including knowledge graphs (KGs), to reduce the reliance on labeled samples. In this survey, we comprehensively review over 90 articles on KG-aware research for two major sample-shortage settings: zero-shot learning (ZSL), where some classes to be predicted have no labeled samples, and few-shot learning (FSL), where some classes to be predicted have only a small number of labeled samples. We first introduce the KGs used in ZSL and FSL and their construction methods, then systematically categorize and summarize KG-aware ZSL and FSL methods, dividing them into paradigms such as mapping-based, data-augmentation, propagation-based, and optimization-based. We next present different applications, including not only KG-augmented prediction tasks such as image classification, question answering, text classification, and knowledge extraction, but also KG completion tasks, along with typical evaluation resources for each task. We conclude by discussing challenges and open problems from different perspectives.

  • Research Article
  • Citations: 9
  • 10.1016/j.cogsys.2023.101188
Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning
  • Nov 30, 2023
  • Cognitive Systems Research
  • Fuseini Mumuni + 1 more


  • Research Article
  • Citations: 11
  • 10.1016/j.websem.2022.100757
Benchmarking knowledge-driven zero-shot learning
  • Sep 17, 2022
  • Journal of Web Semantics
  • Yuxia Geng + 7 more

External knowledge (a.k.a. side information) plays a critical role in zero-shot learning (ZSL), which aims to predict unseen classes that have never appeared in the training data. Several kinds of external knowledge, such as text and attributes, have been widely investigated, but alone they are limited by incomplete semantics. Some very recent studies thus propose to use Knowledge Graphs (KGs) due to their high expressivity and compatibility for representing many kinds of knowledge. However, the ZSL community still lacks standard benchmarks for studying and comparing different external-knowledge settings and different KG-based ZSL methods. In this paper, we propose six resources covering three tasks: zero-shot image classification (ZS-IMGC), zero-shot relation extraction (ZS-RE), and zero-shot KG completion (ZS-KGC). Each resource has a normal ZSL benchmark and a KG containing semantics ranging from text to attributes, and from relational knowledge to logical expressions. We clearly present these resources, including their construction, statistics, data formats, and usage cases w.r.t. different ZSL methods. More importantly, we conduct a comprehensive benchmarking study with classic and state-of-the-art methods for each task, including a method with KG-augmented explanation. We discuss and compare different ZSL paradigms w.r.t. different external-knowledge settings, and find that our resources have great potential for developing more advanced ZSL methods and more solutions for applying KGs to augment machine learning. All resources are available at https://github.com/China-UK-ZSL/Resources_for_KZSL.

  • Research Article
  • Citations: 8
  • 10.1016/j.jvcir.2022.103629
Semantic guided knowledge graph for large-scale zero-shot learning
  • Sep 9, 2022
  • Journal of Visual Communication and Image Representation
  • Jiwei Wei + 5 more


  • Conference Article
  • Citations: 15
  • 10.1145/3534678.3539453
Disentangled Ontology Embedding for Zero-shot Learning
  • Aug 14, 2022
  • Yuxia Geng + 8 more

Knowledge Graph (KG) and its ontology variant have been widely used for knowledge representation, and have been shown to be quite effective in augmenting Zero-shot Learning (ZSL). However, existing ZSL methods that utilize KGs all neglect the intrinsic complexity of the inter-class relationships represented in KGs. One typical feature is that a class is often related to other classes in different semantic aspects. In this paper, we focus on ontologies for augmenting ZSL, and propose to learn disentangled ontology embeddings guided by ontology properties, so as to capture and utilize more fine-grained class relationships along different aspects. We also contribute a new ZSL framework named DOZSL, which contains two new ZSL solutions based on generative models and graph propagation models, respectively, for effectively utilizing the disentangled ontology embeddings. Extensive evaluations have been conducted on five benchmarks across zero-shot image classification (ZS-IMGC) and zero-shot KG completion (ZS-KGC). DOZSL often achieves better performance than the state-of-the-art, and its components have been verified by ablation studies and case studies. Our codes and datasets are available at https://github.com/zjukg/DOZSL.

  • Research Article
  • 10.1155/2021/7480712
Attention-Based Graph Convolutional Network for Zero-Shot Learning with Pre-Training
  • Dec 7, 2021
  • Mathematical Problems in Engineering
  • Xuefei Wu + 4 more

Zero-shot learning (ZSL) is a powerful and promising learning paradigm for classifying instances that have not been seen in training. Although graph convolutional networks (GCNs) have recently shown great potential for ZSL tasks, these models cannot adjust the constant connection weights between nodes in the knowledge graph, so neighbor nodes contribute equally to classifying the central node. In this study, we apply an attention mechanism to adjust the connection weights adaptively and learn more important information for classifying unseen target nodes. First, we propose an attention graph convolutional network for zero-shot learning (AGCNZ) by directly integrating the attention mechanism and GCN. Then, to prevent the dilution of knowledge from distant nodes, we apply the dense graph propagation (DGP) model to ZSL tasks and propose an attention dense graph propagation model for zero-shot learning (ADGPZ). Finally, we propose a modified loss function with a relaxation factor to further improve the performance of the learned classifier. Experimental results under different pre-training settings verify the effectiveness of the proposed attention-based models for ZSL.
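
The core idea of the abstract above, replacing a GCN's constant neighbor weights with learned, softmax-normalized attention scores, can be sketched as follows. This is a minimal NumPy illustration, not the paper's AGCNZ implementation; all dimensions, the scoring function, and the random data are assumptions.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def attentive_gcn_layer(H, adj, W, a):
    """One attention-weighted aggregation step: instead of averaging
    neighbors with constant weights, score each edge with vector `a`
    and softmax the scores so informative neighbors contribute more."""
    Z = H @ W                                  # linear transform of node features
    out = np.zeros_like(Z)
    for i in range(Z.shape[0]):
        nbrs = np.flatnonzero(adj[i])          # neighbors (incl. self-loop)
        scores = np.array([a @ np.concatenate([Z[i], Z[j]]) for j in nbrs])
        alpha = softmax(scores)                # adaptive connection weights
        out[i] = alpha @ Z[nbrs]               # weighted neighbor aggregation
    return np.maximum(out, 0.0)                # ReLU

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 8))                    # 4 nodes, 8-dim features
adj = np.array([[1, 1, 1, 0],
                [1, 1, 0, 1],
                [1, 0, 1, 1],
                [0, 1, 1, 1]])                 # symmetric adjacency with self-loops
W = rng.normal(size=(8, 8))
a = rng.normal(size=16)                        # edge-scoring vector
H1 = attentive_gcn_layer(H, adj, W, a)
assert H1.shape == (4, 8)
```

Stacking such layers lets the attention weights, rather than fixed graph weights, decide how much each neighboring class node influences the central node's classifier.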

  • Research Article
  • Citations: 7
  • 10.1186/s13640-018-0371-x
Semantic embeddings of generic objects for zero-shot learning
  • Jan 15, 2019
  • EURASIP Journal on Image and Video Processing
  • Tristan Hascoet + 2 more

Zero-shot learning (ZSL) models use semantic representations of visual classes to transfer the knowledge learned from a set of training classes to a set of unknown test classes. In the context of generic object recognition, previous research has mainly focused on developing custom architectures, loss functions, and regularization schemes for ZSL using word embeddings as the semantic representation of visual classes. In this paper, we focus exclusively on the effect of different semantic representations on the accuracy of ZSL. We first conduct a large-scale evaluation of semantic representations learned from words, text documents, or knowledge graphs on the standard ImageNet ZSL benchmark. We show that, using appropriate semantic representations of visual classes, a basic linear regression model outperforms the vast majority of previously proposed approaches. We then analyze the classification errors of our model to provide insights into the relevance and limitations of the different semantic representations we investigate. Finally, our investigation helps explain the success of recently proposed approaches based on graph convolutional networks (GCNs), which have shown dramatic improvements over previous state-of-the-art models.
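
The linear-regression baseline mentioned above can be sketched in a few lines: fit a linear map from visual features to class semantic embeddings on seen classes, then classify unseen classes by nearest semantic embedding. This is a hedged toy version with synthetic data; the dimensions, ridge regularizer, and cosine-similarity decision rule are assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: visual features X for seen-class images and
# semantic embeddings S (e.g., word/KG embeddings) for every class.
n_seen, n_unseen, d_vis, d_sem = 5, 3, 64, 16
S = rng.normal(size=(n_seen + n_unseen, d_sem))   # class semantics
X = rng.normal(size=(200, d_vis))                 # seen-class image features
y = rng.integers(0, n_seen, size=200)             # seen-class labels

# Ridge-regularized linear map from visual to semantic space, fit on
# seen classes only: argmin_W ||X W - S[y]||^2 + lam ||W||^2.
lam = 1.0
A = X.T @ X + lam * np.eye(d_vis)
W = np.linalg.solve(A, X.T @ S[y])

def predict_unseen(x):
    """Map a test image into semantic space; pick the nearest
    unseen-class embedding by cosine similarity."""
    z = x @ W
    U = S[n_seen:]                                # unseen-class semantics
    sims = (U @ z) / (np.linalg.norm(U, axis=1) * np.linalg.norm(z) + 1e-9)
    return n_seen + int(np.argmax(sims))

pred = predict_unseen(rng.normal(size=d_vis))
assert n_seen <= pred < n_seen + n_unseen
```

The point of the paper is that with good semantic representations, even this simple regressor is competitive, so the choice of class semantics can matter more than the model.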

  • Research Article
  • Citations: 18
  • 10.3233/sw-210435
Explainable zero-shot learning via attentive graph convolutional network and knowledge graphs
  • Aug 27, 2021
  • Semantic Web
  • Yuxia Geng + 5 more

Zero-shot learning (ZSL), which aims to deal with new classes that have never appeared in the training data (i.e., unseen classes), has attracted massive research interest recently. Transfer of deep features learned from training classes (i.e., seen classes) is often used, but most current methods are black-box models without any explanations, especially textual explanations that are more acceptable not only to machine learning specialists but also to people without artificial intelligence expertise. In this paper, we focus on explainable ZSL and present a knowledge graph (KG) based framework that can explain the transferability of features in ZSL in a human-understandable manner. The framework has two modules: an attentive ZSL learner and an explanation generator. The former utilizes an Attentive Graph Convolutional Network (AGCN) to match class knowledge from WordNet with deep features learned from CNNs (i.e., it encodes inter-class relationships to predict classifiers); the features of unseen classes are transferred from seen classes to predict samples of unseen classes, with important seen classes detected. The latter generates human-understandable explanations for the transferability of features using class knowledge enriched by external KGs, including a domain-specific Attribute Graph and DBpedia. We evaluate our method on two benchmarks for animal recognition. Augmented by class knowledge from KGs, our framework generates promising explanations for the transferability of features while improving recognition accuracy.

  • Conference Article
  • Citations: 53
  • 10.1145/3442381.3450042
OntoZSL: Ontology-enhanced Zero-shot Learning
  • Apr 19, 2021
  • Yuxia Geng + 7 more

Zero-shot Learning (ZSL), which aims to predict for classes that have never appeared in the training data, has attracted intense research interest. The key to implementing ZSL is leveraging prior knowledge of classes, which builds the semantic relationships between classes and enables the transfer of learned models (e.g., features) from training classes (i.e., seen classes) to unseen classes. However, the priors adopted by existing methods are relatively limited, with incomplete semantics. In this paper, we explore richer and more competitive prior knowledge to model the inter-class relationships for ZSL via ontology-based knowledge representation and semantic embedding. Meanwhile, to address the data imbalance between seen and unseen classes, we develop a generative ZSL framework with Generative Adversarial Networks (GANs). Our main findings include: (i) an ontology-enhanced ZSL framework that can be applied to different domains, such as image classification (IMGC) and knowledge graph completion (KGC); (ii) a comprehensive evaluation with multiple zero-shot datasets from different domains, where our method often achieves better performance than state-of-the-art models. In particular, on four representative ZSL baselines for IMGC, the ontology-based class semantics outperform previous priors, e.g., the word embeddings of classes, by an average of 12.4 accuracy points in standard ZSL across two example datasets (see Figure 4).

  • Conference Article
  • Citations: 1
  • 10.1109/ictai56018.2022.00064
A Zero-shot Learning Method with a Multi-Modal Knowledge Graph
  • Oct 1, 2022
  • Yuhong Zhang + 3 more

Zero-shot learning aims to recognize unseen classes using only seen-class samples as the training set. It is challenging because the feature representations of unseen-class samples are unavailable. Existing methods transfer the mapping from seen classes to unseen classes using class correlation as a bridge, in which semantic representations are used to discriminate the classes. However, the unavailability of visual representations for unseen classes and the insufficient discrimination of semantic representations make zero-shot learning challenging. Therefore, visual representations are learned as complements to semantic representations to construct a multi-modal knowledge graph (KG), and a zero-shot learning method based on a multi-modal KG is proposed in this paper. Specifically, a semantic KG is introduced to capture the correlation of classes, and with this correlation, the visual feature representations of all classes are learned. Then, the discriminative visual representations and the semantic representations are used together to construct a multi-modal KG. With the multi-modal KG, the classifier for seen classes is transferred to unseen classes. Extensive experimental results show the effectiveness of our method.

  • Research Article
  • Citations: 57
  • 10.1609/aaai.v34i05.6392
Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs
  • Apr 3, 2020
  • Proceedings of the AAAI Conference on Artificial Intelligence
  • Pengda Qin + 5 more

Large-scale knowledge graphs (KGs) have become increasingly important in current information systems. To expand the coverage of KGs, previous studies on knowledge graph completion need to collect adequate training instances for newly added relations. In this paper, we consider a novel formulation, zero-shot learning, to free this cumbersome curation. For newly added relations, we attempt to learn their semantic features from their text descriptions and hence recognize facts of unseen relations with no examples being seen. For this purpose, we leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains: the generator learns to generate reasonable relation embeddings merely from noisy text descriptions. Under this setting, zero-shot learning is naturally converted to a traditional supervised classification task. Empirically, our method is model-agnostic, could potentially be applied to any version of KG embeddings, and consistently yields performance improvements on the NELL and Wiki datasets.
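
The mechanism described above, a generator mapping a (noisy) text-description embedding to a relation embedding that is then used to score candidate facts, can be illustrated with a toy forward pass. This is a hedged sketch only: the two-layer generator, the TransE-style distance scorer, and all dimensions are assumptions, not the paper's trained architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy generator: a small MLP maps a noisy text-description embedding to a
# relation embedding, so unseen relations need no training triples.
d_txt, d_ent, d_rel = 24, 16, 16
W1 = rng.normal(size=(d_txt, 32))
W2 = rng.normal(size=(32, d_rel))

def generate_relation(text_emb, noise):
    h = np.tanh((text_emb + noise) @ W1)   # noisy text -> hidden layer
    return h @ W2                          # hidden -> relation embedding

def score(head, rel, tail):
    """TransE-style plausibility: smaller ||h + r - t|| means a more
    plausible fact, so we negate the distance."""
    return -np.linalg.norm(head + rel - tail)

# Generate an embedding for an unseen relation and score a candidate fact.
rel = generate_relation(rng.normal(size=d_txt), 0.1 * rng.normal(size=d_txt))
head, tail = rng.normal(size=d_ent), rng.normal(size=d_ent)
s = score(head, rel, tail)
assert rel.shape == (d_rel,) and s <= 0.0
```

In the actual GAN setting, the generator is trained adversarially so that generated relation embeddings are indistinguishable from embeddings learned on seen relations; the sketch only shows the inference-time flow.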

  • Conference Article
  • Citations: 11
  • 10.1109/iccvw54120.2021.00104
Zero-Shot Learning via Contrastive Learning on Dual Knowledge Graphs
  • Oct 1, 2021
  • Jin Wang + 1 more

Graph Convolutional Networks (GCNs), which can integrate explicit and implicit knowledge together, have proven effective for zero-shot learning problems. Previous GCN-based methods generally leverage a single category-relationship knowledge graph for zero-shot learning. However, in practical scenarios, multiple types of relationships among categories are usually available and can be represented as multiple knowledge graphs. To this end, we propose a novel dual knowledge graph contrastive learning framework for zero-shot learning. The proposed model fully exploits multiple relationships among different categories by employing graph convolutional representation and contrastive learning techniques. The main benefit of the proposed contrastive learning module is that it effectively encourages consistency of the category representations from different knowledge graphs while enhancing the discriminability of the generated category classifiers. We perform extensive experiments on several benchmark datasets, and the results show the superior performance of our approach.

  • Conference Article
  • Citations: 15
  • 10.1145/3338533.3366552
Residual Graph Convolutional Networks for Zero-Shot Learning
  • Dec 15, 2019
  • Jiwei Wei + 5 more

Most existing Zero-Shot Learning (ZSL) approaches adopt the semantic space as a bridge to classify unseen categories. However, it is difficult to transfer knowledge from seen to unseen categories through the semantic space, since the correlations among categories are uncertain and ambiguous there. In this paper, we formulate zero-shot learning as a classifier-weight regression problem. Specifically, we propose a novel Residual Graph Convolutional Network (ResGCN) which takes word embeddings and a knowledge graph as inputs and outputs a visual classifier for each category. ResGCN can effectively alleviate the problems of over-smoothing and over-fitting. At test time, an unseen image can be classified by ranking the inner products of its visual feature with the predicted visual classifiers. Moreover, we provide a new method to build a better knowledge graph. Our approach not only further enhances the correlations among categories but also makes it easy to add new categories to the knowledge graph. Experiments conducted on the large-scale ImageNet 2011 21K dataset demonstrate that our method significantly outperforms existing state-of-the-art approaches.
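
The test-time procedure described above, ranking inner products between an image feature and the per-category classifier weights that a graph network has regressed, is simple enough to sketch directly. The data here is synthetic and the classifier matrix is random; only the ranking step reflects the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Suppose a graph network has already regressed one classifier weight
# vector per category (rows of C). Test-time classification is then just
# ranking inner products between the image feature and those weights.
n_classes, d = 10, 32
C = rng.normal(size=(n_classes, d))   # predicted visual classifiers
x = C[7] + 0.01 * rng.normal(size=d)  # image feature near class 7's classifier

scores = C @ x                        # one inner product per category
ranking = np.argsort(-scores)         # best-scoring category first
assert ranking[0] == 7                # the aligned class wins the ranking
```

Because classification reduces to a matrix-vector product, adding a new category only requires regressing one more weight vector from the (extended) knowledge graph, which is what makes the approach easy to grow.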

  • Research Article
  • Citations: 2
  • 10.7717/peerj-cs.1260
Semantic-visual shared knowledge graph for zero-shot learning.
  • Mar 22, 2023
  • PeerJ Computer Science
  • Beibei Yu + 3 more

Almost all existing zero-shot learning methods work only on benchmark datasets (e.g., CUB, SUN, AwA, FLO, and aPY) which already provide pre-defined attributes for all classes. These methods are thus hard to apply to real-world datasets (like ImageNet), since no such pre-defined attributes exist in that data environment. The latest works have explored using semantic-rich knowledge graphs (such as WordNet) to substitute for pre-defined attributes. However, these methods encounter a serious "domain shift" problem, because such a knowledge graph cannot provide detailed enough semantics to describe fine-grained information. To this end, we propose a semantic-visual shared knowledge graph (SVKG) to enhance the detailed information for zero-shot learning. SVKG represents high-level information using semantic embeddings but describes fine-grained information using visual features. These visual features can be extracted directly from real-world images to substitute for pre-defined attributes. A multi-modal graph convolutional network is also proposed to transform SVKG into graph representations that can be used for downstream zero-shot learning tasks. Experimental results on real-world datasets without pre-defined attributes demonstrate the effectiveness of our method. Our method obtains +2.8%, +0.5%, and +0.2% increases compared with the state-of-the-art in the 2-hops, 3-hops, and all divisions, respectively.

  • Conference Article
  • Citations: 1
  • 10.1145/3444685.3446283
Graph-based variational auto-encoder for generalized zero-shot learning
  • Mar 7, 2021
  • Jiwei Wei + 5 more

Zero-shot learning has been a highlighted research topic in both the vision and language areas. Recently, generative methods have emerged as a new trend in zero-shot learning, synthesizing samples of unseen categories via generative models. However, the lack of fine-grained information in the synthesized samples makes it difficult to improve classification accuracy. It is also time-consuming and inefficient to synthesize samples and use them to train classifiers. To address these issues, we propose a novel Graph-based Variational Auto-Encoder for zero-shot learning. Specifically, we adopt a knowledge graph to model the explicit inter-class relationships, and design a full graph-convolutional auto-encoder framework to generate the classifier from the distribution of class-level semantic features on individual nodes. The encoder learns latent representations of individual nodes, and the decoder generates classifiers from these latent representations. In contrast to synthesizing samples, our proposed method directly generates classifiers from the distribution of class-level semantic features for both seen and unseen categories, which is more straightforward, accurate, and computationally efficient. We conduct extensive experiments and evaluate our method on the widely used large-scale ImageNet-21K dataset. Experimental results validate the efficacy of the proposed approach.
