  • Open Access
  • Research Article
  • 10.2200/s01170ed1v01y202202aim052
Applying Reinforcement Learning on Real-World Data with Practical Examples in Python
  • May 20, 2022
  • Synthesis Lectures on Artificial Intelligence and Machine Learning
  • Philip Osborne + 2 more

  • Open Access
  • Research Article
  • Citations: 5
  • 10.2200/s01152ed1v01y202111aim051
Positive Unlabeled Learning
  • Apr 19, 2022
  • Synthesis Lectures on Artificial Intelligence and Machine Learning
  • Kristen Jaskie + 1 more

  • Open Access
  • Research Article
  • Citations: 5
  • 10.2200/s01152ed1v01y202111aim050
Explainable Human-AI Interaction: A Planning Perspective
  • Jan 24, 2022
  • Synthesis Lectures on Artificial Intelligence and Machine Learning
  • Sarath Sreedharan + 2 more

  • Open Access
  • Research Article
  • Citations: 6
  • 10.2200/s01091ed1v01y202104aim049
Transfer Learning for Multiagent Reinforcement Learning Systems
  • May 27, 2021
  • Synthesis Lectures on Artificial Intelligence and Machine Learning
  • Felipe Leno Da Silva + 1 more

Reinforcement learning methods have been applied successfully to build autonomous agents that solve many sequential decision-making problems. However, agents need a long time to learn a suitable policy, especially when multiple autonomous agents share the environment. This research proposes a Transfer Learning (TL) framework to accelerate learning by exploiting two knowledge sources: (i) previously learned tasks and (ii) advice from a more experienced agent. Defining such a framework requires answering several challenging research questions, including: how to abstract and represent knowledge so as to allow generalization and later reuse; how and when to transfer and receive knowledge efficiently; and how to evaluate transfer quality in a multiagent scenario.
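The advising source described above is often realized as a teacher-student loop in which a budget limits how often the experienced agent may intervene. The sketch below is a minimal illustration of that idea; the class name, the `uncertain` signal, and the Q-learning parameters are all assumptions for the example, not the framework from the lecture:

```python
import random

class AdvisedStudent:
    """Q-learning student that may receive action advice from a teacher,
    limited by an advice budget (illustrative teacher-student setup)."""

    def __init__(self, actions, teacher_policy, budget, epsilon=0.1):
        self.q = {}                           # Q-table: (state, action) -> value
        self.actions = actions
        self.teacher_policy = teacher_policy  # callable: state -> advised action
        self.budget = budget                  # remaining advice uses
        self.epsilon = epsilon

    def act(self, state, uncertain):
        # Ask the teacher only while budget remains and the student is unsure.
        if self.budget > 0 and uncertain:
            self.budget -= 1
            return self.teacher_policy(state)
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, s, a, r, s2, alpha=0.5, gamma=0.9):
        # Standard Q-learning update; the transferred knowledge only biases
        # action selection, not the update rule itself.
        best_next = max(self.q.get((s2, a2), 0.0) for a2 in self.actions)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + alpha * (r + gamma * best_next - old)
```

The budget forces the student to become self-sufficient: once advice runs out, action selection falls back to its own learned values.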

  • Open Access
  • Research Article
  • Citations: 2
  • 10.2200/s01063ed1v01y202012aim048
Network Embedding: Theories, Methods, and Applications
  • Mar 22, 2021
  • Synthesis Lectures on Artificial Intelligence and Machine Learning
  • Cheng Yang + 4 more

Many machine learning algorithms require real-valued feature vectors of data instances as inputs. By projecting data into vector spaces, representation learning techniques have achieved promising performance in many areas such as computer vision and natural language processing. There is also a need to learn representations for discrete relational data, namely networks or graphs. Network Embedding (NE) aims to learn a vector representation for each node or vertex in a network that encodes the network's topological structure. Owing to its convincing performance and efficiency, NE has been widely applied in network applications such as node classification and link prediction. This book provides a comprehensive introduction to the basic concepts, models, and applications of network representation learning (NRL). It starts with an overview of the background and rise of network embedding. It then traces the development of NE techniques, presenting several representative methods on general graphs as well as a unified NE framework based on matrix factorization. Afterward, it presents variants of NE that incorporate additional information (NE for graphs with node attributes, contents, or labels) and variants tailored to different network characteristics (NE for community-structured, large-scale, or heterogeneous graphs). Further, the book covers applications of NE such as recommendation and information diffusion prediction. Finally, it summarizes the methods and applications and looks ahead to future directions.
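The matrix-factorization view of network embedding mentioned above can be illustrated with a toy sketch: factor the adjacency matrix with a truncated SVD and take the scaled left singular vectors as node embeddings, so that structurally similar nodes land near each other. The example graph, dimensionality, and scaling choice are illustrative assumptions, not a method from the book:

```python
import numpy as np

def embed_nodes(adj, dim):
    """Toy matrix-factorization embedding: truncated SVD of the adjacency
    matrix, using U_k * sqrt(S_k) as dim-dimensional node vectors."""
    u, s, _ = np.linalg.svd(adj, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])

# Two triangles {0,1,2} and {3,4,5} joined by the single bridge edge 2-3.
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0

emb = embed_nodes(adj, dim=2)
# Structurally equivalent nodes (0 and 1) receive nearly identical vectors.
```

Even this crude factorization separates the two communities, which is the intuition behind the unified matrix-factorization framing of NE methods.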

  • Open Access
  • Research Article
  • Citations: 6
  • 10.2200/s01062ed1v01y202012aim047
Introduction to Symbolic Plan and Goal Recognition
  • Jan 25, 2021
  • Synthesis Lectures on Artificial Intelligence and Machine Learning
  • Reuth Mirsky + 2 more

Plan recognition, activity recognition, and goal recognition all involve making inferences about other actors based on observations of their interactions with the environment and other age...

  • Research Article
  • Citations: 312
  • 10.2200/s01045ed1v01y202009aim046
Graph Representation Learning
  • Sep 15, 2020
  • Synthesis Lectures on Artificial Intelligence and Machine Learning
  • William L Hamilton

Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial for creating systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalizations of convolutional neural networks to graph-structured data, and neural message-passing approaches inspired by belief propagation. These advances in graph representation learning have led to new state-of-the-art results in numerous domains, including chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis. This book provides a synthesis and overview of graph representation learning. It begins with a discussion of the goals of graph representation learning as well as key methodological foundations in graph theory and network analysis. Following this, the book introduces and reviews methods for learning node embeddings, including random-walk-based methods and applications to knowledge graphs. It then provides a technical synthesis and introduction to the highly successful graph neural network (GNN) formalism, which has become a dominant and fast-growing paradigm for deep learning with graph data. The book concludes with a synthesis of recent advancements in deep generative models for graphs—a nascent but quickly growing subset of graph representation learning.
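The neural message-passing idea behind the GNN formalism described above can be sketched in a few lines: each node mean-pools its neighbours' features and combines them with its own through learned weight matrices and a nonlinearity. The mean aggregation, ReLU, and weight shapes here are illustrative choices, not the book's specific formulation:

```python
import numpy as np

def gnn_layer(adj, h, w_self, w_neigh):
    """One message-passing layer: aggregate neighbour features by mean,
    mix with the node's own features, apply ReLU."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                        # avoid division by zero
    messages = (adj @ h) / deg                 # mean over neighbour features
    return np.maximum(0.0, h @ w_self + messages @ w_neigh)

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)       # 3-node path centred on node 0
h = rng.normal(size=(3, 4))                    # initial node features
w_self = rng.normal(size=(4, 8))               # hypothetical learned weights
w_neigh = rng.normal(size=(4, 8))
h1 = gnn_layer(adj, h, w_self, w_neigh)        # updated features, shape (3, 8)
```

Stacking such layers lets information propagate over multi-hop neighbourhoods, which is the core mechanism the book's GNN chapters develop rigorously.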

  • Research Article
  • Citations: 36
  • 10.2200/s00980ed1v01y202001aim045
Introduction to Graph Neural Networks
  • Mar 19, 2020
  • Synthesis Lectures on Artificial Intelligence and Machine Learning
  • Zhiyuan Liu + 1 more

  • Research Article
  • Citations: 4
  • 10.2200/s00966ed1v01y201911aim044
Introduction to Logic Programming
  • Feb 10, 2020
  • Synthesis Lectures on Artificial Intelligence and Machine Learning
  • Michael Genesereth + 1 more

Logic Programming is a style of programming in which programs take the form of sets of sentences in the language of Symbolic Logic. Over the years, there has been growing interest in Logic Programming due to applications in deductive databases, automated worksheets, enterprise management (business rules), computational law, and general game playing. This book introduces Logic Programming theory, current technology, and popular applications. In this volume, we take an innovative, model-theoretic approach to logic programming. We begin with the fundamental notion of datasets, i.e., sets of ground atoms. Given this fundamental notion, we introduce views, i.e., virtual relations, and we define classical logic programs as sets of view definitions, written using traditional Prolog-like notation but with semantics given in terms of datasets rather than implementation. We then introduce actions, i.e., additions and deletions of ground atoms, and we define dynamic logic programs as sets of action definitions. In addition to the printed book, there is an online version of the text with an interpreter and a compiler for the language used in the text, and an integrated development environment for developing and deploying practical logic programs.

"This is a book for the 21st century: presenting an elegant and innovative perspective on logic programming. Unlike other texts, it takes datasets as a fundamental notion, thereby bridging the gap between programming languages and knowledge representation languages; and it treats updates on an equal footing with datasets, leading to a sound and practical treatment of action and change." – Bob Kowalski, Professor Emeritus, Imperial College London

"In a world where Deep Learning and Python are the talk of the day, this book is a remarkable development. It introduces the reader to the fundamentals of traditional Logic Programming and makes clear the benefits of using the technology to create runnable specifications for complex systems." – Son Cao Tran, Professor in Computer Science, New Mexico State University

"An excellent introduction to the fundamentals of Logic Programming. The book is well written and well structured. Concepts are explained clearly, and the gradually increasing complexity of exercises makes it so that one can understand easy notions quickly before moving on to more difficult ideas." – George Younger, student, Stanford University
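The abstract's model-theoretic notions (datasets as sets of ground atoms, views as derived virtual relations, and actions as additions and deletions of atoms) can be mimicked in a few lines of Python. The relation names and helper functions below are purely illustrative, not the book's own language:

```python
# A dataset is a set of ground atoms, here encoded as tuples.
dataset = {("parent", "ann", "bob"), ("parent", "bob", "cal")}

def grandparent(data):
    """View definition: grandparent(x, z) <= parent(x, y) & parent(y, z).
    The view is a virtual relation computed from the dataset, not stored."""
    return {("grandparent", x, z)
            for (p1, x, y1) in data if p1 == "parent"
            for (p2, y2, z) in data if p2 == "parent" and y1 == y2}

def apply_action(data, additions=(), deletions=()):
    """Action semantics: a new dataset obtained by deleting and adding atoms."""
    return (data - set(deletions)) | set(additions)

views = grandparent(dataset)   # {("grandparent", "ann", "cal")}
```

Treating the dataset itself as the fundamental object, with views computed on demand and actions producing new datasets, mirrors the separation the book draws between semantics and implementation.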

  • Research Article
  • Citations: 280
  • 10.2200/s00960ed2v01y201910aim043
Federated Learning
  • Dec 19, 2019
  • Synthesis Lectures on Artificial Intelligence and Machine Learning
  • Qiang Yang + 5 more