Abstract

Information field theory (IFT), the information theory for fields, is a mathematical framework for signal reconstruction and non-parametric inverse problems. Artificial intelligence (AI) and machine learning (ML) aim at generating intelligent systems, including systems for perception, cognition, and learning. This overlaps with IFT, which is designed to address perception, reasoning, and inference tasks. Here, the relation between concepts and tools in IFT and those in AI and ML research is discussed. In the context of IFT, fields denote physical quantities that change continuously as a function of space (and time), and information theory refers to Bayesian probabilistic logic equipped with the associated entropic information measures. Reconstructing a signal with IFT is a computational problem similar to training a generative neural network (GNN) in ML. In this paper, the process of inference in IFT is reformulated in terms of GNN training. In contrast to classical neural networks, IFT-based GNNs can operate without pre-training, thanks to the expert knowledge incorporated into their architecture. Furthermore, the cross-fertilization of variational inference methods used in IFT and ML is discussed. These discussions suggest that IFT is well suited to address many problems in AI and ML research and application.

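To make the abstract's central analogy concrete, the following minimal sketch (illustrative spectrum, noise level, and step size, not taken from the paper) builds a field from a standardized latent vector ζ with a unit-Gaussian prior through a hand-built correlation structure, and then recovers the maximum-a-posteriori latent by gradient descent — structurally the same computation as optimizing the latent input of a generative network.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128  # pixels of a 1-D field

# Hand-built generative map: a falling power spectrum encodes the assumed
# smoothness of the field -- expert knowledge that replaces pre-training.
k = np.fft.rfftfreq(n, d=1.0 / n)
amplitude = 1.0 / (1.0 + k) ** 2  # illustrative spectrum, not from the paper

def generate(zeta):
    """Map a standardized latent vector zeta ~ N(0, 1) to a correlated field."""
    return np.fft.irfft(amplitude * np.fft.rfft(zeta), n)

# Synthetic data: a noisy observation of a field drawn from the prior.
truth = generate(rng.standard_normal(n))
noise_std = 0.1
data = truth + noise_std * rng.standard_normal(n)

def neg_log_posterior(zeta):
    """Gaussian likelihood plus unit-Gaussian prior on zeta -- the 'loss'."""
    residual = data - generate(zeta)
    return 0.5 * residual @ residual / noise_std**2 + 0.5 * zeta @ zeta

def gradient(zeta):
    """Analytic gradient; the generative map is a symmetric circular
    convolution here, so it equals its own adjoint."""
    residual = data - generate(zeta)
    return -generate(residual) / noise_std**2 + zeta

# MAP estimate by gradient descent on zeta -- structurally the same
# computation as training the input of a generative network.
zeta = np.zeros(n)
for _ in range(500):
    zeta -= 0.01 * gradient(zeta)

print("final loss:", neg_log_posterior(zeta))
print("rms error :", np.sqrt(np.mean((generate(zeta) - truth) ** 2)))
```

Here the "architecture" of the generative model, the assumed power spectrum, plays the role that learned weights play in a classical GNN, which is why no training data beyond the measurement itself is needed.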
Highlights

  • To overcome a limitation of ADVI that restricts its use in information field theory (IFT) contexts, with their large number of degrees of freedom (DoF), the Metric Gaussian Variational Inference (MGVI) [30] algorithm approximates the posterior uncertainty of ζ with the help of the Fisher information metric (see the sketch after this list)

  • This paper argues that IFT techniques can well be regarded as machine learning (ML) and artificial intelligence (AI) methods by showing their interrelation with generative neural networks (GNNs), normalizing flows, and variational inference (VI) techniques

  • The generative models built and used in IFT are GNNs that can interpret data without initial training, thanks to the domain knowledge encoded in their architecture [29]

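The following dense-algebra sketch illustrates the MGVI idea from the first highlight: the posterior over the standardized latent ζ is approximated by a Gaussian whose covariance is the inverse Fisher information metric at the current mean, and the mean is moved along a sample-averaged natural gradient. The exponential response, the damping factor, and all numbers are hypothetical choices for this demo; actual MGVI implementations (e.g., in NIFTy) use implicit operators and conjugate gradients instead of explicit matrix inverses.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 5  # data points, latent DoF (kept tiny so dense algebra suffices)

# Hypothetical nonlinear measurement: data = exp(A @ zeta) + Gaussian noise.
A = rng.standard_normal((m, n)) / np.sqrt(n)
noise_std = 0.05
N_inv = np.eye(m) / noise_std**2  # inverse noise covariance

def response(zeta):
    return np.exp(A @ zeta)

def jacobian(zeta):
    return response(zeta)[:, None] * A  # d exp(A zeta) / d zeta

truth = rng.standard_normal(n)
data = response(truth) + noise_std * rng.standard_normal(m)

def fisher_metric(zeta):
    """Fisher metric in standardized coordinates: J^T N^{-1} J + 1.
    Its inverse serves as the approximate posterior covariance."""
    J = jacobian(zeta)
    return J.T @ N_inv @ J + np.eye(n)

def grad_energy(zeta):
    """Gradient of the negative log posterior (the information 'energy')."""
    r = data - response(zeta)
    return -jacobian(zeta).T @ N_inv @ r + zeta

zeta_bar = np.zeros(n)  # mean of the Gaussian approximation
for _ in range(30):
    # 1) Gaussian with Fisher-metric covariance around the current mean.
    M = fisher_metric(zeta_bar)
    L = np.linalg.cholesky(np.linalg.inv(M))
    samples = [L @ rng.standard_normal(n) for _ in range(8)]
    # 2) Crude damped update of the mean along the sample-averaged
    #    natural gradient (a Newton-like step preconditioned by M).
    g = np.mean([grad_energy(zeta_bar + s) for s in samples], axis=0)
    zeta_bar -= 0.5 * np.linalg.solve(M, g)

cov = np.linalg.inv(fisher_metric(zeta_bar))
print("truth         :", np.round(truth, 2))
print("posterior mean:", np.round(zeta_bar, 2))
print("posterior std :", np.round(np.sqrt(np.diag(cov)), 3))
```

The point of the construction is that only metric-vector products and samples are ever needed, so the quadratic-in-DoF cost of storing a full covariance, which hampers plain ADVI in IFT settings, never arises.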

Summary

Motivation

This information might either be precise, like ∇ · B = 0 in electrodynamics, or more phenomenological, in the sense that a field shaped by a certain process can often be characterized by its n-point correlation functions. Knowledge of such correlations can be sufficient to regularize the otherwise ill-posed problem of inferring a field from finite and noisy data, such that meaningful statements about the field can be made. The relation of IFT to methods and concepts used in artificial intelligence (AI) and machine learning (ML) research is outlined, in particular to generative neural networks (GNNs) and the usage of variational inference.
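To make the regularizing role of known correlations concrete, here is a minimal sketch of the classical Wiener filter: with a Gaussian prior of known two-point correlation S, a linear response R, and Gaussian noise covariance N, the posterior mean is m = S R^T (R S R^T + N)^{-1} d, which yields a well-posed reconstruction even when there are fewer data points than field values. The spectrum, mask, and noise level below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64

# Known two-point correlation: a stationary covariance S built from an
# assumed power spectrum P(k) = (1 + |k|)^{-4} (illustrative numbers).
k = np.fft.fftfreq(n, d=1.0 / n)
power = 1.0 / (1.0 + np.abs(k)) ** 4
F = np.fft.fft(np.eye(n)) / np.sqrt(n)        # unitary DFT matrix
S = (F.conj().T @ np.diag(power) @ F).real    # dense, for clarity only

# Draw a true field from N(0, S) and observe every second pixel, noisily.
truth = np.linalg.cholesky(S + 1e-10 * np.eye(n)) @ rng.standard_normal(n)
observed = np.arange(0, n, 2)
R = np.eye(n)[observed]                       # masked linear response
noise_var = 0.1**2
N = noise_var * np.eye(len(observed))
data = R @ truth + np.sqrt(noise_var) * rng.standard_normal(len(observed))

# Wiener filter: posterior mean m = S R^T (R S R^T + N)^{-1} d.
# The prior correlation S is what turns this underdetermined problem
# (64 unknowns, 32 data points) into a well-posed one.
mean = S @ R.T @ np.linalg.solve(R @ S @ R.T + N, data)

print("rms reconstruction error:", np.sqrt(np.mean((mean - truth) ** 2)))
```

Even the unobserved pixels receive meaningful estimates, because the correlation structure propagates information from the measured locations to their neighbors.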

Basics
Prior Standardization
Power Spectra
Amplitude Model
Dynamical Systems
Generative Model
Neural Networks
Comparison with IFT Models
Basic Idea
ADVI and Mean Field Approximation
MGVI and Fisher Information Metric
Exact Uncertainty Covariance
Geometric Variational Inference
Conclusions and Outlook