Abstract

In many applications, intelligent agents need to identify any structure or apparent randomness in an environment and respond appropriately. We use relative entropy to separate and quantify both linear and nonlinear redundancy in a sequence, and we introduce the new quantities of total mutual information gain and incremental mutual information gain. We illustrate how these new quantities can be used to analyze and characterize structure and apparent randomness in purely autoregressive sequences and in speech signals with long- and short-term linear redundancies. Mutual information gain is shown to be an important new tool for capturing and quantifying learning for sequence modeling and analysis.
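
As a rough illustration of these quantities (a sketch only, not the paper's code), the example below uses the standard fact that for a stationary Gaussian process the mutual information between a sample and its m most recent past samples is 0.5 ln(sigma_0^2 / sigma_m^2), where sigma_m^2 is the order-m linear prediction error variance; the per-order increment 0.5 ln(sigma_{m-1}^2 / sigma_m^2) then plays the role of an incremental mutual information gain. The test signal, chosen orders, and the function name prediction_error_variances are illustrative assumptions.

```python
# Sketch (not the paper's code): mutual information gain for a Gaussian AR
# process, computed from linear prediction error variances obtained with the
# Levinson-Durbin recursion.
import numpy as np

def prediction_error_variances(x, max_order):
    """Levinson-Durbin recursion: error variances sigma_m^2 for m = 0..max_order."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocorrelation r[0..max_order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_order + 1)])
    sigma2 = [r[0]]
    a = np.zeros(max_order + 1)
    for m in range(1, max_order + 1):
        k = (r[m] - np.dot(a[1:m], r[1:m][::-1])) / sigma2[-1]
        a_prev = a[1:m].copy()
        a[m] = k
        a[1:m] = a_prev - k * a_prev[::-1]
        sigma2.append(sigma2[-1] * (1.0 - k * k))
    return np.array(sigma2)

# Hypothetical AR(2) test signal: x[t] = 1.5 x[t-1] - 0.7 x[t-2] + e[t]
rng = np.random.default_rng(0)
e = rng.standard_normal(50_000)
x = np.zeros_like(e)
for t in range(2, len(e)):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + e[t]

sigma2 = prediction_error_variances(x, 5)
total_gain = 0.5 * np.log(sigma2[0] / sigma2)          # I_m = 0.5 ln(sigma_0^2 / sigma_m^2)
incremental = 0.5 * np.log(sigma2[:-1] / sigma2[1:])   # gain going from order m-1 to m
print("total MI gain by order 0..5:", np.round(total_gain, 3))
print("incremental MI gain 1..5:   ", np.round(incremental, 3))
```

For this AR(2) example the incremental gain should drop to near zero beyond order 2, which is the kind of structure signature the new quantities are intended to expose.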

Highlights

  • Many learning applications require agents to respond to their current environment for analysis or control

  • Our focus in this paper is on the agent learning component: upon taking observations of the environment, we develop an understanding of its structure, formulate models of this structure, and study any remaining apparent randomness or unpredictability

  • Plotting the spectra as the predictor order increases from 0 to 10 makes this evolution clearer: the substantial jump in incremental mutual information gain in going from order 0 to order 1 captures the magnitude at low frequencies and the rough location of the spectral peak, but not its bandwidth (a sketch of this spectral evolution follows these highlights)
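
To make this last highlight concrete, here is a minimal sketch with a hypothetical narrowband AR(2) test signal and assumed helper names yule_walker and ar_spectrum (not the paper's data or code): predictors of increasing order are fit via the Yule-Walker equations and the corresponding AR model spectra are evaluated.

```python
# Sketch: AR model spectra as the predictor order grows. The order-m spectrum is
#   S_m(f) = sigma_m^2 / |1 - sum_k a_k e^{-j 2 pi f k}|^2 .
import numpy as np

def yule_walker(x, order):
    """AR coefficients a_1..a_order and prediction error variance via Yule-Walker."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - np.dot(a, r[1:])
    return a, sigma2

def ar_spectrum(a, sigma2, n_freq=256):
    """AR model power spectrum on a grid of normalized frequencies in [0, 0.5)."""
    f = np.arange(n_freq) / (2 * n_freq)
    k = np.arange(1, len(a) + 1)
    denom = np.abs(1.0 - np.exp(-2j * np.pi * np.outer(f, k)) @ a) ** 2
    return f, sigma2 / np.maximum(denom, 1e-12)

# Hypothetical narrowband AR(2) signal; compare model spectra for orders 1, 2, 10.
rng = np.random.default_rng(1)
e = rng.standard_normal(20_000)
x = np.zeros_like(e)
for t in range(2, len(e)):
    x[t] = 1.6 * x[t - 1] - 0.9 * x[t - 2] + e[t]

for order in (1, 2, 10):
    a, sigma2 = yule_walker(x, order)
    f, S = ar_spectrum(a, sigma2)
    print(f"order {order}: peak near f={f[np.argmax(S)]:.3f}, error variance={sigma2:.3f}")
```

In a plot of these spectra, the order-1 model should already match the overall level at low frequencies and roughly locate the peak, while the peak's bandwidth only emerges at higher orders, mirroring the highlight above.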

Introduction

Many learning applications require agents to respond to their current environment for analysis or control. Analyses of learning aimed at identifying structure or changes in data sequences have often focused on the classical Shannon entropy, its convergence to the entropy rate, and the relative entropy between subsequences, resulting in new quantities related to Shannon information theory that capture ideas relevant to these learning problems. Among these quantities are entropy gain, information gain, redundancy, predictability, and excess entropy [1,2]. The expressions for relative entropy in Equations (14)–(16), though straightforward, allow deeper insights into the existing structure and apparent randomness in sequences; examples of what these expressions reveal are provided in later sections.
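
For the symbolic-sequence quantities listed above, a minimal plug-in sketch of the entropy gain h(m) = H(m) - H(m-1) and the corresponding redundancy follows. Assumptions not taken from the paper: empirical block probabilities, natural logarithms, and a simple periodic binary example sequence.

```python
# Sketch: plug-in block-entropy estimates for a symbolic sequence, illustrating
# entropy gain and redundancy for a sequence with obvious structure.
import math
from collections import Counter

def block_entropy(seq, m):
    """Plug-in estimate of the order-m block entropy H(m) in nats."""
    if m == 0:
        return 0.0
    blocks = [tuple(seq[i:i + m]) for i in range(len(seq) - m + 1)]
    counts = Counter(blocks)
    n = len(blocks)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def entropy_gain(seq, m):
    """h(m) = H(m) - H(m-1): extra entropy of one more symbol given m-1 past symbols."""
    return block_entropy(seq, m) - block_entropy(seq, m - 1)

# Example: a period-2 sequence has essentially zero entropy gain beyond order 1,
# so nearly all of the single-symbol entropy is redundancy.
seq = [0, 1] * 500
for m in range(1, 4):
    h_m = entropy_gain(seq, m)
    redundancy = math.log(2) - h_m   # alphabet size 2
    print(f"m={m}: entropy gain={h_m:.4f} nats, redundancy={redundancy:.4f} nats")
```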

Agent Learning and Redundancy
Linear and Nonlinear Redundancy
Mutual Information Gain
Stationary and Gaussian
A Distribution Free Information Measure
Autoregressive Modeling
Speech Processing
AR Speech Model
Long Term Redundancy
Discussion and Conclusions