Abstract

BAYESIAN ANALYSIS, ENDOGENOUS DATA, AND CONVERGENCE OF BELIEFS

By Andrew T. Foerster, M.S.

A Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science at Virginia Commonwealth University.

Virginia Commonwealth University, 2006

Major Director: Dr. Hassan Sedaghat, Associate Professor, Department of Mathematics
Major Director: Dr. Patricia Pepple Williamson, Associate Professor, Department of Statistics and Operations Research

Problems in statistical analysis, economics, and many other disciplines often involve a trade-off between immediate rewards and additional information that could yield higher future rewards. This thesis investigates such a trade-off using a class of problems known as bandit problems. In these problems, a reward-seeking agent makes decisions based upon his beliefs about a parameter that controls rewards. While some choices may generate higher short-term rewards, other choices may provide information that allows the agent to learn about the parameter, thereby potentially increasing future rewards. Learning occurs if the agent's subjective beliefs about the parameter converge over time to the parameter's true value. However, depending upon the environment, learning may or may not be optimal: in the end, the agent cares about maximizing rewards, not necessarily about learning the true value of the underlying parameter.

Chapter 1: Introduction

Statistical experiments and analysis often consider situations where a large quantity of data are collected and then the statistician makes a decision, such as the estimation of a parameter. Another class of experiments involves witnessing only a small amount of data, making a decision, witnessing more data, making another decision, witnessing still more data, and so forth. In economics, a similar setup involves rewards generated by an unknown stochastic process, coupled with actions that both yield rewards and can generate information about that stochastic process.
In general, problems of sequential analysis feature one key trade-off: the decision-maker can choose between a known outcome and an unknown outcome. Choosing the known outcome has well-defined benefits, but choosing the unknown outcome can be beneficial in that the decision-maker can gain information about the nature of the unknown outcome. Specifically, the decision-maker has a set of possible actions to choose from. One subset of those actions generates rewards from a process that is known and understood by the decision-maker. A second subset of actions generates rewards from an unknown process, but choosing one of these actions helps reveal information about that process, allowing learning to occur. Under what circumstances is it in the decision-maker's best interest to take a risk and choose an unknown action in hopes of learning? Learning in sequential analysis depends heavily on endogenous data: data generated by the decision-maker's own choices, drawn from the unknown process, which is what makes learning possible. Using Bayesian analysis, what are the best guidelines for making decisions? Under the optimal decision-making rules, learning may or may not occur, depending upon whether it is in the best interest of the decision-maker. There are certain conditions that a sequential analysis problem must satisfy to ensure learning; it is not always guaranteed.

This paper discusses learning and endogenous data in a sequential analysis framework using Bayesian analysis. The next two chapters provide background: Chapter 2 discusses the measure theory definitions needed, and Chapter 3 outlines other key concepts, such as optimality and martingales. The remainder of the paper discusses bandit problems, which are the typical sequential analysis problems. Chapter 4 establishes results regarding the most basic framework. Chapter 5 generalizes Chapter 4 by allowing more abstract mathematical spaces and functions.
Learning, which is defined as the convergence of subjective probability beliefs, is the subject of Chapter 6. Chapter 7 presents an application of the results, and Chapter 8 concludes.

Chapter 2: Measure Theory Concepts and Definitions

While many applications of bandit problems use familiar functions and parameter spaces, the fact that the results apply to a much larger class of objects expands their use, applicability, and generality. Chapter 4 presents a simple example that uses a finite set of parameter values, and Chapter 5 expands and generalizes to more general sets. This chapter defines the key measure theory concepts needed in Chapter 5. First, the spaces for parameters may be metric spaces.
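Before turning to the abstract machinery, the trade-off and the notion of learning described in Chapter 1 can be illustrated informally. The sketch below is not part of the thesis: it assumes a specific, simple setting (a one-armed Bernoulli bandit with a conjugate Beta prior) and a purely myopic decision rule, rather than the optimal dynamic-programming rules developed in later chapters. The function name `simulate_bandit` and all parameter names are illustrative choices, not notation from the thesis.

```python
import random

def simulate_bandit(true_p, known_reward, horizon, seed=0):
    """Myopic one-armed bandit sketch (illustrative, not the thesis's model).

    The 'unknown' arm pays 1 with probability true_p, else 0; the 'known'
    arm pays known_reward with certainty. The agent holds Beta(a, b) beliefs
    about true_p and, each period, pulls whichever arm has the higher
    expected payoff under its current beliefs.
    """
    rng = random.Random(seed)
    a, b = 1.0, 1.0            # uniform Beta(1, 1) prior on the unknown arm
    pulls_unknown = 0
    for _ in range(horizon):
        posterior_mean = a / (a + b)
        if posterior_mean > known_reward:
            # Experiment: endogenous data arrives only when this arm is chosen.
            reward = 1.0 if rng.random() < true_p else 0.0
            a += reward          # conjugate Bayesian update of the Beta beliefs
            b += 1.0 - reward
            pulls_unknown += 1
        # Otherwise take the safe arm; beliefs stay frozen, so no learning occurs.
    return a / (a + b), pulls_unknown
```

Two runs make the point of the thesis concrete. When the safe payoff is low, the agent keeps experimenting and its posterior mean drifts toward the true parameter (beliefs converge). When the safe payoff exceeds the prior mean, this myopic agent never experiments at all: the data are endogenous, none arrive, and beliefs never update, even though the unknown arm may in fact be better. Whether such incomplete learning can also occur under fully optimal rules is precisely the question the later chapters address.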