Abstract

In a previous paper, Amman et al. (Macroecon Dyn, 2018) compare the two dominant approaches for solving models with optimal experimentation (also called active learning): the value function approach and the approximation method. Using the same model and dataset as Beck and Wieland (J Econ Dyn Control 26:1359–1377, 2002), they find that the approximation method produces solutions close to those generated by the value function approach, and they identify some elements of the model specification that affect the difference between the two solutions. They conclude that the differences are small when the effects of learning are limited. However, the dataset used in that experiment describes a situation in which the controller faces a nonstationary process and there is no penalty on the control. The goal of this paper is to see whether their conclusions hold in the more commonly studied case of a controller facing a stationary process and a positive penalty on the control.
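For concreteness, the class of problems compared here can be sketched with a generic scalar linear-quadratic specification with an unknown control coefficient; the notation below is purely illustrative and is not the exact specification used in Beck and Wieland (2002) or Amman et al. (2018). The controller chooses the control u_t to keep the state y_t close to a target y* while learning about the unknown slope β:

\[
y_t = \alpha\, y_{t-1} + \beta\, u_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0,\sigma^2), \qquad \beta \ \text{unknown},
\]
\[
\min_{\{u_t\}} \; E_0 \sum_{t=0}^{T} \left[ (y_t - y^{*})^{2} + \lambda\, u_t^{2} \right].
\]

In this notation a stationary state process corresponds to |α| < 1 and a positive penalty on the control to λ > 0; experimentation arises because the choice of u_t today affects how much is learned about β and hence future performance.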

Highlights

  • In recent years there has been a resurgent interest in economics in the subject of optimal or strategic experimentation, also referred to as active learning; see e.g. Amman et al. (2018), Buera et al. (2011) and Savin and Blueschke (2016). There are two prevailing methods for solving this class of models.

  • Those conclusions apply to a situation where the controller is dealing with a nonstationary process and there is no penalty on the control.

  • In a previous paper, Amman et al. (2018) compare the value function and the approximation method in a situation where the controller is dealing with a nonstationary process and there is no penalty on the control.



Introduction

In recent years there has been a resurgent interest in economics in the subject of optimal or strategic experimentation, also referred to as active learning; see e.g. Amman et al. (2018), Buera et al. (2011) and Savin and Blueschke (2016). There are two prevailing methods for solving this class of models. Until very recently there was an invisible line dividing researchers using one approach from those using the other. It is only in Amman et al. (2018) that the value function approach and the approximation method are used to solve the same problem and their solutions are compared. In that paper the focus is on comparing the policy function results reported in Beck and Wieland (2002), obtained through the value function, to those obtained through approximation methods. Those conclusions apply to a situation where the controller is dealing with a nonstationary process and there is no penalty on the control.
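As a rough illustration of the two approaches (generic notation, not taken from the papers cited): under Bayesian learning the controller's belief about the unknown parameter, summarized by its conditional mean b_t and variance v_t, becomes part of the state, and the value function approach solves the resulting Bellman equation numerically on a grid over this augmented state:

\[
V(y_{t-1}, b_t, v_t) = \min_{u_t} \; E_t \left[ (y_t - y^{*})^{2} + \lambda\, u_t^{2} + \delta\, V(y_t, b_{t+1}, v_{t+1}) \right],
\]

where (b_{t+1}, v_{t+1}) follow the standard Kalman-filter updating equations. Broadly speaking, the approximation method instead expands the expected cost-to-go around a nominal certainty-equivalence path and adds correction terms, so that full numerical dynamic programming over the belief state is avoided; comparing the solutions produced by the two methods is the subject of Amman et al. (2018) and of this paper.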

Problem Statement
Solving the Value Function
Approximating the Value Function
Experimentation
Conclusions