Abstract

Discrete-choice life cycle models can be used, for example, to estimate how social security reforms change the employment rate. In the life cycle framework, an individual's optimal employment choices over the life course are solved, which makes it possible to estimate how a social security reform influences employment. Life cycle models have mostly been solved with dynamic programming, which is not feasible when the state space is large, as is often the case in a realistic life cycle model. Solving such models requires approximate methods, such as reinforcement learning algorithms. We compare how well the deep reinforcement learning algorithm ACKTR and dynamic programming solve a relatively simple life cycle model. We find that the average utility is almost the same under both algorithms; however, the details of the best policies found by the two methods differ to a degree. In the baseline model, which represents the current Finnish social security scheme, reinforcement learning yields essentially as good results as dynamic programming. We then analyze a straightforward social security reform and find that the predicted employment changes due to the reform are almost the same under both methods. Our results suggest that reinforcement learning algorithms are of significant value in analyzing complex life cycle models.
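To make the dynamic-programming side of the comparison concrete, the sketch below solves a heavily simplified discrete-choice life cycle model by backward induction. All parameter values, the pension-accrual rule, and the utility specification here are illustrative assumptions for exposition only; they are not the model or the parameters used in the paper.

```python
import math

def solve_life_cycle(T=40, wage=1.0, benefit=0.4, leisure=0.15,
                     accrual=0.015, beta=0.97, pension_years=20):
    """Toy discrete-choice life cycle model solved by dynamic programming
    (backward induction).  Illustrative only; not the paper's model.

    State: s = number of years worked so far (drives the pension).
    Choice each period: work (earn `wage`) or stay out of work (receive
    `benefit` plus a leisure bonus).  After T periods the agent receives
    a pension proportional to accrued working years for `pension_years`
    periods, floored at the benefit level.
    """
    u = math.log  # log utility of consumption (an assumption)

    # Terminal value: discounted utility of the pension stream, which
    # depends on the number of years worked, s.
    def terminal(s):
        pension = max(benefit, accrual * s * wage)
        return sum(beta**k * u(pension) for k in range(pension_years))

    # V[t][s]: value at age t having worked s years; policy[t][s]: 1 = work.
    V = [[0.0] * (T + 1) for _ in range(T + 1)]
    policy = [[0] * (T + 1) for _ in range(T)]
    for s in range(T + 1):
        V[T][s] = terminal(s)
    for t in range(T - 1, -1, -1):
        for s in range(t + 1):  # cannot have worked more than t years by age t
            v_work = u(wage) + beta * V[t + 1][s + 1]
            v_out = u(benefit) + leisure + beta * V[t + 1][s]
            policy[t][s] = 1 if v_work >= v_out else 0
            V[t][s] = max(v_work, v_out)
    return V, policy
```

Simulating the resulting policy forward from age 0 yields the employment path implied by the model; a reform would be analyzed by changing a parameter (e.g. the benefit or accrual rate), re-solving, and comparing the simulated employment rates. An RL solver such as ACKTR would instead learn the policy from simulated episodes of the same environment, which is what makes it usable when the state space is too large to enumerate.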
