Abstract

Identifying and understanding the impact of algorithmic trading on financial markets has become a critical issue for market operators and regulators. Advanced data feeds and audit trail information from market operators now make full observation of market participants' actions possible. A key question is the extent to which it is possible to understand and characterize the behavior of individual participants from observations of trading actions. In this paper, we consider the basic problems of categorizing and recognizing traders (or, equivalently, trading algorithms) on the basis of observed limit orders. Our approach, which is based on inverse reinforcement learning (IRL), is to model trading decisions as a Markov decision process and then use observations of an optimal decision policy to find the reward function. The approach strikes a balance between two desirable features: it captures key empirical properties of order book dynamics and yet remains computationally tractable. Making use of a real-world data set from the E-Mini futures contract, we compare two principal IRL variants, linear IRL and Gaussian process IRL. Results suggest that IRL-based feature spaces support accurate classification and meaningful clustering.
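The linear IRL variant mentioned above rests on the identity that, when the reward is linear in state features, a policy's expected return is the dot product of the reward weights with its discounted feature expectations, so matching the expert's feature expectations matches expert performance under the unknown true reward. A minimal sketch of that core quantity, with toy features and state sequences standing in for order-book features and observed trader trajectories (all names and values here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

# Hypothetical toy setup: each state is summarized by a feature vector phi(s)
# (in the paper's setting these could be order-book quantities such as
# imbalance or spread). Linear IRL assumes reward r(s) = w . phi(s).

def feature_expectations(trajectories, phi, gamma=0.9):
    """Empirical discounted feature expectations mu = E[sum_t gamma^t phi(s_t)]."""
    mu = np.zeros_like(phi(trajectories[0][0]))
    for traj in trajectories:
        for t, s in enumerate(traj):
            mu += (gamma ** t) * phi(s)
    return mu / len(trajectories)

# Toy 2-d features and two observed state sequences (illustrative only).
phi = lambda s: np.array([s, s ** 2], dtype=float)
expert_trajs = [[0.1, 0.3, 0.2], [0.0, 0.4, 0.1]]

mu_E = feature_expectations(expert_trajs, phi)

# Key linear-IRL identity: for any candidate weights w, the expert's
# expected discounted return under reward w . phi equals w . mu_E.
w = np.array([1.0, -0.5])
value_of_expert = w @ mu_E
```

Vectors such as `mu_E`, computed per trader, are the kind of reward-side features the abstract suggests can serve as inputs for classification and clustering.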
