Abstract

This paper offers a novel perspective on the implications of increasingly autonomous and “black box” algorithms, in the context of algorithmic trading, for the integrity of capital markets. Artificial intelligence (AI), and particularly its subfield of machine learning (ML), has gained immense popularity among the general public and achieved tremendous success in many real-life applications by delivering vast efficiency gains. In the financial trading domain, ML can augment human capabilities in price prediction, dynamic portfolio optimization, and other financial decision-making tasks. Moreover, owing to constant progress in ML technology, the prospect of delegating operational tasks and even decision-making to increasingly capable and autonomous agents is no longer mere imagination, opening up the possibility of (truly) autonomous trading agents in the near future. Given these developments, this paper argues that such autonomous algorithmic traders may pose significant risks to market integrity, independently of their human operators, because of the self-learning capabilities offered by state-of-the-art ML methods. Using the proprietary trading industry as a case study, we explore emerging challenges to the application of established market abuse laws in the event of algorithmic market abuse, taking an interdisciplinary stance that spans financial regulation, law and economics, and computational finance. Specifically, our analysis focuses on two emerging market abuse risks posed by autonomous algorithms: market manipulation and “tacit” collusion. We examine the likelihood that they will arise in global capital markets and evaluate the related social harm as a form of market failure. With these new risks in mind, the paper questions the adequacy of existing regulatory frameworks and enforcement mechanisms, as well as current legal rules on the governance of algorithmic trading, to cope with increasingly autonomous and ubiquitous algorithmic trading systems. It shows how the “black box” nature of certain ML-powered algorithmic trading strategies can subvert existing market abuse laws, which are based on traditional liability concepts and tests (such as “intent” and “causation”). In conclusion, to address the shortcomings of the present legal framework, we develop a number of guiding principles intended to assist legal and policy reform in promoting and safeguarding market integrity and safety.
