Abstract

Notwithstanding that regulatory mandates require all securities market design changes to pass the dual test of fairness and efficiency, most regulators have not even defined efficiency, let alone fairness. It should therefore come as little surprise that design changes such as the introduction of algorithmic and high-frequency trading or dark pools are causing considerable controversy in the marketplace. There is no evidence-based policy framework within which such changes can be meaningfully evaluated. In this work we develop a Market Quality Framework that begins by defining both fairness and efficiency. From these definitions we establish a series of empirical proxies. Thereafter, we develop a systems estimation model and demonstrate its use by analyzing the explosive 2004-2011 growth in algorithmic trading (AT) on the London Stock Exchange and NYSE Euronext Paris. Our results show that greater AT increases market fairness and efficiency, but only in top-quintile stocks. We address the robustness of these results to end-of-quarter reporting deadlines and to trading before and after MiFID1, the 2007 regulatory regime that fragmented the market. In addition, we test the over-identifying restrictions and perform both Hausman and Stock-Yogo tests of the exogeneity and strength of our AT instruments.
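
As a rough illustration of the instrument diagnostics named above, the sketch below runs a two-stage least squares regression with a Wu-Hausman exogeneity test, a Sargan test of the over-identifying restriction, and first-stage strength statistics of the kind compared against Stock-Yogo critical values, using the `linearmodels` package. The data are synthetic and the variable names (`mq`, `at`, `z1`, `z2`, `x1`) are hypothetical; this is not the paper's actual specification or dataset.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

# Synthetic data: "at" is an endogenous AT-intensity proxy driven by two
# excluded instruments (z1, z2); "mq" is a market-quality proxy.
rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=(n, 2))
x1 = rng.normal(size=n)
at = 0.6 * z[:, 0] + 0.4 * z[:, 1] + 0.2 * x1 + rng.normal(size=n)
mq = 0.5 * at + 0.3 * x1 + rng.normal(size=n)

data = pd.DataFrame({"mq": mq, "at": at, "x1": x1, "z1": z[:, 0], "z2": z[:, 1]})
data["const"] = 1.0

# 2SLS: mq ~ const + x1 + [at instrumented by z1, z2]
res = IV2SLS(data["mq"], data[["const", "x1"]], data["at"], data[["z1", "z2"]]).fit()

print(res.summary)                   # second-stage coefficient estimates
print(res.wu_hausman())              # Wu-Hausman test of regressor exogeneity
print(res.sargan)                    # Sargan test of the over-identifying restriction
print(res.first_stage.diagnostics)   # first-stage partial F-statistics (instrument strength)
```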
