Abstract

Modern cross-sectional strategies incorporating sophisticated neural architectures outperform their traditional counterparts when applied to mature assets with long histories. However, deploying them on instruments with limited samples generally produces over-fitted models with degraded performance. In this paper, we introduce Fused Encoder Networks -- a hybrid parameter-sharing transfer ranking model which fuses information extracted by an encoder-attention module from a source dataset with a similar but separate module operating on a smaller target dataset of interest. This mitigates the poor generalisability of models trained on scarce data. Additionally, the self-attention mechanism enables interactions among instruments to be accounted for, both at the loss level during model training and at inference time. Focusing on momentum applied to the top ten cryptocurrencies by market capitalisation as a demonstrative use-case, our model outperforms state-of-the-art benchmarks on most measures and significantly boosts the Sharpe ratio. It continues to outperform baselines even after accounting for the high transaction costs associated with trading cryptocurrencies.
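To make the two-branch idea concrete, the sketch below illustrates one plausible reading of the fused-encoder architecture in PyTorch: a self-attention encoder pre-trained on the large source dataset and a parallel encoder for the small target dataset, whose representations are fused before a ranking head scores each instrument in the cross-section. All class names, dimensions, and the concatenation-based fusion step are assumptions for illustration only, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class EncoderBranch(nn.Module):
    """Self-attention encoder over a cross-section of instruments."""

    def __init__(self, n_features: int, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=2 * d_model, batch_first=True,
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_instruments, n_features) -> (batch, n_instruments, d_model);
        # self-attention lets each instrument's representation depend on the others.
        return self.encoder(self.proj(x))


class FusedEncoderRanker(nn.Module):
    """Fuses a source-data encoder with a target-data encoder to score instruments."""

    def __init__(self, n_features: int, d_model: int = 32):
        super().__init__()
        self.source_encoder = EncoderBranch(n_features, d_model)  # branch pre-trained on the large source set
        self.target_encoder = EncoderBranch(n_features, d_model)  # branch fitted on the small target set
        self.head = nn.Linear(2 * d_model, 1)  # maps fused features to a ranking score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Simple fusion by concatenation (an assumption); scores drive a cross-sectional ranking.
        fused = torch.cat([self.source_encoder(x), self.target_encoder(x)], dim=-1)
        return self.head(fused).squeeze(-1)  # (batch, n_instruments)


if __name__ == "__main__":
    model = FusedEncoderRanker(n_features=8)
    scores = model(torch.randn(2, 10, 8))  # e.g. 10 cryptocurrencies, 8 momentum features
    print(scores.shape)  # torch.Size([2, 10])
```

In a transfer setting of this kind, the source branch would typically be trained first on the data-rich instruments and then frozen or lightly fine-tuned, while the target branch and ranking head are trained on the cryptocurrency cross-section.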
