Reinforcement learning for optimal execution
Marsico, Alessandro
2020/2021
Abstract
Alpha and Execution are two of the most important concepts in financial markets. Hedge funds strive to outperform the market on a daily basis (Alpha), and to do so they search for novel ideas and strategies to implement on the markets. But, as we all know, power is nothing without control, which leads us to the other side of the coin: Execution. Although the literature keeps reaching for market-efficiency and liquidity hypotheses on which to build economic theories, empirical data show that these do not always hold: markets can be far from efficient in certain situations and, more importantly, they can turn into real liquidity traps. The GameStop case is illustrative: a short squeeze orchestrated by retail investors brought large speculative funds to their knees, in some cases forcing them to close. Major financial operators aspire to trading techniques that let them apply their ideas in pursuit of strong gains without the size of their investment affecting the projected return: slippage and market volatility caused by excessively large orders should be avoided. Unfortunately, this is not an easy task, and Machine Learning methods have been introduced in this field to produce better results than traditional methodologies. Machine Learning has grown in popularity in recent years, both as a consequence of the excellent results obtained in different application areas and of the rising amount of accessible data. The focus of this thesis is on Reinforcement Learning, a sub-field of Machine Learning in which an algorithmic agent learns independently through interaction with its environment. The main objective of this research is to let the agent develop and implement a trading strategy that executes an order in the most efficient possible manner, reducing market impact and avoiding the risk of market volatility.
In particular, we introduce Fitted Q-Iteration, an iterative Reinforcement Learning algorithm that seeks to converge to an optimal policy by constructing successive estimates of the expected cumulative reward. Regression Trees are used to represent the Q-function. The algorithm is embedded in a larger framework: multiple agents are trained to operate under different market conditions generated in a fully simulated multi-agent environment, and the choice of the best execution agent is addressed with techniques from Multi-Armed Bandit modeling. To the best of our knowledge, this is the first time such an approach to the Optimal Execution problem has been discussed.
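The abstract only names the ingredients; as a rough illustration of the general technique (not the thesis' actual formulation), a minimal Fitted Q-Iteration loop with tree-based regression might look as follows. The toy liquidation MDP, its cost parameters, and all function names here are hypothetical, chosen only to show how a batch of transitions is repeatedly re-fitted against bootstrapped targets:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

# Hypothetical toy execution MDP: sell an inventory over HORIZON steps.
# State: (time step, remaining inventory); action: fraction of inventory sold now.
rng = np.random.default_rng(0)
HORIZON = 5
ACTIONS = np.array([0.0, 0.5, 1.0])  # sell 0%, 50%, or 100% of what remains
GAMMA = 1.0                          # undiscounted, finite horizon

def step(t, inv, a):
    """One transition: selling faster incurs a quadratic impact cost (toy model)."""
    sold = a * inv
    reward = sold - 0.1 * sold**2
    if t + 1 == HORIZON:             # penalty for inventory left at the deadline
        reward -= 0.5 * (inv - sold)
    return t + 1, inv - sold, reward

# Collect a batch of random transitions (s, a, r, s')
batch = []
for _ in range(2000):
    t = rng.integers(0, HORIZON)
    inv = rng.uniform(0.0, 1.0)
    a = rng.choice(ACTIONS)
    t2, inv2, r = step(t, inv, a)
    batch.append((t, inv, a, r, t2, inv2))
D = np.array(batch)

# Fitted Q-Iteration: regress targets r + gamma * max_a' Q_k(s', a'),
# representing each Q_k with an ensemble of regression trees.
X = D[:, :3]                         # features: (t, inv, a)
q = None
for _ in range(HORIZON):
    if q is None:
        y = D[:, 3]                  # first iterate: immediate reward only
    else:
        nxt = np.concatenate(
            [np.column_stack([D[:, 4], D[:, 5], np.full(len(D), a)]) for a in ACTIONS]
        )
        qn = q.predict(nxt).reshape(len(ACTIONS), len(D))
        done = D[:, 4] >= HORIZON    # no future value past the horizon
        y = D[:, 3] + GAMMA * np.where(done, 0.0, qn.max(axis=0))
    q = ExtraTreesRegressor(n_estimators=30, random_state=0).fit(X, y)

# Greedy action at the initial state: full inventory, all time left
q0 = [q.predict([[0.0, 1.0, a]])[0] for a in ACTIONS]
best = ACTIONS[int(np.argmax(q0))]
print("greedy first action:", best)
```

The essential point the sketch shows is that each iteration is an ordinary supervised regression over the same fixed batch, with only the targets changing as the value estimate is bootstrapped one step further.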
| File | Description | Size | Format |
|---|---|---|---|
| 2021_12_Marsico.pdf (available online only to authorized users) | Thesis text | 26.9 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/181577