Policy gradient algorithms for the asset allocation problem
NECCHI, PIERPAOLO GIORGIO
2015/2016
Abstract
The impact of Automated Trading Systems (ATS) on financial markets is growing every year, and algorithmically generated trades now account for the majority of orders arriving at stock exchanges. Historically, these systems were based on advanced statistical and signal-processing methods capable of extracting trading signals from financial data. However, the recent successes of Machine Learning have attracted the interest of the financial community and raised the question of whether the techniques that have proven so successful in classifying images or beating Go world champions could perform equally well in detecting patterns in financial markets. In this thesis, we explore how to find a trading strategy via Reinforcement Learning, a general class of algorithms that allows an agent to find an optimal strategy for a sequential decision problem by interacting directly with the environment. In particular, we focus on Policy Gradient methods, a class of state-of-the-art techniques that construct an estimate of the optimal policy for the control problem by iteratively improving a parametric policy. In the first part of this thesis, we conduct a thorough review of these methods in the traditional risk-neutral framework for reinforcement learning, in which the goal of the agent is to maximize its total reward. In the second part, we propose an original extension of these methods to the risk-sensitive framework, in which the agent instead wishes to optimize the trade-off between the rewards and the risk required to achieve them. We also present an innovative parameter-based reformulation of these algorithms, in which the optimal policy is approximated by stochastically perturbing the parameters of a deterministic controller. This extends the well-known Policy Gradient with Parameter-based Exploration (PGPE) method to the risk-sensitive framework; to the best of our knowledge, this is the first time such an algorithm has been discussed.
In the last part of this thesis, after a review of the financial applications of reinforcement learning found in the literature, we present a numerical application of these algorithms to the traditional asset allocation problem with transaction costs.
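To illustrate the parameter-based exploration idea the abstract refers to, the sketch below implements standard risk-neutral PGPE on a hypothetical toy problem (not the thesis's risk-sensitive extension, and not its asset-allocation experiment). Controller parameters are sampled from a Gaussian hyper-distribution, episodes are run with each perturbed deterministic controller, and the hyper-distribution is updated by a likelihood-ratio gradient estimate of the expected return. The `toy_return` objective and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def pgpe(episode_return, n_params, iterations=200, pop=20, alpha=0.1, seed=0):
    """Basic PGPE: explore in parameter space by sampling controller
    parameters theta ~ N(mu, sigma^2) and ascending a likelihood-ratio
    estimate of the gradient of the expected return w.r.t. (mu, sigma)."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(n_params)
    sigma = np.ones(n_params)
    for _ in range(iterations):
        # One deterministic controller per perturbed parameter sample
        thetas = mu + sigma * rng.standard_normal((pop, n_params))
        returns = np.array([episode_return(t) for t in thetas])
        adv = returns - returns.mean()          # baseline reduces variance
        diff = thetas - mu
        # grad log N(theta; mu, sigma) is (theta-mu)/sigma^2 for mu
        # and ((theta-mu)^2 - sigma^2)/sigma^3 for sigma
        grad_mu = (adv[:, None] * diff / sigma**2).mean(axis=0)
        grad_sigma = (adv[:, None] * (diff**2 - sigma**2) / sigma**3).mean(axis=0)
        mu += alpha * grad_mu
        sigma = np.maximum(sigma + alpha * grad_sigma, 0.05)  # keep exploring
    return mu

# Hypothetical "episode": a controller with parameter theta whose noisy
# return peaks at theta = [1, -2]; the noise mimics a stochastic market.
def toy_return(theta, rng=np.random.default_rng(1)):
    target = np.array([1.0, -2.0])
    return -np.sum((theta - target) ** 2) + 0.01 * rng.standard_normal()

mu = pgpe(toy_return, n_params=2)
```

Because exploration lives entirely in the hyper-distribution, each episode is executed by a deterministic controller, which is what makes the parameter-based formulation attractive for trading applications.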
File | Description | Size | Format | Access
---|---|---|---|---
MS_Thesis_Pierpaolo_Necchi.pdf | Thesis | 2.75 MB | Adobe PDF | openly accessible online
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/131892