Analyzing the complexity of mean-volatility algorithms for risk-averse reinforcement learning
ELDOWA, KHALED MAZEN MAHMOUD ELSAYED
2020/2021
Abstract
The goal in the standard Reinforcement Learning problem is to find a policy that maximizes the expected accumulated reward collected while interacting with the environment. Such an objective, however, is insufficient in many real-life applications, such as finance, where controlling the uncertainty of the outcome is imperative. The mean-volatility objective penalizes, through a tunable parameter, policies with a high variance of the per-step reward. Unlike most risk-averse objectives, which consider some statistical property of the accumulated reward, the mean-volatility objective admits simple linear Bellman equations that resemble, up to a reward transformation, those of the risk-neutral case. This potentially enables adapting the wealth of algorithms and results from the standard literature to the risk-averse case. The difficulty with this reward transformation, however, is that it requires knowing the expected accumulated reward of the policy in question, a quantity that is usually unknown a priori. This work focuses on policy evaluation and optimization algorithms for the mean-volatility objective. More specifically, this thesis analyzes the sample complexity of such algorithms compared to their risk-neutral counterparts, especially in problems with large or continuous state and action spaces. We consider two different approaches for policy evaluation: the direct method and the factored method. Moreover, we analyze a full actor-critic algorithm adapted for the mean-volatility objective. Experiments are then carried out to test these algorithms in a simple environment that exhibits a trade-off between expected optimality and uncertainty of the outcome.
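For the reader's orientation, here is a brief sketch of the objective as it appears in the mean-volatility literature the thesis builds on; the notation below is assumed for illustration rather than quoted from the thesis itself. Writing $d_\pi$ for the normalized discounted state-action occupancy of a policy $\pi$ and $J_\pi = \mathbb{E}_{(s,a)\sim d_\pi}[r(s,a)]$ for its expected per-step reward (equivalently, its normalized expected return), the objective and the associated reward transformation take the form

$$
\eta_\pi = J_\pi - \lambda\,\nu_\pi^2,
\qquad
\nu_\pi^2 = \mathbb{E}_{(s,a)\sim d_\pi}\!\big[(r(s,a) - J_\pi)^2\big],
\qquad
\tilde{r}(s,a) = r(s,a) - \lambda\,(r(s,a) - J_\pi)^2,
$$

so that $\eta_\pi = \mathbb{E}_{(s,a)\sim d_\pi}[\tilde{r}(s,a)]$: a risk-neutral objective in the transformed reward, except that the transformation itself depends on the unknown $J_\pi$. A factored estimate of the volatility can, for instance, exploit the identity $\nu_\pi^2 = \mathbb{E}_{d_\pi}[r(s,a)^2] - J_\pi^2$, treating the mean and the second moment as separately estimated components.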
File | Description | Size | Format
---|---|---|---
2021_10_Eldowa.pdf | Thesis text (openly accessible online) | 3.47 MB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/179788