Trajectory reuse in policy gradients
MANSUTTI, FEDERICO
2024/2025
Abstract
Policy gradient (PG) methods are a class of effective reinforcement learning algorithms, particularly when dealing with continuous control problems. These methods learn the parameters of parametric policies via stochastic gradient ascent, typically using on-policy trajectory data to estimate the policy gradient. However, such reliance on fresh data makes them sample-inefficient. Indeed, vanilla PG methods require O(ϵ^(-2)) trajectories to reach an ϵ-approximate stationary point. A common strategy to improve efficiency is to reuse off-policy information from past iterations, such as previous gradients or trajectories. While gradient reuse has received substantial theoretical attention, leading to improved rates of O(ϵ^(-3/2)), the reuse of past trajectories remains largely unexplored from a theoretical perspective. In this work, we provide the first rigorous theoretical evidence that extensive reuse of past off-policy trajectories can significantly accelerate convergence in PG methods. We introduce a power mean correction to the multiple importance weighting estimator and propose RPG (Retrospective Policy Gradient), a PG algorithm that combines old and new trajectories for policy updates. Through a novel analysis, we show that, under established assumptions, RPG achieves a sample complexity of O(ϵ^(-1)), the best known rate in the literature. We further validate our approach empirically against PG methods with state-of-the-art rates.
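To make the trajectory-reuse idea concrete, below is a minimal sketch, not the thesis's RPG implementation (whose details are not given in this abstract): a one-dimensional Gaussian policy is improved by stochastic gradient ascent, and each update reuses the samples of a sliding window of past policies through a multiple importance weighting estimator with a balance-heuristic denominator. The Gaussian policy, toy reward, window size, and the `power_mean_order` parameter are all assumptions made for illustration; in particular, `power_mean_order` is only an illustrative stand-in for the power mean correction mentioned above.

```python
# Minimal sketch of policy-gradient updates that reuse past off-policy samples
# via multiple importance weighting.  Purely illustrative; not the thesis's RPG.
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 1.0  # fixed standard deviation of the Gaussian policy


def log_density(actions, theta):
    """log pi_theta(a) for the Gaussian policy N(theta, SIGMA^2)."""
    return -0.5 * ((actions - theta) / SIGMA) ** 2 - np.log(SIGMA * np.sqrt(2.0 * np.pi))


def score(actions, theta):
    """d/dtheta log pi_theta(a)."""
    return (actions - theta) / SIGMA**2


def reward(actions):
    """Toy concave objective: one-step 'trajectories' with reward peaked at a = 2."""
    return -(actions - 2.0) ** 2


def reuse_gradient(theta_target, buffer, power_mean_order=1.0):
    """Gradient estimate that reuses every stored batch of past samples.

    Each sample a is weighted by pi_target(a) / M(a), where M(a) mixes the
    densities of all behaviour policies in the buffer.  power_mean_order = 1
    gives standard multiple importance weighting (balance heuristic); other
    orders turn M into a power mean and are only a labeled guess at how a
    power mean correction could enter, since the abstract does not specify it.
    """
    behaviour_thetas = [th for th, _ in buffer]
    actions = np.concatenate([batch for _, batch in buffer])
    dens = np.exp(np.stack([log_density(actions, th) for th in behaviour_thetas]))
    s = power_mean_order
    mixture = np.mean(dens**s, axis=0) ** (1.0 / s)   # power mean over behaviour policies
    weights = np.exp(log_density(actions, theta_target)) / mixture
    return np.mean(weights * reward(actions) * score(actions, theta_target))


if __name__ == "__main__":
    theta = 0.0
    buffer = []                                # (behaviour parameter, sampled actions)
    for iteration in range(200):
        fresh = rng.normal(theta, SIGMA, size=16)   # small on-policy batch
        buffer.append((theta, fresh))
        buffer = buffer[-10:]                       # sliding window of past batches
        grad = reuse_gradient(theta, buffer)        # combine old and new samples
        theta += 0.05 * grad                        # stochastic gradient ascent step
    print(f"learned policy mean: {theta:.2f} (optimum is 2.0)")
```

With the balance heuristic, the weights are bounded by the number of policies in the window, which keeps the reused-sample estimator stable even though most samples are off-policy; this is the baseline that the power mean correction in the thesis is meant to refine.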
| File | Description | Size | Format |
|---|---|---|---|
| 2025_07_Mansutti_Executive_Summary_02.pdf (accessible online to everyone from 01/07/2026) | executive summary | 956.34 kB | Adobe PDF |
| 2025_07_Mansutti_Tesi_01.pdf (accessible online to everyone from 01/07/2026) | thesis | 1.73 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/239917