A Benchmark to Evaluate LLM-Generated Rego Policies
SIDOTI, SUSANNA
2024/2025
Abstract
Policy-based authorization systems are widely adopted in modern distributed infrastructures to enforce access control, security, and data sharing requirements. Recent advances in automation and Artificial Intelligence (AI) have enabled the adoption of the Policy-as-Code paradigm, where authorization requirements expressed in natural language are translated into executable policy code. In particular, Large Language Models (LLMs) are increasingly used to automatically derive authorization policies from natural language specifications, significantly reducing the effort required to transform high-level requirements into machine-enforceable rules. However, these approaches introduce a critical challenge: automatically generated policies cannot be assumed to be correct, even when they are syntactically valid and executable. This thesis addresses the problem of validating and testing automatically generated authorization policies before their deployment in production systems. Specifically, the work focuses on the policy validation phase, aiming to ensure that generated policies exhibit the intended authorization behavior when evaluated by a Policy Decision Point (PDP). The core contribution of this work is the design and implementation of an automated policy testing framework for policies written in the declarative language Rego and enforced through the Open Policy Agent (OPA). The framework integrates automated input generation, leveraging LLMs to explore the policy input space and exercise diverse authorization scenarios. The resulting testing outputs provide objective evidence of policy correctness or deviation, supporting informed decisions about the reliability of generated policies.
This work provides a practical and reusable solution for integrating automated policy testing into AI-assisted policy engineering pipelines, increasing trustworthiness, reducing reliance on manual review, and facilitating the safe adoption of automatically generated authorization policies in real-world systems.

| File | Description | Size | Format |
|---|---|---|---|
| 2026_03_Sidoti.pdf (accessible online only to authorized users) | A Benchmark to Evaluate LLM-Generated Rego Policies | 9.95 MB | Adobe PDF |
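To illustrate the kind of artifact the thesis targets, the following is a minimal sketch of a Rego policy together with unit tests runnable via `opa test`. The package, rule names, and input fields are illustrative assumptions, not taken from the thesis itself:

```rego
package authz

import rego.v1

# Deny by default; grant access only to admins or resource owners.
default allow := false

allow if input.user.role == "admin"

allow if input.user.id == input.resource.owner

# Unit tests exercising both decision outcomes (run with `opa test .`).
test_admin_allowed if {
    allow with input as {"user": {"role": "admin", "id": "u1"}, "resource": {"owner": "u2"}}
}

test_non_owner_denied if {
    not allow with input as {"user": {"role": "user", "id": "u1"}, "resource": {"owner": "u2"}}
}
```

In the framework described by the thesis, inputs of this shape would be generated automatically by an LLM rather than written by hand, and the policy would be evaluated by OPA acting as the PDP.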
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/252309