Approaches to unfairness mitigation in AI: an exploratory research
LI, MINGYUE
2023/2024
Abstract
The widespread application of artificial intelligence (AI) in various fields has brought enormous social and economic benefits, but it has also raised concerns about fairness and bias. For instance, AI-based recruitment systems have shown bias with respect to gender and race; U.S. courts use COMPAS to predict the recidivism risk of criminal defendants, and the system often overestimates the recidivism risk of Black defendants, leading to unfair judicial outcomes; and facial recognition systems have lower accuracy when identifying people of color. This thesis analyzes the existing literature, classifying and summarizing methods for mitigating unfairness in AI systems, in order to provide a reference for the development of debiasing technology in future AI systems. We define a methodology that starts with research questions, literature query statements, and literature screening criteria for AI unfairness mitigation techniques. Based on this methodology, we review and analyze the literature, categorize it by AI field into machine learning, deep learning, recommendation systems, large language models (including natural language processing), knowledge graphs, and information retrieval, and sort the works by year. We also summarize the experimental datasets commonly used for debiasing. We then investigate debiasing techniques applied at the pre-processing, in-processing, and post-processing stages. The results show the advantages and disadvantages of these techniques in specific AI subfields. Finally, we propose some future research directions in the field of AI fairness. Our systematic review and analysis of the literature on AI unfairness mitigation techniques is expected to provide a valuable reference for scholars and practitioners in related fields.
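As an illustration of the pre-processing family of debiasing techniques mentioned in the abstract, below is a minimal sketch of reweighing in the style of Kamiran and Calders, which assigns each training instance a weight so that the sensitive attribute and the label become statistically independent under the weighted distribution. The variable names and toy data are hypothetical; this is a sketch of the general idea, not the thesis's own implementation.

```python
import numpy as np

def reweighing_weights(sensitive, labels):
    """Pre-processing debiasing sketch (reweighing):
    w(g, y) = P(g) * P(y) / P(g, y), so that after weighting
    the sensitive attribute g and label y are independent."""
    sensitive = np.asarray(sensitive)
    labels = np.asarray(labels)
    n = len(labels)
    weights = np.empty(n, dtype=float)
    for g in np.unique(sensitive):
        for y in np.unique(labels):
            mask = (sensitive == g) & (labels == y)
            if not mask.any():
                continue  # no instances in this (group, label) cell
            p_joint = mask.sum() / n                               # observed P(g, y)
            p_expected = (sensitive == g).mean() * (labels == y).mean()
            weights[mask] = p_expected / p_joint                   # >1 for under-represented cells
    return weights

# Hypothetical toy data: a binary group attribute and a binary outcome.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
hired = np.array([1, 1, 1, 0, 1, 0, 0, 0])
print(reweighing_weights(group, hired))
```

The returned weights can typically be passed as per-sample weights to a downstream classifier, leaving the model and its predictions untouched, which is the defining trait of pre-processing approaches as opposed to in-processing (modifying the training objective) and post-processing (adjusting the outputs).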
File | Size | Format
---|---|---
Approaches_to_Unfairness_Mitigation_in_AI__An_Exploratory_Research_Mingyue_Li.pdf (accessible online only to authorized users) | 3.93 MB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/223082