Embalming and dissecting Artificial Intelligence. Visual explanations for the general public
Gobbo, Beatrice
2021/2022
Abstract
Artificial Intelligence (AI) algorithms – and the data that feed them – are increasingly imbued with agency and impact, and are empowered to make decisions in our lives across a wide variety of domains: from search engines, information filtering, political campaigns, and health to the prediction of criminal recidivism or loan repayment. Yet algorithms are difficult to understand. Explaining how they exercise their power and influence, and how a given input (whether or not consciously released) is transformed into an output, is an ambitious goal. In Computer Science, Explainable Artificial Intelligence (XAI) techniques have been developed to disclose and study AI algorithms, letting expert users, domain experts, and newcomers explore their inner workings and understand the nature of their outputs. Moreover, understanding the mechanisms of the algorithms that control AI machines is a concern that has also reached humanistic fields such as the Social Sciences and Science and Technology Studies, which have focused chiefly on the effect these emerging technologies have on society. Visual communication, especially Data Visualization and Visual Analytics for XAI, has proven to be a practical means of representing and explaining the mechanisms of algorithms, or of justifying their results through explanatory processes. However, current research on XAI does not address the needs of the people affected by these algorithms. To meet this need, the visual heritage of communication design made it possible to observe XAI from a novel perspective, focusing on the representation of the social context and on the reproduction of algorithmic settings as pivotal elements for explaining AI to the general public. The research was conducted with a mixed methodology, predominantly guided by Research through Design processes and complemented by qualitative and quantitative approaches from the social science tradition and Human-Computer Interaction.
Specifically, considering the permeability of the fields involved and operating through acts of translation, the dissertation proposes embalming and dissecting as valuable visual communication tactics to expand XAI's field of action toward the general public.

File: GobboBeatrice_PhDThesis.pdf (19.6 MB, Adobe PDF)
Description: PhD Thesis, Gobbo Beatrice
Accessible online only to authorized users.
Documents in POLITesi are protected by copyright, and all rights are reserved unless otherwise indicated.
https://hdl.handle.net/10589/188714