Explainability for multiple stakeholders: the design and development of AI for augmented decision-making
Brusasco, Luca
2023/2024
Abstract
Artificial Intelligence (AI) has become a key driver of disruptive innovation, significantly changing our working environments, decision-making processes, and stakeholder interactions. Despite its opportunities, AI poses substantial risks that could undermine our society. The notion of trustworthy AI, often defined as a set of principles, has emerged to manage both the opportunities and the risks of AI and to ensure the ethical adoption of the technology. Among these principles, the explainability of AI has long attracted scholarly attention, particularly from a technical perspective, because of its strong connection with stakeholders. Still, many gaps remain in the literature on the linkage between explainability and management. This Master’s Thesis addresses some of these gaps through the research question: "How is explainability for multiple stakeholders enabled in the design and development phases of augmented decision-making systems based on AI?". The question is answered through a single case study methodology, drawing on the case of an Italian public agency implementing AI-based augmented decision-making systems as part of its digital transformation strategy. The empirical analysis is based on the coding of interviews and participant observations involving the multiple stakeholders engaged in the project. The Thesis finds that developers, managers, and users all foster explainability, each with specific contributions across the considered phases of the AI lifecycle. The Thesis advances the academic debate on the explainability of AI by formalising, in a conceptual framework, four elements of novelty: mutual learning, rotating mediation, personalised explainability, and process fit. The relevance of the explainability of AI in the design and development phases, and not only from a technical perspective, is confirmed with specific reference to augmented decision-making systems. Further, the paramount role of a multi-stakeholder perspective throughout the AI lifecycle is acknowledged.
File | Description | Size | Format | Access
---|---|---|---|---
2024_07_Brusasco_Tesi_01.pdf | Master’s Thesis | 1.3 MB | Adobe PDF | online access restricted to authorized users
2024_07_Brusasco_Executive Summary_02.pdf | Executive Summary | 592.49 kB | Adobe PDF | online access restricted to authorized users
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/222942