Emotion-driven architecture for human-robot interaction
RIGHI, ELEONORA
2024/2025
Abstract
This thesis presents the design and implementation of an affective architecture for a non-anthropomorphic robot capable of interacting with a human actor in an improvisational setting. The objective of the work is to enable autonomous, coherent, and expressive behavior based on internal emotional dynamics rather than predefined scripts, thereby enhancing the naturalness and engagement of human–robot interaction. The proposed system was developed on the \textit{BlackWing} platform and models eight emotions based on Plutchik’s framework. Emotional states evolve over time according to external stimuli and internal factors, including personality and mood: personality parameters influence emotional thresholds and temporal persistence, while mood introduces a longer-term affective bias. Sensory inputs from multiple modalities, such as vision, proximity, touch, and sound, are interpreted as interaction events that update the internal emotional state through dedicated equations. When emotional thresholds are exceeded, the dominant state is expressed through coordinated movements and sound, exploiting the expressive potential of the robot’s morphology. The proposed architecture integrates perception, emotional processing, and behavioral control within an embedded system. A user study was conducted to evaluate the perceptual effectiveness of the expressions, providing preliminary evidence of their discernibility in human–robot interaction.

| File | Description | Size | Format |
|---|---|---|---|
| 2026_03_Righi_Thesis.pdf | thesis text (openly accessible online) | 3.6 MB | Adobe PDF |
| 2026_03_Righi_ExecutiveSummary.pdf | executive summary (openly accessible online) | 416.1 kB | Adobe PDF |
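The affective loop the abstract describes (stimulus-driven updates, personality-scaled thresholds and persistence, a slow mood bias, threshold-gated expression) can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual equations: all names, constants, and the specific update rule (`gain`, `decay`, `threshold`) are assumptions chosen for clarity.

```python
# Illustrative sketch of a Plutchik-style affective state (assumed form,
# not the thesis's actual model): interaction events raise emotion levels,
# personality sets the decay rate (temporal persistence) and the expression
# threshold, and mood biases how strongly stimuli are taken up.
from dataclasses import dataclass, field

EMOTIONS = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

@dataclass
class AffectiveState:
    decay: float = 0.9      # personality: persistence per time step (nearer 1 = longer lasting)
    threshold: float = 0.6  # personality: level required before expression
    mood: float = 0.0       # long-term affective bias in [-1, 1]
    levels: dict = field(default_factory=lambda: {e: 0.0 for e in EMOTIONS})

    def apply_event(self, emotion: str, intensity: float) -> None:
        """An interaction event (e.g. touch, sound) raises one emotion."""
        gain = 1.0 + 0.3 * self.mood          # mood biases stimulus uptake
        new = self.levels[emotion] + gain * intensity
        self.levels[emotion] = max(0.0, min(1.0, new))

    def step(self) -> None:
        """One time step: every emotion decays toward the neutral baseline."""
        for e in EMOTIONS:
            self.levels[e] *= self.decay

    def dominant(self):
        """Return the emotion to express, if any level crosses the threshold."""
        e = max(self.levels, key=self.levels.get)
        return e if self.levels[e] >= self.threshold else None

state = AffectiveState(mood=0.2)
state.apply_event("joy", 0.7)
print(state.dominant())   # "joy": 0.7 * 1.06 = 0.742 >= 0.6
state.step(); state.step(); state.step()
print(state.dominant())   # None: 0.742 * 0.9**3 ≈ 0.541 < 0.6
```

The threshold gate mirrors the abstract's design choice: below-threshold dynamics stay internal, and only a dominant above-threshold state is externalized through movement and sound.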
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/253203