Development of a computer vision based social robot for interaction and joint attention task delivery with autism spectrum disorder children
Sortino, Dario Maria
2019/2020
Abstract
Autism is a neurodevelopmental disorder characterized by impairments in both social interaction and verbal/non-verbal communication, together with repetitive patterns of behaviour. Due to the great variety of symptoms, this condition is called Autism Spectrum Disorder (ASD). Even though there is no cure, therapies can improve the lives of affected people. Children with ASD show deficits in several behaviours; one of them is Joint Attention, defined as the ability to share attentional focus. When dealing with children, therapies that use Social Robots are particularly interesting, since robots easily attract the child. The goal of this project is to create a cheap, deliverable Social Robot for children with ASD that is able not only to interact with and entertain them, but also to train and measure the Joint Attention behaviour during therapy. To achieve this, a well-known robot base, equipped with microcontrollers, motors, encoders and wheels, was extended with Computer Vision capabilities. Two external hardware accelerators were used to run Convolutional Neural Networks in real time on the images captured by a small video camera embedded in the robot. The main models deal with object detection, pose estimation (body joints and facial landmarks) and gaze estimation. A whole range of interactions has been implemented: the robot is able to move and to play sounds, and it is equipped with LEDs; together, these channels deliver sensory feedback when the child interacts with it. The behaviour of the robot was also designed to support a Joint Attention task and to measure how long the subject looks at the target of the prompt. In terms of frame-rate performance, the robot processes the video stream at 15 down to 3.3 FPS, depending on the models running at the time. For the movement parameters, a trade-off value was also found to ensure proper behaviour under different circumstances.
Several interaction tests were carried out with adults, after instructing them about Oimi’s functioning, in order to fine-tune the robot’s behaviour. Oimi does not behave in an unpredictable way when the final user is aware of its limitations. The interaction appears natural enough, and the expressed emotions are appropriate to the kind of interaction performed. The prompt message for the Joint Attention task is interesting and engaging. Due to the COVID-19 emergency it was not possible to test the robot with children with ASD. The only field test that could be carried out was with a typically developing (TD) child, 4 years old, who had prior experience with other robots. The robot proved interesting and able to entertain the child. Being TD, the child appeared under-stimulated after a few minutes, but the emotions the robot was meant to convey were perceived correctly. The prompt of the Joint Attention task was also understood, but the child’s behaviour was so unpredictable that it was not possible to measure his performance in the task from the robot’s point of view.
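The joint-attention measurement described above — how long the child looks at the target of the prompt — can be sketched as a simple per-frame ratio. This is a hypothetical illustration, not the thesis code: the function name, the box format and the use of a 2D gaze point are assumptions, and in practice the gaze estimate would come from the robot’s gaze-estimation model.

```python
# Hypothetical sketch (not Oimi's actual code): estimating how long the
# child looks at the prompted target from per-frame gaze estimates.

def joint_attention_ratio(gaze_points, target_box):
    """Fraction of valid frames in which the estimated gaze point falls
    inside the target's bounding box (x_min, y_min, x_max, y_max).
    Frames where gaze could not be estimated are passed as None."""
    x_min, y_min, x_max, y_max = target_box
    valid = [p for p in gaze_points if p is not None]
    if not valid:
        return 0.0
    hits = sum(1 for (x, y) in valid
               if x_min <= x <= x_max and y_min <= y <= y_max)
    return hits / len(valid)

# Example: five frames captured at ~3.3 FPS; gaze was lost in one frame.
gaze = [(120, 80), (130, 85), None, (400, 300), (125, 82)]
ratio = joint_attention_ratio(gaze, target_box=(100, 60, 200, 120))
print(f"{ratio:.2f}")  # 0.75: target fixated in 3 of the 4 valid frames
```

At 3.3 FPS the ratio can also be converted into seconds of attention by multiplying the hit count by the frame period, which is one plausible way to turn the per-frame detections into a therapy-level indicator.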
| File | Size | Format | |
|---|---|---|---|
| Master_Thesis_Sortino.pdf (accessible online only to authorized users) | 22.27 MB | Adobe PDF | Visualizza/Apri |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/175587