Learning human actions semantic in virtual reality for a better human-robot collaboration
Preziosa, Giuseppe Fabio
2020/2021
Abstract
Over the last few years, advances in collaborative robotics have led Small and Medium Enterprises (SMEs) to enhance their industrial automation. Collaborative robots (cobots), which support production processes, are agile and versatile tools that can be quickly reconfigured and deployed wherever they are needed; however, programming them requires an experienced person to reconfigure the robot every time its task changes. This work proposes a new method to program robots intuitively through Virtual Reality, allowing a non-skilled operator to teach the robot the desired task. The human co-worker can demonstrate the sequence of Skills in the virtual environment in which the robot's learning phase occurs. At the same time, the performed Skills are characterized with appropriate pre- and post-conditions. These parameters allow the robot to assess the feasibility of the operations taught by the user and to detect when a Skill has not been executed correctly, alerting the operator and waiting for his/her intervention. Furthermore, in a collaborative context, the user can actively participate in the execution of the task assigned to the cobot, which responsively adapts its actions by recognizing which ones have already been carried out and which have not.
| File | Description | Size | Format |
|---|---|---|---|
| 2022_06_Preziosa_01.pdf (Open Access from 19/05/2023) | Thesis text | 15.58 MB | Adobe PDF |
| 2022_06_Preziosa_02.pdf (Open Access from 19/05/2023) | Executive Summary of the thesis | 4.7 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/188955