Learning models for task-oriented grasp quality metrics
Di Pietro, Gianpaolo
2018/2019
Abstract
The research community has devoted considerable effort to the problem of grasping novel objects in different settings, with the goal of holding them robustly with robotic manipulators. However, real manipulation tasks go far beyond holding objects, and the quality of a grasp depends on the task the robot is meant to perform. While many quality metrics exist to evaluate a grasp in isolation, no clear quantification of the quality of a grasp relative to the task it is used for has been defined yet. In this thesis, we propose a framework that extends the concept of grasp quality metric to task-oriented grasping by defining affordance functions via basic grasp metrics for an open set of task affordances. We evaluate both the effectiveness of the proposed task-oriented metrics and their practical applicability by learning to infer them, first from the complete object geometry and then from vision. We thus assess the validity of our framework both in the perfect-information setting, i.e., with a known object model, and in the partial-information setting, i.e., inferring task-oriented metrics from vision, underlining the advantages and limitations of each. In the former, physical metrics of grasp hypotheses on an object are defined and computed in simulation with the known object model; in the latter, deep models are trained to infer these properties from partial information in the form of synthesized range images.

| File | Size | Format | |
|---|---|---|---|
| thesis.pdf (Thesis text; Open Access from 17/09/2020) | 44.1 MB | Adobe PDF | View/Open |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/149814