Metrological analysis of Time-of-Flight cameras performances for multipurpose 3D reconstruction

GIANCOLA, SILVIO

Abstract

In this thesis, TOF cameras are investigated and their performance is analyzed for general-purpose scene reconstruction. TOF cameras are depth devices that provide a range map from a given point of view. Their operating principle is based on LiDAR technology, in which a powerful light source emits a coded light signal that is detected back at multiple points by a matrix of photodiodes. Scene depth is computed via phase estimation between the emitted and received signals, taking into account the speed of light in the environment. The Kinect V2 TOF camera provides state-of-the-art range map measurements. In this work, I describe its metrological qualification, conducted according to the GUM (Guide to the Expression of Uncertainty in Measurement). The accuracy and precision of the measurements are assessed as functions of target-to-sensor distance, camera sensor pixel, incidence angle, and target material. Its performance is compared with that of state-of-the-art structured-light range map sensors, and its body-tracking function is also evaluated. I then investigate the use of the Kinect V2 camera in SLAM reconstruction pipelines, which register successive frame acquisitions to provide larger-scale reconstructions. The novelty of this work lies in the use of additional information to improve reconstruction performance. In a first approach, geometric information is used in addition to visual information to find more robust correspondence matches for the registration process. Including geometric information in keypoint features should improve alignment in low-texture scenes and, consequently, the overall reconstruction. Then, the use of an IMU rigidly fixed to the depth sensor is investigated. Inertial measurements are used as priors or hard constraints for the rigid-body transformation estimation. In particular, IMU orientation measurements are robust enough to linearize the rigid registration problem. The improvements provided by 3D keypoints as well as IMU priors are validated in real-case reconstruction applications.
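The phase-estimation principle mentioned above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: it assumes a single-frequency continuous-wave ToF pixel sampled at four phase offsets (0°, 90°, 180°, 270°), the classic four-bucket scheme, and ignores phase wrapping beyond the unambiguous range and multipath effects.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth(a0, a1, a2, a3, f_mod):
    """Estimate depth from four phase-shifted correlation samples
    (0, 90, 180, 270 degrees) of a continuous-wave ToF pixel.

    f_mod: modulation frequency in Hz.
    Sketch only: single frequency, no phase unwrapping.
    """
    # Recover the phase offset between emitted and received signal.
    phi = math.atan2(a1 - a3, a0 - a2) % (2 * math.pi)
    # Phase maps to round-trip time; the factor 2 halves the round trip.
    return C * phi / (4 * math.pi * f_mod)
```

At 16 MHz modulation, the unambiguous range is c / (2 f) ≈ 9.37 m, which is consistent with the working range of sensors of this class.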
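The idea of enriching keypoint features with geometric information can be illustrated with a simple sketch. The weighting scheme and function names here are assumptions for illustration, not the thesis's exact method: each keypoint carries a visual descriptor and a geometric one, both are L2-normalized and concatenated (with a weight on the geometric part), and correspondences are found by nearest neighbor in the joint space.

```python
import numpy as np

def hybrid_match(visual_a, geom_a, visual_b, geom_b, w=0.5):
    """Match keypoints between frames a and b using concatenated
    visual + geometric descriptors (hypothetical weighting scheme).

    visual_*, geom_*: (N, Dv) and (N, Dg) descriptor arrays.
    Returns, for each keypoint in a, the index of its match in b.
    """
    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    da = np.hstack([l2norm(visual_a), w * l2norm(geom_a)])
    db = np.hstack([l2norm(visual_b), w * l2norm(geom_b)])
    # Pairwise squared distances, then nearest neighbor in b for each a.
    d2 = ((da[:, None, :] - db[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```

The intuition matches the abstract: in low-texture regions the visual part of the descriptor is nearly uninformative, so the geometric part can still disambiguate correspondences.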
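The claim that IMU orientation measurements linearize the rigid registration problem can be made concrete. Under the assumption of known point correspondences (a simplification for illustration), fixing the rotation R from the IMU reduces argmin over (R, t) of Σ‖R pᵢ + t − qᵢ‖² to a problem that is linear in t, with the closed form t = mean(q) − R · mean(p):

```python
import numpy as np

def translation_given_rotation(src, dst, R):
    """Closed-form translation of a rigid registration when the
    rotation R is fixed (e.g., taken from an IMU orientation prior).

    src, dst: (N, 3) arrays of corresponding 3D points.
    Minimizes sum ||R p_i + t - q_i||^2 over t alone.
    """
    return dst.mean(axis=0) - R @ src.mean(axis=0)
```

This is why a reliable orientation prior is valuable: the remaining estimation is a linear least-squares problem with a unique solution, instead of a non-convex joint rotation-translation search.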
COLOSIMO, BIANCA MARIA
FOSSATI, FABIO VITTORIO
7 February 2017
Doctoral thesis
Attached file: phDThesis Silvio FinalFeb2017.pdf (description: PhD Thesis; 119.08 MB; Adobe PDF; not accessible)
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10589/131440