Tightly coupled vision aided inertial navigation for three-dimensional motion

ASADI, EHSAN

Abstract

This dissertation deals with the design and evaluation of a vision-aided inertial navigation system for small rotorcraft UAVs (RUAVs) operating in GPS-denied environments. In particular, a tightly-coupled stereo vision-aided inertial navigation system is studied as a synergistic integration of vision with other sensors. The delay caused by the on-line processing of visual information is also considered and handled by delayed fusion of the measurements in the estimator. Moreover, the method is refined and optimized through the implementation of an exact vision-based model, so that the negative effects on estimator performance caused by delays and by the relative nature of the state measurements are mitigated simultaneously. The formulation of the proposed tightly-coupled stereo vision-aided inertial navigation system is derived for general three-dimensional motion, such as that of an aerial vehicle. To avoid the loss of information that may result from preprocessing the visual data, a set of feature-based motion sensors and an inertial measurement unit are fused directly to estimate the vehicle state. Two alternative feature-based observation models are considered within the proposed fusion architecture. The first uses the trifocal tensor to propagate feature points by a homography, thereby expressing geometric constraints among three consecutive views. The second is derived by applying a rigid-body motion model to 3D reconstructed feature points. A kinematic model accounts for the vehicle motion, and a Sigma-Point Kalman Filter is used to achieve robust state estimation in the presence of nonlinearities. It is also demonstrated how the delay caused by image processing, if not explicitly taken into account, can lead to appreciable degradation of estimator performance. Four existing methods of delayed fusion are then considered and compared.
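The abstract names a Sigma-Point Kalman Filter as the estimator. As a rough illustration of the technique only, and not of the thesis's actual filter, state vector, or tuning, the following is a minimal unscented-transform predict/update cycle; the function names and scaling constants are illustrative assumptions.

```python
import numpy as np

def sigma_points(x, P, alpha=1.0, beta=2.0, kappa=0.0):
    """Generate the 2n+1 sigma points and weights for mean x, covariance P."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)         # matrix square root
    pts = np.vstack([x, x + S.T, x - S.T])        # rows are sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))      # mean weights
    wc = wm.copy()                                # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    return pts, wm, wc

def ukf_step(x, P, f, h, Q, R, z):
    """One predict/update cycle of a sigma-point (unscented) Kalman filter
    with process model f, measurement model h, and measurement z."""
    pts, wm, wc = sigma_points(x, P)
    # Propagate sigma points through the (possibly nonlinear) process model.
    Xp = np.array([f(p) for p in pts])
    x_pred = wm @ Xp
    P_pred = Q + sum(w * np.outer(d, d) for w, d in zip(wc, Xp - x_pred))
    # Re-draw sigma points around the prediction and push them through h.
    pts, wm, wc = sigma_points(x_pred, P_pred)
    Zp = np.array([h(p) for p in pts])
    z_pred = wm @ Zp
    Pzz = R + sum(w * np.outer(d, d) for w, d in zip(wc, Zp - z_pred))
    Pxz = sum(w * np.outer(dx, dz)
              for w, dx, dz in zip(wc, pts - x_pred, Zp - z_pred))
    K = Pxz @ np.linalg.inv(Pzz)                  # Kalman gain
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ Pzz @ K.T
    return x_new, P_new
```

Because the unscented transform propagates sample points through f and h directly, no Jacobians are needed, which is what makes the approach attractive for the nonlinear vision-based observation models described above.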
Simulations and Monte Carlo analyses are used to assess the estimation error and computational effort of the various methods. A best-performing formulation is identified that properly handles the fusion of delayed measurements in the estimator without increasing the time burden of the filter. With delayed fusion, better performance can be achieved by trading off the frequency against the computational complexity of processing relative-state measurements. The proposed formulations are evaluated through field testing with the STAR RUAV, an aerial simulation experiment, and a real dataset gathered in a dynamic indoor setting by the Rawseeds project. The characteristics of the inertial navigation system and state estimator are verified by experimental trials and post-processing tests using sensor data acquired on board a small rotorcraft. Lacking a suitable three-dimensional dataset, a ground robot dataset is also used to validate the concept and illustrate the basic performance of the proposed methods. This example is taken from the Rawseeds project, which provides several multi-sensor datasets with associated ground truth. Specifically, a dynamic indoor dataset is used to assess the performance of a purely vision-aided inertial system under GPS-denied conditions and unreliable magnetometer readings. The effects of delays are demonstrated in an ad hoc virtual environment based on a high-fidelity simulator of the system that includes uncertainties, noise, and sensor and actuator models.
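The abstract compares several delayed-fusion strategies without detailing them. As a minimal one-dimensional sketch of one common strategy, state recalculation (rewind to the measurement's timestamp, fuse, then replay the buffered prediction steps), the following illustrates the idea; the class, its parameters, and the random-walk process model are invented for illustration and are not the specific methods evaluated in the thesis.

```python
from collections import deque

class DelayedFusionKF:
    """Scalar Kalman filter that buffers its state before each predict step,
    so a measurement arriving with delay can be fused at the time it was
    actually taken and the intermediate predictions replayed."""

    def __init__(self, x0, P0, q, r, buffer_len=50):
        self.x, self.P = x0, P0       # state estimate and variance
        self.q, self.r = q, r         # process and measurement noise variances
        self.history = deque(maxlen=buffer_len)  # (step, x, P) snapshots

    def predict(self, step):
        self.history.append((step, self.x, self.P))
        self.P += self.q              # random-walk process model: x stays, P grows

    def update(self, z):
        K = self.P / (self.P + self.r)
        self.x += K * (z - self.x)
        self.P *= (1.0 - K)

    def delayed_update(self, z, step):
        """Fuse measurement z taken at an earlier step, then replay."""
        snapshots = list(self.history)
        idx = next(i for i, (s, _, _) in enumerate(snapshots) if s == step)
        self.history = deque(snapshots[:idx], maxlen=self.history.maxlen)
        _, self.x, self.P = snapshots[idx]   # rewind to the measurement time
        self.update(z)                       # fuse where it belongs in time
        for s, _, _ in snapshots[idx:]:      # replay the buffered predictions
            self.predict(s)
```

Replaying is exact but its cost grows with the delay, which is one face of the frequency-versus-complexity trade-off mentioned above; cheaper alternatives approximate the correction instead of recomputing it.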
VIGEVANO, LUIGI
BORRI, MARCO
14-Oct-2013
Doctoral thesis
Attached files
File: Thesis_Asadi.pdf
Description: Thesis text
Size: 24.7 MB
Format: Adobe PDF
Access: available on the internet only to authorized users

Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10589/82784