Autonomous aircraft ground navigation via map-free and map-based visual localization
Stecchini, Irene; Tison, Giorgia
2024/2025
Abstract
The increasing complexity of airport operations and the continuous growth of air traffic demand increasingly advanced automation solutions, capable of ensuring safety, efficiency, and reliability under all operational conditions. In particular, autonomous ground guidance can reduce the risk of human error and improve maneuver management even under low visibility or high cognitive load. Within this context, this Thesis proposes the development of a vision-based autonomous guidance and localization system that enables aircraft navigation across the entire airport surface, even in the absence of Global Positioning System (GPS) data. The first contribution focuses on map-free autonomous taxiway guidance, based on real-time processing of images acquired by an onboard camera. The proposed algorithm isolates the centerline and, using an arc-based approach, identifies key points along the taxiway line, enabling the computation of directional errors and ensuring robust tracking even in the presence of gaps or partially occluded markings. The second contribution introduces a map-free autonomous runway guidance algorithm, which exploits the camera images to detect key runway features and estimate the aircraft’s lateral deviation from the nominal centerline. This approach ensures proper alignment during the critical takeoff and landing phases and allows accurate position estimation even at speeds consistent with these maneuvers. Finally, a map-based localization method is presented, which compares the visual information with an airport map to provide a vision-based position estimate. This estimate is refined by fusing odometry and Inertial Measurement Unit (IMU) data through an Extended Kalman Filter (EKF), ensuring continuous localization even without GPS and through airport sections with limited visual information. The developed algorithms were validated in a multibody simulation environment implemented in MATLAB/Simulink and tested with the X-Plane 11 flight simulator. The results confirm the potential of vision-based perception for autonomous airport navigation.
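The thesis implements these algorithms in MATLAB/Simulink; the following Python/NumPy sketch only illustrates the general idea of the arc-based key-point search described in the abstract, under assumed details: starting from a known point on a binarized centerline mask (the segmentation step is not shown), an arc of fixed radius is scanned for crossings with the marking, yielding the next key point and a directional error even when the marking is briefly interrupted. All function names, parameters, and geometric conventions below are illustrative assumptions, not the thesis' actual implementation.

```python
# Minimal, hypothetical sketch of an arc-based centerline key-point search.
# Assumptions (not taken from the thesis): `mask` is a binary image of the
# segmented taxiway centerline, `seed` is a pixel known to lie on the line,
# and `heading` is the current search direction in image coordinates.
import numpy as np

def next_key_point(mask, seed, radius, heading, half_fov=np.pi / 3, n_samples=180):
    """Scan an arc of given radius around `seed` and return the pixel where the
    arc crosses the centerline mask, or None if the marking shows a gap."""
    h, w = mask.shape
    angles = heading + np.linspace(-half_fov, half_fov, n_samples)
    xs = np.clip(np.round(seed[0] + radius * np.cos(angles)).astype(int), 0, w - 1)
    ys = np.clip(np.round(seed[1] + radius * np.sin(angles)).astype(int), 0, h - 1)
    hits = mask[ys, xs] > 0
    if not hits.any():
        return None  # gap or occluded marking: no crossing on this arc
    # keep the crossing closest to the current heading to stay on the same branch
    best = np.argmin(np.abs(angles[hits] - heading))
    return np.array([xs[hits][best], ys[hits][best]], dtype=float)

def directional_error(seed, key_point, heading):
    """Wrapped angular offset between the current heading and the bearing to the key point."""
    bearing = np.arctan2(key_point[1] - seed[1], key_point[0] - seed[0])
    return np.arctan2(np.sin(bearing - heading), np.cos(bearing - heading))
```

Chaining `next_key_point` calls, each seeded at the previously found point, would trace the centerline ahead of the aircraft and provide a sequence of directional errors for the guidance loop; the actual geometry, error definitions, and gap-handling strategy used in the thesis may differ.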
| File | Description | Size | Format | Access |
|---|---|---|---|---|
| 2025_12_Stecchini_Tison_Thesis_01.pdf | thesis text | 42.34 MB | Adobe PDF | not accessible |
| 2025_12_Stecchini_Tison_Executive Summary_02.pdf | executive summary | 1.32 MB | Adobe PDF | not accessible |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/246481