Semi-supervised video object segmentation with convolutional recurrent neural networks
LATTARI, FRANCESCO
2016/2017
Abstract
In recent years, the exponential growth of image and video data has driven the development of increasingly effective methods for analyzing this kind of data. In particular, Deep Learning has already largely proven its capabilities in solving image-related problems such as image classification, object detection, and semantic segmentation. Recently, the computer vision community has focused its efforts on extending this type of approach to the analysis of video data. There are many applications of such technology in everyday life, including, but not limited to, autonomous driving, video summarization, video surveillance, traffic control, and virtual reality. This work focuses on the specific problem of automatic detection of salient objects in videos. We approached the problem by means of optical flow estimation, predicting how the objects move from one frame to the next. The estimated optical flow is then used to propagate the object segmentation through time, tracking the objects in all the frames. To this purpose we propose a new Deep Learning approach to semi-supervised video object segmentation which exploits both the capability of Convolutional Neural Networks (CNNs) to analyze the spatial correlations among pixels and the capability of Recurrent Neural Networks (RNNs) to capture the temporal information in sequential data. The analysis of video data is a complex task due to irregular camera motion, occlusions, wide ranges of scales and illumination conditions, the presence of multiple objects in the scene, and so on. We tackled these challenges by designing an architecture, both convolutional and recurrent, that takes the best from both approaches and makes it possible to accurately predict both the optical flow and the segmentation of each frame of a video, obtaining results comparable with the state of the art. This thesis introduces the task, carefully reviews the related state of the art, describes the approach we adopted in building the aforementioned model, and details the experimental evaluation of each component of the model to prove its effectiveness.
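The propagation step summarized above, warping the previous frame's segmentation with the estimated optical flow, can be illustrated with a minimal sketch. This is only an illustrative example of flow-based mask warping, not the implementation described in the thesis: the function name `warp_mask`, the backward-flow convention (a field mapping each pixel of frame t+1 back to frame t), and the use of OpenCV's `remap` are all assumptions made for the sake of the example.

```python
# Illustrative sketch (NOT the thesis implementation): propagate a binary
# segmentation mask from frame t to frame t+1 with a dense optical flow field.
import numpy as np
import cv2


def warp_mask(mask_t: np.ndarray, flow_t1_to_t: np.ndarray) -> np.ndarray:
    """Warp mask_t (H, W) into frame t+1, given a backward flow (H, W, 2)
    whose (dx, dy) maps each pixel of frame t+1 to its source in frame t."""
    h, w = mask_t.shape
    # Absolute sampling coordinates: for pixel (x, y) of frame t+1,
    # read mask_t at (x + dx, y + dy).
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow_t1_to_t[..., 0]).astype(np.float32)
    map_y = (grid_y + flow_t1_to_t[..., 1]).astype(np.float32)
    warped = cv2.remap(mask_t.astype(np.float32), map_x, map_y,
                       interpolation=cv2.INTER_LINEAR,
                       borderMode=cv2.BORDER_CONSTANT, borderValue=0)
    return (warped > 0.5).astype(np.uint8)  # back to a hard binary mask


# Usage: starting from the annotated mask of the first frame, propagate it
# through the sequence with one estimated flow field per frame pair.
# masks = [first_frame_mask]
# for flow in backward_flows:
#     masks.append(warp_mask(masks[-1], flow))
```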
File | Description | Size | Format
---|---|---|---
thesis.pdf (not accessible) | Thesis text | 18.01 MB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/137630