Deployable and parametric adversarial attacks against object detector
DERETTI, ANDREA
2021/2022
Abstract
More and more often we rely on technology to carry out tasks that, until a few years ago, we thought could only be performed by humans: just think of self-driving vehicles or facial recognition. Today, algorithms achieve performance comparable to, if not better than, that of humans. Despite this, it has been shown that a model can be fooled by making small, carefully crafted changes to its input. These modifications, called adversarial attacks, do not arouse suspicion in the eyes of a person, and for this reason questions have arisen regarding the safety of such models. Although the study of adversarial attacks was initially limited to the digital domain, the security question also concerns their application in the real world. In particular, the goal of this work is to create an adversarial attack that fools an object detector when applied to objects belonging to the class person; in other words, to make people disappear once the attack is applied. The real-world application considered is the use of the object detector in a surveillance system. For the attack to remain adversarial when deployed in reality, after its digital optimization it must be faithfully reproducible. For this reason, the attacks created in this work aim to be as simple as possible, using geometric shapes as their constituent elements. These shapes are completely defined by parameters that determine their position, their color and their shape.
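The abstract describes attacks built from geometric shapes that are "completely defined by parameters" for position, color and shape. As a minimal illustration of that idea only, the Python sketch below parameterizes a patch as a list of circles and rasterizes it; the names (`Circle`, `render_patch`), the choice of circles, and the parameter ranges are hypothetical and are not taken from the thesis.

```python
# Hypothetical sketch of a parametric patch: each shape is fully described by
# a small parameter vector (position, size, colour). In an actual attack these
# parameters would be the variables optimised to suppress "person" detections.
from dataclasses import dataclass
import numpy as np

@dataclass
class Circle:
    cx: float                       # horizontal centre, in [0, 1] relative to patch width
    cy: float                       # vertical centre, in [0, 1] relative to patch height
    radius: float                   # radius, relative to the shorter patch side
    rgb: tuple[float, float, float] # colour, each channel in [0, 1]

def render_patch(shapes, height=64, width=64):
    """Rasterise a list of Circle parameters into an RGB image of shape (H, W, 3)."""
    patch = np.ones((height, width, 3), dtype=np.float32)  # white background
    ys, xs = np.mgrid[0:height, 0:width]
    for s in shapes:
        mask = ((xs - s.cx * width) ** 2 + (ys - s.cy * height) ** 2
                <= (s.radius * min(height, width)) ** 2)
        patch[mask] = s.rgb
    return patch

# Example: a patch made of two circles.
patch = render_patch([Circle(0.3, 0.4, 0.20, (1.0, 0.0, 0.0)),
                      Circle(0.7, 0.6, 0.15, (0.0, 0.0, 1.0))])
```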
| File | Description | Size | Format | Access |
|---|---|---|---|---|
| Classical_Format_Thesis___Adrea_Deretti.pdf | Thesis | 8.09 MB | Adobe PDF | Authorized users only from 01/12/2023 |
| Executive_Summary___Andrea_Deretti.pdf | Executive summary | 4.8 MB | Adobe PDF | Authorized users only from 01/12/2023 |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/199072