Evasion attacks against banking fraud detection systems
SANTINI, LUCA
2018/2019
Abstract
Machine learning models are known to be vulnerable to adversarial samples: inputs crafted through adversarial machine learning algorithms with the objective of deceiving the classifier. Adversarial samples crafted against one model can also be effective against other models, even models with a different architecture, as long as both perform the same task: this property is known as transferability. Hence, even an attacker with little knowledge of the target system can train a substitute model, craft adversarial samples against it, and then transfer them to the real target model. In this thesis, we study the application of adversarial machine learning techniques to the domain of banking fraud detection. To this end, we identify the main challenges to applying these algorithms in this field and overcome them by designing a novel approach for performing evasion attacks. Using two bank datasets and several state-of-the-art fraud detection systems, we evaluate the security of these models by simulating an attacker who performs an evasion attack with different degrees of knowledge. We show that the outcome of the attack depends strongly on the fraud detector under attack, with evasion rates ranging from 60% to 100%.
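As a rough illustration of the substitute-model strategy described in the abstract, the sketch below trains a differentiable substitute on labels queried from a black-box classifier and then perturbs fraud-like samples with a simple FGSM-style step before replaying them against the target. This is not the approach developed in the thesis: the data is synthetic, the model choices (a random-forest target, a logistic-regression substitute) and all names such as `fgsm_like_perturbation` are illustrative assumptions.

```python
# Minimal sketch of a transferability-based evasion attack (illustrative only).
# Assumptions: numeric feature vectors, a black-box target model exposing only
# predict(), and a differentiable substitute model (logistic regression here).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for transaction data: class 1 plays the role of "fraud".
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Black-box target model: the attacker can only query its predictions.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Substitute model trained by the attacker on labels obtained by querying the target.
X_query = X + np.random.normal(scale=0.05, size=X.shape)  # attacker's own samples
y_query = target.predict(X_query)                         # labels from queries
substitute = LogisticRegression(max_iter=1000).fit(X_query, y_query)

def fgsm_like_perturbation(model, x, eps=0.5):
    """One gradient-sign step against a linear substitute (FGSM-style)."""
    w = model.coef_.ravel()      # gradient of the linear decision function
    return x - eps * np.sign(w)  # move the sample away from the "fraud" side

# Take samples the target currently flags as fraud and perturb them.
frauds = X[(y == 1) & (target.predict(X) == 1)]
adv = np.array([fgsm_like_perturbation(substitute, x) for x in frauds])

# Evasion rate: fraction of perturbed frauds the target now labels as legitimate.
evasion_rate = np.mean(target.predict(adv) == 0)
print(f"Evasion rate against the black-box target: {evasion_rate:.2%}")
```

The linear substitute keeps the gradient-sign step trivial to compute; real transaction features (categorical fields, aggregates, integrity constraints) would require domain-aware perturbations, which is among the challenges the abstract alludes to.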
File | Description | Access | Size | Format
---|---|---|---|---
msc_thesis_santini.pdf | MSc thesis | Authorized users only from 28/11/2020 | 2.32 MB | Adobe PDF
The documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/152240