RAD-X: an adversarial training approach for Fraud Detection Systems
PALADINI, TOMMASO
2020/2021
Abstract
Researchers have demonstrated that modern banking Fraud Detection Systems (FDSs) are vulnerable to Adversarial Machine Learning (AML) attacks, showing scenarios in which a well-motivated attacker carefully crafts malicious samples that evade classification at inference time. Moreover, uncontrolled periodic retraining of a fraud detector allows an attacker to craft poisoning transactions that, if undetected, slowly degrade the performance of the model over time. Mitigations for evasion and poisoning attacks have been thoroughly explored in the image classification domain, but most of these solutions rely on assumptions that do not hold for the existing AML attacks against FDSs, so they cannot be directly applied to our problem.

In this thesis, we propose Reject ADversarial eXamples (RAD-X), an adversarial training approach for the mitigation of AML attacks against FDSs, directly inspired by the existing technique of adversarial regularization. During the training phase of the FDS, we generate artificial frauds by modifying randomly selected legitimate transactions, using decision rules based on prior knowledge of the input features under the attacker's direct control. We then let the FDS classify these artificial frauds and include the misclassified ones in the final training dataset with the correct label. In other words, we reproduce examples of evasive transactions and show them to our model before the actual attack takes place, anticipating a possible attacker.

We evaluate our approach against a poisoning attack on five system-centric FDSs, using three different attacker strategies, three increasing degrees of attacker knowledge of the system, and two update policies for the inclusion of new incoming transactions. Our experimental validation shows that RAD-X mitigates adversarial attacks based on classification evasion, reducing from 100% to 21.34% the total capital stolen by an attacker with limited knowledge of the FDS. This result is achieved with a small performance trade-off: in the worst case, the generalization performance of the classifier drops by 4.27% due to the increase in false positives. In addition, we demonstrate that our approach achieves better results than state-of-the-art solutions based on outlier detection and adversarial regularization, respectively. Even in the scenario where the attacker has full knowledge of the system, RAD-X reduces the overall stolen capital by 47.40% with respect to the baseline, suggesting that the attacker needs to adopt more complex strategies to evade classification.
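As an illustration of the training-time augmentation described in the abstract, the following Python sketch reproduces its three steps: craft artificial frauds by perturbing legitimate transactions on attacker-controlled features, classify them with the current detector, and add the evasive ones back to the training set with the fraud label. The dataset columns, the assumed attacker-controlled features ("amount", "hour_of_day"), the perturbation rules, and the choice of classifier are hypothetical placeholders for this example, not the configuration used in the thesis.

```python
# Minimal, illustrative sketch of a RAD-X-style augmentation round.
# All feature names and perturbation rules below are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier


def craft_artificial_frauds(legit: pd.DataFrame, n: int,
                            rng: np.random.Generator) -> pd.DataFrame:
    """Perturb randomly sampled legitimate transactions on the features an
    attacker can directly control, using simple rule-based modifications."""
    idx = rng.choice(len(legit), size=n, replace=False)
    frauds = legit.iloc[idx].copy()
    # Example decision rules: inflate the amount and shift the timing.
    frauds["amount"] = frauds["amount"].to_numpy() * rng.uniform(1.5, 5.0, size=n)
    frauds["hour_of_day"] = rng.integers(0, 6, size=n)  # unusual hours
    return frauds


def radx_training_round(train_X: pd.DataFrame, train_y: pd.Series,
                        n_artificial: int = 500, seed: int = 0):
    rng = np.random.default_rng(seed)
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)
    clf.fit(train_X, train_y)

    # 1) Craft artificial frauds from legitimate transactions (label 0).
    legit = train_X[train_y == 0]
    artificial = craft_artificial_frauds(legit, n_artificial, rng)

    # 2) Let the current detector classify them and keep only the evasive
    #    ones, i.e. the artificial frauds misclassified as legitimate.
    evasive = artificial[clf.predict(artificial) == 0]

    # 3) Add the evasive samples to the training set with the correct
    #    (fraud) label, then retrain the detector on the augmented data.
    aug_X = pd.concat([train_X, evasive], ignore_index=True)
    aug_y = pd.concat(
        [train_y, pd.Series(np.ones(len(evasive), dtype=int))],
        ignore_index=True,
    )
    clf.fit(aug_X, aug_y)
    return clf
```

Only the artificial frauds that currently evade the detector are re-labeled and added back, so the augmented training set concentrates on the region of the feature space that an evasion-oriented attacker would exploit.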
https://hdl.handle.net/10589/186214