Exploring and challenging the limits of language models and their application to human-generated textual content
Di Giovanni, Marco
2020/2021
Abstract
Social networks are enormous sources of human-generated content. Users continuously create useful information that is hard to detect, extract, and categorize. Language Models (LMs) have long been among the most useful and widely used approaches to processing textual data. First designed as simple unigram models, they have been improved over the years up to the recent release of BERT, a pre-trained model reaching state-of-the-art performance on many heterogeneous benchmark tasks. In this thesis, I apply LMs to textual content publicly shared on social media. I search for meaningful representations of users that encode their syntactic and semantic features. Once appropriate embeddings are defined, I compute similarities between users to perform higher-level tasks. The tasks tested include the extraction of emerging knowledge, represented by users similar to a given set of well-known accounts; community detection and characterization; user classification (e.g., by political inclination); and controversy detection in online discussions. The successful results demonstrate that similar users share similar textual content, and that LMs can exploit this similarity to generate useful embeddings. Finally, given the evident noise in content shared on social media, I investigate how robust LMs are by designing an evolutionary adversarial algorithm (EFSG) that generates sentences with the aim of fooling LMs. Adversarial training is an effective defence technique, but much work remains before a truly robust language model is obtained.
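The core pipeline described above (embed each user's public text, then compare users by similarity) can be illustrated with a toy example. The thesis relies on BERT-based embeddings; the sketch below instead substitutes the simple unigram count vectors mentioned as the historical starting point, compared with cosine similarity. All user names and posts here are hypothetical.

```python
from collections import Counter
import math

def unigram_vector(posts):
    # Count word occurrences across all of a user's posts:
    # a crude, bag-of-words stand-in for a learned embedding.
    counts = Counter()
    for post in posts:
        counts.update(post.lower().split())
    return counts

def cosine_similarity(u, v):
    # Cosine similarity between two sparse count vectors.
    shared = set(u) & set(v)
    dot = sum(u[w] * v[w] for w in shared)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Hypothetical users: two discuss elections, one discusses football.
users = {
    "alice": ["the election results are in", "vote in the election"],
    "bob":   ["election day is coming", "remember to vote"],
    "carol": ["great football match today", "what a goal"],
}
vectors = {name: unigram_vector(posts) for name, posts in users.items()}

print(cosine_similarity(vectors["alice"], vectors["bob"]))    # higher: shared vocabulary
print(cosine_similarity(vectors["alice"], vectors["carol"]))  # 0.0: no shared words
```

With contextual embeddings in place of raw counts, the same similarity scores drive the higher-level tasks listed in the abstract (finding users similar to known accounts, detecting communities, classifying users).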
File: Tesi.pdf | Size: 3 MB | Format: Adobe PDF | Openly accessible on the internet
Documents in POLITesi are protected by copyright, and all rights are reserved unless otherwise indicated.
https://hdl.handle.net/10589/179110