Towards a unifying modeling framework for data-intensive tools
CILLONI, STEFANO
2018/2019
Abstract
In the past two decades, the volume of data collected and processed by computer systems has grown exponentially. Databases, originally employed to store and query data, are no longer adequate for big data problems: their architectures were conceived for a single machine and therefore limit horizontal scalability. The new big data requirements, namely large volume, high velocity, and heterogeneity of formats, have led to the development of a new family of tools known as data-intensive systems. With the advent of these technologies, new computational paradigms were introduced. MapReduce was one of the first and introduced the idea of moving the computation to the data, which remains stored across multiple nodes. Later, another technique proposed the concept of a data stream: the computation is statically allocated, and the information flows through the system, so the data no longer needs to be stored before it is processed. Today's big data workloads must not only transform raw inputs into valuable knowledge, they also demand database-like capabilities such as state management and transactional guarantees. In this scenario, the design patterns of the centralized databases of the 1980s would impose a very limiting trade-off between strong guarantees and scalability. The NoSQL approach was the first attempt to address this issue: these systems scale, but they provide neither strong guarantees nor a standardized SQL interface. Subsequently, the demand to regain transactional guarantees and to reintroduce the SQL language at scale motivated the rise of NewSQL systems: databases that retain the classic relational model while providing strong guarantees and large horizontal scalability. In this work, we studied data-intensive tools to compare their processing paradigms and the new approaches to strong guarantees in distributed scenarios. We analyzed the differences and common traits that are emerging and that could drive future design patterns. We developed a modeling framework to conduct our analysis systematically. The proposed models could inform the design of next-generation systems, leveraging the established and emerging techniques that today's data-intensive tools use to store, query, and analyze data.
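The abstract contrasts two computational paradigms: MapReduce, which ships the computation to data already stored across multiple nodes, and stream processing, where statically allocated operators hold state and process records as they flow by. As a rough illustration of the difference, here is a minimal, self-contained Python sketch of a word count written in both styles; the function names and the single-process simulation of partitions are illustrative assumptions of this summary, not code from the thesis.

```python
from collections import defaultdict

def map_fn(line):
    # Emit an intermediate (word, 1) pair for every word in a stored line.
    return [(word, 1) for word in line.split()]

def reduce_fn(key, values):
    # Combine all counts emitted for one word.
    return key, sum(values)

def mapreduce_wordcount(partitions):
    # MapReduce style: the map step runs "where the data lives" (here,
    # simulated as a loop over partitions); a shuffle groups intermediate
    # pairs by key, then the reduce step aggregates each group.
    shuffled = defaultdict(list)
    for partition in partitions:        # conceptually, one partition per node
        for line in partition:
            for key, value in map_fn(line):
                shuffled[key].append(value)
    return dict(reduce_fn(k, v) for k, v in shuffled.items())

def streaming_wordcount(stream):
    # Streaming style: the operator is allocated once, keeps its state
    # (the running counts), and records flow through it; the input never
    # needs to be stored before being processed.
    counts = defaultdict(int)
    for record in stream:
        for word in record.split():
            counts[word] += 1
        yield dict(counts)              # updated result after each record

if __name__ == "__main__":
    partitions = [["to be or not to be"], ["that is the question"]]
    print(mapreduce_wordcount(partitions))
    for snapshot in streaming_wordcount(line for p in partitions for line in p):
        print(snapshot)
```

In the batch version the input must already be partitioned and stored before the job runs, whereas the streaming version emits an updated result per incoming record without storing the input first.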
| File | Description | Size | Format |
|---|---|---|---|
| 2019_12_Cilloni.pdf (openly accessible online) | Thesis document | 4.07 MB | Adobe PDF |
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/152498