MycoCloud. Improving QoS by managing elasticity of services in decentralized clouds
LUCIA, DONATO
2012/2013
Abstract
One of today's key challenges in software engineering is to find effective ways to deal intelligently with the increasing complexity of distributed computing systems. An increasing number of users with greater mobility constantly require more sophisticated functionality from larger applications running on faster architectures. Consequently, computer engineers are gradually led to rethink the traditional perspective on distributed systems design, replacing systems with fixed topologies and central authorization or supervision with highly decentralized, self-managed systems. Different paradigms have been proposed to handle such complex systems and solve large-scale problems. The original client/server model evolved into dedicated clusters, followed by grids and then by large-scale data centers. The latest stage of virtualization comes from cloud computing infrastructures, which enable on-demand allocation of computational resources. The cloud computing model provides a first form of flexibility thanks to its auto-scaling mechanisms, which adapt the system capacity according to the load generated by service requests. This flexibility in the amount of resources provided is often called elasticity and is the basis of the utility computing model. Both the research community and the main cloud providers have already developed auto-scaling toolkits. However, most research solutions are centralized and typically bound to the limitations of a specific cloud provider in terms of resource prices, availability, reliability, and connectivity. These limitations are moving current research efforts in cloud computing towards the study of efficient ways to exploit more than a single cloud system for the deployment of a service. The capability to federate different cloud providers becomes particularly useful when smaller companies want to offer part of their resources at a competitive price in a cloud market.
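The auto-scaling mechanisms mentioned above can be illustrated with a minimal threshold-based scaling rule. This is only an illustrative sketch: the capacity units, utilization thresholds, and function name are assumptions for this example, not the mechanism of any specific provider.

```python
# Minimal sketch of a threshold-based auto-scaling decision.
# Capacity units and thresholds are illustrative assumptions.

def autoscale(current_capacity, observed_load,
              upper=0.8, lower=0.3, step=1, min_capacity=1):
    """Return the new capacity after one scaling decision.

    Scales out when utilization exceeds `upper`, scales in when it
    drops below `lower`; otherwise keeps the current capacity.
    """
    utilization = observed_load / current_capacity
    if utilization > upper:
        return current_capacity + step  # scale out under high load
    if utilization < lower and current_capacity > min_capacity:
        return max(min_capacity, current_capacity - step)  # scale in
    return current_capacity  # utilization in the comfort zone

# Example: with capacity 4 and load 3.6, utilization is 0.9,
# so one unit of capacity is added.
```

Real providers express similar rules through their own auto-scaling policies; the point here is only that capacity tracks the load generated by service requests, which is what the elasticity property refers to.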
In such a scenario we would have several cloud computing systems with fewer capabilities, guarantees, and stability than a single specialized cloud provider. The extreme scale and dynamism, however, call for robust and efficient protocols capable of self-organizing and self-repairing the whole system in case of a serious failure in one cloud provider or in its connectivity. A crucial issue also comes from the need to ensure a high quality of service (QoS) in the network with respect to request execution. In this context, an important role is played by the load balancer as well as by the auto-scaling mechanisms, which are in charge of making the system more flexible against network events (e.g., load peaks and churn). So far, load-balancing approaches have been designed for networks with fixed or dynamic topologies under the assumption of perfect knowledge of the whole network. They therefore do not address the need for a decentralized algorithm in which each node can delegate excess jobs to others without knowledge of the whole network. The problem we want to solve in this thesis is to build a fully decentralized, self-organizing, and self-adapting system that reduces the human effort required at deployment time to reconfigure the whole system in case of failures or workload peaks. This is achieved through autonomic self-aggregation techniques that rewire such highly dynamic systems into clusters. Clusters group homogeneous nodes with the same characteristics and have the further property of elasticity. Nodes “belonging” to a certain cluster will autonomously decide whether to request capacity from other clusters, based on their own capacity and on the number of received requests. They will also be able to change their own service and “migrate” to another cluster, making the system more elastic and improving its QoS under load or churn events in the network.
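The per-node behaviour described above can be sketched as a local decision rule. This is a hedged illustration under stated assumptions: the function name, thresholds, and action labels are hypothetical, and the sketch is not the thesis protocol itself — it only shows how a node with purely local knowledge could choose between serving, delegating, requesting capacity, or migrating.

```python
# Illustrative sketch of a per-node decision with only local knowledge.
# All names and thresholds are assumptions for this example.

def node_decision(capacity, pending_requests, neighbour_free_capacity,
                  overload=1.0, idle=0.2):
    """Return one of 'serve', 'delegate', 'request_capacity', 'migrate'."""
    load = pending_requests / capacity
    if load > overload:
        # Overloaded: delegate excess jobs to neighbours in the same
        # cluster if they have spare capacity, otherwise ask other
        # clusters for capacity.
        if neighbour_free_capacity > 0:
            return "delegate"
        return "request_capacity"
    if load < idle:
        # Nearly idle: change service and migrate to a busier cluster,
        # making the system as a whole more elastic.
        return "migrate"
    return "serve"  # load is sustainable locally
```

For example, a node with capacity 10 and 15 pending requests delegates when a neighbour has spare capacity, but asks other clusters for help when none does; a nearly idle node instead migrates.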
File | Description | Size | Format
---|---|---|---
tesi.pdf | Thesis text (openly accessible online) | 5.66 MB | Adobe PDF
Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/10589/78551