Cloud computing has gained significant popularity over the past few years. Applications running on the cloud benefit from features such as seamless maintenance and high resource availability. Furthermore, by employing service-oriented architecture and resource virtualization technology, the cloud offers high scalability for enterprise applications with variable load. This elasticity is the main attraction for migrating workflows to the cloud. A workflow comprises several tasks that communicate with each other over the network by exchanging data volumes of varying sizes. This inter-task communication often becomes the main bottleneck when workflows are migrated to the cloud. To resolve this problem, we partition the workflow so that tasks that communicate with each other most often are grouped together and share data files.

In addition, the tasks of a workflow perform different operations and require different amounts of resources. Since each task requires different processing power, under load variation each task must scale in a manner that best fulfills its specific requirements. Scaling can be done manually, provided that the load change periods are deterministic, or automatically, when there are unpredicted spikes and slopes in the workload. Enterprise application providers look for methods that minimize resource consumption cost while handling the maximum load and guaranteeing the Quality of Service (QoS) expected of them. A number of auto-scaling policies have been proposed so far. Some of these methods try to predict the next incoming load, while others react to the incoming load at its arrival time and change the resource setup based on the real load rate rather than a predicted one. In both cases, however, there is a need for an optimal resource provisioning policy that determines how many servers must be added to or removed from the system in order to serve the load while minimizing cost. Current methods in this field take into account several related parameters, such as incoming workload, CPU usage of servers, network bandwidth, response time, and the processing power and cost of the servers. Nevertheless, none of them incorporates the life duration of a running server, a metric that can contribute to finding the optimal policy. This parameter becomes important when the scaling algorithm tries to optimize cost over a spectrum of instance types featuring different processing powers and costs.

We have created a generic linear programming (LP) model that takes into account all major factors involved in scaling, including the periodic cost, configuration cost, and processing power of each instance type, the instance count limits of clouds, and the life duration of each instance with a customizable level of precision, and that outputs an optimal combination of instance types best suiting each task of a workflow. Our goal is to build a framework, based on our LP model and workflow partitioning, for the optimal migration of any enterprise application or workflow to the cloud.
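To make the partitioning idea concrete, here is a minimal sketch assuming a simple weighted-graph view of a workflow; it is not the algorithm from the HPCC 2014 paper below. Tasks are nodes, edge weights stand in for the data volumes exchanged, and the Kernighan-Lin heuristic from networkx bisects the graph while minimizing the weight of cut edges, so heavily communicating tasks stay in the same partition. All task names and volumes are illustrative.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Hypothetical task graph: nodes are workflow tasks, edge weights are the
# data volumes (say, MB) the tasks exchange. Names and numbers are made up.
G = nx.Graph()
G.add_weighted_edges_from([
    ("extract", "clean",    500),  # heavy communication: keep together
    ("clean",   "train",    450),
    ("train",   "evaluate", 400),
    ("extract", "report",    10),  # light communication: cheap to cut
    ("train",   "report",    20),
])

# Bisect the graph, minimizing the total weight of cut edges, so that tasks
# communicating most often land in the same partition and can share data
# files locally instead of over the network.
part_a, part_b = kernighan_lin_bisection(G, weight="weight", seed=0)
print(sorted(part_a), sorted(part_b))
```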
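As a rough illustration of this style of provisioning model, the sketch below solves a heavily simplified variant as an integer program with SciPy: it keeps only the periodic cost, per-type processing power, and a cloud instance count limit, and omits the configuration cost and life duration terms of the full model. The instance types, costs, and capacities are made-up values.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical instance catalogue (illustrative numbers, not real prices).
types = ["small", "medium", "large"]
cost  = np.array([0.05, 0.10, 0.18])   # periodic (hourly) cost per instance
power = np.array([10.0, 22.0, 40.0])   # load units each instance can serve
load  = 180.0                          # incoming load the task must handle
limit = 20                             # cloud-imposed cap on total instances

# Decision variables: n_i = number of instances of type i (integer, >= 0).
# Minimize sum(cost_i * n_i) subject to sufficient aggregate processing
# power and the instance count limit.
capacity = LinearConstraint(power, lb=load)                 # sum(power_i * n_i) >= load
count    = LinearConstraint(np.ones(len(types)), ub=limit)  # sum(n_i) <= limit

res = milp(c=cost,
           constraints=[capacity, count],
           integrality=np.ones(len(types)),  # all n_i integer
           bounds=Bounds(lb=0))

for t, n in zip(types, res.x):
    print(f"{t}: {int(round(n))} instance(s)")
print(f"total hourly cost: {res.fun:.2f}")
```

The full model described above additionally accounts for configuration cost and discretized instance life duration, which a mixed-integer formulation like this could accommodate with extra variables and constraints per time slot.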
Research staff
Assoc. Prof. Dr. Satish Narayana Srirama
Alireza Ostovar
Jaagup Viil
Publications
- S. N. Srirama, A. Ostovar: Optimal Resource Provisioning for Scaling Enterprise Applications on the Cloud, The 6th IEEE International Conference on Cloud Computing Technology and Science (CloudCom-2014), December 15-18, 2014, pp. 262-271. IEEE.
- S. N. Srirama, J. Viil: Migrating Scientific Workflows to the Cloud: Through Graph-partitioning, Scheduling and Peer-to-Peer Data Sharing, 16th IEEE International Conference on High Performance Computing and Communications (HPCC 2014) workshops, August 20-22, 2014, pp. 1137-1144. IEEE.
Master's theses
Alireza Ostovar – Optimal Resource Provisioning for Workflows in Cloud (2014)
Bachelor's theses
Jaagup Viil – Remodelling Scientific Workflows for Cloud (2014)