Cloud Load-Balancing

Cloud load balancing is the process of distributing workloads and computing resources across a cloud environment. It helps organizations and enterprises manage workload demands by allocating resources among multiple systems or servers, and it also covers distributing workload traffic that arrives over the internet.

Applications of Load Balancing

Load balancing can be implemented in hardware, as with F5's BIG-IP appliances, or in software, such as Apache mod_proxy_balancer or the Pound load balancer and reverse proxy. Load balancing is an optimization technique used to enhance utilization and throughput, lower latency, reduce response time, and avoid overloading any single system.
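The simplest strategy such software balancers support is round robin, where requests are handed to each backend in turn. The sketch below illustrates the idea; the backend hostnames are hypothetical, and a real balancer would read them from configuration.

```python
from itertools import cycle

# Hypothetical backend pool; a real balancer loads these from config.
BACKENDS = ["app1.example.com", "app2.example.com", "app3.example.com"]

class RoundRobinBalancer:
    """Distributes incoming requests evenly across a pool of backends."""

    def __init__(self, backends):
        self._pool = cycle(backends)

    def pick_backend(self):
        # Each call returns the next backend in rotation.
        return next(self._pool)

balancer = RoundRobinBalancer(BACKENDS)
assignments = [balancer.pick_backend() for _ in range(6)]
# Six requests are spread evenly: each backend receives exactly two.
```

Round robin assumes all backends are equally capable; weighted variants skew the rotation toward more powerful servers.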

Network Resources

The following network resources can be load balanced:

  • Network interfaces and services such as DNS, FTP, and HTTP
  • Processing, through computer system assignment
  • Access to application instances
  • Connections, through intelligent switches
  • Storage resources


Load Balancing Techniques

Scheduling Algorithms

The scheduler is an operating system module that selects the next jobs to be admitted into the system and the next process to run. Operating systems may feature up to three distinct scheduler types: a long-term scheduler (also known as an admission scheduler or high-level scheduler), a mid-term or medium-term scheduler, and a short-term scheduler. The names suggest the relative frequency with which their functions are performed.
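The division of labor between these scheduler levels can be sketched as a toy model: a long-term scheduler admits jobs into memory up to a fixed degree of multiprogramming, while a short-term scheduler repeatedly picks the next admitted job to run. The class and method names below are illustrative, not from any particular operating system.

```python
from collections import deque

class TwoLevelScheduler:
    """Toy model: a long-term (admission) scheduler limits how many jobs
    are in memory; a short-term scheduler picks the next one to run."""

    def __init__(self, max_in_memory):
        self.max_in_memory = max_in_memory  # degree of multiprogramming
        self.job_pool = deque()             # submitted, not yet admitted
        self.ready_queue = deque()          # admitted, waiting for CPU

    def submit(self, job):
        self.job_pool.append(job)
        self._admit()

    def _admit(self):
        # Long-term scheduler: runs infrequently, on submit or finish.
        while self.job_pool and len(self.ready_queue) < self.max_in_memory:
            self.ready_queue.append(self.job_pool.popleft())

    def dispatch(self):
        # Short-term scheduler: runs on every context switch (FIFO here).
        return self.ready_queue.popleft()

    def finish(self, job):
        # A completed job frees a memory slot, so admission runs again.
        self._admit()
```

The admission step runs rarely (when jobs arrive or finish), while dispatch runs on every context switch, matching the relative frequencies the scheduler names suggest.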

Load Balancing Policies

In computing, load balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives. Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource. Using multiple components with load balancing instead of a single component may increase reliability and availability through redundancy. Load balancing usually involves dedicated software or hardware, such as a multilayer switch or a Domain Name System server process.
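One widely used policy that pursues these aims is least connections, which sends each new request to the resource currently handling the fewest active connections. A minimal sketch, with hypothetical server names:

```python
class LeastConnectionsBalancer:
    """Assigns each request to the server with the fewest active connections."""

    def __init__(self, servers):
        # Track the number of in-flight connections per server.
        self.active = {s: 0 for s in servers}

    def assign(self):
        # Pick the least-loaded server (ties broken by insertion order).
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        # Called when a connection closes, freeing capacity.
        self.active[server] -= 1
```

Unlike round robin, this policy adapts to uneven request durations: a server stuck with long-lived connections naturally receives fewer new ones.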

A Comparative Study of Algorithms

Biased Random Sampling bases its job allocation on a view of the network represented as a directed graph. For each execution node in this graph, the in-degree represents available resources and the out-degree represents allocated jobs: the in-degree decreases during job execution, while the out-degree increases after job allocation.
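The following is a simplified sketch of this idea, not the full published algorithm: each node's in-degree counts free resource slots, and a job is allocated to the endpoint of a short random walk whose steps are biased toward nodes with higher in-degree. The node names, walk length, and weighting are illustrative assumptions.

```python
import random

class Node:
    def __init__(self, name, free_slots):
        self.name = name
        self.in_degree = free_slots  # available resources
        self.out_degree = 0          # allocated jobs

def allocate(nodes, walk_length=3, rng=random):
    """Allocate one job via a random walk biased toward free resources."""
    current = rng.choice(nodes)
    for _ in range(walk_length):
        # Bias each step toward nodes with more free resources
        # (+1 so fully loaded nodes remain reachable).
        weights = [n.in_degree + 1 for n in nodes]
        current = rng.choices(nodes, weights=weights)[0]
    current.in_degree -= 1   # a resource slot is consumed by the job
    current.out_degree += 1  # one more job allocated to this node
    return current
```

Because the walk is only biased, not greedy, allocation stays decentralized: no node needs a global view of the cluster, which is the property that lets the scheme scale as nodes are added.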

Active Clustering is a self-aggregation algorithm that rewires the network. The reported experimental result is that "Active Clustering and Random Sampling Walk predictably perform better as the number of processing nodes is increased", while the Honeyhive algorithm does not show this increasing pattern.


Cloud vs DNS Load Balancing

  1. Cloud load balancing can transfer loads to servers globally whereas DNS load balancers cannot.
  2. Cloud load balancers have the ability to deliver users to the closest regional server without interrupting the user’s tasks.
  3. Cloud load balancers address issues relating to the TTL reliance inherent in DNS load balancing.
  4. Cloud load balancing can improve response times by routing remote sessions to the best-performing data centers.
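Point 4 above can be sketched as a simple selection over measured latencies: given recent response-time measurements for each data center, the balancer routes the session to the fastest one. The region names and latency figures below are illustrative assumptions.

```python
def best_datacenter(latencies_ms):
    """Return the data center with the lowest measured latency (ms)."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical latency measurements collected by health probes.
measured = {"us-east": 42.0, "eu-west": 18.5, "ap-south": 95.3}
# For these measurements, sessions would be routed to "eu-west".
```

A DNS-based balancer cannot react this quickly, because resolvers cache its answers until the record's TTL expires.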

Goals of using load balancers

  • Application response time
  • Efficient application availability
  • Time of day
  • User location
  • Current and total capacity of the data centers in which the application is deployed