In the past, only system and network administrators needed to understand load balancers. But in the age of high-volume traffic managed through DevOps practices, everyone needs a basic understanding of load balancers and their role in modern application infrastructure.
Load balancers, also referred to as application delivery controllers (ADCs), are hardware or software tools that distribute network traffic across servers. They are used to improve the performance of websites and applications.
When users visit your site, the web server services the request. But every web server has processing and memory limitations. So if there are too many requests, the web server can become overwhelmed. The solution is to increase the number of web servers and make sure the traffic gets equally distributed among these servers in order to eliminate performance bottlenecks.
Load balancers are responsible for traffic distribution. When users send requests to your website, the requests go to the load balancer, which then decides which server to use according to various load-balancing algorithms. In mission-critical applications where high availability is a requirement, load balancers can route traffic to failover servers. Load balancers themselves can also become a single point of failure, so network architects generally implement some form of replication of the load balancers to make sure there is redundancy.
Short History of Load Balancers
In the 1990s, in the early days of the internet, the startup entrepreneurs of the time discovered a problem. Their budgets didn’t allow them to buy the most powerful mainframe computers of the day; they could only afford the standard servers sold by the PC manufacturers of the time. But these servers weren’t powerful enough to serve the kind of traffic these startups were getting. So some of those pioneers came up with the clever idea of using multiple off-the-shelf servers: they put a load balancer between the users and the servers and kept sending traffic to different servers in a round-robin fashion. It solved their scalability issues. So the concept of load balancing is a direct result of the hardware limitations that were an obstacle to business growth for early internet entrepreneurs.

Types of Traffic Load Balancers Can Handle

Depending on the design and purpose, load balancers can handle various forms of traffic:

HTTP - HTTP balancing is straightforward. The load balancer takes the request and uses criteria set up by the network administrator to route it to a backend server. The header information is modified according to the needs of the application, so the backend server has the necessary information to process the request (a minimal sketch of this appears after the list).

HTTPS - The HTTPS protocol is used for encrypted traffic. It follows the same approach as HTTP except for how encryption is handled. Load balancers configured for SSL passthrough simply forward the encrypted traffic to the backend server and let decryption take place there. Load balancers with SSL termination decrypt the request themselves and then pass the unencrypted request to the backend server.

TCP - Load balancers with TCP capabilities can route TCP traffic directly. This is useful for popular applications and services like LDAP, MySQL, and RTMP.

UDP - More recently, load balancers have started to add UDP capabilities as well. UDP load balancing is used for DNS, lightweight syslog, and authentication services like RADIUS.
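To make the HTTP case above concrete, here is a minimal sketch of a load balancer that forwards each GET request to one of several backends in round-robin order and rewrites the headers, adding X-Forwarded-For so the backend knows the real client address. The listening port and backend addresses are placeholder assumptions, not a recommended setup.

    # Minimal HTTP load-balancing sketch (illustrative only).
    import http.client
    import itertools
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical backend web servers; replace with real addresses.
    BACKENDS = itertools.cycle([("127.0.0.1", 8081), ("127.0.0.1", 8082)])

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            host, port = next(BACKENDS)                           # round-robin pick
            conn = http.client.HTTPConnection(host, port, timeout=5)
            headers = dict(self.headers)
            headers["X-Forwarded-For"] = self.client_address[0]   # pass the real client IP along
            conn.request("GET", self.path, headers=headers)
            resp = conn.getresponse()
            body = resp.read()
            self.send_response(resp.status)
            for name, value in resp.getheaders():
                if name.lower() not in ("transfer-encoding", "connection"):
                    self.send_header(name, value)                 # copy the backend's response headers
            self.end_headers()
            self.wfile.write(body)
            conn.close()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()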

Load Balancing Algorithms

The load-balancing algorithm decides which backend server receives a particular request. Network administrators set up the algorithm based on the unique needs of a particular site or application. Here are some common methods; a short sketch of a few of them follows below:

Round Robin - In this algorithm, the load balancer sends a request to a server and then moves on to the next one, cycling through the servers in order. It’s a useful method if all your servers have similar processing power and memory.

Least Connection Method - Load balancers direct traffic to the server that has the fewest active connections, on the assumption that this server will have the most resources available.

Least Response Time Method - Load balancers take into account both the number of connections and the response time. The server with the fewest connections and the lowest average response time is given priority. This algorithm can be useful when you have servers with uneven processing power and memory resources: it lets you lean on the most powerful servers while still drawing on the weaker ones during high-volume periods.

IP Hash Method - Load balancers select servers based on the visitor’s IP address hash. This method is used when the traffic needs to go to particular servers consistently.

The manufacturer or the designer decides which algorithms are available on a load balancer, so your load balancer might not offer a particular algorithm, or it might even include custom algorithms created for specific applications.
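To make a few of these methods concrete, here is a minimal sketch of round robin, least connections, and IP hash selection. The server addresses and connection counts are made-up examples used only to show the selection logic.

    # Sketch of three common server-selection methods (illustrative only).
    import hashlib
    import itertools

    SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical backend addresses

    # Round robin: cycle through the servers in order.
    _rr = itertools.cycle(SERVERS)
    def round_robin():
        return next(_rr)

    # Least connections: pick the server with the fewest active connections.
    active_connections = {"10.0.0.1": 12, "10.0.0.2": 3, "10.0.0.3": 7}   # example counts
    def least_connections():
        return min(active_connections, key=active_connections.get)

    # IP hash: the same client IP always maps to the same server.
    def ip_hash(client_ip):
        digest = hashlib.md5(client_ip.encode()).hexdigest()
        return SERVERS[int(digest, 16) % len(SERVERS)]

    print(round_robin(), least_connections(), ip_hash("203.0.113.9"))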

Special Considerations for Load Balancers

Besides the traffic types and algorithms, network architects also need to consider the following things when implementing load balancers:

Health Checks

For various reasons, backend servers can become unresponsive, and it doesn’t make sense for a load balancer to keep sending requests to an unresponsive server. So load balancers generally have health check mechanisms that regularly monitor the backend servers using particular protocols and predefined ports. The checks are often defined based on the needs of the applications they serve.
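Here is a minimal sketch of such a health check, assuming a plain TCP connection test against placeholder backend addresses and a fixed 10-second interval; real load balancers typically let you configure the protocol, port, interval, and failure thresholds.

    # Simple TCP health-check loop (illustrative only).
    import socket
    import time

    BACKENDS = [("10.0.0.1", 80), ("10.0.0.2", 80)]   # hypothetical backends
    healthy = set()

    def is_up(host, port, timeout=2):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    while True:
        for backend in BACKENDS:
            if is_up(*backend):
                healthy.add(backend)       # keep the server in rotation
            else:
                healthy.discard(backend)   # stop routing traffic to it
        time.sleep(10)                     # check every 10 seconds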

Redundancy and High Availability

To avoid making the load balancer a single point of failure, networks build in redundancy. There is generally a failover load balancer ready to take over in case of a catastrophic failure of the primary, which ensures high availability (HA) for the network. Because DNS propagation can take time, system administrators use floating IPs to point to the failover load balancer automatically.

Sticky Session Loads

When requests from a user need to go to the same backend server every time due to application session requirements, the load balancer needs a way to recognize users and route their traffic to the right servers. In such cases, load balancers use cookies: the cookie or session information is used to direct traffic to the appropriate server.
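A minimal sketch of cookie-based stickiness follows, assuming a hypothetical cookie name (lb_server) and placeholder backend identifiers. A new visitor is assigned a server and gets a cookie pinning them to it; a returning visitor is routed by the cookie.

    # Cookie-based sticky-session routing (illustrative only).
    import random
    from http.cookies import SimpleCookie

    BACKENDS = ["app-1", "app-2", "app-3"]   # hypothetical backend identifiers

    def pick_backend(cookie_header):
        cookie = SimpleCookie(cookie_header or "")
        if "lb_server" in cookie and cookie["lb_server"].value in BACKENDS:
            return cookie["lb_server"].value, None           # returning visitor: reuse the pinned server
        server = random.choice(BACKENDS)                     # new visitor: pick and pin a server
        return server, f"lb_server={server}; Path=/; HttpOnly"

    backend, set_cookie = pick_backend("lb_server=app-2")
    print(backend, set_cookie)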

Various Categories of Load Balancers

There are both hardware and software load balancers. Even though cloud-based load balancers are essentially software solutions, they can be considered a special category because they are developed and maintained by cloud service providers. Here is a more in-depth look at the various categories of load balancers:

Hardware Load Balancers

Organizations use hardware load balancers for speed. They are designed with special processors to optimize performance. This category is also better for security, since the devices are physical appliances accessible only to the organization’s own personnel. But they require a higher level of expertise to operate, and it’s harder to scale your operations during high-growth periods due to physical limitations. Here are some of the top vendors of hardware load balancers:
F5 - F5 specializes in application networking technology. Its BIG-IP line of products is specifically designed for load balancing and has developed into a platform over the years. F5 equipment comes with web-based interfaces for easy configuration, and F5 has developed its own WAN Optimization Manager that improves data transfer speeds within the WAN.
Cisco - Cisco is one of the largest producers of network hardware, and its router hardware can work as a load balancer. Cisco used to produce dedicated load balancers but stopped manufacturing them in 2012.
Citrix - Citrix competed with Cisco’s load balancer products. After Cisco exited the load balancer market, many Cisco customers moved over to Citrix NetScaler ADC. Customers like the NetScaler products because they integrate easily with Cisco Application Centric Infrastructure (ACI).
Kemp Technologies - Kemp Technologies is dedicated to load balancers and application delivery controllers (ADCs). Its LoadMaster line offers high-performance hardware load balancers, with both affordable and high-end hardware options. Its proprietary LoadMaster software supports the hardware for better performance and monitoring.
Radware - Radware has a wide range of load-balancing options. Its AppDirector OnDemand Switch line offers high-performance throughput with on-demand capacity and service scalability.
Barracuda - Barracuda Load Balancer ADC provides all the benefits of standard hardware load balancers and adds intrusion detection for higher security. The solution is Microsoft certified, so it works well with Microsoft applications and servers.
Fortinet - Fortinet’s FortiADC product line provides multiple models of load balancers. The hardware is designed with multicore processor technology with hardware-based SSL offloading to improve performance. Built-in Link Load Balancing helps network administrators create redundancies easily.
Zevenet - Zevenet is primarily a software-based solution, but it is also offered in hardware packages for enterprise-level customers.

Software Load Balancers

Software load balancing provides flexibility and scalability. Nowadays most businesses prefer software load balancers because they are cheaper to operate, and they can be deployed in the cloud; for example, a company can run NGINX as a load balancer on an AWS EC2 instance. However, similar to hardware solutions, software load balancers also require high-level knowledge and experience to operate efficiently. Here are some of the prominent software load-balancing solutions:
NGINX Plus Load Balancing - NGINX Plus is a high-performance load-balancing solution that can support HTTP, TCP, and UDP workloads. It supports round robin, least connections, least response time, hash (user-defined key such as client IP address or URL), and IP hash. It can identify sessions and maintain session persistence.
HAProxy Load Balancing - HAProxy is a free and open-source load balancer used by industry giants like Airbnb, Instagram, and GitHub. It’s great for high-availability services and delivers strong performance with SSL offloading, HTTP compression and routing, device detection, and lookup maps. Strong security features help prevent DDoS and brute-force attacks. HAProxy is widely used in microservices architectures.
LoadMaster Load Balancing - As mentioned earlier, Kemp Technologies has a hardware line, but its free LoadMaster software load balancer is also popular. It can work as an application load balancer and has a built-in Web Application Firewall (WAF).
Apache Load Balancing - Apache’s load balancer is an open-source application that used to be very popular. However, its popularity is waning because Apache slows down when too many connections are open, and configuring the Apache load balancer is complicated.
Here are a few other software load balancers to check out: Seesaw, Neutrino, Balance, Pen, and Traffic.

Cloud-Based Load Balancers

Even though applications are moving to the cloud, the need for load balancers hasn’t decreased; there is still a need for intelligent distribution of workloads. Early cloud adopters used software load balancers to address their needs, but since then the cloud providers have come up with their own cloud-based load-balancing solutions. Anyone developing applications in the cloud should be aware of the available services. Here are some of the prominent ones:

AWS Elastic Load Balancer

Amazon Web Services (AWS) is the dominant player in the cloud computing field. AWS designed Elastic Load Balancer (ELB) to work well with Amazon EC2 instances, containers, and IP addresses. The Elastic Load Balancer can route traffic across multiple Availability Zones, so it’s a great solution for scaling high-volume applications with high availability needs. AWS currently offers three types of load balancers:

Application Load Balancer: Application Load Balancer is intended for HTTP and HTTPS traffic. It works in Layer 7 (Application) of the OSI model. It can balance traffic within the confines of the Amazon Virtual Private Cloud (VPC).

Network Load Balancer: Network Load Balancer is for TCP traffic with low latency and high-performance requirements. It works in Layer 4 (Transport). It’s better at handling unpredictable traffic patterns.

Classic Load Balancer: This category of load balancer is out of favor and is kept for legacy purposes. Amazon actively discourages new customers from using it.

Elastic Load Balancer takes care of all the resource requirements for you. If you set up ELB with an application that has Auto Scaling turned on, the load balancer will keep track of the auto-scaled servers to make sure it routes traffic only to healthy ones. ELB can also be used in hybrid clouds to route traffic to your on-premises servers (a short sketch of creating a load balancer with the AWS SDK follows).
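As a rough illustration of how these load balancers are created programmatically, here is a sketch using the AWS SDK for Python (boto3) to create an Application Load Balancer. The subnet IDs, security group, and name are placeholders, and a real deployment also needs credentials, target groups, and listeners.

    # Sketch: create an Application Load Balancer with boto3 (illustrative only).
    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    response = elbv2.create_load_balancer(
        Name="demo-alb",                                     # placeholder name
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],      # placeholder subnets in two Availability Zones
        SecurityGroups=["sg-0123456789abcdef0"],             # placeholder security group
        Scheme="internet-facing",
        Type="application",                                  # "network" would create a Network Load Balancer
    )
    print(response["LoadBalancers"][0]["DNSName"])           # public DNS name clients will use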

Azure Load Balancer

Azure is Microsoft’s cloud solution. It already has fully managed load-balancing services in its standard suites. Application Gateway provides Transport Layer Security (TLS) termination and HTTP/HTTPS application-layer processing, while Traffic Manager provides DNS load balancing.
Azure has two types of load balancers - Basic SKU and Standard SKU. The Standard SKU can handle all the responsibilities of the Basic SKU, at a higher price point and with higher performance. Here are some of the key differences:

Pool Size: Basic SKU can handle up to 100 instances while Standard SKU can handle up to 1000 instances.

Endpoint Restrictions: Basic SKU can serve Virtual Machines (VMs) in a single availability set or virtual machine scale set. Standard SKU can work within the scope of a single virtual network which can expand across multiple availability sets and virtual machine scale sets.

Management Operation Times: Basic SKU management operations can take 60-90+ seconds. Standard SKU operations are performed in less than 30 seconds.

Service Level Agreement (SLA): The Basic SKU SLA is dependent on the VM SLA. The Standard SKU guarantees 99.99% availability for a data path with two healthy VMs.

Pricing: Basic SKU is free. Standard SKU pricing depends on the inbound and outbound data and the number of rules implemented.

GCP Load Balancer

Google Cloud Platform (GCP) provides high-performance, scalable load-balancing solutions. Its load balancers can balance HTTP/HTTPS and TCP/SSL traffic, and UDP load balancing has been added as well. GCP also offers SSL offload so you can manage your SSL certificates centrally. The load balancers integrate seamlessly with Google Cloud CDN, which means better performance for users who download content from GCP.
Global external load balancing can be handled through HTTP(S) load balancing, SSL Proxy load balancing, or TCP Proxy load balancing. Regional external load balancing is handled through Network load balancing, and regional internal traffic is handled through Internal load balancing. Prices depend on forwarding rules (protocol forwarding) and ingress and egress data processing.

DigitalOcean Load Balancer

DigitalOcean is a cloud service provider that has gained popularity among the open-source community and small businesses. It provides fully managed, highly available load balancers. Each server in DigitalOcean is called a Droplet, and the load balancers are designed to route traffic automatically to failover Droplets.
DigitalOcean provides the load balancer service at a fixed price. The load balancers support HTTP, HTTPS, HTTP/2, and TCP. For HTTPS, they can handle both SSL passthrough and SSL termination, and they can balance using round-robin and least-connections algorithms. Even though load balancers are available in multiple regions, a particular load balancer can only route traffic to backend Droplets located in its own region.

Cloudflare Load Balancer

Cloudflare is a Content Delivery Network (CDN) that provides DNS load balancing as part of its service. It runs regular health checks and can fail over quickly to healthy servers to avoid customer dissatisfaction. Cloudflare load balancers can route traffic according to geographic proximity, so European requests are served from European data centers and US requests from US data centers, which reduces latency.
Users can subscribe to different tiers of load balancers to meet their specific requirements. Pricing tiers are dependent on the number of origins, health check frequency, geo-routing, etc. Each tier also includes a certain number of monthly DNS queries.

Incapsula Load Balancer

Incapsula is another CDN provider. It offers Load Balancer-as-a-Service (LBaaS), providing both local and global load balancing for content delivery. It’s an easy, rule-based load-balancing system used by large enterprises like Hitachi, Siemens, and Xoom. Pricing is a monthly per-site subscription with Pro, Business, and Enterprise tiers.

Note: Even though each cloud provider offers its own load-balancing solutions, you can also run a software load balancer on a cloud instance to build your own.

Considerations for Choosing Load Balancers

Obviously, there are a lot of choices available for anyone looking for load balancers. When you are deciding on a solution, take the following into consideration:

Business Requirements: It’s tempting to take the path of least resistance. If you are working with a cloud provider, you might simply start using their load balancers. But think of the long-term effect on your business: if there is a chance that you will need a hybrid model in the future (cloud and on-premises) or multiple cloud providers, you will have to change your strategy. So choose a solution with your future business objectives in mind.

Estimate Workloads: Look at your business objectives and try to estimate traffic growth. You might subscribe to a load-balancing service that isn’t cost-effective for larger traffic surges, or choose a solution that can’t scale. Future workloads should be part of your selection process.

High Availability (HA): If your application requires high availability then you need the best load balancers that money can buy. But if you are using expensive load balancers for non-HA applications, you might be wasting money and resources. So try to select a solution based on your quality of service requirements.

Security: Today every application faces security threats. When you are looking for a load balancer, it indicates business is good and your traffic is increasing. It also means you are becoming a bigger target. So make sure the load balancer you choose has up-to-date security features.

Return on Investment (ROI): Even a free load balancer on a cloud service requires a lot of time investment. In addition to the cost of hardware, software, or subscription, take into account the time to learn, configure and operate the system. Calculate the ROI based on the total cost of ownership (TCO).
