Elastic Load Balancing is at the heart of many applications on AWS. Learn all about ELB, its types, and its features in this post.
Nowadays, most organizations need to manage and improve their applications’ scalability, availability, and fault tolerance. AWS provides an excellent solution for this: the Elastic Load Balancing service. This service provides a load balancer that distributes workloads across many compute resources, such as virtual servers.
The Elastic Load Balancing service lets you configure health checks to monitor the health of your compute resources. It also lets you offload encryption and decryption work to the load balancer so that the compute resources can focus on their main work.
Elastic Load Balancing (ELB): Overview
ELB is a service that automatically distributes incoming traffic across multiple EC2 instances. This helps achieve higher fault tolerance levels in your applications by providing the load balancing capacity for distributing application traffic.
Moreover, Elastic Load Balancing can detect unhealthy EC2 instances, and as soon as an EC2 instance is found to be unhealthy, ELB stops sending traffic to it until it becomes healthy again. Customers can easily enable Elastic Load Balancing within a single or multiple Availability Zones for more consistent application performance.
Elastic Load Balancing Features
You can manage and create security groups associated with Elastic Load Balancing in Amazon Virtual Private Cloud (VPC) to give extra networking and security options for Application Load Balancer and Classic Load Balancer.
An Elastic Load Balancer is highly available. You can distribute the incoming traffic to your application across EC2 instances in a single Availability Zone or in multiple Availability Zones.
Elastic Load Balancers are designed to scale as your traffic grows and can load balance millions of requests per second. They can also handle sudden spikes of traffic.
With Elastic Load Balancing, you can keep the health of your EC2 instances in check and not risk sending traffic to an unhealthy instance.
Operational monitoring and logging
Amazon CloudWatch reports Application and Classic Load Balancer metrics such as error counts, error types, request latency, request counts, and more.
You can enable deletion protection on an Elastic Load Balancer to prevent it from being accidentally deleted.
Components of Elastic Load Balancers
You have to configure one or more listeners for your load balancer. A listener is a process that checks for connection requests. It is configured with a protocol and a port for the front-end connections (client to load balancer) and a protocol and a port for the back-end connections (load balancer to back-end instance).
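As a minimal sketch, a listener configuration can be modeled as the pair of front-end and back-end protocol/port settings (the class and values here are illustrative, not an AWS API):

```python
from dataclasses import dataclass

@dataclass
class Listener:
    """Front-end (client to LB) and back-end (LB to instance) settings."""
    frontend_protocol: str
    frontend_port: int
    backend_protocol: str
    backend_port: int

# An HTTPS listener that terminates TLS at the load balancer and
# forwards plain HTTP to the back-end instances, offloading the
# encryption work from the instances:
https_listener = Listener("HTTPS", 443, "HTTP", 80)
print(https_listener)
```

This mirrors the offloading described earlier: the front end speaks HTTPS while the back end only handles decrypted HTTP.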
Supported protocols for Elastic Load Balancing include HTTP, HTTPS, TCP, SSL/TLS, and UDP (UDP on Network Load Balancers only).
A load balancer serves as the “traffic policeman” in front of your servers, distributing client requests across all servers equipped to handle them in a way that maximizes speed and capacity utilization, while ensuring that no single server is overworked to the point of degraded performance.
If one server goes offline, the load balancer redirects its traffic to the remaining active servers. When a new server is added to the server group, the load balancer automatically begins sending requests to it.
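The behavior above can be sketched as a simple round-robin loop. This is an illustration of the idea, not how ELB is implemented internally:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin load balancer sketch."""
    def __init__(self, servers):
        self.servers = list(servers)

    def remove(self, server):
        # A server that goes offline stops receiving traffic.
        self.servers.remove(server)

    def add(self, server):
        # A newly added server starts receiving traffic.
        self.servers.append(server)

    def route(self, n_requests):
        # Distribute n_requests across the current server pool in turn.
        pool = cycle(self.servers)
        return [next(pool) for _ in range(n_requests)]

lb = RoundRobinBalancer(["server-a", "server-b", "server-c"])
print(lb.route(6))  # each server receives two of the six requests
lb.remove("server-b")
print(lb.route(4))  # traffic now flows only to server-a and server-c
```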
Each target group routes requests to one or more registered targets. When you create a listener rule, you specify a target group and one or more conditions. When a rule’s conditions are satisfied, traffic is routed to the corresponding target group.
You can create separate target groups for different kinds of requests. For example, create one target group for general requests and other target groups for requests to your application’s microservices.
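The rule-matching idea can be sketched as path-based routing: the first rule whose condition matches wins, and a default target group catches everything else. The target group names here are illustrative, not real AWS resources:

```python
# Each rule pairs a condition with a target group name.
rules = [
    (lambda path: path.startswith("/api/orders"), "orders-service-tg"),
    (lambda path: path.startswith("/api/users"), "users-service-tg"),
]
DEFAULT_TARGET_GROUP = "web-frontend-tg"

def route_request(path):
    """Return the target group for a request path; first match wins."""
    for condition, target_group in rules:
        if condition(path):
            return target_group
    return DEFAULT_TARGET_GROUP

print(route_request("/api/orders/42"))  # orders-service-tg
print(route_request("/index.html"))     # web-frontend-tg
```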
Types of Load Balancers
Application Load Balancer
Application Load Balancer enables developers to set up and direct incoming end-user traffic to apps running on the AWS public cloud.
Load balancing is crucial in a cloud environment with numerous web applications. A load balancer ensures no single server is overloaded by dividing network traffic and information flows among several servers. This boosts user experiences, increases application responsiveness and availability, and can defend against distributed denial-of-service (DDoS) assaults.
Access to web applications has grown significantly in recent years. However, unexpected traffic increases can slow down online services and reduce availability. The Application Load Balancer efficiently distributes network load in the public cloud to increase stability and availability.
Operating at Layer 7, the Application Load Balancer directs traffic only to healthy targets inside the cloud resource, so a problematic target stops receiving requests. The Application Load Balancer also supports the WebSocket protocol for persistent connectivity with the underlying server.
Websites and mobile applications that run in containers or on AWS EC2 instances benefit the most from an Application Load Balancer. In a microservices architecture, an Application Load Balancer can be used as an internal load balancer in front of the EC2 instances or Docker containers that implement a specific service. It can also be used in front of a RESTful API application.
Numerous AWS services are compatible with the application load balancer, including:
AWS Auto Scaling
Amazon Elastic Container Service
AWS Certificate Manager
Classic Load Balancer
Incoming application traffic is split across numerous EC2 instances in multiple Availability Zones by a Classic Load Balancer, which acts as the client’s sole point of contact. This increases your application’s availability. You can add and remove instances from your Classic Load Balancer as your needs change, without disrupting the overall flow of requests to your application.
A listener uses the protocol and port you set to monitor client connection requests. The listener then transmits requests to one or more registered instances using the configured protocol and port. You modify your load balancer by adding one or more listeners.
To ensure that the load balancer only routes requests to healthy instances, you can configure health checks, which monitor the condition of the registered instances.
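Health checking can be sketched as a threshold state machine: an instance must fail (or pass) several consecutive checks before it is marked unhealthy (or healthy again). The thresholds below are illustrative defaults, not AWS’s exact values:

```python
class HealthChecker:
    """Flips an instance's health state after consecutive check results."""
    def __init__(self, unhealthy_threshold=2, healthy_threshold=3):
        self.unhealthy_threshold = unhealthy_threshold
        self.healthy_threshold = healthy_threshold
        self.healthy = True
        self.streak = 0  # consecutive results opposing the current state

    def record(self, check_passed):
        """Record one health check result; return current health state."""
        if check_passed == self.healthy:
            self.streak = 0  # result agrees with the current state
            return self.healthy
        self.streak += 1
        threshold = (self.unhealthy_threshold if self.healthy
                     else self.healthy_threshold)
        if self.streak >= threshold:
            self.healthy = not self.healthy
            self.streak = 0
        return self.healthy

hc = HealthChecker()
hc.record(False)         # one failed check: still considered healthy
print(hc.record(False))  # second consecutive failure: marked unhealthy
```

A single failed check does not remove an instance from rotation; only a run of consecutive failures does, which avoids flapping on transient errors.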
By default, a Classic Load Balancer distributes traffic evenly among the Availability Zones you enable for it. To distribute traffic evenly across all registered instances in all enabled Availability Zones, enable cross-zone load balancing on your load balancer.
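The difference cross-zone load balancing makes can be shown with a small traffic-share calculation. Suppose zone A has one registered instance and zone B has three; each zone’s load balancer node receives half of the traffic (the zone and instance names are made up):

```python
def traffic_share(zones, cross_zone):
    """Return each instance's share of total traffic.

    zones: dict of zone name -> list of instance ids.
    Without cross-zone, each zone's node splits its 1/len(zones)
    slice of traffic only among its own instances; with cross-zone,
    traffic is spread evenly across every registered instance.
    """
    shares = {}
    all_instances = [i for insts in zones.values() for i in insts]
    if cross_zone:
        for inst in all_instances:
            shares[inst] = 1 / len(all_instances)
    else:
        zone_share = 1 / len(zones)
        for insts in zones.values():
            for inst in insts:
                shares[inst] = zone_share / len(insts)
    return shares

zones = {"us-east-1a": ["i-1"], "us-east-1b": ["i-2", "i-3", "i-4"]}
print(traffic_share(zones, cross_zone=False))  # i-1 carries 50% alone
print(traffic_share(zones, cross_zone=True))   # every instance gets 25%
```

Without cross-zone load balancing, the lone instance in zone A carries half of all traffic; enabling it evens the load to 25% per instance.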
Types of Classic Load Balancer:
Internet-Facing Classic Load Balancers: An Internet-facing load balancer has a publicly resolvable DNS name, so it can route requests from clients over the Internet to the EC2 instances registered with it. Your load balancer is given a public DNS name when it is created, which clients use to make requests. The DNS servers resolve the load balancer’s DNS name to the public IP addresses of the load balancer nodes, and each node connects to the back-end instances through private IP addresses.
Internal Classic Load Balancers: An internal load balancer’s nodes have only private IP addresses. Its DNS name is publicly resolvable, but it resolves to the nodes’ private IP addresses. As a result, internal load balancers can only route requests from clients that have access to the load balancer’s VPC.
Network Load Balancer
The Network Load Balancer works at the fourth layer (the transport layer) of the OSI model and can handle millions of requests per second.
The load balancer chooses a target from the target group for the default rule after receiving a connection request. It tries to establish a TCP connection to the selected target on the port indicated in the listener settings.
To increase your application’s fault tolerance, you can enable multiple Availability Zones for your Network Load Balancer; if one Availability Zone goes down, your application will not stop working. Note that for Network Load Balancers, cross-zone load balancing incurs additional charges.
For TCP traffic, a target is selected using a flow hash algorithm based on the protocol, source port, source IP address, destination port, destination IP address, and the TCP sequence number.
A client’s TCP connections have different source ports and sequence numbers, so different connections can be routed to different targets. Each individual TCP connection is routed to a single target for the lifetime of the connection.
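Flow-hash target selection can be sketched as follows: hashing the connection’s identifying fields pins every packet of one connection to the same target, while a new connection (different source port or sequence number) may land elsewhere. The hash function and target names are illustrative, not AWS’s actual algorithm:

```python
import hashlib

targets = ["target-1", "target-2", "target-3"]

def select_target(protocol, src_ip, src_port, dst_ip, dst_port, seq=0):
    """Pick a target deterministically from the flow's identifying fields."""
    key = f"{protocol}|{src_ip}|{src_port}|{dst_ip}|{dst_port}|{seq}".encode()
    digest = hashlib.sha256(key).hexdigest()
    return targets[int(digest, 16) % len(targets)]

# The same connection always maps to the same target...
a = select_target("tcp", "10.0.0.5", 40001, "10.0.1.9", 443, seq=1234)
b = select_target("tcp", "10.0.0.5", 40001, "10.0.1.9", 443, seq=1234)
assert a == b
# ...while a connection from a different source port may map elsewhere.
c = select_target("tcp", "10.0.0.5", 40002, "10.0.1.9", 443, seq=5678)
```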
Gateway Load Balancer
Your third-party virtual appliances can be simply deployed, scaled, and managed with the help of Gateway Load Balancer. It provides a single gateway for splitting traffic between numerous virtual appliances and scaling them up or down in response to demand. This eliminates potential points of failure in your network and increases availability.
Virtual appliances from independent manufacturers can be found, tested, and purchased directly through AWS Marketplace. Whether you want to continue working with your current vendors or try something new, this integrated experience accelerates the deployment process so that you can benefit from your virtual appliances more quickly.
Benefits of Gateway Load Balancer:
Quicker deployment of third-party virtual appliances.
Scaling your virtual appliances while managing costs.
Improved availability of your virtual appliances.
Elastic Load Balancers are a critical part of many infrastructures built on AWS. The features an ELB provides make managing your infrastructure easier. Elastic Load Balancing is a tried-and-true method of spreading application and web traffic across several targets or instances.
Combined with Auto Scaling, Elastic Load Balancing helps your various workloads scale automatically. We have covered all the types of load balancers provided by AWS; use the one that fits your application’s demands.
Naman Yash is a software engineering professional with 2+ years of cloud engineering experience at JP Morgan Chase. Currently, Naman works as a freelance software engineer and content writer. He holds multiple AWS and Terraform certifications.