Load Balancing in Azure (I)

Load balancing is a critical component of modern network architectures. It adds intelligence to traffic routing, helps with scaling, improves resilience and makes better use of the available resources.

Azure offers a series of products dedicated to load balancing that cover different needs and scenarios. We will run a series on the different offerings and explain some of their features and characteristics. As always, but especially in the case of the Azure Load Balancer, we recommend going through the high quality public documentation in detail.

Azure Load Balancer

The Azure Load Balancer (ALB) is a layer 4 load balancer. It supports TCP and UDP traffic, discarding everything else. That means it also discards ICMP, which has been a common discussion topic in the past.

The ALB is currently offered in two different SKUs (Basic and Standard) and two different working modes (External and Internal).

The ALB is not really a device (not even a virtual one), but a feature of Azure’s network fabric. This avoids single points of failure and performance bottlenecks, making the ALB a truly hyperscale load balancing solution that provides low latency and high throughput and scales up to millions of flows for all TCP and UDP applications. This also means capacity is always available: you don’t have to warm up instances and you won’t suffer scaling delays.

External vs Internal Load Balancer

Let’s start by highlighting the main difference between the external and the internal LB:

  • The external load balancer presents a public IP address, known as the VIP, to which the backend services are mapped. Its purpose is to publish load balanced services outside Azure in an open way.
  • The internal load balancer presents an IP address from your Virtual Network’s address space, commonly known as the ILB IP, to which the backend services are mapped. Its purpose is to publish load balanced services to be consumed from within the Virtual Network or from another private location connected to it (VPN, ExpressRoute, etc.).

Remember we’ve previously mentioned that NAT happens on the Virtual Switch? This is important for understanding how the external and internal LB operate.

Load Balancer DNAT

Traffic inbound from the Internet to a load balanced service mapped to a port on our VIP will go through Destination NAT (DNAT), meaning that when the traffic hits your VM behind the LB, the destination IP will be that of the VM and not the VIP, as it was in the original packet. The source IP will still be the client’s source IP, as expected.

Stage                     Source IP   Source Port   Destination IP   Destination Port
Traffic sent to VIP       1.1.1.1     12345         2.2.2.2          443/TCP
Traffic delivered to VM   1.1.1.1     12345         10.0.0.5         443/TCP*

*Destination port could be something else if that’s what you’ve configured when mapping your services.

The above table shows the DNAT for a scenario where a client on the Internet (1.1.1.1) hits our VIP (2.2.2.2) on port 443/TCP. The destination IP is changed when the traffic is delivered to the VM, and this DNAT happens on the Virtual Switch, which, as we’ve described before, sits on the VM’s host. This is a clever way for Azure to scale all of its NAT work.
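
To make the rewrite explicit, here is a minimal sketch in plain Python of what the Virtual Switch conceptually does to an inbound packet. The packet dictionaries and the dnat() helper are made up for illustration; they are not an Azure API.

# Conceptual sketch of the DNAT rewrite described above. The packet format and
# the dnat() helper are hypothetical; the real work happens inside Azure's
# Virtual Switch on the VM's host.

VIP = "2.2.2.2"          # public frontend IP of the load balancer
BACKEND_VM = "10.0.0.5"  # private IP of the chosen backend VM
BACKEND_PORT = 443       # could differ if the LB rule maps to another port

def dnat(packet):
    """Rewrite the destination of a packet addressed to the VIP."""
    if packet["dst_ip"] == VIP:
        packet = {**packet, "dst_ip": BACKEND_VM, "dst_port": BACKEND_PORT}
    return packet

inbound = {"src_ip": "1.1.1.1", "src_port": 12345, "dst_ip": VIP, "dst_port": 443}
print(dnat(inbound))
# {'src_ip': '1.1.1.1', 'src_port': 12345, 'dst_ip': '10.0.0.5', 'dst_port': 443}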

Please keep in mind this means the backend VMs are not aware of the VIP or ILB IP at all.

The same DNAT is applied for internal load balancer scenarios.

Load Balancer SNAT

SNAT, or Source NAT, is done by the Load Balancer in scenarios where our VMs are behind a Load Balancer with a VIP and do not have a public IP assigned to their NICs. As soon as a public IP address is assigned to a VM’s NIC, outbound traffic will use that IP address instead, via a slightly different kind of NAT that is not considered to be handled by the Load Balancer.

Sharing one (or more) VIPs for outbound traffic can be a challenge in scenarios where our backend VMs all consume an external service on a specific IP address and port and have to create several connections.

This is because when the SNAT is done, not only is the backend VM’s (non publicly routable) source IP address replaced with the (publicly routable) VIP, but the source port also has to be changed. Why? Because multiple backend VMs share the VIP, so port collisions might otherwise happen.

Azure solves this by preassigning a number of “VIP source ports” to each backend VM, thus not only avoiding collisions but also speeding up the NAT translation as ports don’t have to be requested (which was the case for certain scenarios in the past).

The default number of ports preassigned to each backend VM is 1024. This means each VM can have up to 1024 simultaneous connections per destination IP and port pair, e.g. 1024 connections to 1.1.1.1:80, another 1024 connections to 1.1.1.2:80, another 1024 connections to 1.1.1.1:81, etc. Adding extra VIPs also multiplies that 1024 by the number of VIPs assigned.
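
Purely as an illustration of the idea (Azure’s actual allocation scheme is internal, and the port range and block layout below are assumptions), preallocating port blocks per backend VM could look like this:

# Illustrative sketch of SNAT port preallocation: each backend VM gets a block
# of 1024 "VIP source ports" up front, so no two VMs can pick the same
# (VIP, source port) pair and no port has to be requested at connection time.
# The starting port and block layout are made up for the example.

PORTS_PER_VM = 1024
EPHEMERAL_START = 1025  # hypothetical start of the usable port range

def preallocate(backend_vms):
    blocks = {}
    for i, vm in enumerate(backend_vms):
        start = EPHEMERAL_START + i * PORTS_PER_VM
        blocks[vm] = range(start, start + PORTS_PER_VM)
    return blocks

for vm, ports in preallocate(["10.0.0.4", "10.0.0.5", "10.0.0.6"]).items():
    print(f"{vm}: SNAT ports {ports.start}-{ports.stop - 1}")
# Each VM can then open up to 1024 simultaneous flows per destination IP:port pair.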

There is no SNAT in Internal Load Balancer scenarios.

Load balancing algorithms or distribution modes

Azure Load Balancer supports two different algorithms or distribution modes: 5-tuple hashing and 2-tuple hashing (also known as source IP affinity).

What is a 5-tuple and a 2-tuple in this case?

5-tuple refers to the 5 fields we can find in the IP and TCP/UDP headers: Protocol (TCP or UDP), Source Port (1-65535), Destination Port (1-65535), Source IP (0.0.0.0-255.255.255.255) and Destination IP (0.0.0.0-255.255.255.255).

2-tuple refers to just the Source IP (0.0.0.0-255.255.255.255) and Destination IP (0.0.0.0-255.255.255.255); a 3-tuple variant also exists that adds the Protocol (TCP or UDP) to those two.

So, a 5-tuple algorithm will send all packets matching the same 5-tuple to the same server. This is the equivalent of saying “send all packets of the same TCP/UDP connection to the same server”, while the 2-tuple algorithm says “send all the packets from the same source IP to the same server”. The former effectively distributes connections across your backend servers, while the latter sends all connections from each client to the same server, thus distributing clients instead of connections (see the short sketch below).
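
Here is a short sketch, using made-up packets rather than any real load balancer code, of the practical difference between the two keys:

# Two connections from the same client: same 2-tuple, different 5-tuples.
packets = [
    {"proto": "TCP", "src_ip": "1.1.1.1", "src_port": 12345,
     "dst_ip": "2.2.2.2", "dst_port": 443},
    {"proto": "TCP", "src_ip": "1.1.1.1", "src_port": 12346,
     "dst_ip": "2.2.2.2", "dst_port": 443},
]

def five_tuple(p):  # one key per TCP/UDP connection
    return (p["proto"], p["src_ip"], p["src_port"], p["dst_ip"], p["dst_port"])

def two_tuple(p):   # one key per client (source IP affinity)
    return (p["src_ip"], p["dst_ip"])

print(len({five_tuple(p) for p in packets}))  # 2 -> connections are spread out
print(len({two_tuple(p) for p in packets}))   # 1 -> same client, same server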

Why are they called hashing algorithms?

Because the Load Balancer computes and stores a hash of the 5-tuple or 2-tuple, and then matches every packet against those stored hashes.

As an example, let’s imagine a very simple hashing function where we sum the source port, the destination port and both the source and destination IP addresses (after converting them to decimal). This hashing function returns a number, and we then take that number modulo the number of backend servers. The modulo result tells us which server the traffic will be sent to. Examples:

Hash function = Src IP Dec + Dst IP Dec + Src Port + Dst Port

Connection 1

Src IP    Src IP Decimal   Dst IP        Dst IP Decimal   Src Port   Dst Port   Hash
1.1.1.1   16843009         10.20.30.40   169090600        12345      443        185946397

Connection 2

Src IP    Src IP Decimal   Dst IP        Dst IP Decimal   Src Port   Dst Port   Hash
1.1.1.1   16843009         10.20.30.40   169090600        12346      443        185946398

Connection 3

Src IP    Src IP Decimal   Dst IP        Dst IP Decimal   Src Port   Dst Port   Hash
2.3.4.5   33752069         10.20.30.40   169090600        23456      443        202866568

If we have 3 backend servers and we’re doing 5-tuple distribution mode, all packets from Connection 1 will go to server #2 because:

 185946397 % 3 = 1 

(Server #1 would be 0, #2 is 1 and #3 is 2).

Connection 2 will go to Server #3 because:

 185946398 % 3 = 2 

Connection 3 will go to Server #2 because:

202866568 % 3 = 1

The result for a 2-tuple algorithm or distribution mode will differ because the formula only uses two of the fields: Hash function = Src IP Dec + Dst IP Dec.

So, the results for a 2-tuple distribution mode will be:

Connection 1: 185933609 % 3 = 2
Connection 2: 185933609 % 3 = 2
Connection 3: 202842669 % 3 = 0
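
These numbers are easy to reproduce. The sketch below implements the toy formula from this example (to be clear, this is not Azure’s real hashing algorithm, which is not public) and prints the same hashes and modulo results:

# Reproduces the toy hash and modulo distribution from the tables above.
# This is the simplified example formula, not Azure's actual hashing algorithm.

def ip_to_decimal(ip):
    a, b, c, d = (int(octet) for octet in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def toy_hash(src_ip, dst_ip, src_port=0, dst_port=0):
    return ip_to_decimal(src_ip) + ip_to_decimal(dst_ip) + src_port + dst_port

BACKENDS = 3  # server #1 -> 0, server #2 -> 1, server #3 -> 2

connections = [
    ("1.1.1.1", "10.20.30.40", 12345, 443),
    ("1.1.1.1", "10.20.30.40", 12346, 443),
    ("2.3.4.5", "10.20.30.40", 23456, 443),
]

for src_ip, dst_ip, sport, dport in connections:
    h5 = toy_hash(src_ip, dst_ip, sport, dport)  # 5-tuple-style hash
    h2 = toy_hash(src_ip, dst_ip)                # 2-tuple-style hash
    print(f"5-tuple: {h5} % {BACKENDS} = {h5 % BACKENDS}   "
          f"2-tuple: {h2} % {BACKENDS} = {h2 % BACKENDS}")
# 5-tuple: 185946397 % 3 = 1   2-tuple: 185933609 % 3 = 2
# 5-tuple: 185946398 % 3 = 2   2-tuple: 185933609 % 3 = 2
# 5-tuple: 202866568 % 3 = 1   2-tuple: 202842669 % 3 = 0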

DSR or floating IP

Direct Server Return (DSR), or floating IP, is a feature of the load balancer that disables DNAT. This means that the traffic hitting your backend VMs will have the VIP (or the ILB IP for internal load balancers) as the destination IP address in its IP headers. There are a number of scenarios where this is preferred over doing DNAT, SQL Server clusters being the most common reason to enable DSR or floating IP.

Stage                     Source IP   Source Port   Destination IP   Destination Port
Traffic sent to VIP       1.1.1.1     12345         2.2.2.2          443/TCP
Traffic delivered to VM   1.1.1.1     12345         2.2.2.2          443/TCP*

*Destination port could be something else if that’s what you’ve configured when mapping your services.

The main gotcha when configuring DSR is that you need to make sure the backend VMs will accept traffic sent to an IP address (the VIP or ILB IP) that they do not own. This is usually solved by adding the VIP or ILB IP (in these scenarios folks often call it the floating IP) as a secondary IP address, inside the guest OS configuration, on each backend VM. This makes the backend VM accept traffic directed to that IP address and use it as the source for its replies.
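
As a minimal sketch of the backend side, assuming the floating IP 2.2.2.2 from the table above has already been added inside the guest OS, a service would simply listen on that IP instead of the NIC’s own address:

import socket

# Sketch only: a backend service listening on the floating IP (VIP or ILB IP).
# This assumes 2.2.2.2 has already been configured as a secondary/loopback
# address in the guest OS; otherwise bind() fails with
# "cannot assign requested address".
FLOATING_IP = "2.2.2.2"   # example VIP used throughout this post
PORT = 443

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((FLOATING_IP, PORT))  # listen on the floating IP, not the NIC IP
server.listen(128)
print(f"Accepting DSR traffic on {FLOATING_IP}:{PORT}")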

Standard vs Basic Load Balancer

The Standard SKU is a more granular and feature-rich version of the ALB than the Basic SKU. The former is aimed at scaling HA deployments of any size, including multi-zone deployments. The Basic SKU, on the other hand, is restricted to a single Availability Set and lacks certain features such as HTTPS probes, TCP reset on idle and HA Ports, the last of which is very useful for HA NVA deployments.

HA Ports on the Internal Load Balancer

We have mentioned that Azure’s Load Balancer is a layer 4 load balancer and that it balances UDP and TCP traffic only. There is a small exception, which is the HA Ports feature of the Internal Load Balancer (only available on Standard SKU).

HA Ports can be configured in different ways, but one of them (a single IP address, with floating IP / DSR disabled) acts basically like a layer 3 load balancer because it maps every port of the LB IP to the backend VM. This is extremely useful for scenarios where you have a pair of NVAs in active-passive mode (think a pair of firewalls) and need low complexity and fast, reliable HA.

Conclusion

Azure’s Load Balancer is a cornerstone of the platform and an impressive piece of engineering. It can be configured in different ways for a large number of scenarios and can be combined with multiple other solutions, including other load balancing solutions such as application load balancing (e.g. Application Gateway, ARR, Nginx, F5 LTM, HAProxy, …) or DNS load balancing (Traffic Manager, F5 GTM, etc.), adding extra value and resiliency to our deployments.

It is very simple to deploy for basic needs, but it pays to get really familiar with it before tackling more complex scenarios.

Do you have something to say, corrections to make or any other feedback? Please get in touch through the comments section below.
