
What is load balancing and how does it work?

What is Load Balancing

Load balancing is the process of distributing requests across a pool of upstream servers. An upstream server is a cloud or dedicated server with a private or public IP address. We provide HTTP/HTTPS (Layer 7) and TCP (Layer 4) load balancing services.

Load balancing offers two main benefits: scalability and high availability for your application.

As your user base grows, you can seamlessly add application servers to the pool without downtime, and if one server fails, the others keep the application online. Other benefits include offloading the TLS workload to the load balancer and floating IPs without the need for a Layer 2 domain.

How it works

You can create a load balancer instance in the Customer Portal.

When you create a load balancer, you will be provided with its public IP address (you may choose to have multiple IPs for a single instance). You should then use the load balancer's IP address as if it were the address of your application server.

The load balancer then distributes incoming requests across a pool of your real application servers.

For example, you can set up an HTTP(S) load balancer and point a domain name to the balancer's public IP by setting the corresponding A record for that domain name. In this scenario, the load balancer instance receives HTTP/HTTPS requests sent to your domain name and distributes them across your upstream application servers.
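To make the client side of this concrete, here is a minimal Python sketch using only the standard library. The IP address 203.0.113.10 and the domain app.example.com are hypothetical placeholders; the point is that a client addresses the load balancer's public IP exactly as it would address a single application server, while the Host header carries the domain name.

```python
import http.client

# Hypothetical values: replace with your load balancer's public IP
# and the domain whose A record points to that IP.
LB_IP = "203.0.113.10"
DOMAIN = "app.example.com"

# The client connects to the load balancer's IP just as it would to a
# single application server; the load balancer forwards the request
# to one of the upstream servers behind it.
conn = http.client.HTTPConnection(LB_IP, 80, timeout=5)
conn.request("GET", "/", headers={"Host": DOMAIN})
response = conn.getresponse()

print(response.status, response.reason)
print(response.read()[:200])  # first bytes of the body served by an upstream server
conn.close()
```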

Load balancing method

Our load balancing solution is based on NGINX Plus. We use the "Least Connections" load balancing method: each incoming request is passed to the server with the fewest active connections, taking server weights into account.
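As a rough illustration of the idea (a simplified sketch, not the exact NGINX Plus implementation), the following Python snippet picks the upstream server with the fewest active connections relative to its weight. The server names, weights, and connection counts are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Upstream:
    """A single upstream application server as seen by the balancer."""
    name: str
    weight: int               # higher weight => can take proportionally more connections
    active_connections: int   # connections currently being served

def pick_least_connections(servers: list[Upstream]) -> Upstream:
    """Return the server with the fewest active connections per unit of weight.

    Simplified weighted "Least Connections": the next request goes to the
    server whose active_connections / weight ratio is currently lowest.
    """
    return min(servers, key=lambda s: s.active_connections / s.weight)

# Hypothetical pool of upstream servers.
pool = [
    Upstream("app-1", weight=2, active_connections=8),
    Upstream("app-2", weight=1, active_connections=3),
    Upstream("app-3", weight=1, active_connections=5),
]

chosen = pick_least_connections(pool)
print(f"Next request goes to {chosen.name}")  # app-2: 3/1 = 3.0 beats 8/2 = 4.0 and 5/1 = 5.0
```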
