

Using nginx as HTTP load balancer


Load balancing methods
Default load balancing configuration
Least connected load balancing
Session persistence
Weighted load balancing
Health checks
Further reading

Introduction

Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault-tolerant configurations.

It is possible to use nginx as a very efficient HTTP load balancer to distribute traffic to several application servers and to improve performance, scalability and reliability of web applications with nginx.

Load balancing methods

The following load balancing mechanisms (or methods) are supported in nginx:

round-robin: requests to the application servers are distributed in a round-robin fashion,
least-connected: the next request is assigned to the server with the least number of active connections,
ip-hash: a hash-function is used to determine what server should be selected for the next request (based on the client's IP address).
Default load balancing configuration

The simplest configuration for load balancing with nginx may look like the following:

http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}

In the example above, there are 3 instances of the same application running on srv1-srv3. When the load balancing method is not specifically configured, it defaults to round-robin. All requests are proxied to the server group myapp1, and nginx applies HTTP load balancing to distribute the requests.

Reverse proxy implementation in nginx includes load balancing for HTTP, HTTPS, FastCGI, uwsgi, SCGI, memcached, and gRPC.

To configure load balancing for HTTPS instead of HTTP, just use "https" as the protocol.
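As an illustrative sketch only (the certificate paths and the port 443 upstream addresses below are assumptions, not part of the original article), an HTTPS variant of the configuration above could look roughly like this:

http {
    upstream myapp1 {
        server srv1.example.com:443;
        server srv2.example.com:443;
        server srv3.example.com:443;
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/example.crt;   # placeholder path
        ssl_certificate_key /etc/nginx/certs/example.key;   # placeholder path

        location / {
            # "https" as the protocol makes nginx use HTTPS towards the upstreams
            proxy_pass https://myapp1;
        }
    }
}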
When setting up load balancing for FastCGI, uwsgi, SCGI, memcached, or gRPC, use the fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, and grpc_pass directives respectively.
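For example, a minimal FastCGI sketch (the phpbackend group name, the port 9000 PHP-FPM instances, and the document root are assumptions for illustration) could be:

http {
    upstream phpbackend {
        server srv1.example.com:9000;   # assumed FastCGI (e.g. PHP-FPM) instances
        server srv2.example.com:9000;
    }

    server {
        listen 80;

        location ~ \.php$ {
            include       fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /var/www/html$fastcgi_script_name;   # assumed root
            fastcgi_pass  phpbackend;
        }
    }
}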
Least connected load balancing

Another load balancing discipline is least-connected. Least-connected allows controlling the load on application instances more fairly in a situation when some of the requests take longer to complete.

With the least-connected load balancing, nginx will try not to overload a busy application server with excessive requests, distributing the new requests to a less busy server instead.

Least-connected load balancing in nginx is activated when the least_conn directive is used as part of the server group configuration:

upstream myapp1 {
    least_conn;

    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
Session persistence

Please note that with round-robin or least-connected load balancing, each subsequent client's request can be potentially distributed to a different server. There is no guarantee that the same client will be always directed to the same server.

If there is the need to tie a client to a particular application server (in other words, to make the client's session "sticky" or "persistent" in terms of always trying to select a particular server), the ip-hash load balancing mechanism can be used.

With ip-hash, the client's IP address is used as a hashing key to determine what server in a server group should be selected for the client's requests. This method ensures that the requests from the same client will always be directed to the same server, except when this server is unavailable.

To configure ip-hash load balancing, just add the ip_hash directive to the server (upstream) group configuration:

upstream myapp1 {
    ip_hash;

    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
Weighted load balancing

It is also possible to influence nginx load balancing algorithms even further by using server weights.

In the examples above, the server weights are not configured, which means that all specified servers are treated as equally qualified for a particular load balancing method.

With the round-robin in particular it also means a more or less equal distribution of requests across the servers, provided there are enough requests, and when the requests are processed in a uniform manner and completed fast enough.

When the weight parameter is specified for a server, the weight is accounted as part of the load balancing decision.

upstream myapp1 {
    server srv1.example.com weight=3;
    server srv2.example.com;
    server srv3.example.com;
}
With this configuration, every 5 new requests will be distributed across the application instances as follows: 3 requests will be directed to srv1, one request will go to srv2, and another one to srv3.

It is similarly possible to use weights with the least-connected and ip-hash load balancing in the recent versions of nginx.
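As a brief sketch of combining the two (the weight values here are illustrative, not taken from the article), weights can simply be added to a least_conn group:

upstream myapp1 {
    least_conn;

    server srv1.example.com weight=3;   # illustrative weights
    server srv2.example.com weight=2;
    server srv3.example.com;            # default weight=1
}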
Health checks

Reverse proxy implementation in nginx includes in-band (or passive) server health checks. If the response from a particular server fails with an error, nginx will mark this server as failed, and will try to avoid selecting this server for subsequent inbound requests for a while.

The max_fails directive sets the number of consecutive unsuccessful attempts to communicate with the server that should happen during fail_timeout. By default, max_fails is set to 1. When it is set to 0, health checks are disabled for this server. The fail_timeout parameter also defines how long the server will be marked as failed. After the fail_timeout interval following the server failure, nginx will start to gracefully probe the server with the live client's requests. If the probes have been successful, the server is marked as a live one.
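The article does not show these parameters in a configuration; a minimal sketch with illustrative values (not from the original) might look like this:

upstream myapp1 {
    # treat srv1 as failed after 3 consecutive errors within 30s,
    # and keep it out of rotation for the next 30s
    server srv1.example.com max_fails=3 fail_timeout=30s;
    server srv2.example.com max_fails=3 fail_timeout=30s;
    # max_fails=0 disables the passive health check for this server
    server srv3.example.com max_fails=0;
}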
Further reading

In addition, there are more directives and parameters that control server load balancing in nginx, e.g. proxy_next_upstream, backup, down, and keepalive. For more information please check our reference documentation.
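Purely as an illustration of where those directives and parameters fit (the values and server names are assumptions, not from the article):

upstream myapp1 {
    server srv1.example.com;
    server srv2.example.com down;     # temporarily taken out of rotation
    server srv3.example.com backup;   # used only when the other servers are unavailable
    keepalive 16;                     # keep idle connections to the upstreams open
}

server {
    listen 80;

    location / {
        proxy_pass http://myapp1;
        proxy_http_version 1.1;       # needed for upstream keepalive
        proxy_set_header Connection "";
        proxy_next_upstream error timeout;   # when to try the next server
    }
}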
Last but not least, application load balancing, application health checks, activity monitoring and on-the-fly reconfiguration of server groups are available as part of our paid NGINX Plus subscriptions.

The following articles describe load balancing with NGINX Plus in more detail:

Load Balancing with NGINX and NGINX Plus
Load Balancing with NGINX and NGINX Plus part 2
