HttpUpstreamModule

Synopsis

This module provides simple load-balancing (round-robin, least connections, and client IP) across upstream (backend) servers.

Example:

upstream backend  {
  server backend1.example.com weight=5;
  server backend2.example.com:8080;
  server unix:/tmp/backend3;
}
 
server {
  location / {
    proxy_pass  http://backend;
  }
}

Directives

ip_hash

Syntax: ip_hash
Default:
Context: upstream
Reference: ip_hash


This directive distributes requests between upstream servers based on the IP address of the client.
The key for the hash is the class-C network address of the client. This method ensures that a given client's requests are always passed to the same server. If that server is considered inoperative, the client's requests are transferred to another server. This gives a high probability that clients will always connect to the same server.

It is not possible to combine ip_hash with the weight method of connection distribution. If one of the servers must be removed temporarily, mark it as *down*.

For example:

upstream backend {
  ip_hash;
  server   backend1.example.com;
  server   backend2.example.com;
  server   backend3.example.com  down;
  server   backend4.example.com;
}

keepalive

Syntax: keepalive connections
Default:
Context: upstream
Appeared in: 1.1.4
Reference: keepalive
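
Per the nginx reference, keepalive sets the maximum number of idle keepalive connections to upstream servers that are kept open in the cache of each worker process. A minimal usage sketch for proxied HTTP (the upstream name and connection count are illustrative):

upstream backend {
  server 127.0.0.1:8080;
  # keep up to 16 idle connections to the upstream per worker process
  keepalive 16;
}

server {
  location / {
    proxy_pass http://backend;
    # HTTP/1.1 and a cleared Connection header are needed for
    # keepalive connections to the upstream when proxying
    proxy_http_version 1.1;
    proxy_set_header Connection "";
  }
}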


least_conn

Syntax: least_conn
Default:
Context: upstream
Appeared in: 1.3.1, 1.2.2
Reference: least_conn
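
Per the nginx reference, least_conn activates a balancing method in which a request is passed to the server with the least number of active connections, taking server weights into account. A minimal sketch, reusing the server names from this page:

upstream backend {
  least_conn;
  server backend1.example.com;
  server backend2.example.com;
}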


server

Syntax: server address [ parameters ]
Default:
Context: upstream
Reference: server


This directive assigns a name and parameters to a server. The name can be a domain name, an address, a port, or a UNIX socket path. If a domain name resolves to several addresses, all of them are used.

  • weight = NUMBER - sets the weight of the server; if not set, the weight is equal to one.
  • max_fails = NUMBER - the number of unsuccessful attempts at communicating with the server within the time period (set by the fail_timeout parameter) after which it is considered inoperative. If not set, the number of attempts is one. A value of 0 turns this check off. What counts as a failure is defined by proxy_next_upstream or fastcgi_next_upstream (except http_404 errors, which do not count towards max_fails).
  • fail_timeout = TIME - the time within which max_fails unsuccessful attempts at communicating with the server must occur for the server to be considered inoperative, and also the time for which the server is considered inoperative before another attempt is made. If not set, the time is 10 seconds. fail_timeout has nothing to do with upstream response time; use proxy_connect_timeout and proxy_read_timeout to control that.
  • down - marks the server as permanently offline; used with the ip_hash directive.
  • backup - (0.6.7 or later) the server is used only if all of the non-backup servers are down or busy; it cannot be combined with the ip_hash directive (see the second example below).

Example configuration:

upstream  backend  {
  server   backend1.example.com    weight=5;
  server   127.0.0.1:8080          max_fails=3  fail_timeout=30s;
  server   unix:/tmp/backend3;
}
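
The backup parameter is not shown above; a minimal sketch of a hot-spare setup, assuming backup1.example.com is the spare:

upstream backend {
  server backend1.example.com;
  server backend2.example.com;
  # used only when both primary servers are down or busy
  server backup1.example.com backup;
}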

Attention: if you use only one upstream server, nginx sets an internal variable to 1, which means the max_fails and fail_timeout parameters are not handled.

Effect: if nginx cannot connect to the upstream, the request is lost.

Solution: use the same server several times, as shown below.
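
A minimal sketch of that workaround, repeating the same address so that max_fails and fail_timeout take effect (the address is illustrative):

upstream backend {
  server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
  server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
}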

upstream

Syntax: upstream name { ... }
Default:
Context: http
Reference: upstream


This directive describes a set of servers that can be used by the proxy_pass and fastcgi_pass directives as a single entity. The servers can listen on different ports; it is also possible to mix servers that listen on TCP ports and on UNIX sockets.

Servers can be assigned different weights. If not specified, the weight is equal to one.

Example configuration:

upstream backend {
  server backend1.example.com weight=5;
  server 127.0.0.1:8080       max_fails=3  fail_timeout=30s;
  server unix:/tmp/backend3;
}

Requests are distributed among the servers in round-robin fashion, with respect to the server weights.
In the example above, of every seven requests, five are sent to backend1.example.com and one each to the second and third server. If an error occurs while communicating with a server, the request is passed on to the next server, and so on until all of the servers have been tried. If a successful response cannot be obtained from any of them, the client receives the result from the last server contacted.
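
What counts as an error for this failover is controlled by proxy_next_upstream (or fastcgi_next_upstream for FastCGI backends); a minimal sketch, assuming 5xx responses should also trigger a retry:

location / {
  proxy_pass http://backend;
  # try the next upstream server on connection errors, timeouts,
  # and 500/502/503 responses
  proxy_next_upstream error timeout http_500 http_502 http_503;
}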

Variables

Since version 0.5.18, it is possible to log these upstream variables via the log module.

Configuration example:

log_format timing '$remote_addr - $remote_user [$time_local]  $request '
  'upstream_response_time $upstream_response_time '
  'msec $msec request_time $request_time';
 
log_format up_head '$remote_addr - $remote_user [$time_local]  $request '
  'upstream_http_content_type $upstream_http_content_type';
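
These log_format definitions only declare the formats; a minimal usage sketch (the log paths are illustrative):

access_log /var/log/nginx/timing.log timing;
access_log /var/log/nginx/up_head.log up_head;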

$upstream_addr

Address (ip:port or unix:socket-path) of the upstream server that handled the request. If multiple upstream addresses were accessed while processing the request, the addresses are separated by a comma and a space, for example: "192.168.1.1:80, 192.168.1.2:80, unix:/tmp/sock". If there was an internal redirect from one server group to another using "X-Accel-Redirect" or error_page, the groups of servers are separated by a colon with a space on each side, for example: "192.168.1.1:80, 192.168.1.2:80, unix:/tmp/sock : 192.168.10.1:80, 192.168.10.2:80". Note the spaces: it is a good idea to enclose this variable in double quotes in a log format to make parsing easier, as shown below.
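
For instance, a hedged log_format sketch that quotes the variable (the format name is illustrative):

log_format upstreams '$remote_addr [$time_local] "$request" '
  'upstream_addr "$upstream_addr"';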

$upstream_cache_status

Appeared in 0.8.3. Possible values:

  • MISS
  • EXPIRED - expired, request was passed to backend
  • UPDATING - expired, stale response was used due to proxy/fastcgi_cache_use_stale updating
  • STALE - expired, stale response was used due to proxy/fastcgi_cache_use_stale
  • HIT
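
A common way to inspect this variable during debugging is to expose it in a response header; a minimal sketch (the header name is a convention, not a requirement):

add_header X-Cache-Status $upstream_cache_status;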

$upstream_status

The status of the response from the upstream server. As with $upstream_addr, if more than one upstream server was accessed, the values are separated by commas and by colons with spaces.

$upstream_response_time

Response time of the upstream server(s) in seconds, with millisecond precision. As with $upstream_addr, if more than one upstream server was accessed, the values are separated by commas and by colons with spaces.

$upstream_http_$HEADER

Arbitrary HTTP headers from the upstream response, for example:

$upstream_http_host

Bear in mind that if more than one upstream server is accessed, only the header from the last one appears here.

References

Original Documentation