Proxy_Store Nginx
Module ngx_http_proxy_module – Nginx.org
The ngx_http_proxy_module module allows passing
requests to another server.
Example Configuration
location / {
    proxy_pass       http://localhost:8000;
    proxy_set_header Host      $host;
    proxy_set_header X-Real-IP $remote_addr;
}
Directives
Syntax: proxy_bind address [transparent] | off;
Default: —
Context: http, server, location
This directive appeared in version 0.8.22.
Makes outgoing connections to a proxied server originate
from the specified local IP address with an optional port (1.11.2).
Parameter value can contain variables (1.3.12).
The special value off (1.12.0) cancels the effect
of the proxy_bind directive
inherited from the previous configuration level, which allows the
system to auto-assign the local IP address and port.
The transparent parameter (1.11.0) allows
outgoing connections to a proxied server to originate
from a non-local IP address,
for example, from a real IP address of a client:
proxy_bind $remote_addr transparent;
In order for this parameter to work,
it is usually necessary to run nginx worker processes with the
superuser privileges.
On Linux it is not required (1.13.8) since, if
the transparent parameter is specified, worker processes
inherit the CAP_NET_RAW capability from the master process.
It is also necessary to configure kernel routing table
to intercept network traffic from the proxied server.
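For illustration, a minimal sketch binding outgoing connections to a specific local address (the 192.168.1.10 address and backend URL below are assumptions made for the example):
location / {
    proxy_bind 192.168.1.10;
    proxy_pass http://localhost:8000;
}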
proxy_buffer_size size;
proxy_buffer_size 4k|8k;
Sets the size of the buffer used for reading the first part
of the response received from the proxied server.
This part usually contains a small response header.
By default, the buffer size is equal to one memory page.
This is either 4K or 8K, depending on a platform.
It can be made smaller, however.
proxy_buffering on | off;
proxy_buffering on;
Enables or disables buffering of responses from the proxied server.
When buffering is enabled, nginx receives a response from the proxied server
as soon as possible, saving it into the buffers set by the
proxy_buffer_size and proxy_buffers directives.
If the whole response does not fit into memory, a part of it can be saved
to a temporary file on the disk.
Writing to temporary files is controlled by the
proxy_max_temp_file_size and
proxy_temp_file_write_size directives.
When buffering is disabled, the response is passed to a client synchronously,
immediately as it is received.
nginx will not try to read the whole response from the proxied server.
The maximum size of the data that nginx can receive from the server
at a time is set by the proxy_buffer_size directive.
Buffering can also be enabled or disabled by passing
“yes” or “no” in the
“X-Accel-Buffering” response header field.
This capability can be disabled using the
proxy_ignore_headers directive.
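As a hedged sketch, buffering is often switched off for locations that stream responses to clients (the /stream/ location and backend address are assumptions for the example):
location /stream/ {
    proxy_pass      http://localhost:8000;
    proxy_buffering off;
}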
proxy_buffers number size;
proxy_buffers 8 4k|8k;
Sets the number and size of the
buffers used for reading a response from the proxied server,
for a single connection.
proxy_busy_buffers_size size;
proxy_busy_buffers_size 8k|16k;
When buffering of responses from the proxied
server is enabled, limits the total size of buffers that
can be busy sending a response to the client while the response is not
yet fully read.
In the meantime, the rest of the buffers can be used for reading the response
and, if needed, buffering part of the response to a temporary file.
By default, size is limited by the size of two buffers set by the
proxy_buffer_size and proxy_buffers directives.
proxy_cache zone | off;
proxy_cache off;
Defines a shared memory zone used for caching.
The same zone can be used in several places.
Parameter value can contain variables (1.7.9).
The off parameter disables caching inherited
from the previous configuration level.
proxy_cache_background_update on | off;
proxy_cache_background_update off;
This directive appeared in version 1.11.10.
Allows starting a background subrequest
to update an expired cache item,
while a stale cached response is returned to the client.
Note that it is necessary to
allow the usage of a stale cached response while it is being updated
(see the updating parameter of the proxy_cache_use_stale directive).
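A minimal sketch combining the related directives (the cache zone name and backend address are assumptions for the example; the zone itself must be defined with proxy_cache_path):
location / {
    proxy_pass                    http://localhost:8000;
    proxy_cache                   cache_zone;
    proxy_cache_use_stale         updating;
    proxy_cache_background_update on;
}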
proxy_cache_bypass string… ;
Defines conditions under which the response will not be taken from a cache.
If at least one value of the string parameters is not empty and is not
equal to “0” then the response will not be taken from the cache:
proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;
proxy_cache_bypass $http_pragma $http_authorization;
Can be used along with the proxy_no_cache directive.
proxy_cache_convert_head on | off;
proxy_cache_convert_head on;
This directive appeared in version 1.9.7.
Enables or disables the conversion of the “HEAD” method
to “GET” for caching.
When the conversion is disabled, the
cache key should be configured
to include the $request_method.
proxy_cache_key string;
proxy_cache_key $scheme$proxy_host$request_uri;
Defines a key for caching, for example
proxy_cache_key “$host$request_uri $cookie_user”;
By default, the directive’s value is close to the string
proxy_cache_key $scheme$proxy_host$uri$is_args$args;
proxy_cache_lock on | off;
proxy_cache_lock off;
This directive appeared in version 1.1.12.
When enabled, only one request at a time will be allowed to populate
a new cache element identified according to the proxy_cache_key
directive by passing a request to a proxied server.
Other requests of the same cache element will either wait
for a response to appear in the cache or the cache lock for
this element to be released, up to the time set by the
proxy_cache_lock_timeout directive.
proxy_cache_lock_age time;
proxy_cache_lock_age 5s;
This directive appeared in version 1.7.8.
If the last request passed to the proxied server
for populating a new cache element
has not completed for the specified time,
one more request may be passed to the proxied server.
proxy_cache_lock_timeout time;
proxy_cache_lock_timeout 5s;
Sets a timeout for proxy_cache_lock.
When the time expires,
the request will be passed to the proxied server,
however, the response will not be cached.
Before 1.7.8, the response could be cached.
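As an illustrative sketch (the timings are arbitrary choices, not documented defaults), the lock directives are typically combined so that concurrent misses for the same key collapse into a single upstream request:
proxy_cache_lock         on;
proxy_cache_lock_age     10s;
proxy_cache_lock_timeout 10s;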
proxy_cache_max_range_offset number;
This directive appeared in version 1.11.6.
Sets an offset in bytes for byte-range requests.
If the range is beyond the offset,
the range request will be passed to the proxied server
and the response will not be cached.
proxy_cache_methods
GET |
HEAD |
POST… ;
proxy_cache_methods GET HEAD;
This directive appeared in version 0.7.59.
If the client request method is listed in this directive then
the response will be cached.
“GET” and “HEAD” methods are always
added to the list, though it is recommended to specify them explicitly.
See also the proxy_no_cache directive.
proxy_cache_min_uses number;
proxy_cache_min_uses 1;
Sets the number of requests after which the response
will be cached.
proxy_cache_path
path
[levels=levels]
[use_temp_path=on|off]
keys_zone=name:size
[inactive=time]
[max_size=size]
[min_free=size]
[manager_files=number]
[manager_sleep=time]
[manager_threshold=time]
[loader_files=number]
[loader_sleep=time]
[loader_threshold=time]
[purger=on|off]
[purger_files=number]
[purger_sleep=time]
[purger_threshold=time];
Context: http
Sets the path and other parameters of a cache.
Cache data are stored in files.
The file name in a cache is a result of
applying the MD5 function to the
cache key.
The levels parameter defines hierarchy levels of a cache:
from 1 to 3, each level accepts values 1 or 2.
For example, in the following configuration
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:10m;
file names in a cache will look like this:
/data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c
A cached response is first written to a temporary file,
and then the file is renamed.
Starting from version 0.8.9, temporary files and the cache can be put on
different file systems.
However, be aware that in this case a file is copied
across two file systems instead of the cheap renaming operation.
It is thus recommended that for any given location both cache and a directory
holding temporary files
are put on the same file system.
The directory for temporary files is set based on
the use_temp_path parameter (1.7.10).
If this parameter is omitted or set to the value on,
the directory set by the proxy_temp_path directive
for the given location will be used.
If the value is set to off,
temporary files will be put directly in the cache directory.
In addition, all active keys and information about data are stored
in a shared memory zone, whose name and size
are configured by the keys_zone parameter.
One megabyte zone can store about 8 thousand keys.
As part of
commercial subscription,
the shared memory zone also stores extended
cache information,
thus, it is required to specify a larger zone size for the same number of keys.
For example,
one megabyte zone can store about 4 thousand keys.
Cached data that are not accessed during the time specified by the
inactive parameter get removed from the cache
regardless of their freshness.
By default, inactive is set to 10 minutes.
The special “cache manager” process monitors the maximum cache size set
by the max_size parameter,
and the minimum amount of free space set
by the min_free (1.19.1) parameter
on the file system with cache.
When the size is exceeded or there is not enough free space,
it removes the least recently used data.
The data is removed in iterations configured by
manager_files,
manager_threshold, and
manager_sleep parameters (1.11.5).
During one iteration no more than manager_files items
are deleted (by default, 100).
The duration of one iteration is limited by the
manager_threshold parameter (by default, 200 milliseconds).
Between iterations, a pause configured by the manager_sleep
parameter (by default, 50 milliseconds) is made.
A minute after the start the special “cache loader” process is activated.
It loads information about previously cached data stored on file system
into a cache zone.
The loading is also done in iterations.
During one iteration no more than loader_files items
are loaded (by default, 100).
Besides, the duration of one iteration is limited by the
loader_threshold parameter (by default, 200 milliseconds).
Between iterations, a pause configured by the loader_sleep
parameter (by default, 50 milliseconds) is made.
Additionally,
the following parameters are available as part of our
commercial subscription:
purger=on|off
Instructs whether cache entries that match a
wildcard key
will be removed from the disk by the cache purger (1.7.12).
Setting the parameter to on
(default is off)
will activate the “cache purger” process that
permanently iterates through all cache entries
and deletes the entries that match the wildcard key.
purger_files=number
Sets the number of items that will be scanned during one iteration (1.7.12).
By default, purger_files is set to 10.
purger_threshold=number
Sets the duration of one iteration (1.7.12).
By default, purger_threshold is set to 50 milliseconds.
purger_sleep=number
Sets a pause between iterations (1.7.12).
By default, purger_sleep is set to 50 milliseconds.
In versions 1.7.3, 1.7.7, and 1.11.10 the cache header format has been changed.
Previously cached responses will be considered invalid
after upgrading to a newer nginx version.
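Putting the common parameters together, a hedged sketch of a typical cache definition (the path, zone name, and sizes are illustrative assumptions):
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:10m
                 inactive=60m max_size=1g use_temp_path=off;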
proxy_cache_purge string… ;
This directive appeared in version 1.5.7.
Defines conditions under which the request will be considered a cache
purge request.
If at least one value of the string parameters is not empty and is not equal
to “0” then the cache entry with a corresponding
cache key is removed.
The result of successful operation is indicated by returning
the 204 (No Content) response.
If the cache key of a purge request ends
with an asterisk (“*”), all cache entries matching the
wildcard key will be removed from the cache.
However, these entries will remain on the disk until they are deleted
either due to inactivity,
by the cache purger (1.7.12),
or when a client attempts to access them.
Example configuration:
proxy_cache_path /data/nginx/cache keys_zone=cache_zone:10m;
map $request_method $purge_method {
    PURGE   1;
    default 0;
}
server {
    ...
    location / {
        proxy_pass        http://backend;
        proxy_cache       cache_zone;
        proxy_cache_key   $uri;
        proxy_cache_purge $purge_method;
    }
}
This functionality is available as part of our
commercial subscription.
proxy_cache_revalidate on | off;
proxy_cache_revalidate off;
Enables revalidation of expired cache items using conditional requests with
the “If-Modified-Since” and “If-None-Match”
header fields.
proxy_cache_use_stale
error |
timeout |
invalid_header |
updating |
http_500 |
http_502 |
http_503 |
http_504 |
http_403 |
http_404 |
http_429 |
off …;
proxy_cache_use_stale off;
Determines in which cases a stale cached response can be used
during communication with the proxied server.
The directive’s parameters match the parameters of the
proxy_next_upstream directive.
The error parameter also permits
using a stale cached response if a proxied server to process a request
cannot be selected.
Additionally, the updating parameter permits
using a stale cached response if it is currently being updated.
This allows minimizing the number of accesses to proxied servers
when updating cached data.
Using a stale cached response
can also be enabled directly in the response header
for a specified number of seconds after the response became stale (1.11.10).
This has lower priority than using the directive parameters.
The “stale-while-revalidate”
extension of the “Cache-Control” header field permits
using a stale cached response if it is currently being updated.
The “stale-if-error”
extension of the “Cache-Control” header field permits
using a stale cached response in case of an error.
To minimize the number of accesses to proxied servers when
populating a new cache element, the proxy_cache_lock
directive can be used.
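For illustration, a hedged sketch that serves stale content on upstream errors and while a cache entry is being refreshed (the particular set of parameters is an assumption, not a documented recommendation):
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;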
proxy_cache_valid [code… ] time;
Sets caching time for different response codes.
For example, the following directives
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
set 10 minutes of caching for responses with codes 200 and 302
and 1 minute for responses with code 404.
If only caching time is specified
proxy_cache_valid 5m;
then only 200, 301, and 302 responses are cached.
In addition, the any parameter can be specified
to cache any responses:
proxy_cache_valid 301 1h;
proxy_cache_valid any 1m;
Parameters of caching can also be set directly
in the response header.
This has higher priority than setting of caching time using the directive.
The “X-Accel-Expires” header field sets caching time of a
response in seconds.
The zero value disables caching for a response.
If the value starts with the @ prefix, it sets an absolute
time in seconds since Epoch, up to which the response may be cached.
If the header does not include the “X-Accel-Expires” field,
parameters of caching may be set in the header fields
“Expires” or “Cache-Control”.
If the header includes the “Set-Cookie” field, such a
response will not be cached.
If the header includes the “Vary” field
with the special value “*”, such a
response will not be cached (1.7.7).
If the header includes the “Vary” field
with another value, such a response will be cached
taking into account the corresponding request header fields (1.7.7).
Processing of one or more of these response header fields can be disabled
using the proxy_ignore_headers directive.
proxy_connect_timeout time;
proxy_connect_timeout 60s;
Defines a timeout for establishing a connection with a proxied server.
It should be noted that this timeout cannot usually exceed 75 seconds.
proxy_cookie_domain off;
proxy_cookie_domain domain replacement;
proxy_cookie_domain off;
This directive appeared in version 1.1.15.
Sets a text that should be changed in the domain
attribute of the “Set-Cookie” header fields of a
proxied server response.
Suppose a proxied server returned the “Set-Cookie”
header field with the attribute
“domain=localhost”.
The directive
proxy_cookie_domain localhost example.org;
will rewrite this attribute to
“domain=example.org”.
A dot at the beginning of the domain and
replacement strings and the domain
attribute is ignored.
Matching is case-insensitive.
The domain and replacement strings
can contain variables:
proxy_cookie_domain $host $host;
The directive can also be specified using regular expressions.
In this case, domain should start from
the “~” symbol.
A regular expression can contain named and positional captures,
and replacement can reference them:
proxy_cookie_domain ~\.(?P&lt;sl_domain&gt;[-0-9a-z]+\.[a-z]+)$ $sl_domain;
Several proxy_cookie_domain directives
can be specified on the same level:
proxy_cookie_domain localhost example.org;
proxy_cookie_domain ~\.([a-z]+\.[a-z]+)$ $1;
If several directives can be applied to the cookie,
the first matching directive will be chosen.
The off parameter cancels the effect
of the proxy_cookie_domain directives
inherited from the previous configuration level.
proxy_cookie_flags
off |
cookie
[flag… ];
proxy_cookie_flags off;
This directive appeared in version 1.19.3.
Sets one or more flags for the cookie.
The cookie can contain text, variables, and their combinations.
The flag
can contain text, variables, and their combinations (1.19.8).
The secure, httponly, samesite=strict,
samesite=lax, and samesite=none
parameters add the corresponding flags.
The nosecure, nohttponly, and nosamesite
parameters remove the corresponding flags.
The cookie can also be specified using regular expressions.
In this case, cookie should start from
the “~” symbol.
Several proxy_cookie_flags directives
can be specified on the same configuration level:
proxy_cookie_flags one httponly;
proxy_cookie_flags ~ nosecure samesite=strict;
In the example, the httponly flag
is added to the cookie one,
for all other cookies
the samesite=strict flag is added and
the secure flag is deleted.
The off parameter cancels the effect
of the proxy_cookie_flags directives
inherited from the previous configuration level.
proxy_cookie_path off;
proxy_cookie_path path replacement;
proxy_cookie_path off;
Sets a text that should be changed in the path
attribute of the “Set-Cookie” header fields of a proxied server response.
Suppose a proxied server returned the “Set-Cookie”
header field with the attribute
“path=/two/some/uri/”.
The directive
proxy_cookie_path /two/ /;
will rewrite this attribute to
“path=/some/uri/”.
The path and replacement strings
can contain variables:
proxy_cookie_path $uri /some$uri;
The directive can also be specified using regular expressions.
In this case, path should either start from
the “~” symbol for a case-sensitive matching,
or from the “~*” symbols for case-insensitive
matching.
The regular expression can contain named and positional captures,
and replacement can reference them:
proxy_cookie_path ~*^/user/([^/]+) /u/$1;
Several proxy_cookie_path directives
can be specified on the same configuration level:
proxy_cookie_path /one/ /;
proxy_cookie_path / /two/;
If several directives can be applied to the cookie,
the first matching directive will be chosen.
The off parameter cancels the effect
of the proxy_cookie_path directives
inherited from the previous configuration level.
proxy_force_ranges on | off;
proxy_force_ranges off;
This directive appeared in version 1.7.7.
Enables byte-range support
for both cached and uncached responses from the proxied server
regardless of the “Accept-Ranges” field in these responses.
proxy_headers_hash_bucket_size size;
proxy_headers_hash_bucket_size 64;
Sets the bucket size for hash tables
used by the proxy_hide_header and proxy_set_header
directives.
The details of setting up hash tables are provided in a separate
document.
proxy_headers_hash_max_size size;
proxy_headers_hash_max_size 512;
Sets the maximum size of hash tables
used by the proxy_hide_header and proxy_set_header directives.
proxy_hide_header field;
By default,
nginx does not pass the header fields “Date”,
“Server”, “X-Pad”, and
“X-Accel-… ” from the response of a proxied
server to a client.
The proxy_hide_header directive sets additional fields
that will not be passed.
If, on the contrary, the passing of fields needs to be permitted,
the proxy_pass_header directive can be used.
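For example, a small hedged sketch hiding implementation-revealing headers from clients (the header names are assumptions about what a backend might send):
proxy_hide_header X-Powered-By;
proxy_hide_header X-Runtime;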
proxy_http_version 1.0 | 1.1;
proxy_http_version 1.0;
This directive appeared in version 1.1.4.
Sets the HTTP protocol version for proxying.
By default, version 1.0 is used.
Version 1.1 is recommended for use with
keepalive
connections and
NTLM authentication.
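A hedged sketch of HTTP/1.1 proxying with upstream keepalive connections (the upstream name, address, and keepalive count are assumptions for the example):
upstream backend {
    server 127.0.0.1:8000;
    keepalive 16;
}
server {
    location / {
        proxy_pass         http://backend;
        proxy_http_version 1.1;
        proxy_set_header   Connection "";
    }
}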
proxy_ignore_client_abort on | off;
proxy_ignore_client_abort off;
Determines whether the connection with a proxied server should be
closed when a client closes the connection without waiting
for a response.
proxy_ignore_headers field… ;
Disables processing of certain response header fields from the proxied server.
The following fields can be ignored: “X-Accel-Redirect”,
“X-Accel-Expires”, “X-Accel-Limit-Rate” (1.1.6),
“X-Accel-Buffering” (1.1.6),
“X-Accel-Charset” (1.1.6), “Expires”,
“Cache-Control”, “Set-Cookie” (0.8.44),
and “Vary” (1.7.7).
If not disabled, processing of these header fields has the following
effect:
“X-Accel-Expires”, “Expires”,
“Cache-Control”, “Set-Cookie”,
and “Vary”
set the parameters of response caching;
“X-Accel-Redirect” performs an
internal
redirect to the specified URI;
“X-Accel-Limit-Rate” sets the
rate
limit for transmission of a response to a client;
“X-Accel-Buffering” enables or disables
buffering of a response;
“X-Accel-Charset” sets the desired
charset
of a response.
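As a hedged example, ignoring upstream cache-control headers so that caching is governed only by proxy_cache_valid (whether this is appropriate depends on the application):
proxy_ignore_headers Cache-Control Expires Set-Cookie;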
proxy_intercept_errors on | off;
proxy_intercept_errors off;
Determines whether proxied responses with codes greater than or equal
to 300 should be passed to a client
or be intercepted and redirected to nginx for processing
with the error_page directive.
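A minimal sketch serving a local error page for upstream failures (the /50x.html page, its root, and the backend address are assumptions for the example):
location / {
    proxy_pass             http://localhost:8000;
    proxy_intercept_errors on;
    error_page 500 502 503 504 /50x.html;
}
location = /50x.html {
    root /usr/share/nginx/html;
}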
proxy_limit_rate rate;
proxy_limit_rate 0;
Limits the speed of reading the response from the proxied server.
The rate is specified in bytes per second.
The zero value disables rate limiting.
The limit is set per request, and so if nginx simultaneously opens
two connections to the proxied server,
the overall rate will be twice as much as the specified limit.
The limitation works only if
buffering of responses from the proxied
server is enabled.
proxy_max_temp_file_size size;
proxy_max_temp_file_size 1024m;
When buffering of responses from the proxied
server is enabled, and the whole response does not fit into the buffers
set by the proxy_buffer_size and proxy_buffers
directives, a part of the response can be saved to a temporary file.
This directive sets the maximum size of the temporary file.
The size of data written to the temporary file at a time is set
by the proxy_temp_file_write_size directive.
The zero value disables buffering of responses to temporary files.
This restriction does not apply to responses
that will be cached
or stored on disk.
proxy_method method;
Specifies the HTTP method to use in requests forwarded
to the proxied server instead of the method from the client request.
Parameter value can contain variables (1.11.6).
proxy_next_upstream
error |
timeout |
invalid_header |
http_500 |
http_502 |
http_503 |
http_504 |
http_403 |
http_404 |
http_429 |
non_idempotent |
off …;
proxy_next_upstream error timeout;
Specifies in which cases a request should be passed to the next server:
error
an error occurred while establishing a connection with the
server, passing a request to it, or reading the response header;
timeout
a timeout has occurred while establishing a connection with the
server, passing a request to it, or reading the response header;
invalid_header
a server returned an empty or invalid response;
http_500
a server returned a response with the code 500;
http_502
a server returned a response with the code 502;
http_503
a server returned a response with the code 503;
http_504
a server returned a response with the code 504;
http_403
a server returned a response with the code 403;
http_404
a server returned a response with the code 404;
http_429
a server returned a response with the code 429 (1.11.13);
non_idempotent
normally, requests with a
non-idempotent
method
(POST, LOCK, PATCH)
are not passed to the next server
if a request has been sent to an upstream server (1.9.13);
enabling this option explicitly allows retrying such requests;
off
disables passing a request to the next server.
One should bear in mind that passing a request to the next server is
only possible if nothing has been sent to a client yet.
That is, if an error or timeout occurs in the middle of the
transferring of a response, fixing this is impossible.
The directive also defines what is considered an
unsuccessful
attempt of communication with a server.
The cases of error, timeout and
invalid_header are always considered unsuccessful attempts,
even if they are not specified in the directive.
The cases of http_500, http_502,
http_503, http_504,
and http_429 are
considered unsuccessful attempts only if they are specified in the directive.
The cases of http_403 and http_404
are never considered unsuccessful attempts.
Passing a request to the next server can be limited by
the number of tries
and by time.
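A hedged sketch combining the retry-related directives (the chosen conditions and the limit of three tries are illustrative, not defaults):
proxy_next_upstream       error timeout http_502 http_503 http_504;
proxy_next_upstream_tries 3;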
proxy_next_upstream_timeout time;
proxy_next_upstream_timeout 0;
This directive appeared in version 1.7.5.
Limits the time during which a request can be passed to the
next server.
The 0 value turns off this limitation.
proxy_next_upstream_tries number;
proxy_next_upstream_tries 0;
Limits the number of possible tries for passing a request to the
next server.
The 0 value turns off this limitation.
proxy_no_cache string… ;
Defines conditions under which the response will not be saved to a cache.
If at least one value of the string parameters is not empty and is not
equal to “0” then the response will not be saved:
proxy_no_cache $cookie_nocache $arg_nocache$arg_comment;
proxy_no_cache $http_pragma $http_authorization;
Can be used along with the proxy_cache_bypass directive.
proxy_pass URL;
Context: location, if in location, limit_except
Sets the protocol and address of a proxied server and an optional URI
to which a location should be mapped.
As a protocol, “http” or “https”
can be specified.
The address can be specified as a domain name or IP address,
and an optional port:
proxy_pass http://localhost:8000/uri/;
or as a UNIX-domain socket path specified after the word
“unix” and enclosed in colons:
proxy_pass http://unix:/tmp/backend.socket:/uri/;
If a domain name resolves to several addresses, all of them will be
used in a round-robin fashion.
In addition, an address can be specified as a
server group.
Parameter value can contain variables.
In this case, if an address is specified as a domain name,
the name is searched among the described server groups,
and, if not found, is determined using a
resolver.
A request URI is passed to the server as follows:
If the proxy_pass directive is specified with a URI,
then when a request is passed to the server, the part of a
normalized
request URI matching the location is replaced by a URI
specified in the directive:
location /name/ {
    proxy_pass http://127.0.0.1/remote/;
}
If proxy_pass is specified without a URI,
the request URI is passed to the server in the same form
as sent by a client when the original request is processed,
or the full normalized request URI is passed
when processing the changed URI:
location /some/path/ {
    proxy_pass http://127.0.0.1;
}
Before version 1.1.12,
if proxy_pass is specified without a URI,
the original request URI might be passed
instead of the changed URI in some cases.
In some cases, the part of a request URI to be replaced cannot be determined:
When location is specified using a regular expression,
and also inside named locations.
In these cases,
proxy_pass should be specified without a URI.
When the URI is changed inside a proxied location using the
rewrite directive,
and this same configuration will be used to process a request
(break):
location /name/ {
    rewrite    /name/([^/]+) /users?name=$1 break;
    proxy_pass http://127.0.0.1;
}
In this case, the URI specified in the directive is ignored and
the full changed request URI is passed to the server.
When variables are used in proxy_pass:
location /name/ {
    proxy_pass http://127.0.0.1$request_uri;
}
In this case, if URI is specified in the directive,
it is passed to the server as is,
replacing the original request URI.
WebSocket proxying requires special
configuration and is supported since version 1.3.13.
proxy_pass_header field;
Permits passing otherwise disabled header
fields from a proxied server to a client.
proxy_pass_request_body on | off;
proxy_pass_request_body on;
Indicates whether the original request body is passed
to the proxied server.
location /x-accel-redirect-here/ {
    proxy_method GET;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_pass ...
}
See also the proxy_set_header and
proxy_pass_request_headers directives.
proxy_pass_request_headers on | off;
proxy_pass_request_headers on;
Indicates whether the header fields of the original request are passed
to the proxied server.
location /x-accel-redirect-here/ {
    proxy_method GET;
    proxy_pass_request_headers off;
    proxy_pass_request_body off;
    proxy_pass ...
}
See also the proxy_set_header and
proxy_pass_request_body directives.
proxy_read_timeout time;
proxy_read_timeout 60s;
Defines a timeout for reading a response from the proxied server.
The timeout is set only between two successive read operations,
not for the transmission of the whole response.
If the proxied server does not transmit anything within this time,
the connection is closed.
proxy_redirect default;
proxy_redirect off;
proxy_redirect redirect replacement;
proxy_redirect default;
Sets the text that should be changed in the “Location”
and “Refresh” header fields of a proxied server response.
Suppose a proxied server returned the header field
“Location: http://localhost:8000/two/some/uri/”.
The directive
proxy_redirect http://localhost:8000/two/ http://frontend/one/;
will rewrite this string to
“Location: http://frontend/one/some/uri/”.
A server name may be omitted in the replacement string:
proxy_redirect http://localhost:8000/two/ /;
then the primary server’s name and port, if different from 80,
will be inserted.
The default replacement specified by the default parameter
uses the parameters of the
location and
proxy_pass directives.
Hence, the two configurations below are equivalent:
location /one/ {
    proxy_pass     http://upstream:port/two/;
}

location /one/ {
    proxy_pass     http://upstream:port/two/;
    proxy_redirect http://upstream:port/two/ /one/;
}
The default parameter is not permitted if
proxy_pass is specified using variables.
A replacement string can contain variables:
proxy_redirect http://localhost:8000/ http://$host:$server_port/;
A redirect can also contain (1.1.11) variables:
proxy_redirect http://$proxy_host:8000/ /;
The directive can be specified (1.1.11) using regular expressions.
In this case, redirect should either start with
the “~” symbol for a case-sensitive matching,
or with the “~*” symbols for case-insensitive
matching:
proxy_redirect ~^(http://[^:]+):\d+(/.+)$ $1$2;
proxy_redirect ~*/user/([^/]+)/(.+)$ http://$1.example.com/$2;
Several proxy_redirect directives
can be specified on the same configuration level:
proxy_redirect http://localhost:8000/ /;
proxy_redirect http://www.example.com/ /;
If several directives can be applied to
the header fields of a proxied server response,
the first matching directive will be chosen.
The off parameter cancels the effect
of the proxy_redirect directives
inherited from the previous configuration level.
Using this directive, it is also possible to add host names to relative
redirects issued by a proxied server:
proxy_redirect / http://frontend/;
proxy_request_buffering on | off;
proxy_request_buffering on;
This directive appeared in version 1.7.11.
Enables or disables buffering of a client request body.
When buffering is enabled, the entire request body is
read
from the client before sending the request to a proxied server.
When buffering is disabled, the request body is sent to the proxied server
immediately as it is received.
In this case, the request cannot be passed to the
next server
if nginx already started sending the request body.
When HTTP/1.1 chunked transfer encoding is used
to send the original request body,
the request body will be buffered regardless of the directive value unless
HTTP/1.1 is enabled for proxying.
proxy_send_lowat size;
proxy_send_lowat 0;
If the directive is set to a non-zero value, nginx will try to
minimize the number
of send operations on outgoing connections to a proxied server by using either
NOTE_LOWAT flag of the
kqueue method,
or the SO_SNDLOWAT socket option,
with the specified size.
This directive is ignored on Linux, Solaris, and Windows.
proxy_send_timeout time;
proxy_send_timeout 60s;
Sets a timeout for transmitting a request to the proxied server.
The timeout is set only between two successive write operations,
not for the transmission of the whole request.
If the proxied server does not receive anything within this time,
the connection is closed.
proxy_set_body value;
Allows redefining the request body passed to the proxied server.
The value can contain text, variables, and their combination.
proxy_set_header field value;
proxy_set_header Host $proxy_host;
proxy_set_header Connection close;
Allows redefining or appending fields to the request header
passed to the proxied server.
The value can contain text, variables, and their combinations.
These directives are inherited from the previous configuration level
if and only if there are no proxy_set_header directives
defined on the current level.
By default, only two fields are redefined:
proxy_set_header Host $proxy_host;
proxy_set_header Connection close;
If caching is enabled, the header fields
“If-Modified-Since”,
“If-Unmodified-Since”,
“If-None-Match”,
“If-Match”,
“Range”,
and
“If-Range”
from the original request are not passed to the proxied server.
An unchanged “Host” request header field can be passed like this:
proxy_set_header Host $http_host;
However, if this field is not present in a client request header then
nothing will be passed.
In such a case it is better to use the $host variable – its
value equals the server name in the “Host” request header
field or the primary server name if this field is not present:
proxy_set_header Host $host;
In addition, the server name can be passed together with the port of the
proxied server:
proxy_set_header Host $host:$proxy_port;
If the value of a header field is an empty string then this
field will not be passed to a proxied server:
proxy_set_header Accept-Encoding "";
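To tie the directive together, a common hedged sketch passing client information to the upstream (the X-Real-IP and X-Forwarded-* header names are conventions, not something nginx requires):
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;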
nginx reverse proxy, when to use cache vs store? – Server Fault
I’m in the process of restructuring my project’s web stack to:
nginx -> haproxy -> many (apache/passenger rails) instances
Some of the goals include:
single location for page caching (currently done via rails on each apache machine)
faster static content
remove ssl from internal pipeline
ip logging (previously lost due to running haproxy in tcp mode)
The image/stylesheet/javascript assets are cached, with appropriate headers. Our page caching is based on internal parameters, and shouldn’t respond to typical cache controls. To achieve these ends, our config looks something like
server {
    ...
    location /really_slow_dynamic_content/ {
        root /var/www/tmp;
        error_page 404 = @fetch;
    }
    location @fetch {
        internal;
        proxy_pass         http://haproxy_ip;
        proxy_store        /var/www/tmp${uri};
        proxy_store_access user:rw group:rw all:r;
    }
    location /assets/ {
        proxy_cache assets;
    }
    location / {
        proxy_pass http://haproxy_ip;
    }
}
I’m not really much of a sysadmin, and I know there are lots of alternatives/tweaks/additions that might be helpful. I also don’t quite understand the difference between proxy_cache and proxy_store. So to my actual question…
Until we move the assets to the nginx machine, does it make sense to use proxy_cache for assets and proxy_store for slow dynamic content?
Also, if there are other considerations or software I should be considering, I would love to hear about them. Thank you!
Since posting this question, I’ve realized that the initial config I used doesn’t use the store at all, and that the error_page and internal settings from the (semi?) official wiki example weren’t exactly optional (config updated here since it seems to be working, and a working config seems like a better legacy for this question). So, using the store for slow-to-create (and rarely updated) full pages, and the actual cache for images, javascript and such seems to be working pretty well for us. I’ll accept the one answer, since it at least gave me a lead to track down my issue, but I still don’t have a sense of whether or not I’m using the two directives in a manner for which they were intended or not (well, at least not regarding the store, the cache seems a bit more obvious).
nginx dynamic caching using proxy_store with reverse-proxy …
# servers to proxy to
upstream servers {
server 10.0.1;
server 10.2;
}
server {
listen 80;
server_name
# this proxy_method flag for logging will be helpful
# in determining if object is passed, cached or stored
set $proxy_method STORE;
set $store_extra '';
# store dynamic content as physical files
# appended with an extension
location ~ /GetJSON {
root /var/tmp/nginx/json;
expires max;
try_files $ @fetch_json;}
location @fetch_json {
internal;
proxy_pass http://servers$request_uri;
proxy_store /var/tmp/nginx/json${request_uri};
proxy_store_access user:rw group:rw all:r;
set $proxy_method PASS;}
# any other static content will be handled by nginx
# reverse proxy and caching
location / {
proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;
proxy_cache MYCACHE;
proxy_cache_use_stale error;
proxy_cache_valid any 30d;
add_header X-NGINX-Cache $upstream_cache_status;
set $proxy_method CACHE;}
# this custom logging will show a ‘HIT’ or a ‘MISS’ tag
# per content
# note: log_format is only valid in the http context
log_format up_head '[$time_local] "$request" '
    '$status $body_bytes_sent "$http_referer" '
    '"$http_x_forwarded_for" "$upstream_cache_status" '
    '"$proxy_method"';
access_log /var/log/nginx/ up_head;
}