Root directory and index file
The root directive specifies the root directory that will be used to search for files. To obtain the path of a requested file, NGINX appends the request URI to the path specified in the root directive. The directive can be placed on any level within the http {}, server {}, or location {} contexts. In the following example, the root directive is defined for a virtual server. It applies to all location {} blocks that do not include a root directive to explicitly redefine the root:
server {
    root /www/data;

    location / {
    }

    location /images/ {
    }

    location ~ \.(mp3|mp4)$ {
        root /www/media;
    }
}
Here, for a URI that starts with /images/, NGINX searches for files in the /www/data/images/ directory in the file system. If the URI ends with the .mp3 or .mp4 extension, NGINX instead searches for the file in the /www/media/ directory, because that root is defined in the matching location block.
If a request ends with a slash, NGINX treats it as a request for a directory and tries to find an index file in that directory. The index directive defines the name of the index file (the default value is index.html). To continue with the example, if the request URI is /images/some/path/, NGINX returns the file /www/data/images/some/path/index.html if it exists. If it does not, NGINX returns the HTTP 404 (Not Found) error by default. To configure NGINX to return an automatically generated directory listing instead, include the on parameter in the autoindex directive:
location /images/ {
    autoindex on;
}
You can list more than one filename in the index directive. NGINX searches for the files in the specified order and returns the first one it finds:
location / {
    index index.$geo.html index.htm index.html;
}
The $geo variable used here is a custom variable set through the geo directive. The value of the variable depends on the client's IP address.
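As an illustration, such a variable could be defined with a geo block like the following sketch (the address ranges and suffix values here are hypothetical, not part of the original configuration):

```nginx
# Hypothetical mapping from client address ranges to a filename suffix,
# so the index directive above can resolve to index.us.html,
# index.local.html, and so on, depending on the client's IP address.
geo $geo {
    default        us;       # used when no range below matches
    192.168.1.0/24 local;
    10.1.0.0/16    uk;
}
```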
To return the index file, NGINX checks whether it exists and then makes an internal redirect to the URI obtained by appending the name of the index file to the base URI. The internal redirect results in a new search of a location, and can end up in a different location, as in the following example:
location / {
    root /data;
    index index.html index.php;
}

location ~ \.php {
    fastcgi_pass localhost:8000;
    #...
}
Here, if the URI in a request is /path/, and /data/path/index.html does not exist but /data/path/index.php does, the internal redirect to /path/index.php is mapped to the second location. As a result, the request is proxied.
Trying several options
The try_files directive can be used to check whether a specified file or directory exists; NGINX makes an internal redirect if it does, or returns a specified status code if it doesn't. For example, to check whether the file corresponding to the request URI exists, use the try_files directive and the $uri variable as follows:
server {
    root /www/data;

    location /images/ {
        try_files $uri /images/default.gif;
    }
}
The file is specified in the form of a URI, which is processed with the root or alias directives set in the context of the current location or virtual server. In this case, if the file corresponding to the original URI does not exist, NGINX makes an internal redirect to the URI specified by the last parameter and returns /www/data/images/default.gif.
The last parameter can also be a status code (directly preceded by an equals sign) or the name of a location. In the following example, a 404 error is returned if none of the parameters of the try_files directive resolves to an existing file or directory:
location / {
    try_files $uri $uri/ $uri.html =404;
}
In the next example, if neither the original URI nor the URI with an appended trailing slash resolves to an existing file or directory, the request is redirected to the named location, which passes it to a proxied server:
location / {
    try_files $uri $uri/ @backend;
}

location @backend {
    proxy_pass http://backend.example.com;
}
For more information, watch the content caching webinar to learn how to significantly improve website performance and learn more about NGINX’s caching capabilities.
Optimize the performance of serving content
Loading speed is a crucial factor in serving any content. Minor optimizations to your NGINX configuration can boost productivity and help achieve optimal performance.
Enable sendfile
By default, NGINX handles the file transmission itself and copies the file into a buffer before sending it. Enabling the sendfile directive eliminates the step of copying the data into the buffer and enables direct copying of data from one file descriptor to another. Alternatively, to prevent one fast connection from entirely occupying a worker process, you can use the sendfile_max_chunk directive to limit the amount of data transferred in a single sendfile() call (in this example, to 1 MB):
location /mp3 {
    sendfile           on;
    sendfile_max_chunk 1m;
    #...
}
Enable tcp_nopush
The tcp_nopush directive is used together with the sendfile on; directive. It enables NGINX to send HTTP response headers in one packet right after the chunk of data has been obtained by sendfile():
location /mp3 {
    sendfile   on;
    tcp_nopush on;
    #...
}
Enable tcp_nodelay
The tcp_nodelay directive allows overriding Nagle's algorithm, which was originally designed to solve problems with small packets in slow networks. The algorithm consolidates a number of small packets into a larger one and sends the packet with a 200 millisecond delay. Nowadays, when serving large static files, the data can be sent immediately regardless of the packet size. The delay also affects online applications (ssh, online games, online trading, and so on). By default, the tcp_nodelay directive is set to on, which means that Nagle's algorithm is disabled. This directive is used only for keepalive connections:
location /mp3 {
    tcp_nodelay       on;
    keepalive_timeout 65;
    #...
}
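Taken together, the directives from this section can be combined in a single location block. The following sketch simply merges the examples above into one configuration (the /mp3 path and the parameter values are taken from those examples):

```nginx
location /mp3 {
    sendfile           on;   # send files without copying them into a buffer
    sendfile_max_chunk 1m;   # cap the data sent per sendfile() call
    tcp_nopush         on;   # send response headers with the first data packet
    tcp_nodelay        on;   # disable Nagle's algorithm on keepalive connections
    keepalive_timeout  65;
}
```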
Optimize the backlog queue
One of the important factors is how fast NGINX can handle incoming connections. The general rule is that when a connection is established, it is put into the "listen" queue of the listening socket. Under normal load, the queue is small or there is no queue at all. But under high load, the queue can grow dramatically, resulting in uneven performance, dropped connections, and increased latency.
Display the backlog queue
To display the current listen queue, run the netstat -Lan command. The output might look like the following, which shows 10 unaccepted connections in the listen queue on port 80, against a configured maximum of 128 queued connections. This situation is normal.
Current listen queue sizes (qlen/incqlen/maxqlen)
Listen Local Address
0/0/128 *.12345
10/0/128 *.80
0/0/128 *.8080
In contrast, in the following output the number of unaccepted connections (192) exceeds the limit of 128. This is common when a website experiences heavy traffic. To achieve optimal performance, you need to increase the maximum number of connections that can be queued for acceptance by NGINX, in both your operating system and the NGINX configuration.
Current listen queue sizes (qlen/incqlen/maxqlen)
Listen Local Address
0/0/128 *.12345
192/0/128 *.80
0/0/128 *.8080
Adjust the operating system
Increase the value of the somaxconn kernel parameter from its default (128) to a value high enough to accommodate the increased traffic. In this example, it is increased to 4096.
- On FreeBSD, run the command:
sudo sysctl kern.ipc.somaxconn=4096
- On Linux:
1. Run the command:
sudo sysctl -w net.core.somaxconn=4096
2. Add net.core.somaxconn = 4096 to the /etc/sysctl.conf file.
Adjust NGINX
If you set the somaxconn kernel parameter to a value greater than 512, increase the backlog parameter of the NGINX listen directive to match:
server {
    listen 80 backlog=4096;
    # ...
}
© This article is a translation of NGINX's "Serving Static Content", with some semantic adjustments.