NGINX and NGINX PLUS Caching Guide

Original author: Faisal Memon of F5

Original link: NGINX and NGINX PLUS Caching Guide

Reprint source: NGINX Chinese official website


The only official Chinese community of NGINX, all at nginx.org.cn 

We all know that the performance of applications and websites can determine the success or failure of a business. But there is no single answer to how to make them perform better. While code quality and infrastructure matter, you can often dramatically improve the end-user experience with some very basic application delivery techniques. Implementing and optimizing caching in the application stack is a prime example. This blog post introduces techniques for improving performance using the content caching capabilities of NGINX and NGINX Plus, for both novice and advanced users.

Overview

The content cache sits between the client and the "origin server" and saves copies of the content it has seen. If a client requests content that the cache has stored, the cache returns it directly without contacting the origin server. This improves performance because the cache is closer to the client, and the application server is used more efficiently because it does not have to regenerate the page for every request.

Several types of caches may exist between the web browser and the application server: client browser cache, intermediate cache, content delivery network (CDN), and load balancer or reverse proxy in front of the application server. Even caching at the reverse proxy/load balancer level can greatly improve performance.

For example, last year I took on the task of optimizing the performance of a website that was loading slowly. The first thing that caught my attention was that the site took more than 1 second to generate the home page. After some debugging, I found the reason: the page was marked as non-cacheable, so it was generated dynamically in response to every request. The page itself did not change frequently and contained no personalized content, so this was unnecessary. As an experiment, I marked the home page to be cached by the load balancer for 5 seconds. Just this one adjustment produced a noticeable improvement: the time to first byte dropped to a few milliseconds, and pages loaded visibly faster.

NGINX is typically deployed as a reverse proxy or load balancer in an application stack, with full caching capabilities. The next section discusses how to configure NGINX's basic caching capabilities.

How to set up and configure basic caching functionality

Only two directives are needed to enable basic caching: proxy_cache_path and proxy_cache. The proxy_cache_path directive sets the path and configuration of the cache, and the proxy_cache directive activates it.

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g 
                 inactive=60m use_temp_path=off;

server {
    # ...
    location / {
        proxy_cache my_cache;
        proxy_pass http://my_upstream;
    }
}

The parameters of the proxy_cache_path directive define the following settings:

  • The local disk directory used for the cache is /path/to/cache/.

  • levels sets up a two-level directory hierarchy under /path/to/cache/. Having a large number of files in a single directory can slow down file access, so we recommend a two-level directory structure for most deployments. If the levels parameter is omitted, NGINX puts all files in the same directory.

  • keys_zone sets up a shared memory zone for storing the cache keys and metadata (such as usage timers). Keeping a copy of the keys in memory lets NGINX quickly determine whether a request is a cache hit (HIT) or miss (MISS) without going to disk, which greatly speeds up the check. A 1 MB zone can store data for about 8,000 keys, so the 10 MB zone configured in the example can store data for about 80,000 keys.

  • max_size sets an upper limit on the size of the cache (10 GB in the example). It is optional; omitting it allows the cache to grow until it uses all available disk space. When the cache reaches the limit, a process called the cache manager removes the least recently used files to bring the cache back under the limit.

  • inactive specifies how long an item can remain in the cache without being accessed. In the example, a file that has not been requested for 60 minutes is automatically deleted from the cache by the cache manager process, regardless of whether it has expired. The default value is 10 minutes (10m). Note that inactive content is different from expired content. NGINX does not automatically delete content that has expired as defined by a cache control header (for example, Cache-Control: max-age=120). Expired (stale) content is deleted only when it has not been accessed for the time specified by inactive. When a client requests expired content, NGINX refreshes it from the origin server and resets the inactive timer.

  • NGINX first writes files that are destined for the cache to a temporary storage area; use_temp_path=off instructs NGINX to write them directly to the directory where they will be cached. We recommend setting this parameter to off to avoid unnecessary copying of data between file systems. use_temp_path was introduced in NGINX 1.7.10 and NGINX Plus R6.

Finally, the proxy_cache directive enables caching of all content that matches the URL of the parent location block (/ in this example). You can also add a proxy_cache directive to a server block; it then applies to all location blocks of that server that do not have a proxy_cache directive of their own.
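As a sketch of the server-level behavior just described (the /uncached/ path and its backend are hypothetical, for illustration only), proxy_cache placed in the server block is inherited by every location that does not override it:

```nginx
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                 inactive=60m use_temp_path=off;

server {
    # Inherited by all location blocks below that do not set their own
    proxy_cache my_cache;

    location / {
        proxy_pass http://my_upstream;
    }

    location /uncached/ {
        # Hypothetical path: override inheritance and disable caching here
        proxy_cache off;
        proxy_pass http://my_upstream;
    }
}
```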

Deliver cached content when the origin server is down

The NGINX content cache has a powerful feature: when it cannot get fresh content from the origin server, NGINX can serve stale content from its cache. This typically happens when all the origin servers for the cached content are down or busy. Rather than relaying an error to the client, NGINX serves the stale version of the file from its cache. This provides extra fault tolerance for the servers NGINX is proxying and keeps the site running through server failures or traffic spikes. To enable this feature, add the proxy_cache_use_stale directive:

location / {
    # ...
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
}

As configured in the example above, if NGINX receives an error, timeout, or any of the specified 5xx errors from the origin server, and it has a stale version of the requested file in its cache, it serves the stale file instead of relaying the error to the client.

Cache tuning and performance optimization

NGINX has a wealth of settings for fine-tuning cache performance. The following example uses several of them:

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g 
                 inactive=60m use_temp_path=off;

server {
    # ...
    location / {
        proxy_cache my_cache;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 3;
        proxy_cache_use_stale error timeout updating http_500 http_502
                              http_503 http_504;
        proxy_cache_background_update on;
        proxy_cache_lock on;

        proxy_pass http://my_upstream;
    }
}

These directives configure the following behavior:

  • proxy_cache_revalidate instructs NGINX to use conditional GET requests when refreshing content from the origin server. If a client requests an item that is cached but has expired as defined by the cache control headers, NGINX includes the If-Modified-Since field in the header of the GET request it sends to the origin server. This saves bandwidth, because the origin server sends the full item only if it has been modified since the time recorded in the Last-Modified header attached to the file when NGINX originally cached it.

  • proxy_cache_min_uses sets the number of times an item must be requested by clients before NGINX caches it. This is useful if the cache is constantly filling up, as it ensures that only the most frequently accessed items are added. The default value of proxy_cache_min_uses is 1.

  • The proxy_cache_background_update directive, used in combination with the updating parameter of the proxy_cache_use_stale directive, instructs NGINX to serve stale content when clients request an item that has expired or is being updated from the origin server. All updates are done in the background; NGINX returns the stale file for every request until the updated file has been fully downloaded.

  • With proxy_cache_lock enabled, if multiple clients request a file that is not currently in the cache (a MISS), only the first of those requests is allowed through to the origin server. The remaining requests wait for that request to be answered and then pull the file from the cache. Without proxy_cache_lock, every request for a file not found in the cache goes straight to the origin server.

Split cache across multiple drives

If you have multiple hard drives, you can use NGINX to split the cache across them. The following example distributes client requests evenly between two drives based on the request URI:

proxy_cache_path /path/to/hdd1 levels=1:2 keys_zone=my_cache_hdd1:10m
                 max_size=10g inactive=60m use_temp_path=off;
proxy_cache_path /path/to/hdd2 levels=1:2 keys_zone=my_cache_hdd2:10m
                 max_size=10g inactive=60m use_temp_path=off;

split_clients $request_uri $my_cache {
              50%          "my_cache_hdd1";
              50%          "my_cache_hdd2";
}

server {
    # ...
    location / {
        proxy_cache $my_cache;
        proxy_pass http://my_upstream;
    }
}

The two proxy_cache_path directives define two caches (my_cache_hdd1 and my_cache_hdd2) on two different hard drives. The split_clients configuration block specifies that half of the requests (50%) are cached in my_cache_hdd1 and the other half in my_cache_hdd2. A hash based on the $request_uri variable (the request URI) determines which cache is used for each request, so requests for a given URI are always cached in the same cache.

Please note that this approach is not a replacement for a RAID setup. If a drive fails, the system may behave unpredictably; for example, users may see 500 response codes for requests directed to the failed drive. A proper RAID configuration is needed to handle drive failures.

Frequently Asked Questions (FAQ)

This section answers some frequently asked questions about NGINX content caching.

Is it possible to detect NGINX cache status?

Yes, with the add_header directive:

add_header X-Cache-Status $upstream_cache_status;

This example adds an X-Cache-Status HTTP header to responses to clients. The following are the possible values of $upstream_cache_status:

  • MISS -- The response was not found in the cache and was fetched from the origin server. The response may then have been cached.

  • BYPASS -- The response was fetched from the origin server instead of the cache because the request matched a proxy_cache_bypass directive (see "Is it possible to 'punch a hole' in the cache?" below). The response may then have been cached.

  • EXPIRED -- The entry in the cache has expired, and the response contains fresh content from the origin server.

  • STALE -- The content is stale because the origin server is not responding correctly, and proxy_cache_use_stale is configured.

  • UPDATING -- The content is stale because the entry is currently being updated in response to a previous request, and proxy_cache_use_stale updating is configured.

  • REVALIDATED -- The proxy_cache_revalidate directive is enabled, and NGINX verified that the cached content is still valid (using If-Modified-Since or If-None-Match).

  • HIT -- The response contains fresh, valid content served directly from the cache.

How does NGINX determine whether to cache?

NGINX caches a response only if the origin server includes either an Expires header with a date and time in the future, or a Cache-Control header with the max-age directive set to a non-zero value.

By default, NGINX respects other directives in the Cache-Control header: it does not cache responses when the header includes the Private, No-Cache, or No-Store directive. It also does not cache responses that include the Set-Cookie header, and it caches only responses to GET and HEAD requests. You can override these defaults as described in the answers below.

If proxy_buffering is set to off, NGINX does not cache responses. It is on by default.

Can the Cache-Control header be ignored?

Yes, with the proxy_ignore_headers directive. For example, with this configuration:

location /images/ {
    proxy_cache my_cache;
    proxy_ignore_headers Cache-Control;
    proxy_cache_valid any 30m;
    # ...
}

NGINX ignores the Cache-Control header for everything under /images/. The proxy_cache_valid directive enforces an expiration time for the cached data; it is required when the Cache-Control header is ignored, because NGINX does not cache files that have no expiration date.
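proxy_cache_valid can also assign different validity periods to different response codes. A sketch extending the example above (the times chosen here are illustrative, not recommendations):

```nginx
location /images/ {
    proxy_cache my_cache;
    proxy_ignore_headers Cache-Control;
    # Cache successful responses and redirects for 30 minutes
    proxy_cache_valid 200 302 30m;
    # Cache "not found" responses only briefly
    proxy_cache_valid 404 1m;
    proxy_pass http://my_upstream;
}
```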

Can NGINX cache content when the response includes the Set-Cookie header?

Yes, with the proxy_ignore_headers directive, as discussed in the previous answer.

Can NGINX cache POST requests?

Yes, with the proxy_cache_methods directive:

proxy_cache_methods GET HEAD POST;

This example enables caching of POST requests.

Can NGINX cache dynamic content?

Yes, as long as the Cache-Control header allows it. Caching dynamic content even for a short period can reduce the load on origin servers and databases, and it improves time to first byte because the page does not have to be regenerated for every request.
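One common way to apply this idea is "microcaching": caching dynamic responses for as little as one second. A minimal sketch, assuming my_cache and my_upstream are defined as in the earlier examples:

```nginx
location / {
    proxy_cache my_cache;
    # Even a 1-second TTL absorbs bursts of identical requests
    proxy_cache_valid 200 1s;
    # Serve the stale copy while a background refresh is in flight
    proxy_cache_use_stale updating;
    proxy_cache_background_update on;
    proxy_pass http://my_upstream;
}
```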

Is it possible to "punch a hole" in the cache?

Yes, with the proxy_cache_bypass directive:

location / {
    proxy_cache_bypass $cookie_nocache $arg_nocache;
    # ...
}

The directive defines the request conditions under which NGINX requests content from the origin server immediately, without first checking the cache. This is sometimes called "punching a hole" in the cache. In this example, NGINX does so for requests that include a nocache cookie or query argument, for example http://www.example.com/?nocache=true. NGINX can still cache the resulting response for future requests that are not bypassed.
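If you want bypassed responses not to be stored at all, NGINX provides a separate proxy_no_cache directive that takes the same kind of conditions. A sketch combining the two, using the same nocache cookie and argument as above:

```nginx
location / {
    proxy_cache my_cache;
    # Skip the cache lookup for these requests...
    proxy_cache_bypass $cookie_nocache $arg_nocache;
    # ...and also skip saving the response they fetch
    proxy_no_cache $cookie_nocache $arg_nocache;
    proxy_pass http://my_upstream;
}
```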

What cache key does NGINX use?

The default key generated by NGINX is similar to the MD5 hash of the NGINX variables $scheme$proxy_host$request_uri, though the actual algorithm used is slightly more complicated.

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                 inactive=60m use_temp_path=off;

server {
    # ...
    location / {
        proxy_cache my_cache;
        proxy_pass http://my_upstream;
    }
}

In the example configuration above, the cache key for http://www.example.org/my_image.jpg is calculated as md5("http://my_upstream:80/my_image.jpg").

Note that the $proxy_host variable is used in the hashed value rather than the actual host name (www.example.org). $proxy_host is defined as the name and port of the proxied server as specified in the proxy_pass directive.

To change the variables (or other terms) used as the basis for the key, use the proxy_cache_key directive (see the next answer).

Can I use cookies as part of a cache key?

Yes, the cache key can be configured to be any arbitrary value, for example:

proxy_cache_key $proxy_host$request_uri$cookie_jsessionid;

This example adds the value of the JSESSIONID cookie to the cache key. Items with the same URI but different JSESSIONID values are cached separately as unique items.

Does NGINX use the ETag header?

Yes. In NGINX 1.7.3 and NGINX Plus R5 and later, the ETag header is fully supported along with If-None-Match.

How does NGINX handle byte range requests?

NGINX supports byte-range requests. If the file in the cache is up to date, NGINX honors the byte range and serves only the specified bytes to the client. If the file is not cached or is stale, NGINX downloads the entire file from the origin server. If the request is for a single byte range, NGINX sends that range to the client as soon as it appears in the download stream. If the request specifies multiple byte ranges within the same file, NGINX delivers the entire file to the client when the download completes.

Once the download completes, NGINX moves the entire resource into the cache, so all future byte-range requests, whether for a single range or multiple ranges, are satisfied immediately from the cache.

Note that NGINX can make byte-range requests to the origin server only if the upstream server supports them.
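For very large files, the slice module (available in NGINX 1.9.8 and later) is worth noting as an alternative: it fetches and caches a file in fixed-size segments, so a byte-range request does not trigger a full-file download. A sketch based on the slice example in the NGINX documentation (segment size and validity period are illustrative):

```nginx
location / {
    # Fetch and cache the file in 1 MB segments
    slice             1m;
    proxy_cache       my_cache;
    # Include the requested segment in the cache key
    proxy_cache_key   $uri$is_args$args$slice_range;
    # Ask the origin server for only the needed segment
    proxy_set_header  Range $slice_range;
    # Cache partial-content (206) responses as well as full ones
    proxy_cache_valid 200 206 1h;
    proxy_pass        http://my_upstream;
}
```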

Does NGINX support cache clearing?

NGINX Plus supports selective cache purging. This is useful when a file has been updated on the origin server but is still valid in the NGINX Plus cache (the Cache-Control: max-age is still valid, and the timeout set by the inactive parameter of the proxy_cache_path directive has not expired). With the cache-purge feature of NGINX Plus, the file can easily be deleted. For more information, see "Purging Content from the Cache."

How does NGINX handle the Pragma header?

Clients add the Pragma: no-cache header to bypass all intermediate caches and go straight to the origin server for the requested content. By default, NGINX does not honor the Pragma header, but you can configure the behavior with the following proxy_cache_bypass directive:

location /images/ {
    proxy_cache my_cache;
    proxy_cache_bypass $http_pragma;
    # ...
}

Does NGINX support the stale-while-revalidate and stale-if-error extensions of the Cache-Control header?

Yes, in NGINX Plus R12 and NGINX 1.11.10 and later. What these extensions do:

  • The stale-while-revalidate extension of the Cache-Control HTTP header permits using a stale cached response if it is currently being updated.

  • The stale-if-error extension of the Cache-Control HTTP header permits using a stale cached response in case of an error.

These headers have lower priority than the proxy_cache_use_stale directive described above.

Does NGINX support the Vary header?

Yes, the Vary response header is supported in NGINX Plus R5 and NGINX 1.7.7 and later.

Further reading

You can customize and tune NGINX caching in many ways; see the NGINX documentation for more on its caching capabilities.




Origin my.oschina.net/u/5246775/blog/10313972