Which modules does nginx-1.18.0 include by default when it is compiled? What configure options and modules does nginx-1.18.0 provide, and what are their functions?

Several important blog posts on Nginx that I have written
For details, see: https://blog.csdn.net/wenhao_ir/article/details/135023881

Table of contents

01-How to check which modules are included by default when nginx-1.18.0 is compiled

To see which modules the default configuration compiles in, run the following two commands.

First, configure the build without manually adding or removing any modules:

./configure --prefix=/usr/local/nginx

Then compile:

make

When compilation finishes, the list of compiled object files is displayed:

It is worth noting that the name of the compiled object file can differ from the module name used by configure.
For example, the configure help output calls the geo module http_geo_module, but the object file it compiles to is actually ngx_http_geo_module.o. I tested this: if you do not want to include the module, you need the following option:

--without-http_geo_module

This shows that the compiled object file name and the module name used by configure can differ.
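As a sketch of how you might confirm which object files were built (the filename build.log is hypothetical here, standing in for saved make output):

```shell
# Hypothetical check: search saved build output for the geo module's object file.
# build.log stands in for the real output of `make`.
printf '%s\n' 'objs/src/http/modules/ngx_http_geo_module.o \' > build.log
if grep -q 'ngx_http_geo_module\.o' build.log; then
    echo "geo module compiled"
else
    echo "geo module excluded"
fi
```

On a real source tree you would run `make 2>&1 | tee build.log` first and then grep the log the same way.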
The complete information is as follows:

objs/src/core/nginx.o \
objs/src/core/ngx_log.o \
objs/src/core/ngx_palloc.o \
objs/src/core/ngx_array.o \
objs/src/core/ngx_list.o \
objs/src/core/ngx_hash.o \
objs/src/core/ngx_buf.o \
objs/src/core/ngx_queue.o \
objs/src/core/ngx_output_chain.o \
objs/src/core/ngx_string.o \
objs/src/core/ngx_parse.o \
objs/src/core/ngx_parse_time.o \
objs/src/core/ngx_inet.o \
objs/src/core/ngx_file.o \
objs/src/core/ngx_crc32.o \
objs/src/core/ngx_murmurhash.o \
objs/src/core/ngx_md5.o \
objs/src/core/ngx_sha1.o \
objs/src/core/ngx_rbtree.o \
objs/src/core/ngx_radix_tree.o \
objs/src/core/ngx_slab.o \
objs/src/core/ngx_times.o \
objs/src/core/ngx_shmtx.o \
objs/src/core/ngx_connection.o \
objs/src/core/ngx_cycle.o \
objs/src/core/ngx_spinlock.o \
objs/src/core/ngx_rwlock.o \
objs/src/core/ngx_cpuinfo.o \
objs/src/core/ngx_conf_file.o \
objs/src/core/ngx_module.o \
objs/src/core/ngx_resolver.o \
objs/src/core/ngx_open_file_cache.o \
objs/src/core/ngx_crypt.o \
objs/src/core/ngx_proxy_protocol.o \
objs/src/core/ngx_syslog.o \
objs/src/event/ngx_event.o \
objs/src/event/ngx_event_timer.o \
objs/src/event/ngx_event_posted.o \
objs/src/event/ngx_event_accept.o \
objs/src/event/ngx_event_udp.o \
objs/src/event/ngx_event_connect.o \
objs/src/event/ngx_event_pipe.o \
objs/src/os/unix/ngx_time.o \
objs/src/os/unix/ngx_errno.o \
objs/src/os/unix/ngx_alloc.o \
objs/src/os/unix/ngx_files.o \
objs/src/os/unix/ngx_socket.o \
objs/src/os/unix/ngx_recv.o \
objs/src/os/unix/ngx_readv_chain.o \
objs/src/os/unix/ngx_udp_recv.o \
objs/src/os/unix/ngx_send.o \
objs/src/os/unix/ngx_writev_chain.o \
objs/src/os/unix/ngx_udp_send.o \
objs/src/os/unix/ngx_udp_sendmsg_chain.o \
objs/src/os/unix/ngx_channel.o \
objs/src/os/unix/ngx_shmem.o \
objs/src/os/unix/ngx_process.o \
objs/src/os/unix/ngx_daemon.o \
objs/src/os/unix/ngx_setaffinity.o \
objs/src/os/unix/ngx_setproctitle.o \
objs/src/os/unix/ngx_posix_init.o \
objs/src/os/unix/ngx_user.o \
objs/src/os/unix/ngx_dlopen.o \
objs/src/os/unix/ngx_process_cycle.o \
objs/src/os/unix/ngx_linux_init.o \
objs/src/event/modules/ngx_epoll_module.o \
objs/src/os/unix/ngx_linux_sendfile_chain.o \
objs/src/core/ngx_regex.o \
objs/src/http/ngx_http.o \
objs/src/http/ngx_http_core_module.o \
objs/src/http/ngx_http_special_response.o \
objs/src/http/ngx_http_request.o \
objs/src/http/ngx_http_parse.o \
objs/src/http/modules/ngx_http_log_module.o \
objs/src/http/ngx_http_request_body.o \
objs/src/http/ngx_http_variables.o \
objs/src/http/ngx_http_script.o \
objs/src/http/ngx_http_upstream.o \
objs/src/http/ngx_http_upstream_round_robin.o \
objs/src/http/ngx_http_file_cache.o \
objs/src/http/ngx_http_write_filter_module.o \
objs/src/http/ngx_http_header_filter_module.o \
objs/src/http/modules/ngx_http_chunked_filter_module.o \
objs/src/http/modules/ngx_http_range_filter_module.o \
objs/src/http/modules/ngx_http_gzip_filter_module.o \
objs/src/http/ngx_http_postpone_filter_module.o \
objs/src/http/modules/ngx_http_ssi_filter_module.o \
objs/src/http/modules/ngx_http_charset_filter_module.o \
objs/src/http/modules/ngx_http_userid_filter_module.o \
objs/src/http/modules/ngx_http_headers_filter_module.o \
objs/src/http/ngx_http_copy_filter_module.o \
objs/src/http/modules/ngx_http_not_modified_filter_module.o \
objs/src/http/modules/ngx_http_static_module.o \
objs/src/http/modules/ngx_http_autoindex_module.o \
objs/src/http/modules/ngx_http_index_module.o \
objs/src/http/modules/ngx_http_mirror_module.o \
objs/src/http/modules/ngx_http_try_files_module.o \
objs/src/http/modules/ngx_http_auth_basic_module.o \
objs/src/http/modules/ngx_http_access_module.o \
objs/src/http/modules/ngx_http_limit_conn_module.o \
objs/src/http/modules/ngx_http_limit_req_module.o \
objs/src/http/modules/ngx_http_geo_module.o \
objs/src/http/modules/ngx_http_map_module.o \
objs/src/http/modules/ngx_http_split_clients_module.o \
objs/src/http/modules/ngx_http_referer_module.o \
objs/src/http/modules/ngx_http_rewrite_module.o \
objs/src/http/modules/ngx_http_proxy_module.o \
objs/src/http/modules/ngx_http_fastcgi_module.o \
objs/src/http/modules/ngx_http_uwsgi_module.o \
objs/src/http/modules/ngx_http_scgi_module.o \
objs/src/http/modules/ngx_http_memcached_module.o \
objs/src/http/modules/ngx_http_empty_gif_module.o \
objs/src/http/modules/ngx_http_browser_module.o \
objs/src/http/modules/ngx_http_upstream_hash_module.o \
objs/src/http/modules/ngx_http_upstream_ip_hash_module.o \
objs/src/http/modules/ngx_http_upstream_least_conn_module.o \
objs/src/http/modules/ngx_http_upstream_random_module.o \
objs/src/http/modules/ngx_http_upstream_keepalive_module.o \
objs/src/http/modules/ngx_http_upstream_zone_module.o \
objs/ngx_modules.o \

The list above shows the modules that Nginx includes by default.

02-How to see which modules you can manually add or remove

You can use the command:

./configure --help

to see which modules can be manually added or removed.
Note: in the listed options, the --with-/--without- prefix does carry meaning. Modules that have a --without- option are built by default (the option removes them), while modules that only have a --with- option are not built by default (the option adds them). A few modules, such as select and poll, appear in both forms, because Nginx chooses among the event modules automatically.
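A quick way to see this convention is to filter the help text for --without- options, which lists exactly the default-built modules. This is a sketch: help_excerpt below holds just two lines of the real (much longer) help output, instead of running a live ./configure.

```shell
# Modules with a --without- option are the ones built by default.
# help_excerpt stands in for the real output of `./configure --help`.
help_excerpt='  --with-http_ssl_module             enable ngx_http_ssl_module
  --without-http_gzip_module         disable ngx_http_gzip_module'
printf '%s\n' "$help_excerpt" | grep -- '--without-'
```

On a real source tree, `./configure --help | grep -- '--without-'` gives the full list of default modules.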

The output of the ./configure --help command is as follows:


[root@geoiptest03 nginx-1.18.0]# ./configure --help

  --help                             print this message

  --prefix=PATH                      set installation prefix
  --sbin-path=PATH                   set nginx binary pathname
  --modules-path=PATH                set modules path
  --conf-path=PATH                   set nginx.conf pathname
  --error-log-path=PATH              set error log pathname
  --pid-path=PATH                    set nginx.pid pathname
  --lock-path=PATH                   set nginx.lock pathname

  --user=USER                        set non-privileged user for
                                     worker processes
  --group=GROUP                      set non-privileged group for
                                     worker processes

  --build=NAME                       set build name
  --builddir=DIR                     set build directory

  --with-select_module               enable select module
  --without-select_module            disable select module
  --with-poll_module                 enable poll module
  --without-poll_module              disable poll module

  --with-threads                     enable thread pool support

  --with-file-aio                    enable file AIO support

  --with-http_ssl_module             enable ngx_http_ssl_module
  --with-http_v2_module              enable ngx_http_v2_module
  --with-http_realip_module          enable ngx_http_realip_module
  --with-http_addition_module        enable ngx_http_addition_module
  --with-http_xslt_module            enable ngx_http_xslt_module
  --with-http_xslt_module=dynamic    enable dynamic ngx_http_xslt_module
  --with-http_image_filter_module    enable ngx_http_image_filter_module
  --with-http_image_filter_module=dynamic
                                     enable dynamic ngx_http_image_filter_module
  --with-http_geoip_module           enable ngx_http_geoip_module
  --with-http_geoip_module=dynamic   enable dynamic ngx_http_geoip_module
  --with-http_sub_module             enable ngx_http_sub_module
  --with-http_dav_module             enable ngx_http_dav_module
  --with-http_flv_module             enable ngx_http_flv_module
  --with-http_mp4_module             enable ngx_http_mp4_module
  --with-http_gunzip_module          enable ngx_http_gunzip_module
  --with-http_gzip_static_module     enable ngx_http_gzip_static_module
  --with-http_auth_request_module    enable ngx_http_auth_request_module
  --with-http_random_index_module    enable ngx_http_random_index_module
  --with-http_secure_link_module     enable ngx_http_secure_link_module
  --with-http_degradation_module     enable ngx_http_degradation_module
  --with-http_slice_module           enable ngx_http_slice_module
  --with-http_stub_status_module     enable ngx_http_stub_status_module

  --without-http_charset_module      disable ngx_http_charset_module
  --without-http_gzip_module         disable ngx_http_gzip_module
  --without-http_ssi_module          disable ngx_http_ssi_module
  --without-http_userid_module       disable ngx_http_userid_module
  --without-http_access_module       disable ngx_http_access_module
  --without-http_auth_basic_module   disable ngx_http_auth_basic_module
  --without-http_mirror_module       disable ngx_http_mirror_module
  --without-http_autoindex_module    disable ngx_http_autoindex_module
  --without-http_geo_module          disable ngx_http_geo_module
  --without-http_map_module          disable ngx_http_map_module
  --without-http_split_clients_module disable ngx_http_split_clients_module
  --without-http_referer_module      disable ngx_http_referer_module
  --without-http_rewrite_module      disable ngx_http_rewrite_module
  --without-http_proxy_module        disable ngx_http_proxy_module
  --without-http_fastcgi_module      disable ngx_http_fastcgi_module
  --without-http_uwsgi_module        disable ngx_http_uwsgi_module
  --without-http_scgi_module         disable ngx_http_scgi_module
  --without-http_grpc_module         disable ngx_http_grpc_module
  --without-http_memcached_module    disable ngx_http_memcached_module
  --without-http_limit_conn_module   disable ngx_http_limit_conn_module
  --without-http_limit_req_module    disable ngx_http_limit_req_module
  --without-http_empty_gif_module    disable ngx_http_empty_gif_module
  --without-http_browser_module      disable ngx_http_browser_module
  --without-http_upstream_hash_module
                                     disable ngx_http_upstream_hash_module
  --without-http_upstream_ip_hash_module
                                     disable ngx_http_upstream_ip_hash_module
  --without-http_upstream_least_conn_module
                                     disable ngx_http_upstream_least_conn_module
  --without-http_upstream_random_module
                                     disable ngx_http_upstream_random_module
  --without-http_upstream_keepalive_module
                                     disable ngx_http_upstream_keepalive_module
  --without-http_upstream_zone_module
                                     disable ngx_http_upstream_zone_module

  --with-http_perl_module            enable ngx_http_perl_module
  --with-http_perl_module=dynamic    enable dynamic ngx_http_perl_module
  --with-perl_modules_path=PATH      set Perl modules path
  --with-perl=PATH                   set perl binary pathname

  --http-log-path=PATH               set http access log pathname
  --http-client-body-temp-path=PATH  set path to store
                                     http client request body temporary files
  --http-proxy-temp-path=PATH        set path to store
                                     http proxy temporary files
  --http-fastcgi-temp-path=PATH      set path to store
                                     http fastcgi temporary files
  --http-uwsgi-temp-path=PATH        set path to store
                                     http uwsgi temporary files
  --http-scgi-temp-path=PATH         set path to store
                                     http scgi temporary files

  --without-http                     disable HTTP server
  --without-http-cache               disable HTTP cache

  --with-mail                        enable POP3/IMAP4/SMTP proxy module
  --with-mail=dynamic                enable dynamic POP3/IMAP4/SMTP proxy module
  --with-mail_ssl_module             enable ngx_mail_ssl_module
  --without-mail_pop3_module         disable ngx_mail_pop3_module
  --without-mail_imap_module         disable ngx_mail_imap_module
  --without-mail_smtp_module         disable ngx_mail_smtp_module

  --with-stream                      enable TCP/UDP proxy module
  --with-stream=dynamic              enable dynamic TCP/UDP proxy module
  --with-stream_ssl_module           enable ngx_stream_ssl_module
  --with-stream_realip_module        enable ngx_stream_realip_module
  --with-stream_geoip_module         enable ngx_stream_geoip_module
  --with-stream_geoip_module=dynamic enable dynamic ngx_stream_geoip_module
  --with-stream_ssl_preread_module   enable ngx_stream_ssl_preread_module
  --without-stream_limit_conn_module disable ngx_stream_limit_conn_module
  --without-stream_access_module     disable ngx_stream_access_module
  --without-stream_geo_module        disable ngx_stream_geo_module
  --without-stream_map_module        disable ngx_stream_map_module
  --without-stream_split_clients_module
                                     disable ngx_stream_split_clients_module
  --without-stream_return_module     disable ngx_stream_return_module
  --without-stream_upstream_hash_module
                                     disable ngx_stream_upstream_hash_module
  --without-stream_upstream_least_conn_module
                                     disable ngx_stream_upstream_least_conn_module
  --without-stream_upstream_random_module
                                     disable ngx_stream_upstream_random_module
  --without-stream_upstream_zone_module
                                     disable ngx_stream_upstream_zone_module

  --with-google_perftools_module     enable ngx_google_perftools_module
  --with-cpp_test_module             enable ngx_cpp_test_module

  --add-module=PATH                  enable external module
  --add-dynamic-module=PATH          enable dynamic external module

  --with-compat                      dynamic modules compatibility

  --with-cc=PATH                     set C compiler pathname
  --with-cpp=PATH                    set C preprocessor pathname
  --with-cc-opt=OPTIONS              set additional C compiler options
  --with-ld-opt=OPTIONS              set additional linker options
  --with-cpu-opt=CPU                 build for the specified CPU, valid values:
                                     pentium, pentiumpro, pentium3, pentium4,
                                     athlon, opteron, sparc32, sparc64, ppc64

  --without-pcre                     disable PCRE library usage
  --with-pcre                        force PCRE library usage
  --with-pcre=DIR                    set path to PCRE library sources
  --with-pcre-opt=OPTIONS            set additional build options for PCRE
  --with-pcre-jit                    build PCRE with JIT compilation support

  --with-zlib=DIR                    set path to zlib library sources
  --with-zlib-opt=OPTIONS            set additional build options for zlib
  --with-zlib-asm=CPU                use zlib assembler sources optimized
                                     for the specified CPU, valid values:
                                     pentium, pentiumpro

  --with-libatomic                   force libatomic_ops library usage
  --with-libatomic=DIR               set path to libatomic_ops library sources

  --with-openssl=DIR                 set path to OpenSSL library sources
  --with-openssl-opt=OPTIONS         set additional build options for OpenSSL

  --with-debug                       enable debug logging

03-Introduction to each configure option and module function

03-001:--pid-path=PATH

--pid-path=PATH is one of Nginx's configure options; it specifies where the PID file of the Nginx master process is stored. The PID file contains the process ID (PID) of the Nginx master process.

Specifically, the --pid-path option lets you specify the path where the PID file of the Nginx master process is saved. For example:

./configure --pid-path=/var/run/nginx/nginx.pid

The above command makes Nginx save the master process's PID file as /var/run/nginx/nginx.pid.

There are two main functions:

  1. Process management: PID files are usually used for process management, such as starting, stopping or reloading the Nginx process. System administrators can use the PID file to learn the current process ID of the Nginx main process to perform operations such as shutting down or reloading the configuration.

  2. Monitoring and Diagnostics: In some cases, monitoring tools or diagnostic scripts may need to obtain the PID of the Nginx main process to monitor or troubleshoot the process status.

Precautions:

  • If --pid-path is not specified, Nginx will save the PID file in the default path, usually /usr/local/nginx/logs/nginx.pid or a similar path, depending on your Nginx installation directory.
  • Make sure that the specified path is writable by the Nginx process, otherwise Nginx may not be able to create the PID file when it starts.
  • It is recommended to save the PID file in a manageable directory so that it can be easily found when needed.
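As a sketch of typical PID-file usage (the path matches the hypothetical configure example above; the kill lines are commented out so that running this snippet does not actually signal anything):

```shell
# Typical process-management use of the PID file.
PIDFILE=/var/run/nginx/nginx.pid
# Reload configuration:  kill -HUP  "$(cat "$PIDFILE")"
# Graceful shutdown:     kill -QUIT "$(cat "$PIDFILE")"
echo "master PID would be read from $PIDFILE"
```

This is exactly what init scripts and `nginx -s reload` do under the hood: read the master PID and send it a signal.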

03-002:--lock-path=PATH

--lock-path=PATH is one of Nginx's configure options; it specifies where the lock file used by the Nginx master process is stored. This file lock helps ensure that only one Nginx master process runs at a time.

Specifically, the --lock-path option lets you specify the path where the Nginx master process's lock file is saved. For example:

./configure --lock-path=/var/lock/nginx.lock

The above command makes Nginx save the master process's lock file as /var/lock/nginx.lock.

There are two main functions:

  1. Inter-process synchronization: When multiple processes access shared resources, synchronization between processes can be achieved through file locks to ensure that only one process can operate the shared resources at the same time. In Nginx, this is mainly used to prevent multiple master processes from starting at the same time.

  2. Avoid startup conflicts: When multiple Nginx master processes try to start, they will try to acquire file locks. Only the process that successfully acquires the lock can continue to start, while other processes must wait. This helps avoid startup conflicts and ensures that only one Nginx process takes control.

Precautions:

  • If --lock-path is not specified, Nginx will save the lock file in the default path, usually /usr/local/nginx/logs/nginx.lock or a similar path, depending on your Nginx installation directory.
  • Make sure that the specified path is writable by the Nginx process, otherwise Nginx may not be able to create the lock file when it starts.
  • The existence of the lock file can serve as a rough hint that the Nginx master process is running, since the lock file is normally present only while the master process runs.

03-003:select_module

No manual settings are required because Nginx automatically makes the selection based on factors such as the operating system.

select_module is an Nginx compile option that enables the select event module, which lets Nginx use the select system call for event handling.

Event processing is a very important concept in Nginx, which is used to handle asynchronous events such as network connections and timers. The select module is an event-driven module of Nginx, which uses the select system call to manage and process events.

Specifically, the select_module option does the following:

  1. Enable the select module: by specifying this option, you tell Nginx to include support for the select module when compiling.

  2. Event handling: the select module uses the select system call to monitor the status of multiple file descriptors and determine whether any of them is readable, writable, or has an exceptional condition pending. This is useful for handling many concurrent connections.

Please note that the select module is generally not the best choice under high load and high concurrency, because select's performance degrades as the number of monitored file descriptors grows. For higher performance requirements, you would usually choose a more scalable event mechanism, such as epoll (on Linux) or kqueue (on BSD systems), which support far larger numbers of concurrent connections than select or poll.

In practice, the appropriate event module depends on your operating system and needs, not just select. Nginx tries to pick the event module best suited to your system by default, but you can control the choice explicitly with the --with-select_module/--without-select_module family of configure options (and, at run time, with the use directive in the events block).

To sum up: you don't need to worry about these two options. Nginx selects the event module when compiling, and on Linux it will usually pick epoll automatically.
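If you ever do want to pin the event method at run time (normally unnecessary, as noted above), the events block accepts a use directive; a minimal sketch:

```nginx
events {
    use epoll;               # force epoll; on Linux, nginx normally picks this itself
    worker_connections 1024; # per-worker connection limit
}
```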

03-004:poll_module

No manual settings are required because Nginx automatically makes the selection based on factors such as the operating system.

This is explained under point 03-003, so the introduction is omitted here.

03-005:threads

It needs to be added manually; it is not included by default.

threads is one of Nginx's compile options, used to enable thread pool support. Nginx remains an event-driven, multi-process server: thread pools do not mean that each request runs in its own thread. Instead, worker processes hand potentially blocking file operations off to a pool of threads, so that a slow disk read does not stall the event loop.

Specifically, the threads option does the following:

  1. Enable thread pool support: by specifying this option, you tell Nginx to compile in thread pool support (the thread_pool directive and aio threads).

  2. Non-blocking file I/O: with a thread pool configured, blocking file operations run in pool threads while the worker process keeps serving other connections. This improves throughput when serving large files from disk, especially under heavy concurrent load.

Note that thread pool behavior may vary depending on the operating system. Traditional Nginx deployments rely on the multi-process model alone, which is simpler to operate and is often sufficient for Nginx's high-performance requirements; thread pools are an addition aimed at workloads dominated by blocking disk I/O.

If you choose to enable threads, please ensure that your version of Nginx supports threads and is configured appropriately in your operating system. Additionally, the behavior and performance of thread support may be affected by the operating system and Nginx version, so it's best to do some testing and tuning before actual deployment.

Note: nginx-1.18.0 does not enable thread support by default. If you need to enable it, you need to specify it during configuration.
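As a sketch of how a thread-enabled build is used (the directive values here are illustrative): configure with ./configure --with-threads, then point blocking file I/O at a thread pool in nginx.conf:

```nginx
# Requires a binary built with --with-threads.
thread_pool default threads=32 max_queue=65536;  # main context; tune to your workload

http {
    server {
        location /downloads/ {
            aio threads;  # offload blocking file reads to the "default" thread pool
        }
    }
}
```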

What is the difference between a process and a thread?
Process and thread are two important concepts in computer science. They represent the basic unit of execution in the computer, but they have some key differences:

  1. Definition:

    • Process: A process is an execution of a program and an independent execution environment. Each process has an independent memory space, including code, data and system resources.
    • Thread: A thread is an execution unit in a process. A process can contain multiple threads, which share the same memory space and system resources but have independent execution paths.
  2. Resource allocation:

    • Process: A process is an independent resource unit with its own memory space, file handle, etc. Communication between processes usually requires special mechanisms, such as Inter-Process Communication (IPC).
    • Thread: A thread is a resource sharing unit within a process. Multiple threads can share the same data and context. Communication between threads is relatively easy because they share the same address space.
  3. Creation and destruction overhead:

    • Processes: Creating and destroying processes is relatively expensive. Each process has its own independent memory space for which resources need to be allocated and released.
    • Threads: Creating and destroying threads is less expensive because they share the process's resources. Threads are usually created faster than processes because new memory space does not need to be allocated and initialized.
  4. Concurrency:

    • Process: Processes are executed independently and are not affected by each other. Inter-process communication requires some additional mechanisms such as message passing or shared memory.
    • Threads: Threads share the same memory space, making it easier to communicate. But at the same time, shared resources need to be handled more carefully to avoid problems such as race conditions and deadlocks.
  5. Stability:

    • Process: The isolation between processes is high, and the crash of one process usually does not affect other processes.
    • Threads: Since threads share the same memory space, an error in one thread may cause the entire process to crash.

In general, processes and threads are the two basic mechanisms used to implement concurrency in operating systems, and each has its applicable scenarios, advantages and disadvantages. The choice of using processes or threads depends on specific application requirements and design considerations.

03-006:file-aio

It needs to be added manually with --with-file-aio; it is not compiled in by default.

file-aio is an Nginx configure option used to enable file asynchronous I/O (AIO) support. AIO lets Nginx hand file operations to the operating system without blocking the worker process, improving the performance of reading and writing files.

Specifically, the file-aio option does the following:

  1. Enable file asynchronous I/O: By specifying this option, you tell Nginx to enable support for file asynchronous I/O at compile time.

  2. Improving performance: File asynchronous I/O allows Nginx to hand over these operations to the operating system for asynchronous processing when performing file read and write operations, without having to wait for these operations to complete. This helps improve server performance when processing files, especially in situations of high concurrency, large files, or heavy load.

The specific command to use this option is as follows:

./configure --with-file-aio

Note that the availability of file asynchronous I/O depends on operating system support. Not all operating systems support AIO, and the level of support may vary. Before using the file-aio option, it is recommended to check the official documentation of Nginx and the documentation of the relevant operating system to ensure that AIO is available and applicable in your environment.

Note that in nginx 1.18.0, file AIO support is not compiled in by default: --with-file-aio must be passed to configure. Even then, whether AIO actually takes effect depends on the operating system; it is supported on Linux and FreeBSD, and on Linux the aio directive only takes effect for files read with direct I/O (the directio directive).
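With a binary built via ./configure --with-file-aio, AIO is then switched on in the configuration; a minimal sketch for Linux, where aio only applies to reads done with direct I/O:

```nginx
location /video/ {
    aio on;        # kernel AIO; needs --with-file-aio at build time
    directio 4m;   # files larger than 4m are read with O_DIRECT, enabling AIO
}
```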

03-007:http_ssl_module

It needs to be enabled manually and is not included by default.

This option enables the SSL/TLS module, which is what allows Nginx to serve HTTPS. Note that this module requires the OpenSSL library; most CentOS servers have OpenSSL installed by default. The compile configuration summary will show that the system's OpenSSL library is being used.

03-008:http_v2_module

It needs to be enabled manually and is not included by default.

http_v2_module is an Nginx configure option used to enable HTTP/2 protocol support. HTTP/2 is the second major version of the HTTP protocol and brings several performance and efficiency improvements, such as multiplexing and header compression.

Specifically, the http_v2_module option does the following:

  1. Enable HTTP/2 support: By specifying this option, you tell Nginx to enable support for the HTTP/2 protocol at compile time.

  2. Performance improvements: HTTP/2 introduces multiplexing, allowing multiple requests and responses to be transmitted simultaneously on a single connection. This can improve page loading performance, especially in high-latency or unstable network environments.

  3. Header compression: HTTP/2 supports compression of header fields, reducing the size of requests and responses, thereby reducing network transmission costs.

The specific command to use this option is as follows:

./configure --with-http_v2_module

It should be noted that while HTTP/2 provides many performance advantages, in some specific scenarios and configurations, you may need to pay attention to some details. For example, HTTP/2 usually requires encrypted transmission using HTTPS, so you may also need to configure SSL-related options.

Overall, enabling HTTP/2 is a great way to improve web server performance, especially with modern browsers and network environments.
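A minimal server sketch for a binary built with both --with-http_ssl_module and --with-http_v2_module (the certificate paths are hypothetical):

```nginx
server {
    listen 443 ssl http2;   # in nginx 1.18, HTTP/2 is enabled per listen socket
    server_name example.com;

    ssl_certificate     /etc/nginx/certs/example.com.pem;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
}
```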

03-009:http_realip_module

It needs to be enabled manually and is not included by default.

http_realip_module is an Nginx configure option used to enable the Real IP module. The Real IP module lets Nginx extract the client's real IP address from a request header instead of taking the connecting proxy server's IP address.

Specifically, the http_realip_module option does the following:

  1. Extract the real IP address: When Nginx is behind a proxy server, it usually obtains the client's IP address from the proxy server's IP address. When the Real IP module is enabled, Nginx will be able to extract the real client IP address from the request header.

  2. Trusted proxy server settings: The Real IP module allows you to configure the IP addresses of trusted proxy servers. Only request headers from these proxy servers will be considered trusted and the real IP will be extracted.

The specific command to use this option is as follows:

./configure --with-http_realip_module

In the configuration file, you also need directives such as real_ip_header and set_real_ip_from to tell Nginx explicitly which request header to extract the real IP from and which proxy server IP addresses to trust.

Example configuration:

http {
    real_ip_header X-Forwarded-For;
    set_real_ip_from 192.168.1.0/24;  # trusted proxy server IP range
    # other configuration...
}

The above configuration means extracting the real IP address from the X-Forwarded-For request header and only trusting the proxy server in the 192.168.1.0/24 IP segment.

The Real IP module is very useful when handling proxy server forwarding, especially in scenarios such as reverse proxies and load balancing.

Note that when Nginx itself is the first-level proxy, directly facing clients, there is no trusted upstream proxy to set the header, so this module serves no purpose.

03-010:http_addition_module

Because it is not needed, there is no need to manually specify not to enable it.

http_addition_module is an Nginx configure option used to enable the HTTP Addition module. This module allows adding content to the body of the response, typically used to append some data after the content returned by the server.

Specifically, the http_addition_module option does the following:

  1. Dynamic addition of content: The HTTP Addition module allows appending some additional content to the end of the response content generated by Nginx. This content can be static or dynamically generated through Nginx variables.

  2. Response processing: Usually in some scenarios, you may want to add some additional information to the response after the server generates the response, such as adding at the bottom of the returned HTML page Copyright information or other notes.

The specific command to use this option is as follows:

./configure --with-http_addition_module

In the configuration file, you can use the add_after_body directive to specify a URI; the response of a subrequest to that URI is appended to the response body. For example:

location / {
    add_after_body /footer.html;  # append the result of a subrequest to /footer.html
    # other configuration...
}

The above configuration appends the result of a subrequest to /footer.html to the end of the response body (note that add_after_body takes a URI, not a literal string).

The HTTP Addition module provides a convenient way to dynamically add additional content to the response generated by Nginx to meet some customized needs.

Feeling: I feel it is not very useful. Usually the response content is still generated in the HTML template.

03-011:http_xslt_module

Because it is not needed, there is no need to manually specify not to enable it.

http_xslt_module is an Nginx configure option that enables the XSLT module. XSLT (Extensible Stylesheet Language Transformations) is a language for transforming XML documents into other formats; it is usually used to transform XML response content into HTML or other formats.

Specifically, the http_xslt_module option does the following:

  1. XML Transformation: The XSLT module allows Nginx to transform XML-formatted response content into another format, typically HTML, by providing an XSL stylesheet.

  2. Dynamic content processing: With XSLT, you can dynamically process XML content in Nginx. This may be useful in some scenarios, especially when the server-generated response content is in XML format.

The specific command to use this option is as follows:

./configure --with-http_xslt_module

In the configuration file, you can use the xslt_stylesheet directive to specify the path to the XSL stylesheet. For example:

location / {
    xslt_stylesheet /path/to/style.xsl;
    # other configuration...
}

The above configuration means that the requested XML response content will be transformed using the XSL style sheet defined in /path/to/style.xsl.

It is important to note that XSLT needs to be used with caution as it can have a performance impact, especially on large XML documents. Typically, more modern front-end technologies like JavaScript and browser-side XSLT processing may be more common.

03-012:http_xslt_module=dynamic

Because it is not needed, there is no need to manually specify not to enable it.

http_xslt_module=dynamic is an Nginx configure option that builds the XSLT module as a dynamic module. This means the XSLT module is loaded at runtime, rather than statically linked into the binary when Nginx is compiled.

Specifically, the http_xslt_module=dynamic option does the following:

  1. Dynamically load XSLT modules: With this option, you tell Nginx to dynamically load XSLT modules at runtime, rather than statically linking the module into the Nginx binary at compile time.

  2. Flexibility: Dynamic loading allows you to add or remove XSLT modules without recompiling Nginx, providing greater flexibility. You can load or unload modules when needed without having to recompile the entire Nginx.

The specific command to use this option is as follows:

./configure --with-http_xslt_module=dynamic

In the configuration file, you can still use the xslt_stylesheet directive to specify the path to the XSL style sheet, but you need to pay attention to whether the XSLT module has been loaded correctly.

location / {
    xslt_stylesheet /path/to/style.xsl;
    # other configuration...
}

You need to ensure that the XSLT module is available at runtime and that the associated XSL stylesheets are configured correctly. If the XSLT module is not loading, you may want to check Nginx's error log for information about loading issues.
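When the module is built dynamically, it has to be loaded explicitly with the load_module directive in the main (top-level) context of nginx.conf; the exact path depends on the --modules-path used at build time:

```nginx
# main context of nginx.conf, before the http block
load_module modules/ngx_http_xslt_filter_module.so;
```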

03-013:http_image_filter_module—Nginx image processing

Because it is not needed, there is no need to manually specify not to enable it.

http_image_filter_module is an Nginx configure option used to enable the Image Filter module. This module allows Nginx to perform some basic processing on images at request time, such as cropping, rotating and scaling.

Specifically, the http_image_filter_module option does the following:

  1. Image processing: The Image Filter module allows real-time processing of images when Nginx is used as an image server. It can be configured to perform a range of image operations to meet specific needs.

  2. Basic operations: Image Filter provides some basic image processing instructions, such as scaling, rotation, cropping, etc. These operations can be specified through parameters in the HTTP request.

The specific command to use this option is as follows:

./configure --with-http_image_filter_module

In the configuration file, you can use the image_filter directive to configure the Image Filter module. For example:

location /images/ {
    # process the requested image with the given parameters
    image_filter resize 300 200;
    # other configuration...
}

The above configuration proportionally reduces images under the /images/ path so that they fit within 300×200 pixels; the aspect ratio is preserved.

The Image Filter module provides some basic functions for image processing, but please note that it is not a complete image processing tool, but provides some basic functions. If more advanced image processing capabilities are required, you may need to use specialized image processing tools or libraries.
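A few companion directives of the same module are worth knowing; the values below are only illustrative:

```nginx
location ~* \.(jpg|jpeg|png|gif)$ {
    image_filter resize 300 200;    # fit within 300x200, keeping aspect ratio
    image_filter_jpeg_quality 85;   # quality of re-encoded JPEGs
    image_filter_buffer 10M;        # maximum size of an image to process
}
```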

03-014:http_image_filter_module=dynamic

Because it is not needed, there is no need to manually specify not to enable it.

Same as above, except the Image Filter module is built as a dynamic module and loaded at runtime.

03-015:http_geoip_module

Because it is not needed, there is no need to manually specify not to enable it.
I actually checked, and this module is indeed not included by default.
Pay special attention to the difference from the http_geo_module module; for a detailed introduction to http_geo_module, see section 03-036 of this blog post.

http_geoip_module is an Nginx configuration option that enables the GeoIP module. The GeoIP module allows Nginx to obtain geographical location information, such as country, region, city, etc., based on the client's IP address, in order to perform customized processing based on this information.
The GeoIP module allows Nginx to obtain geolocation information based on the client's IP address using MaxMind's GeoIP database or other supported data sources.

But unfortunately, MaxMind's legacy GeoIP database has been superseded by GeoIP2, so this module is of little use now.
For the use of GeoIP2, please refer to my other blog post: https://blog.csdn.net/wenhao_ir/article/details/134927487 (Note: it is only visible to Hao Hongjun himself, because it involves server, account and other information.)

03-016:http_sub_module—Replacement content

Because it is not needed, there is no need to manually specify not to enable it.

http_sub_module is an Nginx configure option used to enable the Substitution module. This module allows Nginx to perform string replacement within the response content, typically to modify the response body before returning it to the client.

Specifically, the http_sub_module option does the following:

  1. String Substitution: The Substitution module allows performing string substitution within response content. You can specify a string pattern and specify the new string to use for replacement.

  2. Content modification: Using this module, you can modify the response body in Nginx, such as replacing text, adding additional content, etc.

The specific command to use this option is as follows:

./configure --with-http_sub_module

In the configuration file, you can configure the Substitution module using the sub_filter directive. For example:

location / {
    # replace every "foo" in the response body with "bar"
    sub_filter "foo" "bar";

    # allow multiple replacements, not just the first match
    sub_filter_once off;

    # other configuration...
}

The above configuration replaces "foo" in the response body with "bar". The sub_filter_once off; setting means every occurrence is replaced, not just the first one.

The Substitution module is typically used on reverse proxies to modify the content of responses returned from the backend server, but may be useful in other scenarios as well. Please note that when using the string replacement function, care needs to be taken to ensure that the replaced pattern and target string do not affect the correctness of the response content.
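One practical detail when using sub_filter behind a reverse proxy: it cannot rewrite responses the upstream has already gzip-compressed, so the upstream is usually asked for uncompressed content. A sketch (the backend name and the replaced strings are placeholders):

```nginx
location / {
    proxy_pass http://backend;

    # sub_filter cannot operate on compressed upstream responses
    proxy_set_header Accept-Encoding "";

    sub_filter "http://backend.internal" "https://example.com";
    sub_filter_once off;
    sub_filter_types text/html text/css;  # MIME types to process
}
```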

03-017:http_dav_module

Because it is not needed, there is no need to manually specify not to enable it.

http_dav_module is an Nginx configure option for enabling the WebDAV (Web Distributed Authoring and Versioning) module. WebDAV is an extension of the HTTP protocol that supports collaborative editing and version control of files.

Specifically, the http_dav_module option does the following:

  1. WebDAV support: After enabling the WebDAV module, Nginx will be able to handle WebDAV requests, including creation, deletion, modification, etc. of files and directories.

  2. File management: WebDAV provides a method of remote file management, allowing users to operate files on remote servers through the HTTP protocol. This is useful for implementing web-based file management systems and collaborative editing systems.

The specific command to use this option is as follows:

./configure --with-http_dav_module

In the configuration file, you can use the dav_methods directive to configure the allowed WebDAV methods, and the create_full_put_path directive to configure whether intermediate directories of the path given in a PUT request are created automatically.

location /webdav/ {
    dav_methods PUT DELETE MKCOL COPY MOVE;
    create_full_put_path on;
    # other configuration...
}

The above configuration means enabling WebDAV under the /webdav/ path, allowing operations such as PUT, DELETE, MKCOL, COPY and MOVE, and automatically creating the specified path when performing a PUT operation.

The WebDAV module provides an infrastructure for applications that use Web-based file management and collaborative editing. Please note that when using WebDAV, you may need to consider security and access rights issues to ensure proper file management and protection.

03-018:http_flv_module

It may be needed; it is not included by default, so it has to be specified manually if required.

http_flv_module is an Nginx configure option used to enable the FLV (Flash Video) module. This module gives Nginx basic pseudo-streaming support for FLV files, so that FLV videos can be played in environments with a Flash player.

Specifically, the http_flv_module option does the following:

  1. FLV support: With the FLV module enabled, Nginx will be able to process FLV files directly instead of just passing them to the client as static files.

  2. Streaming media playback: The FLV module provides some functions that enable Nginx to provide FLV video playback support in a streaming media environment. This is useful for playing FLV files using Flash player on a web page.

The specific command to use this option is as follows:

./configure --with-http_flv_module

In the configuration file, you can use the location directive to configure the behavior of the FLV module. For example:

location /videos/ {
    flv;
    # other configuration...
}

The above configuration means enabling the FLV module under the /videos/ path.

Please note that there may be some limitations to using the FLV module in modern environments due to the waning use of the Flash player and more advanced video playback support provided by HTML5. If you need to support a wider range of video formats and playback methods, you may want to consider using modern technologies such as the HTML5 video tag.

03-019:http_mp4_module

It is needed, but it is not included by default, so it has to be enabled manually.

http_mp4_module is an Nginx configure option used to enable the MP4 module. This module allows Nginx to process MP4 files directly, providing basic pseudo-streaming support for playing MP4 videos in the browser.

Specifically, the http_mp4_module option does the following:

  1. MP4 support: With the MP4 module enabled, Nginx will be able to process MP4 files directly instead of just passing them to the client as static files.

  2. Streaming media playback: The MP4 module provides some functions that enable Nginx to provide MP4 video playback support in a streaming media environment. This is useful for playing MP4 files on a web page using, for example, an HTML5 player.

The specific command to use this option is as follows:

./configure --with-http_mp4_module

In the configuration file, you can use the location directive to configure the behavior of the MP4 module. For example:

location /videos/ {
    mp4;
    # other configuration...
}

The above configuration means enabling the MP4 module under the /videos/ path.
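The module also has two tuning directives for the buffer used to process MP4 metadata; the sizes below are illustrative:

```nginx
location /videos/ {
    mp4;
    mp4_buffer_size     1m;  # initial buffer for reading MP4 metadata
    mp4_max_buffer_size 5m;  # hard limit; larger metadata yields an error
}
```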

03-020:http_gunzip_module

It is needed, but it is not included by default, so it has to be enabled manually.

http_gunzip_module is an Nginx configure option that enables the gunzip filter module. This module decompresses responses that carry "Content-Encoding: gzip" for clients that do not support gzip.

Specifically, the http_gunzip_module option does the following:

  1. Decompression for clients without gzip support: when a response is already gzip-compressed (for example, it is stored compressed in a proxy cache, or the upstream always sends gzip), the module decompresses it on the fly before sending it to a client that did not send "Accept-Encoding: gzip".

  2. Storage efficiency: this makes it possible to keep only the compressed copy on disk or in the cache, saving space and upstream bandwidth, while still serving clients that cannot handle gzip.

The specific command to use this option is as follows:

./configure --with-http_gunzip_module

In the configuration file, decompression is switched on with the gunzip directive:

server {
    listen 80;
    server_name example.com;

    location / {
        # decompress gzipped responses for clients without gzip support
        gunzip on;
        # other configuration...
    }
}

With the above configuration, gzip-compressed responses under location / are decompressed on the fly for clients that do not advertise gzip support.

Please note that on-the-fly decompression adds CPU overhead on the server, so it should be weighed against the storage and bandwidth savings.

03-021:http_gzip_static_module

It is needed, but it is not included by default, so it has to be enabled manually.

http_gzip_static_module is an Nginx configure option used to enable the gzip static module. Rather than compressing on the fly, this module lets Nginx serve a pre-compressed ".gz" file: when a client that accepts gzip requests file.css and file.css.gz exists, the compressed copy is sent directly.

Specifically, the http_gzip_static_module option does the following:

  1. Serving pre-compressed files: with gzip_static on, for a request for /path/file Nginx first checks for /path/file.gz and, if the client accepts gzip, sends it with "Content-Encoding: gzip".

  2. Reduced transfer size and CPU: files are compressed once, ahead of time, so transfers stay small without the per-request CPU cost of on-the-fly compression; this improves page load speed, especially when bandwidth is limited.

The specific command to use this option is as follows:

./configure --with-http_gzip_static_module

In the configuration file, you can use the gzip_static directive to enable serving of pre-compressed static files. For example:

http {
    gzip_static on;
    
    server {
        listen 80;
        server_name example.com;

        location /static/ {
            # serve pre-compressed .gz files under /static/ when available
            gzip_static on;

            # other configuration...
        }

        # other server configuration...
    }
}

In the above configuration, gzip_static on; enables serving of pre-compressed static files: for requests under location /static/, Nginx sends the corresponding .gz file to the client when one exists.

Serving pre-compressed static files can noticeably improve page loading performance, but you must generate and store the compressed copies on the server side yourself (the module does not create them); that is exactly what avoids the performance overhead of real-time compression.

Feeling: this is why the transferred size of a static file shown in the browser is smaller than its actual size: the content is gzip-compressed for the transfer.
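Since gzip_static only serves pre-compressed files and never creates them, the .gz copies have to be generated ahead of time, for example with gzip (the -k flag, which keeps the original file, requires GNU gzip 1.6 or later; the path is a placeholder):

```shell
# create a sample stylesheet and a pre-compressed copy next to it
echo 'body{margin:0}' > /tmp/style.css
gzip -kf9 /tmp/style.css     # produces /tmp/style.css.gz, keeps the original
ls /tmp/style.css /tmp/style.css.gz
```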

03-022:http_auth_request_module—Authentication function module

Because it is not needed, there is no need to manually specify not to enable it.

http_auth_request_module is an Nginx configure option used to enable the Auth Request module. This module allows Nginx to delegate authentication of a request to another service or program when performing access control, enabling a more flexible authentication mechanism.

Specifically, the http_auth_request_module option does the following:

  1. Dynamic Authentication: The Auth Request module allows Nginx to send authentication requests to other services or programs that are responsible for authenticating users. This allows you to use custom authentication logic instead of just relying on Nginx's built-in Basic Authentication or other authentication modules.

  2. Flexibility: Delegating authentication requests allows you to use a variety of external authentication services, such as OAuth authentication, LDAP, HTTP requests, and more. This provides greater flexibility to accommodate different authentication needs.

The specific command to use this option is as follows:

./configure --with-http_auth_request_module

In the configuration file, you can use the location directive to configure the Auth Request module. For example:

location /protected/ {
    # send a subrequest to /auth/check; a 2xx response allows access,
    # 401 or 403 denies it, any other status is treated as an error
    auth_request /auth/check;

    proxy_pass http://backend;
}

location = /auth/check {
    internal;
    # stub authentication service that always succeeds;
    # in practice this location would proxy_pass to a real auth backend
    return 200;
}

In the above configuration, every request under location /protected/ triggers the Auth Request module, which issues a subrequest to /auth/check. If the subrequest returns a 2xx status, the request is passed on to the backend; a 401 or 403 from the subrequest is returned to the client and access is denied. The /auth/check location here is only a stub that always succeeds; you need to wire it up to a real authentication service according to your situation.

03-023:http_random_index_module

Because it is not needed, there is no need to manually specify not to enable it.

http_random_index_module is an Nginx configure option used to enable the Random Index module. This module allows Nginx to pick a random file as the index when a directory is accessed, instead of using a traditional index file (such as index.html).

Specifically, the http_random_index_module option does the following:

  1. Random index file: After enabling the Random Index module, when the client requests access to a directory without specifying an explicit index file, Nginx will randomly select a file from the directory. Returned to the client as an index file.

  2. Flexibility: This provides a more flexible way of presenting directory contents rather than using a fixed index file. This may be useful in some scenarios, such as displaying random pictures, files, etc. in a directory.

The specific command to use this option is as follows:

./configure --with-http_random_index_module

In the configuration file, you can use the random_index directive to enable the Random Index module for a location. For example:

location /images/ {
    # enable the Random Index module for /images/
    random_index on;

    # other configuration...
}

The above configuration indicates that the Random Index module is enabled in the /images/ path, and Nginx will randomly select a file in this directory and return it to the client as an index file.

Please note that using a random index file may not be suitable for all scenarios, especially if you need to control the order of presentation. In some application scenarios, it is more common to use fixed index files, such as index.html.

03-024:http_secure_link_module

Because it is not needed, there is no need to manually specify not to enable it.

http_secure_link_module is an Nginx configure option used to enable the Secure Link module. The Secure Link module provides a URL-based security mechanism for protecting resources from unauthorized access.

Specifically, the http_secure_link_module option does the following:

  1. URL Security: The Secure Link module is used to create URLs that contain signatures to ensure that only requests with valid signatures can access the resource. This prevents unauthorized access and misuse of the link.

  2. Resource protection: by using secure links, you ensure that only clients that know the signing algorithm and key can generate valid links, thus protecting sensitive resources from unauthorized access.

The specific command to use this option is as follows:

./configure --with-http_secure_link_module

In the configuration file, you can use the secure_link directive to configure the behavior of the Secure Link module. For example:

location /secure/ {
    # enable the Secure Link module: the hash comes from ?md5=,
    # the expiry timestamp from ?expires=
    secure_link $arg_md5,$arg_expires;

    # the MD5 hash covers the expiry time, the URI and a shared secret
    secure_link_md5 "$secure_link_expires$uri secretkey";

    # $secure_link is empty for a wrong hash and "0" for an expired link
    if ($secure_link = "") { return 403; }
    if ($secure_link = "0") { return 410; }

    # other configuration...
}

In the above configuration, secure_link enables the module, taking the MD5 hash from the md5 request argument and the expiry timestamp from the expires argument. secure_link_md5 defines how the hash is computed: here over the expiry time, the URI and the shared secret "secretkey". The resulting $secure_link variable is empty when the hash does not match and "0" when the link has expired, which the two if blocks turn into 403 and 410 responses.

When a client accesses the /secure/ path, it must provide a valid md5 and expires pair, otherwise it cannot access the resource.
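On the application side, a signed URL matching a secure_link_md5 expression of the form "$secure_link_expires$uri secretkey" can be generated like this (the shared secret "secretkey" and the host are placeholders; the hash must be base64url-encoded with the trailing '=' padding removed, which is the encoding the secure_link module expects):

```shell
expires=$(( $(date +%s) + 3600 ))                 # link valid for one hour
uri="/secure/file.dat"
# md5 over "<expires><uri> secretkey", raw bytes -> base64url without padding
hash=$(printf '%s' "${expires}${uri} secretkey" \
  | openssl md5 -binary | base64 | tr '+/' '-_' | tr -d '=')
echo "https://example.com${uri}?md5=${hash}&expires=${expires}"
```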

Please note that the Secure Link module provides a basic security mechanism, but in some scenarios with advanced security requirements, other more complex security mechanisms and authentication methods may need to be considered.

03-025:http_degradation_module—Prevent Nginx from overusing server memory

The http_degradation_module module is used to limit Nginx's memory usage on the server. When the memory allocated by an Nginx worker (as measured via the sbrk system call) exceeds a configured value, it degrades excess requests, for example by returning a 204 or 444 response.

Source code link for this module:
https://github.com/chronolaw/annotated_nginx/blob/master/nginx/src/http/modules/ngx_http_degradation_module.c

The memory usage of CentOS 7.9 under no load, after installing Guardian God and Nginx, is shown below:
[Figure: memory usage screenshot]

How to use it:

①Write the following code in the http module:

    # limit maximum memory use to 500 MB
    degradation sbrk=500m;

sbrk: named after the "set break" system call, which adjusts a process's data segment and is used here as a measure of allocated memory. In this context, it means that when Nginx's allocated memory exceeds 500 MB, further requests are degraded rather than allowed to consume more memory.

500m: This is a parameter indicating the maximum memory size allowed by Nginx. Here, 500m means that a memory limit of 500 megabytes is set.

②Write the following code in the server module:

    location / {
        # once the sbrk limit set above is exceeded, answer requests
        # with an empty 204 response instead of serving them
        degrade 204;
    }

The above code means that when Nginx's allocated memory exceeds the 500 MB limit, requests to this location receive an empty 204 response instead of being served normally.

03-026:http_slice_module

Because it is not needed, there is no need to manually specify not to enable it.

http_slice_module is an Nginx configure option used to enable the Slice module. The Slice module splits a request to a proxied server into a series of subrequests, each asking for one byte range ("slice") of the response. It is mainly used together with proxy_cache so that large files can be cached and updated in small pieces.

Specifically, the http_slice_module option does the following:

  1. Range subrequests: a large upstream response is fetched in fixed-size byte ranges using the Range header, each range in its own subrequest.

  2. Efficient caching of big files: each slice is cached independently, so an interrupted transfer does not invalidate the whole file, and only the affected slices need to be fetched again.

The specific command to use this option is as follows:

./configure --with-http_slice_module

In the configuration file, you use the slice directive to set the slice size; the $slice_range variable must appear in the cache key and be passed upstream as the Range header. For example (the cache zone name and backend are placeholders):

location / {
    # fetch and cache the upstream response in 1 MB slices
    slice 1m;

    proxy_cache my_cache;
    proxy_cache_key $uri$is_args$args$slice_range;
    proxy_set_header Range $slice_range;
    proxy_cache_valid 200 206 1h;

    proxy_pass http://backend;
}

In the above configuration, slice 1m; makes Nginx request the file from the backend in 1 MB byte ranges. $slice_range has to be part of the cache key and be sent to the upstream as the Range header, and 206 (Partial Content) responses must be made cacheable with proxy_cache_valid.

The Slice module is typically used to cache large files (such as video) efficiently. Note that the upstream server must support byte-range requests for the module to work.

03-027:http_stub_status_module-Nginx running status monitoring page

It may be needed, but it is not included by default, so it has to be specified manually.

http_stub_status_module is an Nginx configure option used to enable the Stub Status module. The Stub Status module provides a simple page for monitoring basic performance indicators and the running state of the Nginx server.

Specifically, the http_stub_status_module option does the following:

  1. Performance monitoring: The Stub Status module allows you to obtain the performance indicators of the Nginx server through HTTP requests, such as the number of connections, request processing, the number of requests with various status codes, etc.

  2. Status information: Visit the Stub Status page to obtain status information about the Nginx worker process for performance tuning and monitoring.

The specific command to use this option is as follows:

./configure --with-http_stub_status_module

In the configuration file, you can use the location directive to configure the page access path of the Stub Status module. For example:

server {
    listen 80;
    server_name example.com;

    location /nginx_status {
        # enable the Stub Status module
        stub_status;

        # only allow the listed IP address to access the status page
        allow 127.0.0.1;
        deny all;

        # other configuration...
    }

    # 其他 server 配置...
}

In the above configuration, location /nginx_status enables the Stub Status module and restricts which IP addresses may access the page. You can get basic performance information about Nginx by visiting http://example.com/nginx_status.

Sample output from the Stub Status page might include the following information:

Active connections: 10
server accepts handled requests
 10 10 10
Reading: 0 Writing: 1 Waiting: 9

This information provides information about the current number of connections, request processing, and read, write, and wait status. You can monitor these indicators to understand the load and performance of the server. Please note that in a production environment, access to the Stub Status page should be configured carefully to prevent information leakage.

03-028:http_charset_module

Because it is not needed, there is no need to manually specify not to enable it.

http_charset_module is a module of Nginx, used to handle character set (Charset) related settings. Specifically, http_charset_module is mainly used to configure the character set information in the server response to ensure that the client correctly parses and displays text content.

The following are the main functions of http_charset_module:

  1. Character set configuration: lets you add a character set to the "Content-Type" response header, ensuring that text content is delivered to the client with the correct encoding declared.

  2. Restriction by MIME type and conversion: the charset_types directive controls which response MIME types the charset is added to, and the module can also convert responses between character sets via the source_charset and override_charset directives. This is useful for multilingual websites or content stored in different encodings.

A configuration example using http_charset_module is as follows:

http {
    charset utf-8;  # set the default character set to UTF-8

    server {
        listen 80;
        server_name example.com;

        location / {
            # MIME types to which the charset is applied
            charset_types text/plain text/html;

            # other configuration...
        }
    }
}

In the above example:

  • charset utf-8; sets the default character set to UTF-8.
  • charset_types text/plain text/html; limits the module to plain-text (text/plain) and HTML (text/html) responses.

After this configuration, Nginx will add the Content-Type header to the response, specifying the corresponding character set information to ensure that the client correctly parses and displays the text content. Character set settings are important to ensure that your website displays text content correctly in different locales.
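The module can also convert between character sets on the fly; a sketch for serving files stored in a legacy GBK encoding as UTF-8 (the path is a placeholder):

```nginx
location /legacy/ {
    source_charset gbk;   # encoding the files are stored in
    charset utf-8;        # encoding sent to the client
}
```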

03-029:http_gzip_module

This module is special: among the object files compiled by default, I found the following:

objs/src/http/modules/ngx_http_gzip_filter_module.o

On closer inspection, ngx_http_gzip_filter_module is in fact http_gzip_module; as noted earlier, the module name used in configure can differ from the name of the compiled object file.

So there is no need to explicitly include this module.

http_gzip_module is the Nginx module that applies Gzip compression to the content of HTTP responses. Gzip compression is a technique for reducing file size and increasing transfer speed, and is especially effective for text content such as HTML, CSS and JavaScript.

The following are the main functions of http_gzip_module:

  1. Content compression: Gzip compression can significantly reduce the size of transferred files, thereby reducing network transfer time and bandwidth consumption.

  2. Network performance: After compressing files, the client needs less time to download resources, thereby increasing page loading speed, reducing user waiting time, and improving user experience.

A configuration example using http_gzip_module is as follows:

http {
    gzip on;  # enable Gzip compression
    gzip_types text/plain text/css application/json application/javascript application/xml;  # MIME types to compress

    server {
        listen 80;
        server_name example.com;

        location / {
            # other configuration...
        }
    }
}

In the above example:

  • gzip on; enables Gzip compression.
  • gzip_types specifies the file types to compress, including text/plain, text/css, application/json, and so on.

After this configuration, Nginx will check the Content-Type of each response and perform Gzip compression on the configured file type. When the client supports Gzip compression, the server transmits the compressed content to the client.

Note: Although Gzip compression improves performance, it also requires compression and decompression operations between the server and client, which may cause some additional CPU overhead. Therefore, the performance improvement brought by compression needs to be weighed against the consumption of server resources.
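For completeness, a few commonly tuned companion directives of the gzip module (the values are illustrative):

```nginx
gzip on;
gzip_comp_level 5;     # 1 = fastest, 9 = smallest output
gzip_min_length 1024;  # skip very small responses
gzip_vary on;          # emit "Vary: Accept-Encoding" for caches
gzip_proxied any;      # also compress responses to proxied requests
```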

Note: the related module http_gzip_static_module does not compress on the fly; it serves pre-compressed .gz versions of static files when they exist.

03-030:http_ssi_module

Because it is not needed, there is no need to manually specify not to enable it.

http_ssi_module is an Nginx module that implements Server Side Includes (SSI). SSI allows dynamic content to be embedded in HTML pages, so that pages can be updated with server-side generated data.

The following are the main functions of http_ssi_module:

  1. Dynamic content embedding: SSI allows dynamically generated content to be inserted into HTML pages, allowing the page to update dynamically.

  2. Template engine: SSI can be thought of as a simple template engine that embeds dynamic data into static pages by inserting some special instructions into HTML pages.

  3. Module directives: provides directives such as <!--#include --> for including content from other files in an HTML page, and <!--#echo --> for outputting variables.

A configuration example using http_ssi_module is as follows:

http {
    ssi on;  # enable the SSI module

    server {
        listen 80;
        server_name example.com;

        location / {
            root /path/to/your/html/files;
            index index.html;

            # allow SSI processing of HTML files
            ssi_types text/html;

            # other configuration...
        }
    }
}

In the above example:

  • ssi on; enables the SSI module.
  • ssi_types text/html; enables SSI processing for HTML files.

Then, within the HTML file, you can use SSI directives to insert dynamic content. For example:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>SSI Example</title>
</head>
<body>
    <h1>Hello, <!--#echo var="REMOTE_ADDR" -->!</h1>
    <p>The current time is: <!--#echo var="DATE_LOCAL" --></p>
</body>
</html>

In the above HTML file, <!--#echo var="REMOTE_ADDR" --> is used to output the client's IP address, and <!--#echo var="DATE_LOCAL" --> is used to output the server's local time. This dynamic content will be generated and replaced by the server side when the page loads.

03-031:http_userid_module

Because it is not needed, there is no need to manually specify not to enable it.

http_userid_module is an Nginx module for handling user IDs. Its main function is to set or update a client identification cookie for each request, with behavior customizable through configuration.

The following are the main functions of http_userid_module:

  1. User ID: Allows adding or modifying a user ID for user requests, often used to track user access and behavior.

  2. Cookie settings: You can set a cookie to store the user ID so that the user can carry the ID in subsequent requests.

  3. Flexible configuration: provides a series of configuration items, and you can customize the generation rules, expiration time, etc. of user IDs as needed.

A configuration example using http_userid_module is as follows:

http {
    userid on;  # enable the User ID module

    server {
        listen 80;
        server_name example.com;

        location / {
            # set the cookie name, path and expiry time
            userid_name my_user_id;
            userid_path /;
            userid_expires 30d;

            # other configuration...
        }
    }
}

In the above example:

  • userid on; enables the User ID module.
  • userid_name my_user_id; sets the cookie name to my_user_id.
  • userid_path /; sets the path for which the cookie is valid.
  • userid_expires 30d; sets the cookie expiry time to 30 days.

After this configuration, when a user accesses the server, Nginx will generate a user ID and store it on the client by setting a cookie. In subsequent requests, the client will carry this cookie, and the server can identify the user based on this user ID.
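The module also exposes the $uid_got and $uid_set variables ($uid_set is filled when a new cookie is issued, $uid_got when an existing one is received), which can be written to the access log to correlate requests from the same client. A minimal sketch; the log path is illustrative:

```nginx
http {
    userid on;
    userid_name my_user_id;

    # $uid_got / $uid_set are provided by ngx_http_userid_module
    log_format with_uid '$remote_addr [$time_local] "$request" '
                        'uid_got=$uid_got uid_set=$uid_set';

    server {
        listen 80;
        access_log /var/log/nginx/access.log with_uid;  # illustrative path
    }
}
```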

Please note that the use of user IDs needs to comply with the privacy policy and relevant regulations to ensure that user privacy is protected.

If the http_userid_module module is not enabled, can Django handle cookies?
Yes, Django itself is capable of handling cookies even without Nginx's http_userid_module module enabled. The Django framework has built-in functions for handling user sessions and cookies without relying on Nginx modules.

In Django, you can use Django's request object to read and set cookies. Here's a simple example:

from django.http import HttpResponse

def set_cookie(request):
    # set a cookie
    response = HttpResponse("Cookie set!")
    response.set_cookie('my_cookie', 'cookie_value', max_age=3600)  # expires in 1 hour
    return response

def get_cookie(request):
    # read a cookie
    my_cookie_value = request.COOKIES.get('my_cookie', 'default_value')
    return HttpResponse(f"Value of my_cookie: {my_cookie_value}")

In the above example:

  • The set_cookie view sets a cookie named my_cookie with the value 'cookie_value' and an expiry time of 1 hour.
  • The get_cookie view reads the value of the cookie named my_cookie.

These views are accessible to clients through Django's URL mapping mechanism. You can perform more complex cookie operations according to your own needs, including setting paths, domains, security flags, etc. However, please be aware of privacy and security when using cookies and ensure compliance with relevant regulations and best practices.

03-032:http_access_module

This module is included by Nginx by default; note that the compiled object file is named ngx_http_access_module.o.
It is needed for configuring IP whitelists and blacklists.
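A minimal whitelist/blacklist sketch using this module's allow and deny directives; the addresses and path are examples:

```nginx
location /admin/ {
    allow 192.168.1.0/24;  # whitelist: this subnet may connect
    allow 10.0.0.1;        # ...and this single address
    deny  all;             # everyone else gets 403 Forbidden
}
```

Rules are checked in order, and the first matching rule wins.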

03-033:http_auth_basic_module—Require a username and password for protected resources

Because it is not needed, there is no need to manually specify not to enable it.

http_auth_basic_module is an Nginx module that enables HTTP Basic Authentication, a simple authentication mechanism that requires users to provide a username and password when accessing protected resources. The credentials are transmitted only Base64-encoded, so this mechanism is not suitable for sending passwords over insecure connections.

The following are the main functions of http_auth_basic_module:

  1. Authentication: Enables HTTP basic authentication, which requires users to provide a valid username and password to access protected resources.

  2. Simple configuration: provides simple configuration options to specify the name of the authentication realm (Realm) and the location of the user password file.

A configuration example using http_auth_basic_module is as follows:

server {
    listen 80;
    server_name example.com;

    location / {
        auth_basic "Restricted Access";  # set the name of the authentication realm
        auth_basic_user_file /etc/nginx/.htpasswd;  # location of the user password file
        # other configuration...
    }
}

In the above example:

  • auth_basic "Restricted Access"; sets the name of the authentication realm to "Restricted Access", the text users see in the authentication prompt.

  • auth_basic_user_file /etc/nginx/.htpasswd; specifies the location of the file containing usernames and passwords. Each line has the format username:encrypted_password; the htpasswd tool can generate such a file securely.

In practical applications, it is recommended to use HTTPS to encrypt the transmission of usernames and passwords to increase security. In addition, regularly update password files and use strong password policies to ensure the security of user passwords.
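Basic authentication combines naturally with http_access_module via the satisfy directive: with satisfy any, a request is accepted if it passes either the IP check or the password check. A sketch, with an example subnet:

```nginx
location /internal/ {
    satisfy any;                     # pass if EITHER the access rules OR auth succeeds

    allow 192.168.1.0/24;            # trusted subnet skips the password prompt
    deny  all;

    auth_basic "Restricted Access";  # everyone else must authenticate
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```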

03-034:http_mirror_module —Forward client requests to the mirror server simultaneously for development testing

Because it is not needed, there is no need to manually specify not to enable it.

http_mirror_module is an Nginx module that mirrors client requests to a specified server. It is typically used to replay production traffic against a copy of the service for testing, monitoring, or other purposes.

The following are the main functions of http_mirror_module:

  1. Request Mirroring: Mirror the client's request data to one or more specified servers so that these servers can receive the same request.

  2. Multiple uses: can be used in monitoring, testing, and load balancing scenarios to observe the behavior of requests without impacting the production environment.

A configuration example using http_mirror_module is as follows:

location / {
    mirror /mirror;
    # other configuration...
}

location /mirror {
    internal;
    proxy_pass http://mirror_backend;
    # other configuration...
}

In the above example:

  • mirror /mirror; mirrors the data of each client request to the /mirror location.

  • location /mirror forwards the mirrored request via proxy_pass to mirror_backend, the address of the mirror server.

After this configuration, when a client sends a request to the main server, a copy of the request is sent to the /mirror location and forwarded to the mirror_backend server.

Note that the /mirror location is usually marked with the internal keyword, so that only internally forwarded requests can reach it. This ensures mirror requests cannot be accessed directly by external clients.
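The module also provides the mirror_request_body directive, which controls whether the request body is replicated to the mirror; disabling it can reduce overhead when mirroring endpoints that receive large uploads. A sketch; the upstream names are illustrative:

```nginx
location / {
    mirror /mirror;
    mirror_request_body off;          # mirror headers only, not request bodies
    proxy_pass http://main_backend;   # illustrative upstream name
}

location = /mirror {
    internal;
    proxy_pass http://mirror_backend$request_uri;  # preserve the original URI
}
```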

03-035:http_autoindex_module—Automatically generate a file list in the directory

Because it is not needed, there is no need to manually specify not to enable it.

http_autoindex_module is a module of Nginx used to automatically generate directory indexes. When accessing a directory without specifying specific files, Nginx can use http_autoindex_module to automatically generate a file list of the directory to facilitate users to browse and access the contents.

The following are the main functions of http_autoindex_module:

  1. Directory index: Automatically generates a directory's file list so that users can access files in the directory through a browser.

  2. Customization: Provides a series of configuration options, allowing you to customize the display method of the directory index, including style, number of columns, sorting method, etc.

A configuration example using http_autoindex_module is as follows:

server {
    listen 80;
    server_name example.com;

    location / {
        autoindex on;  # enable directory indexing

        # other configuration...
    }
}

In the above example, autoindex on; enables directory indexing. If a directory rather than a specific file is requested, Nginx automatically generates a file list for that directory.

You can also customize it through other configuration options of the autoindex directive. For example:

location / {
    autoindex on;  # enable directory indexing
    autoindex_exact_size off;  # show human-readable sizes instead of exact byte counts
    autoindex_localtime on;  # use local time instead of UTC
    autoindex_format html;  # output format: html, xml, json or jsonp
    # other configuration...
}

These options allow you to adjust the appearance and behavior of the directory index to meet your specific needs. Note that automatically generated catalog indexes may reveal sensitive information and should be used with caution in production environments.

03-036:http_geo_module

This module is also included by default. When it is included, you can use the geo directive.

It is worth noting that although its name in the configure help output is http_geo_module, the name of the compiled object file is actually ngx_http_geo_module.o. I tested this: if you do not want to include this module, you need the following option:

--without-http_geo_module

This fully shows that the name of the compiled target file and the module name used in configure can be different.

Please note the difference between this module and http_geoip_module. For an introduction to http_geoip_module, see point 03-015 of this blog post.

Q: Can you introduce the geo directive of Nginx?
The geo directive in the Nginx configuration file defines a variable whose value is selected based on the client's IP address (or another address variable), and that value can then drive behavior elsewhere in the configuration. In other words, the geo module lets you create variables whose values are chosen by matching the client IP against configured address ranges.

The following is the basic syntax of the geo directive:

geo $variable {
    default value;
    CIDR_range value;
    CIDR_range value;
    ...
}
  • $variable: Define a variable used to store IP addresses or other values.
  • default value: Specifies the default value when no matching condition is found.
  • CIDR_range: Defines a CIDR range. When the client's IP address matches this range, the corresponding value will be used.

Here is a simple example demonstrating how to use the geo directive:

http {
    geo $my_country {
        default ZZ; # default value is ZZ

        192.168.1.0/24 US; # client IPs in this range give $my_country = US
        10.0.0.0/8    AU; # client IPs in this range give $my_country = AU
    }

    server {
        location / {
            # use the $my_country variable in a condition
            if ($my_country = US) {
                return 301 http://www.example-us.com;
            }

            if ($my_country = AU) {
                return 301 http://www.example-au.com;
            }

            # by default, redirect to a generic site
            return 301 http://www.example-global.com;
        }
        }
    }
}

In this example, based on the client's IP address, the value of the $my_country variable will be set to the corresponding country code. Then, check the value of this variable through the if statement and perform the corresponding operation. Please note that using the if statement may cause some undesirable problems, so use it with caution.
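One way to avoid if here is to pair geo with map: geo classifies the client, map translates the classification into a target host, and a single return uses the result. A sketch built on the same example values:

```nginx
geo $my_country {
    default        ZZ;
    192.168.1.0/24 US;
    10.0.0.0/8     AU;
}

map $my_country $redirect_host {
    default www.example-global.com;
    US      www.example-us.com;
    AU      www.example-au.com;
}

server {
    listen 80;
    server_name example.com;
    return 301 http://$redirect_host$request_uri;  # no if needed
}
```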

The geo directive can be used for more complex conditional logic, such as providing different services or content based on different IP ranges.
For an example of using the geo directive for load balancing, see the article "The geo module in Nginx and how to use it to configure load balancing".

03-037:http_map_module

Because it is not needed, there is no need to manually specify not to enable it.

http_map_module is an Nginx module for creating mapping relationships: it maps one value to another. It provides a simple yet powerful mechanism to define mapping rules in the configuration file and then use those mappings elsewhere in Nginx.

The following are the main functions of http_map_module:

  1. Value Mapping: allows mapping of one value to another value. This can be used to convert, rename or modify the value of a variable during request processing.

  2. Conditional matching: Mappings can be based on conditional matching, allowing mappings to be selected based on some specific attributes of the request.

A configuration example using http_map_module is as follows:

http {
    map $request_method $new_method {
        default       $request_method;
        GET           "mapped_get";
        POST          "mapped_post";
        PUT           "mapped_put";
        DELETE        "mapped_delete";
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            # use the mapped value
            add_header X-Mapped-Method $new_method;

            # 其他配置...
        }
    }
}

In the above example:

  • map $request_method $new_method defines a mapping from the value of $request_method to $new_method.

  • add_header X-Mapped-Method $new_method; uses the mapped value during request processing by adding it to the response headers.

In practical applications, you can use http_map_module to dynamically select configuration items based on different attributes of the request, or to map one variable value to another form. This provides a very flexible way to customize the handling of requests as needed.
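map can also match with regular expressions: a ~ prefix makes the pattern case-sensitive and ~* case-insensitive. For example, a sketch that flags mobile clients by User-Agent (the patterns are illustrative, not exhaustive):

```nginx
map $http_user_agent $is_mobile {
    default                 0;
    "~*iphone|ipad|android" 1;  # case-insensitive regex match
}
```

The resulting $is_mobile variable can then drive a different root, a redirect, or a response header.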

03-038:http_split_clients_module

Because it is not needed, there is no need to manually specify not to enable it.

http_split_clients_module is an Nginx module for splitting client traffic among multiple backend servers. It decides which backend receives a request based on a hash of some client attribute, such as the IP address or the User-Agent string.

The following are the main functions of http_split_clients_module:

  1. A/B testing: can be used to implement A/B testing by assigning different client requests to different backend servers to test different functions or pages Version.

  2. Canary (grayscale) release: by routing a small share of requests to a different backend server, a new version can be gradually rolled out to a subset of users for testing.

A configuration example using http_split_clients_module is as follows:

http {
    split_clients "${remote_addr}AAA" $backend {
        20%   backend1.example.com;
        30%   backend2.example.com;
        *     backend3.example.com;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://$backend;
            # other configuration...
        }
    }
}

In the above example:

  • split_clients "${remote_addr}AAA" $backend hashes the client address (with the fixed string "AAA" appended) and assigns 20% of clients to backend1.example.com, 30% to backend2.example.com, and the remaining 50% (the * entry) to backend3.example.com.

  • proxy_pass http://$backend; forwards each request to the backend selected for that client.

Once configured this way, requests are distributed across the backend servers based on a hash of the client's IP address. The configuration can be adjusted to specific needs; for example, the split can be keyed on another variable, such as a cookie value.

03-038:http_referer_module

This module is required, but it is included by default, so there is no need to specify it manually.

In fact, Nginx does provide this functionality: the configure-time name http_referer_module corresponds to the ngx_http_referer_module module, which supplies the valid_referers directive and the $invalid_referer variable. Independently of that module, the $http_referer variable is always available and exposes the value of the Referer header of the client request.

The Referer header is part of the HTTP request headers and indicates the source URL of the request. The $http_referer variable can be used in the Nginx configuration to apply specific processing or controls based on where a request came from.

Here is a simple example that demonstrates how to use the http_referer variable:

server {
    listen 80;
    server_name example.com;

    location / {
        if ($http_referer ~* "(google|bing|yahoo)") {
            # if the Referer contains google, bing or yahoo, take some action
            # add the specific handling logic here
        }

        # other configuration...
    }
}

In the above example, by using the $http_referer variable, you can perform a specific action by checking whether the source of the request contains "google", "bing", or "yahoo".

Please note that you need to be careful when using the if directive as it may introduce some unintuitive behavior. In Nginx, the if directive does not handle requests in different stages like in other programming languages, so it may not work as expected in some cases. If possible, it is recommended to avoid using the if directive, especially in complex configurations.
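The dedicated directives of the referer module are usually preferable to matching $http_referer by hand: valid_referers defines the acceptable sources and sets $invalid_referer, which is empty when the Referer is acceptable. A typical anti-hotlinking sketch:

```nginx
location /images/ {
    # "none" = no Referer header; "blocked" = header stripped by a proxy or firewall
    valid_referers none blocked server_names *.example.com;

    if ($invalid_referer) {
        return 403;  # refuse requests hotlinked from unlisted sites
    }
}
```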

03-039:http_rewrite_module

This module is required, but it is included by default, so there is no need to specify it manually.

To be precise: without-http_rewrite_module is not a module name but a configure option (--without-http_rewrite_module). The rewrite functionality itself is provided by the ngx_http_rewrite_module module.

The ngx_http_rewrite_module module is one of the most powerful and commonly used modules in Nginx. It allows you to modify the request URI and implement URL rewriting and redirection. Using the rewrite directive, you can change the requested URI according to custom rules.

Here is a simple example demonstrating how to use the rewrite directive:

server {
    listen 80;
    server_name example.com;

    location /old-uri {
        rewrite ^/old-uri(.*)$ /new-uri$1 permanent;
    }

    # other configuration...
}

In the above example, when the requested URI starts with /old-uri, the rewrite directive redirects it to a new URI starting with /new-uri. The permanent parameter issues an HTTP 301 permanent redirect.

Please note that care should be taken when using the rewrite directive to ensure that the rules are correct and do not cause unnecessary performance impact. In complex rewrite rule scenarios, it may be necessary to have a deep understanding of how this module is used.
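For a single fixed URI, the return directive (also part of ngx_http_rewrite_module) is simpler and cheaper than a regex rewrite, since no pattern matching is evaluated:

```nginx
server {
    listen 80;
    server_name example.com;

    # exact-match redirect without regex evaluation
    location = /old-page {
        return 301 /new-page;
    }
}
```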

03-040:http_proxy_module

This module is required, but it is included by default, so there is no need to specify it manually.

http_proxy_module is a core module of Nginx used to forward (proxy) requests to other servers. It provides proxy functionality, allowing Nginx to act as a reverse proxy or load balancer and forward requests to one or more backend servers.

The following are the main functions of http_proxy_module:

  1. Reverse proxy: Allows Nginx to receive requests from clients, proxy those requests to the backend server, and then return responses to the client. This helps hide the actual address and details of the backend server, improving security and maintainability.

  2. Load Balancing: Nginx can be configured as a load balancer to distribute requests to multiple backend servers to improve performance and availability.

  3. Cache: allows setting up a cache to cache responses from the backend server, thus reducing the load on the backend server and improving response speed.

A simple configuration example using http_proxy_module is as follows:

http {
    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend_server;
            # other proxy configuration...
        }
    }

    upstream backend_server {
        server backend1.example.com;
        server backend2.example.com;
        # more backend servers can be added...
    }
}

In the above example:

  • proxy_pass http://backend_server; forwards requests to the group of backend servers defined by backend_server.

  • upstream backend_server defines the list of backend servers, which can contain one or many entries. Nginx distributes requests among them according to the load-balancing algorithm.

This is a basic reverse proxy and load balancing configuration example. The specific configuration can be adjusted according to actual needs, such as adding cache, setting load balancing algorithm, configuring request headers, etc.
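When proxying, the backend only sees Nginx as its client, so it is common practice to pass the original request details along in headers. A sketch using the conventional proxy_set_header values:

```nginx
location / {
    proxy_pass http://backend_server;

    proxy_set_header Host              $host;                       # original Host header
    proxy_set_header X-Real-IP         $remote_addr;                # real client address
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;  # append to existing chain
    proxy_set_header X-Forwarded-Proto $scheme;                     # http or https
}
```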

03-041:http_fastcgi_module

This module is required, but it is included by default, so there is no need to specify it manually.

Note, however, that Gunicorn normally speaks plain HTTP, so Nginx usually communicates with it via proxy_pass rather than FastCGI.
http_fastcgi_module is an Nginx module used to communicate with FastCGI processes, interacting with backend applications through the FastCGI protocol. FastCGI is a communication protocol that lets Nginx, as a web server, talk to standalone FastCGI processes (such as PHP-FPM, or uWSGI in FastCGI mode) to serve dynamically generated content.

The following are the main functions of http_fastcgi_module:

  1. Communicate with FastCGI process: Allows Nginx to communicate with the backend's FastCGI process via the FastCGI protocol to handle dynamic content generation and processing.

  2. Supports multiple programming languages: the FastCGI protocol is generic, so http_fastcgi_module can be used with multiple languages, such as PHP, Python, and Ruby.

  3. Performance and flexibility: FastCGI performs better than traditional CGI because FastCGI processes maintain persistent connections instead of starting a new process for each request. This improves performance and reduces resource consumption.

The following is a simple Nginx configuration example using http_fastcgi_module to communicate with the PHP-FPM process:

server {
    listen 80;
    server_name example.com;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
    }

    # other configuration...
}

In the above example:

  • location ~ \.php$ defines the location block for handling PHP scripts.
  • fastcgi_pass unix:/var/run/php-fpm.sock; specifies the FastCGI server address for communicating with the PHP-FPM process.
  • fastcgi_param configures FastCGI parameters, including the script file name and script name.

This is a basic configuration example, the exact configuration will depend on the type and configuration of your backend FastCGI process.

03-042:http_uwsgi_module

In fact, my Nginx communicates with Gunicorn, so I probably don't need this, but it doesn't matter, this is included by default anyway, so I don't need to worry about it.

http_uwsgi_module is an Nginx module used to communicate with a uWSGI server, interacting with backend applications through the uWSGI protocol. uWSGI is a communication protocol used to connect web servers with application frameworks such as Django and Flask.

The following are the main functions of http_uwsgi_module:

  1. Communicate with uWSGI server: Allows Nginx to communicate with the uWSGI server on the backend via the uWSGI protocol to handle dynamically generated content.

  2. Supports multiple application frameworks: the uWSGI protocol is generic, so http_uwsgi_module can be used with multiple application frameworks, such as Django, Flask, and Pyramid.

  3. Performance and flexibility: uWSGI provides high-performance communication with application frameworks, supporting persistent connections and multiple transport protocols, improving performance and reducing resource consumption.

The following is a simple Nginx configuration example using http_uwsgi_module to communicate with a uWSGI server:

server {
    listen 80;
    server_name example.com;

    location / {
        try_files $uri $uri/ @uwsgi;
    }

    location @uwsgi {
        include uwsgi_params;
        uwsgi_pass unix:/var/run/uwsgi.sock;
    }

    # other configuration...
}

In the above example:

  • location @uwsgi defines the location block for communicating with the uWSGI server.
  • uwsgi_pass unix:/var/run/uwsgi.sock; specifies the uWSGI server address.
  • include uwsgi_params; includes the uWSGI-related parameters.

This is a basic configuration example, the exact configuration will depend on the type and configuration of your backend uWSGI server.

03-043:http_scgi_module

Most of the time you won't use this, but it is included by default, so you don't need to worry about it.

http_scgi_module is an Nginx module used to communicate with an SCGI (Simple Common Gateway Interface) server, interacting with backend applications through the SCGI protocol. SCGI is a communication protocol, similar to FastCGI, that defines a simple interface between a web server and an application.

The following are the main functions of http_scgi_module:

  1. Communicate with SCGI server: Allows Nginx to communicate with the backend SCGI server via the SCGI protocol to handle dynamically generated content.

  2. Supports multiple programming languages: the SCGI protocol is generic, so http_scgi_module can be used with multiple languages, such as Python and Perl.

  3. Performance and flexibility: SCGI provides high-performance communication with applications, supporting persistent connections, improving performance and reducing resource consumption.

The following is a simple Nginx configuration example using http_scgi_module to communicate with a SCGI server:

server {
    listen 80;
    server_name example.com;

    location / {
        include scgi_params;
        scgi_pass localhost:4000;
    }

    # other configuration...
}

In the above example:

  • location / defines the location block for communicating with the SCGI server.
  • scgi_pass localhost:4000; specifies the SCGI server address.
  • include scgi_params; includes the SCGI-related parameters.

This is a basic configuration example, the exact configuration will depend on the type and configuration of your backend SCGI server.

03-044:http_grpc_module

Because it is not needed, there is no need to worry about whether it is enabled or not.

The http_grpc_module module allows passing requests to gRPC servers (available since nginx 1.13.10). This module requires the ngx_http_v2_module module.

03-045:http_memcached_module

Because it is not needed, there is no need to worry about whether it is enabled or not.

http_memcached_module is an Nginx module used to communicate with a Memcached server. Memcached is a high-performance distributed in-memory object caching system used to cache data and speed up application access.

The following are the main functions of http_memcached_module:

  1. Communicate with Memcached server: Allows Nginx to communicate with the back-end Memcached server through the Memcached protocol to read and write cached data.

  2. Cache acceleration: Using Memcached as the cache backend can accelerate the generation of dynamic content and improve response speed. By caching frequently accessed data into Memcached, you can reduce the load on your backend servers.

The following is a simple Nginx configuration example using http_memcached_module to communicate with a Memcached server:

http {
    server {
        listen 80;
        server_name example.com;

        location / {
            set $memcached_key $uri;
            memcached_pass memcached_server:11211;
            default_type text/html;
            error_page 404 = @fallback;
        }

        location @fallback {
            proxy_pass http://backend_server;
            # other configuration...
        }
    }

    upstream backend_server {
        server backend1.example.com;
        server backend2.example.com;
        # more backend servers can be added...
    }
}

In the above example:

  • location / defines the location block for communicating with the Memcached server.
  • memcached_pass memcached_server:11211; specifies the Memcached server address and port.
  • set $memcached_key $uri; sets the key for the Memcached lookup, typically the requested URI.
  • error_page 404 = @fallback; when the lookup misses (404), the request jumps to the @fallback location block.
  • proxy_pass http://backend_server; in the @fallback block, proxies the request to the backend server.

This is a basic configuration example, the specific configuration may need to be adjusted based on the needs of your application.

03-046:http_limit_conn_module

This module is required, but it is included by default, so there is no need to specify it manually.

http_limit_conn_module is an Nginx module used to limit the number of concurrent client connections. It helps prevent malicious attacks, improve server stability, and protect backend resources from excessive request pressure.

The following are the main functions of http_limit_conn_module:

  1. Limit concurrent connections: allows you to set a limit on the number of concurrent connections for a specific IP address, CIDR prefix, or other identifier.

  2. Prevent excessive requests: By limiting the number of concurrent connections from a single client, you can prevent certain malicious behaviors, such as brute force attacks, DDoS attacks, etc.

  3. Protect backend resources: Limiting the number of concurrent connections can ensure that the backend server will not be overloaded by too many connections, improving system reliability and performance.

The following is a simple Nginx configuration example, using http_limit_conn_module to limit the number of concurrent connections:

http {
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        listen 80;
        server_name example.com;

        location / {
            limit_conn addr 5;
            # other configuration...
        }
    }
}

In the above example:

  • limit_conn_zone $binary_remote_addr zone=addr:10m; creates a shared memory zone named addr that stores the connection count for each client IP address, using at most 10 MB of memory.

  • limit_conn addr 5; allows at most 5 concurrent connections per IP address. Connections beyond this limit are rejected with an error (503 by default).

This is a basic configuration example, and the specific configuration can be adjusted according to your needs. Different limits can be set for different location blocks, and the size and other parameters of the shared memory area can be adjusted.
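To make the mechanics concrete, the per-key counting that limit_conn performs can be modeled in a few lines of Python. This is a toy sketch, not Nginx's implementation (the name ConnLimiter is invented for illustration); the limit of 5 mirrors the example above:

```python
class ConnLimiter:
    """Toy model of limit_conn: at most `limit` concurrent connections per key."""

    def __init__(self, limit):
        self.limit = limit
        self.active = {}  # key (e.g. client IP) -> current connection count

    def try_connect(self, key):
        # Reject (HTTP 503 in Nginx) once the key is at its limit.
        if self.active.get(key, 0) >= self.limit:
            return False
        self.active[key] = self.active.get(key, 0) + 1
        return True

    def disconnect(self, key):
        self.active[key] -= 1

limiter = ConnLimiter(limit=5)
results = [limiter.try_connect("203.0.113.7") for _ in range(6)]
# The first 5 connections are accepted; the 6th is rejected.
```

Once a connection finishes, disconnect() frees a slot and the client can connect again.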

03-047:http_limit_req_module

This module is required, but it is included by default, so there is no need to specify it manually.

I have conducted detailed research on this module. For details, see: https://blog.csdn.net/wenhao_ir/article/details/134913876

http_limit_req_module is an Nginx module used to limit the request rate. It can help prevent malicious attacks, protect servers from excessive request pressure, and ensure service reliability.

The following are the main functions of http_limit_req_module:

  1. Limit request rate: allows you to set a rate limit for requests for a specific IP address, CIDR prefix, or other identifier.

  2. Prevent excessive requests: By limiting the request rate, you can prevent certain malicious behaviors, such as blasting attacks, DDoS attacks, etc.

  3. Protect backend resources: Limiting the request rate can ensure that the backend server will not be overloaded by too many requests, improving system reliability and performance.

The following is a simple Nginx configuration example using http_limit_req_module to limit the request rate:

http {
    limit_req_zone $binary_remote_addr zone=req_zone:10m rate=1r/s;

    server {
        listen 80;
        server_name example.com;

        location / {
            limit_req zone=req_zone burst=5;
            # other configuration...
        }
    }
}

In the above example:

  • limit_req_zone $binary_remote_addr zone=req_zone:10m rate=1r/s; creates a shared memory zone named req_zone that stores request state per client IP address, using at most 10 MB of memory and allowing a rate of 1 request per second.

  • limit_req zone=req_zone burst=5; allows 1 request per second per IP address, with bursts of up to 5 extra requests. Requests exceeding the limit are delayed or rejected, depending on the configuration.

This is a basic configuration example, and the specific configuration can be adjusted according to your needs. Different limits can be set for different location blocks, and the size and other parameters of the shared memory area can be adjusted.
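Internally, limit_req follows the classic "leaky bucket" scheme. The Python sketch below is a rough, simplified model of rate=1r/s with burst=5 (Nginx's real accounting uses milli-requests and differs in detail; ReqLimiter is an invented name, and timestamps are passed in manually so the example is deterministic):

```python
class ReqLimiter:
    """Toy leaky-bucket model of limit_req: `rate` requests/sec plus a `burst` allowance."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.excess = 0.0   # requests "in the bucket" beyond the steady rate
        self.last = None    # time of the previous request

    def allow(self, now):
        if self.last is not None:
            # The bucket drains at `rate` requests per second.
            self.excess = max(0.0, self.excess - (now - self.last) * self.rate)
        self.last = now
        if self.excess + 1 > self.burst:
            return False    # over rate + burst: rejected (503 by default)
        self.excess += 1
        return True

limiter = ReqLimiter(rate=1, burst=5)
# 7 requests arriving in the same second: the bucket holds 5, the rest are rejected.
burst_results = [limiter.allow(0.0) for _ in range(7)]
```

One second later the bucket has drained by one request, so a new request is accepted again.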

03-048:http_empty_gif_module

Because it is not needed, there is no need to worry about whether it is enabled or not.

http_empty_gif_module is a standard Nginx module, compiled in by default, that serves a 1x1-pixel transparent GIF image directly from memory. It is used as a lightweight "response".

This technique is usually used in web pages for tracking or statistical purposes. Its principle is that by letting the client request a very small and transparent GIF image, the server can record the relevant information of the request, such as the user visiting a certain page, opening an email, etc.

Here's an example Nginx configuration that uses this module:

server {
    listen 80;
    server_name example.com;

    location /tracking-pixel.gif {
        empty_gif;
        # other configuration...
    }

    # other configuration...
}

In this example, when the client requests the /tracking-pixel.gif path, the empty_gif directive tells Nginx to return a transparent 1x1-pixel GIF image directly from memory. The purpose of such a request is usually to record client activity without affecting the appearance of the page.

Note that empty_gif is provided by the standard ngx_http_empty_gif_module, so no third-party module is required; it is built in by default and can be excluded with --without-http_empty_gif_module. Alternatively, you can simply serve a real 1x1-pixel transparent GIF file from disk; the module merely avoids the file access.
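As an illustration of what such a tracking pixel looks like on the wire, the widely circulated 1x1 transparent GIF can be decoded and inspected in Python (this is a generic pixel, not necessarily byte-identical to the one empty_gif emits):

```python
import base64

# A commonly used 1x1 fully transparent GIF89a image, base64-encoded.
PIXEL_B64 = "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
pixel = base64.b64decode(PIXEL_B64)

# GIF header: 6-byte signature, then 16-bit little-endian width and height.
signature = pixel[:6]                            # b'GIF89a'
width = int.from_bytes(pixel[6:8], "little")     # 1
height = int.from_bytes(pixel[8:10], "little")   # 1
```

The whole payload is only a few dozen bytes, which is why it makes such a cheap tracking response.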

03-049:http_browser_module

Because it is not needed, there is no need to worry about whether it is enabled or not.

The main function of the browser module is to classify browsers as modern or legacy based on the value of the "User-Agent" request header and browser signature strings, and to set corresponding variables (such as $modern_browser, $ancient_browser, and $msie) for use in subsequent request-processing logic.
For details, please refer to: https://zhuanlan.zhihu.com/p/355305327

03-050:http_upstream_hash_module

Because it is not needed, there is no need to worry about whether it is enabled or not.

http_upstream_hash_module is an Nginx module that provides hash-based load balancing. It allows you to distribute requests to specific servers within a group of backend servers based on a chosen attribute of the request (e.g. client IP, URI, etc.).

The following are the main functions of http_upstream_hash_module:

  1. Hash-based load balancing: maps a chosen request attribute (such as the client IP or URI) to a specific server in a group of backend servers via a hash function, distributing requests across the group.

  2. Session persistence: the same attribute value always produces the same hash, so identical requests are routed to the same backend server. This helps maintain session state with a specific client.

The following is a simple Nginx configuration example using http_upstream_hash_module for hash load balancing:

http {
    upstream backend {
        hash $request_uri consistent;
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
        # more backend servers can be added here...
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
            # other configuration...
        }
    }
}

In the above example:

  • hash $request_uri consistent; configures the hash method, using the request URI as the hash key. The consistent keyword selects the ketama consistent hashing method, which minimizes key remapping when servers are added or removed.

  • upstream backend defines a backend server group containing multiple servers.

  • proxy_pass http://backend; proxies requests to the backend server group named backend, selecting a server via the hash.

This is a basic configuration example, the specific configuration can be adjusted according to your needs, such as selecting other attributes as key attributes of the hash.
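The idea behind the consistent keyword can be sketched with a toy hash ring in Python (an illustration of ketama-style consistent hashing, not Nginx's actual implementation; the class and function names are invented):

```python
import hashlib
from bisect import bisect

def _h(s):
    # Stable hash -> integer position on the ring.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, servers, vnodes=100):
        # Each server gets many virtual points so keys spread evenly.
        self.ring = sorted((_h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def pick(self, key):
        # Walk clockwise to the first virtual point at or after the key's hash.
        i = bisect(self.points, _h(key)) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["backend1", "backend2", "backend3"])
# The same URI always maps to the same backend.
same = ring.pick("/app/page") == ring.pick("/app/page")
```

Because each server owns many scattered points on the ring, removing one server only remaps the keys that landed on its points, leaving most traffic undisturbed.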

03-051:http_upstream_ip_hash_module

Because it is not needed, there is no need to worry about whether it is enabled or not.

http_upstream_ip_hash_module is an Nginx module that provides hash-based load balancing keyed on the client IP address: requests from the same IP address are always sent to the same server in the backend group. This helps maintain session state for a specific client.

The following are the main functions of http_upstream_ip_hash_module:

  1. IP-address hash load balancing: maps the client IP address to a specific server in a group of backend servers via a hash function, achieving IP-based load balancing.

  2. Session persistence: the same client IP address always produces the same hash value, so requests from that IP are routed to the same backend server. This helps maintain session state with a specific client.

The following is a simple Nginx configuration example using http_upstream_ip_hash_module for IP address hash load balancing:

http {
    upstream backend {
        ip_hash;
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
        # more backend servers can be added here...
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
            # other configuration...
        }
    }
}

In the above example:

  • ip_hash; enables the IP-address hashing method, which uses the client IP address as the hash key.

  • upstream backend defines a backend server group containing multiple servers.

  • proxy_pass http://backend; proxies requests to the backend server group named backend, selecting a server via the IP-address hash.

This is a basic configuration example, the specific configuration can be adjusted according to your needs. This method is typically used in scenarios where you need to ensure that requests from the same client IP are routed to the same backend server to maintain session state.
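The "same IP, same backend" property can be modeled in a couple of lines of Python (a toy sketch; real ip_hash hashes the first three octets of an IPv4 address and honors server weights):

```python
import hashlib

def pick_backend(client_ip, backends):
    # Hash the client address and map the digest onto the backend list.
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["backend1.example.com", "backend2.example.com", "backend3.example.com"]
first = pick_backend("198.51.100.4", backends)
second = pick_backend("198.51.100.4", backends)
# Repeated requests from the same client IP land on the same server.
```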

03-052:http_upstream_least_conn_module—Load balancing based on number of connections

Because it is not needed, there is no need to worry about whether it is enabled or not.

http_upstream_least_conn_module is an Nginx module that provides least-connections load balancing: each new request is distributed to the backend server that currently has the fewest connections.

The following are the main functions of http_upstream_least_conn_module:

  1. Least-connections load balancing: tracks the current number of connections to each backend server and distributes each new request to the server with the fewest active connections.

  2. Dynamic adaptation: This algorithm can automatically adapt to the load of the back-end server to ensure that requests are more evenly distributed across servers, thereby improving overall performance.

The following is a simple Nginx configuration example using http_upstream_least_conn_module for minimum number of connections load balancing:

http {
    upstream backend {
        least_conn;
        server backend1.example.com;
        server backend2.example.com;
        server backend3.example.com;
        # more backend servers can be added here...
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
            # other configuration...
        }
    }
}

In the above example:

  • least_conn; enables the least-connections load-balancing method.

  • upstream backend defines a backend server group containing multiple servers.

  • proxy_pass http://backend; proxies requests to the backend server group named backend, selecting the server with the fewest active connections.

This is a basic configuration example, the specific configuration can be adjusted according to your needs. This method is often used in scenarios where you want to ensure that requests are distributed to servers with fewer connections to avoid connection overload.
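The selection rule itself is simple enough to state in one line of Python (a toy model; Nginx additionally takes server weights into account):

```python
def least_conn(active):
    """Pick the backend with the fewest active connections (ties go to the first listed)."""
    return min(active, key=active.get)

# Current connection counts per backend, as a load balancer might track them.
active = {"backend1.example.com": 12, "backend2.example.com": 3, "backend3.example.com": 7}
choice = least_conn(active)  # backend2 currently has the fewest connections
```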

03-053:http_upstream_random_module

Because it is not needed, there is no need to worry about whether it is enabled or not.

Similar to the previous modules, it is related to load balancing. In fact, it is a function that may be used when Nginx is used as a reverse proxy.

03-054:http_upstream_keepalive_module

Because it is not needed, there is no need to worry about whether it is enabled or not.

Similar to the previous modules, it is related to load balancing. In fact, it is a function that may be used when Nginx is used as a reverse proxy.

03-056:http_upstream_zone_module

Because it is not needed, there is no need to worry about whether it is enabled or not.

Similar to the previous modules, it is related to load balancing. In fact, it is a function that may be used when Nginx is used as a reverse proxy.

03-057:http_perl_module

Because it is not needed, there is no need to worry about whether it is enabled or not.

http_perl_module is an Nginx module that allows code blocks written in Perl to be used in the Nginx configuration, extending Nginx's functionality. Through this module you can embed Perl in Nginx to implement more complex request-processing logic, dynamic content generation, and other features.

The following are the main functions of http_perl_module:

  1. Dynamic request handling: allows the use of Perl to write code blocks to handle requests and implement more complex logic and business rules.

  2. Custom variables and directives: allows extending the Nginx configuration syntax through Perl to add custom variables and directives.

  3. Integration with other Perl modules: Through http_perl_module, you can easily integrate other Perl modules to achieve more functions.

The following is a simple Nginx configuration example, using http_perl_module:

http {
    perl_modules perl/lib;
    perl_require my_handler.pm;

    server {
        listen 80;
        server_name example.com;

        location / {
            perl my_handler::handler;
            # other configuration...
        }
    }
}

The handler itself lives in perl/lib/my_handler.pm:

package my_handler;

use nginx;

sub handler {
    my $r = shift;
    $r->send_http_header("text/plain");
    $r->print("Hello, Perl in Nginx!");
    return OK;
}

1;

In the above example:

  • perl_modules perl/lib; specifies the directory in which Perl modules are looked up.

  • perl_require my_handler.pm; loads the Perl module my_handler.pm when Nginx starts.

  • perl my_handler::handler; handles requests in location / with the handler subroutine of that module.

  • Inside handler, the Perl code sends a simple "Hello, Perl in Nginx!" response.

This is a basic configuration example. The specific configuration and code implementation can be adjusted as needed. When using http_perl_module, you need to pay attention to performance and security to avoid introducing unnecessary complexity.

03-058:--without-http

This obviously cannot be enabled.

Among Nginx's configure parameters, --without-http disables the HTTP core module. It is used together with the ./configure command to selectively disable HTTP-related modules and features when compiling Nginx, reducing the size of the Nginx binary.

When configuring Nginx's compilation options, you can enable or disable specific modules through the ./configure command. If you use --without-http, all HTTP-related modules are disabled, including the core HTTP module and the other HTTP service modules.

The following is an example that demonstrates how to use the --without-http configuration parameter:

./configure --without-http
make
make install

In this example, the --without-http parameter tells the build system not to compile and include the HTTP module. This can be used to build an Nginx binary that only supports TCP/UDP proxies or other non-HTTP protocols.

Please note that disabling the HTTP module causes Nginx to lose support for HTTP requests, so this configuration suits specific scenarios, such as building an Nginx instance used only as a TCP proxy. If you need to support HTTP requests, do not use --without-http.

03-059:--with-mail

Because it is not needed, there is no need to worry about whether it is enabled or not.

--with-mail is one of Nginx's configure parameters, used together with the ./configure command to enable mail-proxy functionality when compiling Nginx. When the --with-mail parameter is used, the build system enables the mail-related modules so that Nginx can run as a mail proxy server (Mail Proxy).

The following is an example that demonstrates how to use the --with-mail configuration parameter:

./configure --with-mail
make
make install

In this example, the --with-mail parameter tells the build system to compile and include the mail-proxy modules. This enables Nginx to proxy SMTP (Simple Mail Transfer Protocol) and other mail protocols such as POP3 and IMAP.

After enabling the mail module, you can use mail-related instructions and modules in the Nginx configuration, such as configuring mail proxy, load balancing of mail traffic, etc.

It should be noted that --with-mail is a compile-time option: you must specify it explicitly when compiling Nginx for mail-module support to be included in the resulting binary. If you do not need Nginx to act as a mail proxy, simply omit the --with-mail parameter.

Question: A website written in Django needs to send a verification code to the user's email when the user registers. Is it necessary to use the Nginx email module?
No, you do not need to use Nginx’s email module to send verification code emails. Nginx is mainly a high-performance web server and reverse proxy server. It does not handle email sending functions itself. In Django applications, sending verification code emails is usually done by using an email service provider or a local mail server.

The following is the general process:

  1. Configure Django settings: In the settings file of the Django project, you need to configure parameters related to email sending, including email server address, port, authentication information, etc. This is usually done in a settings.py file, for example:

    # settings.py
    
    EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
    EMAIL_HOST = 'your_smtp_server.com'
    EMAIL_PORT = 587
    EMAIL_USE_TLS = True
    EMAIL_HOST_USER = '[email protected]'
    EMAIL_HOST_PASSWORD = 'your_email_password'
    

    The above settings use the SMTP protocol, which you will need to replace with your actual mail server information.

  2. Send emails using Django: In Django views or other logic, you can use the send_mail function provided by Django to send emails. For example:

    # views.py
    
    from django.core.mail import send_mail
    
    def send_verification_email(email, verification_code):
        subject = 'Verify Your Account'
        message = f'Your verification code is {verification_code}.'
        from_email = '[email protected]'
        recipient_list = [email]
    
        send_mail(subject, message, from_email, recipient_list)
    

    Just call the send_verification_email function where the user registers or needs to send a verification code.

  3. Select an email service provider: You can also use third-party email service providers, such as SendGrid, Mailgun, SMTP.com, etc. These services provide stable email delivery services, generally have better reliability and performance. In this case, you need to configure Django's email settings according to the provider's documentation and use the provider's API to send the email.

In short, you do not need to use Nginx's mailbox module to implement the email sending function in Django. Instead, you can complete email sending through the email function provided by Django or a third-party email service provider.

03-060:--with-stream - Handling traffic for non-HTTP protocols

Because it is not needed, there is no need to worry about whether it is enabled or not.

--with-stream is one of Nginx's compile parameters, used to enable the Stream module. The Stream module allows Nginx to act as a TCP and UDP proxy, so it can handle non-HTTP traffic such as database connections, mail protocols, and so on.

When using the --with-stream compilation parameter, the compilation system will enable the Stream module, allowing Nginx to handle TCP and UDP traffic. In this way, Nginx can implement more flexible proxy and load balancing functions, not just limited to the HTTP protocol.

Here is a simple example showing how to use --with-stream:

./configure --with-stream
make
make install

In this example, the --with-stream parameter tells the build system to compile and include functionality related to the Stream module. This enables Nginx to handle TCP and UDP traffic, not just the HTTP protocol.

The Stream module provides some configuration directives, such as the stream block, for configuring TCP or UDP proxies. The following is a simple Nginx configuration example using the Stream module to implement a TCP proxy:

stream {
    server {
        listen 12345;
        proxy_pass backend_server:5678;
    }
}

In the example above, listen 12345; specifies the port that Nginx listens on, while proxy_pass backend_server:5678; proxies traffic to port 5678 of the backend server.

In general, the --with-stream parameter enables Nginx's Stream module, allowing it to handle TCP and UDP traffic and providing a wider range of proxy and load-balancing functions.

03-061:google_perftools_module

Because it is not needed, there is no need to worry about whether it is enabled or not.

google_perftools_module is an optional official Nginx module, enabled with the configure parameter --with-google_perftools_module. It integrates Google Performance Tools (gperftools) so that the Nginx worker processes can be profiled.

Google Performance Tools is a set of open source tools for performance analysis and optimization. The gperftools library provides some tools for analyzing CPU, memory and stack, including pprof (performance profiling tool) and tcmalloc (Thread-safe memory allocator) etc.

google_perftools_module hooks Google's performance tools into Nginx through the module mechanism, enabling analysis and monitoring of Nginx's runtime performance.

Here are some possible functions and effects of google_perftools_module:

  1. Profiling: using the pprof tool, you can analyze Nginx's runtime performance, including function call relationships and CPU usage, to help identify performance bottlenecks.

  2. Memory analysis: if tcmalloc is enabled, you can monitor the memory usage of the Nginx process to help find memory leaks or optimize memory allocation.

  3. Thread-safe memory allocation: tcmalloc provides a thread-safe memory allocator that can allocate memory more efficiently in a multi-threaded environment.

  4. Memory stack analysis: tcmalloc can generate memory stack information to help locate the location of memory allocation and help investigate memory-related issues.

Note that the specific usage and configuration may vary due to different Nginx versions and module versions. If you plan to use google_perftools_module, it is recommended to consult the official documentation or relevant community resources for the latest information and configuration examples.

03-062:cpp_test_module

Because it is not needed, there is no need to worry about whether it is enabled or not.

cpp_test_module is not a general-purpose Nginx module but a test module shipped with the Nginx source, enabled with --with-cpp_test_module. Its purpose is to verify that the Nginx headers compile correctly under a C++ compiler; it provides no actual runtime functionality, but it also serves as a reference for how an Nginx module can be written in C++.

In Nginx module development, officials usually use C language to write modules, because the main body of Nginx itself is written in C. However, with a little extra work, you can write Nginx modules in C++ to take advantage of C++'s object-oriented features and other capabilities.

If you are interested in using C++ to write Nginx modules, you can refer to this cpp_test_module example to learn how to combine the module development interface in the Nginx source code with the features of C++. Note that this may involve more complexity, since Nginx itself is based on the C language.

Here is a simple example showing how to use cpp_test_module:

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

extern "C" {
    static ngx_int_t ngx_http_cpp_test_handler(ngx_http_request_t *r);
    static char *ngx_http_cpp_test(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);
    static ngx_int_t ngx_http_cpp_test_init(ngx_conf_t *cf);
}

static ngx_command_t ngx_http_cpp_test_commands[] = {
    {
        ngx_string("cpp_test"),
        NGX_HTTP_LOC_CONF | NGX_CONF_NOARGS,
        ngx_http_cpp_test,
        NGX_HTTP_LOC_CONF_OFFSET,
        0,
        NULL
    },
    ngx_null_command
};

static ngx_http_module_t ngx_http_cpp_test_module_ctx = {
    NULL,                                  /* preconfiguration */
    ngx_http_cpp_test_init,                /* postconfiguration */
    NULL,                                  /* create main configuration */
    NULL,                                  /* init main configuration */
    NULL,                                  /* create server configuration */
    NULL,                                  /* merge server configuration */
    NULL,                                  /* create location configuration */
    NULL                                   /* merge location configuration */
};

ngx_module_t ngx_http_cpp_test_module = {
    NGX_MODULE_V1,
    &ngx_http_cpp_test_module_ctx,         /* module context */
    ngx_http_cpp_test_commands,            /* module directives */
    NGX_HTTP_MODULE,                       /* module type */
    NULL,                                  /* init master */
    NULL,                                  /* init module */
    NULL,                                  /* init process */
    NULL,                                  /* init thread */
    NULL,                                  /* exit thread */
    NULL,                                  /* exit process */
    NULL,                                  /* exit master */
    NGX_MODULE_V1_PADDING
};

static ngx_int_t ngx_http_cpp_test_handler(ngx_http_request_t *r) {
    /* request-handling logic: send an empty 200 response */
    r->headers_out.status = NGX_HTTP_OK;
    r->headers_out.content_length_n = 0;
    ngx_http_send_header(r);
    return ngx_http_output_filter(r, NULL);
}

static char *ngx_http_cpp_test(ngx_conf_t *cf, ngx_command_t *cmd, void *conf) {
    ngx_http_core_loc_conf_t *clcf;

    /* install the handler for the location in which "cpp_test" appears */
    clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);
    clcf->handler = ngx_http_cpp_test_handler;

    return NGX_CONF_OK;
}

static ngx_int_t ngx_http_cpp_test_init(ngx_conf_t *cf) {
    /* module initialization logic */
    return NGX_OK;
}

This is just a simple example; in reality, writing Nginx modules in C++ involves more complex issues, including proper memory management, exception handling, and linking. In real projects you may need to dig into Nginx's module-development interface and how it interacts with C++ features. If you have specific questions or needs, refer to the official documentation and related resources.

03-063:--add-module=PATH

This is to add third-party or customized modules. This is what I used to install the geoip2 module in my other blog post.
For details, see: https://blog.csdn.net/wenhao_ir/article/details/134951013

03-064:--with-compat

After thinking about it for a long time, I believe this option should not be enabled at first; enable it later if incompatibilities appear when adding third-party modules.

--with-compat is one of Nginx's configure parameters; it enables binary compatibility for dynamic modules. With it, Nginx's internal data structures are laid out so that a dynamic module compiled separately, possibly against a differently configured Nginx source tree, can still be loaded by this binary.

This matters mainly when you load prebuilt dynamic modules or build modules separately from the main binary: without --with-compat, a dynamic module generally must be built with exactly the same configure options as the Nginx binary that loads it.

Here is a simple example demonstrating how to use --with-compat:

./configure --with-compat
make
make install

By using the --with-compat parameter, the build system keeps the module ABI stable across differently configured builds, so dynamic modules do not have to be rebuilt for every combination of configure options.

It should be noted that although --with-compat provides certain compatibility support, not all modules are fully compatible. Sometimes, specific modules may need to be adapted or updated to work with new versions of Nginx.

Overall, --with-compat is a useful option for improving module compatibility, but when using it, it is best to check the official documentation and each module's release notes to manage compatibility properly.

03-065:--with-pcre

This parameter does not need to be specified explicitly, because Nginx will enable it by default and will look for the pcre library in the system, as shown in the following figure:
(screenshot of the configure output omitted)

--with-pcre is one of Nginx's configure parameters, used to enable support for the PCRE (Perl Compatible Regular Expressions) library. PCRE is a library for processing regular expressions; it is compatible with Perl's regular-expression syntax and provides powerful matching functionality.

When you use the --with-pcre parameter when compiling Nginx, the compilation system will link the PCRE library, allowing Nginx to use PCRE regular expressions in the configuration file. In this way, you can use the powerful regular expression function in Nginx configuration to perform URL matching, redirection, mapping and other operations.

Here is a simple example demonstrating how to use --with-pcre:

./configure --with-pcre
make
make install

In this example, the --with-pcre parameter tells the compilation system to enable support for the PCRE library when compiling Nginx.

In the Nginx configuration file, you can use regular expressions to match and process requests. For example:

location ~ /images/ {
    # matches any URI that contains /images/
    # other configuration...
}

In this example, ~ means case-sensitive regular-expression matching, and /images/ is the regular expression. Note that the match is unanchored: it matches any URI containing /images/, not only URIs that start with it; use ^/images/ to anchor the match to the beginning of the URI.

In general, the --with-pcre parameter enables support for the PCRE regular-expression library, allowing Nginx to use regular expressions more flexibly in its configuration.
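Nginx's location ~ test is an unanchored regular-expression search over the URI, which behaves like Python's re.search (shown here purely as an analogy):

```python
import re

# The same pattern as in `location ~ /images/`.
pattern = re.compile(r"/images/")

# The match is unanchored: any URI containing /images/ matches.
hit = bool(pattern.search("/static/images/logo.png"))
miss = bool(pattern.search("/css/site.css"))

# An anchored variant only matches URIs that begin with /images/.
anchored = bool(re.search(r"^/images/", "/static/images/logo.png"))
```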

03-066:--with-zlib=DIR

Like the previous parameter --with-pcre, this parameter does not need to be specified explicitly, because Nginx will also look for the zlib library in the system, as shown in the following figure:
(screenshot of the configure output omitted)

--with-zlib=DIR is one of Nginx's configure parameters, used to specify the path to the zlib library sources. zlib is an open-source library for data compression and decompression, widely used in many applications, including web servers, to improve transfer efficiency.

When you compile Nginx using the --with-zlib=DIR parameter, you provide the path to the zlib sources. This tells the build system to look for zlib at the specified path when building Nginx and to link it into the Nginx binary.

Here is a simple example demonstrating how to use --with-zlib=DIR:

./configure --with-zlib=/path/to/zlib
make
make install

In this example, /path/to/zlib is the path to the zlib sources. By using the --with-zlib parameter, you tell the build system to look for zlib at the specified path.

The main purpose of zlib in Nginx is to compress HTTP responses to reduce the size of transmitted data and improve the loading speed of the website. When the client supports compression, Nginx can use zlib to gzip or deflate the response and then transmit it to the client, which then decompresses it. This can be controlled via the gzip related directive in the Nginx configuration.

In general, the --with-zlib=DIR parameter enables zlib support and points the build at a specific copy of zlib. This lets Nginx link against that zlib at compile time so it can use zlib's compression and decompression capabilities at runtime.
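The benefit Nginx gets from gzip can be demonstrated with Python's zlib bindings, which expose the same DEFLATE algorithm (the sample body and sizes are illustrative only):

```python
import zlib

# A repetitive body, like typical HTML, compresses well under DEFLATE.
body = b"<p>hello nginx</p>" * 200
compressed = zlib.compress(body, 6)  # 6 is a common default compression level

ratio = len(compressed) / len(body)
restored = zlib.decompress(compressed)  # round-trips back to the original
```

This is the same trade-off the gzip directives control in Nginx: CPU spent compressing in exchange for fewer bytes on the wire.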

03-067:--with-libatomic

In most cases this parameter is not needed, so there is usually no need to worry about whether it is enabled.

Personally, I do not think it is necessary: as explained below, under normal circumstances there is no need to specify this parameter explicitly, because on CentOS 7.9 the compiler already provides built-in support for atomic operations.

--with-libatomic is one of Nginx's configure parameters; it enables support for the libatomic library. libatomic is a library of atomic-operation functions that can be used to operate safely on shared data in multi-threaded programs.

In a multi-threaded environment, when multiple threads access and modify shared data at the same time, a race condition may occur, leading to data inconsistency or other problems. libatomic provides atomic operations that ensure operations on shared data are atomic without the need for locks, thus avoiding race conditions.

When Nginx is compiled with the --with-libatomic parameter, the libatomic library is linked in so that Nginx can use the atomic operations it provides. This can benefit some concurrency optimizations, especially for highly concurrent Nginx instances running on multi-core systems.

Here is a simple example demonstrating how to use --with-libatomic:

./configure --with-libatomic
make
make install

In this example, the --with-libatomic parameter tells the compilation system to enable support for the libatomic library when compiling Nginx.

Note that not all systems need libatomic enabled explicitly, because on many systems the compiler already provides built-in support for atomic operations. In specific circumstances --with-libatomic may have a positive impact on Nginx performance, but in general there is no need to specify it. Before compiling Nginx, it is recommended to check your system's documentation or compiler options to find out whether libatomic is actually required.

Q: Does CentOS 7.9 provide support for atomic operations?
A: CentOS 7.9 does, but this depends on the hardware and compiler used. On modern x86_64 architectures, the commonly used GCC compilers implement atomic operations with hardware instructions, without the need for additional library support.

In CentOS 7.9, the GCC version is the relatively older 4.8.5, but it still supports atomic operations on x86_64. Therefore, for general use, you generally do not need to explicitly enable --with-libatomic.

If you encounter problems related to atomic operations when compiling Nginx, it is recommended to try using the default settings first, because in most cases GCC will automatically use hardware support to provide atomic operations. If you have problems using the default settings, you may consider enabling libatomic support using the --with-libatomic parameter.

Before compiling Nginx, you can review the GCC version installed on your system and consider whether you need to update GCC to a newer version. In general, newer versions of GCC usually offer better performance and some new features.

03-068:--with-openssl=DIR

This parameter does not usually need to be specified explicitly, because Nginx will look for the OpenSSL library on the system, as shown in the figure below:
[figure omitted]

--with-openssl=DIR is one of Nginx's configure parameters; it specifies the installation path of the OpenSSL library. OpenSSL is an open-source toolkit implementing the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, commonly used to encrypt network communications and provide secure data transmission.

When you compile Nginx using the --with-openssl=DIR parameter, you need to provide the installation path of the OpenSSL library. This tells the build system to look for the OpenSSL library at the specified path when building Nginx and link it into the Nginx binary.

Here is a simple example demonstrating how to use --with-openssl=DIR:

./configure --with-openssl=/path/to/openssl
make
make install

In this example, /path/to/openssl is the installation path of the OpenSSL library. By using the --with-openssl parameter, you tell the build system to look for the OpenSSL library at the specified path.

The main use of OpenSSL in Nginx is to encrypt and decrypt HTTPS traffic. When you enable SSL/TLS configuration in the Nginx configuration file, Nginx will use the encryption algorithm provided by OpenSSL to protect the security of data transmission.
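As a minimal sketch of how OpenSSL is put to use, an HTTPS server block looks roughly like this (the host name and certificate paths are placeholders, not values from this article):

```nginx
server {
    listen 443 ssl;
    server_name example.com;                            # placeholder host
    ssl_certificate     /etc/nginx/certs/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.key;
    ssl_protocols TLSv1.2 TLSv1.3;
}
```

Every cryptographic operation these directives trigger is performed by the OpenSSL library that Nginx was linked against.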

In general, the --with-openssl=DIR parameter enables OpenSSL library support and specifies the installation path of the OpenSSL library. This enables Nginx to properly link against the OpenSSL library at compile time in order to use its encryption and decryption capabilities at runtime.

03-069:--with-debug

In most cases this parameter is not needed, so there is usually no need to worry about whether it is enabled.

--with-debug is one of Nginx's configure parameters; it enables debugging information when compiling Nginx. Using this parameter causes Nginx to include more debugging information in the binary, making it easier to troubleshoot and debug if problems arise.

Here is a simple example demonstrating how to use --with-debug:

./configure --with-debug
make
make install

By using the --with-debug parameter, the build system includes additional debugging information in the Nginx binary. This information includes variable names, line numbers, etc., which helps analyze the specific circumstances of code execution in the debugger.

In a production environment, it is generally not recommended to enable debug mode on Nginx because it increases the size of the binary and may also affect performance. In a production environment, you should compile with the normal configuration without the --with-debug parameter.

In the development environment or when troubleshooting is required, using the --with-debug parameter can provide more detailed debugging information to help developers locate and solve problems.
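For example, once the binary has been built with --with-debug, debug-level logging can be switched on in nginx.conf (the log path here is a placeholder):

```nginx
# Debug-level messages are only produced by a binary built with
# --with-debug; in a normal build this level emits no extra detail.
error_log /var/log/nginx/error-debug.log debug;
```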

In general, the --with-debug parameter is used to include debugging information when compiling Nginx for troubleshooting and debugging if needed.


Origin blog.csdn.net/wenhao_ir/article/details/134965289