Nginx Quick Start Guide: Commands and Configuration Tutorial

In this basic article on Nginx, we summarize what Nginx is and how to install and set it up on your system. Nginx is a high-performance web server and reverse proxy that can handle large numbers of concurrent network requests and also provides load balancing, caching, and SSL encryption. Originally developed by Russian programmer Igor Sysoev, Nginx has become one of the most popular web servers in the world.

Nginx is a high-performance, scalable, feature-rich web server and reverse proxy that can help users improve system availability and performance, protect the security and stability of back-end application servers, speed up website access, and protect user privacy.

Why use Nginx?

Using Nginx has the following advantages:

  • High performance: Nginx adopts an event-driven asynchronous non-blocking architecture, which can handle a large number of concurrent requests while maintaining low memory usage and CPU load.

  • Scalability: Nginx supports a modular architecture, and its functionality can be extended by adding different modules.

  • Load balancing: Nginx can distribute requests to multiple servers through load balancing, thereby improving system availability and performance.

  • Reverse proxy: Nginx can be used as a reverse proxy server to forward requests to the back-end application server, thereby protecting the security and stability of the application server.

  • Cache: Nginx can improve the access speed of the website through caching and reduce the number of requests to the back-end server.

  • SSL encryption: Nginx can provide SSL encryption to protect website security and user privacy.
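To make the load-balancing, reverse-proxy, and SSL features above more concrete, here is a minimal configuration sketch. It uses only standard nginx directives, but the upstream addresses, domain name, and certificate paths are hypothetical placeholders:

```nginx
# Hypothetical back-end pool for load balancing
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    # SSL-terminating front end (domain and certificate paths are placeholders)
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Reverse proxy: distribute incoming requests across the upstream pool
        proxy_pass http://backend;
    }
}
```

With this layout, clients only ever talk to the SSL front end, while requests are spread over the two back-end servers.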

In the following tutorial, we'll give you an overview of the basic commands and configuration options of this modern web server software.

Central control unit: nginx.conf

Nginx is event-based and therefore works differently from Apache: a single request is not treated as a new workflow (for which all modules must be loaded), but as an event. These events are distributed among existing worker processes, which are maintained by the superior master process. The nginx.conf configuration file defines how many worker processes ultimately exist and how server requests (i.e. events) are divided among them. You can find this file in /usr/local/nginx/conf, /etc/nginx, or /usr/local/etc/nginx.

Manage processes and adopt new configurations

On most systems, Nginx starts automatically after installation; if not, you can start it with:

sudo service nginx start

(On distributions using systemd, sudo systemctl start nginx is the equivalent.)

Once the web server software is running, you can manage it by sending signals to the master process via the -s parameter. The syntax of the corresponding command is quite simple:

sudo nginx -s signal

For signal, you can use one of the following four values:

  • stop: nginx terminates immediately.

  • quit: nginx terminates after all active requests have been answered.

  • reload: Reload the configuration file.

  • reopen: Reopen the log files.

The reload option for reloading the configuration file is a great way to apply changes without shutting down the web server software and restarting it afterwards. In any case, to apply changes you have to decide whether to restart the server completely or just reload nginx. If you choose the latter option, the following command instructs the master process to apply the changes made to the nginx.conf file:

sudo nginx -s reload

To do this, the master process first checks the syntax of the configuration file (you can also run this check manually with sudo nginx -t). If the check succeeds, it starts new worker processes with the new configuration and simultaneously asks the old ones to shut down; the old worker processes terminate once all of their active requests have been processed. If the syntax cannot be validated, the old configuration state is preserved.

Additionally, you can target nginx processes with tools such as kill. You just need the corresponding process ID, which is recorded in the nginx.pid file, usually found in the /usr/local/nginx/logs or /var/run directory. For example, if the master process has an ID of 1628, it can be shut down gracefully by sending it the QUIT signal with kill:

sudo kill -s QUIT 1628
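The kill -s mechanism is ordinary POSIX signalling, so you can try it safely on any process. The sketch below uses a throwaway sleep process standing in for an nginx master (the PID 1628 above is just an example):

```shell
# Start a placeholder background process standing in for an nginx master
sleep 30 &
pid=$!

# Send it a termination signal, analogous to: sudo kill -s QUIT <nginx-pid>
kill -s TERM "$pid"

# Reap the process, then confirm it is gone (kill -0 probes without signalling)
wait "$pid" 2>/dev/null
kill -0 "$pid" 2>/dev/null || echo "process gone"
```

The same probe (kill -0) is handy for checking whether an nginx master is still alive before sending it further signals.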

You can also use the ps utility to display a list of all running nginx processes:

ps -ax | grep nginx

How to regulate the delivery of static content

You most likely use a web server to deliver files such as images, video, or static HTML content. For efficiency, it is best to select different local directories for different content types. First create a sample directory /data/html and place a sample HTML document index.html in it, then create a folder /data/images containing some sample images.

For the next step, these two directories must be entered in the configuration file, inside a server block directive, which in turn is a sub-directive of the http block directive. Various server blocks may already be set by default; to start fresh, you can comment them out and simply create a separate server block directive of your own:

http {
  server {
  }
}

In this server block, you specify the two directories containing the images and HTML documents. The result looks like this:

server {
  location / {
    root /data/html;
  }

  location /images/ {
    root /data;
  }
}

This configuration describes the default case: a server that listens on port 80 and is reachable via localhost. All requests whose URI begins with /images/ are now served files from the /data/images directory; if no matching file exists, an error page is returned. All requests whose URI does not begin with /images/ are mapped to the /data/html directory.
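These defaults can also be spelled out explicitly. The equivalent sketch below makes the port and host name visible and annotates how URIs are mapped to files:

```nginx
server {
    # These two lines spell out the defaults assumed above
    listen 80;
    server_name localhost;

    location / {
        # /foo.html       ->  /data/html/foo.html
        root /data/html;
    }

    location /images/ {
        # /images/pic.png ->  /data/images/pic.png
        root /data;
    }
}
```

Note that the root value is prepended to the full request URI, which is why the images location uses /data rather than /data/images.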

Don't forget to reload or restart nginx to apply the changes.

Set up a simple Nginx proxy server

Nginx is often used to run a proxy server in front of an actual web server: it filters incoming requests based on various criteria, forwards them, and passes the appropriate response back to the client. Caching proxies are especially popular; they serve locally stored static content directly and forward only the remaining requests to the server. Firewall proxies, which filter out unsafe or unwanted connections, are also common. The following is an example of a caching-style proxy that retrieves requested images from a local directory and forwards all other requests to the web server.

As a first step, you need to define the main server in nginx.conf:

server {
  listen 8080;
  root /data/up1;

  location / {
  }
}

In contrast to the previous example, the listen directive is used here because port 8080 (rather than the standard port 80) is to be used for incoming requests. You should also create the target directory /data/up1 and place an index.html page in it.

Second, a proxy server and its ability to deliver image content are defined. This is done using the proxy_pass directive, which specifies the protocol (http), name (localhost), and port (8080) of the main server:

server {
  location / {
    proxy_pass http://localhost:8080;
  }

  location ~ \.(gif|jpg|png)$ {
    root /data/images;
  }
}

The second location block instructs the proxy server to answer all requests whose URI ends with a typical image extension such as .gif, .jpg, or .png by retrieving the appropriate content from the local /data/images directory. All other requests are forwarded to the main server. As with the previous setup, apply the new configuration by sending the reload signal to the master process or by restarting nginx.
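A true caching proxy would go one step further and cache the upstream responses themselves. As a sketch using nginx's built-in proxy cache (the zone name, path, sizes, and validity period are illustrative, and the cache directory must exist and be writable by nginx), it could look like this:

```nginx
# Cache zone: 10 MB of keys in shared memory, up to 100 MB on disk (illustrative values)
proxy_cache_path /var/cache/nginx keys_zone=static_cache:10m max_size=100m;

server {
    location / {
        # Serve repeated requests from the cache instead of the main server
        proxy_cache static_cache;
        # Keep successful responses for 10 minutes (illustrative)
        proxy_cache_valid 200 10m;
        proxy_pass http://localhost:8080;
    }
}
```

The proxy_cache_path directive belongs in the http context, which is why it sits outside the server block here.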

 


Origin blog.csdn.net/winkexin/article/details/131487189