Takeaway project optimization 02: MySQL master-slave replication, read-write separation (Sharding-JDBC), Nginx (reverse proxy, load balancing)

Reggie takeaway project optimization - Day 02

Course content

  • MySQL master-slave replication
  • Read and write separation case
  • The project achieves read-write separation
  • Nginx - Overview
  • Nginx-commands
  • Nginx-application

Foreword

1). Existing problems

While implementing the basic functions so far, both the admin back end and the mobile client read and write a single MySQL database directly. The structure looks like this:

insert image description here

At present there is only one MySQL server, which brings two problems:

1). All read and write pressure is carried by a single database, which puts it under heavy load

2). If the database server's disk is damaged, the data is lost, i.e. a single point of failure

2). Solution

To solve the two problems above, we can prepare two MySQL servers: one master (Master) and one slave (Slave). Data changes on the master must be synchronized to the slave (master-slave replication). When a user accesses our project, write operations (insert, update, delete) go directly to the master, while read (select) operations go to the slave (in this structure there can be multiple slaves). This structure is called read-write separation.

insert image description here

Today we need to implement the above architecture to solve the problems in business development.

1. MySQL master-slave replication

MySQL supports master-slave replication out of the box; no other technology is needed, we only need to configure it in the database. Next, we introduce master-slave replication from the following aspects:

1.1 Introduction

MySQL master-slave replication is an asynchronous replication process implemented on top of MySQL's built-in binary log (binlog). One or more MySQL databases (the slaves) copy the log from another MySQL database (the master), then parse the log and apply the changes to themselves, so that the slave's data stays consistent with the master's. MySQL master-slave replication is a built-in feature of MySQL and needs no third-party tools.

Binary log:

The binary log (BINLOG) records all DDL (data definition language) statements and DML (data manipulation language) statements, but not data query statements. This log plays an extremely important role in disaster recovery. MySQL's master-slave replication is implemented through this binlog. By default, MySQL does not enable this log.
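
A quick way to confirm whether binary logging is actually enabled is to ask the server itself; these are standard MySQL statements, shown here only as an optional check:

-- OFF by default until log-bin is configured in my.cnf
SHOW VARIABLES LIKE 'log_bin';
-- Lists the binlog files the server currently keeps (works once log_bin is ON)
SHOW BINARY LOGS;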

MySQL's master-slave replication principle is as follows:

insert image description here

The MySQL replication process is divided into three steps:

1). MySQL master writes data changes to the binary log (binary log)

2). The slave copies the binary log of the master to its relay log (relay log)

3). The slave redoes the events in the relay log and applies the data changes to its own database

1.2 Build

In VMware you can simply right-click the virtual machine, then choose Manage > Clone (the steps are easy to look up). After cloning, just modify the IP, which is very easy:

# edit the static IP (the file contains comments, the setting is easy to find)
vim /etc/sysconfig/network-scripts/ifcfg-ens33
# restart the network service so the change takes effect immediately
systemctl restart network

1.2.1 Preparations

Prepare two servers in advance, and install MySQL in the server. The server information is as follows:

| Database | IP | Database version |
| --- | --- | --- |
| Master | 192.168.141.100 | 5.7.25 |
| Slave | 192.168.141.101 | 5.7.25 |

And do the following preparations on the two servers:

1). Firewall open port 3306

firewall-cmd --zone=public --add-port=3306/tcp --permanent

firewall-cmd --zone=public --list-ports

insert image description here

2). And start the two database servers:

systemctl start mysqld

Log in to MySQL to verify whether it starts normally

insert image description here

1.2.2 Main library configuration

Server: 192.168.141.100
Note that the following are all configurations under the 100 main server

1). Modify the configuration file /etc/my.cnf of the Mysql database

Add configuration at the bottom:

log-bin=mysql-bin   # [required] enable the binary log
server-id=100       # [required] unique server ID (any unique value works)

insert image description here

2). Restart the Mysql service

Execution command:

 systemctl restart mysqld

insert image description here

3). Create a user for data synchronization and authorize it

Log in to mysql and execute the following commands to create a user and authorize it:

GRANT REPLICATION SLAVE ON *.* to 'xiaoming'@'%' identified by 'Root@123456';

Note: the SQL above creates a user xiaoming with password Root@123456 and grants the REPLICATION SLAVE privilege to xiaoming. This is exactly the privilege replication needs: the slave must connect to the master through an account the master has authorized in this way before it can replicate.

Of course, the new user name and password can be specified by yourself.
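
If you want to double-check the account before moving on, the grant can be verified directly (optional):

-- Confirm the replication account exists and has the REPLICATION SLAVE privilege
SHOW GRANTS FOR 'xiaoming'@'%';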

insert image description here

Description of MySQL password complexity:

insert image description here

The default password validation policy level in MySQL 5.7 is MEDIUM, which requires the password to contain digits, lowercase letters, uppercase letters and special characters, and to be at least 8 characters long.
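
If the GRANT above is ever rejected because of a weak password, you can inspect (and, in a test environment, relax) the policy; a sketch that assumes the validate_password plugin is loaded, as it is by default in MySQL 5.7:

-- Show the current password validation settings
SHOW VARIABLES LIKE 'validate_password%';
-- Example: lower the policy, for a test environment only
SET GLOBAL validate_password_policy = LOW;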

4). Log in to the Mysql database to view the master synchronization status

Execute the following SQL and record the values of File and Position from the result

show master status;

insert image description here

Record them: File = mysql-bin.000002, Position = 441

Note: The function of the above SQL is to check the status of the Master. Do not perform any operations after executing this SQL (otherwise the file and location may change)

1.2.3 Slave library configuration

Server: 192.168.141.101
Note that the following configurations are executed under the 101 server

1). Modify the configuration file /etc/my.cnf of the Mysql database

server-id=101   # [required] unique server ID

On the slave it is enough to configure the server id; only the master needs to enable the binary log, which also matches the diagram at the start:
Master: binary log (binlog)
Slave: relay log

insert image description here

2). Restart the Mysql service

systemctl restart mysqld

insert image description here

3). Log in to the Mysql database, set the main database address and synchronization location

change master 
	to master_host='192.168.141.100',
	master_user='xiaoming',
	master_password='Root@123456',
	master_log_file='mysql-bin.000003',
	master_log_pos=154;

Start replication on the slave (this starts the I/O thread that copies the master's log and the SQL thread that applies it):

start slave;

To stop it later: stop slave;

The following values may need to be changed to match your own setup:

master_host='192.168.141.100'      # change to your own master server's IP
master_user='xiaoming'             # change to the user name you created
master_password='Root@123456'      # change to that user's password
master_log_file='mysql-bin.000002' # change to the File value just queried on the master
master_log_pos=154;                # change to the Position value just queried on the master

Parameter Description:

A. master_host: the IP address of the master library

B. master_user: the user name used to access the master for master-slave replication (created above on the master)

C. master_password: the password of that user

D. master_log_file: which log file to start synchronizing from (shown by the master status query above)

E. master_log_pos: which position in that log file to start synchronizing from (shown by the master status query above)

insert image description here

4). View the status of the slave database

show slave status;

Copy the output into a text editor such as Notepad++ to read it more easily:

+----------------------------------+-----------------+-------------+-------------+---------------+------------------+---------------------+----------------------------+---------------+-----------------------+------------------+-------------------+-----------------+---------------------+--------------------+------------------------+-------------------------+-----------------------------+------------+------------+--------------+---------------------+-----------------+-----------------+----------------+---------------+--------------------+--------------------+--------------------+-----------------+-------------------+----------------+-----------------------+-------------------------------+---------------+---------------+----------------+----------------+-----------------------------+------------------+--------------------------------------+----------------------------+-----------+---------------------+--------------------------------------------------------+--------------------+-------------+-------------------------+--------------------------+----------------+--------------------+--------------------+-------------------+---------------+----------------------+--------------+--------------------+
| Slave_IO_State                   | Master_Host     | Master_User | Master_Port | Connect_Retry | Master_Log_File  | Read_Master_Log_Pos | Relay_Log_File             | Relay_Log_Pos | Relay_Master_Log_File | Slave_IO_Running | Slave_SQL_Running | Replicate_Do_DB | Replicate_Ignore_DB | Replicate_Do_Table | Replicate_Ignore_Table | Replicate_Wild_Do_Table | Replicate_Wild_Ignore_Table | Last_Errno | Last_Error | Skip_Counter | Exec_Master_Log_Pos | Relay_Log_Space | Until_Condition | Until_Log_File | Until_Log_Pos | Master_SSL_Allowed | Master_SSL_CA_File | Master_SSL_CA_Path | Master_SSL_Cert | Master_SSL_Cipher | Master_SSL_Key | Seconds_Behind_Master | Master_SSL_Verify_Server_Cert | Last_IO_Errno | Last_IO_Error | Last_SQL_Errno | Last_SQL_Error | Replicate_Ignore_Server_Ids | Master_Server_Id | Master_UUID                          | Master_Info_File           | SQL_Delay | SQL_Remaining_Delay | Slave_SQL_Running_State                                | Master_Retry_Count | Master_Bind | Last_IO_Error_Timestamp | Last_SQL_Error_Timestamp | Master_SSL_Crl | Master_SSL_Crlpath | Retrieved_Gtid_Set | Executed_Gtid_Set | Auto_Position | Replicate_Rewrite_DB | Channel_Name | Master_TLS_Version |
+----------------------------------+-----------------+-------------+-------------+---------------+------------------+---------------------+----------------------------+---------------+-----------------------+------------------+-------------------+-----------------+---------------------+--------------------+------------------------+-------------------------+-----------------------------+------------+------------+--------------+---------------------+-----------------+-----------------+----------------+---------------+--------------------+--------------------+--------------------+-----------------+-------------------+----------------+-----------------------+-------------------------------+---------------+---------------+----------------+----------------+-----------------------------+------------------+--------------------------------------+----------------------------+-----------+---------------------+--------------------------------------------------------+--------------------+-------------+-------------------------+--------------------------+----------------+--------------------+--------------------+-------------------+---------------+----------------------+--------------+--------------------+
| Waiting for master to send event | 192.168.141.100 | xiaoming    |        3306 |            60 | mysql-bin.000001 |                 441 | localhost-relay-bin.000004 |           320 | mysql-bin.000001      | Yes              | Yes               |                 |                     |                    |                        |                         |                             |          0 |            |            0 |                 441 |             697 | None            |                |             0 | No                 |                    |                    |                 |                   |                |                     0 | No                            |             0 |               |              0 |                |                             |              100 | a2fae4c0-e35a-11ed-8483-000c29591af8 | /var/lib/mysql/master.info |         0 |                NULL | Slave has read all relay log; waiting for more updates |              86400 |             |                         |                          |                |                    |                    |                   |             0 |                      |              |                    |
+----------------------------------+-----------------+-------------+-------------+---------------+------------------+---------------------+----------------------------+---------------+-----------------------+------------------+-------------------+-----------------+---------------------+--------------------+------------------------+-------------------------+-----------------------------+------------+------------+--------------+---------------------+-----------------+-----------------+----------------+---------------+--------------------+--------------------+--------------------+-----------------+-------------------+----------------+-----------------------+-------------------------------+---------------+---------------+----------------+----------------+-----------------------------+------------------+--------------------------------------+----------------------------+-----------+---------------------+--------------------------------------------------------+--------------------+-------------+-------------------------+--------------------------+----------------+--------------------+--------------------+-------------------+---------------+----------------------+--------------+--------------------+
1 row in set (0.00 sec)

(or append \G to the statement for a vertical layout)

In the status information, Slave_IO_Running and Slave_SQL_Running show whether master-slave synchronization is ready. If both are Yes, master-slave synchronization has been configured successfully.
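
If you only want the key fields rather than the whole table, a quick check from the shell looks roughly like this (adjust the password to your own):

# Print only the replication health fields from SHOW SLAVE STATUS
mysql -uroot -p -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Last_IO_Error|Last_SQL_Error"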

insert image description here

MySQL command line tricks:

\G: appending \G to a MySQL statement prints the result set vertically, one field per line, as if the result table were rotated 90 degrees.

If Slave_IO_Running shows No,
it is usually because the virtual machine was cloned directly and the server UUIDs are duplicated. For the fix, see this blog post (or search for it):
https://blog.csdn.net/qq_54217349/article/details/126501053

1.3 Testing

The master-slave replication environment is now set up. Next we can connect to the two MySQL servers through Navicat and test: perform operations only on the master (Master) and check whether the data is synchronized to the slave (Slave). (If the steps above were done correctly, synchronization happens automatically. MySQL is awesome~)

  1. First establish two connections (very simple)
    insert image description here

1). Create database test01 in master (100), refresh slave (101) to see if it can be synchronized

If there is a problem with writing sql, just use GUI to do it

insert image description here
insert image description here

A database created on the master is automatically replicated to the slave

2). Create a user table in the test01 database on the master (100), then refresh the slave (101) to see whether it is synchronized

insert image description here
insert image description here

3). Insert a piece of data in the user table of the master, and refresh the slave to see if it can be synchronized

insert image description here
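
The same three test steps can also be run as plain SQL on the master; the user table columns here are just an illustration, any definition will do:

-- Run on the master (192.168.141.100); the slave should pick everything up automatically
create database test01;
use test01;
create table user (
  id int primary key auto_increment,
  name varchar(50)
);
insert into user (name) values ('tom');
-- Then check on the slave (192.168.141.101):
-- select * from test01.user;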

2. Read-write separation case (shardingJdbc)

2.1 Background introduction

As system traffic grows, database throughput becomes a serious bottleneck. For applications with many concurrent reads but relatively few writes, we split the database into a master library and a slave library: the master handles the transactional insert/update/delete operations, while the slave handles queries. This effectively keeps row locks caused by data updates from affecting reads and greatly improves the query performance of the whole system.

insert image description here

Read-write separation reduces the access pressure on any single database, improves access efficiency, and avoids a single point of failure.

We completed the master-slave replication structure in the first section. But how do we implement read-write separation in the project's Java code, so that select statements query the slave library while insert, update and delete operate on the master library? This is where a new technology comes in: Sharding-JDBC.

2.2 Introduction to ShardingJDBC

Sharding-JDBC is positioned as a lightweight Java framework, providing additional services at the Java JDBC layer. It uses the client to directly connect to the database and provides services in the form of jar packages without additional deployment and dependencies. It can be understood as an enhanced version of the JDBC driver and is fully compatible with JDBC and various ORM frameworks.

Using Sharding-JDBC can easily realize the separation of database reading and writing in the program.

Sharding-JDBC has the following characteristics:

1). Applicable to any JDBC-based ORM framework, such as: JPA, Hibernate, Mybatis, Spring JDBC Template or use JDBC directly.

2). Support any third-party database connection pool, such as: DBCP, C3P0, BoneCP, Druid, HikariCP, etc.

3). Support any database that implements the JDBC specification. Currently supports MySQL, Oracle, SQLServer, PostgreSQL and any database that follows the SQL92 standard.

Dependency:

<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
    <version>4.0.0-RC1</version>
</dependency>

2.3 Database environment

Create a database rw in the master database, and create a table. After the database and table structure are created, they will be automatically synchronized to the slave database. The SQL statement is as follows:

create database rw default charset utf8mb4;

use rw;

CREATE TABLE `user` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `name` varchar(255) DEFAULT NULL,
  `age` int(11) DEFAULT NULL,
  `address` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

insert image description here

Error: [Err] 1055 - Expression #1 of ORDER BY clause is not in GROUP BY clause and contains nonaggregated column 'information_schema.PROFILING.SEQ' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by
You can actually ignore this error; the SQL statement still executes normally.

If you have to solve it:

vim /etc/my.cnf
sql_mode= STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION

Add a line at the end of the mysql configuration file:
insert image description here
then restart the mysql service

systemctl restart mysqld

Note that after this restart the output of show master status; has changed, so the slave library needs to be re-pointed at the new File and Position.
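
Re-pointing the slave is the same procedure as before; roughly (the File and Position values must match the new show master status output):

-- On the slave (192.168.141.101)
stop slave;
change master to master_host='192.168.141.100',
    master_user='xiaoming',
    master_password='Root@123456',
    master_log_file='mysql-bin.000003',   -- replace with the new File value
    master_log_pos=154;                   -- replace with the new Position value
start slave;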

2.4 Initial project import

Link: https://pan.baidu.com/s/1lN8H5GbjhALZfZYnVRtU8w
Extraction code: eo13

Our case mainly demonstrates read-write separation, so we will not write the basic CRUD business code again. Instead we can directly import the demo project (rw_demo) provided in the course materials, in which the CRUD operations on users are already implemented. The project structure is as follows:

insert image description here

2.5 Read-write separation configuration

1). Add the maven coordinates of shardingJdbc in pom.xml

<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
    <version>4.0.0-RC1</version>
</dependency>

2). Add data source configuration in application.yml

spring:
  shardingsphere:
    datasource:
      names:
        master,slave
      # master data source
      master:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://192.168.141.100:3306/rw?characterEncoding=utf-8&useSSL=false
        username: root
        password: 1234
      # slave data source
      slave:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://192.168.141.101:3306/rw?characterEncoding=utf-8&useSSL=false
        username: root
        password: 1234
    masterslave:
      # read-write separation configuration
      # read strategy: round robin across the slave data sources
      load-balance-algorithm-type: round_robin
      # name of the resulting data source
      name: dataSource
      # master data source name
      master-data-source-name: master
      # slave data source names, comma separated if there are several
      slave-data-source-names: slave
    props:
      sql:
        show: true # print the actual SQL, default false

Just copy it to the bottom of application.yml

Configuration analysis:

insert image description here
Note that some of these settings (server IPs, user name, password) must be changed to match your own environment
insert image description here

3). Add configuration in application.yml (allow bean definition coverage)

spring:  
  main:
    allow-bean-definition-overriding: true

The purpose of this configuration item is that if there is a bean with the same name in the current project, the bean defined later will overwrite the bean defined earlier.

If this item is not configured, an error will be reported after the project starts:

insert image description here

The error message says that declaring the dataSource bean in SpringBootConfiguration (under the org.apache.shardingsphere.shardingjdbc.spring.boot package) failed, because a bean with the same name had already been declared when the DruidDataSourceAutoConfigure class under com.alibaba.druid.spring.boot.autoconfigure was loaded.

insert image description here
insert image description here

And what we need to use is the dataSource under the shardingjdbc package, so we need to configure the above properties so that the one loaded later will overwrite the one loaded first.

Finally, start the server again and find that 2 data sources have been created:
insert image description here

2.6 Testing

With Sharding-JDBC, read-write separation only requires the simple configuration above. Once it is in place, restart the service and call the controller through Postman to add, delete, modify and query user information. Using the debugger and the logs we can check, for each operation, which data source is used and which database is connected.
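
For reference, the requests can also be sent from the command line instead of Postman; the exact paths depend on the demo's controller, so the /user endpoints below are an assumption:

# Hypothetical endpoints, adjust them to match the controller in rw_demo
curl -X POST http://localhost:8080/user -H "Content-Type: application/json" -d '{"name":"tom","age":20,"address":"beijing"}'   # insert, routed to the master
curl http://localhost:8080/user/1   # select, routed to the slave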

1). Save data

insert image description here

The console log shows that the insert operation indeed goes to the master library:

insert image description here

Looking at the tables in the two databases, the data is there and identical on both sides, even though only the master library was written to.
insert image description here

insert image description here
The CRUD Java code is exactly the same as before. From the Java project's point of view, the only difference is the data source configuration in application.yml (plus the MySQL master-slave replication we configured manually on the servers).

Same Java code, different yml, and with MySQL master-slave replication in place, read-write separation just works. The framework is great.

2). Modify data

insert image description here

The console log shows that the update operation goes to the master library:

insert image description here

3). Query data

insert image description here

The console log shows that the query operation goes to the slave library:

insert image description here

4). Delete data

insert image description here

The console log shows that the delete operation goes to the master library:

insert image description here

3. The project achieves read-write separation

3.1 Database environment preparation (import and export sql files)

Just reuse the master-slave replication environment we built earlier in the virtual machines. Create the business database reggie of the Reggie takeaway project in the master library, and import the relevant table structure and data (export the database used during local development, then import it into the master library on the server).

1). Export the data of your local reggie database to SQL file

insert image description here

This way the test data we added during our own development is preserved, which makes testing easier.

2). In the master database master, create a database reggie and import the SQL file

The database created in the master will be automatically synchronized to the slave library

insert image description here

Import the sql file in master's reggie (select the sql file just exported)

insert image description here

3.2 Create a Git branch

Currently the Git repository has two branches, master and v1.0, by default. Next we will optimize the project for read-write separation, so instead of working on master or v1.0 we create a separate branch v1.1 and do the read-write separation work there. Creating the branch works exactly as demonstrated earlier.

The new v1.1 branch is created from the master branch, so its code is initially identical to master's; push v1.1 to the remote repository as well.

3.3 Read-write separation configuration

1). Add dependencies to the pom.xml of the project

<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
    <version>4.0.0-RC1</version>
</dependency>

2). Configure data source related information in the application.yml of the project

Delete the original dataSource configuration and replace it directly with the configuration below.

Note: if application.yml already has a top-level spring: key, do not copy that first line again.

spring:
  shardingsphere:
    datasource:
      names:
        master,slave
      # master data source
      master:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://192.168.141.100:3306/reggie?serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&allowPublicKeyRetrieval=true
        username: root
        password: 1234
      # slave data source
      slave:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://192.168.141.101:3306/reggie?serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=utf-8&zeroDateTimeBehavior=convertToNull&useSSL=false&allowPublicKeyRetrieval=true
        username: root
        password: 1234
    masterslave:
      # read-write separation configuration
      # read strategy: round robin across the slave data sources
      load-balance-algorithm-type: round_robin
      # name of the resulting data source
      name: dataSource
      # master data source name
      master-data-source-name: master
      # slave data source names, comma separated if there are several
      slave-data-source-names: slave
    props:
      sql:
        show: true # print the actual SQL, default false
  main:
    allow-bean-definition-overriding: true

Starting the server: resolving the SQLFeatureNotSupportedException error

When the server is started, the following error appears:

Caused by: java.sql.SQLFeatureNotSupportedException: isValid
	at org.apache.shardingsphere.shardingjdbc.jdbc.unsupported.AbstractUnsupportedOperationConnection.isValid

The solution is described here: https://huaweicloud.csdn.net/638768a8dacf622b8df8b62c.html
insert image description here

package cn.whu.reggie.config;

import org.springframework.beans.factory.ObjectProvider;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.actuate.autoconfigure.jdbc.DataSourceHealthContributorAutoConfiguration;
import org.springframework.boot.actuate.health.AbstractHealthIndicator;
import org.springframework.boot.actuate.jdbc.DataSourceHealthIndicator;
import org.springframework.boot.jdbc.metadata.DataSourcePoolMetadataProvider;
import org.springframework.context.annotation.Configuration;
import org.springframework.util.StringUtils;

import javax.sql.DataSource;
import java.util.Map;

@Configuration
public class DataSourceHealthConfig extends DataSourceHealthContributorAutoConfiguration {

    @Value("${spring.datasource.dbcp2.validation-query:select 1}")
    private String defaultQuery;

    public DataSourceHealthConfig(Map<String, DataSource> dataSources, ObjectProvider<DataSourcePoolMetadataProvider> metadataProviders) {
        super(dataSources, metadataProviders);
    }

    @Override
    protected AbstractHealthIndicator createIndicator(DataSource source) {
        DataSourceHealthIndicator indicator = (DataSourceHealthIndicator) super.createIndicator(source);
        if (!StringUtils.hasText(indicator.getQuery())) {
            indicator.setQuery(defaultQuery);
        }
        return indicator;
    }
}

Is the MySQL master-slave configuration on the servers permanent?

After the virtual machine crashed and was restarted, the slave apparently picked up the master's current File on its own, so nothing needed to be reset on the slave.

If you don't believe it, exit the MySQL client on the master, log back in and check:

ctrl+z
mysql -uroot -p1234
show master status;

insert image description here
Then re-run directly on the slave:

show slave status\G;

insert image description here
It's really syncing automatically.

So neither the master nor the slave needs to be reconfigured after a reboot; the replication settings are persistent, as long as the MySQL service itself is started after the machine boots up.
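
If you want MySQL to come up automatically after a reboot, enabling the service in systemd is enough (standard CentOS 7 commands):

# Make mysqld start automatically on boot, then confirm
systemctl enable mysqld
systemctl is-enabled mysqld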

3.4 Functional test

After the configuration is complete, start the project for testing: open the system management back-end page in a browser, perform the relevant business operations, and watch the log output in the console.

Query operation:

insert image description here

Update operation:

insert image description here

Insert operation:

insert image description here

Delete operation:

insert image description here

3.5 Git merge code

We have completed the function of read-write separation, so next, we can submit and push the current branch v1.1 code to the remote warehouse.

insert image description here
insert image description here

Then, merge the code of v1.1 into the master branch, and then push it to the remote warehouse.

Switch to master first, then merge v1.1 under master

insert image description here

insert image description here

4. Nginx-Overview

4.1 Introduction

insert image description here

Nginx is a lightweight web server / reverse proxy server and email (IMAP/POP3) proxy server. Its strengths are a small memory footprint and strong concurrency; in fact, nginx's concurrency handling is better than other web servers of the same type. Websites in mainland China that use nginx include Baidu, JD, Sina, NetEase, Tencent, Taobao, and others.

Nginx was developed by Igor Sysoyev for the second most visited Rambler.ru site in Russia (Russian: Рамблер). The first public version 0.1.0 was released on October 4, 2004.

Official website: https://nginx.org/

4.2 Download and install

4.2.1 Download

On the download page of Nginx's official website ( http://nginx.org/en/download.html ), the current Nginx version is displayed and a download link is provided. as follows:

insert image description here

In this project, the Nginx we are learning is the stable version 1.16, which we can download directly from the official website. Of course, the installation package of this version has also been provided in our course materials.

insert image description here
insert image description here
(With a VPN the download is faster.)

The installation commands below instead download Nginx directly from the Internet on the Linux side.

4.2.2 Installation

The following operations are all configured on the 100 master server

1). Install dependent packages

Since nginx is developed based on C language, it is necessary to install a C language compilation environment and third-party dependent libraries such as regular expression libraries.

yum -y install gcc pcre-devel zlib-devel openssl openssl-devel

2). Download the Nginx installation package

yum install wget
cd ~/software/
wget https://nginx.org/download/nginx-1.16.1.tar.gz

wget: (Install if not installed)

The wget command is used to download files from a specified URL. wget is very stable. It has strong adaptability in the case of narrow bandwidth and unstable network. If the download fails due to network reasons, wget will keep trying until the entire file is downloaded. If the server interrupts the download process, it will connect to the server again and continue the download from where it left off.

After executing the wget command, you will see the downloaded file in the current directory.
insert image description here

3). Decompress the nginx compressed package

tar -zxvf nginx-1.16.1.tar.gz

4). Configure the Nginx compilation environment

cd nginx-1.16.1
mkdir -p  /usr/local/nginx
./configure --prefix=/usr/local/nginx

Explanation:

The directory specified by --prefix is the directory where Nginx will be installed.

5). Compile & Install

Run the following in the extracted nginx-1.16.1 directory:

make && make install

make compiles; make install installs.
Nginx is installed automatically into /usr/local/nginx, the directory we configured above:

cd /usr/local/nginx/
ll

insert image description here

4.3 Directory structure

After installing Nginx, we can switch to the Nginx installation directory (/usr/local/nginx), let’s get familiar with the directory structure of Nginx first, as shown in the following figure:

cd /usr/local/
tree nginx/

insert image description here

Remark:

The tree command used above displays the specified directory as a tree. If you don't have this command, install it with:

yum install tree

The key directories and files are as follows:

| Directory/File | Description | Remark |
| --- | --- | --- |
| conf | Directory that stores the configuration files | |
| conf/nginx.conf | Nginx core configuration file | There are several configuration files under conf; this core file is the one we mainly work with |
| html | Stores static resources (html, css, ...) | Static resources deployed to Nginx can be placed in the html directory |
| logs | Stores nginx logs (access log, error log, etc.) | |
| sbin/nginx | Binary file used to start and stop the Nginx service | |

5. Nginx-commands

5.1 Common commands

In Nginx, the binary executable (nginx) is stored in the sbin directory. Although it is a single executable, it supports different parameters that provide different functions. The common commands are demonstrated below; they must be executed from the /usr/local/nginx/sbin/ directory.

1). View version

./nginx -v

insert image description here

2). Check the configuration file

After modifying the nginx.conf core configuration file, before starting the Nginx service, you can check whether there is any error in the configuration of the conf/nginx.conf file. The command is as follows:

./nginx -t

insert image description here

3). Start

./nginx

After startup, we can check whether the nginx process exists through the ps -ef command.

insert image description here

Note: After the nginx service starts, there will be two processes by default.

After startup, we can access Nginx directly on port 80: http://192.168.141.100/

(Tomcat uses port 8080, while nginx uses port 80.)
insert image description here

Notice:

To access Nginx normally, you need to either stop the firewall or open the required port. The commands are as follows:

A. Stop the firewall

systemctl stop firewalld

B. Open port 80

firewall-cmd --zone=public --add-port=80/tcp --permanent

firewall-cmd --reload

firewall-cmd --zone=public --list-ports

4). stop

./nginx -s stop

After stopping, we can view the nginx process:

ps -ef|grep nginx

insert image description here

4.x). Supplement

After starting and stopping nginx, some *_temp temporary directories appear under the nginx directory. This is normal; we never touch them, so they can be ignored.
insert image description here
The logs directory is empty at first; after starting and stopping the service a few times it will contain some files:
insert image description here
insert image description here
insert image description here

insert image description here
The nginx process ID is recorded in nginx.pid, so nginx has to be running before the file can be viewed:
insert image description here

5). Reload

After modifying the Nginx configuration file, it needs to be reloaded to take effect. You can use the following command to reload the configuration file:

./nginx -s reload
Example:

cd /usr/local/nginx/conf/
vim nginx.conf
# change the default number of worker processes from 1 to 2

insert image description here
Reload the configuration file so the change takes effect (you could of course restart the nginx service instead of reloading, but that is more trouble; reload is just a single command):

/usr/local/nginx/sbin/nginx -s reload

check again

ps -ef | grep nginx

Now there are really two worker processes
insert image description here

5.2 Environment variable configuration

Above, starting, stopping and reloading the service all use the nginx command, which lives in the nginx/sbin directory, so we have to switch into sbin every time, which is cumbersome. Can we run this command from any directory? Yes: just add nginx to the PATH environment variable.

Open the /etc/profile file through the vim editor, and add the sbin directory of nginx to the PATH environment variable, as follows:

vim /etc/profile
# append nginx's sbin directory after PATH= (or export PATH=), separated from what precedes it by a colon
/usr/local/nginx/sbin:
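
If /etc/profile does not already have a PATH line you want to extend, appending a new line at the end works just as well; a minimal sketch:

# Append nginx's sbin directory to the PATH (added at the end of /etc/profile)
export PATH=/usr/local/nginx/sbin:$PATH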

insert image description here

After modifying the file, run source /etc/profile to make it take effect. From then on the nginx command can be executed from any directory, for example:

insert image description here

6. Nginx-Application

After introducing and installing Nginx, this chapter will explain the use of Nginx, mainly from the following four aspects.

6.1 Configuration file structure

The nginx configuration file (conf/nginx.conf) is generally divided into three parts: global block, events block, and http block. What kind of information are configured in these three blocks, see the table below:

| Block | Responsibility |
| --- | --- |
| global block | Global settings related to running nginx |
| events block | Settings related to network connections |
| http block | Proxy, cache, logging, virtual host and other settings |

(Everything from the start of the file up to the events block is the global block, i.e. the directives that are not wrapped in any {}.)

The specific structure diagram is as follows:

insert image description here

In the global block, events block and http block, we often configure the http block .

Multiple server blocks can be included in the http block, and each server block can be configured with multiple location blocks.

6.2 Deploy static resources

6.2.1 Introduction

Nginx can be used as a static web server to deploy static resources. The static resources mentioned here refer to some files that actually exist on the server side and can be displayed directly, such as common html pages, css files, js files, pictures, videos and other resources.

Compared with Tomcat, Nginx is more efficient in handling static resources, so in a production environment, static resources are generally deployed to Nginx.

Deploying static resources to Nginx is very simple, just copy the files to the html directory under the Nginx installation directory.

server {
    listen 80;              # listening port
    server_name localhost;  # server name
    location / {            # matches the client's request URL
        root html;          # root directory for static resources
        index index.html;   # default home page
    }
}

6.2.2 Testing

The course materials provide a static HTML file. We deploy it to nginx and then access the static resource through nginx.

1). Upload static resources to /usr/local/nginx/html directory

First write a simple html page locally: name it hello.html so as not to overwrite the index.html that comes with the system

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>我的Nginx</title>
</head>
<body>
<h1>我的Nginx</h1>
</body>
</html>

Then upload it to /usr/local/nginx/html:

cd /usr/local/nginx/html
rz

(Oops, I accidentally overwrote the original index.html anyway. So be it.)

insert image description here

2). Start nginx

insert image description here

3). Access

http://192.168.141.100/hello.html

insert image description here

Visiting http://192.168.141.100 shows nginx's default home page:

insert image description here

4). Configure home page

cd /usr/local/nginx/conf
vim nginx.conf

insert image description here

1. The default home page is index.html or index.htm.

2. The http block can contain multiple server blocks, so you can copy the server block to listen on multiple ports.

If we want hello.html to be the nginx home page, we can modify the index directive of the location block and set it to hello.html, as follows:

insert image description here
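
In text form, the change is just the index directive in the default server block; a sketch based on the default nginx.conf:

server {
    listen       80;
    server_name  localhost;
    location / {
        root   html;
        index  hello.html index.html;   # hello.html is now tried first
    }
}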

After the configuration is complete, we can use the command to check whether the configuration file is configured correctly: nginx -t

insert image description here

The configuration file has been modified, we need to reload it to take effect:

nginx -s reload

5). Access

http://192.168.141.100

insert image description here

6.3 Reverse proxy

6.3.1 Concept introduction

1). Forward proxy

A forward proxy server sits between the client and the origin server. To obtain content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then forwards the request to the origin server and returns the obtained content to the client.

A typical use of a forward proxy is to give LAN clients behind a firewall a way to access the Internet.

A forward proxy is generally set up on the client side (the user knows it exists); requests are forwarded through the proxy server and finally reach the target server.

insert image description here

e.g. a company LAN where the public Internet can only be reached through a proxy server

2). Reverse proxy

A reverse proxy server sits between the user and the target server, but to the user it looks like the target server itself: the user accesses the reverse proxy directly to obtain the target server's resources, and the reverse proxy is responsible for forwarding the request to the target server. The user neither needs to know the target server's address nor needs any client-side configuration; the proxying is completely transparent to the user.

insert image description here

Summary:
A forward proxy is set up by the client, and the user is aware of its existence.
A reverse proxy is set up on the server side; the user cannot tell whether they are talking to a proxy, which hides the real back-end servers from them. This not only makes it easier to manage multiple back-end servers, it also allows the real back-end servers to be deployed on a LAN without being exposed directly to the public network.

So in this section, we are going to use nginx as a reverse proxy server. In nginx, we can configure reverse proxy in nginx.conf:

server {
    listen 82;
    server_name localhost;
    location / {
        proxy_pass http://192.168.141.101:8080;   # reverse proxy: forward the request to the specified service
    }
}

The meaning of the above configuration is: When we access port 82 of nginx, according to the reverse proxy configuration, the request will be forwarded to the service corresponding to http://192.168.141.101:8080 .

6.3.2 Testing

Requirement: a Java application is deployed on the slave server 192.168.141.101, running on port 8080 and providing an accessible link /hello. We now want requests to port 82 of nginx to be forwarded to the service at 192.168.141.101:8080.

insert image description here

1). Deploy and start the service at 192.168.141.101

Upload the helloworld-1.0-SNAPSHOT.jar provided in the data to the server, and run the service through the command java -jar helloworld-1.0-SNAPSHOT.jar.

cd /usr/local/app/helloProfix
# create this directory manually if it does not exist
rz
# select the jar and upload it
java -jar helloworld-1.0-SNAPSHOT.jar

insert image description here

At this point the service can in fact already be reached directly via Tomcat:
http://192.168.141.101:8080/hello
insert image description here

2). Configure reverse proxy in nginx.conf in 192.168.141.100

Under the main server, enter the nginx installation directory and edit the configuration file nginx.conf:

cd /usr/local/nginx/conf/
vim nginx.conf

In the http block, add a server block virtual host configuration, listen to port 82, and configure the reverse proxy proxy_pass:

server {
    listen 82;
    server_name localhost;
    location / {
        proxy_pass http://192.168.141.101:8080;   # reverse proxy: forward the request to the specified service
    }
}

Note that it is at the same level as the previous server
insert image description here

3). Check the configuration file and reload

nginx -t

insert image description here

nginx -s reload

4). Access

http://192.168.141.100:82/hello
insert image description here

★ With nginx configured, visit http://192.168.141.100:82/hello. Because the request hits nginx's port 82, the reverse proxy immediately forwards it to the configured http://192.168.141.101:8080, i.e. the request actually reaches http://192.168.141.101:8080/hello, but the user cannot tell.

Note: accessing port 82 may fail because the port is not open in the firewall. There are two ways to solve this:

A. Turn off the firewall

systemctl stop firewalld

B. Open the specified port

firewall-cmd --zone=public --add-port=82/tcp --permanent

firewall-cmd --reload

6.4 Load Balancing

6.4.1 Concept introduction

Early websites had modest traffic and simple business functions, so a single server was enough. As the Internet grew, traffic kept increasing and business logic became more complex; the performance limits and single-point-of-failure risk of one server became obvious, so multiple servers are combined into an application cluster to scale performance horizontally and avoid a single point of failure.

Application cluster : Deploy the same application to multiple machines to form an application cluster, receive requests distributed by the load balancer, perform business processing and return response data

Load balancer : Distribute user requests to a server in the application cluster for processing according to the corresponding load balancing algorithm

insert image description here

Here we will use Nginx as the load balancer. Nginx's load balancing is built on top of reverse proxying; the difference is that requests are now proxied to multiple servers instead of one.

6.4.2 Testing

1). Upload the two jar packages provided in the data to the 192.168.141.101 server

insert image description here

For testing we do not have that many servers, so we start multiple services on a single server, each listening on a different port.

2). Run the two uploaded jar packages, the running ports are 8080 and 8081 respectively

Since java -jar occupies the foreground of the terminal, open two windows for testing.
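
If you would rather not keep two foreground windows open, the jars can also be started in the background; the jar names below are placeholders for the two packages provided in the materials:

# Run both services in the background and redirect their output to log files
nohup java -jar service_8080.jar > 8080.log 2>&1 &
nohup java -jar service_8081.jar > 8081.log 2>&1 &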

insert image description here

insert image description here
Remember to open port 8081 in the firewall as well

3). Configure load balancing in nginx

Open the nginx configuration file nginx.conf and add the following configuration:

upstream targetserver {    # the upstream directive defines a group of servers
    server 192.168.141.101:8080;
    server 192.168.141.101:8081;
}

server {
    listen       8080;
    server_name  localhost;
    location / {
        proxy_pass http://targetserver;
    }
}

upstream is a keyword; targetserver is a name you can choose yourself.
Two Tomcat server addresses are configured as forwarding targets, and nginx automatically distributes requests between the two servers according to the load balancing algorithm (round robin by default).

The specific configuration location is as follows:

insert image description here

4). Reload the nginx configuration file, visit

nginx -s reload

During the test we access port 8080 of nginx directly (http://192.168.141.100:8080); nginx then forwards the request to the two back-end servers according to the load balancing strategy.

insert image description here
insert image description here

In the test above we saw the requests forwarded to 8080 and 8081 evenly, because the default load balancing strategy is round robin.

If one of the servers is killed, nginx automatically forwards requests only to the remaining healthy server.

Note: All the port numbers mentioned above need to be opened in the firewall of the corresponding server, or the firewall should be completely closed

6.4.3 Load Balancing Strategy

Nginx already implements the load balancing strategies for you; all you need to do is configure one. Simple~

Besides the default round-robin strategy above, Nginx provides several other load balancing strategies, as follows:

| Name | Description | Characteristics |
| --- | --- | --- |
| polling (round robin) | Default method | |
| weight | Weighted method | Requests are distributed according to weight; servers with a higher weight receive requests with higher probability |
| ip_hash | Distribution by IP | A hash is computed from the client's IP address and requests are distributed by that hash, so requests from the same IP are always forwarded to the same server |
| least_conn | Least connections | Requests are forwarded first to the server currently handling the fewest connections |
| url_hash | Distribution by URL | Requests are distributed by the hash of the requested URL, so requests for the same URL are forwarded to the same server |
| fair | By response time | Requests are preferentially distributed to the server with the shortest processing time |

Weight configuration:

# the upstream directive defines a group of servers
upstream targetserver{
    server 192.168.200.201:8080 weight=10;
    server 192.168.200.201:8081 weight=5;
}

The weights are relative. With the configuration above, given a large enough number of requests, port 8080 ends up receiving twice as many requests as port 8081 (a 2:1 ratio once the request volume is large enough).
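
Switching strategies is just an extra directive inside the upstream block; for example, ip_hash would look roughly like this (a sketch, not tested here):

upstream targetserver{
    ip_hash;    # requests from the same client IP always go to the same back-end server
    server 192.168.141.101:8080;
    server 192.168.141.101:8081;
}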


Origin blog.csdn.net/hza419763578/article/details/130444596