Guli College 160,000-word notes + 1,600 pictures (19) - data synchronization, gateway

Project source code and required information
Link: https://pan.baidu.com/s/1azwRyyFwXz5elhQL0BhkCA?pwd=8z59
Extraction code: 8z59

demo19-canal data synchronization, gateway

1. canal application scenarios

In the earlier statistical analysis feature, we obtained statistics through remote service calls, which creates tight coupling and is relatively inefficient. Here we adopt a different approach: synchronizing database tables in real time. For example, to count daily registrations and logins, we only need to synchronize the member table into the statistics database and run the statistics locally, which is more efficient and has lower coupling. Canal is a very good database synchronization tool for this. Canal is an open-source Alibaba project written in pure Java; it currently only supports MySQL data synchronization.

2. Preparations

2.1 Analysis

1. Need to have a Linux virtual machine and a local Windows system

2. The Linux virtual machine needs:

  • Install the MySQL database
  • Create database and data table
  • Install the canal data synchronization tool

3. The local Windows system requires:

  • Install the MySQL database
  • Create database and data table

Among them, the database and table created in Linux must have the same names and the same table structure as the database and table created on the local Windows system

4. The effect we want: when data in the Linux database changes, the data in our local Windows database changes accordingly

2.2 MySQL database and table building in local Windows

1. We created the database guli early on, and here we directly create tables by right-clicking on the database guli

insert image description here

2. My table name here is member, and there are three fields

insert image description here
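The exact column definitions are only visible in the screenshot; judging from the test row inserted later in section 4.7, the table could be created with SQL roughly like this (the column names id, name, and age and their types are my assumption, adjust to your own table):

-- Sketch of the member table; column names/types are assumed
CREATE TABLE member (
  id   INT PRIMARY KEY,
  name VARCHAR(50),
  age  INT
);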

2.3 MySQL database building and table building in Linux virtual machine

1. Right click the connection to the Linux MySQL server and select "Create Database"

insert image description here

2. Create the database here in exactly the same way as in the first step of "2.1 Database Design" in "demo03-Background Lecturer Management Module"

insert image description here

3. When the database was created in the first step of "2.1 Database Design" of "demo03-Background Lecturer Management Module", the "database collation" was left at the default, and after creation the guli library was automatically given the collation "utf8mb4_general_ci". However, the guli library created in the Linux MySQL did not get "utf8mb4_general_ci" (I don't know why), so just set the collation to "utf8mb4_general_ci" manually now

① Right click the guli database on Linux and select "Change Database..."

insert image description here

② Select the database collation "utf8mb4_general_ci" and click "Save"

insert image description here

4. Right click on the guli library of linux to create a table

insert image description here

5. Fill this in the same way as in the second step of "2.2 MySQL database and table building in local Windows"

insert image description here

2.4 Enable the binlog function

1. Canal works by reading MySQL's binlog, so MySQL's binlog write function must be enabled here

2. Use the following command to check whether the binlog function is enabled, and the green box circled is ON, indicating that it is enabled

show variables like 'log_bin';

insert image description here

3. If the value in the green box in the above figure is OFF, binlog is not enabled; you need to modify the MySQL configuration file to enable it (a typical example is sketched below), and remember to restart MySQL after changing the configuration
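For reference, binlog is usually enabled by adding something like the following under the [mysqld] section of MySQL's configuration file (often /etc/my.cnf; the exact path and values depend on your installation) and then restarting MySQL. Canal also requires the binlog format to be ROW:

[mysqld]
# Enable binary logging; mysql-bin is the log file name prefix
log-bin=mysql-bin
# canal requires row-based binlog
binlog-format=ROW
# A server id that is unique among your MySQL/canal instances
server-id=1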

3. Install Canal

3.1 Download canal compressed package

1. Download address:

https://github.com/alibaba/canal/releases

2. The teacher uses the canal.deployer-1.1.4.tar.gz version, so I downloaded the same version here

insert image description here

3.2 Upload canal compressed files to linux

Use Xftp to upload the compressed file downloaded in the previous step to the linux virtual machine (I uploaded it to the opt directory here)

insert image description here

3.3 Unzip the canal compressed file

1. Execute cd /opt to enter the opt directory, then use the following command to create a folder there to hold the extracted canal files

mkdir canal

insert image description here

2. Then use the following command to decompress the canal compressed file to the canal directory created in the previous step

tar zxvf canal.deployer-1.1.4.tar.gz -C canal

insert image description here
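If the extraction succeeded, /opt/canal should contain roughly the following directories (the standard layout of the canal.deployer package):

bin/    # startup.sh and stop.sh scripts
conf/   # canal.properties and the example/ instance configuration
lib/    # canal's jar dependencies
logs/   # log output (created at startup if not already present)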

3.4 Modify the configuration file

1. Use the following command to open the canal configuration file instance.properties in edit mode

vi /opt/canal/conf/example/instance.properties

insert image description here

2. Enter i to enter insert mode. The first modification is as follows:

  • The red box circles the IP of the Linux machine (fill in the IP of your own Linux virtual machine; note: in vi's insert mode the numeric keypad may not work, so type the digits with the number row at the top of the keyboard)
    • The teacher said it is also fine to leave the IP as the default 127.0.0.1, but modifying it is still recommended
  • The green box circles the port number of MySQL on Linux (the default is 3306)

insert image description here

3. The second modification is as follows:

  • The red box circles the user name
  • The green box circles the user password
  • Fill in the user name and password you set in step 2 of "8.9 Remotely Connecting to MySQL"

insert image description here

4. The table filter configuration (canal.instance.filter.regex) does not need to be modified; just keep the default from the configuration file:

  • This configuration specifies that all databases and all tables are matched

insert image description here

Some other matching rules are as follows:

  • All databases and all tables: .* or .*\\..*

  • All tables under the canal database: canal\\..*

  • Tables under the canal database whose names start with canal: canal\\.canal.*

  • A specific table under the canal database: canal.test1

  • A combination of multiple rules (comma separated): canal\\..*,mysql.test1,mysql.test2

5. After the modifications, press Esc to exit insert mode, then enter :wq to save and exit
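For reference, after these edits the relevant lines of instance.properties look roughly like this (the IP address, username, and password below are placeholders for your own values):

# Address of the MySQL instance to watch: your Linux VM's IP and MySQL port
canal.instance.master.address=192.168.xxx.xxx:3306
# The MySQL account canal connects with (the one from "8.9 Remotely Connecting to MySQL")
canal.instance.dbUsername=root
canal.instance.dbPassword=root
# Table filter: all tables in all databases
canal.instance.filter.regex=.*\\..*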

3.5 Start and close the canal data synchronization tool

1. First use the following command to enter the bin directory

cd /opt/canal/bin

insert image description here

2. Then use the following command to start the canal data synchronization tool in the bin directory

./startup.sh

insert image description here

3. If you want to close the canal data synchronization tool, use the following command in the bin directory

./stop.sh

insert image description here
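To confirm whether canal really started or stopped, you can check its log files; the paths below assume the /opt/canal install directory used above:

tail -f /opt/canal/logs/canal/canal.log
tail -f /opt/canal/logs/example/example.log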

4. Client code

4.1 Create submodule canal_clientedu

1. Right click on the total module guli_parent and select New–>Module…

insert image description here

2. Create a Maven project

insert image description here

3. After filling in the information, click "Finish"

insert image description here

4.2 Introducing dependencies

Add the following code to the pom.xml of the canal_clientedu module to introduce the required dependencies (don't forget to refresh maven)

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <!--mysql-->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
    </dependency>

    <dependency>
        <groupId>commons-dbutils</groupId>
        <artifactId>commons-dbutils</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-jdbc</artifactId>
    </dependency>

    <dependency>
        <groupId>com.alibaba.otter</groupId>
        <artifactId>canal.client</artifactId>
    </dependency>
</dependencies>

insert image description here

4.3 Configure application.properties

Create a configuration file application.properties and write configuration

# Service port
server.port=10000
# Service name
spring.application.name=canal-client

# Environment setting: dev, test, prod
spring.profiles.active=dev

# MySQL connection (the local Windows database that canal data is written into)
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/guli?useUnicode=true&characterEncoding=utf-8&serverTimezone=GMT%2B8
spring.datasource.username=root
spring.datasource.password=root

insert image description here

4.4 Create startup class

Create the package com.atguigu.canal under the java package of the canal_clientedu module, and then create the startup class CanalApplication under the canal package

@SpringBootApplication
public class CanalApplication {

    public static void main(String[] args) {
        SpringApplication.run(CanalApplication.class, args);
    }
}

insert image description here

4.5 Write canal client class

Create the package client under the canal package, and then create the client class CanalClient under the client package (this is boilerplate code that does not need to be typed by hand; it is enough to understand it well enough to modify it [I don't fully understand it, and I don't plan to dig into it for now])

package com.atguigu.canal.client;

import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry.*;
import com.alibaba.otter.canal.protocol.Message;
import com.google.protobuf.InvalidProtocolBufferException;
import org.apache.commons.dbutils.DbUtils;
import org.apache.commons.dbutils.QueryRunner;
import org.springframework.stereotype.Component;

import javax.annotation.Resource;
import javax.sql.DataSource;
import java.net.InetSocketAddress;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

@Component
public class CanalClient {

    // SQL queue
    private Queue<String> SQL_QUEUE = new ConcurrentLinkedQueue<>();

    @Resource
    private DataSource dataSource;

    /**
     * Canal entry method: pulls binlog entries and writes the resulting SQL to the local database
     */
    public void run() {

        CanalConnector connector = CanalConnectors.newSingleConnector(new InetSocketAddress("192.168.44.132",
                11111), "example", "", "");
        int batchSize = 1000;
        try {
            connector.connect();
            connector.subscribe(".*\\..*");
            connector.rollback();
            try {
                while (true) {
                    // Try to pull up to batchSize records from the master; take as many as are available
                    Message message = connector.getWithoutAck(batchSize);
                    long batchId = message.getId();
                    int size = message.getEntries().size();
                    if (batchId == -1 || size == 0) {
                        Thread.sleep(1000);
                    } else {
                        dataHandle(message.getEntries());
                    }
                    connector.ack(batchId);

                    // When the SQL statements accumulated in the queue reach a certain number, execute them
                    if (SQL_QUEUE.size() >= 1) {
                        executeQueueSql();
                    }
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (InvalidProtocolBufferException e) {
                e.printStackTrace();
            }
        } finally {
            connector.disconnect();
        }
    }

    /**
     * Execute the SQL statements queued up
     */
    public void executeQueueSql() {
        int size = SQL_QUEUE.size();
        for (int i = 0; i < size; i++) {
            String sql = SQL_QUEUE.poll();
            System.out.println("[sql]----> " + sql);

            this.execute(sql.toString());
        }
    }

    /**
     * Process binlog entries
     *
     * @param entrys
     */
    private void dataHandle(List<Entry> entrys) throws InvalidProtocolBufferException {
        for (Entry entry : entrys) {
            if (EntryType.ROWDATA == entry.getEntryType()) {
                RowChange rowChange = RowChange.parseFrom(entry.getStoreValue());
                EventType eventType = rowChange.getEventType();
                if (eventType == EventType.DELETE) {
                    saveDeleteSql(entry);
                } else if (eventType == EventType.UPDATE) {
                    saveUpdateSql(entry);
                } else if (eventType == EventType.INSERT) {
                    saveInsertSql(entry);
                }
            }
        }
    }

    /**
     * Build and queue an UPDATE statement
     *
     * @param entry
     */
    private void saveUpdateSql(Entry entry) {
        try {
            RowChange rowChange = RowChange.parseFrom(entry.getStoreValue());
            List<RowData> rowDatasList = rowChange.getRowDatasList();
            for (RowData rowData : rowDatasList) {
                List<Column> newColumnList = rowData.getAfterColumnsList();
                StringBuffer sql = new StringBuffer("update " + entry.getHeader().getTableName() + " set ");
                for (int i = 0; i < newColumnList.size(); i++) {
                    sql.append(" " + newColumnList.get(i).getName()
                            + " = '" + newColumnList.get(i).getValue() + "'");
                    if (i != newColumnList.size() - 1) {
                        sql.append(",");
                    }
                }
                sql.append(" where ");
                List<Column> oldColumnList = rowData.getBeforeColumnsList();
                for (Column column : oldColumnList) {
                    if (column.getIsKey()) {
                        // Only a single primary key is supported for now
                        sql.append(column.getName() + "=" + column.getValue());
                        break;
                    }
                }
                SQL_QUEUE.add(sql.toString());
            }
        } catch (InvalidProtocolBufferException e) {
            e.printStackTrace();
        }
    }

    /**
     * Build and queue a DELETE statement
     *
     * @param entry
     */
    private void saveDeleteSql(Entry entry) {
        try {
            RowChange rowChange = RowChange.parseFrom(entry.getStoreValue());
            List<RowData> rowDatasList = rowChange.getRowDatasList();
            for (RowData rowData : rowDatasList) {
                List<Column> columnList = rowData.getBeforeColumnsList();
                StringBuffer sql = new StringBuffer("delete from " + entry.getHeader().getTableName() + " where ");
                for (Column column : columnList) {
                    if (column.getIsKey()) {
                        // Only a single primary key is supported for now
                        sql.append(column.getName() + "=" + column.getValue());
                        break;
                    }
                }
                SQL_QUEUE.add(sql.toString());
            }
        } catch (InvalidProtocolBufferException e) {
            e.printStackTrace();
        }
    }

    /**
     * Build and queue an INSERT statement
     *
     * @param entry
     */
    private void saveInsertSql(Entry entry) {
        try {
            RowChange rowChange = RowChange.parseFrom(entry.getStoreValue());
            List<RowData> rowDatasList = rowChange.getRowDatasList();
            for (RowData rowData : rowDatasList) {
                List<Column> columnList = rowData.getAfterColumnsList();
                StringBuffer sql = new StringBuffer("insert into " + entry.getHeader().getTableName() + " (");
                for (int i = 0; i < columnList.size(); i++) {
                    sql.append(columnList.get(i).getName());
                    if (i != columnList.size() - 1) {
                        sql.append(",");
                    }
                }
                sql.append(") VALUES (");
                for (int i = 0; i < columnList.size(); i++) {
                    sql.append("'" + columnList.get(i).getValue() + "'");
                    if (i != columnList.size() - 1) {
                        sql.append(",");
                    }
                }
                sql.append(")");
                SQL_QUEUE.add(sql.toString());
            }
        } catch (InvalidProtocolBufferException e) {
            e.printStackTrace();
        }
    }

    /**
     * Execute one statement against the local database
     * @param sql
     */
    public void execute(String sql) {
        Connection con = null;
        try {
            if (null == sql) return;
            con = dataSource.getConnection();
            QueryRunner qr = new QueryRunner();
            int row = qr.execute(con, sql);
            System.out.println("update: " + row);
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            DbUtils.closeQuietly(con);
        }
    }
}

Where the code was modified:

  • In the CanalConnectors.newSingleConnector(...) call at the top of the run method, replace 192.168.44.132 with the IP address of your own Linux virtual machine (11111 is canal's default port)

4.6 Modify the startup class

We want the local database to stay synchronized with the remote (Linux virtual machine) database: if data is added to the remote database, the local database should add it too, and if data is modified remotely, the local database should apply the same modification. To achieve this, the program must stay in a listening state, continuously monitoring changes in the remote database. The run method written in "4.5 Write canal client class" is what listens to the remote database and performs the corresponding operations, so to keep that method running we need to modify the startup class:

1. Let the startup class implement the interface CommandLineRunner

insert image description here

2. Add the following code in the startup class

@Resource
private CanalClient canalClient;

@Override
public void run(String... strings) throws Exception {
    // When the project starts, run the canal client listener
    canalClient.run();
}

insert image description here

  • This overrides the run method of the CommandLineRunner interface. Spring Boot calls this run method once after the application starts; because canalClient.run() contains an endless while loop, the canal listener keeps running for as long as the application does
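For reference, a minimal sketch of the complete startup class after this change (matching the snippet above) would look like this:

package com.atguigu.canal;

import com.atguigu.canal.client.CanalClient;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import javax.annotation.Resource;

@SpringBootApplication
public class CanalApplication implements CommandLineRunner {

    @Resource
    private CanalClient canalClient;

    public static void main(String[] args) {
        SpringApplication.run(CanalApplication.class, args);
    }

    @Override
    public void run(String... strings) throws Exception {
        // When the project starts, run the canal client listener
        canalClient.run();
    }
}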

4.7 Testing

1. For the backend, it is enough to start canal_clientedu, plus canal on the Linux machine

2. Use SQLyog to insert data into the library in linux

INSERT INTO member VALUES(1,'lucy',20)

insert image description here

3. You can see that a row has also been inserted into the member table of the local database, which shows that our data synchronization works

insert image description here
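Based on the saveInsertSql logic in CanalClient, the client console should print and then execute a statement roughly like the one below (the column names depend on how you created the member table; id, name, age are my assumption):

[sql]----> insert into member (id,name,age) VALUES ('1','lucy','20')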

5. Gateway

5.1 Gateway concept

1. A gateway sits between the client and the server and can play many roles, such as request forwarding, load balancing, permission control, and so on

2. The teacher said the gateway service also needs to be registered in the registration center (I don't fully understand why yet)

3. When a client request arrives, it passes through the gateway first, and the gateway processes it roughly like this: the Gateway Handler Mapping first performs path matching; if a route matches, the request enters the Gateway Web Handler for execution and then goes through the Filter chain, where we can implement permission management, cross-domain handling, and so on

insert image description here

4. Several important concepts in Spring Cloud Gateway (a Java sketch of these concepts follows this list):

  • Route: the most basic part of the gateway. A route consists of an ID, a destination URI, a set of predicates, and a set of filters; if the combined predicate evaluates to true, the request URL matches that route
  • Predicate (assertion): a Java 8 predicate function. Put simply, it declares the matching rule; for example, a request to eduservice can be routed to the service on port 8001
  • Filter: modifies the request and the response, which is how permission management, cross-domain handling, unified exception handling, and so on can be implemented
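The project configures its routes in application.properties (section 5.5 below), but the same concepts can also be expressed in Java, which makes route / predicate / filter more concrete. The following is only a sketch (the class name and package are hypothetical; the route id, path, and service name are examples taken from this project):

package com.atguigu.gateway.config;

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RouteConfigSketch {

    @Bean
    public RouteLocator eduRoute(RouteLocatorBuilder builder) {
        return builder.routes()
                // Route = id + predicate (path matching) + target uri (lb:// means a load-balanced service registered in nacos)
                .route("service-edu", r -> r.path("/eduservice/**")
                        .uri("lb://service-edu"))
                .build();
    }
}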

5.2 Create a sub-submodule api_gateway

1. First create the submodule infrastructure:

① Right click on the total project guli_parent and select New–>Module…

insert image description here

② Create a Maven project

insert image description here

③ After filling in the information, click "Finish"

insert image description here

④ Because there are sub-modules under this sub-module, add the following code to the pom.xml of the infrastructure module so that the project uses pom packaging (don't forget to refresh maven)

<packaging>pom</packaging>

insert image description here

⑤ Because we don't want to write code directly in the infrastructure module, delete the src directory under it

insert image description here

2. Right click on the infrastructure module and select New–>Module…

insert image description here

3. Create a Maven project

insert image description here

4. After filling in the information, click "Finish"

insert image description here

5.3 Add dependencies

Add dependencies in the pom.xml of the api_gateway module (don't forget to refresh maven)

<dependencies>
    <dependency>
        <groupId>com.atguigu</groupId>
        <artifactId>common_utils</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </dependency>

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-gateway</artifactId>
    </dependency>

    <!--gson-->
    <dependency>
        <groupId>com.google.code.gson</groupId>
        <artifactId>gson</artifactId>
    </dependency>

    <!-- Service invocation (OpenFeign) -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-openfeign</artifactId>
    </dependency>
</dependencies>

insert image description here

5.4 Create startup class

Create the package com.atguigu.gateway under the java package of the api_gateway module, and then create the startup class ApiGatewayApplication under the gateway package

@SpringBootApplication
@EnableDiscoveryClient
public class ApiGatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(ApiGatewayApplication.class, args);
    }
}

insert image description here

Because the gateway service needs to be registered in nacos, it is necessary to add annotation @EnableDiscoveryClient to the startup class

5.5 Configure application.properties

Create a configuration file application.properties and write configuration

# Service port
server.port=8222
# Service name
spring.application.name=service-gateway
# nacos server address
spring.cloud.nacos.discovery.server-addr=127.0.0.1:8848

# Use service discovery for routing
spring.cloud.gateway.discovery.locator.enabled=true

# Route id
spring.cloud.gateway.routes[0].id=service-acl
# Route uri
spring.cloud.gateway.routes[0].uri=lb://service-acl
# Route predicate: forward requests whose path matches /*/acl/**
spring.cloud.gateway.routes[0].predicates= Path=/*/acl/**

# Configure the service-cms service
# Route id (can be anything, but using the service name is recommended)
spring.cloud.gateway.routes[1].id=service-cms
# Route uri: lb:// followed by the service name registered in nacos
spring.cloud.gateway.routes[1].uri=lb://service-cms
# Route predicate: forward requests whose path matches /educms/**
spring.cloud.gateway.routes[1].predicates= Path=/educms/**

spring.cloud.gateway.routes[2].id=service-edu
spring.cloud.gateway.routes[2].uri=lb://service-edu
spring.cloud.gateway.routes[2].predicates= Path=/eduservice/**

spring.cloud.gateway.routes[3].id=service-msm
spring.cloud.gateway.routes[3].uri=lb://service-msm
spring.cloud.gateway.routes[3].predicates= Path=/edumsm/**

spring.cloud.gateway.routes[4].id=service-order
spring.cloud.gateway.routes[4].uri=lb://service-order
spring.cloud.gateway.routes[4].predicates= Path=/eduorder/**

spring.cloud.gateway.routes[5].id=service-oss
spring.cloud.gateway.routes[5].uri=lb://service-oss
spring.cloud.gateway.routes[5].predicates= Path=/eduoss/**

spring.cloud.gateway.routes[6].id=service-statistics
spring.cloud.gateway.routes[6].uri=lb://service-statistics
spring.cloud.gateway.routes[6].predicates= Path=/staservice/**

# Configure the service-ucenter service
spring.cloud.gateway.routes[7].id=service-ucenter
spring.cloud.gateway.routes[7].uri=lb://service-ucenter
spring.cloud.gateway.routes[7].predicates= Path=/educenter/**

spring.cloud.gateway.routes[8].id=service-vod
spring.cloud.gateway.routes[8].uri=lb://service-vod
spring.cloud.gateway.routes[8].predicates= Path=/eduvod/**

The line spring.cloud.gateway.discovery.locator.enabled=true means using Gateway's service discovery to implement the forwarding: the gateway looks up the target service in the registry (nacos) and forwards the request to one of its instances, whereas nginx forwards requests by matching paths to fixed addresses (this is my personal understanding, and I'm not sure it is entirely right)

5.6 Test

1. Start nacos, start service_edu and api_gateway in the backend project

2. Enter http://localhost:8222/eduservice/subject/getAllSubject in the address bar to see the data, indicating that our configuration is successful

insert image description here

5.7 Realize load balancing

Gateway has load balancing built in, so we can achieve it without configuring anything extra: if I deploy the service_edu service as a cluster (same functionality, same service name, different port numbers) on two servers, with ports 8101 and 8102, then when a client accesses eduservice, Gateway decides whether to forward the request to 8101 or 8102 according to some rule (round robin, weight, shortest response time); that is load balancing

insert image description here

5.8 Realize cross-domain, authority management, and exception handling

1. These are all fixed classes that do not need to be typed by hand; this part of the code is included in the provided materials

insert image description here

2. Copy and paste the three folders above to the gateway package of the api_gateway module

insert image description here

  • The CorsConfig class is a configuration class that adds a filter allowing all requests to be made cross-domain (a sketch of such a class follows this list)
  • The role of AuthGlobalFilter: it specifies which requests may be accessed, which may not, and what value is returned when access is denied
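The actual CorsConfig class from the provided materials is not reproduced here; as a reference, a minimal sketch of such a gateway-level (WebFlux) CORS configuration, assuming the usual Spring classes, looks roughly like this:

package com.atguigu.gateway.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.reactive.CorsWebFilter;
import org.springframework.web.cors.reactive.UrlBasedCorsConfigurationSource;
import org.springframework.web.util.pattern.PathPatternParser;

@Configuration
public class CorsConfig {

    @Bean
    public CorsWebFilter corsFilter() {
        // Allow any origin, header, and method for every path handled by the gateway
        CorsConfiguration config = new CorsConfiguration();
        config.addAllowedOrigin("*");
        config.addAllowedHeader("*");
        config.addAllowedMethod("*");

        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(new PathPatternParser());
        source.registerCorsConfiguration("/**", config);
        return new CorsWebFilter(source);
    }
}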

3. Because cross-domain handling is now implemented in the CorsConfig class, all @CrossOrigin annotations under the service modules need to be commented out or deleted, otherwise errors will occur. (In practice, because I can't really understand the four classes CorsConfig, AuthGlobalFilter, ErrorHandlerConfig, and JsonExceptionHandler, I don't dare to use them casually, so I commented all four of them out, and request forwarding is still done with nginx.)

4. In the config directory of the front-end project vue-admin-1010, change the port 9001 in the address in dev.env.js to the gateway's port 8222

insert image description here

5. In the utils directory of the front-end project vue-front-1010, change the port in the address in request.js from 9001 to the gateway's port 8222

insert image description here

From the comments I read, the permission-management part that follows is all copy-paste; the teacher was pressed for time and didn't explain in detail how it is implemented. At first I wanted to listen to it again to get familiar with it, but only after finishing the gateway did I realize the teacher really had no time and it is all copy-paste. The key issue is that my own level is not good enough, and without any explanation I can't follow it, so I won't continue with the permission-management part; this project ends here for now.

2022.07.27 to 2022.09.22, all done, time to celebrate ~~~

Original post: blog.csdn.net/maxiangyu_/article/details/127033142