How Sentinel implements dynamically configured cluster flow control

Introduction

06-cluster-embedded-8081

Why use cluster flow control?

With single-machine flow control, we set a separate limiting threshold on each machine, so under ideal circumstances the cluster-wide threshold equals the number of machines × the single-machine threshold. In practice, however, traffic is rarely distributed evenly across machines, so some machines may start limiting before the cluster total is reached. Limiting only at the single-machine level therefore cannot accurately cap the overall traffic. Cluster flow control can precisely control the total number of calls across the whole cluster, and combined with single-machine limiting it achieves a much better flow-control effect.

Given both the uneven distribution of per-machine traffic and the need to set an overall cluster QPS, we need a cluster limiting mode. A natural idea is to designate one server to count the total number of calls: every instance asks this server whether a call may proceed. This is the most basic form of cluster flow control.
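The idea above can be sketched in a few lines of plain Java. This is not Sentinel's actual implementation (Sentinel uses sliding windows and a custom transport protocol); it only illustrates the core mechanism: one central counter decides, for the whole cluster, whether a call may pass in the current second.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of a token server: a single shared counter per one-second window.
// All cluster instances would call tryAcquireToken() before serving a request.
public class SimpleTokenServer {

    private final int clusterQpsThreshold;
    private final AtomicInteger passedInCurrentWindow = new AtomicInteger(0);
    private long windowStartMillis = System.currentTimeMillis();

    public SimpleTokenServer(int clusterQpsThreshold) {
        this.clusterQpsThreshold = clusterQpsThreshold;
    }

    // Returns true if the cluster-wide total for this second is still below the threshold.
    public synchronized boolean tryAcquireToken() {
        long now = System.currentTimeMillis();
        if (now - windowStartMillis >= 1000) {
            // A new one-second window begins: reset the shared counter.
            windowStartMillis = now;
            passedInCurrentWindow.set(0);
        }
        if (passedInCurrentWindow.get() < clusterQpsThreshold) {
            passedInCurrentWindow.incrementAndGet();
            return true;
        }
        // Threshold reached: the calling instance should block this request.
        return false;
    }
}
```

With a threshold of 3, only the first 3 of 10 requests in the same second obtain a token; the rest are rejected, regardless of which instance they arrived on.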

Principle

The principle of cluster limiting is simple. As with stand-alone limiting, statistics such as QPS must be collected; the difference is that the stand-alone version keeps statistics on each instance, while the cluster version has one dedicated instance do the counting.

In Sentinel, this special statistics instance is called the token server. The other instances act as token clients and request tokens from the token server. If a token is granted, the current QPS has not yet reached the cluster-wide threshold; otherwise the threshold has been reached and the current request must be blocked, as shown in the following figure:

(Figure: token clients requesting tokens from the token server)

Compared with stand-alone flow control, cluster flow control introduces two roles:

  • Token Client: the cluster flow-control client, which communicates with the Token Server to request tokens. The token server returns the result, which determines whether the current request is limited.
  • Token Server: the cluster flow-control server, which handles requests from Token Clients and decides whether to issue a token (i.e. whether to let the request pass) according to the configured cluster rules.

Stand-alone flow control has only one role: each Sentinel instance is its own token server.

Note that the token server is a single point in cluster limiting. Once the token server goes down, cluster limiting degrades to single-machine limiting mode.

Sentinel cluster flow control supports two rule types, flow rules and hotspot (parameter) rules, and two ways of calculating the threshold:

  • Cluster total mode: limits the overall QPS of a resource across the whole cluster to the configured threshold.
  • Single-machine amortized mode: the configured threshold is the limit a single machine can bear. The token server calculates the total threshold from the number of connected clients (for example, with 3 clients connected to a token server in independent mode and a per-machine threshold of 10, the cluster total is 30) and limits against that total. Because the total is recomputed in real time from the current connection count, this mode suits environments where machines are added and removed frequently.
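These two modes correspond to the `thresholdType` field of a cluster flow rule (0 for cluster total, 1 for single-machine amortized). As a sketch of what a rule pushed to Nacos might look like, assuming a resource named `test` and an illustrative `flowId` (which must be globally unique):

```json
[
  {
    "resource": "test",
    "grade": 1,
    "count": 10,
    "clusterMode": true,
    "clusterConfig": {
      "flowId": 100001,
      "thresholdType": 1,
      "fallbackToLocalWhenFail": true
    }
  }
]
```

Here `grade: 1` means the threshold is a QPS threshold, and `fallbackToLocalWhenFail: true` lets clients fall back to local (stand-alone) limiting when the token server is unreachable.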

Deployment methods

There are two deployment methods for the token server:

One is independent deployment: a token server is started as a separate service to handle token client requests, as shown in the following figure:

(Figure: independently deployed token server serving token clients)

If the independently deployed token server fails, the token clients degrade to local flow-control mode, i.e. the stand-alone version. Cluster limiting in this mode therefore requires making the token server highly available.

The other is embedded deployment: the token server is started inside the same process as the service itself. In this mode every instance in the cluster is a peer, and any instance can switch between token server and token client at any time, as shown in the following figure:

(Figure: embedded deployment, with the token server running inside one of the peer instances)

In embedded deployment mode, if the token server hangs, we can promote another token client to be the token server. Likewise, if we no longer want the current instance to act as token server, we can choose another token client to take over that responsibility and switch the current token server back to a token client. Sentinel provides an HTTP API for switching between token server and token client:

http://192.168.175.1:8721/setClusterMode?mode=1

Here mode 0 represents client, 1 represents server, and -1 represents shutdown.

Embedded mode

Import dependencies
        <dependency>
            <groupId>com.alibaba.csp</groupId>
            <artifactId>sentinel-cluster-server-default</artifactId>
            <version>2.0.0-alpha</version>
        </dependency>
        
        <dependency>
            <groupId>com.alibaba.csp</groupId>
            <artifactId>sentinel-cluster-client-default</artifactId>
            <version>2.0.0-alpha</version>
        </dependency>
        
        <dependency>
            <groupId>com.alibaba.csp</groupId>
            <artifactId>sentinel-datasource-nacos</artifactId>
            <version>2.0.0-alpha</version>
        </dependency>
        
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
            <version>2.10</version>
        </dependency>
application.yml
server:
  port: 8081

spring:
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://localhost:3306/test?useSSL=false&useUnicode=true&characterEncoding=utf-8&serverTimezone=GMT%2B8
    username: root
    password: gj001212
    type: com.alibaba.druid.pool.DruidDataSource
  cloud:
    nacos:
      discovery:
        server-addr: 192.168.146.1:8839

    sentinel:
      transport:
        dashboard: localhost:7777
        port: 8719
      eager: true
      web-context-unify: false

  application:
    name: cloudalibaba-sentinel-clusterServer

mybatis-plus:
  configuration:
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl # enable SQL logging

Modify the VM options and start three instances on different ports:

-Dserver.port=9091 -Dproject.name=cloudalibaba-sentinel-clusterServer -Dcsp.sentinel.log.use.pid=true

-Dserver.port=9092 -Dproject.name=cloudalibaba-sentinel-clusterServer -Dcsp.sentinel.log.use.pid=true

-Dserver.port=9093 -Dproject.name=cloudalibaba-sentinel-clusterServer -Dcsp.sentinel.log.use.pid=true
Console configuration

After logging in to the Sentinel console, we can see the cluster flow control page:

Click to add Token Server.

(Figure: adding a Token Server in the Sentinel console)

Implementing cluster flow control with Sentinel and Nacos
ClusterGroupEntity (cluster map entry class)
package com.liang.springcloud.alibaba.entity;

import java.util.Set;

/**
 * @author Eric Zhao
 * @since 1.4.1
 */
public class ClusterGroupEntity {

    private String machineId;
    private String ip;
    private Integer port;

    private Set<String> clientSet;

    public String getMachineId() {
        return machineId;
    }

    public ClusterGroupEntity setMachineId(String machineId) {
        this.machineId = machineId;
        return this;
    }

    public String getIp() {
        return ip;
    }

    public ClusterGroupEntity setIp(String ip) {
        this.ip = ip;
        return this;
    }

    public Integer getPort() {
        return port;
    }

    public ClusterGroupEntity setPort(Integer port) {
        this.port = port;
        return this;
    }

    public Set<String> getClientSet() {
        return clientSet;
    }

    public ClusterGroupEntity setClientSet(Set<String> clientSet) {
        this.clientSet = clientSet;
        return this;
    }

    @Override
    public String toString() {
        return "ClusterGroupEntity{" +
                "machineId='" + machineId + '\'' +
                ", ip='" + ip + '\'' +
                ", port=" + port +
                ", clientSet=" + clientSet +
                '}';
    }
}
Constants
/*
 * Copyright 1999-2018 Alibaba Group Holding Ltd.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.liang.springcloud.alibaba;

/**
 * @author Eric Zhao
 */
public final class Constants {

    public static final String FLOW_POSTFIX = "-flow-rules";
    public static final String PARAM_FLOW_POSTFIX = "-param-rules";
    public static final String SERVER_NAMESPACE_SET_POSTFIX = "-cs-namespace-set";
    public static final String CLIENT_CONFIG_POSTFIX = "-cluster-client-config";
    public static final String CLUSTER_MAP_POSTFIX = "-cluster-map";

    private Constants() {
    }
}
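These postfixes define the naming convention for the Nacos dataIds that the rest of the setup depends on: the token server looks up rules by `${namespace}-flow-rules`, so the configs published to Nacos must follow exactly this pattern. A small self-contained illustration of the resulting dataIds (the helper class and app name here are for demonstration only):

```java
// Demonstrates the dataId naming convention implied by the Constants class above.
public class DataIdDemo {
    public static final String FLOW_POSTFIX = "-flow-rules";
    public static final String PARAM_FLOW_POSTFIX = "-param-rules";

    // dataId for flow rules of a given application/namespace
    public static String flowDataId(String appName) {
        return appName + FLOW_POSTFIX;
    }

    // dataId for hotspot (parameter) flow rules
    public static String paramDataId(String appName) {
        return appName + PARAM_FLOW_POSTFIX;
    }
}
```

For the application in this article, the flow-rule dataId would be `cloudalibaba-sentinel-clusterServer-flow-rules`.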

Create a META-INF/services directory under the resources folder, then create a file named com.alibaba.csp.sentinel.init.InitFunc in it, containing the fully qualified name of the class that implements the InitFunc interface:

com.liang.springcloud.alibaba.init.ClusterInitFunc
ClusterInitFunc
package com.liang.springcloud.alibaba.init;

import java.util.List;
import java.util.Objects;
import java.util.Optional;

import com.alibaba.csp.sentinel.cluster.ClusterStateManager;
import com.alibaba.csp.sentinel.cluster.client.config.ClusterClientAssignConfig;
import com.alibaba.csp.sentinel.cluster.client.config.ClusterClientConfig;
import com.alibaba.csp.sentinel.cluster.client.config.ClusterClientConfigManager;
import com.alibaba.csp.sentinel.cluster.flow.rule.ClusterFlowRuleManager;
import com.alibaba.csp.sentinel.cluster.flow.rule.ClusterParamFlowRuleManager;
import com.alibaba.csp.sentinel.cluster.server.config.ClusterServerConfigManager;
import com.alibaba.csp.sentinel.cluster.server.config.ServerTransportConfig;
import com.alibaba.csp.sentinel.datasource.ReadableDataSource;
import com.alibaba.csp.sentinel.datasource.nacos.NacosDataSource;
import com.liang.springcloud.alibaba.Constants;
import com.liang.springcloud.alibaba.entity.ClusterGroupEntity;
import com.alibaba.csp.sentinel.init.InitFunc;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRuleManager;
import com.alibaba.csp.sentinel.slots.block.flow.param.ParamFlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.param.ParamFlowRuleManager;
import com.alibaba.csp.sentinel.transport.config.TransportConfig;
import com.alibaba.csp.sentinel.util.AppNameUtil;
import com.alibaba.csp.sentinel.util.HostNameUtil;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.TypeReference;

/**
 * @author Eric Zhao
 */
public class ClusterInitFunc implements InitFunc {

    private static final String SEPARATOR = "@";

    private static final String APP_NAME = AppNameUtil.getAppName();

    private final String remoteAddress = "localhost:8839";
    private final String groupId = "SENTINEL_GROUP";

    private final String flowDataId = APP_NAME + Constants.FLOW_POSTFIX;
    private final String paramDataId = APP_NAME + Constants.PARAM_FLOW_POSTFIX;
    private final String configDataId = APP_NAME + Constants.CLIENT_CONFIG_POSTFIX;
    private final String clusterMapDataId = APP_NAME + Constants.CLUSTER_MAP_POSTFIX;

    @Override
    public void init() throws Exception {
        // Register client dynamic rule data source.
        initDynamicRuleProperty();

        // Register token client related data source.
        // Token client common config:
        initClientConfigProperty();
        // Token client assign config (e.g. target token server) retrieved from assign map:
        initClientServerAssignProperty();

        // Register token server related data source.
        // Register dynamic rule data source supplier for token server:
        registerClusterRuleSupplier();
        // Token server transport config extracted from assign map:
        initServerTransportConfigProperty();

        // Init cluster state property for extracting mode from cluster map data source.
        initStateProperty();
    }

    private void initDynamicRuleProperty() {
        ReadableDataSource<String, List<FlowRule>> ruleSource = new NacosDataSource<>(remoteAddress, groupId,
                flowDataId, source -> JSON.parseObject(source, new TypeReference<List<FlowRule>>() {}));
        FlowRuleManager.register2Property(ruleSource.getProperty());

        ReadableDataSource<String, List<ParamFlowRule>> paramRuleSource = new NacosDataSource<>(remoteAddress, groupId,
                paramDataId, source -> JSON.parseObject(source, new TypeReference<List<ParamFlowRule>>() {}));
        ParamFlowRuleManager.register2Property(paramRuleSource.getProperty());
    }

    private void initClientConfigProperty() {
        ReadableDataSource<String, ClusterClientConfig> clientConfigDs = new NacosDataSource<>(remoteAddress, groupId,
                configDataId, source -> JSON.parseObject(source, new TypeReference<ClusterClientConfig>() {}));
        ClusterClientConfigManager.registerClientConfigProperty(clientConfigDs.getProperty());
    }

    private void initServerTransportConfigProperty() {
        ReadableDataSource<String, ServerTransportConfig> serverTransportDs = new NacosDataSource<>(remoteAddress, groupId,
                clusterMapDataId, source -> {
            List<ClusterGroupEntity> groupList = JSON.parseObject(source,
                    new TypeReference<List<ClusterGroupEntity>>() {});
            return Optional.ofNullable(groupList)
                    .flatMap(this::extractServerTransportConfig)
                    .orElse(null);
        });
        ClusterServerConfigManager.registerServerTransportProperty(serverTransportDs.getProperty());
    }

    private void registerClusterRuleSupplier() {
        // Register cluster flow rule property supplier which creates data source by namespace.
        // Flow rule dataId format: ${namespace}-flow-rules
        ClusterFlowRuleManager.setPropertySupplier(namespace -> {
            ReadableDataSource<String, List<FlowRule>> ds = new NacosDataSource<>(remoteAddress, groupId,
                    namespace + Constants.FLOW_POSTFIX,
                    source -> JSON.parseObject(source, new TypeReference<List<FlowRule>>() {}));
            return ds.getProperty();
        });
        // Register cluster parameter flow rule property supplier which creates data source by namespace.
        ClusterParamFlowRuleManager.setPropertySupplier(namespace -> {
            ReadableDataSource<String, List<ParamFlowRule>> ds = new NacosDataSource<>(remoteAddress, groupId,
                    namespace + Constants.PARAM_FLOW_POSTFIX,
                    source -> JSON.parseObject(source, new TypeReference<List<ParamFlowRule>>() {}));
            return ds.getProperty();
        });
    }

    private void initClientServerAssignProperty() {
        // Cluster map format:
        // [{"clientSet":["112.12.88.66@8729","112.12.88.67@8727"],"ip":"112.12.88.68","machineId":"112.12.88.68@8728","port":11111}]
        // machineId: <ip@commandPort>, commandPort for port exposed to Sentinel dashboard (transport module)
        ReadableDataSource<String, ClusterClientAssignConfig> clientAssignDs = new NacosDataSource<>(remoteAddress, groupId,
                clusterMapDataId, source -> {
            List<ClusterGroupEntity> groupList = JSON.parseObject(source,
                    new TypeReference<List<ClusterGroupEntity>>() {});
            return Optional.ofNullable(groupList)
                    .flatMap(this::extractClientAssignment)
                    .orElse(null);
        });
        ClusterClientConfigManager.registerServerAssignProperty(clientAssignDs.getProperty());
    }

    private void initStateProperty() {
        // The cluster map data source also determines this instance's mode (server/client/not started).
        ReadableDataSource<String, Integer> clusterModeDs = new NacosDataSource<>(remoteAddress, groupId,
                clusterMapDataId, source -> {
            List<ClusterGroupEntity> groupList = JSON.parseObject(source,
                    new TypeReference<List<ClusterGroupEntity>>() {});
            return Optional.ofNullable(groupList)
                    .map(this::extractMode)
                    .orElse(ClusterStateManager.CLUSTER_NOT_STARTED);
        });
        ClusterStateManager.registerProperty(clusterModeDs.getProperty());
    }

    private int extractMode(List<ClusterGroupEntity> groupList) {
        // If any server group machineId matches current, then it's token server.
        if (groupList.stream().anyMatch(this::machineEqual)) {
            return ClusterStateManager.CLUSTER_SERVER;
        }
        // If current machine belongs to any of the token server group, then it's token client.
        // Otherwise it's unassigned, should be set to NOT_STARTED.
        boolean canBeClient = groupList.stream()
                .flatMap(e -> e.getClientSet().stream())
                .filter(Objects::nonNull)
                .anyMatch(e -> e.equals(getCurrentMachineId()));
        return canBeClient ? ClusterStateManager.CLUSTER_CLIENT : ClusterStateManager.CLUSTER_NOT_STARTED;
    }

    private Optional<ServerTransportConfig> extractServerTransportConfig(List<ClusterGroupEntity> groupList) {
        return groupList.stream()
                .filter(this::machineEqual)
                .findAny()
                .map(e -> new ServerTransportConfig().setPort(e.getPort()).setIdleSeconds(600));
    }

    private Optional<ClusterClientAssignConfig> extractClientAssignment(List<ClusterGroupEntity> groupList) {
        if (groupList.stream().anyMatch(this::machineEqual)) {
            return Optional.empty();
        }
        // Build client assign config from the client set of target server group.
        for (ClusterGroupEntity group : groupList) {
            if (group.getClientSet().contains(getCurrentMachineId())) {
                String ip = group.getIp();
                Integer port = group.getPort();
                return Optional.of(new ClusterClientAssignConfig(ip, port));
            }
        }
        return Optional.empty();
    }

    private boolean machineEqual(/*@Valid*/ ClusterGroupEntity group) {
        return getCurrentMachineId().equals(group.getMachineId());
    }

    private String getCurrentMachineId() {
        // Note: this may not work well for container-based env.
        return HostNameUtil.getIp() + SEPARATOR + TransportConfig.getRuntimePort();
    }
}
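For ClusterInitFunc to assign roles, a cluster map config must be published to Nacos under dataId `${appName}-cluster-map` in group `SENTINEL_GROUP`, following the format shown in the code comments above. A sketch with illustrative IPs and ports: `machineId` is `<ip>@<commandPort>`, where commandPort is the transport port reported to the dashboard; `port` is the port the embedded token server listens on; and `clientSet` lists the machineIds of the token clients.

```json
[
  {
    "machineId": "192.168.146.1@8720",
    "ip": "192.168.146.1",
    "port": 18730,
    "clientSet": [
      "192.168.146.1@8719",
      "192.168.146.1@8721"
    ]
  }
]
```

Updating this config in Nacos (for example, moving an entry from `clientSet` to the server role) is what makes the cluster flow control dynamically configurable: each instance re-evaluates its mode through the registered data source without a restart.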

Standalone mode

Origin blog.csdn.net/qq_52183856/article/details/130581568