"Learn Sentinel Together" Principle - Full Analysis

If you reprint this article, please indicate the original source. Thank you!

Series of articles

"Learn Sentinel Together" Principle - Call Chain

"Learn Sentinel Together" Principle - Sliding Window

"Learn Sentinel Together" Principle - Entity Class

"Learn Sentinel Together" Actual Combat - Current Limit

"Learn Sentinel Together" Actual Combat - Console Chapter

"Learn Sentinel Together" Actual Combat - Rules Persistence

"Learn Sentinel Together" Actual Combat - Cluster Current Limiting

The Sentinel tutorial series has also been uploaded to GitHub and Gitee:

sentinel-tutorial.png

Sentinel is a lightweight, highly available flow-control component for distributed service architectures, open-sourced by Alibaba's middleware team. Taking traffic as its entry point, it protects the stability of services along multiple dimensions such as flow control, circuit breaking and degradation, and system load protection.

You may ask: what are the similarities and differences between Sentinel and Netflix Hystrix, the commonly used circuit-breaking and degradation library? The Sentinel website has a comparison article; a summary table is excerpted below, and the full comparison can be read by following that link.

| Comparison | Sentinel | Hystrix |
| --- | --- | --- |
| Isolation strategy | Semaphore isolation | Thread pool isolation / semaphore isolation |
| Circuit-breaking / degradation strategy | Based on response time or failure rate | Based on failure rate |
| Real-time metrics implementation | Sliding window | Sliding window (based on RxJava) |
| Rule configuration | Supports multiple data sources | Supports multiple data sources |
| Extensibility | Multiple extension points | Plugin form |
| Annotation support | Supported | Supported |
| Rate limiting | Based on QPS; supports limiting by call relationship | Not supported |
| Traffic shaping | Supports slow start and constant-rate modes | Not supported |
| System load protection | Supported | Not supported |
| Console | Out of the box: rule configuration, second-level monitoring, machine discovery, etc. | Rudimentary |
| Adaptation of common frameworks | Servlet, Spring Cloud, Dubbo, gRPC, etc. | Servlet, Spring Cloud Netflix |

As the comparison table shows, Sentinel offers richer functionality than Hystrix. In this article, let's walk through Sentinel's source code and uncover how it works.

Project structure

Fork Sentinel's source code to your own GitHub repository, clone it locally, and then start the source-reading journey.

First let's take a look at the entire structure of the Sentinel project:

sentinel-project-structure.png

  • sentinel-core: the core module; rate limiting, degradation, system protection and so on are all implemented here
  • sentinel-dashboard: the console module, which provides visual management of the connected sentinel clients
  • sentinel-transport: the transport module, which provides the basic monitoring server and client API, plus implementations based on different libraries
  • sentinel-extension: the extension module, which mainly provides several DataSource extensions
  • sentinel-adapter: the adapter module, which adapts sentinel to some common frameworks
  • sentinel-demo: the sample module, which shows how to use sentinel for rate limiting, degradation and so on
  • sentinel-benchmark: the benchmark module, which provides benchmarks for verifying the accuracy of the core code

Run the sample

Basically every framework ships a sample module (some call it example, some call it demo), and sentinel is no exception.

Let's pick an example from Sentinel's demos and run it to get a feel for things. As mentioned above, Sentinel's core functions are rate limiting, degradation and system protection, so we will start from "rate limiting" to see how Sentinel implements it.

sentinel-basic-demo-flow-qps.png

You can see that there are many different examples in the sentinel-demo module. Find the flow package under the basic module; it contains the rate-limiting examples. There are several types of rate limiting, so we will just look at the class that limits by QPS; the other rate-limiting methods are essentially the same.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

import com.alibaba.csp.sentinel.Entry;
import com.alibaba.csp.sentinel.SphU;
import com.alibaba.csp.sentinel.slots.block.BlockException;
import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRuleManager;
import com.alibaba.csp.sentinel.util.TimeUtil;

public class FlowQpsDemo {

    private static final String KEY = "abc";

    private static AtomicInteger pass = new AtomicInteger();
    private static AtomicInteger block = new AtomicInteger();
    private static AtomicInteger total = new AtomicInteger();

    private static volatile boolean stop = false;

    private static final int threadCount = 32;

    private static int seconds = 30;

    public static void main(String[] args) throws Exception {
        initFlowQpsRule();

        tick();
        // first make the system run on a very low condition
        simulateTraffic();

        System.out.println("===== begin to do flow control");
        System.out.println("only 20 requests per second can pass");

    }

    private static void initFlowQpsRule() {
        List<FlowRule> rules = new ArrayList<FlowRule>();
        FlowRule rule1 = new FlowRule();
        rule1.setResource(KEY);
        // set limit qps to 20
        rule1.setCount(20);
        // set the rate-limit type: by QPS
        rule1.setGrade(RuleConstant.FLOW_GRADE_QPS);
        rule1.setLimitApp("default");
        rules.add(rule1);
        // load the rate-limiting rules
        FlowRuleManager.loadRules(rules);
    }

    private static void simulateTraffic() {
        for (int i = 0; i < threadCount; i++) {
            Thread t = new Thread(new RunTask());
            t.setName("simulate-traffic-Task");
            t.start();
        }
    }

    private static void tick() {
        Thread timer = new Thread(new TimerTask());
        timer.setName("sentinel-timer-task");
        timer.start();
    }

    static class TimerTask implements Runnable {

        @Override
        public void run() {
            long start = System.currentTimeMillis();
            System.out.println("begin to statistic!!!");

            long oldTotal = 0;
            long oldPass = 0;
            long oldBlock = 0;
            while (!stop) {
                try {
                    TimeUnit.SECONDS.sleep(1);
                } catch (InterruptedException e) {
                }
                long globalTotal = total.get();
                long oneSecondTotal = globalTotal - oldTotal;
                oldTotal = globalTotal;

                long globalPass = pass.get();
                long oneSecondPass = globalPass - oldPass;
                oldPass = globalPass;

                long globalBlock = block.get();
                long oneSecondBlock = globalBlock - oldBlock;
                oldBlock = globalBlock;

                System.out.println(seconds + " send qps is: " + oneSecondTotal);
                System.out.println(TimeUtil.currentTimeMillis() + ", total:" + oneSecondTotal
                    + ", pass:" + oneSecondPass
                    + ", block:" + oneSecondBlock);

                if (seconds-- <= 0) {
                    stop = true;
                }
            }

            long cost = System.currentTimeMillis() - start;
            System.out.println("time cost: " + cost + " ms");
            System.out.println("total:" + total.get() + ", pass:" + pass.get()
                + ", block:" + block.get());
            System.exit(0);
        }
    }

    static class RunTask implements Runnable {
        @Override
        public void run() {
            while (!stop) {
                Entry entry = null;

                try {
                    entry = SphU.entry(KEY);
                    // token acquired, means pass
                    pass.addAndGet(1);
                } catch (BlockException e1) {
                    block.incrementAndGet();
                } catch (Exception e2) {
                    // biz exception
                } finally {
                    total.incrementAndGet();
                    if (entry != null) {
                        entry.exit();
                    }
                }

                Random random2 = new Random();
                try {
                    TimeUnit.MILLISECONDS.sleep(random2.nextInt(50));
                } catch (InterruptedException e) {
                    // ignore
                }
            }
        }
    }
}

After executing the above code, the following results are printed:

sentinel-basic-demo-flow-qps-result.png

As the results show, the number of passes is not what we expected: we expected at most 20 requests per second to be allowed through, yet in many seconds the pass count exceeds 20.

The reason is that the test code runs multiple threads. Note the value of threadCount: 32 threads are simulating requests, and when SphU.entry performs the resource check there is no locking inside, so under high concurrency the pass count can climb above 20.

The setup can be described by the following model: a TimerTask thread prints statistics once per second, while N RunTask threads simulate requests; the business code they access is protected by the resource key, and according to the rule only 20 requests per second should be allowed through.

Since the pass, block and total counters are shared globally, and multiple RunTask threads call SphU.entry to apply for an entry without any lock protection inside, the number of passes exceeds the configured threshold.

sentinel-basic-demo-flow-qps-module.png

To verify that rate limiting is accurate and reliable under a single thread, the model should become the following:

sentinel-basic-demo-flow-qps-single-thread-module.png

So let's change the value of threadCount to 1, so that only one thread runs the request loop, and look at the rate-limiting result. After executing the code again, the printed results are as follows:

sentinel-basic-demo-single-thread-flow-qps-result.png

It can be seen that the pass count now basically stays at 20, yet the pass value of the first statistics interval still exceeds 20. Why is that?

In fact, if you look closely at the demo code, one thread simulates requests while another thread collects the statistics. The statistics thread counts the results once per second, and there is a timing error between the two threads: from the timestamps printed by the TimerTask thread, you can see that although statistics are produced every second, the interval between consecutive prints is not exactly 1000 ms.

To truly verify the 20-requests-per-second limit and guarantee accurate numbers, a proper benchmark is needed. That is not the focus of this article; interested readers can look into JMH, which is also what Sentinel's own benchmarks use.
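As a rough illustration only (a sketch, not Sentinel's actual benchmark code), a minimal JMH throughput benchmark around SphU.entry could look like the following; the class and method names are made up for the example, while the annotations are standard JMH:

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

import com.alibaba.csp.sentinel.Entry;
import com.alibaba.csp.sentinel.SphU;
import com.alibaba.csp.sentinel.slots.block.BlockException;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
public class SentinelEntryBenchmark {

    @Benchmark
    public boolean acquireEntry() {
        Entry entry = null;
        try {
            // same resource key as in FlowQpsDemo above
            entry = SphU.entry("abc");
            return true;               // passed
        } catch (BlockException e) {
            return false;              // blocked by the flow rule
        } finally {
            if (entry != null) {
                entry.exit();
            }
        }
    }
}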

In-depth principles

Through a simple sample program we have seen that sentinel can rate-limit requests; besides rate limiting it also offers degradation, system protection and other functions. Now let's part the clouds and dive into the source code to see how sentinel is implemented.

Start from the entry point: SphU.entry(). This method applies for an entry; if the application succeeds, the request is not limited, otherwise a BlockException is thrown, indicating that the request has been rate-limited.

Calling SphU.entry() leads into Sph.entry(). The default implementation class of Sph is CtSph, and the call eventually reaches entry(ResourceWrapper resourceWrapper, int count, Object... args) throws BlockException.

Let's take a look at the specific implementation of this method:

public Entry entry(ResourceWrapper resourceWrapper, int count, Object... args) throws BlockException {
    Context context = ContextUtil.getContext();
    if (context instanceof NullContext) {
        // Init the entry only. No rule checking will occur.
        return new CtEntry(resourceWrapper, null, context);
    }

    if (context == null) {
        context = MyContextUtil.myEnter(Constants.CONTEXT_DEFAULT_NAME, "", resourceWrapper.getType());
    }

    // Global switch is close, no rule checking will do.
    if (!Constants.ON) {
        return new CtEntry(resourceWrapper, null, context);
    }

    // get the SlotChain corresponding to this resource
    ProcessorSlot<Object> chain = lookProcessChain(resourceWrapper);

    /*
     * Means processor cache size exceeds {@link Constants.MAX_SLOT_CHAIN_SIZE}, so no
     * rule checking will be done.
     */
    if (chain == null) {
        return new CtEntry(resourceWrapper, null, context);
    }

    Entry e = new CtEntry(resourceWrapper, chain, context);
    try {
        // execute the entry method of each slot in the chain
        chain.entry(context, resourceWrapper, null, count, args);
    } catch (BlockException e1) {
        e.exit(count, args);
        // rethrow the BlockException
        throw e1;
    } catch (Throwable e1) {
        RecordLog.info("Sentinel unexpected exception", e1);
    }
    return e;
}

This method can be divided into the following parts:

  • 1. Check the context and global switches; if the conditions are not met, a CtEntry object is returned directly and no rule checking is performed, otherwise the following checks are entered.
  • 2. Obtain the SlotChain corresponding to the wrapped resource object.
  • 3. Execute the entry method of the SlotChain.
    • 3.1. If the SlotChain's entry method throws a BlockException, the exception is propagated upwards.
    • 3.2. If the SlotChain's entry method executes normally, the entry object is returned.
  • 4. If the calling layer catches a BlockException, the request has been limited; otherwise the request proceeds normally.

The most important ones are the 2nd and 3rd steps. Let's break down these two steps.

Create SlotChain

First look at the method implementation of lookProcessChain:

private ProcessorSlot<Object> lookProcessChain(ResourceWrapper resourceWrapper) {
    ProcessorSlotChain chain = chainMap.get(resourceWrapper);
    if (chain == null) {
        synchronized (LOCK) {
            chain = chainMap.get(resourceWrapper);
            if (chain == null) {
                // Entry size limit.
                if (chainMap.size() >= Constants.MAX_SLOT_CHAIN_SIZE) {
                    return null;
                }

                // the chain is actually built here
                chain = Env.slotsChainbuilder.build();
                Map<ResourceWrapper, ProcessorSlotChain> newMap = new HashMap<ResourceWrapper, ProcessorSlotChain>(chainMap.size() + 1);
                newMap.putAll(chainMap);
                newMap.put(resourceWrapper, chain);
                chainMap = newMap;
            }
        }
    }
    return chain;
}

This method caches the chains in a map keyed by the resource object, locking and double-checking before building a new one. The chain itself is constructed by Env.slotsChainbuilder.build(), so let's step into that method.
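The pattern is worth a second look: reads are lock-free, and a write copies the whole map before publishing it, so readers never see a half-built chain. A generic, simplified sketch of the same "double-checked locking plus copy-on-write map" idea (not Sentinel's actual code; the class name is made up) looks like this:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ChainCache<K, V> {

    private final Object lock = new Object();
    // volatile so readers always see the most recently published map
    private volatile Map<K, V> cache = new HashMap<K, V>();

    public V getOrCreate(K key, Function<K, V> factory) {
        V value = cache.get(key);            // fast path: no locking
        if (value == null) {
            synchronized (lock) {
                value = cache.get(key);      // double-check under the lock
                if (value == null) {
                    value = factory.apply(key);
                    Map<K, V> newMap = new HashMap<K, V>(cache);
                    newMap.put(key, value);  // copy-on-write: publish a new map
                    cache = newMap;
                }
            }
        }
        return value;
    }
}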

public ProcessorSlotChain build() {
    ProcessorSlotChain chain = new DefaultProcessorSlotChain();
    chain.addLast(new NodeSelectorSlot());
    chain.addLast(new ClusterBuilderSlot());
    chain.addLast(new LogSlot());
    chain.addLast(new StatisticSlot());
    chain.addLast(new SystemSlot());
    chain.addLast(new AuthoritySlot());
    chain.addLast(new FlowSlot());
    chain.addLast(new DegradeSlot());

    return chain;
}

As the build method shows, a ProcessorSlotChain is a linked list to which a series of Slots is added. The concrete behaviour lives in DefaultProcessorSlotChain.

public class DefaultProcessorSlotChain extends ProcessorSlotChain {

    AbstractLinkedProcessorSlot<?> first = new AbstractLinkedProcessorSlot<Object>() {
        @Override
        public void entry(Context context, ResourceWrapper resourceWrapper, Object t, int count, Object... args)
            throws Throwable {
            super.fireEntry(context, resourceWrapper, t, count, args);
        }
        @Override
        public void exit(Context context, ResourceWrapper resourceWrapper, int count, Object... args) {
            super.fireExit(context, resourceWrapper, count, args);
        }
    };
    
    AbstractLinkedProcessorSlot<?> end = first;

    @Override
    public void addFirst(AbstractLinkedProcessorSlot<?> protocolProcessor) {
        protocolProcessor.setNext(first.getNext());
        first.setNext(protocolProcessor);
        if (end == first) {
            end = protocolProcessor;
        }
    }

    @Override
    public void addLast(AbstractLinkedProcessorSlot<?> protocolProcessor) {
        end.setNext(protocolProcessor);
        end = protocolProcessor;
    }
}

There are two variables of type AbstractLinkedProcessorSlot in DefaultProcessorSlotChain: first and end, which are the head node and tail node of the linked list.

When the DefaultProcessorSlotChain object is created, the first node is created and then also assigned to end, so the head and tail initially point to the same node, as shown in the following figure:

slot-chain-1.png

After adding the first node to the linked list, the structure of the entire linked list becomes as follows:

slot-chain-2.png

After all nodes are added to the linked list, the structure of the entire linked list becomes as shown in the following figure:

slot-chain-3.png

In this way all the Slot objects are added to the linked list, and every Slot extends AbstractLinkedProcessorSlot, which is a chain-of-responsibility design: each object holds a next reference pointing to the following AbstractLinkedProcessorSlot. The chain-of-responsibility pattern shows up in many frameworks; Netty, for example, implements it with its pipeline.
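A stripped-down illustration of this kind of linking (a generic sketch, not Sentinel's classes) is shown below; each handler does its own work and then hands the call to the next one:

public abstract class LinkedHandler {

    private LinkedHandler next;

    public void setNext(LinkedHandler next) {
        this.next = next;
    }

    // continue down the chain, mirroring the role of fireEntry in the slots
    protected final void fireHandle(String request) {
        if (next != null) {
            next.handle(request);
        }
    }

    // each concrete handler does its own work, then calls fireHandle(request)
    public abstract void handle(String request);
}

Each concrete handler overrides handle(), finishes its own logic, and calls fireHandle() to continue down the chain, which is exactly the entry()/fireEntry() relationship we will see next.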

Now that you know how SlotChain is created, let's see how to execute the entry method of Slot.

Execute the entry method of SlotChain

The ProcessorSlotChain instance returned by lookProcessChain is a DefaultProcessorSlotChain, so calling chain.entry executes DefaultProcessorSlotChain's entry method, which is as follows:

@Override
public void entry(Context context, ResourceWrapper resourceWrapper, Object t, int count, Object... args)
    throws Throwable {
    first.transformEntry(context, resourceWrapper, t, count, args);
}

In other words, DefaultProcessorSlotChain's entry simply executes the transformEntry method of its first node.

transformEntry executes the entry method of the current node, and the first node in DefaultProcessorSlotChain overrides entry as follows:

@Override
public void entry(Context context, ResourceWrapper resourceWrapper, Object t, int count, Object... args)
    throws Throwable {
    super.fireEntry(context, resourceWrapper, t, count, args);
}

The entry method of the first node just calls super's fireEntry, so let's turn our attention to the fireEntry method:

@Override
public void fireEntry(Context context, ResourceWrapper resourceWrapper, Object obj, int count, Object... args)
    throws Throwable {
    if (next != null) {
        next.transformEntry(context, resourceWrapper, obj, count, args);
    }
}

From here you can see that execution is handed on by fireEntry: it calls the transformEntry method of the node after the current one. As analysed above, transformEntry triggers that node's entry, so fireEntry effectively triggers the entry method of the next node. The whole process is shown in the following figure:

slot-chain-entry-process.png

As the figure shows, the initial call to the chain's entry() method has turned into calls to the entry() methods of the Slots inside the SlotChain. From the analysis above, the first Slot node in the SlotChain is NodeSelectorSlot.

Execute the entry method of Slot

Now you can turn your attention to the entry method of the first node NodeSelectorSlot in the SlotChain. The specific code is as follows:

@Override
public void entry(Context context, ResourceWrapper resourceWrapper, Object obj, int count, Object... args)
    throws Throwable {
    
    DefaultNode node = map.get(context.getName());
    if (node == null) {
        synchronized (this) {
            node = map.get(context.getName());
            if (node == null) {
                node = Env.nodeBuilder.buildTreeNode(resourceWrapper, null);
                HashMap<String, DefaultNode> cacheMap = new HashMap<String, DefaultNode>(map.size());
                cacheMap.putAll(map);
                cacheMap.put(context.getName(), node);
                map = cacheMap;
            }
            // Build invocation tree
            ((DefaultNode)context.getLastNode()).addChild(node);
        }
    }

    context.setCurNode(node);
    // this triggers the entry method of the next node
    fireEntry(context, resourceWrapper, node, count, args);
}

As the code shows, NodeSelectorSlot performs its own piece of business logic; you can dig deeper into the source to trace the details. Here is a brief overview of each slot's responsibility:

  • NodeSelectorSlot collects resource paths and stores the call paths of resources in a tree structure, so that rate limiting and degradation can be done according to the call path;
  • ClusterBuilderSlot stores statistics of the resource and information about its callers, such as the resource's RT, QPS and thread count, which later serve as the basis for multi-dimensional rate limiting and degradation;
  • StatisticSlot records and aggregates runtime information across different dimensions;
  • FlowSlot performs rate limiting according to the preconfigured flow rules and the statistics collected by the preceding slots;
  • AuthoritySlot performs black/white-list access control according to the configured lists;
  • DegradeSlot performs circuit breaking and degradation based on the statistics and the preconfigured rules;
  • SystemSlot controls the total inbound traffic according to the state of the system, for example load1.

After finishing its own business logic, each slot calls fireEntry(), which triggers the entry method of the next node. At this point we know how sentinel's chain of responsibility propagates: once a Slot node completes its own work, it calls fireEntry to trigger the entry method of the next node.
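To make this concrete, here is a hypothetical custom slot; MyLoggingSlot is not part of Sentinel, and the entry/exit signatures simply mirror the snippets shown above:

public class MyLoggingSlot extends AbstractLinkedProcessorSlot<Object> {

    @Override
    public void entry(Context context, ResourceWrapper resourceWrapper, Object param, int count, Object... args)
        throws Throwable {
        // this slot's own work comes first
        System.out.println("entering resource: " + resourceWrapper.getName());
        // then hand over to the next slot in the chain
        fireEntry(context, resourceWrapper, param, count, args);
    }

    @Override
    public void exit(Context context, ResourceWrapper resourceWrapper, int count, Object... args) {
        System.out.println("exiting resource: " + resourceWrapper.getName());
        fireExit(context, resourceWrapper, count, args);
    }
}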

So the above picture can be completed, as follows:

slot-chain-entry-whole-process.png

At this point the entry() methods of all the nodes have been invoked through the SlotChain. Each node performs its own logic according to the configured rules, and when a statistic reaches a configured threshold, rate limiting, degradation and similar events are triggered, concretely by throwing a BlockException.

Summarize

To summarize: Sentinel links a series of Slots into a chain (eight in the default chain built above), each with its own responsibility. After a Slot finishes its own work it passes the request to the next Slot, until a rule is hit in some Slot, at which point the chain terminates by throwing a BlockException.

The slots at the front of the chain are responsible for collecting statistics, and the later slots use those statistics together with the configured rules to decide whether to block or release the request.

There are also several options for the type of control: by QPS, by thread count, with cold start (warm-up), and so on.
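For example, a rule can limit by concurrent thread count instead of QPS, or enable warm-up (cold-start) behaviour. The fragment below is only a sketch; it assumes the RuleConstant constants FLOW_GRADE_THREAD and CONTROL_BEHAVIOR_WARM_UP and the FlowRule setters setControlBehavior/setWarmUpPeriodSec exist in the version being read:

// limit by concurrent thread count instead of QPS
FlowRule threadRule = new FlowRule();
threadRule.setResource("abc");
threadRule.setGrade(RuleConstant.FLOW_GRADE_THREAD); // limit by thread count
threadRule.setCount(10);                             // at most 10 concurrent threads

// QPS limiting with warm-up (cold start) behaviour
FlowRule warmUpRule = new FlowRule();
warmUpRule.setResource("abc");
warmUpRule.setGrade(RuleConstant.FLOW_GRADE_QPS);
warmUpRule.setCount(20);
warmUpRule.setControlBehavior(RuleConstant.CONTROL_BEHAVIOR_WARM_UP);
warmUpRule.setWarmUpPeriodSec(10);                   // ramp up over 10 seconds

FlowRuleManager.loadRules(Arrays.asList(threadRule, warmUpRule));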

Many other features are then built on top of this core mechanism:

  • 1. The dashboard console visually manages every connected sentinel client (discovered via heartbeat messages); the dashboard and clients communicate over HTTP.
  • 2. Rule persistence: by implementing the DataSource interface, the configured rules can be persisted in different ways; by default the rules live only in memory.
  • 3. Adaptation of mainstream frameworks, including Servlet, Dubbo, gRPC, etc.

Dashboard console

Sentinel-dashboard is a separate application, started with spring-boot. It provides a lightweight console offering machine discovery, real-time monitoring of single-machine resources, cluster resource aggregation, and rule management.

We only need a simple configuration of the application to use these functions.

1 Start the console

1.1 Download the code and compile the console

  • Download the console project
  • Package the code into a fat jar with the following command: mvn clean package

1.2 Start

Start the compiled console with the following command:

$ java -Dserver.port=8080 -Dcsp.sentinel.dashboard.server=localhost:8080 -jar target/sentinel-dashboard.jar

In the command above, the JVM parameter -Dserver.port=8080 sets the Spring Boot startup port to 8080.

2 Client access console

After the console is started, the client needs to follow the steps below to access the console.

2.1 Introducing the client jar package

Import the client jar package in pom.xml:

<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-transport-simple-http</artifactId>
    <version>x.y.z</version>
</dependency>

2.2 Configure startup parameters

Add the JVM parameter -Dcsp.sentinel.dashboard.server=consoleIp:port at startup to specify the console address and port. If you start multiple applications, you also need -Dcsp.sentinel.api.port=xxxx to specify the port of the client monitoring API (the default is 8719).

Besides JVM parameters, the same effect can be achieved through a configuration file; see Startup Configuration Items for details.

2.3 Trigger client initialization

Make sure the client actually receives traffic: Sentinel initializes itself only when a resource is accessed for the first time, and from then on sends heartbeat packets to the console.

Sentinel-dashboard is an independent web application that accepts connections from clients and then communicates with them over HTTP. Their relationship is shown in the following figure:

dashboard-client-transport.png

dashboard

After the dashboard starts, it waits for connections from clients. Concretely, MachineRegistryController has a receiveHeartBeat method, and a client sends its heartbeat message by calling this method over HTTP.

After receiving a client's heartbeat, the dashboard wraps the ip, port and other information sent by the client into a MachineInfo object, and then saves that object into a ConcurrentHashMap via the addMachine method of the MachineDiscovery interface.

There is a problem here: because the client information is stored only in the dashboard's memory, any client information received earlier is lost when the dashboard application restarts.
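The idea can be illustrated with a simplified sketch (not the dashboard's actual code, just an illustration of why a restart loses the data): heartbeat information is kept only in an in-memory map.

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class SimpleMachineRegistry {

    // app name -> set of "ip:port" entries, kept only in memory
    private final ConcurrentHashMap<String, Set<String>> appMachines = new ConcurrentHashMap<String, Set<String>>();

    public void addMachine(String app, String ip, int port) {
        appMachines.computeIfAbsent(app, k -> ConcurrentHashMap.newKeySet())
                   .add(ip + ":" + port);
    }

    public Set<String> listMachines(String app) {
        return appMachines.getOrDefault(app, Collections.<String>emptySet());
    }
}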

client

When the client starts, CommandCenterInitFunc picks a CommandCenter, and only that one CommandCenter is started.

Before it starts, all CommandHandler implementation classes are scanned and loaded via SPI, and every CommandHandler is registered into a HashMap for later use.
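Java's SPI mechanism works roughly as in the generic sketch below (only an illustration of the idea, not Sentinel's exact loading code; the Handler interface is hypothetical, and its implementations would be declared in a META-INF/services file):

import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

public class HandlerRegistry {

    // hypothetical handler interface, discovered and registered at startup
    public interface Handler {
        String name();
        String handle(String request);
    }

    private static final Map<String, Handler> HANDLER_MAP = new HashMap<String, Handler>();

    public static void loadHandlers() {
        // ServiceLoader scans the META-INF/services declarations on the classpath
        ServiceLoader<Handler> loader = ServiceLoader.load(Handler.class);
        for (Handler handler : loader) {
            HANDLER_MAP.put(handler.name(), handler);   // keep them in memory for lookup
        }
    }
}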

PS: Think about why the CommandHandlers do not need any persistence and can simply be kept in memory.

After registering CommandHandler, CommandCenter is started immediately. Currently CommandCenter has two implementation classes:

  • SimpleHttpCommandCenter starts a server through ServerSocket and accepts socket connections
  • NettyHttpCommandCenter starts a server through Netty and accepts channel connections

After CommandCenter starts, it waits for the dashboard to send a message. When the message is received, it will process the message through the specific CommandHandler, and then return the processing result to the dashboard.

Note that the messages the dashboard sends to clients go through an asynchronous httpClient, in the HttpHelper class.

Oddly, although the request is sent asynchronously, a CountDownLatch is used to wait for the response before fetching the result; doesn't that defeat the purpose of being asynchronous? The code is as follows:

private String httpGetContent(String url) {
    final HttpGet httpGet = new HttpGet(url);
    final CountDownLatch latch = new CountDownLatch(1);
    final AtomicReference<String> reference = new AtomicReference<>();
    httpclient.execute(httpGet, new FutureCallback<HttpResponse>() {
        @Override
        public void completed(final HttpResponse response) {
            try {
                reference.set(getBody(response));
            } catch (Exception e) {
                logger.info("httpGetContent " + url + " error:", e);
            } finally {
                latch.countDown();
            }
        }

        @Override
        public void failed(final Exception ex) {
            latch.countDown();
            logger.info("httpGetContent " + url + " failed:", ex);
        }

        @Override
        public void cancelled() {
            latch.countDown();
        }
    });
    try {
        latch.await(5, TimeUnit.SECONDS);
    } catch (Exception e) {
        logger.info("wait http client error:", e);
    }
    return reference.get();
}
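If the caller really wanted to stay asynchronous, the callback could complete a future instead of counting down a latch. The sketch below is only an alternative illustration (not Sentinel's code); it would live next to the method above and reuses its httpclient and getBody:

private CompletableFuture<String> httpGetContentAsync(String url) {
    final HttpGet httpGet = new HttpGet(url);
    final CompletableFuture<String> future = new CompletableFuture<String>();
    httpclient.execute(httpGet, new FutureCallback<HttpResponse>() {
        @Override
        public void completed(final HttpResponse response) {
            try {
                future.complete(getBody(response));
            } catch (Exception e) {
                future.completeExceptionally(e);
            }
        }

        @Override
        public void failed(final Exception ex) {
            future.completeExceptionally(ex);
        }

        @Override
        public void cancelled() {
            future.cancel(true);
        }
    });
    // the caller decides whether, and for how long, to wait
    return future;
}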

Adaptation of mainstream frameworks

Sentinel also ships adapters for several mainstream frameworks, so that applications using them can enjoy sentinel's protection as well. Currently supported adapters include:

  • Web Servlet
  • Dubbo
  • Spring Boot / Spring Cloud
  • gRPC
  • Apache RocketMQ

Each adaptation essentially hooks into the extension points of the framework and adds sentinel's rate-limiting and degradation code there. Take the Servlet adaptation as an example; the code is:

public class CommonFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) {

    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
        throws IOException, ServletException {
        HttpServletRequest sRequest = (HttpServletRequest)request;
        Entry entry = null;

        try {
            // the resource name is derived from the request
            String target = FilterUtil.filterTarget(sRequest);
            target = WebCallbackManager.getUrlCleaner().clean(target);

            // "apply" for this resource
            ContextUtil.enter(target);
            entry = SphU.entry(target, EntryType.IN);

            // if the resource can be acquired, the request is not rate-limited,
            // so let the request pass
            chain.doFilter(request, response);
        } catch (BlockException e) {
            // if a BlockException is caught, the request has been rate-limited,
            // so redirect it to a default page
            HttpServletResponse sResponse = (HttpServletResponse)response;
            WebCallbackManager.getUrlBlockHandler().blocked(sRequest, sResponse);
        } catch (IOException e2) {
            // some code omitted
        } finally {
            if (entry != null) {
                entry.exit();
            }
            ContextUtil.exit();
        }
    }

    @Override
    public void destroy() {

    }
}

The adapter extends the Servlet Filter mechanism: it implements a Filter and rate-limits the request inside doFilter. If the request is limited it is redirected to a default page; otherwise it is released to the next Filter.
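How the filter is wired in depends on the application. As a sketch for a Spring Boot 2.x application (assuming Spring Boot's FilterRegistrationBean and the servlet adapter's com.alibaba.csp.sentinel.adapter.servlet.CommonFilter; adjust to your actual setup):

import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.alibaba.csp.sentinel.adapter.servlet.CommonFilter;

@Configuration
public class SentinelFilterConfig {

    @Bean
    public FilterRegistrationBean<CommonFilter> sentinelFilter() {
        FilterRegistrationBean<CommonFilter> registration = new FilterRegistrationBean<CommonFilter>();
        registration.setFilter(new CommonFilter());
        registration.addUrlPatterns("/*");   // protect every URL with sentinel
        registration.setOrder(1);
        return registration;
    }
}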

Rule persistence and dynamic rules

Sentinel's philosophy is that developers only need to focus on defining resources; once a resource is defined, various flow-control and degradation rules can be added to it dynamically.

Sentinel provides two ways to modify rules:

  • Direct modification via the API (loadRules)
  • Modification via DataSource, adapting to different data sources

It is more intuitive to modify through the API, and different rules can be modified through the following three APIs:

FlowRuleManager.loadRules(List<FlowRule> rules); // modify flow control rules
DegradeRuleManager.loadRules(List<DegradeRule> rules); // modify degradation rules
SystemRuleManager.loadRules(List<SystemRule> rules); // modify system rules
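For instance, a degradation rule can be built and loaded in the same style as the flow rule in the demo. The fragment below is a sketch; it assumes the DegradeRule setters and the RuleConstant.DEGRADE_GRADE_RT constant available in the version being read:

// degrade the resource when its average RT stays above 50 ms,
// and keep it cut off for 10 seconds
DegradeRule degradeRule = new DegradeRule();
degradeRule.setResource("abc");
degradeRule.setGrade(RuleConstant.DEGRADE_GRADE_RT); // degrade based on response time
degradeRule.setCount(50);                            // RT threshold, in milliseconds
degradeRule.setTimeWindow(10);                       // degradation window, in seconds
DegradeRuleManager.loadRules(Collections.singletonList(degradeRule));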

DataSource extension

The loadRules() methods above only accept rule objects held in memory, and rules kept in memory are lost when the application restarts. More often, rules are stored in a file, a database or a configuration center.

The DataSource interface gives us the ability to connect to any configuration source; by implementing the DataSource interface, rules can be loaded from wherever they are kept.

The official recommendation is to push rules, after they are configured in the console, to a unified rule center; users then only need to implement the DataSource interface and listen for rule changes in the rule center to receive updated rules in real time.

Common DataSource extension implementations are:

  • Pull mode: the client actively and periodically pulls rules from a rule management center, which can be an SQL database, files, or even a VCS. This is simple to build, but the drawback is that changes are not picked up immediately (a minimal pull-mode sketch follows this list);
  • Push mode: the rule center pushes rules and the client listens for changes by registering listeners, for example with a configuration center such as Nacos or Zookeeper. This approach gives better real-time behaviour and consistency guarantees.
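A minimal pull-mode sketch built directly on the loadRules API (this is not Sentinel's DataSource implementation; the class name is made up and fastjson is assumed for JSON parsing):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import com.alibaba.csp.sentinel.slots.block.flow.FlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRuleManager;
import com.alibaba.fastjson.JSON;

public class FileRulePuller {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start(final String ruleFile) {
        // periodically read the rule file and reload the flow rules
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                try {
                    String json = new String(Files.readAllBytes(Paths.get(ruleFile)), StandardCharsets.UTF_8);
                    List<FlowRule> rules = JSON.parseArray(json, FlowRule.class);
                    // loadRules replaces the in-memory rules with the pulled ones
                    FlowRuleManager.loadRules(rules);
                } catch (Exception e) {
                    // keep the old rules if this pull fails
                }
            }
        }, 0, 3, TimeUnit.SECONDS);
    }
}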

So far we have analyzed the basics of sentinel; for more detail, keep reading the source code.

For more original articles, follow "Houyi Code by Code".
