Sike Tomcat series (2) - EndPoint source code analysis

In the previous article we described the overall architecture of Tomcat and learned that Tomcat is divided into two major components: the connector and the container. This time we will talk about EndPoint, a component that belongs to the connector. It is the communication endpoint, responsible for implementing the TCP/IP protocol towards the outside world. EndPoint is an interface; its abstract implementation class is AbstractEndpoint, and AbstractEndpoint has the concrete subclasses AprEndpoint, Nio2Endpoint and NioEndpoint.

  • AprEndpoint: corresponds to APR mode. Simply put, it solves asynchronous I/O at the operating-system level, which greatly improves the server's response performance. However, enabling this mode requires installing additional native dependencies.
  • Nio2Endpoint: implements asynchronous I/O in Java code.
  • NioEndpoint: uses Java NIO to implement non-blocking I/O. This is what Tomcat starts by default and is the focus of this article.

Important components of NioEndpoint

We know that NioEndpoint is based on the I/O multiplexer (for example epoll on Linux). Put simply, using a multiplexer involves two steps.

  1. Create a Selector and register various Channels with it, then call its select method and wait until an event of interest occurs on one of the channels.
  2. When an event of interest occurs, for example a read event, read the data out of that channel (a minimal JDK sketch of this loop follows the list).
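
To make these two steps concrete, here is a minimal, self-contained Java NIO sketch. This is plain JDK code, not Tomcat source; the port 8080 and the buffer size are arbitrary choices for the example.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);   // step 1: register interest

        while (true) {
            selector.select();                                // block until an event occurs
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                     // a new connection is ready
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {                // step 2: read the data
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) == -1) {             // client closed the connection
                        client.close();
                    }
                }
            }
        }
    }
}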

To implement these two steps, NioEndpoint uses five components: LimitLatch, Acceptor, Poller, SocketProcessor and Executor.

/**
 * Threads used to accept new connections and pass them to worker threads.
 */
protected List<Acceptor<U>> acceptors;

/**
 * counter for nr of connections handled by an endpoint
 */
private volatile LimitLatch connectionLimitLatch = null;
/**
 * The socket pollers. 
 */
private Poller[] pollers = null;

// SocketProcessor is defined as an inner class of NioEndpoint rather than as a field

/**
 * External Executor based thread pool.
 */
private Executor executor = null;


The code above shows where these five components are defined. So what does each of them do?

  • LimitLatch: the connection limiter, responsible for controlling the maximum number of connections.
  • Acceptor: accepts new connections and hands the resulting Channel over to the Poller.
  • Poller: essentially an NIO Selector, responsible for monitoring the state of the Channels.
  • SocketProcessor: can be seen as the wrapper class for the task to be executed.
  • Executor: Tomcat's own extended thread pool, used to execute those tasks.

Their relationship, in brief: the LimitLatch controls how many connections may come in, the Acceptor accepts connections and hands the resulting channels to a Poller, the Poller watches those channels for I/O events and wraps each ready socket into a SocketProcessor task, and the Executor runs those tasks.

Now let's look at the key code of each component.

LimitLatch

As said above, LimitLatch is mainly used to control the maximum number of connections Tomcat can accept. Once the limit is exceeded, Tomcat blocks the accepting thread and makes it wait until some other connection is released. So how does LimitLatch do this? Let's look at the LimitLatch class.


public class LimitLatch {

    private static final Log log = LogFactory.getLog(LimitLatch.class);

    private class Sync extends AbstractQueuedSynchronizer {
        private static final long serialVersionUID = 1L;

        public Sync() {
        }

        @Override
        protected int tryAcquireShared(int ignored) {
            long newCount = count.incrementAndGet();
            if (!released && newCount > limit) {
                // Limit exceeded
                count.decrementAndGet();
                return -1;
            } else {
                return 1;
            }
        }

        @Override
        protected boolean tryReleaseShared(int arg) {
            count.decrementAndGet();
            return true;
        }
    }

    private final Sync sync;
    //current number of connections
    private final AtomicLong count;
    //maximum number of connections
    private volatile long limit;
    private volatile boolean released = false;
}


We can see that it implements its synchronization with AbstractQueuedSynchronizer. AQS is a framework: its subclasses customize when a thread is parked and when it is released. The limit field holds the maximum number of connections to enforce. AbstractEndpoint calls LimitLatch's countUpOrAwait method to determine whether it may take another connection.

    public void countUpOrAwait() throws InterruptedException {
        if (log.isDebugEnabled()) {
            log.debug("Counting up["+Thread.currentThread().getName()+"] latch="+getCount());
        }
        sync.acquireSharedInterruptibly(1);
    }

How does AQS know when to block the thread, i.e. when a connection cannot be obtained? This is up to the user of AbstractQueuedSynchronizer, who defines when a permit can be acquired and when it is released. The Sync class above overrides tryAcquireShared and tryReleaseShared. tryAcquireShared specifies that once the current connection count exceeds the configured maximum it returns -1, which means the calling thread is put into AQS's wait queue.
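
As a quick illustration, here is a hedged usage sketch of guarding an accept loop with LimitLatch. This is not Tomcat's actual code; it assumes that LimitLatch's constructor takes the limit and that a countDown() counterpart exists, which matches the endpoint.countDownConnection() call we will see in the Acceptor's run method below.

import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import org.apache.tomcat.util.threads.LimitLatch;

public class LimitLatchSketch {
    // Guard an accept loop so that at most `limit` connections are in flight at once.
    public static void acceptLoop(ServerSocketChannel serverSock, long limit) throws Exception {
        LimitLatch latch = new LimitLatch(limit);
        while (true) {
            latch.countUpOrAwait();              // blocks once the limit has been reached
            SocketChannel socket = null;
            try {
                socket = serverSock.accept();    // only reached while we are under the limit
            } catch (Exception e) {
                latch.countDown();               // accept failed: give the slot back
                continue;
            }
            handle(socket, latch);
        }
    }

    private static void handle(SocketChannel socket, LimitLatch latch) {
        // ... process the connection; when it is finally closed, release the slot:
        // latch.countDown();
    }
}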

Acceptor

The Acceptor accepts connections. We can see that Acceptor implements the Runnable interface, so where is the new thread started that runs Acceptor's run method? In AbstractEndpoint's startAcceptorThreads method.

protected void startAcceptorThreads() {
    int count = getAcceptorThreadCount();
    acceptors = new ArrayList<>(count);

    for (int i = 0; i < count; i++) {
        Acceptor<U> acceptor = new Acceptor<>(this);
        String threadName = getName() + "-Acceptor-" + i;
        acceptor.setThreadName(threadName);
        acceptors.add(acceptor);
        Thread t = new Thread(acceptor, threadName);
        t.setPriority(getAcceptorThreadPriority());
        t.setDaemon(getDaemon());
        t.start();
    }
}


Here you can see that the number of Acceptors to start is configurable; the default is one. A port corresponds to only one ServerSocketChannel, so where is that ServerSocketChannel initialized? Notice that `this` is passed in the line `Acceptor<U> acceptor = new Acceptor<>(this);`, so the listening connection should be initialized by the Endpoint component itself. NioEndpoint initializes it in its initServerSocket method.

// Separated out to make it easier for folks that extend NioEndpoint to
// implement custom [server]sockets
protected void initServerSocket() throws Exception {
    if (!getUseInheritedChannel()) {
        serverSock = ServerSocketChannel.open();
        socketProperties.setProperties(serverSock.socket());
        InetSocketAddress addr = new InetSocketAddress(getAddress(), getPortWithOffset());
        serverSock.socket().bind(addr,getAcceptCount());
    } else {
        // Retrieve the channel provided by the OS
        Channel ic = System.inheritedChannel();
        if (ic instanceof ServerSocketChannel) {
            serverSock = (ServerSocketChannel) ic;
        }
        if (serverSock == null) {
            throw new IllegalArgumentException(sm.getString("endpoint.init.bind.inherited"));
        }
    }
    serverSock.configureBlocking(true); //mimic APR behavior
}


From this we can see two things:

  1. The second parameter of the bind method is the length of the operating-system-level queue. Even when Tomcat itself no longer accepts connections (the configured maximum number of connections has been reached), the operating system can still accept them; those connections are put into this queue, and its size is set by this parameter.
  2. The ServerSocketChannel is set to blocking mode, i.e. connections are accepted in a blocking way. You may wonder: in everyday NIO programming, aren't channels always set to non-blocking mode? The explanation: if it were set to non-blocking mode, a Selector would have to keep polling it, whereas for accepting connections it is enough to block on this single channel.

Note that the PollerEvent objects generated by each Acceptor are put into the queue of a Poller picked in turn from the Poller array; the specific code is shown below. The Poller's queue is a SynchronizedQueue<PollerEvent>, because multiple Acceptors may be putting PollerEvent objects into the same Poller's queue at the same time.

public Poller getPoller0() {
    if (pollerThreadCount == 1) {
        return pollers[0];
    } else {
        int idx = Math.abs(pollerRotater.incrementAndGet()) % pollers.length;
        return pollers[idx];
    }
}


What are connections at the operating-system level? During the TCP three-way handshake, the system usually maintains two queues for each Socket in the LISTEN state: a half-open (SYN) queue, holding connections for which a SYN has been received from the client; and a fully-established (ACCEPT) queue, holding connections that have received the client's ACK and completed the three-way handshake, waiting for the application to take them out with an accept call.

All Acceptors share this listening connection. Below is some important code from Acceptor's run method.

@Override
public void run() {
    // Loop until we receive a shutdown command
    while (endpoint.isRunning()) {
        try {
            // If the maximum connection count has been reached, the thread waits here
            endpoint.countUpOrAwaitConnection();
            U socket = null;
            try {
                // Call accept to obtain a new connection
                socket = endpoint.serverSocketAccept();
            } catch (Exception ioe) {
                // On error, decrement the current connection count by 1
                endpoint.countDownConnection();
            }
            // Configure the socket
            if (endpoint.isRunning() && !endpoint.isPaused()) {
                // setSocketOptions() will hand the socket off to
                // an appropriate processor if successful
                if (!endpoint.setSocketOptions(socket)) {
                    endpoint.closeSocket(socket);
                }
            } else {
                endpoint.destroySocket(socket);
            }
        } catch (Throwable t) {
            // ... error handling omitted from this excerpt
        }
    }
}


Two points to note here:

  1. It first checks whether the maximum number of connections has been reached; if so, the thread blocks and waits. This check is what the LimitLatch component described above is for.
  2. The most important step is configuring the socket, i.e. the endpoint.setSocketOptions(socket) call.
 protected boolean setSocketOptions(SocketChannel socket) {
        // Process the connection
        try {
            // Set the socket to non-blocking mode so the Poller can work with it
            socket.configureBlocking(false);
            Socket sock = socket.socket();
            socketProperties.setProperties(sock);

            NioChannel channel = null;
            if (nioChannels != null) {
                channel = nioChannels.pop();
            }
            if (channel == null) {
                SocketBufferHandler bufhandler = new SocketBufferHandler(
                        socketProperties.getAppReadBufSize(),
                        socketProperties.getAppWriteBufSize(),
                        socketProperties.getDirectBuffer());
                if (isSSLEnabled()) {
                    channel = new SecureNioChannel(socket, bufhandler, selectorPool, this);
                } else {
                    channel = new NioChannel(socket, bufhandler);
                }
            } else {
                channel.setIOChannel(socket);
                channel.reset();
            }
            // Register the channel event: it is actually put into the Poller's event queue, from which the Poller later takes it
            getPoller0().register(channel);
        } catch (Throwable t) {
            ExceptionUtils.handleThrowable(t);
            try {
                log.error(sm.getString("endpoint.socketOptionsError"), t);
            } catch (Throwable tt) {
                ExceptionUtils.handleThrowable(tt);
            }
            // Tell to close the socket
            return false;
        }
        return true;
    }


In fact there is no fixed binding between an Acceptor and a Poller; the two components communicate through queues. Each Poller maintains a SynchronizedQueue; PollerEvents are put into this queue, and the Poller then takes the events out of the queue and consumes them.
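
To illustrate this hand-off pattern, here is a simplified sketch using plain JDK classes rather than Tomcat's SynchronizedQueue and PollerEvent: the Acceptor side enqueues an event for the chosen Poller, and the Poller drains its queue at the top of every loop iteration.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Simplified Acceptor -> Poller hand-off (illustration only, not Tomcat code).
class PollerQueueSketch {
    private final Queue<Runnable> events = new ConcurrentLinkedQueue<>();

    // Acceptor side: wrap the freshly accepted channel as an event and enqueue it.
    // In Tomcat this corresponds to Poller.register(channel), which also wakes up the Selector.
    void register(Runnable pollerEvent) {
        events.offer(pollerEvent);
    }

    // Poller side: called at the top of each selector-loop iteration (the events() call in run()).
    boolean events() {
        boolean hasEvents = false;
        Runnable event;
        while ((event = events.poll()) != null) {
            hasEvents = true;
            event.run();   // in Tomcat, running the event registers the channel with the Selector
        }
        return hasEvents;
    }
}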

Poller

Poller is an inner class of NioEndpoint, and it too implements the Runnable interface. In its class definition we can see that it maintains a queue and a Selector, defined as follows. So essentially a Poller is a Selector.

private Selector selector;
private final SynchronizedQueue<PollerEvent> events = new SynchronizedQueue<>();

The focus is its run method; some code has been removed here to show only the important parts.

  @Override
        public void run() {
            // Loop until destroy() is called
            while (true) {
                boolean hasEvents = false;
                try {
                    if (!close) {
                        // Check whether new connections have come in; if so, register their channels with the Selector
                        hasEvents = events();
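                        // (the select call that assigns keyCount has been omitted from this excerpt)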
                    }
                    if (close) {
                        events();
                        timeout(0, false);
                        try {
                            selector.close();
                        } catch (IOException ioe) {
                            log.error(sm.getString("endpoint.nio.selectorCloseFail"), ioe);
                        }
                        break;
                    }
                } catch (Throwable x) {
                    ExceptionUtils.handleThrowable(x);
                    log.error(sm.getString("endpoint.nio.selectorLoopError"), x);
                    continue;
                }
                if (keyCount == 0) {
                    hasEvents = (hasEvents | events());
                }
                Iterator<SelectionKey> iterator =
                    keyCount > 0 ? selector.selectedKeys().iterator() : null;
                // Walk through the collection of ready keys and dispatch
                // any active event.
                while (iterator != null && iterator.hasNext()) {
                    SelectionKey sk = iterator.next();
                    NioSocketWrapper socketWrapper = (NioSocketWrapper) sk.attachment();
                    // Attachment may be null if another thread has called
                    // cancelledKey()
                    if (socketWrapper == null) {
                        iterator.remove();
                    } else {
                        iterator.remove();
                        processKey(sk, socketWrapper);
                    }
                }

                // Process timeouts
                timeout(keyCount,hasEvents);
            }

            getStopLatch().countDown();
        }


The key call here is the events() method, which keeps checking whether there are PollerEvents in the queue. If there are, it takes them out, extracts the Channel inside and registers it with the Selector; the loop then continuously polls all registered Channels to see whether any events have occurred.

SocketProcessor

We know that when the Poller detects an event on a Channel during polling, it wraps the event up and hands it to the thread pool for execution. That wrapper class is SocketProcessor. Opening this class, we can see that it implements the Runnable interface, which defines the task to be run by the threads of the Executor thread pool. So how is the byte stream in the Channel turned into the ServletRequest object Tomcat needs? It calls Http11Processor to convert the byte stream into the request object.
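
Conceptually, the dispatch looks something like the sketch below. This is not Tomcat's actual inner class: ConnectionHandler here is a hypothetical stand-in for Tomcat's protocol handler (the component that ultimately drives Http11Processor); the point is only that the wrapper is a Runnable handed to the Executor.

import java.nio.channels.SocketChannel;
import java.util.concurrent.Executor;

// Conceptual sketch of the Poller -> Executor dispatch (not Tomcat's exact code).
final class SocketProcessorSketch implements Runnable {
    interface ConnectionHandler {              // hypothetical stand-in for Tomcat's handler
        void process(SocketChannel socket);
    }

    private final SocketChannel socket;
    private final ConnectionHandler handler;

    SocketProcessorSketch(SocketChannel socket, ConnectionHandler handler) {
        this.socket = socket;
        this.handler = handler;
    }

    @Override
    public void run() {
        // The handler parses the byte stream; in Tomcat, HTTP/1.1 parsing is done by Http11Processor.
        handler.process(socket);
    }

    // Poller side, once a channel is ready:
    static void dispatch(Executor executor, SocketChannel socket, ConnectionHandler handler) {
        executor.execute(new SocketProcessorSketch(socket, handler));
    }
}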

Executor

Executor is in fact Tomcat's customized version of a thread pool. From its class definition we can see that it actually extends Java's own Executor.

public interface Executor extends java.util.concurrent.Executor, Lifecycle


The two most important parameters when executing tasks are the core pool size and the maximum pool size. A normal Java thread pool works like this (a small runnable JDK example follows the list):

  1. If the current number of threads is smaller than the core pool size, a new thread is created for the incoming task.
  2. If the current number of threads has reached the core pool size, the task is put into the task queue, and all threads compete for tasks from it.
  3. If the queue is full, temporary (non-core) threads start to be created.
  4. If the total number of threads has reached the maximum pool size and the queue is full, an exception is thrown.
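
A small runnable JDK example (not Tomcat code) that walks through the four steps above; the pool sizes and the queue length are chosen only to trigger each branch quickly.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class JdkPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2,                                   // core = 1, max = 2
                60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1));           // bounded queue of size 1

        Runnable sleepy = () -> {
            try { Thread.sleep(5_000); } catch (InterruptedException ignored) { }
        };

        pool.execute(sleepy);       // step 1: taken by the core thread
        pool.execute(sleepy);       // step 2: queued
        pool.execute(sleepy);       // step 3: queue full, temporary thread created (pool now at max)
        try {
            pool.execute(sleepy);   // step 4: queue full and max threads reached -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: " + e);
        }
        pool.shutdown();
    }
}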

But Tomcat's custom thread pool is different: it overrides the execute method to implement its own task-handling logic.

  1. If the current number of threads is smaller than the core pool size, a new thread is created for the incoming task.
  2. If the current number of threads has reached the core pool size, the task is put into the task queue, and all threads compete for tasks from it.
  3. If the queue is full, temporary threads start to be created.
  4. If the total number of threads has reached the maximum pool size, it tries once more to add the task to the task queue.
  5. If the queue is still full at that point, an exception is thrown.

The difference lies in step 4: the native thread pool's policy is to throw an exception as soon as the number of threads has reached the maximum and the queue is full, whereas Tomcat tries again in that situation and only throws if the queue is still full. Below is the execute implementation of the custom thread pool.

public void execute(Runnable command, long timeout, TimeUnit unit) {
    submittedCount.incrementAndGet();
    try {
        super.execute(command);
    } catch (RejectedExecutionException rx) {
        if (super.getQueue() instanceof TaskQueue) {
            //obtain the task queue
            final TaskQueue queue = (TaskQueue)super.getQueue();
            try {
                if (!queue.force(command, timeout, unit)) {
                    submittedCount.decrementAndGet();
                    throw new RejectedExecutionException(sm.getString("threadPoolExecutor.queueFull"));
                }
            } catch (InterruptedException x) {
                submittedCount.decrementAndGet();
                throw new RejectedExecutionException(x);
            }
        } else {
            submittedCount.decrementAndGet();
            throw rx;
        }

    }
}

In this code we can see the line submittedCount.incrementAndGet();. Why is it there? Look at the definition of this field: simply put, it counts the tasks that have been submitted to the thread pool but have not yet finished executing.

/**
 * The number of tasks submitted but not yet finished. This includes tasks
 * in the queue and tasks that have been handed to a worker thread but the
 * latter did not start executing the task yet.
 * This number is always greater or equal to {@link #getActiveCount()}.
 */
private final AtomicInteger submittedCount = new AtomicInteger(0);


Why is such a counter needed? The custom queue extends LinkedBlockingQueue, which by default is effectively unbounded; a maxQueueSize parameter can be passed to the queue's constructor, but in Tomcat the task queue is unbounded by default. That creates a problem: once the number of threads reaches the core pool size, tasks start being added to the queue, and adding will always succeed, so no new thread would ever be created. So under what circumstances should a new thread be created?

A thread pool creates new threads in two places: when the number of threads is below the core pool size, one thread is created per incoming task; and when the number of threads is at or above the core pool size and the task queue is full, a temporary thread is created.

So how do we make the task queue count as full? Setting a maximum queue length would of course work, but Tomcat does not set one by default, so the queue is effectively unbounded. Instead, Tomcat's TaskQueue extends LinkedBlockingQueue and overrides the offer method, defining in it when to return false.

@Override
public boolean offer(Runnable o) {
    if (parent==null) return super.offer(o);
    //If the current number of threads equals the maximum, no new thread can be created, so the task can only go into the queue
    if (parent.getPoolSize() == parent.getMaximumPoolSize()) return super.offer(o);
    //If the number of submitted-but-unfinished tasks is no greater than the current number of threads, the pool can keep up, so put the task in the queue
    if (parent.getSubmittedCount()<=(parent.getPoolSize())) return super.offer(o);
    //At this point the submitted-but-unfinished tasks outnumber the current threads; if the thread count is below the maximum, return false so that a new thread is created
    if (parent.getPoolSize()<parent.getMaximumPoolSize()) return false;
    return super.offer(o);
}


That is the purpose of submittedCount: even though the task queue has no effective length limit, the pool still gets a chance to create new threads. For example, with a core size of 10 and a maximum of 200, if all 10 threads are busy and an 11th task arrives, submittedCount (11) is greater than the pool size (10) while the pool size is still below the maximum, so offer returns false and the pool creates a new thread instead of queueing the task.

Summary

Some of the knowledge above comes from reading teacher Li Haoshuang's in-depth course on dismantling Tomcat, combined with my own reading of the source code to deepen the understanding. When I first read the articles I felt I understood them, but later, digging into the source, I found I did not. If you only read without applying the knowledge, it will never really become yours. Through studying the source of this small part of Tomcat, the connector, I saw the practical application of a lot of common knowledge: AQS, the use of locks, the considerations behind a custom thread pool, the application of NIO, and so on. There is also the overall design thinking and the modular design to learn from; it feels very similar to microservices, where the interior is split by function into various modules, so that any of them can easily be replaced or upgraded in the future.

Past Articles

How to debug the Tomcat source code with breakpoints

Sike Tomcat series (1) - Overall architecture

A strange journey of troubleshooting a StackOverflowError

Hand-writing a simple RPC framework

Hand-writing a simple RPC framework (2) - refactoring the project

Hand-writing a simple IOC


Origin: juejin.im/post/5d119281518825431f1622da