Coyote 1 - Web Request Network Model Design in Tomcat

Welcome, everyone, to follow github.com/hsfxuebao; I hope it is useful to you. If you find it worthwhile, please click Star.

First, let's look at the layer diagram of Tomcat's modules: (figure: Tomcat module layers)

Catalina is the servlet container implementation provided by Tomcat; it is responsible for processing client requests and generating responses. However, a servlet container alone cannot provide services to the outside world: a connector must accept the client's request, parse it according to the configured protocol (such as HTTP), and then hand it to the servlet container for processing. The servlet container and the connector are Tomcat's two most central components, and together they form the foundation of a Java application server.

This chapter mainly introduces the connector implementation provided by Tomcat, including the supported protocols and I/O modes.

1. Introduction to Coyote

Coyote is the name of Tomcat's connector framework; it is the external interface the Tomcat server exposes for client access. Clients establish a connection with the server through Coyote, send their requests, and receive the responses.

Coyote encapsulates the underlying network communication (socket request and response handling), provides a unified interface to the Catalina container, and decouples Catalina from specific request protocols and I/O modes. Coyote converts the socket input into a Request object and hands it to the Catalina container for processing; once the request has been processed, Catalina writes the result to the output stream through the Response object provided by Coyote.

Coyote is an independent module that is responsible only for handling specific protocols and I/O; it has no direct relation to the Servlet specification. Even its Request and Response objects do not implement the interfaces defined by the Servlet specification; instead, Catalina wraps them as ServletRequest and ServletResponse.
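To make this decoupling concrete, here is a minimal, purely illustrative sketch of the wrapping idea. The class names below are hypothetical; in Tomcat the real pair is org.apache.coyote.Request (protocol-level) and org.apache.catalina.connector.Request (which implements HttpServletRequest and delegates to it).

final class CoyoteStyleRequest {          // stands in for org.apache.coyote.Request
    String method;
    String uri;
}

final class CatalinaStyleRequest {        // stands in for org.apache.catalina.connector.Request
    private final CoyoteStyleRequest coyote;

    CatalinaStyleRequest(CoyoteStyleRequest coyote) {
        this.coyote = coyote;
    }

    // Servlet-like accessors simply delegate to the protocol-level object
    String getMethod()     { return coyote.method; }
    String getRequestURI() { return coyote.uri; }
}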

The interaction between Coyote and Catalina is shown in the figure: (figure: Coyote and Catalina interaction)

In Coyote, Tomcat supports the following three transport protocols:

  • HTTP/1.1 protocol: the access protocol used by the vast majority of web applications, mainly used when Tomcat runs standalone (not integrated with a web server).
  • AJP protocol: used to integrate with web servers (such as Apache HTTP Server) for static resource optimization and cluster deployment; AJP/1.3 is currently supported.
  • HTTP/2.0 protocol: the next generation of the HTTP protocol, supported since Tomcat 8.5 and 9.0. As of this writing, the latest versions of all mainstream browsers support HTTP/2.0.

For the HTTP and AJP protocols, Coyote offers different options by I/O mode (as of versions 8.5.0/9.0, Tomcat has removed support for BIO):

  • NIO: implemented with the Java NIO class library.
  • NIO2: implemented with the NIO2 class library introduced in JDK 7.
  • APR: implemented with APR (the Apache Portable Runtime). APR is a native library written in C/C++; choosing this option requires installing the APR library separately.

We can use a simple layered view to describe Tomcat's support for protocols and I/O modes, as shown in the figure. (figure: protocol and I/O layering) Before 8.0, Tomcat's default I/O mode was BIO; it was later changed to NIO. NIO, NIO2, and APR all outperform the old BIO. With APR, response performance can even approach that of Apache HTTP Server.

In Coyote, HTTP/2.0 is handled differently from HTTP/1.1 and AJP: it is implemented via a protocol-upgrade mechanism, which is dictated by HTTP/2.0's transport scheme. We will come back to this shortly.

2. Core Components

Let's first clarify the main concepts involved in the connector. The Connector interfaces are shown in the figure: (figure: Connector interface diagram)

The figure does not list every implementation class; it uses interfaces and abstract classes to show the Connector's core concepts and their dependencies (the Handler reference actually lives in the concrete AbstractEndpoint implementations; it is drawn on AbstractEndpoint only to make the dependency visible). The Connector involves the following core concepts.

  • Endpoint: Coyote's communication endpoint, i.e. the listening interface for communication. It is the concrete socket receiving and processing class, an abstraction of the transport layer. Tomcat does not define an Endpoint interface; instead it provides the abstract class AbstractEndpoint. Depending on the I/O mode, three implementations are provided: NioEndpoint (NIO), AprEndpoint (APR), and Nio2Endpoint (NIO2); versions up to and including 8.0 also had JIoEndpoint (BIO).

  • Processor: Coyote's protocol processing interface. It constructs the Request and Response objects and submits them to the Catalina container through an Adapter; it is an abstraction of the application layer. A Processor is single-threaded, and Tomcat reuses the same Processor within a single connection. Tomcat provides three implementations by protocol: Http11Processor (HTTP/1.1), AjpProcessor (AJP), and StreamProcessor (HTTP/2.0). In addition, it provides two implementations for upgrade-protocol handling: UpgradeProcessorInternal and UpgradeProcessorExternal; the former handles internally supported upgrade protocols (such as HTTP/2.0 and WebSocket), the latter handles externally extended upgrade protocols.

  • ProtocolHandler: Coyote's protocol interface. By wrapping an Endpoint and a Processor, it implements the processing for a specific protocol. Tomcat provides six implementations by protocol and I/O mode: Http11NioProtocol, Http11AprProtocol, Http11Nio2Protocol, AjpNioProtocol, AjpAprProtocol, and AjpNio2Protocol. When configuring a connector in $CATALINA_BASE/conf/server.xml, we must at least specify the concrete ProtocolHandler (we can also specify a protocol name such as "HTTP/1.1": if APR is installed on the server, Http11AprProtocol is used, otherwise Http11NioProtocol; Tomcat 7 and earlier would use Http11Protocol). A configuration sketch follows this list.

  • UpgradeProtocol: Tomcat uses the UpgradeProtocol interface to represent HTTP upgrade protocols. Currently only one implementation is provided (Http2Protocol), for handling HTTP/2.0. It creates an UpgradeToken from the request for upgrade processing; the token contains the concrete HTTP upgrade handler, an HttpUpgradeHandler, whose HTTP/2.0 implementation is Http2UpgradeHandler. WebSocket in Tomcat is also implemented through the UpgradeToken mechanism.
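To see how a ProtocolHandler is selected in practice (see the ProtocolHandler item above), here is a minimal embedded-Tomcat sketch. It is only an assumption-laden illustration: it requires the tomcat-embed-core dependency, the port is arbitrary, and pinning the handler class is optional (passing "HTTP/1.1" lets Tomcat choose APR or NIO automatically).

import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;

public class ConnectorSelectionExample {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        // Explicitly pin the NIO-based HTTP/1.1 ProtocolHandler;
        // new Connector("HTTP/1.1") would auto-select instead
        Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
        connector.setPort(8080);
        tomcat.getService().addConnector(connector);
        tomcat.start();
        tomcat.getServer().await();
    }
}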

3. Connector Startup

The Tomcat startup framework diagram:

(figure: Tomcat startup flow)

Let's look at StandardService.startInternal():

protected void startInternal() throws LifecycleException {

    ...
    // Start the mapperListener
    mapperListener.start();

    // Start our defined Connectors second
    synchronized (connectorsLock) {
        for (Connector connector: connectors) {
            // If it has already failed, don't try and start it
            if (connector.getState() != LifecycleState.FAILED) {
                connector.start();
            }
        }
    }
}

Next, the call chain is Connector.startInternal() -> ProtocolHandler.start() -> AbstractProtocol.start() -> AbstractEndpoint.start(). Let's focus on NioEndpoint as the example:

public void startInternal() throws Exception {

    if (!running) {
        // Mark the endpoint as running
        running = true;
        paused = false;

        if (socketProperties.getProcessorCache() != 0) {
            processorCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                    socketProperties.getProcessorCache());
        }
        if (socketProperties.getEventCache() != 0) {
            eventCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                    socketProperties.getEventCache());
        }
        if (socketProperties.getBufferPool() != 0) {
            nioChannels = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                    socketProperties.getBufferPool());
        }

        // Create worker collection
        // Create the worker thread pool (similar to Netty's boss/worker groups)
        if (getExecutor() == null) {
            // By default the worker pool keeps 10 core threads waiting for work
            createExecutor();
        }

        // Connection limit: at most 8 * 1024 connections by default
        initializeConnectionLatch();

        // Start poller thread
        // Start the Poller thread (a single thread)
        poller = new Poller();
        Thread pollerThread = new Thread(poller, getName() + "-Poller");
        pollerThread.setPriority(threadPriority);
        pollerThread.setDaemon(true);
        pollerThread.start();

        // Start the Acceptor thread
        startAcceptorThread();
    }
}

protected void startAcceptorThread() {
    // The Acceptor is a single thread
    acceptor = new Acceptor<>(this);
    String threadName = getName() + "-Acceptor";
    acceptor.setThreadName(threadName);
    Thread t = new Thread(acceptor, threadName);
    t.setPriority(getAcceptorThreadPriority());
    t.setDaemon(getDaemon());
    t.start();
}
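A side note on initializeConnectionLatch() above: it creates Tomcat's LimitLatch, which caps the number of concurrent connections (maxConnections, 8 * 1024 by default). A rough JDK-only sketch of the same idea, using a Semaphore in place of the real LimitLatch:

import java.util.concurrent.Semaphore;

public class ConnectionLimitSketch {
    // 8 * 1024 mirrors Tomcat's default maxConnections
    private final Semaphore permits = new Semaphore(8 * 1024);

    // Called by the Acceptor before accept(); blocks once the limit is
    // reached, like LimitLatch.countUpOrAwaitConnection()
    void beforeAccept() throws InterruptedException {
        permits.acquire();
    }

    // Called when a connection is closed, like LimitLatch.countDownConnection()
    void onConnectionClosed() {
        permits.release();
    }
}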

Here, three kinds of threads are started: the Acceptor thread, the Poller thread, and the Worker threads. Let's look at each of them in turn and at how they relate to one another.

3.1 The Acceptor Thread (a single thread)

The Acceptor thread's run() method:

public void run() {

    int errorDelay = 0;
    long pauseStart = 0;

    try {
        // Loop until we receive a shutdown command
        while (!stopCalled) {
            ...
            // Check whether the endpoint is paused
            while (endpoint.isPaused() && !stopCalled) {
                ...
            }

            ...
            try {
                ...
                U socket = null;
                try {
                    // Accept the next incoming connection from the server
                    // socket
                    // Loop on serverSock.accept(), receiving new connections on port 8080 in this background thread
                    socket = endpoint.serverSocketAccept();
                } catch (Exception ioe) {
                   
                }
                // Successful accept, reset the error delay
                errorDelay = 0;

                // Configure the socket
                if (!stopCalled && !endpoint.isPaused()) {
                    // setSocketOptions() will hand the socket off to
                    // an appropriate processor if successful
                    // The Acceptor wraps the accepted socket and hands it off
                    if (!endpoint.setSocketOptions(socket)) {
                        endpoint.closeSocket(socket);
                    }
                } else {
                    endpoint.destroySocket(socket);
                }
            } catch (Throwable t) {
               ...
            }
        }
    } finally {
        stopLatch.countDown();
    }
    state = AcceptorState.ENDED;
}

protected boolean setSocketOptions(SocketChannel socket) {
    NioSocketWrapper socketWrapper = null;
    try {
        ...
        // Wrap the channel as a NioSocketWrapper
        NioSocketWrapper newWrapper = new NioSocketWrapper(channel, this);
        channel.reset(socket, newWrapper);
        connections.put(socket, newWrapper);
        socketWrapper = newWrapper;

        // Set socket properties
        // Disable blocking, polling will be used
        socket.configureBlocking(false);
        if (getUnixDomainSocketPath() == null) {
            socketProperties.setProperties(socket.socket());
        }

        socketWrapper.setReadTimeout(getConnectionTimeout());
        socketWrapper.setWriteTimeout(getConnectionTimeout());
        socketWrapper.setKeepAliveLeft(NioEndpoint.this.getMaxKeepAliveRequests());
        // Register the socket with the Poller
        poller.register(socketWrapper);
        return true;
    } catch (Throwable t) {
      
    }
    // Tell to close the socket if needed
    return false;
}

public void register(final NioSocketWrapper socketWrapper) {
    // Register interest in read events
    socketWrapper.interestOps(SelectionKey.OP_READ);//this is what OP_REGISTER turns into.
    PollerEvent event = null;
    if (eventCache != null) {
        event = eventCache.pop();
    }
    if (event == null) {
        event = new PollerEvent(socketWrapper, OP_REGISTER);
    } else {
        event.reset(socketWrapper, OP_REGISTER);
    }
    // Add the event to the Poller's event queue
    addEvent(event);
}

The Acceptor thread accepts connections on port 8080, then calls endpoint.setSocketOptions(socket) to wrap the socket as a NioSocketWrapper, register interest in read events, and add the registration as a PollerEvent to the events queue.

3.2 The Poller Thread (a single thread)

Next, the Poller's run() method:

public void run() {
    // Loop until destroy() is called
    while (true) {

        boolean hasEvents = false;

        try {
            if (!close) {
                // The Poller first processes any queued events
                hasEvents = events();
               ...
            }
            ...
        } 
        ...

        Iterator<SelectionKey> iterator =
            keyCount > 0 ? selector.selectedKeys().iterator() : null;
        // Walk through the collection of ready keys and dispatch
        // any active event.
        while (iterator != null && iterator.hasNext()) {
            SelectionKey sk = iterator.next();
            iterator.remove();
            NioSocketWrapper socketWrapper = (NioSocketWrapper) sk.attachment();
            // Attachment may be null if another thread has called
            // cancelledKey()
            if (socketWrapper != null) {
                // A ready event has arrived; process it
                processKey(sk, socketWrapper);
            }
        }
        ...
    }
}

protected void processKey(SelectionKey sk, NioSocketWrapper socketWrapper) {
    try {
        if (close) {
            cancelledKey(sk, socketWrapper);
        } else if (sk.isValid()) {
            if (sk.isReadable() || sk.isWritable()) {
                if (socketWrapper.getSendfileData() != null) {
                    processSendfile(sk, socketWrapper, false);
                } else {
                    // Unregister interest so the same data is not read twice
                    unreg(sk, socketWrapper, sk.readyOps());
                    boolean closeSocket = false;
                    // Read goes before write
                    if (sk.isReadable()) {
                        ...
                        // Hand off for processing
                        } else if (!processSocket(socketWrapper, SocketEvent.OPEN_READ, true)) {
                            closeSocket = true;
                        }
                    }
                    ...
                }
            }
        } else {
            // Invalid key
            cancelledKey(sk, socketWrapper);
        }
    } 
    ...
}

public boolean processSocket(SocketWrapperBase<S> socketWrapper,
        SocketEvent event, boolean dispatch) {
    try {
        if (socketWrapper == null) {
            return false;
        }
        // Obtain a SocketProcessorBase, reusing one from the cache if possible
        SocketProcessorBase<S> sc = null;
        if (processorCache != null) {
            sc = processorCache.pop();
        }
        if (sc == null) {
            sc = createSocketProcessor(socketWrapper, event);
        } else {
            // Reset the cached processor with this socketWrapper and event
            sc.reset(socketWrapper, event);
        }

        // The worker thread pool
        Executor executor = getExecutor();
        if (dispatch && executor != null) {
            executor.execute(sc);
        } else {
            sc.run();
        }
    } 
    ...
    return true;
}

The Poller thread takes registrations from the events queue, detects when sockets become ready, wraps each ready socket in a SocketProcessorBase, and hands it to the executor (commonly called the worker pool) for processing.

3.3 The Worker Thread Pool (10 core threads by default)
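Before tracing the processing path, it helps to see roughly what createExecutor() (called from startInternal() above) builds. The sketch below is a simplified JDK-only approximation: real Tomcat uses its own org.apache.tomcat.util.threads.ThreadPoolExecutor together with a TaskQueue that prefers spawning threads up to maxThreads before queueing, which the unbounded JDK queue used here cannot do.

import java.util.concurrent.Executor;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class WorkerPoolSketch {
    // Roughly what AbstractEndpoint.createExecutor() sets up: 10 core threads
    // (the minSpareThreads default) and a 200-thread ceiling (the maxThreads default)
    static Executor createWorkerPool(String endpointName) {
        ThreadFactory tf = r -> {
            Thread t = new Thread(r, endpointName + "-exec");
            t.setDaemon(true);
            return t;
        };
        return new ThreadPoolExecutor(10, 200, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(), tf);
    }
}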

Now let's trace SocketProcessorBase's run() method:

public final void run() {
    synchronized (socketWrapper) {
        // It is possible that processing may be triggered for read and
        // write at the same time. The sync above makes sure that processing
        // does not occur in parallel. The test below ensures that if the
        // first event to be processed results in the socket being closed,
        // the subsequent events are not processed.
        if (socketWrapper.isClosed()) {
            return;
        }
        doRun();
    }
}

protected abstract void doRun();

Then into NioEndpoint.SocketProcessor.doRun():

protected void doRun() {
    ...
    try {
        int handshake = -1;
        try { // TLS handshake (for HTTPS)
            if (socketWrapper.getSocket().isHandshakeComplete()) {
                // No TLS handshaking required. Let the handler
                // process this socket / event combination.
                handshake = 0;
            } 
        }
        ...
        catch (CancelledKeyException ckx) {
            handshake = -1;
        }
        if (handshake == 0) {
            SocketState state = SocketState.OPEN;
            // Process the request from this socket
            if (event == null) {
                state = getHandler().process(socketWrapper, SocketEvent.OPEN_READ);
            } else {
                // Process the event
                state = getHandler().process(socketWrapper, event);
            }
            if (state == SocketState.CLOSED) {
                poller.cancelledKey(getSelectionKey(), socketWrapper);
            }
        } else if (handshake == -1 ) {
            ...
        }
        ...
    } ...
       
}

Next, the chain continues: AbstractEndpoint.process() -> AbstractProtocol.ConnectionHandler.process() -> AbstractProcessorLight.process():

public SocketState process(SocketWrapperBase<?> socketWrapper, SocketEvent status)
        throws IOException {

    SocketState state = SocketState.CLOSED;
    Iterator<DispatchType> dispatches = null;
    do {
        if (dispatches != null) {
            DispatchType nextDispatch = dispatches.next();
            if (getLog().isDebugEnabled()) {
                getLog().debug("Processing dispatch type: [" + nextDispatch + "]");
            }
            state = dispatch(nextDispatch.getSocketStatus());
            if (!dispatches.hasNext()) {
                state = checkForPipelinedData(state, socketWrapper);
            }
        } else if (status == SocketEvent.DISCONNECT) {
            // Do nothing here, just wait for it to get recycled
        } else if (isAsync() || isUpgrade() || state == SocketState.ASYNC_END) {
            state = dispatch(status);
            state = checkForPipelinedData(state, socketWrapper);
        } else if (status == SocketEvent.OPEN_WRITE) {
            // Extra write event likely after async, ignore
            state = SocketState.LONG;
        } else if (status == SocketEvent.OPEN_READ) {
            // A read event: service the request
            state = service(socketWrapper);
        } else if (status == SocketEvent.CONNECT_FAIL) {
            logAccess(socketWrapper);
        } else {
            // Default to closing the socket if the SocketEvent passed in
            // is not consistent with the current state of the Processor
            state = SocketState.CLOSED;
        }

        if (getLog().isDebugEnabled()) {
            getLog().debug("Socket: [" + socketWrapper +
                    "], Status in: [" + status +
                    "], State out: [" + state + "]");
        }

        if (isAsync()) {
            state = asyncPostProcess();
            if (getLog().isDebugEnabled()) {
                getLog().debug("Socket: [" + socketWrapper +
                        "], State after async post processing: [" + state + "]");
            }
        }

        if (dispatches == null || !dispatches.hasNext()) {
            // Only returns non-null iterator if there are
            // dispatches to process.
            dispatches = getIteratorAndClearDispatches();
        }
    } while (state == SocketState.ASYNC_END ||
            dispatches != null && state != SocketState.CLOSED);

    return state;
}

Next we arrive at Http11Processor.service():

public SocketState service(SocketWrapperBase<?> socketWrapper)
    throws IOException {
    RequestInfo rp = request.getRequestProcessor();
    rp.setStage(org.apache.coyote.Constants.STAGE_PARSE);

    // Setting up the I/O
    setSocketWrapper(socketWrapper);

    ...
    while (!getErrorState().isError() && keepAlive && !isAsync() && upgradeToken == null &&
            sendfileState == SendfileState.DONE && !protocol.isPaused()) {

        // Parsing the request header
        try {
            // Parse the request line
            if (!inputBuffer.parseRequestLine(keptAlive, protocol.getConnectionTimeout(),
                    protocol.getKeepAliveTimeout())) {
                if (inputBuffer.getParsingRequestLinePhase() == -1) {
                    return SocketState.UPGRADING;
                } else if (handleIncompleteRequestLineRead()) {
                    break;
                }
            }

            // Process the Protocol component of the request line
            // Need to know if this is an HTTP 0.9 request before trying to
            // parse headers.
            prepareRequestProtocol();

            if (protocol.isPaused()) {
                // 503 - Service unavailable
                response.setStatus(503);
                setErrorState(ErrorState.CLOSE_CLEAN, null);
            } else {
                keptAlive = true;
                // Set this every time in case limit has been changed via JMX
                request.getMimeHeaders().setLimit(protocol.getMaxHeaderCount());
                // Don't parse headers for HTTP/0.9
                if (!http09 && !inputBuffer.parseHeaders()) {
                    // We've read part of the request, don't recycle it
                    // instead associate it with the socket
                    openSocket = true;
                    readComplete = false;
                    break;
                }
                if (!protocol.getDisableUploadTimeout()) {
                    socketWrapper.setReadTimeout(protocol.getConnectionUploadTimeout());
                }
            }
        } catch (IOException e) {
            if (log.isDebugEnabled()) {
                log.debug(sm.getString("http11processor.header.parse"), e);
            }
            setErrorState(ErrorState.CLOSE_CONNECTION_NOW, e);
            break;
        } catch (Throwable t) {
            ExceptionUtils.handleThrowable(t);
            UserDataHelper.Mode logMode = userDataHelper.getNextMode();
            if (logMode != null) {
                String message = sm.getString("http11processor.header.parse");
                switch (logMode) {
                    case INFO_THEN_DEBUG:
                        message += sm.getString("http11processor.fallToDebug");
                        //$FALL-THROUGH$
                    case INFO:
                        log.info(message, t);
                        break;
                    case DEBUG:
                        log.debug(message, t);
                }
            }
            // 400 - Bad Request
            response.setStatus(400);
            setErrorState(ErrorState.CLOSE_CLEAN, t);
        }

        ...

        // Process the request in the adapter
        if (getErrorState().isIoAllowed()) {
            try {
                rp.setStage(org.apache.coyote.Constants.STAGE_SERVICE);
                // The request object now carries the full request details
                getAdapter().service(request, response);
                // Handle when the response was committed before a serious
                // error occurred.  Throwing a ServletException should both
                // set the status to 500 and set the errorException.
                // If we fail here, then the response is likely already
                // committed, so we can't try and set headers.
                if(keepAlive && !getErrorState().isError() && !isAsync() &&
                        statusDropsConnection(response.getStatus())) {
                    setErrorState(ErrorState.CLOSE_CLEAN, null);
                }
            } 
            ...
        }
        ...
    }
}

From here the flow goes straight into CoyoteAdapter.service(); for the subsequent steps, see the Catalina article 5 - Web Request Processing (request mapping).

3.4 The Relationship Among the Three Thread Types

How the Acceptor thread, the Poller thread, and the Worker pool cooperate: the Acceptor accepts incoming connections and posts events to a queue; the Poller takes events from the queue and, once a socket is ready, hands it straight to the Worker pool for processing, as shown: (figure: Acceptor/Poller/Worker hand-off)

Acceptor thread steps:

  1. The Acceptor keeps accepting requests on port 8080 and calls endpoint.setSocketOptions(socket) for each one
  2. The socket is wrapped as a NioSocketWrapper
  3. The socketWrapper is registered with the Poller
  4. A PollerEvent is created and added to the event queue SynchronizedQueue<PollerEvent>

Poller thread steps:

  1. The Poller keeps checking for events via events.poll() and takes any that arrive
  2. It detects readiness on the socket and processes it via processSocket(socketWrapper)
  3. In processSocket() the Poller obtains the Worker thread pool; the socketWrapper is wrapped in a SocketProcessorBase, which is submitted straight to the pool
  4. The SocketProcessorBase is executed by one of the pool's threads and ends up in ConnectionHandler.process(), which delegates to Http11Processor.process()
  5. Http11Processor.service() takes over the socket and handles the request

Putting the above analysis together, Coyote's processing flow is as follows: (figure: Coyote processing flow)
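To make the three-way division of labour concrete, here is a stripped-down, illustrative sketch of the same Acceptor -> Poller -> Worker hand-off built directly on java.nio. It is not Tomcat code: real Tomcat adds socket wrappers, processor caches, PollerEvent recycling, timeouts, and much more.

import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MiniReactor {
    public static void main(String[] args) throws IOException {
        ExecutorService workers = Executors.newFixedThreadPool(10);      // the "Worker" pool
        Selector selector = Selector.open();
        Queue<SocketChannel> pending = new ConcurrentLinkedQueue<>();    // like the PollerEvent queue

        // Poller (single thread): drain registrations, then dispatch ready keys to workers
        Thread poller = new Thread(() -> {
            try {
                while (true) {
                    SocketChannel ch;
                    while ((ch = pending.poll()) != null) {
                        ch.register(selector, SelectionKey.OP_READ);     // like events()
                    }
                    if (selector.select(1000) == 0) {
                        continue;
                    }
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        key.interestOps(0);                              // like unreg(): avoid double reads
                        SocketChannel client = (SocketChannel) key.channel();
                        workers.execute(() -> handle(client));           // like processSocket()
                    }
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }, "Poller");
        poller.setDaemon(true);
        poller.start();

        // Acceptor (single thread): block on accept(), hand new sockets to the Poller
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        while (true) {
            SocketChannel client = server.accept();                      // like serverSocketAccept()
            client.configureBlocking(false);                             // like setSocketOptions()
            pending.add(client);                                         // like poller.register()
            selector.wakeup();
        }
    }

    // Worker: read and process the request (Http11Processor.service() in Tomcat)
    static void handle(SocketChannel client) {
        try {
            ByteBuffer buf = ByteBuffer.allocate(8192);
            client.read(buf);
            // ... parse the request bytes and write a response ...
        } catch (IOException ignored) {
        }
    }
}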

4. HTTP Request Processing Flow

(figure: HTTP request processing flow)

The figure above describes the Connector's request processing at the interface level. Let's walk through the detailed process, using HTTP as the example:

  1. When the Connector starts, it also starts the Endpoint instance it holds. The Endpoint runs several threads in parallel (the number is determined by the acceptorThreadCount attribute), each running an AbstractEndpoint.Acceptor instance. The Acceptor listens on the port (the concrete handling differs by I/O mode) and keeps looping as long as the Endpoint is running.

  2. When a request arrives, the Acceptor wraps the Socket as a SocketWrapper instance (no data has been read at this point) and hands it to a SocketProcessor object (this step is also handled asynchronously by the thread pool). The details differ by I/O mode; NIO, for example, polls to detect whether a SelectionKey is ready and, if so, obtains a valid SocketProcessor object and submits it to the thread pool.

  3. SocketProcessor is a thread-pool Worker task, with a separate implementation for each I/O mode. It first checks the state of the socket (for example, whether the SSL handshake has completed) and then submits it to the ConnectionHandler.

  4. ConnectionHandler is an inner class of AbstractProtocol; its main job is to select a suitable Processor implementation for the connection to handle the request.

    • For performance, it caches the Processor object of every active connection. Moreover, when a connection closes, its Processor is released into a recycling queue (upgrade Processors are not recycled) so that later connections can reset and reuse it, reducing object construction.

    • So when handling a request, it first looks up the connection's Processor in the cache. If there is none, it tries to construct a Processor based on the negotiated protocol (e.g. for an HTTP/2.0 request). If there is no negotiated protocol (e.g. an HTTP/1.1 request), it takes a released Processor from the recycling queue; if none is available there, the concrete protocol creates a new Processor (and registers it in the cache).

    • ConnectionHandler then calls Processor.process() to handle the request. If the request is not a protocol negotiation (e.g. an ordinary HTTP/1.1 request or an AJP request), the Processor calls CoyoteAdapter.service() directly to submit it to the Catalina container. If it is a protocol-negotiation request, the Processor returns SocketState.UPGRADING and the ConnectionHandler performs the protocol upgrade.

    Whether for HTTP/2.0 or WebSocket, the connection is first negotiated over HTTP/1.1: the server receives an HTTP/1.1 connection carrying special request headers, so it is still handled by Http11Processor, which returns SocketState.UPGRADING for negotiation requests and leaves the concrete upgrade to the ConnectionHandler (see the example exchange after this list).

  5. When the protocol is upgraded, the ConnectionHandler obtains an UpgradeToken object from the current Processor (defaulting to HTTP/2 if there is none) and constructs an upgrade Processor instance (UpgradeProcessorInternal for protocols Tomcat supports internally, such as HTTP/2 and WebSocket; otherwise UpgradeProcessorExternal). This replaces the current Processor, which is released for recycling. After the replacement, the upgrade Processor handles all subsequent processing on the connection.

  6. Initialization is performed through the init() method of the HttpUpgradeHandler object inside the UpgradeToken, getting ready to start the new protocol.
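As an illustration of the negotiation described above, a cleartext HTTP/2 (h2c) upgrade exchange looks roughly like this (per RFC 7540; the HTTP2-Settings value is a base64url-encoded HTTP/2 SETTINGS payload and is elided here):

GET /index.html HTTP/1.1
Host: example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url-encoded SETTINGS payload>

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c

(the connection then continues as HTTP/2 frames, handled in Tomcat by Http2UpgradeHandler)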

References

tomcat-9.0.60-src source code analysis
Tomcat architecture analysis
Tomcat series: Tomcat source code analysis articles
Tomcat source code articles - request Mapper

Origin: juejin.im/post/7085227989859827748