Geek Time: Tomcat & Jetty (02)

So what is a "shutdown hook", and what does it do? If we need to do some cleanup work when the JVM shuts down, such as flushing cached data to disk or deleting temporary files, we can register a "shutdown hook" with the JVM. A shutdown hook is actually a thread; the JVM tries to execute this thread's run method before it stops. Tomcat's shutdown hook simply executes the Server's stop method, and the Server's stop method releases and cleans up all resources.
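A minimal sketch of registering a shutdown hook with the standard Runtime API (the cleanup work shown is just a placeholder; Tomcat's own hook ends up calling Server.stop() as described above):

```java
// Register a JVM shutdown hook; the JVM runs this thread while shutting
// down (e.g. on Ctrl-C, SIGTERM or a normal System.exit()).
public class ShutdownHookDemo {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            // Cleanup work: flush caches to disk, delete temp files, etc.
            System.out.println("Shutdown hook running: releasing resources...");
        }));
        System.out.println("Application running, press Ctrl-C to stop.");
    }
}
```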

So what does the Engine do? We know that the most important function of a container component is processing requests, but the Engine container "processes" a request by forwarding it to a specific Host child container to handle; concretely this is done through the Engine's Valve, as the sketch below shows.
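A simplified sketch of that forwarding idea, loosely modeled on how Tomcat's StandardEngineValve works (Catalina / javax.servlet APIs of Tomcat 9 assumed; this is illustrative, not Tomcat's exact source):

```java
import java.io.IOException;

import javax.servlet.ServletException;

import org.apache.catalina.Host;
import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

// The Engine's basic Valve does not process the request itself; it only
// forwards it to the Host child container chosen during request mapping.
public class SimpleEngineValve extends ValveBase {
    @Override
    public void invoke(Request request, Response response)
            throws IOException, ServletException {
        // The Mapper has already stored the target Host on the request.
        Host host = request.getHost();
        if (host == null) {
            response.sendError(404);
            return;
        }
        // Hand the request to the Host's pipeline (its own chain of Valves).
        host.getPipeline().getFirst().invoke(request, response);
    }
}
```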

 

A Jetty Server can have multiple Connectors listening for client requests on different ports, and the component that processes the requests, the Handler, can be chosen according to the scenario. This design makes Jetty very flexible: if you need Servlet support, use a ServletHandler; if you need Session support, add a SessionHandler. In other words, if we use neither Servlets nor Sessions, we simply don't configure those Handlers.

In Tomcat each Connector has its own thread pool, but in Jetty all Connectors share one global thread pool, as in the sketch below.
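A rough sketch of an embedded Jetty setup (Jetty 9.x with javax.servlet APIs assumed; the ports, paths and HelloServlet are made up for illustration) showing both points: one QueuedThreadPool shared by everything, several Connectors on different ports, and only the Handlers you actually need (the SESSIONS option pulls in a SessionHandler, addServlet wires up a ServletHandler):

```java
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class EmbeddedJetty {
    public static void main(String[] args) throws Exception {
        // One global thread pool shared by all Connectors and Handlers.
        QueuedThreadPool threadPool = new QueuedThreadPool(200, 8);
        Server server = new Server(threadPool);

        // Multiple Connectors listening on different ports.
        ServerConnector http1 = new ServerConnector(server);
        http1.setPort(8080);
        ServerConnector http2 = new ServerConnector(server);
        http2.setPort(8081);
        server.addConnector(http1);
        server.addConnector(http2);

        // Only configure the Handlers you need: SESSIONS adds a SessionHandler,
        // addServlet wires a ServletHandler for Servlet support.
        ServletContextHandler context =
                new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.setContextPath("/");
        context.addServlet(HelloServlet.class, "/hello");
        server.setHandler(context);

        server.start();
        server.join();
    }

    // A trivial servlet, just so the example is self-contained.
    public static class HelloServlet extends javax.servlet.http.HttpServlet {
        @Override
        protected void doGet(javax.servlet.http.HttpServletRequest req,
                             javax.servlet.http.HttpServletResponse resp)
                throws java.io.IOException {
            resp.setContentType("text/plain");
            resp.getWriter().println("hello from Jetty");
        }
    }
}
```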

Jetty's Selectors are managed by the SelectorManager class, and a managed Selector is called a ManagedSelector. SelectorManager holds an array of ManagedSelectors internally, and it is the ManagedSelectors that do the real work.

SelectorManager picks a Selector from its own Selector array to handle the Channel, and creates an Accept task that it hands to that ManagedSelector.

First, the register method is called to register the Channel with the Selector, which yields a SelectionKey.

Next, an EndPoint and a Connection are created and bound to this SelectionKey (and thus to the Channel), roughly as in the sketch below.
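A hedged sketch of the plain java.nio pattern behind those two steps (this is the raw NIO API, not Jetty's ManagedSelector code; the object attached to the key is a stand-in for Jetty's EndPoint/Connection pair):

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class RegisterSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();

        ServerSocketChannel channel = ServerSocketChannel.open();
        channel.bind(new InetSocketAddress(0));   // ephemeral port
        channel.configureBlocking(false);

        // Step 1: register the Channel with the Selector; this yields a SelectionKey.
        SelectionKey key = channel.register(selector, SelectionKey.OP_ACCEPT);

        // Step 2: create the per-connection objects (in Jetty: an EndPoint and
        // a Connection) and tie them to this SelectionKey via its attachment,
        // so they can be retrieved when the Selector reports the key as ready.
        Object endPointPlaceholder = new Object();
        key.attach(endPointPlaceholder);

        System.out.println("Registered on " + channel.getLocalAddress()
                + ", attachment = " + key.attachment());

        channel.close();
        selector.close();
    }
}
```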

Under AbstractHandler there is AbstractHandlerContainer. Why do we need such a class? It is really a transitional class: to implement chained invocation, a Handler must internally hold references to other Handlers, which is why the word Container appears in the class name, meaning that this kind of Handler contains references to other Handlers.
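A hedged sketch of that chaining idea using Jetty's HandlerWrapper, which is one of AbstractHandlerContainer's subclasses (Jetty 9 API assumed; the timing logic is just for illustration): the wrapper holds a reference to the next Handler and delegates to it after doing its own work.

```java
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.handler.HandlerWrapper;

// A Handler that "contains" another Handler: it does its own work,
// then passes the request down the chain via the reference it holds.
public class TimingHandler extends HandlerWrapper {
    @Override
    public void handle(String target, Request baseRequest,
                       HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        long start = System.nanoTime();
        try {
            // super.handle() invokes the wrapped (next) Handler, if one is set.
            super.handle(target, baseRequest, request, response);
        } finally {
            System.out.println(target + " took "
                    + (System.nanoTime() - start) / 1_000_000 + " ms");
        }
    }
}
```

Building the chain is then just a matter of wiring the references, e.g. timingHandler.setHandler(servletHandler) and server.setHandler(timingHandler).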

Comparing this with Tomcat's architecture diagram, you can see that Jetty's Handler components and Tomcat's container components are roughly equivalent in concept: Jetty's WebAppContext corresponds to Tomcat's Context component, both representing a web application; and Jetty's ServletHandler corresponds to Tomcat's Wrapper component, being responsible for initializing and invoking Servlets and also implementing the Filter functionality.

Optimizing and speeding up Tomcat startup

1. Remove unnecessary web applications

2. Clean up the XML configuration files

3. Clean up JAR files

4. Clean up other files

Synchronous non-blocking I/O: the user thread keeps issuing read calls; while the data has not yet arrived in kernel space, each read returns a failure, until the data reaches kernel space. At that point the read call blocks while the data is copied from kernel space to user space, and the thread is woken up once the data has reached user space.

I/O multiplexing: the user thread's read operation is split into two steps. The thread first issues a select call, whose purpose is to ask the kernel whether the data is ready. Once the kernel has the data ready, the user thread issues the read call. While waiting for the data to be copied from kernel space to user space, the thread is blocked. Why is it called I/O multiplexing? Because a single select call can check the status of multiple data channels (Channels), hence "multiplexing".

Asynchronous I/O: the user thread issues a read call and registers a callback function at the same time; read returns immediately. When the kernel has the data ready, it invokes the registered callback to complete the processing. Throughout this process the user thread is never blocked.

The NIO API used without a Selector is synchronous non-blocking I/O; used with a Selector, it is I/O multiplexing.
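A minimal sketch of the Selector-based (I/O multiplexing) loop in plain java.nio, just to make the model concrete (the port number is arbitrary):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class MultiplexServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            // "select": ask the kernel which of the registered Channels are ready.
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // "read": data is already in kernel space; this call only
                    // covers the kernel-to-user-space copy.
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) == -1) {
                        client.close();
                    }
                }
            }
        }
    }
}
```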

How Tomcat implements asynchronous I/O

One obvious difference between Nio2Endpoint and NioEndpoint that you should note is that Nio2Endpoint has no Poller component, that is, no Selector. Why? Because in asynchronous I/O mode, the Selector's work is done by the kernel.
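A hedged sketch of the JDK's asynchronous I/O (NIO.2) style that Nio2Endpoint builds on: no Selector in our code, just CompletionHandler callbacks that the kernel (or the JDK's internal machinery, depending on the platform) invokes when the operation finishes. This is plain JDK API, not Tomcat's code; the port is arbitrary.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;

public class Nio2Sketch {
    public static void main(String[] args) throws Exception {
        AsynchronousServerSocketChannel server =
                AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(9001));

        // accept() returns immediately; the callback fires when a connection arrives.
        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override
            public void completed(AsynchronousSocketChannel client, Void att) {
                server.accept(null, this);              // keep accepting
                ByteBuffer buf = ByteBuffer.allocate(1024);
                // read() also returns immediately; no Selector is polled by our code.
                client.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
                    @Override
                    public void completed(Integer bytesRead, ByteBuffer b) {
                        System.out.println("read " + bytesRead + " bytes");
                    }
                    @Override
                    public void failed(Throwable exc, ByteBuffer b) {
                        exc.printStackTrace();
                    }
                });
            }
            @Override
            public void failed(Throwable exc, Void att) {
                exc.printStackTrace();
            }
        });

        Thread.currentThread().join();   // keep the JVM alive for the callbacks
    }
}
```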

When Tomcat's Endpoint component receives network data, it needs a pre-allocated Buffer, that is, a byte array byte[]. Java passes the address of this Buffer to C code through a JNI call, and the C code reads data from the Socket via operating-system APIs and fills it into the Buffer. The Java NIO API provides two kinds of Buffer for receiving data: HeapByteBuffer and DirectByteBuffer; the following code demonstrates how to create the two kinds of Buffer.
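A minimal sketch using the two standard java.nio.ByteBuffer factory methods:

```java
import java.nio.ByteBuffer;

public class BufferDemo {
    public static void main(String[] args) {
        // HeapByteBuffer: the backing byte[] lives on the JVM heap.
        ByteBuffer heapBuffer = ByteBuffer.allocate(1024);

        // DirectByteBuffer: the memory is allocated outside the JVM heap
        // (native memory); the object records its native address internally.
        ByteBuffer directBuffer = ByteBuffer.allocateDirect(1024);

        System.out.println("heap:   isDirect = " + heapBuffer.isDirect());
        System.out.println("direct: isDirect = " + directBuffer.isDirect());
    }
}
```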

 

So what is the difference between HeapByteBuffer and DirectByteBuffer? A HeapByteBuffer object itself is allocated on the JVM heap, and the byte array byte[] it holds is also allocated on the JVM heap. However, if a HeapByteBuffer is used to receive network data, the data first has to be copied from the kernel into a temporary piece of native memory, and then from that temporary native memory into the JVM heap, rather than directly from the kernel into the JVM heap. Why? Because while data is being copied from the kernel into the JVM heap, a GC may occur; during GC objects can be moved, which means the byte array on the JVM heap may be moved and the Buffer's address would become invalid. By going through native memory as an intermediate stop, the JVM can guarantee that no GC happens during the copy from native memory to the JVM heap.

So with HeapByteBuffer there is an extra staging step between the kernel and the JVM heap, and DirectByteBuffer exists to solve this problem. A DirectByteBuffer object itself lives on the JVM heap, but the byte array it holds is allocated not from the JVM heap but from native memory. A DirectByteBuffer object has a long field named address that records the native memory address, so when receiving data, this native memory address is passed directly to the C program, the C program copies the network data from the kernel into that native memory, and the JVM can then read the native memory directly. This saves one copy compared with HeapByteBuffer, so in general it is several times faster.

Using sendfile: when sending static files, the operating system's sendfile call lets the file data go from the kernel directly to the Socket, without being copied into user space at all.

Under the Java WebSocket specification, a Java WebSocket application consists of a series of WebSocket Endpoint components. An Endpoint is a Java object that represents one end of a WebSocket connection; just as a Servlet handles HTTP requests, you can think of an Endpoint as the interface that handles WebSocket messages. The difference from Servlet is that Tomcat creates a separate Endpoint instance for each WebSocket connection. You can define and implement an Endpoint in two ways, as sketched below.
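A hedged sketch of the annotation-based way (standard JSR 356 / javax.websocket API; the /chat path and the class name are made up for illustration). The other way is the programmatic one: extend javax.websocket.Endpoint and override its lifecycle methods.

```java
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// One instance of this class is created per WebSocket connection.
@ServerEndpoint("/chat")
public class ChatEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("connection opened: " + session.getId());
    }

    @OnMessage
    public String onMessage(String message, Session session) {
        // Returning a String sends it back to the client.
        return "echo: " + message;
    }

    @OnClose
    public void onClose(Session session) {
        System.out.println("connection closed: " + session.getId());
    }
}
```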

 

This constructs a WebSocketContainer instance; you can think of a WebSocketContainer as a container dedicated to handling WebSocket requests, in other words a container of Endpoints. Tomcat scans for subclasses of Endpoint and for classes annotated with @ServerEndpoint and registers them with this container, and the container also maintains the mapping between URLs and Endpoints, so that the specific Endpoint that should handle a WebSocket request can be found from the requested URL.
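Besides scanning, an Endpoint can also be registered with that container explicitly; a hedged sketch using the standard javax.websocket.server.ServerContainer API (the ServletContext attribute name is the one defined by JSR 356, the listener class is made up for illustration, and ChatEndpoint is the class from the previous sketch):

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import javax.websocket.DeploymentException;
import javax.websocket.server.ServerContainer;

@WebListener
public class WsRegistrar implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // The WebSocket implementation publishes its container as a ServletContext attribute.
        ServerContainer container = (ServerContainer)
                sce.getServletContext().getAttribute("javax.websocket.server.ServerContainer");
        try {
            // Explicitly register the annotated Endpoint; the container records
            // the @ServerEndpoint URL so requests to it reach ChatEndpoint.
            container.addEndpoint(ChatEndpoint.class);
        } catch (DeploymentException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) { }
}
```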

 



Origin blog.csdn.net/kuaipao19950507/article/details/104848744