How is the new connection received by the Netty server bound to the worker thread pool?

 


Foreword


In the previous analysis of how the Netty server detects new connections, we saw that after reading a new connection, the NioServerSocketChannel loops over the connections it has just read and calls pipeline.fireChannelRead() on the server channel's pipeline, passing each new connection as the argument. Each connection is then passed along the server channel's pipeline, that is, it flows through the channel's handler chain. The details of the pipeline itself will be broken down later.

In this article, let's look at how the boss thread pool and the worker thread pool configured in Netty cooperate while a new connection flows through the server channel's pipeline.

Source code analysis of the server's new connection acceptor

Briefly review the previous article, "How does Netty handle new connection access events?", which analyzed how the Netty server detects new connections, and recall the read() method of the NioMessageUnsafe class:
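To make the flow concrete, here is a simplified sketch of that read() method (based on Netty 4.1; error handling and flow-control bookkeeping are elided):

```java
// io.netty.channel.nio.AbstractNioMessageChannel.NioMessageUnsafe
// (simplified sketch; error handling and read-limit bookkeeping elided)
private final List<Object> readBuf = new ArrayList<Object>();

@Override
public void read() {
    do {
        // for a NioServerSocketChannel, doReadMessages() accepts a JDK SocketChannel,
        // wraps it as a NioSocketChannel and adds it to readBuf
        int localRead = doReadMessages(readBuf);
        if (localRead <= 0) {
            break;
        }
    } while (allocHandle.continueReading());    // allocHandle setup elided

    int size = readBuf.size();
    for (int i = 0; i < size; i++) {
        // the loop at the end of the method: each new connection is propagated
        // along the server channel's pipeline as a channelRead event
        pipeline.fireChannelRead(readBuf.get(i));
    }
    readBuf.clear();
    pipeline.fireChannelReadComplete();
}
```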

Look at the loop at the end of the method: each new connection is passed along the channel's pipeline inside that loop. NioMessageUnsafe is the server-side implementation of Channel.Unsafe, the internal interface of Netty's Channel mentioned earlier.

So what happens to these new connections as they are passed on? This is the key question: how is each new client connection, once encapsulated as a Netty Channel, associated with one of Netty's I/O threads? The answer lies in the new connection acceptor mentioned earlier; this association is mainly performed by that acceptor.

Back to the point: look at the source code of ServerBootstrapAcceptor, an inner class that extends ChannelInboundHandlerAdapter (Netty's pipeline mechanism will be explained later).

First, review the server startup process. The core operation of server startup is binding the port, which starts from serverBootstrap.bind(xx) in the user code. That call reaches the doBind() method of ServerBootstrap, which in turn calls initAndRegister(), the method that initializes the server channel and registers it with the I/O multiplexer, as shown below:
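A simplified sketch of initAndRegister() (based on Netty 4.1; failure handling elided):

```java
// io.netty.bootstrap.AbstractBootstrap#initAndRegister (simplified sketch)
final ChannelFuture initAndRegister() {
    Channel channel = channelFactory.newChannel();                 // reflectively creates NioServerSocketChannel
    init(channel);                                                 // ServerBootstrap#init: set up the server pipeline
    ChannelFuture regFuture = config().group().register(channel);  // register with the boss group's Selector
    return regFuture;
}
```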

This method creates the server-side NioServerSocketChannel via reflection; its constructor creates and stores the underlying JDK ServerSocketChannel along with components such as the pipeline. It then performs the channel initialization, i.e. the init(channel) method of ServerBootstrap (we are analyzing server code, so we only look at ServerBootstrap's implementation of init). The logic that creates the new connection acceptor lives in this init method: when init configures the server pipeline, a ServerBootstrapAcceptor handler is added by default:
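A simplified sketch of that init() logic (based on Netty 4.1; option and attribute setup elided):

```java
// io.netty.bootstrap.ServerBootstrap#init (simplified sketch)
@Override
void init(Channel channel) {
    ChannelPipeline p = channel.pipeline();

    final EventLoopGroup currentChildGroup = childGroup;       // the worker thread pool
    final ChannelHandler currentChildHandler = childHandler;   // the user's .childHandler()

    p.addLast(new ChannelInitializer<Channel>() {
        @Override
        public void initChannel(final Channel ch) {
            final ChannelPipeline pipeline = ch.pipeline();
            ChannelHandler handler = config.handler();          // the user's .handler(), if configured
            if (handler != null) {
                pipeline.addLast(handler);
            }
            // the acceptor is added asynchronously, as a task on the server channel's NIO thread
            ch.eventLoop().execute(new Runnable() {
                @Override
                public void run() {
                    pipeline.addLast(new ServerBootstrapAcceptor(
                            ch, currentChildGroup, currentChildHandler,
                            currentChildOptions, currentChildAttrs)); // childOptions/childAttrs captured above (elided)
                }
            });
        }
    });
}
```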

Here is the whole process:

1. First, ServerBootstrap's init method adds a ChannelInitializer to the server channel's pipeline. In its initChannel(Channel ch) method, the server-side handler configured in user code is added first. As noted before, the server-side handler configuration (the .handler() API) is rarely used; what is most commonly used is the client-side configuration, i.e. .childHandler().

2. Then the new connection acceptor, ServerBootstrapAcceptor, is added asynchronously. Specifically, the operation of adding ServerBootstrapAcceptor to the pipeline is wrapped into a task and handed to the server's NIO thread for asynchronous execution. So the minimal pipeline structure of the Netty server channel is Head -> ServerBootstrapAcceptor -> Tail.

Here we touch on Netty's concepts of inbound and outbound events ahead of time. Inbound events are events initiated passively, from Netty's NIO threads toward the user's business handlers, and they are propagated through the fireXXX() methods.

For example: the Channel connection succeeded, the Channel was closed, the Channel has data to read, the Channel was registered with the I/O multiplexer, the Channel was deregistered from the I/O multiplexer, an exception was thrown. These are all callbacks executed passively, and they are handled by dedicated handlers collectively called inbound handlers. Conversely, there are outbound events and outbound handlers. Outbound events are events initiated by user threads or user code, for example:

The server actively binds a port, actively closes a connection, a client actively connects to the server, or the server (or client) actively writes out a message. The common characteristic of these events is that they are initiated by the user. For both kinds of events, besides the handlers Netty provides by default, users can also define custom inbound/outbound handlers to implement their own interception logic; this is the chain-of-responsibility pattern.
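For example, a minimal custom inbound handler might look like this (MyInboundHandler is a hypothetical name used only for illustration):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// A hypothetical custom inbound handler: it intercepts the channelRead event and then
// explicitly passes it on to the next inbound handler, chain-of-responsibility style.
public class MyInboundHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        System.out.println("channelRead reached MyInboundHandler: " + msg);
        ctx.fireChannelRead(msg); // propagate to the next inbound handler in the pipeline
    }
}
```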

Back to the main line of analysis: the server reading a new connection. Since we are analyzing new connection access, we only look at inbound handlers. First, know that inbound events flow from the head node of the pipeline, through each inbound handler node, to the tail node. Here that is Head -> ServerBootstrapAcceptor -> Tail.

You should also know that the tail node is essentially an inbound handler and the head node is essentially an outbound handler; this will be dissected in detail later, so it does not matter if the reason is not clear yet.

As mentioned earlier, the read() method of the NioMessageUnsafe class finally passes out each newly read client connection, as shown in the read() sketch above.

Specifically, it triggers the channelRead event on each subsequent inbound handler (channelRead is an inbound event). Inbound events are all propagated starting from the head node of the pipeline, HeadContext, and the call that triggers this propagation is pipeline.fireChannelRead(xxx).

Remember that when the server was started, the user code contained calls like serverBootstrap.handler(new ServerHandler()) and serverBootstrap.childHandler(new ServerHandler()).
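For reference, a typical bootstrap looks roughly like this (ServerHandler stands for any user-defined handler; the names here are illustrative):

```java
EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // boss: accepts new connections
EventLoopGroup workerGroup = new NioEventLoopGroup();  // worker: handles accepted connections

ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap.group(bossGroup, workerGroup)
        .channel(NioServerSocketChannel.class)
        // .handler(): goes into the *server* channel's pipeline, added at init time
        .handler(new LoggingHandler())
        // .childHandler(): goes into each *client* channel's pipeline, added on new connection access
        .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
                ch.pipeline().addLast(new ServerHandler()); // hypothetical user handler
            }
        });
serverBootstrap.bind(8080);
```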

At that time I gave this conclusion: the handler added via .handler() goes into the server channel's pipeline and is added when the server channel is initialized, while the handler added via .childHandler() goes into the client channel's pipeline and is added when a new connection is accepted. Now the reason is clear: when ServerBootstrap's init runs, it first calls pipeline.addLast(handler) and then adds a ServerBootstrapAcceptor, so the server pipeline may also have the structure Head -> ServerHandler -> ServerBootstrapAcceptor -> Tail (a very familiar structure).

It must be understood that these two operations add handlers to the server channel's pipeline and the client channel's pipeline respectively.

ServerBootstrapAcceptor itself is also an inbound handler. According to the previous analysis, inbound events propagate in the order Head -> user-defined inbound handler -> ServerBootstrapAcceptor -> Tail. My demo does not define a handler for the server channel, so the channelRead method of ServerBootstrapAcceptor is called directly; this is the key part of the acceptor and needs study. The channelRead method of ServerBootstrapAcceptor looks as follows:
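Here is a simplified sketch of that method (based on Netty 4.1; generics and part of the failure handling trimmed):

```java
// io.netty.bootstrap.ServerBootstrap.ServerBootstrapAcceptor (simplified sketch)
@Override
@SuppressWarnings("unchecked")
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // msg is the newly accepted connection, already wrapped by Netty as a (Nio)SocketChannel
    final Channel child = (Channel) msg;

    // 1. add the user's .childHandler() to the client channel's pipeline
    child.pipeline().addLast(childHandler);

    // 2. apply childOptions (TCP-level options) and childAttrs (channel attributes)
    setChannelOptions(child, childOptions, logger);
    for (Entry<AttributeKey<?>, Object> e : childAttrs) {
        child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
    }

    try {
        // 3. hand the client channel to the worker thread pool: choose a NioEventLoop
        //    and register the channel with its Selector, asynchronously
        childGroup.register(child).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (!future.isSuccess()) {
                    forceClose(child, future.cause());
                }
            }
        });
    } catch (Throwable t) {
        forceClose(child, t);
    }
}
```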

ServerBootstrapAcceptor is an inner class of ServerBootstrap. Following it in the debugger: right at the start it casts msg to Channel, i.e. the msg received here is essentially the client connection that was just read, already encapsulated by Netty as its own Channel type. After that, ServerBootstrapAcceptor mainly does three things:

1. The first step is the one analyzed above: the acceptor adds the user-configured client channel handler, i.e. the ChannelHandler the user customized via .childHandler() in the server code is added to the client channel's pipeline. This will be explained in detail later.

2. The second step sets the user-configured options and attrs, mainly the client channel's childOptions and childAttrs. childOptions are options configured for the channel's underlying TCP protocol; childAttrs are attributes of the channel itself, essentially a map, which can store things such as the channel's survival time, keys, and so on.

3. The third step selects an NIO thread from the worker thread pool and binds it to the client Channel (the child variable in the code). This step is asynchronous and is implemented through the register method, which reuses the code path used at server startup to register the server channel with the I/O multiplexer. This last step is divided into two small steps:

  • Through the EventLoop thread selector (the chooser's next() method), the worker thread pool selects a NioEventLoop to bind to the new connection; the logic is the same as for the boss thread pool

  • Register the client's new channel with that NioEventLoop's I/O multiplexer and then register the OP_READ event for it

The two small steps are analyzed in detail below. I followed register in the debugger and arrived at the register method of MultithreadEventLoopGroup; the source is as follows:
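In simplified form it just delegates to the chooser:

```java
// io.netty.channel.MultithreadEventLoopGroup (simplified sketch)
@Override
public ChannelFuture register(Channel channel) {
    // next() asks this group's chooser for one of its NioEventLoops
    return next().register(channel);
}
```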

It finally enters the parent class io.netty.util.concurrent.MultithreadEventExecutorGroup, which looks very familiar: this is the thread chooser of the NioEventLoopGroup analyzed earlier.

The optimization used here selects a NioEventLoop with a bit operation. I found idx to be 0, i.e. the worker thread pool has just picked its first thread, because this is the first client connection received by the server in my current run. As new connections arrive, subsequent threads are bound to them in order; after the last one is used, idx wraps around and starts from 0 again, looping indefinitely. Note that the NIO thread has not been started yet; as mentioned earlier, Netty optimizes its thread pools to start lazily.
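The power-of-two chooser looks roughly like this (the generic chooser uses a modulo instead of the bit mask):

```java
// io.netty.util.concurrent.DefaultEventExecutorChooserFactory.PowerOfTwoEventExecutorChooser
// (simplified sketch; used when the pool size is a power of two)
private final AtomicInteger idx = new AtomicInteger();
private final EventExecutor[] executors;

@Override
public EventExecutor next() {
    // equivalent to idx % executors.length when the length is a power of two
    return executors[idx.getAndIncrement() & executors.length - 1];
}
```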

After the NioEventLoop is selected in the register method of MultithreadEventLoopGroup, next() returns a NioEventLoop instance and the register method of that instance is called; i.e. the next step jumps to the register method of NioEventLoop's direct parent class, SingleThreadEventLoop. The source is as follows:
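Simplified, the two overloads look like this:

```java
// io.netty.channel.SingleThreadEventLoop (simplified sketch)
@Override
public ChannelFuture register(Channel channel) {
    return register(new DefaultChannelPromise(channel, this));
}

@Override
public ChannelFuture register(final ChannelPromise promise) {
    // delegate to the channel's own Unsafe; for a client channel this ends up in
    // AbstractChannel.AbstractUnsafe#register (via NioByteUnsafe)
    promise.channel().unsafe().register(this, promise);
    return promise;
}
```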

This call lands in the second register method. Inside it, channel() returns the client's NioSocketChannel and unsafe() returns a NioByteUnsafe instance, so ultimately the client channel's Unsafe register method is called, i.e. the register method of AbstractChannel's inner class AbstractUnsafe. The source is as follows:
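A simplified sketch (validation and failure handling trimmed):

```java
// io.netty.channel.AbstractChannel.AbstractUnsafe (simplified sketch)
@Override
public final void register(EventLoop eventLoop, final ChannelPromise promise) {
    // ... null / double-registration / compatibility checks elided ...
    AbstractChannel.this.eventLoop = eventLoop;   // bind the chosen NioEventLoop to this channel

    if (eventLoop.inEventLoop()) {
        // already on this channel's NIO thread: register directly
        register0(promise);
    } else {
        // still on the user thread (e.g. main): hand the real registration to the
        // NIO thread as a task; execute() also starts that thread lazily
        eventLoop.execute(new Runnable() {
            @Override
            public void run() {
                register0(promise);
            }
        });
    }
}
```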

The code of this method should be very familiar: it was analyzed during Netty server startup. In other words, the logic for registering a new client connection with the I/O multiplexer reuses that same code, which is a benefit of Netty's well-layered design.

Let's analyze the logic of executing AbstractUnsafe's register method:

1. It first validates the current client Channel and the I/O thread, then checks whether the current thread is the channel's NIO thread. Here the check is obviously false: although a NIO thread has already been chosen for the client channel, it has not been started yet, and the whole registration logic is still running on the user thread (in my demo, the main thread), as can be verified in the debugger. So the check fails, and the code in the else branch delegates the real registration logic to the client's NIO thread, which is started at that point, for asynchronous execution; this also guarantees thread safety.

2. In the else branch, the previously selected NIO thread is started through NioEventLoop's execute method (if it is already running, the startup step is skipped), which also drives the registration task. This is where the NIO thread is really started, which again shows that Netty's thread pools start lazily.

3. Finally, step into the register0 method and look at its implementation, as follows:
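Roughly:

```java
// io.netty.channel.AbstractChannel.AbstractUnsafe#register0 (simplified sketch)
private void register0(ChannelPromise promise) {
    try {
        boolean firstRegistration = neverRegistered;
        doRegister();                               // the JDK-level Selector registration (see below)
        neverRegistered = false;
        registered = true;

        pipeline.invokeHandlerAddedIfNeeded();      // (1) run the pending handlerAdded callbacks,
                                                    //     which add the user's childHandler
        safeSetSuccess(promise);
        pipeline.fireChannelRegistered();           // (2) propagate: registered successfully

        if (isActive()) {
            if (firstRegistration) {
                pipeline.fireChannelActive();       // (3) first registration: propagate channelActive
            } else if (config().isAutoRead()) {
                beginRead();                        // (4) re-registration with auto-read: start reading
            }
        }
    } catch (Throwable t) {
        closeForcibly();                            // failure handling trimmed
        safeSetFailure(promise, t);
    }
}
```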

The most critical call is the doRegister() method. Stepping into it, I found that execution lands in the subclass AbstractNioChannel. This is very familiar too: it is the same process used to register the ServerSocketChannel on the server side, as follows:
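Simplified:

```java
// io.netty.channel.nio.AbstractNioChannel#doRegister (simplified sketch)
@Override
protected void doRegister() throws Exception {
    boolean selected = false;
    for (;;) {
        try {
            // interest ops = 0: nothing is watched yet; `this` (the NioSocketChannel)
            // is attached so Netty can recover it later from the JDK SelectionKey
            selectionKey = javaChannel().register(eventLoop().unwrappedSelector(), 0, this);
            return;
        } catch (CancelledKeyException e) {
            if (!selected) {
                // a cancelled key may still be cached in the Selector; force a
                // selectNow() to flush it, then retry once
                eventLoop().selectNow();
                selected = true;
            } else {
                throw e;
            }
        }
    }
}
```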

This is Netty's wrapper around the JDK logic of registering a Channel with a Selector. In this method, the client Channel is registered with the I/O multiplexer of the client's NioEventLoop thread, and the NioSocketChannel object is attached to the JDK channel's SelectionKey. However, the interest set registered at this point is still 0, i.e. no I/O events are being watched yet; the client channel is still in its initialization state, and the real registration of I/O events happens later in the process.

Note that this method wraps the registration logic in an infinite loop. The point of this idiom is to make sure the operation eventually completes, even if certain exceptions occur.

Go back to the register0 method. After the registration completes, the pending handlerAdded event is triggered first, i.e. the code at (1) runs first; this corresponds to adding the user-defined client handler to the new client connection. Then (2) runs, triggering and propagating the event that the current Channel has been registered successfully. If the current Channel is active, execution continues at (3), where the channelActive event is propagated for a Channel being registered for the first time.

Finally, if the current Channel is not being registered for the first time, the code checks whether automatic reading is configured (Netty's default is auto-read); if so, the code at (4) runs. This is explained in detail later.

Summary

The core of assigning an NIO thread to a new connection and registering the new connection with the I/O multiplexer is to understand ServerBootstrapAcceptor, and from there the minimal pipeline composition of the server channel: Head -> ServerBootstrapAcceptor -> Tail.

Understanding ServerBootstrapAcceptor:

1. Delayed addition of the childHandler: the custom ChannelHandler is added to the new connection's pipeline only after the current Channel has been registered with the I/O multiplexer

2. Setting options and attrs: set childOptions and childAttrs

3. Selecting a NioEventLoop and registering with the Selector: the core is calling the worker thread pool chooser's next() method to select a NioEventLoop, and then registering the new connection, via its doRegister() method, with the Selector bound to that worker thread. New connections and a Selector have a many-to-one relationship here.


Origin www.cnblogs.com/kubixuesheng/p/12723456.html