Comparison of three open source network frameworks

Mina selector model:
1. In Mina 2.0, Selector management is handled by org.apache.mina.transport.socket.nio.NioProcessor. Each NioProcessor object holds one Selector and is responsible for the actual select() and wakeup() calls, channel registration and cancellation, registering for and checking read/write events, and performing the actual IO reads and writes.
2. SimpleIoProcessorPool defaults to a size of cpu+1. Each connection is therefore associated with one NioProcessor, i.e. one Selector object, which avoids the situation where all connections share a single overloaded Selector and the server becomes slow to respond.

3. Note, however, that NioSocketAcceptor also has a Selector of its own. What is it for? It is the Selector that centrally handles OP_ACCEPT events: it is used only for accepting connections and is never mixed with the Selectors that handle read/write events. Mina therefore opens cpu+2 Selectors by default.

4. It can be seen that Mina 2 is still a multi-threaded Reactor model; a minimal sketch of this Selector layout follows.
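As a sketch of the layout above, assuming Mina 2 (the NioSocketAcceptor(int) constructor is real Mina API; the port and the empty handler are placeholder assumptions):

    import java.net.InetSocketAddress;

    import org.apache.mina.core.service.IoHandlerAdapter;
    import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

    public class MinaSelectorLayout {
        public static void main(String[] args) throws Exception {
            int cpus = Runtime.getRuntime().availableProcessors();

            // cpu+1 NioProcessors, i.e. one Selector each, for read/write
            // events (this mirrors SimpleIoProcessorPool's default size).
            NioSocketAcceptor acceptor = new NioSocketAcceptor(cpus + 1);

            // The acceptor itself owns one more Selector dedicated to
            // OP_ACCEPT, which is how Mina arrives at cpu+2 in total.
            acceptor.setHandler(new IoHandlerAdapter()); // placeholder handler
            acceptor.bind(new InetSocketAddress(8080));  // port 8080 is arbitrary
        }
    }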

Netty 5.0 selector model:
1. Each NioEventLoop has a Selector object of its own;
2. ServerBootstrap b = new ServerBootstrap();
   b.group(bossGroup, workerGroup);
   When the server starts, two NioEventLoopGroups are created; they are in fact two independent Reactor thread pools. One receives clients' TCP connection establishment (and closure); the other handles the IO of the established connections.
3. From this it can be seen that Netty is a master-slave thread-pool model; a sketch follows.
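A minimal sketch of that master-slave setup (the group and bootstrap calls are standard Netty API; the port and the logging child handler are placeholder assumptions):

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.ChannelFuture;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
    import io.netty.handler.logging.LoggingHandler;

    public class MasterSlaveServer {
        public static void main(String[] args) throws Exception {
            // Boss pool: accepts incoming TCP connections (OP_ACCEPT).
            EventLoopGroup bossGroup = new NioEventLoopGroup(1);
            // Worker pool: handles read/write IO for accepted connections.
            EventLoopGroup workerGroup = new NioEventLoopGroup();
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(bossGroup, workerGroup)
                 .channel(NioServerSocketChannel.class)
                 .childHandler(new LoggingHandler()); // placeholder handler
                ChannelFuture f = b.bind(8080).sync(); // port 8080 is arbitrary
                f.channel().closeFuture().sync();
            } finally {
                bossGroup.shutdownGracefully();
                workerGroup.shutdownGracefully();
            }
        }
    }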


Grizzly:
Grizzly is more conservative: it starts only two Selectors by default, one responsible for accept, the other for managing the IO read/write events of the connections.
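For completeness, a hedged sketch of adjusting that count, assuming Grizzly 2.x (treat the selector-runner setter as an assumption if your version differs; the port is arbitrary):

    import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
    import org.glassfish.grizzly.nio.transport.TCPNIOTransportBuilder;

    public class GrizzlySelectorCount {
        public static void main(String[] args) throws Exception {
            TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance().build();

            // Grizzly calls its selector threads "selector runners"; raising
            // the count from the conservative default of two spreads IO
            // across more Selectors.
            transport.setSelectorRunnersCount(Runtime.getRuntime().availableProcessors() + 1);

            transport.bind(8080); // port 8080 is arbitrary
            transport.start();
        }
    }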


1. When handling a large number of connections, multiple Selectors perform better than a single Selector.
2. With multiple Selectors, the Selector that handles OP_READ and OP_WRITE should be separated from the Selector that handles OP_ACCEPT; in other words, a dedicated Selector object should process connection acceptance, so that IO read/write events cannot slow acceptance down.
3. As for the number of Selectors: Mina defaults to cpu+2, while Grizzly uses 2 in total. I prefer Mina's strategy, but the CPU count should be taken into account. Once the machine has more than about 8 CPUs, additional Selector threads may cost more in thread switching than they gain, and Mina's default strategy is no longer suitable. Fortunately this value can be set through the API, as sketched below.
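A small sketch of that capping idea (the cap of 8 is an illustrative assumption, not a measured value; the NioSocketAcceptor(int) constructor is real Mina 2 API):

    import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

    public class CappedSelectorCount {
        // Cap the Selector count so that many-core machines do not pay
        // excessive thread-switching overhead.
        static int ioProcessorCount() {
            int cpus = Runtime.getRuntime().availableProcessors();
            return Math.min(cpus + 1, 8); // the cap of 8 is an assumption
        }

        public static void main(String[] args) {
            NioSocketAcceptor acceptor = new NioSocketAcceptor(ioProcessorCount());
            System.out.println("IO processors (Selectors): " + ioProcessorCount());
        }
    }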


4. Netty learning notes:
  ChannelFuture f = b.bind(PORT).sync();
1. bind obtains a ServerSocketChannel and registers it with the Selector inside an EventLoop;
2. it then binds the ServerSocketChannel to the specified IP and port. The same two steps look as follows in plain java.nio terms.
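A sketch of those two steps in plain java.nio (Netty's actual code first registers with interest ops 0 and raises OP_ACCEPT once the channel is active; registering OP_ACCEPT directly keeps the sketch short):

    import java.net.InetSocketAddress;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;

    public class PlainNioBind {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();

            // Step 1: obtain the ServerSocketChannel and register it
            // with the Selector.
            ServerSocketChannel ch = ServerSocketChannel.open();
            ch.configureBlocking(false);
            ch.register(selector, SelectionKey.OP_ACCEPT);

            // Step 2: bind the channel to the specified IP and port.
            ch.bind(new InetSocketAddress(8080)); // port 8080 is arbitrary
        }
    }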
EventLoopGroup bossGroup = new NioEventLoopGroup() does the following:
      1. Creates processors x 2 NioEventLoop instances for the NioEventLoopGroup. Each NioEventLoop instance holds a thread and a task queue of type LinkedBlockingQueue.
      2. The execution logic of each thread is implemented by NioEventLoop.
      3. Each NioEventLoop instance holds a Selector and applies Netty's Selector optimization.
Doug Lea's personal website: http://gee.cs.oswego.edu/dl/
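A short sketch of those points (the default size of processors x 2 matches the notes above; the explicit size 4 and the printed message are arbitrary):

    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;

    public class EventLoopGroupNotes {
        public static void main(String[] args) {
            // Default constructor: processors x 2 NioEventLoop instances.
            EventLoopGroup defaults = new NioEventLoopGroup();

            // An explicit size overrides that default.
            EventLoopGroup sized = new NioEventLoopGroup(4);

            // Each NioEventLoop is an executor backed by its own thread and
            // task queue; next() hands back one of the loops in the group.
            sized.next().execute(() ->
                    System.out.println("runs on " + Thread.currentThread().getName()));

            defaults.shutdownGracefully();
            sized.shutdownGracefully();
        }
    }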
