1 Basic introduction of Netty
1.1 Review the I/O model in Java: BIO, NIO, AIO
- A simple way to understand an I/O model: what kind of channel is used to send and receive data. This largely determines the performance of program communication.
- Java supports three network programming / I/O models in total: BIO, NIO, and AIO.
- Java BIO: synchronous and blocking (the traditional blocking model). The server implementation pattern is one thread per connection: whenever a client has a connection request, the server must start a thread to handle it. If the connection then does nothing, this causes unnecessary thread overhead.
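The one-thread-per-connection pattern can be sketched with plain blocking sockets. This is an illustrative example (the class and method names are ours, not from any framework): the server blocks in accept(), and every accepted connection gets its own thread that blocks in read.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class BioEchoServer {

    // One thread per connection: each client gets its own handler thread.
    static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {   // blocks while the connection is idle
                out.println("echo: " + line);
            }
        } catch (IOException ignored) { }
    }

    // Starts the server, sends msg from a loopback client, returns the echoed reply.
    public static String echoOnce(String msg) throws IOException {
        ServerSocket server = new ServerSocket(0);     // ephemeral port
        Thread acceptor = new Thread(() -> {
            try {
                while (true) {
                    Socket client = server.accept();   // blocks until a client connects
                    new Thread(() -> handle(client)).start();  // one new thread per connection
                }
            } catch (IOException ignored) { /* server closed */ }
        });
        acceptor.setDaemon(true);
        acceptor.start();
        try (Socket s = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        } finally {
            server.close();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(echoOnce("hello"));         // prints "echo: hello"
    }
}
```

Note that the idle handler thread sits blocked in readLine(); with many idle connections this is exactly the wasted thread overhead described above.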
- Java NIO: synchronous and non-blocking. The server implementation pattern is one thread handling multiple requests (connections): connection requests sent by clients are registered on a multiplexer, and the multiplexer processes a connection's I/O requests as it polls the connections.
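The multiplexer mentioned above is the JDK's Selector. A small illustrative demo (the class and method names are ours): a non-blocking server channel is registered for OP_ACCEPT, and the selector reports nothing until a client connection actually arrives.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NioSelectorDemo {

    // Registers a server channel with a Selector and shows that the selector
    // reports a ready key only once a client connection actually arrives.
    public static boolean readyAfterConnect() throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        server.configureBlocking(false);               // required before register()
        server.register(selector, SelectionKey.OP_ACCEPT);

        int before = selector.selectNow();             // nothing happened yet: 0 ready keys

        // a client connect makes the server channel "acceptable"
        SocketChannel client = SocketChannel.open(
                new InetSocketAddress("localhost", server.socket().getLocalPort()));
        int after = selector.select();                 // blocks until at least 1 key is ready

        client.close();
        server.close();
        selector.close();
        return before == 0 && after == 1;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readyAfterConnect());       // prints "true"
    }
}
```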
- Java AIO (NIO.2): asynchronous and non-blocking. AIO introduces the concept of asynchronous channels and is based on the Proactor pattern, which simplifies programming. A thread is started only for valid requests: the operating system notifies the server application to start a thread for processing only after the I/O operation itself has completed.
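The "OS notifies after completion" behavior can be observed with the JDK's NIO.2 asynchronous channels. A hedged sketch (the Future-based variant is shown for brevity; a CompletionHandler callback works the same way; class and method names are ours):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Future;

public class AioDemo {

    // The OS completes the accept/read; the application only collects results.
    public static String receiveOnce(String msg) throws Exception {
        AsynchronousServerSocketChannel server =
                AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        Future<AsynchronousSocketChannel> accepting = server.accept(); // returns immediately

        AsynchronousSocketChannel client = AsynchronousSocketChannel.open();
        client.connect(new InetSocketAddress("localhost", port)).get();
        byte[] data = msg.getBytes(StandardCharsets.UTF_8);
        client.write(ByteBuffer.wrap(data)).get();

        AsynchronousSocketChannel conn = accepting.get();   // accept already done by the OS
        ByteBuffer buf = ByteBuffer.allocate(64);
        while (buf.position() < data.length) {              // loop in case of partial reads
            if (conn.read(buf).get() == -1) break;          // read completes with data in buf
        }
        buf.flip();
        String received = StandardCharsets.UTF_8.decode(buf).toString();
        conn.close(); client.close(); server.close();
        return received;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(receiveOnce("ping"));            // prints "ping"
    }
}
```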
Application Scenario Analysis of BIO, NIO, and AIO
- The BIO approach is suitable for architectures with a small and fixed number of connections. It places relatively high demands on server resources and limits concurrency, but it was the only choice before JDK 1.4, and the programs are simple and easy to understand.
- The NIO approach is suitable for architectures with many relatively short-lived connections (light operations), for example chat servers, bullet-comment (danmaku) systems, and communication between servers. Programming is more complicated; supported since JDK 1.4.
- The AIO approach is suitable for architectures with many long-lived connections (heavy operations), for example photo-album servers; it fully involves the OS in concurrent operations. Programming is even more complicated; supported since JDK 7.
1.2 What is Netty? Why is Netty needed?
What is Netty
- Netty is an open-source Java framework provided by JBoss and is now an independent project on GitHub.
- Netty is an asynchronous, event-driven network application framework for rapid development of high-performance, high-reliability network I/O programs; it simplifies and streamlines NIO development.
- Netty mainly targets high-concurrency client applications under the TCP protocol, or applications that continuously transmit large amounts of data.
- In essence, Netty is an NIO framework, applicable to all kinds of server-communication scenarios.
- To thoroughly understand Netty, we need to learn NIO first, so that we can read Netty's source code.
Problems with native NIO
- NIO's class library and API are complicated and troublesome to use: you need to be proficient with Selector, ServerSocketChannel, SocketChannel, ByteBuffer, and so on.
- Other skills are required as prerequisites: you must be familiar with Java multithreaded programming, because NIO programming involves the Reactor pattern, and you must know multithreading and network programming very well to write high-quality NIO programs.
- The development workload and difficulty are very large: for example, the client has to deal with disconnection and reconnection, network interruption, half-packet reads and writes, cache failure, network congestion, abnormal traffic handling, and so on.
- Bugs in JDK NIO: for example the infamous epoll bug, which causes the Selector to spin on empty polls and eventually drives the CPU to 100%. As of JDK 1.7 this problem still existed and had not been fundamentally resolved.
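A common mitigation, applied for example inside Netty's event loop, is to detect a long run of empty select() returns and then rebuild the Selector, re-registering every valid channel on a fresh one. A simplified sketch of just the rebuild step (not Netty's actual code; class and method names are ours):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectableChannel;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class SelectorRebuild {

    // Re-register every valid channel (interest ops + attachment) on a new Selector.
    public static Selector rebuild(Selector broken) throws IOException {
        Selector fresh = Selector.open();
        for (SelectionKey key : broken.keys()) {
            if (!key.isValid()) continue;
            SelectableChannel ch = key.channel();
            int ops = key.interestOps();
            Object att = key.attachment();
            key.cancel();                       // detach from the broken selector
            ch.register(fresh, ops, att);       // carry interest set + attachment over
        }
        broken.close();
        return fresh;
    }

    // Demonstrates that registrations survive the swap.
    public static int demo() throws IOException {
        Selector sel = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        server.configureBlocking(false);
        server.register(sel, SelectionKey.OP_ACCEPT, "acceptor");
        Selector fresh = rebuild(sel);
        int keys = fresh.keys().size();         // the one registration carried over
        server.close();
        fresh.close();
        return keys;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());             // prints "1"
    }
}
```

In a real event loop this would be triggered by counting consecutive select() calls that return 0 without having blocked for their full timeout.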
This is why Netty is needed
Netty wraps the NIO API that ships with the JDK and solves the problems above.
- Elegant design: a unified API for various transport types, covering both blocking and non-blocking sockets; based on a flexible and extensible event model that cleanly separates concerns; a highly customizable threading model: a single thread, or one or more thread pools.
- Ease of use: well-documented Javadoc, a user guide, and examples; no other dependencies: JDK 5 (Netty 3.x) or JDK 6 (Netty 4.x) is enough.
- High performance and higher throughput: lower latency, reduced resource consumption, and minimized unnecessary memory copying.
- Security: Full SSL/TLS and StartTLS support.
- Active community and constant updates: the community is active, the version iteration cycle is short, discovered bugs are fixed in time, and new features are continuously added.
1.3 Application scenarios of Netty
Internet industry: In a distributed system , remote service calls are required between nodes , and a high-performance RPC framework is essential. As an asynchronous high-performance communication framework, Netty is often used by these RPC frameworks as a basic communication component.
A typical application: the RPC framework Dubbo uses the Dubbo protocol for inter-node communication, and the Dubbo protocol uses Netty as its basic communication component by default to implement internal communication between process nodes.
Game industry
- Whether it is a mobile game server or a large-scale online game, the Java language has been more and more widely used;
- As a high-performance basic communication component, Netty provides TCP/UDP and HTTP protocol stacks, which are convenient for customizing and developing private protocol stacks, and account login servers;
- High-performance communication between map servers can be conveniently carried out through Netty.
2 Traditional blocking I/O service model & Reactor mode
Currently existing threading models are:
- Traditional blocking I/O service model
- Reactor mode
According to the number of Reactors and the number of processing resource pool threads, there are three typical implementations:
- Single Reactor single thread;
- Single Reactor multi-thread;
- Master-slave Reactor multithreading
Netty mainly makes improvements based on the master-slave Reactor multithreaded model, in which there are multiple Reactors.
2.1 Traditional blocking I/O service model
Working principle diagram: yellow boxes represent objects, blue boxes represent threads, and white boxes represent methods (API)
Model Features
- Use blocking IO mode to get input data
- Each connection requires an independent thread to complete data input, business processing, and data return
Problem analysis
- When the number of concurrency is large, a large number of threads will be created, taking up a lot of system resources
- After the connection is created, if the current thread has no data to read temporarily, the thread will be blocked in the read operation, resulting in a waste of thread resources
2.2 Reactor mode
It is recommended that you learn about NIO first: https://blog.csdn.net/qq_36389060/category_11777885.html
For the two shortcomings of the traditional blocking I/O service model, the solution:
- Based on the I/O multiplexing model: multiple connections share a blocking object, and the application only needs to wait in one blocking object without blocking and waiting for all connections. When a connection has new data that can be processed, the operating system notifies the application, the thread returns from the blocked state, and begins business processing
- Multiplexing thread resources based on the thread pool: it is no longer necessary to create a thread for each connection, and the business processing task after the connection is completed is assigned to the thread for processing, and one thread can process the business of multiple connections.
I/O multiplexing combined with thread pool is the basic design idea of Reactor mode
- Other names for the Reactor pattern:
  - Reactor pattern
  - Dispatcher pattern
  - Notifier pattern
- The Reactor pattern:
  - One or more requests are passed simultaneously to the service handler (event-driven);
  - The server-side program processes the incoming requests and dispatches them synchronously to the corresponding processing threads; this is why the Reactor pattern is also called the Dispatcher pattern;
  - The Reactor pattern uses I/O multiplexing to listen for events and, after receiving an event, distributes it to a thread (or process); this is the key to a network server's high-concurrency handling.
Core components in Reactor mode
- Reactor: the Reactor runs in a separate thread and is responsible for listening for and distributing events, dispatching them to the appropriate handlers to react to I/O events. It acts like a company's telephone operator: it takes calls from customers and redirects each line to the appropriate contact;
- Handlers: handlers perform the actual work required by I/O events, similar to the actual employee in the company the customer wants to talk to. The Reactor responds to I/O events by dispatching the appropriate handler, and handlers perform non-blocking operations.
Reactor pattern taxonomy
According to the number of Reactors and the number of processing resource pool threads, there are 3 typical implementations
- Single Reactor Single Thread
- Single Reactor Multithreading
- Master-slave Reactor multithreading
2.2.1 Single Reactor Single Thread
Scheme description
- Select is the standard network programming API introduced in the previous I/O multiplexing model, which enables applications to monitor multiple connection requests through a blocking object
- The Reactor object monitors the client request event through Select, and distributes it through Dispatch after receiving the event
- If it is a connection establishment request event, the Acceptor handles the connection request through Accept, and then creates a Handler object to handle subsequent business processing after the connection is completed
- If it is not a connection establishment event, Reactor will dispatch the Handler corresponding to the call connection to respond
- The Handler completes the full business flow of read -> business processing -> send
Analysis of program advantages and disadvantages
The server uses one thread to handle all I/O operations (including connect, read, write, etc.) through multiplexing. The coding is simple and clear, but if the number of client connections is large, it cannot keep up. The earlier NIO example belongs to this model.
- Advantages: The model is simple, there are no problems of multi-threading, process communication, and competition, and all are completed in one thread
- Disadvantages:
- Performance issues, only one thread, can not fully play the performance of multi-core CPU. When the Handler is processing the business on a certain connection, the entire process cannot handle other connection events, which can easily lead to performance bottlenecks
- Reliability issues, unexpected termination of threads, or entering an infinite loop will cause the communication module of the entire system to be unavailable, unable to receive and process external messages, resulting in node failure
Usage scenario: the number of clients is limited and business processing is very fast, as with Redis, whose business processing has O(1) time complexity.
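The single-Reactor, single-thread flow above can be sketched with plain java.nio (an illustrative toy echo server, not production code; all names are ours): one thread selects, plays the Acceptor for OP_ACCEPT, plays the Handler for OP_READ, and completes read -> process -> send by itself.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class SingleReactor implements Runnable {
    private final Selector selector;
    private final ServerSocketChannel server;

    SingleReactor() throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public void run() {
        try {
            while (selector.isOpen()) {
                selector.select();                          // the single thread waits here
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {               // Acceptor role
                        SocketChannel ch = server.accept();
                        ch.configureBlocking(false);
                        ch.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {          // Handler role: read -> process -> send
                        SocketChannel ch = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(128);
                        if (ch.read(buf) == -1) { ch.close(); continue; }
                        buf.flip();
                        ch.write(buf);                      // "business" here is a plain echo
                    }
                }
            }
        } catch (Exception ignored) { /* selector closed or key cancelled during shutdown */ }
    }

    public static String echo(String msg) throws Exception {
        SingleReactor reactor = new SingleReactor();
        Thread t = new Thread(reactor);
        t.setDaemon(true);
        t.start();
        byte[] data = msg.getBytes(StandardCharsets.UTF_8);
        try (SocketChannel c = SocketChannel.open(
                new InetSocketAddress("localhost", reactor.server.socket().getLocalPort()))) {
            c.write(ByteBuffer.wrap(data));
            ByteBuffer in = ByteBuffer.allocate(128);
            while (in.position() < data.length && c.read(in) != -1) { }  // collect the echo
            in.flip();
            return StandardCharsets.UTF_8.decode(in).toString();
        } finally {
            reactor.selector.close();
            reactor.server.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echo("hello"));                  // prints "hello"
    }
}
```

Because the echo runs on the reactor thread, a single slow handler would stall every other connection: exactly the bottleneck described above.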
2.2.2 Single Reactor Multithreading
Scheme description
- The Reactor object monitors client request events through select, and distributes them through dispatch after receiving the event
- If it is a connection-establishment request, the Acceptor handles it through accept, and then creates a Handler object to handle the various events after the connection is completed
- If it is not a connection request, it will be processed by the handler corresponding to the connection called by the reactor
- The handler is only responsible for responding to events and does not do the concrete business processing; after reading data via read, it distributes the work to a thread in the worker thread pool
- The worker thread pool allocates an independent thread to complete the real business and passes the result back to the handler
- After the handler receives the response, it returns the result to the client via send
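The steps above can be sketched by extending the single-threaded reactor with a worker pool (illustrative only; all names are ours; the business step is a stand-in toUpperCase, and for brevity the worker writes the response itself instead of handing the result back to the handler for sending):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReactorWithWorkers {
    private final Selector selector;
    private final ServerSocketChannel server;
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    ReactorWithWorkers() throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    void loop() {
        try {
            while (selector.isOpen()) {
                selector.select();
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel ch = server.accept();
                        ch.configureBlocking(false);
                        ch.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel ch = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(128);
                        if (ch.read(buf) == -1) { ch.close(); continue; }
                        buf.flip();
                        String req = StandardCharsets.UTF_8.decode(buf).toString();
                        workers.submit(() -> {               // business off the reactor thread
                            String resp = req.toUpperCase(); // stand-in for real processing
                            try {
                                ch.write(ByteBuffer.wrap(resp.getBytes(StandardCharsets.UTF_8)));
                            } catch (IOException ignored) { }
                        });
                    }
                }
            }
        } catch (Exception ignored) { /* selector closed during shutdown */ }
    }

    public static String echo(String msg) throws Exception {
        ReactorWithWorkers r = new ReactorWithWorkers();
        Thread t = new Thread(r::loop);
        t.setDaemon(true);
        t.start();
        byte[] data = msg.getBytes(StandardCharsets.UTF_8);
        try (SocketChannel c = SocketChannel.open(
                new InetSocketAddress("localhost", r.server.socket().getLocalPort()))) {
            c.write(ByteBuffer.wrap(data));
            ByteBuffer in = ByteBuffer.allocate(128);
            while (in.position() < data.length && c.read(in) != -1) { }
            in.flip();
            return StandardCharsets.UTF_8.decode(in).toString();
        } finally {
            r.selector.close();
            r.server.close();
            r.workers.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echo("hello"));  // prints "HELLO"
    }
}
```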
Analysis of program advantages and disadvantages
- Advantages: it can make full use of the processing power of multi-core CPUs.
- Disadvantages: Multi-thread data sharing and access are more complicated, reactor handles all event monitoring and response, runs in a single thread, and is prone to performance bottlenecks in high-concurrency scenarios.
2.2.3 Master-slave Reactor multithreading
Scheme description
For the single-reactor multi-threaded model, the Reactor runs in a single thread, and it is easy to become a performance bottleneck in a high-concurrency scenario, so the Reactor can be run in multiple threads.
- The Reactor main thread MainReactor object listens to the connection event through select, and after receiving the event, handles the connection event through the Acceptor
- After the Acceptor handles the connection event, the MainReactor assigns the connection to the SubReactor
- SubReactor adds the connection to the connection queue for listening, and creates a handler for various event processing
- When a new event occurs, subreactor will call the corresponding handler for processing
- The handler reads the data through read and distributes it to the following worker threads for processing
- The worker thread pool allocates independent worker threads for business processing and returns results
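The flow above can be sketched with a boss thread and several sub-reactors, each owning its own selector on its own thread (an illustrative toy; all names are ours; for brevity the boss uses a blocking accept instead of its own accept selector, and the sub-reactor echoes instead of delegating to a worker pool):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class MasterSlaveReactor {

    // A sub-reactor: its own Selector, its own thread, its own connections.
    static class SubReactor implements Runnable {
        final Selector sel;
        final Queue<SocketChannel> pending = new ConcurrentLinkedQueue<>();
        SubReactor() throws IOException { sel = Selector.open(); }

        void assign(SocketChannel ch) {          // called from the boss thread
            pending.add(ch);
            sel.wakeup();                        // break out of select() to register it
        }

        public void run() {
            try {
                while (sel.isOpen()) {
                    sel.select();
                    SocketChannel ch;
                    while ((ch = pending.poll()) != null) {   // register handed-off connections
                        ch.configureBlocking(false);
                        ch.register(sel, SelectionKey.OP_READ);
                    }
                    Iterator<SelectionKey> it = sel.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        SocketChannel c = (SocketChannel) key.channel();
                        ByteBuffer buf = ByteBuffer.allocate(128);
                        if (c.read(buf) == -1) { c.close(); continue; }
                        buf.flip();
                        c.write(buf);            // echo from the sub-reactor thread
                    }
                }
            } catch (Exception ignored) { /* selector closed during shutdown */ }
        }
    }

    public static String echo(String msg) throws Exception {
        SubReactor[] subs = { new SubReactor(), new SubReactor() };
        for (SubReactor s : subs) { Thread t = new Thread(s); t.setDaemon(true); t.start(); }

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        int port = server.socket().getLocalPort();

        // boss: accepts, then round-robins connections across the sub-reactors
        Thread boss = new Thread(() -> {
            int next = 0;
            try {
                while (server.isOpen()) {
                    SocketChannel ch = server.accept();
                    subs[next++ % subs.length].assign(ch);
                }
            } catch (IOException ignored) { }
        });
        boss.setDaemon(true);
        boss.start();

        byte[] data = msg.getBytes(StandardCharsets.UTF_8);
        try (SocketChannel c = SocketChannel.open(new InetSocketAddress("localhost", port))) {
            c.write(ByteBuffer.wrap(data));
            ByteBuffer in = ByteBuffer.allocate(128);
            while (in.position() < data.length && c.read(in) != -1) { }
            in.flip();
            return StandardCharsets.UTF_8.decode(in).toString();
        } finally {
            server.close();
            for (SubReactor s : subs) s.sel.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echo("hello"));  // prints "hello"
    }
}
```

The hand-off queue plus wakeup() is needed because a channel cannot safely be registered on a selector while another thread is blocked selecting on it.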
Pros and cons of the program
- Advantages:
  - The division of responsibilities between the parent thread and the child threads is clear and simple: the parent thread only needs to receive new connections, and the child threads complete the subsequent business processing;
  - The data interaction between the parent thread and the child threads is simple: the main Reactor thread only needs to pass a new connection to a child thread, and the child thread does not need to return any data.
- Disadvantages: high programming complexity
Real-world examples: this model is widely used in many projects, including Nginx's master-slave Reactor multi-process model, Memcached's master-slave multithreading, and Netty's master-slave multithreading model.
3 Netty threading model
Netty mainly makes improvements based on the master-slave Reactor multithreaded model, in which there are multiple Reactors.
A brief overview of the Netty threading model
- The BossGroup thread maintains a Selector and only cares about Accept events;
- When an Accept event is received, it obtains the corresponding SocketChannel, wraps it into a NioSocketChannel, and registers it with a Worker thread (event loop), which maintains it from then on;
- When a Worker thread's selector detects an event of interest on one of its channels, it processes the event (via the handler). Note that the handler has already been added to the channel.
Netty threading model detailed description
- Netty abstracts two groups of thread pools: BossGroup and WorkerGroup
  - BossGroup is responsible for accepting client connections
  - WorkerGroup is responsible for network reads and writes
- BossGroup and WorkerGroup are both of type NioEventLoopGroup. A NioEventLoopGroup is equivalent to an event loop group that contains multiple event loops, and each event loop is a NioEventLoop
- A NioEventLoop represents a thread that continuously executes processing tasks in a loop. Each NioEventLoop has a Selector used to monitor the network communication of the sockets bound to it
- A NioEventLoopGroup can have multiple threads, that is, it can contain multiple NioEventLoops
- The steps executed by NioEventLoop in each Boss Group:
- 1) Poll accept event
- 2) Process the accept event: establish the connection with the client, generate a NioSocketChannel, and register it with the Selector of some NioEventLoop in the Worker Group
- 3) Process tasks in the task queue, namely runAllTasks
- The steps executed in a loop by each NioEventLoop in the Worker Group:
- 1) Polling for read/write events
- 2) Handle I/O events, that is, read/write events, and process them on the corresponding NioSocketChannel
- 3) Process tasks in the task queue, namely runAllTasks
- Each Worker NioEventLoop uses a pipeline when processing business. The pipeline contains the channel, that is, the corresponding channel can be obtained through the pipeline, and the pipeline maintains many handlers (processors) that perform a series of processing steps on our data.
- Netty provides built-in handlers (processors), and we can also define our own.
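Putting the pieces together, the BossGroup/WorkerGroup structure described above maps directly onto Netty's bootstrap API. A configuration sketch (requires the Netty 4.x dependency; the inline echo handler and the port number are our choices for illustration):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyEchoServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // accepts connections
        EventLoopGroup workerGroup = new NioEventLoopGroup();  // handles read/write
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // every accepted channel gets its own pipeline of handlers
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelRead(ChannelHandlerContext ctx, Object msg) {
                             ctx.writeAndFlush(msg);           // echo the inbound bytes back
                         }
                     });
                 }
             });
            ChannelFuture f = b.bind(8007).sync();             // port is arbitrary
            f.channel().closeFuture().sync();                  // block until the server closes
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
```

The two NioEventLoopGroup instances are the BossGroup and WorkerGroup from the description above; the ChannelInitializer is where handlers are added to each channel's pipeline.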