Netty introductory study notes 1 - Definition

1. Definition

Netty is an asynchronous, event-driven network application framework for rapid development of maintainable, high-performance network servers and clients.

Official website: https://netty.io

2. Status

Netty's position among Java network application frameworks is comparable to the Spring framework's position in JavaEE development.

The following frameworks all use Netty, because they all have network communication needs:

  • Cassandra - NoSQL database

  • Spark - big data distributed computing framework

  • Hadoop - big data distributed storage framework

  • RocketMQ - Alibaba's open-source message queue

  • ElasticSearch - search engine

  • gRPC - RPC framework

  • Dubbo - RPC framework

  • Spring 5.x - the WebFlux API abandons Tomcat and uses Netty on the server side

  • Zookeeper - distributed coordination framework

3. Advantages of Netty

  • Netty vs raw NIO: working directly with NIO means a heavy workload and many bugs

    • You need to build the protocol yourself

    • You have to solve TCP transmission problems yourself, such as sticky packets and half-packets (Netty ships ready-made frame decoders for this; see the sketch after this list)

    • The NIO epoll empty-polling bug can drive the CPU to 100%

    • Netty enhances the API to make it easier to use, e.g. FastThreadLocal => ThreadLocal, ByteBuf => ByteBuffer

  • Netty vs other network application frameworks

    • Mina is maintained by Apache; its future 3.x version may be significantly refactored, breaking backward API compatibility. Netty's development iterates faster, its API is simpler, and its documentation is better.

    • Time-tested: 16 years of development. Netty version history:

      • 2.x 2004

      • 3.x 2008

      • 4.x 2013

      • 5.x was abandoned (no significant performance improvement, high maintenance cost)
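
As a small illustration of the sticky/half-packet point above: with Netty, the framing logic you would hand-roll on top of raw NIO collapses into one ready-made pipeline entry. A minimal sketch, assuming a made-up protocol where every frame starts with a 4-byte length field (the constructor arguments are only illustrative); it would go inside a server's ChannelInitializer like the one in the Hello World example below:

// Reassembles complete frames before any later handler sees the bytes:
// max frame 1024 bytes, length field at offset 0, 4 bytes long,
// no length adjustment, strip the 4-byte header from the decoded frame
ch.pipeline().addLast(new LengthFieldBasedFrameDecoder(1024, 0, 4, 0, 4));
ch.pipeline().addLast(new StringDecoder()); // only ever sees whole frames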

4. Hello World example

Develop a simple server and client

  • The client sends hello, world to the server

  • The server only receives and does not return

(1) Add dependencies

<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.39.Final</version>
</dependency>

(2) Server side

new ServerBootstrap()
    .group(new NioEventLoopGroup()) // 1
    .channel(NioServerSocketChannel.class) // 2
    .childHandler(new ChannelInitializer<NioSocketChannel>() { // 3
        @Override
        protected void initChannel(NioSocketChannel ch) {
            ch.pipeline().addLast(new StringDecoder()); // 5
            ch.pipeline().addLast(new SimpleChannelInboundHandler<String>() { // 6
                @Override
                protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                    System.out.println(msg);
                }
            });
        }
    })
    .bind(8080); // 4

Code interpretation

  • At 1, create a NioEventLoopGroup, which for now can be loosely understood as thread pool + Selector; it will be expanded on in detail later. (A common two-group variant is sketched after this list.)

  • At 2, select the server Socket implementation class. NioServerSocketChannel is the NIO-based server-side implementation; other implementations exist for other transports (e.g. EpollServerSocketChannel).

  • At 3, why is the method called childHandler? Because the handlers added here are for the SocketChannel (the child), not for the ServerSocketChannel. ChannelInitializer is itself a handler that executes only once: it waits for a client SocketChannel connection to be established, then runs initChannel to add more handlers.

  • At 4, bind the ServerSocketChannel to the listening port.

  • At 5, a handler for the SocketChannel that decodes ByteBuf => String.

  • At 6, the business handler for the SocketChannel, which works with the result produced by the previous handler.
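
Tying points 1 and 3 together: ServerBootstrap.group also has a two-argument overload, where the first ("boss") group only accepts connections on the ServerSocketChannel and the second ("worker") group handles IO on the accepted SocketChannels. A sketch of that common variant (the group sizes here are arbitrary choices):

EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts new connections
EventLoopGroup worker = new NioEventLoopGroup();  // handles IO on accepted channels

new ServerBootstrap()
    .group(boss, worker) // parent group, child group
    .channel(NioServerSocketChannel.class)
    .childHandler(new ChannelInitializer<NioSocketChannel>() {
        @Override
        protected void initChannel(NioSocketChannel ch) {
            ch.pipeline().addLast(new StringDecoder());
            ch.pipeline().addLast(new SimpleChannelInboundHandler<String>() {
                @Override
                protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                    System.out.println(msg);
                }
            });
        }
    })
    .bind(8080);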

(3) Client

new Bootstrap()
    .group(new NioEventLoopGroup()) // 1
    .channel(NioSocketChannel.class) // 2
    .handler(new ChannelInitializer<Channel>() { // 3
        @Override
        protected void initChannel(Channel ch) {
            ch.pipeline().addLast(new StringEncoder()); // 8
        }
    })
    .connect("127.0.0.1", 8080) // 4
    .sync() // 5
    .channel() // 6
    .writeAndFlush(new Date() + ": hello world!"); // 7

Code interpretation

  • At 1, create a NioEventLoopGroup, the same as on the server.

  • At 2, select the client Socket implementation class. NioSocketChannel is the NIO-based client implementation; other implementations exist for other transports.

  • At 3, add handlers for the SocketChannel. ChannelInitializer is a handler that executes only once: after the connection is established, it runs initChannel to add more handlers.

  • At 4, specify the server address and port to connect to.

  • At 5, many methods in Netty are asynchronous, connect among them; here the sync method blocks until connect has finished establishing the connection. (An alternative that registers a listener instead of blocking is sketched after this list.)

  • At 6, get the channel object, the abstraction of the connection, through which data can be read and written.

  • At 7, write the message and flush the buffer.

  • At 8, the message is processed by the channel's handlers before it is sent; here the StringEncoder converts String => ByteBuf.

  • The data travels across the network to the server, where the handlers at 5 and 6 on the server side are triggered in turn, completing the whole flow.
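
Since connect returns a ChannelFuture, one alternative to the sync call at 5 is to register a listener and let the callback do the write once the connection attempt completes. A minimal sketch in the same style as the client above (error handling reduced to a bare isSuccess check):

new Bootstrap()
    .group(new NioEventLoopGroup())
    .channel(NioSocketChannel.class)
    .handler(new ChannelInitializer<Channel>() {
        @Override
        protected void initChannel(Channel ch) {
            ch.pipeline().addLast(new StringEncoder());
        }
    })
    .connect("127.0.0.1", 8080)
    .addListener((ChannelFutureListener) future -> {
        // Runs on an event loop thread once the connection attempt completes
        if (future.isSuccess()) {
            future.channel().writeAndFlush(new Date() + ": hello world!");
        }
    });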

(4) Processing flow
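
Putting the pieces together: the client's StringEncoder (at 8) turns the String into a ByteBuf, the bytes travel over the network to the server, and there the StringDecoder (at 5) turns the ByteBuf back into a String, which the business handler (at 6) finally prints.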

5. Need to establish correct concepts

  • Understand channel as a conduit through which data flows

  • Understand msg as the data flowing through it. The initial input is a ByteBuf, but after pipeline processing it becomes other types of objects, and the final output becomes a ByteBuf again.

  • Understand handler as a step in the data-processing workflow

    • There are multiple steps, which together form a pipeline. The pipeline is responsible for publishing events (read, read complete, ...) and propagating them to each handler; a handler processes the events it is interested in by overriding the corresponding event-handling methods.

    • Handlers are divided into two categories: Inbound and Outbound. (See the EmbeddedChannel sketch below.)

  • Understand eventLoop as a worker that processes the data

    • A worker can manage the IO operations of multiple channels, and once a worker takes responsibility for a channel, it stays responsible for it (binding).

    • Besides IO, a worker can also run tasks. Each worker has a task queue, which can hold pending tasks for multiple channels; tasks are divided into ordinary tasks and scheduled tasks.

    • Workers process the data in pipeline order, following each handler's plan (code); a different worker can be designated for each step.
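
The pipeline and handler ideas above can be tried without any network at all, using Netty's EmbeddedChannel test channel. A minimal sketch (the handler names h1/h2/h3 are only illustrative): inbound handlers fire in the order they were added, outbound handlers in reverse order, and each handler decides whether to pass the msg along.

// EmbeddedChannel is Netty's in-memory channel, handy for trying out pipelines
ChannelInboundHandlerAdapter h1 = new ChannelInboundHandlerAdapter() {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        System.out.println("inbound h1: " + msg);
        ctx.fireChannelRead(msg); // pass the msg on to the next inbound handler
    }
};
ChannelInboundHandlerAdapter h2 = new ChannelInboundHandlerAdapter() {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        System.out.println("inbound h2: " + msg); // last inbound stop
    }
};
ChannelOutboundHandlerAdapter h3 = new ChannelOutboundHandlerAdapter() {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        System.out.println("outbound h3: " + msg);
        ctx.write(msg, promise); // pass the msg toward the head of the pipeline
    }
};

EmbeddedChannel channel = new EmbeddedChannel(h1, h2, h3);
channel.writeInbound("read event");   // triggers h1, then h2
channel.writeOutbound("write event"); // triggers h3

Running this prints the two inbound handlers in add order for the read event and the outbound handler for the write event, which matches the description above of the pipeline publishing events and propagating them to each handler.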

Origin: blog.csdn.net/puzi0315/article/details/129217770