Netty Learning (2) -- Overview and First Experience

1 Overview

1.1. What is Netty?

Netty is an open-source Java framework originally created by Trustin Lee and now maintained as a standalone project on GitHub. It is an NIO-based client-server programming framework.

Netty is an asynchronous, event-driven network application framework for rapid development of maintainable, high-performance network servers and clients.

1.3. Status of Netty

Netty's position among Java network application frameworks is comparable to that of the Spring framework in Java EE development.

All of the following frameworks have network-communication requirements, and all of them build on Netty:

  • Cassandra - NoSQL database
  • Spark - big-data distributed computing framework
  • Hadoop - big-data distributed storage framework
  • RocketMQ - message queue open-sourced by Alibaba
  • Elasticsearch - search engine
  • gRPC - RPC framework
  • Dubbo - RPC framework
  • Spring 5.x - the WebFlux API drops Tomcat and uses Netty as its default server
  • ZooKeeper - distributed coordination framework

1.4. Advantages of Netty

  • Netty vs. raw NIO (raw NIO means a heavy workload and many pitfalls):
    • With raw NIO you need to design and build your own protocol
    • You must solve TCP stream problems yourself, such as sticky packets and half packets
    • The JDK's epoll empty-polling bug can drive CPU usage to 100%; Netty works around it
    • Netty enhances the API to make it easier to use, e.g. FastThreadLocal vs. ThreadLocal, ByteBuf vs. ByteBuffer
  • Netty compared to other network application frameworks:
    • Mina is maintained by Apache; Netty's API is cleaner and better documented
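One concrete pain point of the raw ByteBuffer API is its single position pointer: you must remember to call flip() when switching from writing to reading. A minimal JDK-only sketch of the idiom (plain NIO, not Netty code):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class FlipDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.put("hello".getBytes(StandardCharsets.UTF_8)); // write mode
        buf.flip();                 // forgetting flip() here is a classic NIO bug
        byte[] out = new byte[buf.remaining()];
        buf.get(out);               // read mode
        System.out.println(new String(out, StandardCharsets.UTF_8)); // prints "hello"
    }
}
```

Netty's ByteBuf avoids this by keeping a separate readerIndex and writerIndex, so reads and writes never require a mode switch.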

2. Hello World

2.1. First experience

Develop a simple server and client:

  • The client sends hello, world to the server
  • The server only receives and does not return

2.2. Import dependencies

1. Netty dependency

<dependency>
  <groupId>io.netty</groupId>
  <artifactId>netty-all</artifactId>
  <version>4.0.42.Final</version>
</dependency>

2. Log dependencies:

<!-- lombok; IDEA needs the Lombok plugin installed -->
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.12</version>
</dependency>
<!-- slf4j -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.7</version>
</dependency>
<!-- slf4j binding to log4j2 -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>2.9.1</version>
</dependency>

3. log4j2.xml configuration

<?xml version="1.0" encoding="UTF-8"?>
<!-- Log levels in priority order: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL -->
<!-- The status attribute on configuration controls log4j2's own internal logging; it is optional, and setting it to trace shows detailed internal output -->
<!-- monitorInterval: log4j2 can detect changes to this file and reconfigure itself; the value is the check interval in seconds -->
<configuration status="INFO" monitorInterval="30">
    <!-- First define all appenders -->
    <appenders>
        <!-- Console appender -->
        <console name="Console" target="SYSTEM_OUT">
            <!-- Log output pattern -->
            <PatternLayout pattern="%date{HH:mm:ss} [%-5level] [%thread] %logger{17} - %m%n"/>
        </console>

        <!-- When a log file exceeds 50 MB it is archived and a new file is started -->
        <RollingFile name="logfile" fileName="G:/Typora/Netty/Netty/logs/system.log" filePattern="G:/Typora/Netty/Netty/logs/$${date:yyyy-MM}/system-%d{yyyy-MM-dd}-%i.log">
            <ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
            <PatternLayout pattern="%date{HH:mm:ss} [%-5level] [%thread] %logger{17} - %m%n"/>
            <Policies>
                <TimeBasedTriggeringPolicy/>
                <SizeBasedTriggeringPolicy size="50 MB"/>
            </Policies>
            <!-- Keep at most 10 log files -->
            <DefaultRolloverStrategy compressionLevel="0" max="10"/>
        </RollingFile>
    </appenders>

    <!-- Then define loggers; an appender only takes effect once a logger references it -->
    <loggers>
        <!-- Log at DEBUG level under the com.example package -->
        <logger name="com.example" level="DEBUG" additivity="false">
            <appender-ref ref="Console" />
        </logger>
        <root level="ERROR">
            <appender-ref ref="Console" />
        </root>
    </loggers>
</configuration>

2.3. Server-side code

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.string.StringDecoder;

public class HelloServer {

    public static void main(String[] args) {
        // 1. Server bootstrap: assembles the Netty components and starts the server
        new ServerBootstrap()
                // 2. Group of BossEventLoop / WorkerEventLoop (each EventLoop holds a selector and a thread)
                .group(new NioEventLoopGroup())
                // 3. Choose the server's ServerSocketChannel implementation
                //    (alternatives: EpollServerSocketChannel, OioServerSocketChannel)
                .channel(NioServerSocketChannel.class)
                // 4. The boss handles connections; workers (children) handle reads and writes.
                //    childHandler decides which handlers the workers will run
                .childHandler(
                        // 5. A Channel represents the read/write connection to a client;
                        //    the initializer's job is to add the other handlers
                        new ChannelInitializer<NioSocketChannel>() {

                            @Override
                            protected void initChannel(NioSocketChannel nioSocketChannel) throws Exception {
                                // 6. Add the concrete handlers
                                // Decode inbound ByteBuf into a String
                                nioSocketChannel.pipeline().addLast(new StringDecoder());
                                // Custom handler
                                nioSocketChannel.pipeline().addLast(new ChannelInboundHandlerAdapter() {

                                    @Override   // read event
                                    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                                        // Print the string decoded by the previous handler
                                        System.out.println(msg);
                                    }
                                });
                            }
                        })
                // 7. Bind the listening port
                .bind(8888);
    }
}

2.4. Client code

import java.net.InetSocketAddress;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.string.StringEncoder;

public class HelloClient {

    public static void main(String[] args) throws InterruptedException {
        // 1. Client bootstrap
        new Bootstrap()
                // 2. Add an EventLoop group
                .group(new NioEventLoopGroup())
                // 3. Choose the client Channel implementation
                .channel(NioSocketChannel.class)
                // 4. Add handlers
                .handler(new ChannelInitializer<NioSocketChannel>() {

                    @Override   // called once the connection is established
                    protected void initChannel(NioSocketChannel ch) throws Exception {
                        // Encode outbound String into ByteBuf
                        ch.pipeline().addLast(new StringEncoder());
                    }
                })
                // 5. Connect to the server
                .connect(new InetSocketAddress("localhost", 8888))
                // Block until the connection is established
                .sync()
                .channel()
                // 6. Send data to the server
                .writeAndFlush("hello, world");
    }
}

2.5. Execution flowchart

(Execution flowchart image from the original post is not reproduced here.)

2.6. Some takeaways

  • Understand a Channel as a data conduit.
  • Understand msg as the data flowing through it: it enters as an inbound ByteBuf, is transformed into other object types as it passes through the pipeline's handlers, and becomes a ByteBuf again on the way out.
  • Understand a handler as one step in processing the data
    • Many steps together form the pipeline. The pipeline publishes events (read, read complete, ...) to each handler, and a handler processes the events it is interested in by overriding the corresponding event methods
    • Handlers come in two kinds: inbound and outbound
  • Understand an EventLoop as a worker that processes the data
    • A worker can manage the IO of multiple Channels, and once a worker takes charge of a Channel it remains responsible for it until the end (the Channel is bound to the worker)
    • A worker performs both IO operations and task processing. Each worker has a task queue in which pending tasks from multiple Channels can accumulate; tasks are either ordinary tasks or scheduled tasks
    • A worker processes data by running the handlers' code in pipeline order. Non-IO operations can be handed off to other workers
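As a rough JDK-only analogy (not Netty's actual implementation), one EventLoop behaves like a single-threaded scheduled executor: one thread draining one queue that holds both ordinary and scheduled tasks in order:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WorkerAnalogy {
    public static void main(String[] args) throws InterruptedException {
        // One thread, one task queue: tasks from many "channels" all run in order on it
        ScheduledExecutorService worker = Executors.newSingleThreadScheduledExecutor();
        worker.execute(() -> System.out.println("ordinary task, e.g. handling a read"));
        worker.schedule(() -> System.out.println("scheduled task, e.g. an idle check"),
                100, TimeUnit.MILLISECONDS);
        worker.shutdown();
        worker.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

Netty's NioEventLoop adds the selector loop for IO on top of such a task queue, which is why a Channel, once registered, is always served by the same thread.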


Source: blog.csdn.net/weixin_43989102/article/details/126735668