Netty 4.x: building a server and client, with a detailed explanation of each key class

NioEventLoopGroup

It is a multi-threaded event loop that handles I/O operations. Netty provides various EventLoopGroup implementations for different kinds of transports. Since we are implementing a server-side application in this example, two NioEventLoopGroups are used. The first one, often called the "boss", accepts incoming connections. The second one, usually called the "worker", handles the traffic of an accepted connection once the boss has accepted it and registered it with the worker. How many threads are used and how they are mapped to the created Channels depends on the EventLoopGroup implementation and can even be configured via the constructor.
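
For example (a sketch; the thread counts below are illustrative, not recommendations), the constructor accepts an explicit number of threads:

import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

// One thread is usually enough for the boss group, which only accepts connections;
// with the no-argument constructor the thread count falls back to Netty's default
// (typically twice the number of available cores).
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup(4);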

ServerBootstrap

It is a helper class for setting up a server. You can set up the server using a Channel directly; however, that is a tedious process, and in most cases you will not need to do it.
Here we specify the NioServerSocketChannel class, which is used to instantiate a new Channel to accept incoming connections.

ChannelInitializer

It is a special handler used to help users configure new Channels . You will most likely want to configure the new Channel's ChannelPipeline by adding some handlers (such as DiscardServerHandler) for your network application. As your application becomes more complex, you may add more handlers to the pipeline and eventually extract this anonymous class into a top-level class.
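
For example, the anonymous initializer used in the server code below could later be extracted into a top-level class like this (a sketch; the class name DiscardServerInitializer is only an illustration):

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;

// A sketch of extracting the anonymous ChannelInitializer into a top-level class.
public class DiscardServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        // Add handlers in the order they should process inbound events.
        ch.pipeline().addLast(new DiscardServerHandler());
    }
}

It would then be registered with b.childHandler(new DiscardServerInitializer()).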

ChannelOptions

You can also set parameters specific to the Channel implementation. We are writing a TCP/IP server, so this allows us to set socket options such as tcpNoDelay and keepAlive.
option() configures the NioServerSocketChannel that accepts incoming connections.
childOption() configures the channels accepted by the parent ServerChannel, in this case NioSocketChannel.
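
A short sketch showing how the two methods apply to different channels (reusing the groups from the server code below; the option values are only examples):

ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
 .channel(NioServerSocketChannel.class)
 // option() applies to the parent NioServerSocketChannel that accepts connections.
 .option(ChannelOption.SO_BACKLOG, 128)
 // childOption() applies to each accepted NioSocketChannel.
 .childOption(ChannelOption.TCP_NODELAY, true)
 .childOption(ChannelOption.SO_KEEPALIVE, true);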

Netty server code

1. Server

package com.zs.netty.server;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

/**
 * Discards any incoming data.
 */
public class DiscardServer {

    private int port;

    public DiscardServer(int port) {
        this.port = port;
    }

    public void run() throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(); // (1)
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap(); // (2)
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class) // (3)
             .childHandler(new ChannelInitializer<SocketChannel>() { // (4)
                 @Override
                 public void initChannel(SocketChannel ch) throws Exception {
                     ch.pipeline().addLast(new DiscardServerHandler());
                 }
             })
             .option(ChannelOption.SO_BACKLOG, 128)          // (5)
             .childOption(ChannelOption.SO_KEEPALIVE, true); // (6)

            // Bind and start to accept incoming connections.
            ChannelFuture f = b.bind(port).sync(); // (7)

            // Wait until the server socket is closed.
            // In this example, this does not happen, but you can do that to gracefully
            // shut down your server.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws Exception {
        int port = 8080;
        if (args.length > 0) {
            port = Integer.parseInt(args[0]);
        }

        new DiscardServer(port).run();
    }
}

2. DiscardServerHandler code

package com.zs.netty.server;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.ReferenceCountUtil;

/**
 * Handles a server-side channel.
 */
public class DiscardServerHandler extends ChannelInboundHandlerAdapter { // (1)

//    @Override
//    public void channelRead(ChannelHandlerContext ctx, Object msg) { // (2)
//        // Discard the received data silently.
//        ((ByteBuf) msg).release(); // (3)
//    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf in = (ByteBuf) msg;
        try {
            while (in.isReadable()) { // (1)
                System.out.print((char) in.readByte());
                System.out.flush();
            }
        } finally {
            ReferenceCountUtil.release(msg); // (2)
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) { // (4)
        // Close the connection when an exception is raised.
        cause.printStackTrace();
        ctx.close();
    }
}

3. Use telnet to test

Interlude: installing telnet on macOS

When installing with brew install, the default source https://homebrew.bintray.com/ returned a 502 gateway error, and after switching sources a 404 error was reported. So I reinstalled Homebrew by executing
/bin/zsh -c "$(curl -fsSL https://gitee.com/cunkai/HomebrewCN/raw/master/Homebrew.sh)"
and then ran brew install telnet again.

Enter telnet in the terminal
as follows:

$ telnet
telnet> telnet localhost 8080
你好,netty 

At this time, the client's information will be received on the server's console.

4. Use the ECHO protocol to respond to the client

4.1 ChannelHandlerContext

This object provides various operations that enable you to trigger various I/O events and operations. Here we call write(Object) to write the received message back verbatim. Note that we do not release the received message as in the DISCARD example, because Netty releases it for you when it is written out to the wire.
write(Object) does not write the message to the network. It is buffered internally and then flushed to the network by ctx.flush(). Alternatively, for brevity, you can call ctx.writeAndFlush(msg).
If you run the telnet command again, you will see the server send back whatever you sent to it.

The complete source code of the echo server is in the io.netty.example.echo package of the Netty distribution.

Let's modify the channelRead method accordingly:

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ctx.write(msg); // (1)
    ctx.flush(); // (2)
}

5. TIME protocol

5.1 Netty server: TIME protocol

The protocol used in this section is the TIME protocol. It differs from the previous example in that it sends a message containing a 32-bit integer without receiving any requests, and closes the connection after the message is sent. In this example, you'll learn how to construct and send a message, and close the connection when finished.

Because we will ignore any received data and instead send the message as soon as the connection is established, we cannot use the channelRead() method this time. Instead, we should override the channelActive() method. Here is the implementation:

package com.zs.netty.server;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class TimeServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(final ChannelHandlerContext ctx) { // (1)
        final ByteBuf time = ctx.alloc().buffer(4); // (2)
        time.writeInt((int) (System.currentTimeMillis() / 1000L + 2208988800L));

        final ChannelFuture f = ctx.writeAndFlush(time); // (3)
        f.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                assert f == future;
                ctx.close();
            }
        }); // (4)
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

The following content is translated from the official website:

As mentioned before, the channelActive() method will be called when the connection is established and ready to generate a communication stream. Let us write a 32-bit integer to represent the current time in this method.

To send a new message, we need to allocate a new buffer to contain the message. We will be writing a 32-bit integer, so we need a ByteBuf with a capacity of at least 4 bytes. Get the current ByteBufAllocator through ChannelHandlerContext.alloc() and allocate a new buffer.

As usual, we write a constructed message.

But wait, where is the flip? Didn't we call java.nio.ByteBuffer.flip() before sending the message with NIO? ByteBuf has no such method because it has two pointers; one for read operations and the other used for write operations. When writing to a ByteBuf, the writer index increases, but the reader index does not change. The reader index and writer index represent the start and end positions of the message respectively.

In contrast, an NIO buffer does not provide a clean way to determine where the message content starts and ends without calling the flip method. When you forget to flip the buffer, you will get into trouble because nothing, or incorrect data, will be sent. Such errors don't happen in Netty because different operation types use different pointers. Once you get used to it, you'll find that it makes your life much easier.
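
A tiny sketch of the two-pointer behavior (the indices shown in the comments are what each call produces):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

ByteBuf buf = Unpooled.buffer(4); // readerIndex = 0, writerIndex = 0
buf.writeInt(42);                 // writerIndex advances to 4, readerIndex stays 0
int value = buf.readInt();        // readerIndex advances to 4 -- no flip() needed
buf.release();                    // release the buffer when done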

Another point to note is that the ChannelHandlerContext.write() (and writeAndFlush()) methods return a ChannelFuture. ChannelFuture represents I/O operations that have not yet occurred. This means that any requested operations may not have been performed yet, since all operations in Netty are asynchronous. For example, the following code might close the connection before the message is sent:

Channel ch = ...;
ch.writeAndFlush(message);
ch.close();

Therefore, you need to call close() after the ChannelFuture returned by write() has completed; the future notifies its listeners when the write operation is done. Note that close() may not close the connection immediately either, and it too returns a ChannelFuture.

So how do we get notified when the write request completes? It's as simple as adding a ChannelFutureListener to the returned ChannelFuture. Here we create a new anonymous ChannelFutureListener which closes the Channel when the operation is complete.
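
As a shorthand, the same thing can be done with Netty's predefined listener (the POJO example later in this post uses it as well):

final ChannelFuture f = ctx.writeAndFlush(time);
f.addListener(ChannelFutureListener.CLOSE); // closes the Channel once the write completes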

5.2 Netty client: TIME protocol

Unlike DISCARD and ECHO servers, we need a client for the TIME protocol because humans cannot convert 32-bit binary data into dates on the calendar. In this section, we will discuss how to ensure that the server is working properly and learn how to write a client using Netty.

The biggest and only difference between the server and the client in Netty is the use of different Bootstrap and Channel implementations. Please look at the code below:

package io.netty.example.time;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class TimeClient {

    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);
        EventLoopGroup workerGroup = new NioEventLoopGroup();

        try {
            Bootstrap b = new Bootstrap(); // (1)
            b.group(workerGroup); // (2)
            b.channel(NioSocketChannel.class); // (3)
            b.option(ChannelOption.SO_KEEPALIVE, true); // (4)
            b.handler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new TimeClientHandler());
                }
            });

            // Start the client.
            ChannelFuture f = b.connect(host, port).sync(); // (5)

            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
}

Bootstrap is similar to ServerBootstrap, except that it is used for non-server channels such as client-side or connectionless channels.
If only one EventLoopGroup is specified, it is used both as the boss group and as the worker group, although the boss group is not actually used on the client side.
Instead of NioServerSocketChannel, NioSocketChannel is used to create the client channel.
Note that we do not use childOption() here as we did with ServerBootstrap, because the client-side SocketChannel has no parent.
We call the connect() method instead of the bind() method.
As you can see, it's not really different from server-side code. What about the ChannelHandler implementation? It should receive a 32-bit integer from the server, convert it to a human-readable format, print the converted time, and close the connection:

package io.netty.example.time;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.Date;

public class TimeClientHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf m = (ByteBuf) msg; // (1)
        try {
            long currentTimeMillis = (m.readUnsignedInt() - 2208988800L) * 1000L;
            System.out.println(new Date(currentTimeMillis));
            ctx.close();
        } finally {
            m.release();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

In TCP/IP, Netty reads the data sent from the peer into a ByteBuf.
It looks very simple and no different from the server-side example. However, this handler will sometimes refuse to work, raising an IndexOutOfBoundsException. We'll discuss why this happens in the next section.

5.3 Testing

Following the official description, I created a server, started it, and then started the client. The current time is printed on the client console.
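
The post does not reproduce the TIME server bootstrap itself; here is a minimal sketch, assuming it mirrors the DiscardServer above with TimeServerHandler in the pipeline (the class name TimeServer and port 8080 are assumptions):

package com.zs.netty.server;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

// A sketch of the TIME server used for this test; only the handler in the
// pipeline differs from the DiscardServer shown earlier.
public class TimeServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 public void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new TimeServerHandler());
                 }
             });
            ChannelFuture f = b.bind(8080).sync();
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
            bossGroup.shutdownGracefully();
        }
    }
}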

6. Stream-based transport

A small caveat with socket buffers
In a stream-based transport such as TCP/IP, received data is stored in the socket receive buffer. Unfortunately, the buffer of a stream-based transport is not a queue of packets but a queue of bytes. This means that even if you send two messages as two separate packets, the operating system will not treat them as two messages but just as a bunch of bytes. Therefore, there is no guarantee that what you read is exactly what the remote peer wrote. For example, let's assume that the TCP/IP stack of an operating system has received three packets:

[Figure: the three packets as they were sent]
Due to this general property of stream-based protocols, there is a high chance that in your application they will be read in the following fragmented form:
[Figure: the three packets split and merged into four buffers]

Therefore, regardless of whether the receiving part is server-side or client-side, the received data should be organized into one or more meaningful frames that can be easily understood by the application logic. In the above example, the received data should look like this:
[Figure: the received data reassembled into the original three frames]

6.1 First solution

Now let's return to the TIME client example. We have the same problem here. A 32-bit integer is a very small amount of data, and it is unlikely to be fragmented very often. The problem, however, is that it can fragment, and as traffic increases, so does the likelihood of fragmentation.

The simplest solution is to create an internal cumulative buffer and wait until all 4 bytes have been received into it. Here is a modified implementation, TimeClientHandler1, that fixes the problem:

package com.zs.netty.client;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

import java.util.Date;

public class TimeClientHandler1 extends ChannelInboundHandlerAdapter {

    private ByteBuf buf;

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        buf = ctx.alloc().buffer(4); // (1)
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) {
        buf.release(); // (1)
        buf = null;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf m = (ByteBuf) msg;
        buf.writeBytes(m); // (2)
        m.release();

        if (buf.readableBytes() >= 4) { // (3)
            long currentTimeMillis = (buf.readUnsignedInt() - 2208988800L) * 1000L;
            System.out.println(new Date(currentTimeMillis));
            ctx.close();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}

A ChannelHandler has two life-cycle listener methods: handlerAdded() and handlerRemoved(). You can perform arbitrary (de)initialization tasks in them as long as they don't block for a long time.
First, all received data must be accumulated into buf.
The handler must then check if buf has enough data (in this case 4 bytes) and continue with the actual business logic. Otherwise, when more data arrives, Netty will call the channelRead() method again and eventually all 4 bytes will be accumulated.

6.2 Second solution

Although the first solution already solved the problem with the TIME client, the modified handler doesn't look that clean. Imagine a more complex protocol that consists of multiple fields, such as variable length fields. Your ChannelInboundHandler implementation will quickly become unmaintainable.

You may have noticed that multiple ChannelHandlers can be added to a ChannelPipeline, so a single ChannelHandler can be split into multiple modular ChannelHandlers to reduce application complexity. For example, you can split the TimeClientHandler into two handlers:

TimeDecoder to handle fragmentation issues, and
the original simple version of TimeClientHandler.
Fortunately, Netty provides an extensible class that helps you write the first one (the decoder) out of the box:

package com.zs.netty.client;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

import java.util.List;

public class TimeDecoder extends ByteToMessageDecoder { // (1)

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) { // (2)
        if (in.readableBytes() < 4) {
            return; // (3)
        }

        out.add(in.readBytes(4)); // (4)
    }
}

ByteToMessageDecoder is an implementation of ChannelInboundHandler that can easily handle fragmentation issues.
Whenever new data is received, ByteToMessageDecoder calls the decode() method and uses an internally maintained accumulation buffer.
decode() can decide not to add anything to out when there is not enough data in the accumulation buffer. ByteToMessageDecoder will call decode() again when more data is received.
If decode() adds an object to out, it means the decoder successfully decoded a message. ByteToMessageDecoder then discards the read portion of the accumulation buffer. Remember, you don't need to decode multiple messages yourself; ByteToMessageDecoder keeps calling decode() until it adds nothing to out.
Now that we have another handler to insert into the ChannelPipeline, we should modify the ChannelInitializer implementation in TimeClient:

@Override
public void initChannel(SocketChannel ch) throws Exception {
    ch.pipeline().addLast(new TimeDecoder(), new TimeClientHandler1());
}

TimeDecoder can also extend ReplayingDecoder, which removes the readableBytes() check entirely: when there are not enough bytes to satisfy a read such as readBytes(4), ReplayingDecoder aborts the decode() call internally and replays it once more data has arrived.

public class TimeDecoder extends ReplayingDecoder<Void> {

    @Override
    protected void decode(
            ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        out.add(in.readBytes(4));
    }
}

Additionally, Netty provides out-of-the-box decoders that enable you to implement most protocols very easily and help you avoid ending up with a single unmaintainable handler implementation.
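
For example, since every TIME message is exactly 4 bytes, Netty's built-in FixedLengthFrameDecoder could arguably replace the hand-written TimeDecoder in the client pipeline (a sketch):

import io.netty.handler.codec.FixedLengthFrameDecoder;

// Every ByteBuf delivered to TimeClientHandler1 is now exactly 4 bytes long.
ch.pipeline().addLast(new FixedLengthFrameDecoder(4), new TimeClientHandler1());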

7. Use POJO instead of ByteBuf

All of the examples we've reviewed so far have used ByteBuf as the primary data structure for protocol messages. In this section, we will improve the TIME protocol client and server examples to use POJOs instead of ByteBufs.

The advantages of using POJOs in ChannelHandlers are obvious; by separating the code that extracts information from the ByteBuf from the handler, the handler will become more maintainable and reusable. In the TIME client and server examples, we are only reading a 32-bit integer, and using ByteBuf directly is not a big problem. However, you will find that separation is necessary when implementing a real protocol.

First, let's define a new type called UnixTime.

package io.netty.example.time;

import java.util.Date;

public class UnixTime {

    private final long value;

    public UnixTime() {
        this(System.currentTimeMillis() / 1000L + 2208988800L);
    }

    public UnixTime(long value) {
        this.value = value;
    }

    public long value() {
        return value;
    }

    @Override
    public String toString() {
        return new Date((value() - 2208988800L) * 1000L).toString();
    }
}

Client-side modifications (TimeDecoder and TimeClientHandler):

@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
    if (in.readableBytes() < 4) {
        return;
    }

    out.add(new UnixTime(in.readUnsignedInt()));
}

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    UnixTime m = (UnixTime) msg;
    System.out.println(m);
    ctx.close();
}

The server side can similarly use UnixTime:

@Override
public void channelActive(ChannelHandlerContext ctx) {
    ChannelFuture f = ctx.writeAndFlush(new UnixTime());
    f.addListener(ChannelFutureListener.CLOSE);
}

Now, the only missing piece is an encoder, an implementation of ChannelOutboundHandler that converts a UnixTime back into a ByteBuf. It is much simpler than writing a decoder because there is no need to deal with packet fragmentation and reassembly when encoding a message.

package io.netty.example.time;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;

public class TimeEncoder extends ChannelOutboundHandlerAdapter {

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        UnixTime m = (UnixTime) msg;
        ByteBuf encoded = ctx.alloc().buffer(4);
        encoded.writeInt((int) m.value());
        ctx.write(encoded, promise); // (1)
    }
}

There are quite a few important things happening on the marked line (1).

First, we pass the original ChannelPromise as-is so that when the encoded data is actually written to the network, Netty marks it as success or failure.

Second, we didn't call ctx.flush(). There is a separate handler method void flush(ChannelHandlerContext ctx) whose purpose is to override the flush() operation.
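
For illustration, an outbound handler that did want to intercept flushes could override that method (a sketch; TimeEncoder does not need it):

// A sketch only: an outbound handler can intercept flush operations by
// overriding flush(); by default the operation is simply forwarded.
@Override
public void flush(ChannelHandlerContext ctx) throws Exception {
    ctx.flush(); // forward the flush to the next outbound handler
}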

To simplify it further, you can use MessageToByteEncoder:

public class TimeEncoder extends MessageToByteEncoder<UnixTime> {

    @Override
    protected void encode(ChannelHandlerContext ctx, UnixTime msg, ByteBuf out) {
        out.writeInt((int) msg.value());
    }
}

The last remaining task is to insert a TimeEncoder into the server-side ChannelPipeline before the TimeServerHandler, which is a simple exercise.
The server side is modified as follows: add the encoder to the pipeline.
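
In code form, a sketch of the server's ChannelInitializer: the encoder is added to the pipeline alongside TimeServerHandler, so the UnixTime written in channelActive() passes through TimeEncoder on its way out.

@Override
public void initChannel(SocketChannel ch) throws Exception {
    ch.pipeline().addLast(new TimeEncoder(), new TimeServerHandler());
}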

7.1. TimeServer processing flow

[Figure: TimeServer processing flow — channelActive() in TimeServerHandler writes a UnixTime, which flows outbound through TimeEncoder, is encoded into a ByteBuf, and is then written to the socket.]

8. Stop the application

Shutting down a Netty application is usually as simple as shutting down all the EventLoopGroups you created via shutdownGracefully(). It returns a Future that notifies you when the EventLoopGroup has been completely terminated and all Channels belonging to the group have been closed.
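
A minimal sketch of a blocking shutdown, using the two groups from the server example:

// Block until each group has fully terminated and its channels are closed.
workerGroup.shutdownGracefully().sync();
bossGroup.shutdownGracefully().sync();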

Source: blog.csdn.net/superzhang6666/article/details/125553645