Implementation of server-client communication based on Netty

Personal blog: http://www.milovetingting.cn


Foreword

This article introduces the basic usage of Netty for server-client communication and, on that basis, builds a simple demo in which the server sends commands to the client.

What is Netty

Netty is an NIO client-server framework that makes it quick and easy to develop network applications such as protocol servers and clients. It greatly simplifies network programming, such as TCP and UDP socket servers, and provides an asynchronous, event-driven framework and the tools to rapidly develop maintainable, high-performance, highly scalable protocol servers and clients.

The above content is excerpted from https://netty.io/wiki/user-guide-for-4.x.html

Netty has the following characteristics:

  • Unified API for various transport types: blocking and non-blocking sockets
  • Higher throughput, lower latency
  • Less resource consumption
  • Minimized unnecessary memory copying
  • Full SSL/TLS and StartTLS support

The above content is excerpted from https://netty.io/

Getting started

For details on using Netty, refer to the official documentation. Here, 4.x is used to demonstrate Netty on both the server and the client. Documentation: https://netty.io/wiki/user-guide-for-4.x.html

Eclipse is used here for development; create a new Java project and place both the server and the client in it.

Server

First, import the Netty jar; netty-all-4.1.48.Final.jar is used here.
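
If you manage dependencies with Maven instead of adding the jar by hand, the equivalent coordinates for this version would be:

<!-- Equivalent Maven coordinates for the jar used in this article -->
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.48.Final</version>
</dependency>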

NettyServer

Create a new NettyServer class:

public class NettyServer {

	private int mPort;

	public NettyServer(int port) {
		this.mPort = port;
	}

	public void run() {
		EventLoopGroup bossGroup = new NioEventLoopGroup();
		EventLoopGroup workerGroup = new NioEventLoopGroup();
		try {
			ServerBootstrap b = new ServerBootstrap();
			b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
					// Size of the pending-connection queue
					.option(ChannelOption.SO_BACKLOG, 128)
					// Keep-alive
					.childOption(ChannelOption.SO_KEEPALIVE, true)
					// Handler for each accepted channel
					.childHandler(new ChannelInitializer<SocketChannel>() {

						@Override
						protected void initChannel(SocketChannel channel) throws Exception {
							channel.pipeline().addLast(new NettyServerHandler());
						}
					});
			ChannelFuture f = b.bind(mPort).sync();
			if (f.isSuccess()) {
				LogUtil.log("Server, Netty server started successfully, port: " + mPort);
			}
			// Not blocking here on purpose: the demo runs server and client in one process.
			// f.channel().closeFuture().sync();
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			// The event loop groups are kept alive for the lifetime of the demo.
			// workerGroup.shutdownGracefully();
			// bossGroup.shutdownGracefully();
		}
	}

}

NettyServerHandler

During initialization, you need to specify the handler that processes channel events.

public class NettyServerHandler extends ChannelInboundHandlerAdapter {

	@Override
	public void channelActive(ChannelHandlerContext ctx) throws Exception {
		LogUtil.log("Server,channelActive");
	}

	@Override
	public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
		LogUtil.log("Server,接收到客户端发来的消息:" + msg);
	}

	@Override
	public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
		LogUtil.log("Server,exceptionCaught");
		cause.printStackTrace();
	}

	@Override
	public void channelInactive(ChannelHandlerContext ctx) throws Exception {
		LogUtil.log("Server,channelInactive");
	}

}

After the above steps, the basic setup of the server is complete.
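
The LogUtil class used in these snippets is a small helper from the demo, not part of Netty. A minimal sketch that matches the "date--message" format of the logs shown below might look like this (assumed implementation; the real one is in the demo source):

import java.text.SimpleDateFormat;
import java.util.Date;

public class LogUtil {

	private static final SimpleDateFormat FORMAT = new SimpleDateFormat("yyyy-M-d H:mm:ss");

	public static void log(String msg) {
		// Not thread-safe, but sufficient for a demo
		System.out.println(FORMAT.format(new Date()) + "--" + msg);
	}
}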

Client

The client initialization is roughly similar to the server's, but simpler.

NettyClient

public class NettyClient {

	private String mHost;

	private int mPort;

	private NettyClientHandler mClientHandler;

	private ChannelFuture mChannelFuture;

	public NettyClient(String host, int port) {
		this.mHost = host;
		this.mPort = port;
	}

	public void connect() {
		EventLoopGroup workerGroup = new NioEventLoopGroup();
		try {
			Bootstrap b = new Bootstrap();
			mClientHandler = new NettyClientHandler();
			b.group(workerGroup).channel(NioSocketChannel.class)
					// KeepAlive
					.option(ChannelOption.SO_KEEPALIVE, true)
					// Handler
					.handler(new ChannelInitializer<SocketChannel>() {

						@Override
						protected void initChannel(SocketChannel channel) throws Exception {
							channel.pipeline().addLast(mClientHandler);
						}
					});
			mChannelFuture = b.connect(mHost, mPort).sync();
			if (mChannelFuture.isSuccess()) {
				LogUtil.log("Client,连接服务端成功");
			}
			mChannelFuture.channel().closeFuture().sync();
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			workerGroup.shutdownGracefully();
		}
	}
}

NettyClientHandler

public class NettyClientHandler extends ChannelInboundHandlerAdapter {

	@Override
	public void channelActive(ChannelHandlerContext ctx) throws Exception {
		LogUtil.log("Client,channelActive");
	}

	@Override
	public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
		LogUtil.log("Client,接收到服务端发来的消息:" + msg);
	}

	@Override
	public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
		LogUtil.log("Client,exceptionCaught");
		cause.printStackTrace();
	}

	@Override
	public void channelInactive(ChannelHandlerContext ctx) throws Exception {
		LogUtil.log("Client,channelInactive");
	}

}

At this point, the basic setup of the client is complete.

Connect to the server

Create a new Main class to test whether the server and client can connect normally.

public class Main {

	public static void main(String[] args) {
		try {
			String host = "127.0.0.1";
			int port = 12345;
			NettyServer server = new NettyServer(port);
			server.run();
			Thread.sleep(1000);
			NettyClient client = new NettyClient(host, port);
			client.connect();
		} catch (Exception e) {
			e.printStackTrace();
		}
	}

}

Run the main method; the log output is as follows:

2020-4-13 0:11:02--Server, Netty server started successfully, port: 12345
2020-4-13 0:11:03--Client,channelActive
2020-4-13 0:11:03--Client, connected to server successfully
2020-4-13 0:11:03--Server,channelActive

As you can see, the client successfully connects to the server, and the channelActive method of the handler registered on each side is called back.

Server and client communication

Once the connection between the server and the client is established, the two sides usually need to communicate. Assume that after the connection succeeds, the server sends a welcome message "Hello, client" to the client, and the client, after receiving it, replies with "Hello, server". Let's implement this.

Modify the channelActive and channelRead methods in the server's NettyServerHandler: send a message to the client in channelActive, and parse the message sent by the client in channelRead.

public class NettyServerHandler extends ChannelInboundHandlerAdapter {

	@Override
	public void channelActive(ChannelHandlerContext ctx) throws Exception {
		LogUtil.log("Server,channelActive");
		ByteBuf byteBuf = Unpooled.copiedBuffer("Hello, client", Charset.forName("utf-8"));
		ctx.writeAndFlush(byteBuf);
	}

	@Override
	public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
		ByteBuf buf = (ByteBuf) msg;
		byte[] buffer = new byte[buf.readableBytes()];
		buf.readBytes(buffer);
		String message = new String(buffer, "utf-8");
		LogUtil.log("Server,接收到客户端发来的消息:" + message);
	}

}

Modify the channelRead method in the client's NettyClientHandler so that the client replies when it receives a message from the server.

public class NettyClientHandler extends ChannelInboundHandlerAdapter {

	@Override
	public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
		ByteBuf buf = (ByteBuf) msg;
		byte[] buffer = new byte[buf.readableBytes()];
		buf.readBytes(buffer);
		String message = new String(buffer, "utf-8");
		LogUtil.log("Client, received message from server: " + message);

		ByteBuf byteBuf = Unpooled.copiedBuffer("Hello, server", Charset.forName("utf-8"));
		ctx.writeAndFlush(byteBuf);
	}

}

After running, the output log is as follows:

2020-4-13 0:29:16--Server, Netty server started successfully, port: 12345
2020-4-13 0:29:17--Client,channelActive
2020-4-13 0:29:17--Client, connected to server successfully
2020-4-13 0:29:17--Server,channelActive
2020-4-13 0:29:17--Client, received message from server: Hello, client
2020-4-13 0:29:17--Server, received message from client: Hello, server

It can be seen that the server and the client can communicate normally.

Sticky packets and packet splitting

In real scenarios, a large amount of data may be sent in a short time. Let's simulate this: after the client connects, the server sends 100 messages to the client. To make the analysis easier, the client does not reply after receiving them.

Modify the channelActive method of NettyServerHandler in the server

@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
	LogUtil.log("Server,channelActive");
	for (int i = 0; i < 100; i++) {
		ByteBuf byteBuf = Unpooled.copiedBuffer("Hello, client", Charset.forName("utf-8"));
		ctx.writeAndFlush(byteBuf);
	}
}

Modify the channelRead method of NettyClientHandler in the client

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
	ByteBuf buf = (ByteBuf) msg;
	byte[] buffer = new byte[buf.readableBytes()];
	buf.readBytes(buffer);
	String message = new String(buffer, "utf-8");
	LogUtil.log("Client, received message from server: " + message);

	// ByteBuf byteBuf = Unpooled.copiedBuffer("Hello, server", Charset.forName("utf-8"));
	// ctx.writeAndFlush(byteBuf);
}

After running, part of the output is as follows:

2020-4-13 0:35:28--Server, Netty server started successfully, port: 12345
2020-4-13 0:35:29--Client,channelActive
2020-4-13 0:35:29--Client, connected to server successfully
2020-4-13 0:35:29--Server,channelActive
2020-4-13 0:35:29--Client, received message from server: Hello, client
2020-4-13 0:35:29--Client, received message from server: Hello, clientHello, clientHello, clientHello, clientHello, clientHello, clientHello, clientHello, clientHello, clientHello, clientHello, clientHello, clientHello, clientHello, clientHello, clientHello, clientHello, client
2020-4-13 0:35:29--Client, received message from server: Hello, client

As you can see, multiple messages have been "stuck" together.

What are sticky packets and packet splitting

TCP is a "streaming" protocol. The so-called stream is a string of data without boundaries. The bottom layer of TCP does not understand the specific meaning of upper-layer business data, it will divide the packets according to the actual situation of the TCP buffer, so in business, it is believed that a complete packet may be split into multiple packets by TCP to send. It is possible to encapsulate multiple small packets into a large data packet to send. This is the so-called TCP sticking and unpacking problem.

The above is excerpted from "TCP sticky packets/packet splitting and Netty's solutions".

Solution

Without Netty, if you need to unpack by hand, the basic principle is to keep reading from the TCP buffer and, after every read, judge whether a complete packet has been received. If the data read so far is not enough to form a complete business packet, keep it and continue reading from the TCP buffer until a complete packet is obtained. If the newly read data plus the previously kept data do form a complete packet, splice them together, hand the complete business packet to the business logic, and keep any surplus data so it can be spliced with the next read.

The above is excerpted from "A thorough understanding of Netty: this one article is enough".
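
To make the idea above concrete, here is a rough sketch of such a manual approach for a delimiter-based protocol (illustration only; it is not used in the demo, where Netty's decoders do this work):

import java.util.ArrayList;
import java.util.List;

// Keeps a per-connection buffer; only complete, delimiter-terminated packets are returned.
public class ManualUnpacker {

	private final StringBuilder buffer = new StringBuilder();

	public List<String> feed(String chunk, String delimiter) {
		buffer.append(chunk);
		List<String> packets = new ArrayList<>();
		int index;
		while ((index = buffer.indexOf(delimiter)) >= 0) {
			packets.add(buffer.substring(0, index));       // a complete business packet
			buffer.delete(0, index + delimiter.length());  // keep the surplus for the next read
		}
		return packets;
	}
}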

With Netty, the solution to this problem is much simpler. Netty provides four frame decoders (unpackers):

  • FixedLengthFrameDecoder: fixed-length decoder; Netty passes packets of a fixed length to the next ChannelHandler
  • LineBasedFrameDecoder: line-based decoder; each packet is delimited by a newline
  • DelimiterBasedFrameDecoder: delimiter-based decoder; you can customize the delimiter, and the line-based decoder is a special case of it
  • LengthFieldBasedFrameDecoder: length-field-based decoder; if your custom protocol contains a length field, you can use this decoder (a usage sketch follows this list)
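
As an aside, a typical configuration of the length-field decoder (not used in this demo) looks like the following; the constructor parameters describe where the length field sits and whether to strip it:

@Override
protected void initChannel(SocketChannel channel) throws Exception {
	// Frames carry a 4-byte length prefix at offset 0; the prefix is stripped
	// before the frame is handed to the next handler (max frame length 1024).
	channel.pipeline().addLast(new LengthFieldBasedFrameDecoder(1024, 0, 4, 0, 4));
	// ... business handlers ...
}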

Here we choose the delimiter-based decoder.

First, define the delimiter:

public class Config {
	public static final String DATA_PACK_SEPARATOR = "#$&*";
}

In the server's ChannelInitializer, add the decoder to the pipeline:

@Override
protected void initChannel(SocketChannel channel) throws Exception {
	// The decoder must be added to the pipeline before the business handler
	channel.pipeline().addLast(new DelimiterBasedFrameDecoder(1024, Unpooled.copiedBuffer(Config.DATA_PACK_SEPARATOR.getBytes())));
	channel.pipeline().addLast(new NettyServerHandler());
}

The same needs to be added in the client's ChannelInitializer:

@Override
protected void initChannel(SocketChannel channel) throws Exception {
	// The decoder must be added to the pipeline before the business handler
	channel.pipeline().addLast(new DelimiterBasedFrameDecoder(1024, Unpooled.copiedBuffer(Config.DATA_PACK_SEPARATOR.getBytes())));
	channel.pipeline().addLast(mClientHandler);
}

When sending data, append the delimiter to the end of the data:

@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
	LogUtil.log("Server,channelActive");
	for (int i = 0; i < 100; i++) {
		ByteBuf byteBuf = Unpooled.copiedBuffer("Hello, client" + Config.DATA_PACK_SEPARATOR, Charset.forName("utf-8"));
		ctx.writeAndFlush(byteBuf);
	}
}

After running again, you can see that the sticky-packet and packet-splitting problems have been solved.

Heartbeat

In network applications, heartbeat packets are generally used to detect whether a connection is still alive. In Netty, heartbeats can be configured as follows.

In the client's ChannelInitializer, add an IdleStateHandler:

@Override
protected void initChannel(SocketChannel channel) throws Exception {
	// readerIdleTime = 5s, writerIdleTime = 5s, allIdleTime = 10s
	channel.pipeline().addLast(new IdleStateHandler(5, 5, 10));
	// ...
}

In NettyClientHandler, override the userEventTriggered method:

@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
	IdleStateEvent event = (IdleStateEvent) evt;
	LogUtil.log("Client,Idle:" + event.state());
	switch (event.state()) {
	case READER_IDLE:
		break;
	case WRITER_IDLE:
		// Send a heartbeat to the server when the write-idle timeout is reached
		ByteBuf byteBuf = Unpooled.copiedBuffer("Heartbeat^v^" + Config.DATA_PACK_SEPARATOR, Charset.forName("utf-8"));
		ctx.writeAndFlush(byteBuf);
		break;
	case ALL_IDLE:
		break;
	default:
		super.userEventTriggered(ctx, evt);
		break;
	}
}

When the write-idle timeout is reached, a heartbeat message is sent to the server.

After running, the log output is as follows:

2020-4-13 1:22:50--Server, Netty server started successfully, port: 12345
2020-4-13 1:22:51--Client,channelActive
2020-4-13 1:22:51--Client, connected to server successfully
2020-4-13 1:22:51--Server,channelActive
2020-4-13 1:22:51--Client, received message from server: Hello, client
2020-4-13 1:22:56--Client,Idle:WRITER_IDLE
2020-4-13 1:22:56--Server, received message from client: Heartbeat^v^
2020-4-13 1:22:56--Client,Idle:READER_IDLE
2020-4-13 1:23:01--Client,Idle:WRITER_IDLE
2020-4-13 1:23:01--Server, received message from client: Heartbeat^v^
2020-4-13 1:23:01--Client,Idle:READER_IDLE

As you can see, heartbeat packets are sent at the configured interval.

Configure encoder and decoder

In the code above, sending data required converting a String into a ByteBuf. By configuring a string encoder and decoder, strings can be sent directly. The configuration is as follows.

Add the following to the ChannelInitializer on both the server and the client:

@Override
protected void initChannel(SocketChannel channel) throws Exception {
	// ...
	// These must be added to the pipeline before the business handler
	channel.pipeline().addLast("encoder", new StringEncoder());
	channel.pipeline().addLast("decoder", new StringDecoder());
	// ...
}

When sending a message, you can now write the string directly: ctx.writeAndFlush("Heartbeat^v^" + Config.DATA_PACK_SEPARATOR).
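
Putting the pieces together, the order in the pipeline matters: the frame decoder splits the byte stream on the delimiter first, then the string decoder/encoder convert between bytes and String, and the business handler comes last. A sketch of the resulting client ChannelInitializer, assembled from the snippets above (the demo source may differ in detail):

@Override
protected void initChannel(SocketChannel channel) throws Exception {
	channel.pipeline().addLast(new IdleStateHandler(5, 5, 10));
	channel.pipeline().addLast(new DelimiterBasedFrameDecoder(1024,
			Unpooled.copiedBuffer(Config.DATA_PACK_SEPARATOR.getBytes())));
	channel.pipeline().addLast("decoder", new StringDecoder());
	channel.pipeline().addLast("encoder", new StringEncoder());
	channel.pipeline().addLast(mClientHandler);
}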

Source code

At this point, the simplest demo of server-client communication is complete. Source: https://github.com/milovetingting/Samples/tree/master/NettyDemo

Advanced usage

Building on the above, let's implement the following requirements:

  • The client needs to log in to the server
  • After the client logs in successfully, the server can send command messages to the client; after receiving and processing a message, the client needs to report back to the server

Encapsulating the connection

To make the program easier to extend, we extract the client's connection logic: the connection method is defined through an interface, and the concrete connection is implemented by subclasses.

Define the interface

public interface IConnection {

	/**
	 * Connect to the server
	 * 
	 * @param host     server address
	 * @param port     port
	 * @param callback connection callback
	 */
	public void connect(String host, int port, IConnectionCallback callback);

}

A connection callback interface is also needed:

public interface IConnectionCallback {

	/**
	 * Connection succeeded
	 */
	public void onConnected();

}

Concrete connection implementation class

public class NettyConnection implements IConnection {

	private NettyClient mClient;

	@Override
	public void connect(String host, int port, IConnectionCallback callback) {
		if (mClient == null) {
			mClient = new NettyClient(host, port);
			mClient.setConnectionCallBack(callback);
			mClient.connect();
		}
	}

}

To facilitate management of connections, define a connection management class

public class ConnectionManager implements IConnection {

	private static IConnection mConnection;

	private ConnectionManager() {

	}

	static class ConnectionManagerInner {
		private static ConnectionManager INSTANCE = new ConnectionManager();
	}

	public static ConnectionManager getInstance() {
		return ConnectionManagerInner.INSTANCE;
	}

	public static void initConnection(IConnection connection) {
		mConnection = connection;
	}

	private void checkInit() {
		if (mConnection == null) {
			throw new IllegalAccessError("please invoke initConnection first!");
		}
	}

	@Override
	public void connect(String host, int port, IConnectionCallback callback) {
		checkInit();
		mConnection.connect(host, port, callback);
	}

}

Make the connection call:

public class Main {

	public static void main(String[] args) {
		try {
			String host = "127.0.0.1";
			int port = 12345;
			NettyServer server = new NettyServer(port);
			server.run();
			Thread.sleep(1000);
			ConnectionManager.initConnection(new NettyConnection());
			ConnectionManager.getInstance().connect(host, port, new IConnectionCallback() {

				@Override
				public void onConnected() {
					LogUtil.log("Main,onConnected"););
				}
			});
		} catch (Exception e) {
			e.printStackTrace();
		}

	}

}

Before calling connect, you need to call initConnection to specify the concrete connection class.

Message Bean definition

After the connection is successful, the server will send a welcome message to the client. For ease of management, we define a message bean

public class Msg {

	/**
	 * Welcome
	 */
	public static final int TYPE_WELCOME = 0;

	public int type;

	public String msg;

}

The server sends a welcome message

The server sends the message in channelActive:

public class NettyServerHandler extends ChannelInboundHandlerAdapter {

	private ChannelHandlerContextWrapper mChannelHandlerContextWrapper;

	@Override
	public void channelActive(ChannelHandlerContext ctx) throws Exception {
		LogUtil.log("Server,channelActive");
		mChannelHandlerContextWrapper = new ChannelHandlerContextWrapper(ctx);
		MsgUtil.sendWelcomeMsg(mChannelHandlerContextWrapper);
	}
}

Here a ChannelHandlerContextWrapper class is defined to append the message delimiter in one place:

public class ChannelHandlerContextWrapper {

	private ChannelHandlerContext mContext;

	public ChannelHandlerContextWrapper(ChannelHandlerContext context) {
		this.mContext = context;
	}

	/**
	 * Wraps the writeAndFlush method
	 * 
	 * @param object
	 */
	public void writeAndFlush(Object object) {
		mContext.writeAndFlush(object + Config.DATA_PACK_SEPARATOR);
	}

}

Going a step further, a MsgUtil class encapsulates building and sending the welcome message:

public class MsgUtil {

	/**
	 * Send the welcome message
	 * 
	 * @param wrapper
	 */
	public static void sendWelcomeMsg(ChannelHandlerContextWrapper wrapper) {
		Msg msg = new Msg();
		msg.type = Msg.TYPE_WELCOME;
		msg.msg = "Hello, client";
		wrapper.writeAndFlush(Global.sGson.toJson(msg));
	}

}
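
Global.sGson, referenced above, is not shown in the article; presumably it is just a shared Gson instance (assumed sketch):

import com.google.gson.Gson;

public class Global {

	public static final Gson sGson = new Gson();
}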

Client message reception

On the client side, to make message handling easier, we need a way to receive messages. This is done by adding a registerMsgCallback method to the IConnection interface:

public interface IConnection {

	/**
	 * Connect to the server
	 * 
	 * @param host     server address
	 * @param port     port
	 * @param callback connection callback
	 */
	public void connect(String host, int port, IConnectionCallback callback);

	/**
	 * Register a message callback
	 * 
	 * @param callback
	 */
	public void registerMsgCallback(IMsgCallback callback);

}

The IMsgCallback interface also needs to be added:

public interface IMsgCallback {

	/**
	 * Callback invoked when a message is received
	 * 
	 * @param msg
	 */
	public void onMsgReceived(Msg msg);

}

The corresponding implementation class:

public class NettyConnection implements IConnection {

	private NettyClient mClient;

	@Override
	public void connect(String host, int port, IConnectionCallback callback) {
		if (mClient == null) {
			mClient = new NettyClient(host, port);
			mClient.setConnectionCallBack(callback);
			mClient.connect();
		}
	}

	@Override
	public void registerMsgCallback(IMsgCallback callback) {
		if (mClient == null) {
			throw new IllegalAccessError("please invoke connect first!");
		}
		mClient.registerMsgCallback(callback);
	}

}
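
NettyClient.setConnectionCallBack and NettyClient.registerMsgCallback are not listed in the article; they come from the demo source. Roughly, NettyClient stores the connection callback (firing onConnected once the connect future succeeds) and passes the message callback on to its handler, which parses each decoded frame back into a Msg. A sketch of that handler side, assuming Gson and the string decoder configured earlier (the actual source may differ):

import com.google.gson.Gson;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class NettyClientHandler extends ChannelInboundHandlerAdapter {

	private IMsgCallback mMsgCallback;

	public void registerMsgCallback(IMsgCallback callback) {
		this.mMsgCallback = callback;
	}

	@Override
	public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
		// With the delimiter decoder and StringDecoder in the pipeline,
		// msg arrives here as one complete String frame.
		Msg message = new Gson().fromJson((String) msg, Msg.class);
		if (mMsgCallback != null) {
			mMsgCallback.onMsgReceived(message);
		}
	}
}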

Message distribution

On the client side, to make message processing easier, we categorize the message types.

Modify the message bean:

public class Msg {

	/**
	 * Welcome
	 */
	public static final int TYPE_WELCOME = 0;

	/**
	 * Heartbeat
	 */
	public static final int TYPE_HEART_BEAT = 1;

	/**
	 * Login
	 */
	public static final int TYPE_LOGIN = 2;

	public static final int TYPE_COMMAND_A = 3;

	public static final int TYPE_COMMAND_B = 4;

	public static final int TYPE_COMMAND_C = 5;

	public int type;

	public String msg;
}

Assuming messages are processed serially, one at a time, add a MsgQueue class to manage them:

public class MsgQueue {

	private PriorityBlockingQueue<Msg> mQueue;

	private boolean using;

	private MsgQueue() {
		// The priority and time fields are added to the Msg bean later in this article
		mQueue = new PriorityBlockingQueue<>(128, new Comparator<Msg>() {

			@Override
			public int compare(Msg msg1, Msg msg2) {
				int res = msg2.priority - msg1.priority;
				if (res == 0 && msg1.time != msg2.time) {
					return (int) (msg2.time - msg1.time);
				}
				return res;
			}
		});
	}

	public static MsgQueue getInstance() {
		return MsgQueueInner.INSTANCE;
	}

	private static class MsgQueueInner {
		private static final MsgQueue INSTANCE = new MsgQueue();
	}

	/**
	 * Add a message to the message queue
	 * 
	 * @param msg
	 */
	public void enqueueMsg(Msg msg) {
		mQueue.add(msg);
	}

	/**
	 * Take the next message from the message queue
	 * 
	 * @return
	 */
	public synchronized Msg next() {
		if (using) {
			return null;
		}
		Msg msg = mQueue.poll();
		if (msg != null) {
			makeUse(true);
		}
		return msg;
	}

	/**
	 * Mark whether the queue is in use
	 * 
	 * @param use
	 */
	public synchronized void makeUse(boolean use) {
		using = use;
	}

	/**
	 * Whether the next message can be taken
	 * 
	 * @return
	 */
	public synchronized boolean canUse() {
		return !using;
	}

}

Add the message dispatch class MsgDispatcher:

public class MsgDispatcher {

	private static Map<Integer, Class<? extends IMsgHandler>> mHandlerMap;

	static {
		mHandlerMap = new HashMap<>();
		mHandlerMap.put(Msg.TYPE_WELCOME, WelcomeMsgHandler.class);
		mHandlerMap.put(Msg.TYPE_HEART_BEAT, HeartBeatMsgHandler.class);
		mHandlerMap.put(Msg.TYPE_LOGIN, LoginMsgHandler.class);
		mHandlerMap.put(Msg.TYPE_COMMAND_A, CommandAMsgHandler.class);
		mHandlerMap.put(Msg.TYPE_COMMAND_B, CommandBMsgHandler.class);
		mHandlerMap.put(Msg.TYPE_COMMAND_C, CommandCMsgHandler.class);
	}

	public static void dispatch() {
		if (MsgQueue.getInstance().canUse()) {
			Msg msg = MsgQueue.getInstance().next();
			if (msg == null) {
				return;
			}
			dispatch(msg);
		}
	}

	public static void dispatch(Msg msg) {
		try {
			IMsgHandler handler = (IMsgHandler) Class.forName(mHandlerMap.get(msg.type).getName()).newInstance();
			handler.handle(msg);
		} catch (InstantiationException e) {
			e.printStackTrace();
		} catch (IllegalAccessException e) {
			e.printStackTrace();
		} catch (ClassNotFoundException e) {
			e.printStackTrace();
		}
	}

}

Message processing

Define IMsgHandler with the processing method; the concrete implementation is provided by subclasses:

public interface IMsgHandler {

	/**
	 * Handle a message
	 * 
	 * @param msg
	 */
	public void handle(Msg msg);

}

For unified management, define a base class, BaseCommandHandler:

public abstract class BaseCommandHandler implements IMsgHandler {

	@Override
	public void handle(Msg msg) {
		execute(msg);
	}

	public final void execute(Msg msg) {
		LogUtil.log("Client,received command:" + msg);
		doHandle(msg);
		MsgQueue.getInstance().makeUse(false);
		LogUtil.log("Client,report command:" + msg);
		MsgDispatcher.dispatch();
	}

	public abstract void doHandle(Msg msg);

}

In BaseCommandHandler, the execute method performs the steps in order: log that the command was received, process the message, release the message queue, log that the result was reported, and then dispatch the next message. The reporting step here only prints a log; in a real application it could be extracted into an abstract method and implemented by subclasses, as sketched below.
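
One possible shape for that hook, as a sketch only (the LoginMsgHandler below keeps the simpler version that only overrides doHandle):

public abstract class BaseCommandHandler implements IMsgHandler {

	@Override
	public void handle(Msg msg) {
		LogUtil.log("Client,received command:" + msg);
		doHandle(msg);                         // business processing
		report(msg);                           // report the result to the server
		MsgQueue.getInstance().makeUse(false); // release the queue
		MsgDispatcher.dispatch();              // move on to the next queued message
	}

	/** Business handling, implemented by each command handler. */
	public abstract void doHandle(Msg msg);

	/** Reporting back to the server, also left to subclasses in this variation. */
	public abstract void report(Msg msg);
}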

Define subclasses that inherit from BaseCommandHandler:

public class LoginMsgHandler extends BaseCommandHandler {

	@Override
	public void doHandle(Msg msg) {
		LogUtil.log("Client,handle msg:" + msg);
	}

}

Heartbeat messages, welcome messages, and so on can be handled by adding the corresponding handler classes; this is not expanded on here.

Processing when a message is received

public class Main {

	public static void main(String[] args) {
		try {
			String host = "127.0.0.1";
			int port = 12345;
			NettyServer server = new NettyServer(port);
			server.run();
			Thread.sleep(1000);
			ConnectionManager.initConnection(new NettyConnection());
			ConnectionManager.getInstance().connect(host, port, new IConnectionCallback() {

				@Override
				public void onConnected() {
					LogUtil.log("Main,onConnected");

					ConnectionManager.getInstance().registerMsgCallback(new IMsgCallback() {

						@Override
						public void onMsgReceived(Msg msg) {
							MsgQueue.getInstance().enqueueMsg(msg);
							MsgDispatcher.dispatch();
						}
					});
				}
			});
		} catch (Exception e) {
			e.printStackTrace();
		}

	}

}

Client login

Modify the message bean to add the login request and response:

public class Msg {

	/**
	 * Welcome
	 */
	public static final int TYPE_WELCOME = 0;

	/**
	 * Heartbeat
	 */
	public static final int TYPE_HEART_BEAT = 1;

	/**
	 * Login
	 */
	public static final int TYPE_LOGIN = 2;

	public static final int TYPE_COMMAND_A = 3;

	public static final int TYPE_COMMAND_B = 4;

	public static final int TYPE_COMMAND_C = 5;

	public int type;

	public String msg;

	public int priority;

	public long time;

	/**
	 * Login request info
	 * 
	 * @author Administrator
	 *
	 */
	public static class LoginRuquestInfo {
		/**
		 * Username
		 */
		public String user;

		/**
		 * Password
		 */
		public String pwd;

		@Override
		public String toString() {
			return "LoginRuquestInfo [user=" + user + ", pwd=" + pwd + "]";
		}
	}

	/**
	 * Login response info
	 * 
	 * @author Administrator
	 *
	 */
	public static class LoginResponseInfo {

		/**
		 * Login succeeded
		 */
		public static final int CODE_SUCCESS = 0;

		/**
		 * Login failed
		 */
		public static final int CODE_FAILED = 100;

		/**
		 * Response code
		 */
		public int code;

		/**
		 * Response data
		 */
		public String data;

		public static class ResponseData {
			public String token;
		}

		@Override
		public String toString() {
			return "LoginResponseInfo [code=" + code + ", data=" + data + "]";
		}

	}
}

Send login request

public class Main {

	public static void main(String[] args) {
		try {
			String host = "127.0.0.1";
			int port = 12345;
			NettyServer server = new NettyServer(port);
			server.run();
			Thread.sleep(1000);
			ConnectionManager.initConnection(new NettyConnection());
			ConnectionManager.getInstance().connect(host, port, new IConnectionCallback() {

				@Override
				public void onConnected() {
					LogUtil.log("Main,onConnected");

					ConnectionManager.getInstance().registerMsgCallback(new IMsgCallback() {

						@Override
						public void onMsgReceived(Msg msg) {
							MsgQueue.getInstance().enqueueMsg(msg);
							MsgDispatcher.dispatch();
						}
					});

					Msg msg = new Msg();
					msg.type = Msg.TYPE_LOGIN;

					Msg.LoginRuquestInfo request = new LoginRuquestInfo();
					request.user = "wangyz";
					request.pwd = "wangyz";

					Gson gson = new Gson();
					msg.msg = gson.toJson(request);

					ConnectionManager.getInstance().sendMsg(msg);
				}
			});
		} catch (Exception e) {
			e.printStackTrace();
		}

	}

}

Here, Gson is used to serialize the message bean into a JSON string. The sendMsg method called above is added to IConnection and ConnectionManager in the demo source in the same way as connect.
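
For reference, assuming sendMsg serializes the whole Msg with Gson and the wrapper appends the delimiter as shown earlier, the data on the wire for this login request would look roughly like this (field order may vary):

{"type":2,"msg":"{\"user\":\"wangyz\",\"pwd\":\"wangyz\"}","priority":0,"time":0}#$&*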

On the server side, the same changes to the message bean are needed so that the message can be parsed. The server's message dispatching and handling are similar to the client's and are not expanded on here.
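
For illustration only, a server-side login handler could parse the request and reply with a LoginResponseInfo wrapped in a Msg of type TYPE_LOGIN. This is a rough sketch, not taken from the demo source, and the token value is made up:

public class ServerLoginMsgHandler {

	/** Handles a login request and writes the response back through the wrapper. */
	public void handle(Msg request, ChannelHandlerContextWrapper wrapper) {
		Msg.LoginRuquestInfo info = Global.sGson.fromJson(request.msg, Msg.LoginRuquestInfo.class);
		LogUtil.log("Server, login request from user: " + info.user);

		Msg.LoginResponseInfo response = new Msg.LoginResponseInfo();
		response.code = Msg.LoginResponseInfo.CODE_SUCCESS;
		response.data = "{\"token\":\"demo-token\"}";

		Msg reply = new Msg();
		reply.type = Msg.TYPE_LOGIN;
		reply.msg = Global.sGson.toJson(response);
		wrapper.writeAndFlush(Global.sGson.toJson(reply));
	}
}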

Source code

Due to space limitations, the priority handling of commands in the demo, the simulated server issuing commands, and so on are not covered here. For details, see the source code: https://github.com/milovetingting/Samples/tree/master/Netty

Postscript

This article introduced the basic usage of Netty for server-client communication and, on that basis, implemented the handling and reporting of server-issued commands. The demo uses JSON as the communication format; as an optimization, Protobuf could be used instead. Since only the communication flow and a simple encapsulation are shown here, Protobuf is not used. The demo only implements the general flow and may contain untested bugs; treat it as a reference.

End~

Originally published at www.cnblogs.com/milovetingting/p/12689103.html