Netty Network Programming, Beginner Series (3): TCP Packet Sticking and Splitting

Copyright notice: This is an original article by the author; reproduction without permission is prohibited. https://blog.csdn.net/QQB67G8COM/article/details/90268849

TCP packet sticking and splitting:
TCP is a "stream" protocol, and a stream is simply an unbounded run of bytes with no boundaries in it. Picture the data as water in a river: it flows as one continuous body with no dividing lines. The TCP layer knows nothing about the meaning of the upper-layer business data; it divides the stream into packets purely according to the actual state of the TCP buffers. So at the business level, one complete message may be split by TCP into several packets for sending (packet splitting), and several small messages may be bundled together into one larger packet (packet sticking).
Why TCP packet sticking and splitting occur:
1. The application writes more bytes than the socket send buffer can hold.
2. TCP segments the stream according to the MSS.
3. The Ethernet frame payload exceeds the MTU, causing IP fragmentation.
Following mainstream protocol designs, there are three solutions to the sticking/splitting problem:
1. Fixed-length messages: for example, every message is exactly 200 bytes, padded with spaces when shorter.
2. A special character appended to the end of each message as a separator, for example a carriage return.
3. Split the message into a header and a body, with a field in the header carrying the total message length; the receiver reads the length first, then the body, and then runs the business logic.
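The third scheme can be made concrete with a minimal plain-Java sketch, independent of Netty; the 4-byte big-endian length header and the class/method names here are illustrative choices, not part of any library:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class LengthPrefixFraming {

    // Encode one message as a 4-byte big-endian length header followed by the body
    static byte[] encode(String msg) {
        byte[] body = msg.getBytes();
        return ByteBuffer.allocate(4 + body.length).putInt(body.length).put(body).array();
    }

    // Concatenate chunks the way TCP might deliver them stuck together
    static byte[] concat(byte[]... chunks) {
        int total = 0;
        for (byte[] c : chunks) total += c.length;
        ByteBuffer buf = ByteBuffer.allocate(total);
        for (byte[] c : chunks) buf.put(c);
        return buf.array();
    }

    // Recover every complete frame from the stream, however it was split in transit
    static List<String> decode(byte[] stream) {
        List<String> frames = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(stream);
        while (buf.remaining() >= 4) {
            int len = buf.getInt();
            if (buf.remaining() < len) break; // incomplete frame: wait for more bytes
            byte[] body = new byte[len];
            buf.get(body);
            frames.add(new String(body));
        }
        return frames;
    }

    public static void main(String[] args) {
        byte[] stuck = concat(encode("mynameisljj"), encode("whoareyou"), encode("nishuosha??"));
        System.out.println(decode(stuck)); // prints [mynameisljj, whoareyou, nishuosha??]
    }
}
```

Because the receiver always knows exactly how many bytes the next message occupies, it does not matter where TCP cuts or merges the stream.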

How Netty solves the sticking/splitting problem:
1. Delimiter-based framing
2. Fixed-length framing

Example 1 (solving the sticky-packet problem with a custom delimiter):
Server code:

import java.nio.ByteBuffer;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.DelimiterBasedFrameDecoder;
import io.netty.handler.codec.FixedLengthFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class Server {

	public static void main(String[] args) throws Exception{
		//1 Create two event loop groups: boss accepts client connections, worker handles the I/O
		EventLoopGroup bossGroup = new NioEventLoopGroup();
		EventLoopGroup workerGroup = new NioEventLoopGroup();
		
		//2 Create the server bootstrap helper
		ServerBootstrap b = new ServerBootstrap();
		b.group(bossGroup, workerGroup)
		 .channel(NioServerSocketChannel.class)
		 .childHandler(new ChannelInitializer<SocketChannel>() {
			@Override
			protected void initChannel(SocketChannel sc) throws Exception {
				//Custom delimiter and frame decoder (uncomment to enable delimiter-based framing)
//				ByteBuf buf = Unpooled.copiedBuffer("$_".getBytes());
//				sc.pipeline().addLast(new DelimiterBasedFrameDecoder(1024, buf));
				sc.pipeline().addLast(new StringDecoder());
				sc.pipeline().addLast(new ServerHandler());
			}
		 })
		 .option(ChannelOption.SO_BACKLOG, 1024)
		 .childOption(ChannelOption.SO_SNDBUF, 32*1024)
		 .childOption(ChannelOption.SO_RCVBUF, 32*1024)
		 .childOption(ChannelOption.SO_KEEPALIVE, true);
		
		ChannelFuture cf = b.bind(8088).sync();
		
		//Wait until the server channel is closed
		cf.channel().closeFuture().sync();
		
		bossGroup.shutdownGracefully();
		workerGroup.shutdownGracefully();
		
	}
	
	private static class ServerHandler extends ChannelHandlerAdapter {

		@Override
		public void channelActive(ChannelHandlerContext ctx) throws Exception {
			System.out.println(" server channel active... ");
		}

		@Override
		public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
			String request = (String) msg;
			System.out.println("Server :" + request);
			String response = "Server response: " + request + "$_";
			ctx.writeAndFlush(Unpooled.copiedBuffer(response.getBytes()));
		}

		@Override
		public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
			System.out.println(" server channel read complete... ");
		}

		@Override
		public void exceptionCaught(ChannelHandlerContext ctx, Throwable t) throws Exception {
			ctx.close();
		}
	}
	
}

Client code:

import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.DelimiterBasedFrameDecoder;
import io.netty.handler.codec.FixedLengthFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;
import io.netty.util.ReferenceCountUtil;

public class Client {

	public static void main(String[] args) throws Exception {
		EventLoopGroup group = new NioEventLoopGroup();
		Bootstrap b = new Bootstrap();
		b.group(group)
		 .channel(NioSocketChannel.class)
		 .handler(new ChannelInitializer<SocketChannel>() {
			@Override
			protected void initChannel(SocketChannel sc) throws Exception {
				//Custom delimiter and frame decoder (uncomment to enable delimiter-based framing)
//				ByteBuf buf = Unpooled.copiedBuffer("$_".getBytes());
//				sc.pipeline().addLast(new DelimiterBasedFrameDecoder(1024, buf));	
				sc.pipeline().addLast(new StringDecoder());
				sc.pipeline().addLast(new ClientHandler());
			}
		});
		
		ChannelFuture cf = b.connect("127.0.0.1", 8088).sync();
		
		cf.channel().writeAndFlush(Unpooled.wrappedBuffer("mynameisljj$_".getBytes()));
		cf.channel().writeAndFlush(Unpooled.wrappedBuffer("whoareyou$_".getBytes()));
		cf.channel().writeAndFlush(Unpooled.wrappedBuffer("nishuosha??$_".getBytes()));
		
		//Wait until the client channel is closed
		cf.channel().closeFuture().sync();
		group.shutdownGracefully();
		
	}
	
	private static class ClientHandler extends ChannelHandlerAdapter {
		@Override
		public void channelActive(ChannelHandlerContext ctx) throws Exception {
			System.out.println("client channel active... ");
		}

		@Override
		public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
			try {
				System.out.println("Client: " + (String)msg);//the StringDecoder installed earlier in the pipeline has already decoded the ByteBuf into a String
			} finally {
				ReferenceCountUtil.release(msg);
			}
		}

		@Override
		public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
			System.out.println(" client channel read complete... ");
		}

		@Override
		public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
			ctx.close();
		}
	}
}

In the code above, the client sends three messages to the server in quick succession:

cf.channel().writeAndFlush(Unpooled.wrappedBuffer("mynameisljj$_".getBytes()));
cf.channel().writeAndFlush(Unpooled.wrappedBuffer("whoareyou$_".getBytes()));
cf.channel().writeAndFlush(Unpooled.wrappedBuffer("nishuosha??$_".getBytes()));

This produces the sticky-packet problem: three separate requests are sent, but because they are written so quickly, and because TCP is a byte stream, the server treats the data of all three requests as a single read and prints:

server channel active... 
Server :mynameisljj$_whoareyou$_nishuosha??$_       //see that? The three requests were stuck together into one
server channel read complete... 

If the client instead sleeps for one second after each send, the server can recognize each request as a separate packet:

cf.channel().writeAndFlush(Unpooled.wrappedBuffer("mynameisljj$_".getBytes()));
TimeUnit.SECONDS.sleep(1);
cf.channel().writeAndFlush(Unpooled.wrappedBuffer("whoareyou$_".getBytes()));
TimeUnit.SECONDS.sleep(1);
cf.channel().writeAndFlush(Unpooled.wrappedBuffer("nishuosha??$_".getBytes()));

But in a real environment this would be unbearably slow. The proper fix is to add a custom delimiter and the corresponding frame decoder (the lines commented out in the code above):

ByteBuf buf = Unpooled.copiedBuffer("$_".getBytes());
sc.pipeline().addLast(new DelimiterBasedFrameDecoder(1024, buf));	
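To make concrete what the delimiter decoder does, here is a minimal plain-Java sketch of delimiter-based framing. It is a simplification: the real DelimiterBasedFrameDecoder works on ByteBufs and buffers an incomplete trailing frame across reads, while this sketch simply ignores trailing bytes:

```java
import java.util.ArrayList;
import java.util.List;

public class DelimiterFraming {

    // Split a stuck-together stream into frames on the given delimiter;
    // anything after the last delimiter is an incomplete frame and is dropped here
    static List<String> decode(String stream, String delimiter) {
        List<String> frames = new ArrayList<>();
        int from = 0;
        int idx;
        while ((idx = stream.indexOf(delimiter, from)) >= 0) {
            frames.add(stream.substring(from, idx));
            from = idx + delimiter.length();
        }
        return frames;
    }

    public static void main(String[] args) {
        // The exact stuck-together payload the server received above
        String received = "mynameisljj$_whoareyou$_nishuosha??$_";
        System.out.println(decode(received, "$_")); // prints [mynameisljj, whoareyou, nishuosha??]
    }
}
```

Note that the delimiter itself is stripped from each frame, matching the Netty decoder's default behavior, so the server sees "mynameisljj" rather than "mynameisljj$_".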

Example 2 (solving the sticky-packet problem with fixed-length messages):
io.netty.handler.codec
Class FixedLengthFrameDecoder    //Netty's fixed-length frame decoder
A decoder that splits the received bytes into frames of a fixed number of bytes. For example, if you receive the following four fragmented packets:
+---+----+------+----+
| A | BC | DEFG | HI |
+---+----+------+----+
a FixedLengthFrameDecoder(3) will decode them into the following three packets with fixed length:
+-----+-----+-----+
| ABC | DEF | GHI |
+-----+-----+-----+

Using it is simple: just add the decoder to the pipeline, before the StringDecoder, so each fixed-length ByteBuf frame is then decoded into a string:

sc.pipeline().addLast(new FixedLengthFrameDecoder(10));
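The Javadoc example above can be reproduced with a minimal plain-Java sketch of the fixed-length framing logic (again a simplification: the real decoder works on ByteBufs, but the buffering idea is the same):

```java
import java.util.ArrayList;
import java.util.List;

public class FixedLengthFraming {

    // Accumulate arbitrary fragments and emit frames of exactly frameLength characters;
    // leftover characters stay in the buffer waiting for the next fragment
    static List<String> decode(List<String> fragments, int frameLength) {
        List<String> frames = new ArrayList<>();
        StringBuilder buffer = new StringBuilder();
        for (String fragment : fragments) {
            buffer.append(fragment);
            while (buffer.length() >= frameLength) {
                frames.add(buffer.substring(0, frameLength));
                buffer.delete(0, frameLength);
            }
        }
        return frames;
    }

    public static void main(String[] args) {
        // The four fragments from the FixedLengthFrameDecoder Javadoc
        List<String> fragments = List.of("A", "BC", "DEFG", "HI");
        System.out.println(decode(fragments, 3)); // prints [ABC, DEF, GHI]
    }
}
```

The trade-off of this scheme is that every message must be padded out to the fixed length, which wastes bandwidth for short messages.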
