Netty source analysis - Creating the Channel (3)

        Let's start from the Netty server startup code:

private void start() throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
                    .option(ChannelOption.SO_BACKLOG, 128)
                    .option(ChannelOption.SO_KEEPALIVE, true)
                    .handler(new LoggingHandler(LogLevel.INFO))
                    .localAddress(new InetSocketAddress(port))
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(new IdleStateHandler(5, 0, 0, TimeUnit.MINUTES));
                            ch.pipeline().addLast(new ProtobufVarint32FrameDecoder());
                            ch.pipeline().addLast(new ProtobufDecoder(ChannelRequestProto.ChannelRequest.getDefaultInstance()));
                            ch.pipeline().addLast(new ProtobufVarint32LengthFieldPrepender());
                            ch.pipeline().addLast(new ProtobufEncoder());
                            ch.pipeline().addLast(new HeartBeatServerHandler());
                            ch.pipeline().addLast(new XtsCoreServerHandler());
                        }
                    });
            ChannelFuture future = bootstrap.bind().sync();
            future.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully().sync();
            workerGroup.shutdownGracefully().sync();
        }
    }

        Netty creates two event loop groups (EventLoopGroup). This corresponds to the Reactor model mentioned earlier: the first event loop group is responsible for accepting new client connections and registering the connected client Channels on the multiplexer; the second event loop group handles client events such as reads and writes.

        Netty uses ServerBootstrap to start the server with chained (fluent) calls, which is easy to use and elegant. So as we read the source code, we are also reading a well-written piece of work: we can absorb Netty's clever design ideas and apply them in our own code. Below I'll post the source code and interpret and highlight the important parts.

        Let's look at the group method first:

public ServerBootstrap group(EventLoopGroup parentGroup, EventLoopGroup childGroup) {
    super.group(parentGroup); // the first event loop group - we call it the parent group; super.group(...) goes up to AbstractBootstrap and assigns it to the group field
    if (childGroup == null) {
        throw new NullPointerException("childGroup");
    }
    if (this.childGroup != null) {
        throw new IllegalStateException("childGroup set already");
    }
    this.childGroup = childGroup; // the second event loop group - the child group - assigned directly to the member variable
    return this; // returning this is what makes the chained (fluent) style possible
}

       Next is the channel method. It sets the channel type to NioServerSocketChannel (on the client side it would be NioSocketChannel). The instance is created later via reflection; we'll come back to that in detail.

public B channel(Class<? extends C> channelClass) {
    if (channelClass == null) {
        throw new NullPointerException("channelClass");
    }
    return channelFactory(new ReflectiveChannelFactory<C>(channelClass));
    // NioServerSocketChannel is wrapped in a ReflectiveChannelFactory, which is
    // stored in AbstractBootstrap's channelFactory field. Note: this channel is
    // the parent-level channel.
}

      The .option method sets some parameters on the parent channel, which we won't dwell on. The .handler(new LoggingHandler(LogLevel.INFO)) call, of course, sets a handler for the parent event loop group.

.childHandler(new ChannelInitializer<SocketChannel>() { // this sets the handler for the child group's event loops; note that initChannel is NOT called at this point - exactly when it runs will be covered later
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(new IdleStateHandler(5, 0, 0, TimeUnit.MINUTES));
                            ch.pipeline().addLast(new ProtobufVarint32FrameDecoder());
                            ch.pipeline().addLast(new ProtobufDecoder(ChannelRequestProto.ChannelRequest.getDefaultInstance()));
                            ch.pipeline().addLast(new ProtobufVarint32LengthFieldPrepender());
                            ch.pipeline().addLast(new ProtobufEncoder());
                            ch.pipeline().addLast(new HeartBeatServerHandler());
                            ch.pipeline().addLast(new XtsCoreServerHandler());
                        }
                    });

      Now into the main part.

ChannelFuture future = bootstrap.bind().sync(); // our entry point into the source is bind()

     bind() eventually reaches the doBind method:

private ChannelFuture doBind(final SocketAddress localAddress) {
        final ChannelFuture regFuture = initAndRegister();  
        final Channel channel = regFuture.channel();
        if (regFuture.cause() != null) {
            return regFuture;
        }
        ... a large chunk of code omitted
    }

     Enter initAndRegister

final ChannelFuture initAndRegister() {
        Channel channel = null;
        try {
            channel = channelFactory.newChannel(); // this is where the parent-level channel is created
            init(channel);
        } catch (Throwable t) {
            if (channel != null) {
                // channel can be null if newChannel crashed (eg SocketException("too many open files"))
                channel.unsafe().closeForcibly();
                // as the Channel is not registered yet we need to force the usage of the GlobalEventExecutor
                return new DefaultChannelPromise(channel, GlobalEventExecutor.INSTANCE).setFailure(t);
            }
            // as the Channel is not registered yet we need to force the usage of the GlobalEventExecutor
            return new DefaultChannelPromise(new FailedChannel(), GlobalEventExecutor.INSTANCE).setFailure(t);
        }
       ... a large chunk of code omitted
    }

 

 This is why I highlighted earlier that NioServerSocketChannel is wrapped in a ReflectiveChannelFactory: this is where that factory is used. Now step into ReflectiveChannelFactory.
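ReflectiveChannelFactory essentially stores a Class object and instantiates it on demand through the no-argument constructor. Here is a minimal self-contained sketch of that idea; the class name DemoChannelFactory is hypothetical, and Netty's real implementation (io.netty.channel.ReflectiveChannelFactory) differs in detail between versions:

```java
import java.lang.reflect.Constructor;

// Sketch of a reflective factory: hold a Class, create instances via the
// no-arg constructor. DemoChannelFactory is a hypothetical name for illustration.
class DemoChannelFactory<T> {
    private final Constructor<? extends T> constructor;

    DemoChannelFactory(Class<? extends T> clazz) {
        try {
            this.constructor = clazz.getConstructor(); // look up the public no-arg constructor once
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(
                    "Class " + clazz.getSimpleName() + " has no public no-arg constructor", e);
        }
    }

    T newInstance() {
        try {
            return constructor.newInstance(); // reflective instantiation, as newChannel() does
        } catch (Throwable t) {
            throw new IllegalStateException(
                    "Unable to create instance of " + constructor.getDeclaringClass(), t);
        }
    }

    public static void main(String[] args) {
        // StringBuilder stands in for NioServerSocketChannel here.
        DemoChannelFactory<StringBuilder> factory = new DemoChannelFactory<>(StringBuilder.class);
        StringBuilder sb = factory.newInstance();
        System.out.println(sb.getClass().getSimpleName()); // prints "StringBuilder"
    }
}
```

This is the same pattern channelFactory.newChannel() relies on: the bootstrap only ever sees the factory, never a channel constructor call.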

 

ReflectiveChannelFactory instantiates NioServerSocketChannel through its no-argument constructor. OK, so let's look at NioServerSocketChannel's no-arg constructor.

 It passes in a default SelectorProvider, the creator of the multiplexer.

It then uses that provider to call openServerSocketChannel() to create a ServerSocketChannel. That method is NIO internals; interested readers can dig into it themselves.
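The openServerSocketChannel() call is plain JDK NIO, so it can be tried without Netty at all. A small sketch (JDK only) of the same call NioServerSocketChannel's no-arg constructor ends up making:

```java
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.spi.SelectorProvider;

class OpenChannelDemo {
    public static void main(String[] args) throws Exception {
        // Ask the platform's default SelectorProvider for a JDK ServerSocketChannel,
        // just as NioServerSocketChannel does with its default provider.
        ServerSocketChannel ch = SelectorProvider.provider().openServerSocketChannel();
        System.out.println(ch.isOpen()); // prints "true"
        ch.close();
    }
}
```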

At this point we know the (parent-level) channel has been created; now we return to the constructor.

Don't overlook one detail: there is a this here. I missed it at the time, which was careless, and it left one later step that I could not make sense of no matter what; only when I went back and reread carefully did I see it, and I could have kicked myself.

Here it goes on to call another constructor, one that takes arguments.

Following the chain of superclass constructor calls, we arrive at:

 

Here the parent-level channel member variable is set, the interested key is set to 16 (accepting new clients), and the channel is set to non-blocking. The same calls appeared in the plain NIO server in the first article; you probably remember them.

Let's keep following the superclass constructor it calls.

We can see that the Channel is given an ID and a Pipeline is created, with two contexts initialized as head and tail, connected through a linked list.
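To make that head/tail linked-list structure concrete, here is a minimal sketch of the idea; the names are hypothetical, and DefaultChannelPipeline's real contexts additionally carry handlers, executors and much more:

```java
// Sketch of a pipeline as a doubly-linked list of contexts with fixed head
// and tail sentinels; addLast inserts just before the tail, as in Netty.
class PipelineSketch {
    static class Context {
        final String name;
        Context prev, next;
        Context(String name) { this.name = name; }
    }

    final Context head = new Context("head");
    final Context tail = new Context("tail");

    PipelineSketch() {
        head.next = tail; // an empty pipeline is just head <-> tail
        tail.prev = head;
    }

    void addLast(String name) {
        Context ctx = new Context(name);
        Context prev = tail.prev;  // current last real context (or head)
        prev.next = ctx;
        ctx.prev = prev;
        ctx.next = tail;
        tail.prev = ctx;
    }

    String names() {
        StringBuilder sb = new StringBuilder();
        for (Context c = head; c != null; c = c.next) {
            if (sb.length() > 0) sb.append(" -> ");
            sb.append(c.name);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        PipelineSketch p = new PipelineSketch();
        p.addLast("decoder");
        p.addLast("handler");
        System.out.println(p.names()); // prints "head -> decoder -> handler -> tail"
    }
}
```

The sentinel head and tail are why a freshly created channel already has a working (if empty) pipeline before any user handlers are added.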

While we're here, let's briefly review the Pipeline. Take a look at the official ChannelPipeline documentation.

It explains that the Pipeline is a list of handlers (interceptors) that process a Channel's inbound and outbound operations. The official docs also give a table, which I've included as well.

The figure below shows how I/O read and write events are passed between the handlers in the Pipeline. This happens through the ChannelHandlerContext, e.g. ChannelHandlerContext#fireChannelRead(Object) and ChannelHandlerContext#write(Object).

Anyone with some Netty experience will grasp this at a glance; we'll expand on the Pipeline when the design comes up later. For now, back to NioServerSocketChannel's parameterized constructor, and onward.

Here a configuration class, an inner class, is created for the channel we just created, passing in the channel and the socket.

Keep following the calls down and we reach this:

Here a small memory allocator is passed in; in other words, an allocator is initialized for this channel.

OK, let's say a few words about this allocator; we'll cover it in detail in the installment on Netty's memory model.

 

The constructor takes three default values, which state that the default allocated buffer size is 1024, the minimum 64 and the maximum 65536.

Reading through the whole class, we find a very important static block.

First, the multiples of 16 in [16, 512) are appended to sizeTable in order: 16, 32, 48, ..., 496.
Then, starting from 512, values of the form 512 * 2^N are appended, doubling each time until the value exceeds the int limit (2^31 - 1).
Finally a static constant array SIZE_TABLE is built with sizeTable's length, and the list elements are copied into it. Since a List is ordered, the values are assigned in insertion order starting at index 0, so SIZE_TABLE is a predefined array of allocatable buffer sizes in ascending order. AdaptiveRecvByteBufAllocator's job is to automatically adapt the buffer size used for each read event: whenever the buffer size needs adjusting, a value is picked from SIZE_TABLE according to the adjustment logic and a new buffer of that size is created.
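The static block described above can be reproduced as a small self-contained program. This is a paraphrase of AdaptiveRecvByteBufAllocator's static initializer, not the class itself; the real class has additional fields and index-lookup logic:

```java
import java.util.ArrayList;
import java.util.List;

class SizeTableDemo {
    static final int[] SIZE_TABLE;

    static {
        List<Integer> sizeTable = new ArrayList<>();
        // Multiples of 16 in [16, 512): 16, 32, 48, ..., 496
        for (int i = 16; i < 512; i += 16) {
            sizeTable.add(i);
        }
        // Powers of two from 512 upward, until i <<= 1 overflows int and goes negative
        for (int i = 512; i > 0; i <<= 1) {
            sizeTable.add(i);
        }
        // Copy the ordered list into the static array, index 0 upward
        SIZE_TABLE = new int[sizeTable.size()];
        for (int i = 0; i < SIZE_TABLE.length; i++) {
            SIZE_TABLE[i] = sizeTable.get(i);
        }
    }

    public static void main(String[] args) {
        System.out.println(SIZE_TABLE[0]);                     // 16
        System.out.println(SIZE_TABLE[30]);                    // 496 (last multiple of 16)
        System.out.println(SIZE_TABLE[31]);                    // 512 (first power of two)
        System.out.println(SIZE_TABLE[SIZE_TABLE.length - 1]); // 1073741824 (2^30, last before overflow)
    }
}
```

Note how the default sizes 64, 1024 and 65536 mentioned above all appear in this table, which is what lets the allocator step between them by index.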

That's enough for a first pass; we'll go into more detail later.
 
And with that, the Channel has been created.

 

 


Origin www.cnblogs.com/huxipeng/p/10747993.html