Netty In Depth, Part 2: Server Startup

Using the echo server shipped with Netty as the entry point for analysis. The EchoServer code:

public void run() {
    // Construct a NioServerSocketChannelFactory and initialize the bootstrap
    ServerBootstrap bootstrap = new ServerBootstrap(
            new NioServerSocketChannelFactory(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool()));

    // Create a custom ChannelPipelineFactory
    bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
        public ChannelPipeline getPipeline() throws Exception {
            return Channels.pipeline(new EchoServerHandler());
        }
    });

    // Bind the port and start the server
    bootstrap.bind(new InetSocketAddress(port));
}
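The EchoServerHandler referenced above is not shown in the original post. A minimal version, written against the documented Netty 3.x `SimpleChannelUpstreamHandler` API (this is a sketch of the usual echo handler, not the original author's code), would look roughly like this:

```java
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class EchoServerHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        // Write the received buffer straight back to the client
        e.getChannel().write(e.getMessage());
    }
}
```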

The server startup process:

I. ServerBootstrap configuration and NioServerSocketChannelFactory initialization

1. Construct a NioWorkerPool and start the worker threads (default count: 2 * number of CPU cores)

public NioServerSocketChannelFactory(
        Executor bossExecutor, Executor workerExecutor,
        int workerCount) {
    this(bossExecutor, new NioWorkerPool(workerExecutor, workerCount));
}

2. Create the worker array

workers = new AbstractNioWorker[workerCount];

for (int i = 0; i < workers.length; i++) {
    workers[i] = createWorker(workerExecutor);
}
this.workerExecutor = workerExecutor;

protected NioWorker createWorker(Executor executor) {
    return new NioWorker(executor);
}
3. Open a selector

AbstractNioWorker(Executor executor) {
    this.executor = executor;
    openSelector();
}

4. Start the worker thread

selector = Selector.open();
...
DeadLockProofWorker.start(executor,
        new ThreadRenamingRunnable(this, "New I/O worker #" + id));
...
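Each started worker then runs a classic NIO event loop: select, drain the task queue (channel registrations and the like), then dispatch the ready keys. A simplified JDK-only sketch of that loop shape (my own illustration of the pattern, not Netty's actual code):

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

public class MiniEventLoop implements Runnable {
    final Selector selector;
    final Queue<Runnable> taskQueue = new ConcurrentLinkedQueue<Runnable>();
    final AtomicBoolean shutdown = new AtomicBoolean(false);

    MiniEventLoop() throws IOException {
        selector = Selector.open(); // same call the worker makes
    }

    // Tasks are queued and the selector is woken up, so registrations
    // and I/O all happen on the single loop thread.
    void execute(Runnable task) {
        taskQueue.add(task);
        selector.wakeup();
    }

    public void run() {
        while (!shutdown.get()) {
            try {
                selector.select(500);          // wait for I/O or a wakeup
                Runnable task;
                while ((task = taskQueue.poll()) != null) {
                    task.run();                // run pending registrations etc.
                }
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    it.next();                 // a real worker dispatches read/write here
                    it.remove();
                }
            } catch (IOException e) {
                break;
            }
        }
        try {
            selector.close();
        } catch (IOException ignored) {
        }
    }

    public static void main(String[] args) throws Exception {
        MiniEventLoop loop = new MiniEventLoop();
        Thread t = new Thread(loop, "mini-worker");
        t.start();
        loop.execute(new Runnable() {
            public void run() {
                System.out.println("task ran on " + Thread.currentThread().getName());
            }
        });
        Thread.sleep(200);
        loop.shutdown.set(true);
        loop.selector.wakeup();
        t.join();
    }
}
```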

5. Initialize the NioServerSocketPipelineSink and register the workerPool with it

sink = new NioServerSocketPipelineSink(workerPool);

II. The bind process

1. Construct an upstream handler called Binder, used to capture the bind event

ChannelHandler binder = new Binder(localAddress, futureQueue);

2. Construct a default ChannelPipeline to serve as the boss channel's pipeline

ChannelPipeline bossPipeline = pipeline();

3. Add the binder to the pipeline

bossPipeline.addLast("binder", binder);

4. Create the boss channel with the NioServerSocketChannelFactory constructed earlier

Channel channel = getFactory().newChannel(bossPipeline);
 

a. In the parent class AbstractChannel, attach the pipeline to the channel

pipeline.attach(this, sink);

b. Open a ServerSocketChannel

socket = ServerSocketChannel.open();

c. Switch the socket to non-blocking mode

socket.configureBlocking(false);
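Steps b and c are plain JDK NIO. A quick standalone check of what non-blocking mode means here: with no pending connection, `accept()` returns null immediately instead of blocking (the class name is mine, for illustration):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingAcceptDemo {
    public static SocketChannel tryAccept() throws IOException {
        ServerSocketChannel socket = ServerSocketChannel.open();
        socket.configureBlocking(false);                // same calls as steps b and c
        socket.socket().bind(new InetSocketAddress(0)); // port 0 = any free port
        try {
            // Non-blocking mode: returns null right away, nobody has connected yet
            return socket.accept();
        } finally {
            socket.close();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("pending connection: " + tryAccept());
    }
}
```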
 

d. Fire the channelOpen event

fireChannelOpen(this);
 

e. The event travels through the boss channel's pipeline, which here contains only a single handler: the Binder created earlier

channel.getPipeline().sendUpstream(
                new UpstreamChannelStateEvent(
                        channel, ChannelState.OPEN, Boolean.TRUE));
 

f. The Binder handles the channelOpen event

1. Associate the ChannelPipelineFactory registered earlier on the ServerBootstrap with the boss channel

 evt.getChannel().getConfig().setPipelineFactory(getPipelineFactory());
 

2. The configured options are grouped into parent and child options; the parent options are applied to the boss channel

evt.getChannel().getConfig().setOptions(parentOptions);
 

3. Call bind on the previously opened ServerSocketChannel

evt.getChannel().bind(localAddress)
 

4. The bind in the utility class Channels actually just sends a bind downstream event

channel.getPipeline().sendDownstream(new DownstreamChannelStateEvent(
                channel, future, ChannelState.BOUND, localAddress));
 

5. Because the boss channel's pipeline contains only the Binder, an upstream handler, the bind event falls straight through to the sink at the bottom of the pipeline, which here is the NioServerSocketPipelineSink

6. The NioServerSocketPipelineSink handles the bind event

public void eventSunk(
        ChannelPipeline pipeline, ChannelEvent e) throws Exception {
    Channel channel = e.getChannel();
    if (channel instanceof NioServerSocketChannel) {
        handleServerSocket(e);
    } else if (channel instanceof NioSocketChannel) {
        handleAcceptedSocket(e);
    }
}
 

7. Since this is the server side, the channel is a NioServerSocketChannel

8. The BOUND event is handled

case BOUND:
    if (value != null) {
        bind(channel, future, (SocketAddress) value);
    } else {
        close(channel, future);
    }
    break;
 

9. Get the channel's underlying socket and perform the actual bind

channel.socket.socket().bind(localAddress, channel.getConfig().getBacklog());
 

10. Once the bind succeeds, fire a BOUND upstream event

channel.getPipeline().sendUpstream(
                new UpstreamChannelStateEvent(
                        channel, ChannelState.BOUND, localAddress));
 

11. The Binder receives this BOUND upstream event but does nothing with it, since it exposes no hook for this event

12. Start the Boss thread, which accepts new connections

Executor bossExecutor =
        ((NioServerSocketChannelFactory) channel.getFactory()).bossExecutor;
DeadLockProofWorker.start(bossExecutor,
        new ThreadRenamingRunnable(new Boss(channel),
                "New I/O server boss #" + id + " (" + channel + ')'));
 

13. In the Boss constructor, a selector is created and the already-bound channel is registered with it for OP_ACCEPT

selector = Selector.open();
...
channel.socket.register(selector, SelectionKey.OP_ACCEPT);
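The Boss loop is then a select-accept cycle on that single key. A JDK-only sketch of the mechanism (my own illustration; Netty's real Boss additionally hands each accepted channel off to a worker):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class MiniBoss {
    // Waits for one incoming connection via OP_ACCEPT, like one Boss iteration.
    public static boolean acceptOne(int timeoutMillis) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(0));
        server.register(selector, SelectionKey.OP_ACCEPT); // same registration as step 13

        // Simulate a client connecting to ourselves
        SocketChannel client = SocketChannel.open(
                new InetSocketAddress("127.0.0.1", server.socket().getLocalPort()));
        try {
            if (selector.select(timeoutMillis) == 0) {
                return false;                        // nothing became acceptable
            }
            SocketChannel accepted = server.accept(); // a real Boss hands this to a worker
            boolean ok = accepted != null;
            if (accepted != null) {
                accepted.close();
            }
            return ok;
        } finally {
            client.close();
            server.close();
            selector.close();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("accepted: " + acceptOne(2000));
    }
}
```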
 

g. Because the boss channel's pipeline has only one handler, event processing ends at the Binder

h. Offer the bind future to the blocking queue as a success signal

boolean finished = futureQueue.offer(evt.getChannel().bind(localAddress));
 

5. The main thread takes that signal from the blocking queue; if the bind failed, an exception is thrown

future = futureQueue.poll(Integer.MAX_VALUE, TimeUnit.SECONDS);
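Steps h and 5 together are a simple cross-thread handoff: the Binder, running on the event side, offers the bind result into a blocking queue, and the thread that called bootstrap.bind() blocks polling it. The pattern in isolation (class and method names are mine; a String stands in for the ChannelFuture):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BindHandoff {
    // The caller blocks until the event-side thread reports the bind result.
    public static String awaitBindResult() throws InterruptedException {
        final BlockingQueue<String> futureQueue = new LinkedBlockingQueue<String>();

        // Stand-in for the Binder running on another thread (step h)
        Thread binder = new Thread(new Runnable() {
            public void run() {
                futureQueue.offer("bind succeeded"); // offer the result
            }
        });
        binder.start();

        // Stand-in for the calling thread (step 5): poll with a huge timeout
        return futureQueue.poll(Integer.MAX_VALUE, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(awaitBindResult());
    }
}
```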
 

6. The bind is complete and the server has started: the Boss and worker threads are all running, and the server is ready to serve clients.


Reposted from iwinit.iteye.com/blog/1743941