Source: http://flychao88.iteye.com/blog/1553058
See also: http://www.360doc.com/content/12/1120/22/203871_249194900.shtml
NioServerSocketChannelFactory creates the server-side ServerSocketChannel and performs non-blocking I/O on multiple threads. Like Mina, the design follows the Reactor pattern. bossExecutor and workerExecutor are two thread pools: bossExecutor accepts client connections, while workerExecutor performs the non-blocking I/O operations, mainly read and write.
    package netty;

    import org.jboss.netty.bootstrap.ServerBootstrap;
    import org.jboss.netty.channel.ChannelFactory;
    import org.jboss.netty.channel.ChannelPipeline;
    import org.jboss.netty.channel.ChannelPipelineFactory;
    import org.jboss.netty.channel.Channels;
    import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
    import org.jboss.netty.handler.codec.string.StringDecoder;
    import org.jboss.netty.handler.codec.string.StringEncoder;

    import java.net.InetSocketAddress;
    import java.util.concurrent.Executors;

    /**
     * Created by IntelliJ IDEA.
     * User: flychao88
     * Date: 12-6-6
     * Time: 10:14 AM
     */
    public class DiscardServer {
        public static void main(String[] args) throws Exception {
            ChannelFactory factory = new NioServerSocketChannelFactory(
                    Executors.newCachedThreadPool(),   // boss pool
                    Executors.newCachedThreadPool());  // worker pool
            ServerBootstrap bootstrap = new ServerBootstrap(factory);
            bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
                public ChannelPipeline getPipeline() {
                    ChannelPipeline pipeline = Channels.pipeline();
                    pipeline.addLast("encode", new StringEncoder());
                    pipeline.addLast("decode", new StringDecoder());
                    pipeline.addLast("handler", new DiscardServerHandler());
                    return pipeline;
                }
            });
            bootstrap.setOption("child.tcpNoDelay", true);
            bootstrap.setOption("child.keepAlive", true);
            bootstrap.bind(new InetSocketAddress(8080));
        }
    }
    package netty;

    import org.jboss.netty.channel.*;

    public class DiscardServerHandler extends SimpleChannelUpstreamHandler {
        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
            // the StringDecoder earlier in the pipeline has already turned the bytes into a String
            System.out.println("Server received: " + e.getMessage());
        }

        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
            e.getCause().printStackTrace();
            Channel ch = e.getChannel();
            ch.close();
        }
    }
    package netty;

    import org.jboss.netty.bootstrap.ClientBootstrap;
    import org.jboss.netty.channel.ChannelFactory;
    import org.jboss.netty.channel.ChannelPipeline;
    import org.jboss.netty.channel.ChannelPipelineFactory;
    import org.jboss.netty.channel.Channels;
    import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
    import org.jboss.netty.handler.codec.string.StringDecoder;
    import org.jboss.netty.handler.codec.string.StringEncoder;

    import java.net.InetSocketAddress;
    import java.util.concurrent.Executors;

    public class TimeClient {
        public static void main(String[] args) throws Exception {
            ChannelFactory factory = new NioClientSocketChannelFactory(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool());
            ClientBootstrap bootstrap = new ClientBootstrap(factory);
            bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
                public ChannelPipeline getPipeline() {
                    ChannelPipeline pipeline = Channels.pipeline();
                    pipeline.addLast("encode", new StringEncoder());
                    pipeline.addLast("decode", new StringDecoder());
                    pipeline.addLast("handler", new TimeClientHandler());
                    return pipeline;
                }
            });
            // note: client-side options have no "child." prefix
            bootstrap.setOption("tcpNoDelay", true);
            bootstrap.setOption("keepAlive", true);
            bootstrap.connect(new InetSocketAddress("127.0.0.1", 8080));
        }
    }
    package netty;

    import org.jboss.netty.channel.*;

    public class TimeClientHandler extends SimpleChannelUpstreamHandler {
        @Override
        public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
            // the StringEncoder in the pipeline turns this String into bytes
            e.getChannel().write("abcd");
        }

        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
            e.getChannel().close();
        }

        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
            e.getCause().printStackTrace();
            e.getChannel().close();
        }
    }
II. Server startup and the client connect process
1. Server startup
bootstrap.bind(...) -> triggers a ServerSocketChannel OPEN event (sendUpstream) -> the OPEN event is caught and channel.bind is called -> Channels.bind(...) issues the bind command (sendDownstream) -> the PipelineSink handles it -> the socket is bound and the boss thread is started.
Bootstrap.bind involves two factories: NioServerSocketChannelFactory and ChannelPipelineFactory. NioServerSocketChannelFactory holds the two thread pools, bossExecutor and workerExecutor; workerExecutor runs, by default, processors × 2 NioWorker threads.
2. Accepting connections on the server
After the boss thread starts, it listens for accept events. Each accepted connection is wrapped as a task and put into the registerTaskQueue of one of the NioWorker threads.
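The handoff through registerTaskQueue can be sketched with plain java.nio. This is a hypothetical simplification, not Netty's actual NioWorker code: only the worker thread ever touches the Selector; the boss just enqueues a task and wakes the selector up.

```java
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch (assumed simplification of NioWorker): the worker owns the Selector
// and a registerTaskQueue; other threads never register channels directly.
class WorkerSketch implements Runnable {
    final Queue<Runnable> registerTaskQueue = new ConcurrentLinkedQueue<Runnable>();
    final Selector selector;
    volatile boolean shutdown;

    WorkerSketch() throws IOException {
        selector = Selector.open();
    }

    // called from the boss thread: enqueue only, never touch the Selector state
    void register(Runnable registerTask) {
        registerTaskQueue.offer(registerTask);
        selector.wakeup(); // break out of select() so the task runs promptly
    }

    public void run() {
        try {
            while (!shutdown) {
                selector.select(1000);
                // drain pending registrations before handling ready keys
                Runnable task;
                while ((task = registerTaskQueue.poll()) != null) {
                    task.run(); // e.g. socketChannel.register(selector, OP_READ)
                }
                // ... process selector.selectedKeys() here ...
            }
            selector.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```

The wakeup() call matters: without it, a freshly enqueued task could sit in the queue for up to a full select timeout before the worker notices it.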
3. Receiving and processing data on the server
NioWorker.run() -> NioWorker.processSelectedKeys() -> NioWorker.read() wraps the bytes read from the SocketChannel into a ChannelBuffer -> fireMessageReceived(channel, buffer) raises an upstream event -> the handlers registered in the pipeline process it.
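The read step can be condensed into a JDK-only sketch. The names here are hypothetical: a ByteBuffer and a plain callback stand in for ChannelBuffer and the fireMessageReceived upstream event.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.util.function.Consumer;

// Sketch (assumed condensation of NioWorker.read()): drain the channel into a
// buffer and hand the accumulated bytes to a messageReceived-style callback.
class ReadSketch {
    static void readOnce(ReadableByteChannel ch, Consumer<byte[]> fireMessageReceived)
            throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(1024);
        int n = ch.read(buf); // a non-blocking read in the real worker
        if (n > 0) {
            buf.flip();
            byte[] msg = new byte[buf.remaining()];
            buf.get(msg);
            fireMessageReceived.accept(msg); // upstream event to the pipeline handlers
        }
    }
}
```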
4. Client connect
As with server startup, a NioClientSocketChannelFactory and a ChannelPipelineFactory are created. Unlike the server side, the client's boss thread does not listen on a port: locally initiated connection requests are wrapped as tasks and put into a registerTaskQueue, and the boss consumes the tasks from that queue.
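The non-blocking connect that the client boss completes can be sketched with the JDK alone. This is an illustrative simplification, not Netty's code: connect() is issued in non-blocking mode, then a selector waits for OP_CONNECT and calls finishConnect() to finish the handshake.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

// Sketch (assumed simplification of the client boss's role).
class ConnectSketch {
    static SocketChannel connect(InetSocketAddress addr) throws IOException {
        SocketChannel ch = SocketChannel.open();
        ch.configureBlocking(false);
        ch.connect(addr); // usually returns false: connection in progress
        Selector sel = Selector.open();
        ch.register(sel, SelectionKey.OP_CONNECT);
        while (!ch.isConnected()) {
            sel.select(1000);
            for (SelectionKey key : sel.selectedKeys()) {
                if (key.isConnectable()) {
                    ch.finishConnect(); // completes the TCP handshake
                }
            }
            sel.selectedKeys().clear();
        }
        sel.close();
        return ch;
    }
}
```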
III. From: http://fbi.taobao.org/?p=86
The server side is broadly similar to the client side. When creating the NioServerSocketChannelFactory you supply two kinds of threads: boss and worker. Each listening port gets its own boss thread; if the server listens on ports 80 and 443, there will be two boss threads. Once a connection is accepted, it is handed over to a worker thread.
The order of events during server startup:
UpStream.ChannelState.OPEN -> DownStream.ChannelState.BOUND (needs binding) -> UpStream.ChannelState.BOUND (bound) -> DownStream.CONNECTED (needs connecting; in effect this means registering with the Selector) -> UpStream.CONNECTED (connected)
To open the ServerSocket listener, Netty again uses the pipeline-and-handlers approach: it builds an internal DefaultPipeline and adds an upstream Binder handler.
    ChannelHandler binder = new Binder(localAddress, futureQueue);
    ChannelHandler parentHandler = getParentHandler();
    ChannelPipeline bossPipeline = pipeline();
    bossPipeline.addLast("binder", binder);
After the channel is created, an UpStream.ChannelState.OPEN event is fired:
    channel.getPipeline().sendUpstream(
            new UpstreamChannelStateEvent(
                    channel, ChannelState.OPEN, Boolean.TRUE));
The Binder handler registered above handles this event and fires a downstream BOUND event, indicating that the channel should be bound to the given address; the bind logic then runs:
    // NioServerSocketPipelineSink
    private void bind(
            NioServerSocketChannel channel, ChannelFuture future,
            SocketAddress localAddress) {
        boolean bound = false;
        boolean bossStarted = false;
        try {
            channel.socket.socket().bind(localAddress,
                    channel.getConfig().getBacklog());
            bound = true;
            future.setSuccess();
            fireChannelBound(channel, channel.getLocalAddress());
            // take a boss thread and hand the channel to the Boss class
            Executor bossExecutor =
                    ((NioServerSocketChannelFactory) channel.getFactory()).bossExecutor;
            DeadLockProofWorker.start(bossExecutor,
                    new ThreadRenamingRunnable(new Boss(channel),
                            "New I/O server boss #" + id + " (" + channel + ')'));
            bossStarted = true;
        } catch (Throwable t) {
            future.setFailure(t);
            fireExceptionCaught(channel, t);
        } finally {
            if (!bossStarted && bound) {
                close(channel, future);
            }
        }
    }
In the Boss class, selector registration is similar to the client side, except the ServerChannel registers only for ACCEPT events:
    Boss(NioServerSocketChannel channel) throws IOException {
        this.channel = channel;
        selector = Selector.open();
        boolean registered = false;
        try {
            channel.socket.register(selector, SelectionKey.OP_ACCEPT);
            registered = true;
        } finally {
            if (!registered) {
                closeSelector();
            }
        }
        channel.selector = selector;
    }
When a new connection arrives, the boss only accepts it and hands it to the worker pool; the new SocketChannel is registered with one of the workers, and from that point on everything is identical to the client side:
    public void run() {
        final Thread currentThread = Thread.currentThread();
        channel.shutdownLock.lock();
        try {
            for (;;) {
                try {
                    if (selector.select(1000) > 0) {
                        selector.selectedKeys().clear();
                    }
                    // accept connections in a loop until no new connection is ready
                    for (;;) {
                        SocketChannel acceptedSocket = channel.socket.accept();
                        if (acceptedSocket == null) {
                            break;
                        }
                        registerAcceptedChannel(acceptedSocket, currentThread);
                    }
                } catch (Throwable t) {
                    ……
                }
            }
        } finally {
            channel.shutdownLock.unlock();
            closeSelector();
        }
    }
    private void registerAcceptedChannel(SocketChannel acceptedSocket, Thread currentThread) {
        try {
            ChannelPipeline pipeline =
                    channel.getConfig().getPipelineFactory().getPipeline();
            // pick a worker, then register the new channel (connection) with it
            NioWorker worker = nextWorker();
            NioAcceptedSocketChannel acceptChannel = new NioAcceptedSocketChannel(
                    channel.getFactory(), pipeline, channel,
                    NioServerSocketPipelineSink.this, acceptedSocket,
                    worker, currentThread);
            worker.register(acceptChannel, null);
        } catch (Exception e) {
            ……
        }
    }
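The nextWorker() call above is a simple round-robin choice over the worker array. A sketch of the idea (the generic pool class is illustrative; Netty's version lives inside NioServerSocketPipelineSink):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch (assumed shape of nextWorker()): accepted channels are spread over a
// fixed worker array round-robin, so load balances without any coordination.
class WorkerPool<W> {
    private final W[] workers;
    private final AtomicInteger workerIndex = new AtomicInteger();

    WorkerPool(W[] workers) {
        this.workers = workers;
    }

    W nextWorker() {
        // abs of the remainder keeps the index non-negative even after the
        // counter wraps past Integer.MAX_VALUE
        return workers[Math.abs(workerIndex.getAndIncrement() % workers.length)];
    }
}
```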
A few things to note:
1. It is better not to let the boss and the workers share one thread pool: the boss would then not only handle accepts, but some channels could also end up registered on the boss's selector, leaving the boss busier than any worker.
2. Because a channel's I/O reads and its handlers run on the same thread, a handler that takes too long delays the processing of subsequent events (there are separate solutions for this).
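One of the "separate solutions" referred to in point 2 is moving slow handler work off the I/O thread onto an application thread pool; Netty 3 ships ExecutionHandler (typically with OrderedMemoryAwareThreadPoolExecutor) for exactly this. A minimal JDK-only sketch of the idea, with hypothetical names:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch (same idea as ExecutionHandler, not its actual code): the I/O thread
// only enqueues the work; a separate application pool runs the slow handler
// logic, so the selector loop returns to select() immediately.
class OffloadSketch {
    private final ExecutorService appPool = Executors.newFixedThreadPool(4);

    Future<String> messageReceived(final String msg) {
        // returns at once; the I/O thread goes straight back to the selector
        return appPool.submit(() -> {
            Thread.sleep(50); // stand-in for slow business logic
            return "handled:" + msg;
        });
    }

    void shutdown() {
        appPool.shutdown();
    }
}
```

The trade-off is that handler ordering per channel is no longer free; that is what Netty's ordered executor variant restores.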