Deep understanding of NIO and Epoll

IO model

The IO model describes the kind of channel used to send and receive data. Java supports three network programming IO models: BIO, NIO, and AIO.

BIO (Blocking IO)

This is a synchronous blocking model: the server dedicates one thread to each client connection.
Code example:

package com.atzxm.bio;

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

/**
 * @author zhouximin
 * @create 2021-01-17 7:13 PM
 */
public class SocketServer {

    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = new ServerSocket(9000);
        while (true) {
            System.out.println("Waiting for a connection...");
            // Blocking call: returns only once a client connects
            Socket clientSocket = serverSocket.accept();
            System.out.println("A client has connected...");
            handler(clientSocket);

            /*new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        handler(clientSocket);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }).start();*/
        }
    }

    private static void handler(Socket clientSocket) throws IOException {
        byte[] bytes = new byte[1024];
        System.out.println("About to read...");
        // Receive data from the client; this blocks until data is available
        int read = clientSocket.getInputStream().read(bytes);
        System.out.println("Read finished...");
        if (read != -1) {
            System.out.println("Received from client: " + new String(bytes, 0, read));
        }
        clientSocket.getOutputStream().write("HelloClient".getBytes());
        clientSocket.getOutputStream().flush();
    }
}
  • Disadvantages:
    • The read call in the IO code is a blocking operation. If a connection performs no reads or writes, its thread stays blocked, wasting resources.
    • With many connections the server ends up with too many threads and too much pressure, e.g. the C10K problem.
  • Application scenario
    • BIO suits a small, fixed number of connections. It demands more server resources, but the program is simple and easy to understand.
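For completeness, a minimal client can exercise the server above. This is a sketch; `BioClient`, the loopback host, and the message text are illustrative assumptions not present in the original:

```java
import java.io.*;
import java.net.*;

public class BioClient {

    // Opens a blocking connection, sends one message, and returns the reply.
    public static String sendAndReceive(String host, int port, String msg) throws IOException {
        try (Socket socket = new Socket(host, port)) {
            socket.getOutputStream().write(msg.getBytes());
            socket.getOutputStream().flush();
            byte[] buf = new byte[1024];
            int n = socket.getInputStream().read(buf); // blocks until the server replies
            return n == -1 ? "" : new String(buf, 0, n);
        }
    }

    public static void main(String[] args) throws IOException {
        // Assumes the SocketServer above is running on port 9000
        System.out.println(sendAndReceive("127.0.0.1", 9000, "HelloServer"));
    }
}
```

Because both `accept()` and `read()` block, each such client occupies one server thread for the whole exchange.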

NIO (Non Blocking IO)

Synchronous non-blocking: the server uses one thread to handle multiple requests (connections). Connections from clients are registered on a multiplexer (selector), and the multiplexer polls them, processing only those connections that have IO requests. Introduced in JDK 1.4.
Application scenarios:
NIO suits architectures with many, relatively short-lived connections (light operations), such as chat servers, bullet-screen (danmu) systems, and inter-server communication; the programming is more complex.
NIO non-blocking code example:

package com.atzxm.nio;

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class NioServer {

    // Holds the client connections
    static List<SocketChannel> channelList = new ArrayList<>();

    public static void main(String[] args) throws IOException {
        // Create an NIO ServerSocketChannel, analogous to BIO's ServerSocket
        ServerSocketChannel serverSocket = ServerSocketChannel.open();
        serverSocket.socket().bind(new InetSocketAddress(9000));
        // Put the ServerSocketChannel into non-blocking mode
        serverSocket.configureBlocking(false);
        System.out.println("Server started");

        while (true) {
            // In non-blocking mode accept() does not block; otherwise it would.
            // The non-blocking behavior is implemented by the operating system:
            // under the hood this calls the Linux kernel's accept function.
            SocketChannel socketChannel = serverSocket.accept();
            if (socketChannel != null) {
                // A client has connected
                System.out.println("Connection established");
                // Put the SocketChannel into non-blocking mode
                socketChannel.configureBlocking(false);
                // Keep the client connection in the list
                channelList.add(socketChannel);
            }
            // Iterate over the connections and try to read data
            Iterator<SocketChannel> iterator = channelList.iterator();
            while (iterator.hasNext()) {
                SocketChannel sc = iterator.next();
                ByteBuffer byteBuffer = ByteBuffer.allocate(128);
                // In non-blocking mode read() does not block; otherwise it would
                int len = sc.read(byteBuffer);
                // If there is data, print it
                if (len > 0) {
                    System.out.println("Received message: " + new String(byteBuffer.array(), 0, len));
                } else if (len == -1) {
                    // The client disconnected: remove the socket from the list
                    iterator.remove();
                    System.out.println("Client disconnected");
                }
            }
        }
    }
}
  • Every Channel obtained through serverSocket.accept() is saved into channelList, and each pass of the loop traverses the whole list trying to read data. But some channels have only just connected and sent nothing, so many of the traversed channels are "invalid".
  • Summary: with many connections there are many wasted traversals. If there are 10,000 connections and only 1,000 of them have data to read, the other 9,000 are still connected, so every poll must still traverse all 10,000; nine out of ten reads are wasted, which is clearly not a satisfactory state.
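The cost of those "invalid" traversals comes from the fact that a non-blocking read on a connection with no pending data returns 0 immediately instead of blocking, so the loop just spins. A small sketch of that behavior (the class name and the loopback/ephemeral-port setup are illustrative assumptions):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingReadDemo {

    // Performs a non-blocking read on a freshly accepted connection
    // that has sent no data: the read returns 0 instead of blocking.
    public static int readWithNoData() throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        SocketChannel accepted = server.accept(); // blocking here: client already connected
        accepted.configureBlocking(false);
        int len = accepted.read(ByteBuffer.allocate(128)); // no data yet -> returns 0 immediately
        client.close();
        accepted.close();
        server.close();
        return len;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readWithNoData()); // prints 0
    }
}
```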

NIO with a multiplexer (Selector), code example:

package com.atzxm.nio;

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Set;

public class NioSelectorServer {

    public static void main(String[] args) throws IOException {
        // Create an NIO ServerSocketChannel
        ServerSocketChannel serverSocket = ServerSocketChannel.open();
        //ServerSocketChannel serverSocket2 = ServerSocketChannel.open();
        // Bind the port
        serverSocket.socket().bind(new InetSocketAddress(9000));
        //serverSocket2.socket().bind(new InetSocketAddress(9001));
        // Put the ServerSocketChannel into non-blocking mode
        serverSocket.configureBlocking(false);
        //serverSocket2.configureBlocking(false);
        // Open a Selector to handle Channels, i.e. create the epoll instance
        Selector selector = Selector.open();
        // Register the ServerSocketChannel on the selector; the selector is
        // interested in client accept events, and registration produces a SelectionKey
        SelectionKey register = serverSocket.register(selector, SelectionKey.OP_ACCEPT);
        //serverSocket2.register(selector, SelectionKey.OP_ACCEPT);
        System.out.println("Server started");

        while (true) {
            // Block until an event that needs handling occurs
            selector.select();

            // Fetch the SelectionKey instances for all events that fired on the selector
            Set<SelectionKey> selectionKeys = selector.selectedKeys();
            Iterator<SelectionKey> iterator = selectionKeys.iterator();

            // Walk the SelectionKeys and handle each event
            while (iterator.hasNext()) {
                SelectionKey key = iterator.next();
                // For an OP_ACCEPT event, accept the connection and register it
                if (key.isAcceptable()) {
                    ServerSocketChannel server = (ServerSocketChannel) key.channel();
                    // Produce a SocketChannel and register it with the selector
                    SocketChannel socketChannel = server.accept();
                    socketChannel.configureBlocking(false);
                    // Only the read event is registered here; to send data to the
                    // client you could also register the write event
                    socketChannel.register(selector, SelectionKey.OP_READ);
                    System.out.println("Client connected");
                } else if (key.isReadable()) {
                    // For an OP_READ event, read and print the data
                    SocketChannel socketChannel = (SocketChannel) key.channel();
                    ByteBuffer byteBuffer = ByteBuffer.allocate(128);
                    int len = socketChannel.read(byteBuffer);
                    // If there is data, print it
                    if (len > 0) {
                        System.out.println("Received message: " + new String(byteBuffer.array(), 0, len));
                    } else if (len == -1) {
                        // The client disconnected: close the socket
                        System.out.println("Client disconnected");
                        socketChannel.close();
                    }
                }
                // Remove the handled key from the selected set so the next
                // select() does not process it again
                iterator.remove();
            }
        }
    }
}




  • To address the shortcomings of plain NIO, a multiplexer (Selector) is introduced.
    With the multiplexer, each pass responds only to the socketChannels on which events actually occurred. Note: because only one channel is registered for accept here, each select can respond to at most one accept event.
  • How does the selector check whether a registered socketChannel has an event?
    Before JDK 1.4, the Linux select and poll functions were used: they traverse all registered channels to see whether any has an event.
    From 1.4 onward, the epoll model is used.
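The event-driven difference is observable from Java: select() reports only channels with pending events, so before any client connects nothing is ready, and after one connect exactly the accept event fires. A sketch (the class name and the loopback/ephemeral-port setup are illustrative assumptions):

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SelectorReadyDemo {

    // Returns {readyBeforeConnect, readyAfterConnect} counts from the selector.
    public static int[] readyCounts() throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // No client yet: a non-blocking selectNow() finds nothing ready
        int before = selector.selectNow();

        // One blocking client connect makes the server channel acceptable
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        int after = selector.select(2000); // wakes as soon as the accept event fires

        client.close();
        server.close();
        selector.close();
        return new int[]{before, after};
    }

    public static void main(String[] args) throws Exception {
        int[] counts = readyCounts();
        System.out.println(counts[0] + " " + counts[1]); // prints "0 1"
    }
}
```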
Source code analysis of a few key functions
  1. Selector.open() // create the multiplexer.
     Tracing the source leads to
     • int epfd = epoll_create(256)
       This calls Linux's epoll_create() function, which returns a file descriptor; the epoll instance can be found through epfd.
  2. socketChannel.register(selector, SelectionKey.OP_READ)
     Tracing the source leads to
     • pollWrapper.add(fd)
       fd is the file descriptor of the socketChannel being registered; it is put into a collection inside pollWrapper and, for now, is only associated with the selector.
  3. selector.select() // block waiting for events that need handling.
     Tracing the source leads to
     • pollWrapper.poll(timeout);
     • updateRegistrations();
       • epollCtl()
         This calls Linux's epoll_ctl():
       int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
       
       It performs operation op on the target file descriptor fd via the epoll instance referenced by epfd. The parameter epfd is the file descriptor of the epoll instance, and fd is the file descriptor of the socket.
       The parameter op takes the following values:
       EPOLL_CTL_ADD: register a new fd with epfd and associate it with event;
       EPOLL_CTL_MOD: modify the monitored events of an already registered fd;
       EPOLL_CTL_DEL: remove fd from epfd, ignoring the bound event (event may be NULL here).
       The parameter event is a structure:
    struct epoll_event {
        __uint32_t   events;      /* Epoll events */
        epoll_data_t data;        /* User data variable */
    };
    
    typedef union epoll_data {
        void        *ptr;
        int          fd;
        __uint32_t   u32;
        __uint64_t   u64;
    } epoll_data_t;

events can take many values; only the most common are listed here:
EPOLLIN: the corresponding file descriptor is readable;
EPOLLOUT: the corresponding file descriptor is writable;
EPOLLERR: an error occurred on the corresponding file descriptor.
epoll_ctl returns 0 on success and -1 on failure.

int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

Waits for events on the epoll instance referred to by epfd. When a descriptor on epoll's ready list (rdlist) has an event, the event is delivered and ends up in the selector's selected keys; if nothing is ready, the call blocks up to the timeout.
epfd is the file descriptor of the epoll instance, events receives the ready events for the caller, maxevents is the maximum number of events to return, and timeout is the timeout period.
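Selector.select(timeout) maps onto this epoll_wait timeout behavior: with nothing ready, it blocks for roughly the timeout and returns 0. A minimal sketch (the class name is an illustrative assumption):

```java
import java.nio.channels.Selector;

public class SelectTimeoutDemo {

    // With no channels registered there is nothing to become ready, so
    // select(timeout) blocks for about `timeoutMillis` ms and returns 0,
    // mirroring epoll_wait returning 0 on timeout.
    public static int selectWithTimeout(long timeoutMillis) throws Exception {
        Selector selector = Selector.open();
        long start = System.nanoTime();
        int ready = selector.select(timeoutMillis);
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        selector.close();
        System.out.println("ready=" + ready + ", waited ~" + elapsedMillis + " ms");
        return ready;
    }

    public static void main(String[] args) throws Exception {
        selectWithTimeout(200);
    }
}
```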

At the bottom, I/O multiplexing is implemented mainly by Linux kernel functions (select, poll, epoll). Windows does not support epoll; on Windows the underlying implementation is based on winsock2's select function (not open source).


Origin blog.csdn.net/null_zhouximin/article/details/112756039