Linux network programming - network IO and select, poll, epoll

Questions

1. What are select, poll, and epoll?

2. What are the differences between them?

1. Hosts communicate with each other over the network, and the communication can use TCP, UDP, broadcast, and so on.

TCP socket connections:

First, let's look at how sockets are used.

The socket API mainly consists of the following functions: socket, listen, connect, bind, accept, send, sendto, recv, recvfrom, close, shutdown.

Processes on a network communicate with each other through sockets. The main header files are, on Linux and Windows respectively:

#include <sys/socket.h> and #include <winsock2.h>

The usage and purpose of each function are introduced below.

1.socket

int socket(int domain, int type, int protocol) creates a socket whose protocol family is domain, whose type is type, and whose protocol number is protocol. On success the call returns a file descriptor identifying the socket; on failure it returns -1.

domain, the first parameter of socket(), sets the domain of network communication; socket() selects the protocol family according to this parameter, and AF_INET (IPv4) is the most commonly used.

type, the second parameter of socket(), sets the socket type. The main values are SOCK_STREAM (stream socket: a TCP connection providing a sequenced, reliable, two-way byte stream with support for out-of-band data) and SOCK_DGRAM (datagram socket: connectionless messages, used for UDP), among others.

protocol, the third parameter of socket(), selects a specific protocol within the chosen type. Usually a given type has only one protocol, so protocol can be set to 0; when a type offers several protocols, this parameter chooses which one.

errno: socket() does not always succeed, and there are many possible causes of failure, which can be inspected through errno (header #include <errno.h>).

Return value: a non-negative descriptor on success, -1 on failure.
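
For illustration (a minimal sketch, not part of the article's own listing), creating a TCP socket and reporting errno on failure looks like this:

/* minimal sketch: create an IPv4 TCP socket and report errno on failure */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* IPv4, stream (TCP), default protocol */
    if (fd == -1) {
        fprintf(stderr, "socket failed: %s (errno=%d)\n", strerror(errno), errno);
        return 1;
    }
    /* ... the descriptor can now be passed to bind()/connect() ... */
    close(fd);
    return 0;
}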

2.bind

int bind(int sockfd, const struct sockaddr *myaddr, socklen_t addrlen). The descriptor returned by socket() exists only in the namespace of its protocol family and has no concrete protocol address assigned to it (here, the combination of an IPv4/IPv6 address and a port number); bind() binds a fixed address to sockfd.

sockfd is the descriptor returned by the socket function;

myaddr specifies the IP address and port number to bind, both of which must be in network byte order, i.e. big-endian;

addrlen is the length of the address structure (sizeof(struct sockaddr_in) in the example below).

struct sockaddr exists to give all address structures a uniform representation, so that different address types can be passed to functions such as bind(), connect(), recvfrom(), and sendto(). In practice, however, this structure is rarely manipulated directly; the equivalent structure sockaddr_in is used instead. For example:

struct sockaddr_in servaddr;

memset(&servaddr, 0, sizeof(servaddr));
servaddr.sin_family = AF_INET;
servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
servaddr.sin_port = htons(9999);

bind(listenfd, (struct sockaddr *)&servaddr, sizeof(servaddr));

Usually the server binds to a well-known address when it starts so that clients can connect to it. The client may specify an IP address or port, or leave them unset, in which case the system allocates them automatically. This is why a server normally calls bind() before listen(), while a client does not call bind() at all: its address is assigned by the system during connect(). File descriptors themselves are allocated using a bitmap.

3.listen

int listen(int sockfd, int backlog): returns 0 on success, -1 on failure.

The listen function is called only by a TCP server and does two things:

1. When socket() creates a socket, it is assumed to be an active socket, that is, a client socket that will call connect() to initiate a connection. listen() converts this unconnected socket into a passive socket, telling the kernel that it should accept connection requests directed to it. Calling listen() moves the socket from the CLOSED state to the LISTEN state.

2. The second parameter specifies the maximum number of connections the kernel will queue for this socket.

Generally speaking, this function should be called after socket() and bind(), and before accept().

For a given listening socket, the kernel maintains two queues:

1. An incomplete connection queue, with one entry for each SYN that a client has sent and that has arrived at the server, while the server waits for the corresponding TCP three-way handshake to complete; these sockets are in the SYN_RCVD state;

2. A completed connection queue, with one entry for each client that has finished the TCP three-way handshake; these sockets are in the ESTABLISHED state.

The sum of the two queues cannot exceed the backlog.

4.connect

int connect(int sockfd, const struct sockaddr *addr, socklen_t addrlen) returns 0 on success, -1 on failure.

For TCP, this function actually initiates the three-way handshake with the server and returns only after the connection succeeds or fails. The parameter sockfd is the local descriptor, addr is the server address, and addrlen is the length of the socket address.
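
To make the point that a client does not need bind() concrete, here is a minimal TCP client sketch; the address 127.0.0.1 and port 9999 are assumptions chosen to match the server example further below, and the kernel picks the local port during connect():

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd == -1) return 1;

    struct sockaddr_in servaddr;
    memset(&servaddr, 0, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_port = htons(9999);                      /* assumed server port */
    inet_pton(AF_INET, "127.0.0.1", &servaddr.sin_addr);  /* assumed server address */

    if (connect(fd, (struct sockaddr *)&servaddr, sizeof(servaddr)) == -1) {
        perror("connect");            /* three-way handshake failed */
        close(fd);
        return 1;
    }

    send(fd, "hello", 5, 0);          /* send something for the server to echo back */
    char buf[128];
    ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
    if (n > 0) {
        buf[n] = '\0';
        printf("echo from server: %s\n", buf);
    }
    close(fd);
    return 0;
}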

connect() on a UDP socket behaves differently from the TCP case: there is no three-way handshake. The kernel simply records the peer's IP address and port number, which are contained in the socket address structure passed to connect(), and returns immediately to the calling process.
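
A minimal sketch of connect() on a UDP socket (again with an assumed peer address) shows the difference: the call simply records the peer and returns immediately, after which send()/recv() can be used instead of sendto()/recvfrom():

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);     /* datagram socket: UDP */

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(9999);                 /* assumed peer port */
    inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr);

    connect(fd, (struct sockaddr *)&peer, sizeof(peer));  /* no handshake, returns at once */

    send(fd, "ping", 4, 0);     /* equivalent to sendto() with the recorded peer address */
    close(fd);
    return 0;
}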

5.accept

int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen): a non-negative descriptor indicates success, -1 indicates failure. It takes the first entry off the completed connection queue (blocking by default if that queue is empty) and returns a new connected descriptor for it; addr and addrlen receive the client's address.

6.send

ssize_t send(int sockfd, const void *buf, size_t len, int flags): on success returns the number of bytes copied into the send buffer (which may be less than len); -1 means failure;

where:

sockfd: sender socket descriptor (non-listening descriptor)

buf: The buffer of the data to be sent by the application (#define MAXLNE 4096 char buf[MAXLNE];)

len: the actual length of the data to be sent

flags: generally set to 0

Each TCP socket has a send buffer whose size can be changed with the SO_SNDBUF option. Calling send() really just asks the kernel to copy user data into the TCP socket's send buffer: if len is larger than the whole send buffer, -1 is returned; otherwise the kernel checks whether the remaining space can hold len bytes. If there is not enough room, a non-blocking send copies what fits and returns that length (a blocking send instead waits until all the data has been copied into the buffer, so the return value of a blocking send always equals len). If the buffer is completely full, the caller waits until space is freed and the data is then copied in; if an error occurs during copying, -1 is returned and the cause can be read from errno.

If the network is disconnected while send() is waiting for the protocol to transmit data, it returns -1. Note that a successful return from send() does not mean the peer has received the data; if a network error occurs during the subsequent transmission, the next send() will return -1. Data sent over TCP must be acknowledged by the peer before it can be removed from the send buffer; otherwise it stays cached there until transmission succeeds (this is part of TCP's reliable data delivery).
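
Because a non-blocking send() may copy only part of the data, callers usually wrap it in a loop. The helper below is a sketch of that idea (send_all is a hypothetical name, not something from the article or the C library):

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* keep calling send() until all len bytes have been handed to the kernel */
ssize_t send_all(int sockfd, const char *buf, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sockfd, buf + sent, len - sent, 0);
        if (n > 0) {
            sent += n;                /* part of the data was copied into the send buffer */
        } else if (n == -1 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
            continue;                 /* buffer full: retry (or wait for writability) */
        } else {
            return -1;                /* real error, e.g. broken connection */
        }
    }
    return (ssize_t)sent;
}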

7.recv

ssize_t recv(int sockfd, void *buf, size_t len, int flags)

where:

sockfd: Receiver socket descriptor;

buf: Specifies the buffer address for storing received data;

len: the specified buffer length for receiving data;

flags: generally specified as 0

recv() copies data out of the receive buffer. On success it returns the number of bytes copied; on failure it returns -1. In blocking mode, recv/recvfrom blocks until at least one byte (TCP) or at least one complete UDP datagram is in the buffer, sleeping while there is no data. In non-blocking mode it returns immediately: if data is available it returns the number of bytes copied, otherwise it returns -1 and sets errno to EWOULDBLOCK.
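
A small sketch of this behaviour on a non-blocking socket (try_recv is a hypothetical helper name used only for illustration):

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* returns 1 if data was read, 0 if no data is available yet, -1 on close or error */
int try_recv(int sockfd, char *buf, size_t len) {
    ssize_t n = recv(sockfd, buf, len, 0);
    if (n > 0)
        return 1;                               /* n bytes copied out of the receive buffer */
    if (n == 0)
        return -1;                              /* orderly shutdown by the peer */
    if (errno == EWOULDBLOCK || errno == EAGAIN)
        return 0;                               /* would block: nothing to read right now */
    return -1;                                  /* real error */
}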

8.close (header file unistd.h)

By default, close() marks the socket as "closed" and returns to the calling process immediately. The descriptor can no longer be used by the process, i.e. it can no longer be passed to read/write (send/recv). TCP will, however, still try to transmit any data already queued in the send buffer and then go through the normal TCP connection termination sequence (the four-way handshake of FIN-led segments).

9.shutdown

shutdown() can selectively close the read side, the write side, or both sides of a connection, and it performs the corresponding disconnection action immediately (for example sending a FIN segment to terminate the connection). Once this is done, no process sharing this socket descriptor can send or receive on it any more, no matter how many of them there are.
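
For example, a typical half-close sketch shuts down only the write direction, keeps reading until the peer finishes, and then releases the descriptor (the helper name is purely illustrative):

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

void finish_sending(int sockfd) {
    if (shutdown(sockfd, SHUT_WR) == -1)   /* no more data will be sent; a FIN goes out now */
        perror("shutdown");

    char buf[4096];
    while (recv(sockfd, buf, sizeof(buf), 0) > 0) {
        /* the read direction is still open: drain the peer's remaining data */
    }
    close(sockfd);                         /* finally release the descriptor */
}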

10.select

Advantages and disadvantages of select: compared with the thread-per-request model (whose upper limit is around C10K), issuing I/O requests through select is not very different from the synchronous blocking model, and it even adds the extra work of monitoring the sockets and calling select, so per request it can be less efficient. The biggest advantage of select, however, is that a single thread can handle multiple socket I/O requests at the same time: the user registers several sockets and then repeatedly calls select to read from whichever sockets have become active. In the synchronous blocking model this can only be achieved with multiple threads.

The select API and how to use it

#include <sys/select.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

int select(int maxfdp, fd_set *readset, fd_set *writeset, fd_set *exceptset, struct timeval *timeout);

Parameter Description:

maxfdp: one greater than the largest file descriptor in any of the three sets (descriptors are numbered from 0);

readset, writeset, exceptset: point to the sets of descriptors to be tested for readability, writability, and exceptional conditions respectively.

timeout: sets how long select() waits, i.e. tells the kernel how long to wait before giving up. timeout == NULL means wait indefinitely.

Return value: 0 on timeout; -1 on failure; on success a positive integer giving the number of ready descriptors.
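
A short sketch of using the timeout (assuming listenfd is an existing listening socket):

#include <stdio.h>
#include <sys/select.h>

void wait_for_connection(int listenfd) {
    fd_set rset;
    FD_ZERO(&rset);
    FD_SET(listenfd, &rset);

    struct timeval tv;
    tv.tv_sec = 5;                 /* give up after 5 seconds */
    tv.tv_usec = 0;

    int nready = select(listenfd + 1, &rset, NULL, NULL, &tv);
    if (nready == 0)
        printf("timed out, no connection request\n");
    else if (nready < 0)
        perror("select");
    else if (FD_ISSET(listenfd, &rset))
        printf("listenfd is readable: a connection can be accepted\n");
}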

The following introduces several common macros related to the select function:

#include <sys/select.h>

void FD_ZERO(fd_set *fdset);          /* clear all bits of an fd_set variable */
void FD_CLR(int fd, fd_set *fdset);   /* clear the bit for fd */
void FD_SET(int fd, fd_set *fdset);   /* set the bit for fd */
int  FD_ISSET(int fd, fd_set *fdset); /* test whether the bit for fd is set */

After declaring a file descriptor set, all its bits must first be cleared with FD_ZERO. Then the bits for the descriptors we are interested in are set, as follows:

fd_set rset;
int fd;                        /* some previously obtained descriptor */

FD_ZERO(&rset);
FD_SET(fd, &rset);
FD_SET(STDIN_FILENO, &rset);   /* standard input is descriptor 0 */

1.socket blocking mode

The so-called blocking mode means, as the name implies, that when a process or thread executes one of these functions it must wait for the event to occur; if the event does not occur, the process or thread is blocked and the function cannot return immediately.

2.socket non-blocking mode

The so-called non-blocking mode means that the process or thread does not wait for the event when executing the function: the call always returns at once, and the return value tells the caller what happened. If the event has occurred, the behaviour is the same as in the blocking case; if it has not, a code is returned indicating so, and the process or thread keeps running, which is more efficient.
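
Non-blocking mode is usually enabled with fcntl(); a minimal sketch:

#include <fcntl.h>

/* switch an existing descriptor to non-blocking mode; returns -1 on failure */
int set_nonblock(int fd) {
    int flags = fcntl(fd, F_GETFL, 0);              /* read the current file status flags */
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);  /* add the non-blocking flag */
}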

In-depth understanding of the select model:

The key to understanding the select model is to understand fd_set. For the convenience of explanation, the length of fd_set is taken as 1 byte, and each bit in fd_set can correspond to a file descriptor fd. Then a 1-byte long fd_set can correspond to a maximum of 8 fds.

(1) Execute fd_set set; FD_ZERO(&set); then set is 0000,0000 in bits.

(2) If fd=5, executing FD_SET(fd, &set) changes set to 0001,0000 (counting bit positions from 1, the 5th bit is set)

(3) If fd=2 and fd=1 are then added, set becomes 0001,0011

(4) Execute select(6,&set,0,0,0) to block and wait

(5) If readable events occur on both fd=1 and fd=2, select returns, and set becomes 0000,0011 at this time. Note: fd=5 with no event is cleared.

Based on the above discussion, the characteristics of the select model can be easily obtained:

(1) The number of file descriptors that can be monitored depends on the value of sizeof(fd_set). On my server sizeof(fd_set) = 512; each bit represents one file descriptor, so the largest number of file descriptors supported there is 512*8 = 4096. This is said to be adjustable, but the upper limit of the adjustment is subject to the value compiled into the kernel.

(2) When fds are added to the select monitoring set, a separate array is usually kept to record which fds were placed in the set. There are two reasons: first, after select returns, the array serves as the source data for FD_ISSET tests against the fd_set; second, select clears from the set any fd that was added but had no event, so before each call the fds must be reloaded from the array one by one (after an FD_ZERO), and the maximum fd, maxfd, is computed while scanning the array to be used as select's first parameter.

(3) In short, the select model requires re-adding the fds and recomputing maxfd in a loop before every select call, and using FD_ISSET after select returns to determine which fds have events.

Simple socket communication:

#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <pthread.h>    // needed for the thread-per-connection branch
#include <poll.h>       // needed for the poll branch
#include <sys/epoll.h>  // needed for the epoll branch

#define MAXLNE  4096
#define POLL_SIZE 1024  // size of the pollfd / epoll_event arrays below (assumed value)

// each pthread gets ~8 MB of stack by default, so 4 GB of memory allows only about 512 threads
// C10K
void *client_routine(void *arg) { // thread-per-connection handler: echoes whatever the client sends
    int connfd = *(int *)arg;
    char buff[MAXLNE];
    while (1) {
        int n = recv(connfd, buff, MAXLNE - 1, 0); // leave room for the terminating '\0'
        if (n > 0) {
            buff[n] = '\0';
            printf("recv msg from client: %s\n", buff);

            send(connfd, buff, n, 0);
        } else if (n == 0) { // peer closed the connection
            close(connfd);
            break;
        }
    }
    return NULL;
}

int main(int argc, char **argv) 
{
    int listenfd, connfd, n;
    struct sockaddr_in servaddr;
    char buff[MAXLNE];
 
    if ((listenfd = socket(AF_INET, SOCK_STREAM, 0)) == -1) {
        printf("create socket error: %s(errno: %d)\n", strerror(errno), errno);
        return 0;
    }
 
    memset(&servaddr, 0, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port = htons(9999);
 
    if (bind(listenfd, (struct sockaddr *)&servaddr, sizeof(servaddr)) == -1) {
        printf("bind socket error: %s(errno: %d)\n", strerror(errno), errno);
        return 0;
    }
 
    if (listen(listenfd, 10) == -1) {
        printf("listen socket error: %s(errno: %d)\n", strerror(errno), errno);
        return 0;
    }
#if 0   // drawback of this approach: it can serve only a single client
    struct sockaddr_in client;
    socklen_t len = sizeof(client);
    if ((connfd = accept(listenfd, (struct sockaddr *)&client, &len)) == -1) {
        printf("accept socket error: %s(errno: %d)\n", strerror(errno), errno);
        return 0; // connfd is obtained only once, so only one client's TCP connection is ever accepted
    }

    printf("========waiting for client's request========\n");
    while (1) {

        n = recv(connfd, buff, MAXLNE, 0);
        if (n > 0) {
            buff[n] = '\0';
            printf("recv msg from client: %s\n", buff);

        send(connfd, buff, n, 0);
        } else if (n == 0) {
            close(connfd);
        }
        
        //close(connfd);
    }
#elif 0  // multiple clients can connect, but each client can send only one request
    while (1) {
        struct sockaddr_in client;
        socklen_t len = sizeof(client);
        if ((connfd = accept(listenfd, (struct sockaddr *)&client, &len)) == -1) { 
        // blocks here if the completed-connection queue is empty
            printf("accept socket error: %s(errno: %d)\n", strerror(errno), errno);
            return 0;
        }
        printf("========waiting for client's request========\n");
        n = recv(connfd, buff, MAXLNE, 0);
        if (n > 0) {
            buff[n] = '\0';
            printf("recv msg from client: %s\n", buff);

        send(connfd, buff, n, 0);
        } else if (n == 0) {
            close(connfd);
        }
        
        //close(connfd);
    }
#elif 0   // thread-per-connection: create a separate thread for each client. The logic is
          // simple, but it does not scale to large numbers of clients (roughly C1K at best).
    while (1) {
    struct sockaddr_in client;
    socklen_t len = sizeof(client);
    if ((connfd = accept(listenfd, (struct sockaddr *)&client, &len)) == -1) {
	    printf("accept socket error: %s(errno: %d)\n", strerror(errno), errno);
	    return 0;
    }

    pthread_t threadid;
    pthread_create(&threadid, NULL, client_routine, (void *)&connfd); // note: passing &connfd is racy if a new connection overwrites it before the thread reads it
    }
#else
    // select: the fifth parameter is the timeout; passing NULL makes select block
    // until at least one descriptor becomes ready
    fd_set rfds, rset; // fd_set is essentially a bitmap

    FD_ZERO(&rfds);          // clear all bits
    FD_SET(listenfd, &rfds); // watch the listening socket

    int max_fd = listenfd;

    while (1) {

        rset = rfds; // select modifies the set it is given, so work on a copy every iteration

        int nready = select(max_fd+1, &rset, NULL, NULL, NULL); // returns the number of ready descriptors
        // arguments: largest fd in the set + 1, read set, write set, exception set, timeout
        // (NULL = wait forever); on return, fds without events are cleared from the set

        if (FD_ISSET(listenfd, &rset)) { // listenfd is readable: a client is requesting a connection

            struct sockaddr_in client;
            socklen_t len = sizeof(client);
            if ((connfd = accept(listenfd, (struct sockaddr *)&client, &len)) == -1) {
                printf("accept socket error: %s(errno: %d)\n", strerror(errno), errno);
                return 0;
            }

            FD_SET(connfd, &rfds);

            if (connfd > max_fd) max_fd = connfd;

            if (--nready == 0) continue; // no other descriptors are ready, skip the scan below

        }

        int i = 0;
        for (i = listenfd+1;i <= max_fd;i ++) {

            if (FD_ISSET(i, &rset)) { // this connection has data (or a close) to handle

                n = recv(i, buff, MAXLNE, 0);
                if (n > 0) {
                    buff[n] = '\0';
                    printf("recv msg from client: %s\n", buff);

                    send(i, buff, n, 0);
                } else if (n == 0) {

                    FD_CLR(i, &rfds);
                    //printf("disconnect\n");
                    close(i);
                    
                }
                if (--nready == 0) break;
            }

        }
        

    }
    
#endif
 
    close(listenfd);
    return 0;
}

select summary:

select essentially works by setting and checking a data structure that stores fd flags. The drawbacks of this are:

1. The number of fds a single process can monitor is limited, which also limits how many connections it can serve. This number is closely related to system memory; the system-wide limit can be checked with cat /proc/sys/fs/file-max and can reach the order of C1000K fds, while the fd_set size itself defaults to 1024 on 32-bit machines and 2048 on 64-bit machines.

2. Sockets are scanned linearly, i.e. by polling, which is inefficient: with many sockets, every select() call has to traverse FD_SETSIZE sockets to do its scheduling, regardless of which sockets are actually active, and this wastes a lot of CPU time. If sockets could instead register a callback that performs the required work when they become active, the polling would be avoided; that is exactly what epoll and kqueue do.

3. A data structure holding a large number of fds must be maintained, and copying it between user space and kernel space on every call is expensive.

11.poll

#include <poll.h>
int poll(struct pollfd *fds, nfds_t nfds, int timeout)

The pollfd structure is defined as follows:

struct pollfd {
    int fd;        /* file descriptor */
    short events;  /* events to wait for (set by the caller) */
    short revents; /* events that actually occurred (set by the kernel) */
};

1. The first parameter points to an array of struct pollfd; each pollfd structure names one file descriptor to monitor, so poll() can watch several descriptors at once. The events field of each structure is the event mask for that descriptor and is set by the caller; the revents field is the result mask and is set by the kernel when the call returns. Any event requested in events may be reported in revents. The following table lists the constants used in the events flag and tested in the revents flag:

Constant     Meaning                                   Input to events   Result in revents
POLLIN       Normal or priority-band data can be read  yes               yes
POLLRDNORM   Normal data can be read                   yes               yes
POLLRDBAND   Priority-band data can be read            yes               yes
POLLPRI      High-priority data can be read            yes               yes
POLLOUT      Normal data can be written                yes               yes
POLLWRNORM   Normal data can be written                yes               yes
POLLWRBAND   Priority-band data can be written         yes               yes
POLLERR      An error has occurred                     no                yes
POLLHUP      A hangup has occurred                     no                yes
POLLNVAL     The descriptor is not an open file        no                yes

2. The second parameter: nfds specifies the number of elements in the fds array that poll() should examine;

3. The third parameter: timeout specifies the number of milliseconds to wait; after that, poll() returns whether or not any I/O is ready. A negative timeout means an infinite timeout, so poll() blocks until one of the specified events occurs; a timeout of 0 makes the call return immediately, listing the descriptors that are already ready for I/O without waiting for anything else. In that case poll(), true to its name, polls once and returns at once.

#elif 0   // poll branch of the same main(); uses the POLL_SIZE array size defined at the top of the file


    struct pollfd fds[POLL_SIZE] = {0};
    fds[0].fd = listenfd;
    fds[0].events = POLLIN;

    int max_fd = listenfd;
    int i = 0;
    for (i = 1;i < POLL_SIZE;i ++) {
        fds[i].fd = -1;
    }

    while (1) {

        int nready = poll(fds, max_fd+1, -1); // wait indefinitely; entries with fd == -1 are ignored

    
        if (fds[0].revents & POLLIN) {

            struct sockaddr_in client;
            socklen_t len = sizeof(client);
            if ((connfd = accept(listenfd, (struct sockaddr *)&client, &len)) == -1) {
                printf("accept socket error: %s(errno: %d)\n", strerror(errno), errno);
                return 0;
            }

            printf("accept \n");
            fds[connfd].fd = connfd;
            fds[connfd].events = POLLIN;

            if (connfd > max_fd) max_fd = connfd;

            if (--nready == 0) continue;
        }

        //int i = 0;
        for (i = listenfd+1;i <= max_fd;i ++)  {

            if (fds[i].revents & POLLIN) {
                
                n = recv(i, buff, MAXLNE, 0);
                if (n > 0) {
                    buff[n] = '\0';
                    printf("recv msg from client: %s\n", buff);

                    send(i, buff, n, 0);
                } else if (n == 0) { // peer closed: mark the slot unused and close the fd

                    fds[i].fd = -1;

                    close(i);
                    
                }
                if (--nready == 0) break;

            }

        }

    }

12.epoll

epoll is an enhanced version of the multiplexed I/O interfaces select/poll under Linux. It can significantly reduce the system's CPU usage when only a few of a large number of concurrent connections are active. One reason is that it does not reuse the set of file descriptors to pass results back, which in select/poll forces developers to re-prepare the set of descriptors to be monitored before every wait; another is that, when retrieving events, it does not have to traverse the whole set of monitored descriptors, only those that kernel I/O events have asynchronously woken up and placed on the ready queue. Besides the level-triggered mode of I/O events that select/poll provide, epoll also offers edge triggering, which makes it possible for a user-space program to cache the I/O state, reduce the number of epoll_wait/epoll_pwait calls, and so improve application efficiency.

Advantages:

  • Support a process to open a large number of socket descriptors
  • IO efficiency does not decrease linearly as the number of FDs increases
  • Kernel fine-tuning

epoll working mode

epoll has two working modes: LT and ET

  • LT (level-triggered) is the default working mode and supports both blocking and non-blocking sockets. In this mode the kernel tells you when a file descriptor is ready, and you can then perform I/O on it; if you do nothing, the kernel will keep notifying you, so programming errors are less likely. Traditional select/poll follow this model.
  • ET (edge-triggered) is the high-speed working mode and supports only non-blocking sockets. In this mode the kernel tells you, via epoll, when a descriptor changes from not ready to ready. It then assumes you know the descriptor is ready and will not send another ready notification for it until you do something that makes it not ready again (for example, a send or recv returns EWOULDBLOCK, or transfers less data than requested). Note that if you never perform I/O on the fd (so it never becomes not ready again), the kernel will not send further notifications (you are told only once). For TCP, whether ET mode actually provides a speed-up still needs to be confirmed by more benchmarks; a sketch of the ET read discipline follows this list.
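
As a rough sketch of the ET read discipline just described (assuming connfd is non-blocking and already registered with EPOLLIN | EPOLLET), the socket is drained until recv() reports EAGAIN/EWOULDBLOCK; otherwise data left in the buffer might never trigger another notification:

#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

void drain_et(int connfd) {
    char buf[4096];
    for (;;) {
        ssize_t n = recv(connfd, buf, sizeof(buf), 0);
        if (n > 0) {
            /* process n bytes ... */
        } else if (n == 0) {
            close(connfd);             /* peer closed the connection */
            break;
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            break;                     /* receive buffer drained: wait for the next edge */
        } else {
            close(connfd);             /* real error */
            break;
        }
    }
}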

epoll interface

The interface of epoll has three functions:

1. int epoll_create(int size);

Creates an epoll handle. size tells the kernel roughly how many descriptors will be monitored; unlike the first parameter of select(), it is not the maximum fd value plus 1 (since Linux 2.6.8 the value is ignored, but it must be greater than 0). Note that the epoll handle itself occupies an fd (you can see it under /proc/<pid>/fd/ on Linux), so after you are done with epoll you must call close() on it, or fds may eventually be exhausted.

2. int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);

The event registration function of epoll. Unlike select(), which is told at wait time which event types to watch for, epoll registers the event types to monitor ahead of time through this function. The first parameter is the value returned by epoll_create(); the second parameter is the operation, expressed by one of three macros:

EPOLL_CTL_ADD: Register a new fd to epfd;

EPOLL_CTL_MOD: modify the listening event of the registered fd;

EPOLL_CTL_DEL: delete a fd from epfd;

The third parameter is the fd that needs to be monitored, and the fourth parameter is to tell the kernel what events need to be monitored. The structure of struct epoll_event is as follows:

typedef union epoll_data {
      void *ptr;
      int fd;
      __uint32_t u32;
      __uint64_t u64;
  } epoll_data_t;
  
  struct epoll_event {
      __uint32_t events; /* Epoll events */
      epoll_data_t data; /* User data variable */
  };

events can be a collection of the following macros:

EPOLLIN: Indicates that the corresponding file descriptor can be read (including the normal closing of the peer SOCKET);

EPOLLOUT: Indicates that the corresponding file descriptor can be written;

EPOLLPRI: Indicates that the corresponding file descriptor has urgent data readable (here it should indicate the arrival of out-of-band data);

EPOLLERR: Indicates that the corresponding file descriptor has an error;

EPOLLHUP: Indicates that the corresponding file descriptor is hung up;

EPOLLET: Set EPOLL to Edge Triggered (Edge Triggered) mode, which is relative to Level Triggered (Level Triggered);

EPOLLONESHOT: report events for this descriptor only once. After an event has been delivered, the socket must be re-armed (added to the epoll set again, for example via EPOLL_CTL_MOD) if it should continue to be monitored.

3. int epoll_wait(int epfd, struct epoll_event * events, int maxevents, int timeout);
Waits for events, similar to a select() call. The parameter events receives the set of events from the kernel; maxevents tells the kernel how many entries the events array can hold and must be greater than zero. The parameter timeout is the timeout in milliseconds: 0 returns immediately, -1 blocks until an event arrives. The function returns the number of events that need to be handled; a return value of 0 means the call timed out.

The framework of epoll

The header file is #include <sys/epoll.h>. First, create an epoll handle with epoll_create(int maxfds). This returns a new epoll handle, and all subsequent operations are performed through it. When you are finished, remember to close() the handle you created.

Then, in your network main loop, call epoll_wait(int epfd, epoll_event *events, int maxevents, int timeout) on every iteration to query all monitored descriptors and find out which can be read and which can be written. The basic call is: nfds = epoll_wait(kdpfd, events, maxevents, -1);
where kdpfd is the handle created by epoll_create, and events is a pointer to an array of epoll_event into which epoll_wait stores all the triggered read/write events when it succeeds. maxevents is the capacity of that array, i.e. how many events can be returned at once. The final timeout is the epoll_wait timeout: 0 means return immediately, -1 means wait until an event arrives, and any positive value means wait at most that long and return even if no event has occurred. If the network main loop runs in its own thread, waiting with -1 works well and keeps things efficient; if it shares a thread with the main logic, a timeout of 0 keeps the main loop responsive.

Then, after epoll_wait returns, loop over the returned events and handle each one.

#else

    //poll/select --> 
    // epoll_create 
    // epoll_ctl(ADD, DEL, MOD)
    // epoll_wait

    int epfd = epoll_create(1); // the size argument must be > 0; modern kernels otherwise ignore it

    struct epoll_event events[POLL_SIZE] = {0};
    struct epoll_event ev;

    ev.events = EPOLLIN;
    ev.data.fd = listenfd;

    epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);

    while (1) {

        int nready = epoll_wait(epfd, events, POLL_SIZE, 5); // wait up to 5 ms for events
        if (nready == -1) {
            continue;
        }

        int i = 0;
        for (i = 0;i < nready;i ++) {

            int clientfd =  events[i].data.fd;
            if (clientfd == listenfd) {

                struct sockaddr_in client;
                socklen_t len = sizeof(client);
                if ((connfd = accept(listenfd, (struct sockaddr *)&client, &len)) == -1) {
                    printf("accept socket error: %s(errno: %d)\n", strerror(errno), errno);
                    return 0;
                }

                printf("accept\n");
                ev.events = EPOLLIN;
                ev.data.fd = connfd;
                epoll_ctl(epfd, EPOLL_CTL_ADD, connfd, &ev);

            } else if (events[i].events & EPOLLIN) {

                printf("recv\n");
                n = recv(clientfd, buff, MAXLNE, 0);
                if (n > 0) {
                    buff[n] = '\0';
                    printf("recv msg from client: %s\n", buff);

                    send(clientfd, buff, n, 0);
                } else if (n == 0) { // peer closed: remove the fd from epoll and close it


                    ev.events = EPOLLIN;
                    ev.data.fd = clientfd;

                    // the event argument is ignored by EPOLL_CTL_DEL (older kernels only require it to be non-NULL)
                    epoll_ctl(epfd, EPOLL_CTL_DEL, clientfd, &ev);

                    close(clientfd);
                    
                }

            }

        }

    }

