Encapsulating epoll into a reactor: the underlying framework of almost every server you see

Contents

Foreword

What is a reactor, and how should we understand it?

Analysis of the components and process a reactor needs

components

process

How to encapsulate epoll's IO into an event-driven reactor

Reactor implementation, block by block

Registering an event and its handler

The multiplexer monitors multiple IO events

The event dispatcher distributes events to the corresponding handlers

Analysis of the specific event handlers

accept_cb: handler for new-connection events

recv_cb: handler for read events

send_cb: handler for write events

Complete reactor code and test results

Chapter summary


 

Foreword

  • Dear friends, starting today Xiaojie is writing up what he learns about web server development, taking notes as he goes. This series begins with wrapping epoll into a reactor, going from 0 to 1, just as Xiaojie himself is going from 0 to 1. When studying advanced network IO earlier, Xiaojie learned the system calls that support IO multiplexing, such as select, poll, and epoll, but only at a basic level: after some exercises he could wrap their interfaces into a very simple server, but without drawing on the essence of real source code.
  • Almost everything he wrote came from his own understanding, and Xiaojie found the encapsulation weak; what he produced felt scattered and without a framework. Since he wants to head in the direction of server development, he looked online for a place to study systematically. From now on Xiaojie will write up everything he learns as blog essays and discuss techniques with fellow bloggers. If you have any questions after reading Xiaojie's posts, please leave your valuable opinions in the comment area; Xiaojie will be very grateful. If you find the series helpful, please follow Xiaojie, and let's make progress together.

What is a reactor, and how should we understand it?

  • Reactor is a design pattern and an important server-side model: an event-driven reactor pattern, and an efficient event-handling model.
  • Reactor, as in "reacting to events": what runs when an event arrives depends on the event type, so we must register the different event handlers in advance. When events arrive, epoll_wait collects all the events that are ready at that moment, and each event is distributed by type to the event-handling mechanism (the event handler), that is, to the interface function we registered beforehand.
  • Think about the design idea behind the reactor model: it must be event-driven. When an event occurs, the corresponding function should be called automatically, so we register the handler interfaces with the reactor in advance. The functions are invoked by the reactor rather than called directly from main, which means we need callback functions. -------- Essence: function pointers
  • The IO in a reactor uses multiplexed IO, such as select, poll, or epoll, in order to raise the capacity and efficiency of IO event handling and to support higher concurrency.
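Since the essence is a function pointer, the idea can be shown with a minimal, self-contained sketch. The names `event_handler`, `on_readable`, and `dispatch` here are illustrative, not part of the reactor code later in this post:

```c
#include <assert.h>

/* An event handler is just a function pointer; the reactor stores it
 * at registration time and invokes it when the event fires. */
typedef int (*event_handler)(int fd, int events, void *arg);

/* illustrative handler: bump a counter to show it really ran */
static int on_readable(int fd, int events, void *arg) {
    (void)fd; (void)events;
    int *counter = (int *)arg;
    return ++(*counter);
}

/* the reactor side: call whatever was registered, never a hard-coded name */
static int dispatch(event_handler h, int fd, int events, void *arg) {
    return h(fd, events, arg);
}
```

The reactor never calls `on_readable` by name; it only invokes whatever pointer was registered, which is exactly what lets one dispatch loop serve many different event types.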

Analysis of the components and process a reactor needs

The Reactor pattern is a common pattern for handling concurrent I/O, used with synchronous I/O. The central idea: register all the I/O events to be handled on an I/O multiplexer, and block the main thread/process on that multiplexer. As soon as an I/O event arrives or becomes ready (a file descriptor or socket becomes readable or writable), the multiplexer returns, and the I/O events registered in advance are distributed to their corresponding handlers.

components

  • Multiplexer: provided by the operating system; on Linux it is usually a system call such as select, poll, or epoll.


  • Event dispatcher: takes the ready events returned by the multiplexer and distributes each one to its corresponding event handler.


  • Event handler: handles the corresponding IO event.


process

  1. Register events and their corresponding event handlers
  2. The multiplexer waits for an event to arrive
  3. When an event arrives, the event dispatcher is triggered and distributes the event to the corresponding handler
  4. The event handler processes the event, then registers a new event. (For example, after a read event is handled, the fd needs to be set to a write event and re-registered: after reading, we process the data according to the business requirements and then send the result back to the client, so the interest naturally changes to a write event and must be registered again.)
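The four steps above can be exercised end to end with epoll and a pipe standing in for a client socket. This is a hedged sketch: `run_once`, `struct item`, and `read_cb` are illustrative names, and the pipe merely substitutes for a readable fd:

```c
#include <assert.h>
#include <sys/epoll.h>
#include <unistd.h>

typedef int (*handler_t)(int fd, int events, void *arg);

struct item { int fd; handler_t cb; };

/* step 4: the handler consumes the data that made the fd readable */
static int read_cb(int fd, int events, void *arg) {
    (void)events; (void)arg;
    char buf[8];
    return (int)read(fd, buf, sizeof(buf));
}

/* steps 1-3: register, wait, dispatch; returns the bytes the handler read */
int run_once(void) {
    int p[2];
    if (pipe(p) < 0) return -1;
    int epfd = epoll_create(1);

    struct item it = { p[0], read_cb };
    struct epoll_event ev = {0};
    ev.events = EPOLLIN;
    ev.data.ptr = &it;                         /* step 1: register the handler */
    epoll_ctl(epfd, EPOLL_CTL_ADD, p[0], &ev);

    write(p[1], "hi", 2);                      /* make the fd readable */

    struct epoll_event ready[1];
    int n = epoll_wait(epfd, ready, 1, 1000);  /* step 2: multiplexer waits */
    int r = -1;
    if (n == 1) {
        struct item *si = ready[0].data.ptr;   /* step 3: dispatch by data.ptr */
        r = si->cb(si->fd, ready[0].events, si);
    }
    close(p[0]); close(p[1]); close(epfd);
    return r;
}
```

The key move is the same one the full reactor uses: `ev.data.ptr` carries the handler to `epoll_wait`, so the loop dispatches without knowing any handler's name.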

How to encapsulate epoll's IO into an event-driven reactor

  • The process and the mode of operation are now clear; the key lies in the encapsulation. How should the fd of an IO event be wrapped, and how should the reactor itself be wrapped?
  • First, each event needs its handler interface (API) so the reactor can call it later, and the fd must be stored as well. To buffer data conveniently we also need a user-space recvbuffer and sendbuffer, and for those two user-space buffers we must also record how many bytes they hold. Why? Because send and recv take those lengths as parameters. With this analysis, the general framework emerges.


  • How should this callback function be designed so that it meets our subsequent needs?

First, we must pass the fd as a parameter; then we need to pass the event type (events) and a pointer to the sockitem structure. For a read IO event we write the data read from the client into the sockitem's user-space recvbuffer, and for a write IO event we write the data in the sockitem's user-space sendbuffer back to the client.

With an int return value, the interface design is settled.

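As a concrete sketch of that interface, the structure below mirrors the `struct sockitem` in the full listing at the end of this post; `demo_recv_cb` is an illustrative handler, not the real `recv_cb`:

```c
#include <assert.h>
#include <string.h>

#define BUFFSIZE 1024

/* mirrors struct sockitem from the full listing below */
struct sockitem {
    int sockfd;
    int (*callback)(int fd, int events, void *arg);
    char recvbuffer[BUFFSIZE];
    char sendbuffer[BUFFSIZE];
    int rlen;
    int slen;
};

/* illustrative read handler: stage the received bytes for the write event */
static int demo_recv_cb(int fd, int events, void *arg) {
    (void)fd; (void)events;
    struct sockitem *si = (struct sockitem *)arg;
    memcpy(si->sendbuffer, si->recvbuffer, si->rlen);
    si->slen = si->rlen;
    return si->slen;
}
```

Because the third parameter is a `void *` to the sockitem, one signature serves every handler: accept, recv, and send callbacks all fit `int (*)(int, int, void *)`.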

  • Then there is the encapsulation of the reactor itself.

First of all, we definitely need an epoll handle, so epfd must be stored inside. Second, we need a container for the triggered IO events, so we add a struct epoll_event events[512]; to hold every IO event that fires. All the required global state is thus encapsulated into a reactor.
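That reactor structure appears in the full listing at the end of this post. Here is the same idea, with an illustrative `reactor_create` helper added for clarity (the original initializes the fields inline in `main`):

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <unistd.h>

/* mirrors struct reactor from the full listing below: the epoll handle
 * plus a buffer for the events epoll_wait returns */
struct reactor {
    int epfd;
    struct epoll_event events[512];
};

/* illustrative helper, not in the original code */
struct reactor *reactor_create(void) {
    struct reactor *r = calloc(1, sizeof(*r));
    if (!r) return NULL;
    r->epfd = epoll_create(1);           /* size arg is a hint, must be > 0 */
    if (r->epfd < 0) { free(r); return NULL; }
    return r;
}
```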

Reactor implementation, block by block

  • Registering an event and its handler



  • The multiplexer monitors multiple IO events


  • The event dispatcher distributes events to the corresponding handlers


  • Analysis of the specific event handlers

accept_cb: handler for new-connection events


recv_cb: handler for read events


send_cb: handler for write events

Complete reactor code and test results

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/epoll.h>
#include <errno.h>
#include <fcntl.h>

typedef struct sockaddr SA;

#define BUFFSIZE 1024

struct sockitem {
	int sockfd;
	//event handler: the callback interface for this fd
	int (*callback)(int fd, int events, void* arg);

	//user-space read/write buffers
	char recvbuffer[BUFFSIZE];
	char sendbuffer[BUFFSIZE];
	//bytes held in each buffer
	int rlen;
	int slen;
};

struct reactor {
	int epfd;
	struct epoll_event events[512];
};

//global eventloop --> the event loop
struct reactor* eventloop = NULL;

//declare the event handler functions
int recv_cb(int fd, int events, void *arg);
int accept_cb(int fd, int events, void* arg);
int send_cb(int fd, int events, void* arg);


int recv_cb(int fd, int events, void *arg) {
	struct sockitem* si = (struct sockitem*)arg;
	struct epoll_event ev; //needed below

	//handle the read IO event (leave room for a terminating NUL)
	int ret = recv(fd, si->recvbuffer, BUFFSIZE - 1, 0);
	if (ret < 0) {
		if (errno == EAGAIN || errno == EWOULDBLOCK) { //no data available yet
			return -1;
		}

		//real error: remove the node from the monitored red-black tree to avoid zombie nodes
		ev.events = EPOLLIN;
		epoll_ctl(eventloop->epfd, EPOLL_CTL_DEL, fd, &ev);

		close(fd);
		free(si);

	} else if (ret == 0) {
		//peer disconnected
		printf("fd %d disconnect\n", fd);

		ev.events = EPOLLIN;
		epoll_ctl(eventloop->epfd, EPOLL_CTL_DEL, fd, &ev);
		//close our end as well, to avoid piling up CLOSE_WAIT states
		close(fd);
		free(si);

	} else {
		//print the received data
		si->recvbuffer[ret] = '\0';
		printf("recv: %s, %d Bytes\n", si->recvbuffer, ret);
		//stage the data in the sendbuffer
		si->rlen = ret;
		memcpy(si->sendbuffer, si->recvbuffer, si->rlen);
		si->slen = si->rlen;
		//register the write event handler
		ev.events = EPOLLOUT | EPOLLET;

		si->sockfd = fd;
		si->callback = send_cb;
		ev.data.ptr = si;

		epoll_ctl(eventloop->epfd, EPOLL_CTL_MOD, fd, &ev);
	}

	return 0;
}

int accept_cb(int fd, int events, void* arg) {
	//handle a new connection: the connection IO event flow
	struct sockaddr_in cli_addr;
	memset(&cli_addr, 0, sizeof(cli_addr));
	socklen_t cli_len = sizeof(cli_addr);

	int cli_fd = accept(fd, (SA*)&cli_addr, &cli_len);
	if (cli_fd <= 0) return -1;

	char cli_ip[INET_ADDRSTRLEN] = {0};	//holds the client IP string

	printf("Recv from ip %s at port %d\n", inet_ntop(AF_INET, &cli_addr.sin_addr, cli_ip, sizeof(cli_ip)),
		ntohs(cli_addr.sin_port));
	//register the read event handler for this connection
	struct epoll_event ev;
	ev.events = EPOLLIN | EPOLLET;
	struct sockitem* si = (struct sockitem*)malloc(sizeof(struct sockitem));
	si->sockfd = cli_fd;
	si->callback = recv_cb; //set the event handler

	ev.data.ptr = si;
	epoll_ctl(eventloop->epfd, EPOLL_CTL_ADD, cli_fd, &ev);

	return cli_fd;
}

int send_cb(int fd, int events, void* arg) {
	//handle the send IO event
	struct sockitem *si = (struct sockitem*)arg;
	send(fd, si->sendbuffer, si->slen, 0);

	//re-register the read event handler
	struct epoll_event ev;
	ev.events = EPOLLIN | EPOLLET;

	si->sockfd = fd;
	si->callback = recv_cb; //set the event handler
	ev.data.ptr = si;

	epoll_ctl(eventloop->epfd, EPOLL_CTL_MOD, fd, &ev);

	return 0;
}



int main(int argc, char* argv[]) {
	if (argc != 2) {
		fprintf(stderr, "usage: %s <port>\n", argv[0]);
		return 1;
	}

	int sockfd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in serv_addr;
	int port = atoi(argv[1]);

	//fill in the server's protocol address
	memset(&serv_addr, 0, sizeof(serv_addr));
	serv_addr.sin_family = AF_INET;
	serv_addr.sin_addr.s_addr = INADDR_ANY;
	serv_addr.sin_port = htons(port);

	//bind
	if (-1 == bind(sockfd, (SA*)&serv_addr, sizeof(serv_addr))) {
		fprintf(stderr, "bind error");
		return 2;
	}

	if (-1 == listen(sockfd, 5)) {
		fprintf(stderr, "listen error");
		return 3;
	}

	//init eventloop
	eventloop = (struct reactor*)malloc(sizeof(struct reactor));
	//create the epoll handle
	eventloop->epfd = epoll_create(1);
	//register the handler for new-connection IO events
	struct epoll_event ev;
	ev.events = EPOLLIN;
	struct sockitem* si = (struct sockitem*)malloc(sizeof(struct sockitem));
	si->sockfd = sockfd;
	si->callback = accept_cb; //set the event handler

	ev.data.ptr = si;
	//add the watched event to the reactor's epfd
	epoll_ctl(eventloop->epfd, EPOLL_CTL_ADD, sockfd, &ev);

	while (1) {
		//the multiplexer watches multiple IO events
		int nready = epoll_wait(eventloop->epfd, eventloop->events, 512, -1);
		if (nready < 0) {
			break;
		}

		int i = 0;
		//dispatch every ready IO event to its handler
		for (i = 0; i < nready; ++i) {
			if (eventloop->events[i].events & EPOLLIN) {
				struct sockitem* si = (struct sockitem*)eventloop->events[i].data.ptr;
				si->callback(si->sockfd, eventloop->events[i].events, si);
			} else if (eventloop->events[i].events & EPOLLOUT) {
				//else-if: the EPOLLIN branch may have freed or re-registered si
				struct sockitem* si = (struct sockitem*)eventloop->events[i].data.ptr;
				si->callback(si->sockfd, eventloop->events[i].events, si);
			}
		}
	}

	return 0;
}


Chapter summary

  • The core of this chapter is implementing a classic network model: the reactor event loop, an event-driven design pattern.
  • Components: event handlers (callback functions), the event dispatcher (distributes events to the corresponding handlers), and the multiplexer (multiplexing facilities provided by the operating system, such as select, poll, epoll).
  • Process:
  1. Register events and their event handlers (and write those handlers)
  2. The multiplexer monitors the arrival of multiple IO events
  3. Triggered IO events are distributed to their corresponding handlers for processing

 

 

 


Origin blog.csdn.net/weixin_53695360/article/details/123894158