
The epoll interface was introduced in the Linux kernel to let a program handle a large number of file descriptors. It is a Linux-specific, strengthened version of the multiplexed I/O interfaces select/poll. It is widely used in highly concurrent server programs on Linux: when there are very many concurrent connections but only a few of them are active at any moment (the usual case), epoll can significantly improve the process's CPU utilization.

epoll is event-driven and very efficiently designed. When user space retrieves events, it does not need to traverse the whole set of monitored file descriptors; it only traverses the descriptors that the kernel, woken by I/O events, has asynchronously added to the ready queue and returned to user space.
epoll provides two trigger modes: level-triggered (LT) and edge-triggered (ET). The related I/O operations can of course be either blocking or non-blocking. The most efficient combination at present is the epoll + ET + non-blocking I/O model; in practice you should choose the mix that best fits your concrete situation.

The following are explained in order:
(1) general use of the epoll interface
(2) epoll interface + non-blocking I/O
(3) epoll interface + non-blocking I/O + edge trigger
(4) the epoll reactor model (the focus, and the core idea of the libevent library)

I. Overview of the basic ideas behind the epoll interface
epoll's design:
(1) An epoll "file system" is built inside the Linux kernel, implemented with a red-black tree. Adding to and removing from a red-black tree is highly efficient, which is one reason epoll is fast. If you are interested, look up red-black trees; here you only need to know the algorithm is extremely efficient.
(2) epoll uses asynchronous event wakeup on the red-black tree: the kernel monitors I/O, and when an event occurs it finds the corresponding node in the red-black tree and puts its data into an asynchronous wakeup event queue.
(3) epoll is commonly described as using mmap to map the I/O data store between user space and kernel space for speed, so that the data transfer involves no system calls, making it one of the fastest and cheapest inter-process data paths on Linux. (Note: in modern kernels epoll_wait actually copies the ready events to user space; the mmap description is a popular simplification.)


Compared with the traditional select/poll interfaces, epoll has the following advantages:
(1) A single process can open a large number of file descriptors. The maximum is bounded by the system limit on open file descriptors, not by the interface's own implementation. select imposes a hard limit of its own on the set size (usually 1024); poll uses a linked list and can go far beyond select, but still scans linearly.
(2) I/O efficiency does not degrade linearly as the number of file descriptors grows. With select/poll under high concurrency (say 100,000 or 1,000,000 sockets), only a fraction of the sockets are "active" at any one moment, yet the kernel performs a linear scan over the entire huge set to find and process the sockets with events, which badly wastes CPU resources. epoll improves on this: when an I/O event occurs, the kernel puts the active socket into a queue, and the program receives (via the accelerated kernel/user-space interaction) only the set of active sockets rather than the whole set.
(3) mmap is described as accelerating message passing between kernel and user space. With select/poll, the whole set of monitored file descriptors is copied into kernel space (a user-mode to kernel-mode switch); the kernel then polls the set, and when events occur copies the result set back to user space. epoll, as shown in figure [epoll general description 01], shares memory between kernel and program, so the data interaction does not involve privilege switches (user mode ↔ kernel mode); the kernel has permission to read this non-kernel memory directly.
For more about mmap, see my other blog post:
https://blog.csdn.net/qq_36359022/article/details/79992287

Next, combining figure [epoll general description 01] with the content above, we upgrade the illustration to figure [epoll general description 02].

First look at the red-black tree built in the epoll file system. epfd is the root, on which the listening descriptor and five cfds with established client connections are mounted. fds are added to and removed from the red-black tree, and each file descriptor has a corresponding structure:

/*
 * -[epoll structure description 01]-
 */
struct epoll_event {
    uint32_t     events;   /* epoll events */
    epoll_data_t data;     /* user data variable */
};

typedef union epoll_data {
    void     *ptr;
    int       fd;
    uint32_t  u32;
    uint64_t  u64;
} epoll_data_t;
As shown in figure [epoll general description 02], each fd is associated with a struct epoll_event, which consists of the event types to monitor plus a union. In general epoll interface usage, the fd itself is stored in the union.

Thus the general procedure for using the epoll interface is:
(1) Call epoll_create() to create an epoll object, referenced by epfd; all subsequent operations on the epoll object go through epfd. This object is the epoll red-black tree, and epfd is the only descriptor able to reference it.
(2) Call epoll_ctl() to add descriptors to, and delete them from, the epoll object.
(3) Call epoll_wait() to return (blocking, non-blocking, or with a timeout) the set of events to be processed.
(4) Handle the events.

/*
 * -[epoll interface general description 01]-
 */
int main(void)
{
    /*
     * The usual network-programming initialization (from socket() through listen())
     * and the error-handling parts are omitted here; the full source is given later.
     * Only the important steps are shown, and some initializations are not written out.
     */
    // [1] create an epoll object
    ep_fd = epoll_create(OPEN_MAX);          /* create the epoll model; ep_fd refers to the red-black tree root */
    listen_ep_event.events = EPOLLIN;        /* monitor the read event; note: the default is level-triggered (LT) */
    listen_ep_event.data.fd = listen_fd;     /* note: general epoll usage stores the fd itself here */
    // [2] put listen_fd and its configured structure onto the tree
    epoll_ctl(ep_fd, EPOLL_CTL_ADD, listen_fd, &listen_ep_event);

    while (1) {
        // [3] block (by default) waiting for events; ep_event is an array filled with
        //     the structures of all events that satisfied the conditions
        n_ready = epoll_wait(ep_fd, ep_event, OPEN_MAX, -1);
        for (i = 0; i < n_ready; i++) {
            temp_fd = ep_event[i].data.fd;

            if (ep_event[i].events & EPOLLIN) {
                if (temp_fd == listen_fd) {  // a new connection is arriving
                    connect_fd = accept(listen_fd, (struct sockaddr *)&client_socket_addr, &client_socket_len);
                    // initialize a structure for the newcomer to put on the tree
                    temp_ep_event.events = EPOLLIN;
                    temp_ep_event.data.fd = connect_fd;
                    // put it on the tree
                    epoll_ctl(ep_fd, EPOLL_CTL_ADD, connect_fd, &temp_ep_event);
                }
                else {                       // data is arriving on a cfd
                    n_data = read(temp_fd, buf, sizeof(buf));
                    if (n_data == 0) {       // the client closed
                        epoll_ctl(ep_fd, EPOLL_CTL_DEL, temp_fd, NULL);  // take it off the tree
                        close(temp_fd);
                    }
                    else if (n_data < 0) { /* error handling omitted */ }
                    else {
                        do {
                            // process the data
                        } while ((n_data = read(temp_fd, buf, sizeof(buf))) > 0);
                    }
                }
            }
            else if (ep_event[i].events & EPOLLOUT) {
                // handle the write event
            }
            else if (ep_event[i].events & EPOLLERR) {
                // handle the error event
            }
        }
    }
    close(listen_fd);
    close(ep_fd);
}
II. epoll level trigger (LT) and epoll edge trigger (ET)
(1) epoll level trigger: this is the default mode.
With level triggering set, take a readable event as an example: when data arrives and sits in the buffer waiting to be read, the event keeps firing as long as data remains in the buffer, even if you have not read it yet, until the buffer is empty.
(2) epoll edge trigger: this mode requires setting

listen_ep_event.events = EPOLLIN | EPOLLET;    /* edge trigger */

With edge triggering set, taking a readable event as an example, it fires only at the moment "data arrives".

Summary:
1. Using high/low levels as an analogy:
Level trigger: no data is 0, data present is 1. As long as there is data in the buffer the level stays at 1, so it keeps triggering.
Edge trigger: no data is 0, data present is 1. It triggers only on the rising edge from 0 to 1.
2.
Buffer holds readable data ⇒ level trigger fires
Data newly arrives in the buffer ⇒ edge trigger fires

So why is edge trigger (ET) said to be more efficient?
(1) Edge trigger fires only at the instant data arrives. A server receiving large amounts of data will often first receive a header (for level trigger this is merely the first of many triggers; for edge trigger it is the only one).
(2) The server then decides, by analyzing that header, whether to accept the payload. If it decides not to, under level triggering the unread data must be cleared manually (for example by a timer the program sets up itself), whereas under edge triggering the handler can simply return immediately and leave the data alone.
(3) If the data is accepted, both modes can receive it in full.

III. epoll + non-blocking I/O
In the code of [epoll interface general description 01] above, we used the read() function to read data. By default such functions are blocking: when no data is present they block, waiting for data to arrive.
(1) Suppose 100 B of data arrives. When read() is called in the epoll pattern, even though read() is a blocking call, it will not block here: reaching the read() call already means the buffer holds data, so blocking is not an issue in this spot.
(2) In server development, however, one generally does not call raw system-call-style functions like read() (which use only the kernel buffer) directly; instead one uses wrapped library functions (kernel buffer + user buffer) or one's own wrappers.
For example: suppose a readn() function is provided that returns only after reading 200 B. If 100 B arrives, the readable event fires and the program calls readn() asking for 200 B. If readn() blocks at that point, a deadlock forms.
The process: 100 B arrives ⇒ readable event fires ⇒ readn() is called ⇒ readn() has not received its 200 B, so it blocks ⇒ even when more data arrives at the cfd, the program is paused inside readn() and never gets the chance to call epoll_wait() again ⇒ complete deadlock.

Solution:
Set the cfd to non-blocking before putting it on the tree:

/* modify cfd to non-blocking reads */
flag = fcntl(cfd, F_GETFL);
flag |= O_NONBLOCK;
fcntl(cfd, F_SETFL, flag);
IV. epoll + non-blocking I/O + edge trigger
/*
 * -[epoll + non-blocking + edge trigger usage description 01]-
 */

/* Everything else stays the same as in [epoll interface general description 01];
 * only the two snippets below change.
 */

/* ... ... */
listen_ep_event.events = EPOLLIN | EPOLLET;    /* edge trigger */
/* ... ... */
/* modify cfd to non-blocking reads */
flag = fcntl(cfd, F_GETFL);
flag |= O_NONBLOCK;
fcntl(cfd, F_SETFL, flag);
/* ... ... */
V. The epoll reactor model (the core idea of the libevent library)
Step one: the prototype of the epoll reactor model - the epoll model
We now upgrade [epoll general description 02] to [epoll reactor model general description 01].
What differs here from general epoll interface usage is what is formally called the epoll model (epoll + ET + non-blocking + custom structure).
(1) Remember the structure associated with every file descriptor on the red-black tree? It was described in [epoll structure description 01]. In the epoll interface code we stored the file descriptor itself in the union; the most essential difference between the epoll model and plain epoll interface usage is that the union instead carries a pointer to a custom structure, whose basic form contains at least:

struct my_events {
    int  m_fd;                       // the monitored file descriptor
    void *m_arg;                     // generic parameter
    void (*call_back)(void *arg);    // callback function
    /*
     * You can encapsulate more data here, e.g. a user buffer,
     * the node's on-tree status, timestamps, and so on.
     */
};
/*
 * Note: you must allocate the array of my_events yourself, and each time you put
 * a node on the tree, make the ptr member of epoll_data_t point to one my_events
 * element.
 */
With this model, every event in the program can have its own handler; you only need to pass it via ptr.
(2) After epoll_wait() returns, the epoll model no longer classifies and processes events the way the code in [epoll interface general description 01] does; it directly calls the event's corresponding callback function, like this:

/*
 * -[epoll model usage description 01]-
 */
while (1) {
    /* monitor the red-black tree; return 0 if no event is satisfied within one second */
    int n_ready = epoll_wait(ep_fd, events, MAX_EVENTS, 1000);
    if (n_ready > 0) {
        for (i = 0; i < n_ready; i++)
            ((struct my_events *)events[i].data.ptr)->call_back(/* void *arg */);
    }
    else {
        /*
         * (3) Here you can do plenty of other work: for example, periodically
         * clean up data that was never fully transferred, do database-related
         * maintenance, or, if you want to go big, distribute work from here.
         */
    }
}


Step two: the epoll reactor model takes shape
This is the final form of epoll. If you have understood everything up to here, you already have seventy or eighty percent of the epoll knowledge.
Let us first recall the epoll model above and straighten out the train of thought.
(1) Set edge triggering, and set every file descriptor on the tree to non-blocking
(2) Call epoll_create() to create an epoll object
(3) Call epoll_ctl() to add to (or delete from) the epoll object the structure for the corresponding file descriptor. The structure's fields should be filled in, with ptr pointing to a custom structure; at this point the monitored event and its callback function are already determined, right?
(4) Call epoll_wait() (with a timeout) to return the set of events to be processed.
(5) For each element of the event set in turn, call the callback function that its ptr structure points to.
The above is the prototype version. So what does the epoll reactor model have beyond the prototype?
Look at step (3) above: when we put a descriptor with its custom structure on the tree, if we put it on to monitor reads, we make its callback the one for the readable event. In other words, it always sits there purely as a monitor of readable events.
Its flow is:
monitor readable event (ET) ⇒ data arrives ⇒ event triggers ⇒ epoll_wait() returns ⇒ callback handles it ⇒ continue epoll_wait() ⇒ loop like this until the program stops

Now upgrade to the finished version of the epoll reactor model.
Its flow is:
monitor readable event (ET) ⇒ data arrives ⇒ event triggers ⇒ epoll_wait() returns ⇒
read the complete data (inside the read-event callback) ⇒ take the node off the red-black tree (inside the read-event callback) ⇒ set the write event and the corresponding write callback (inside the read-event callback) ⇒ put the node back on the tree (inside the read-event callback) ⇒ process the data (inside the read-event callback)
⇒ monitor writable event (ET) ⇒ the peer becomes able to receive ⇒ event triggers ⇒ epoll_wait() returns ⇒
write the data (inside the write-event callback) ⇒ take the node off the red-black tree (inside the write-event callback) ⇒ set the readable event and the corresponding read callback (inside the write-event callback) ⇒ put the node back on the tree (inside the write-event callback) ⇒ finish processing (inside the write-event callback) ⇒ loop, alternating like this, until the program stops
That is the whole flow.

(1) Doesn't all this busy adding and removing waste CPU time?
Answer: For a single socket, a complete send-and-receive would otherwise occupy at least two positions on the tree, while alternating needs only one. Any design wastes CPU resources somewhere; the key is whether the waste is worth it, i.e. whether the cost buys a greater benefit. And once you take question (2) into account, it is not a waste at all.

(2) Why set readable and writable one after the other, alternating forever?
Answer: A server's basic job is nothing more than sending and receiving data. The epoll reactor model follows the TCP question-and-answer pattern: the server receives data and sends a reply, which covers the vast majority of server situations.
(2-1) A server that has received data is not necessarily able to write data.
Hypothesis 1: the server receives the client's data just as the client's receive sliding window becomes full. Suppose we did not switch to monitoring the write event, and the client deliberately keeps its sliding window full (a hostile client). Then the server's state for this client stays stuck in writing, unless you also set non-blocking writes plus error handling.
Hypothesis 2: the client suddenly stops after sending data due to some abnormal condition, causing a FIN to be sent to the server. If the server does not use writable-event monitoring and simply writes data right after receiving, the write will raise SIGPIPE and ultimately terminate the server process.
For details, see my blog post about the SIGPIPE signal.
----------------
Disclaimer: this article is an original article by CSDN blogger "Qingcheng Mountain monk", released under the CC 4.0 BY-SA copyright agreement; when reproducing it, please attach the original source link and this statement.
Original link: https://blog.csdn.net/qq_36359022/article/details/81355897
