Socket programming | Non-blocking polling of TCP server IO model-4

1. Introduction

The previous articles covered how to build multi-thread and multi-process server models that allow multiple clients to connect to the server at the same time. This article implements a server model based on a single-process, non-blocking polling mechanism.

Links to previous articles:

" Socket Programming | TCP Programming Basic Process and API Detailed Explanation-1 "

" Socket programming | TCP server blocking IO model (multithreading)-2 "

" Socket programming | TCP server IO model concurrent blocking (multi-process implementation)-3 "

2. Conceptual understanding

The earlier multi-thread/multi-process server models were built on the blocking IO mechanism. The model in this article is based on non-blocking IO: the server polls in a loop to detect client connections and incoming data. The concepts of blocking and non-blocking are explained as follows:

2.1 Blocking I/O model:

In the blocking I/O model, the process blocks until the data copy is complete.

The application calls an IO function and blocks, waiting for the data to become ready. If the data is not ready, it keeps waiting. Once the data is ready, it is copied from the kernel to user space and the IO function returns a success indication.

When recv()/read() is called, the system first checks whether there is ready data. If the data is not ready, the call waits. When the data is ready, it is copied from the kernel buffer to user space and the function returns.
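
As a concrete illustration of the behaviour just described, here is a minimal sketch (not from the original article) of a recv() call on a socket left in its default blocking mode; connfd is assumed to be an already connected TCP socket.

/* Minimal sketch: blocking recv() on a default (blocking) socket.
 * Assumes `connfd` is an already connected TCP socket. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

void read_once_blocking(int connfd)
{
    char buf[1024];

    /* The process sleeps inside recv() until data is ready; the kernel
     * then copies the data into buf (user space) and the call returns. */
    ssize_t n = recv(connfd, buf, sizeof(buf), 0);
    if (n > 0)
        printf("received %zd bytes\n", n);
    else if (n == 0)
        printf("peer closed the connection\n");
    else
        perror("recv");
}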

A socket created with the socket() function is blocking by default. However, not every Socket API call that takes a blocking socket as a parameter will block. For example, bind() and listen() return immediately even on a blocking socket; the calls that can block are the ones that must wait for a network event, such as accept(), connect(), recv(), and send().
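
To obtain the non-blocking polling behaviour that this article is about, the socket can be switched to non-blocking mode with fcntl() and the O_NONBLOCK flag. The following is a minimal single-process polling sketch, not the article's full implementation: the helper names set_nonblocking() and poll_loop() are made up for illustration, the listening socket is assumed to have been created with socket()/bind()/listen() beforehand, and only one client is handled at a time.

/* Minimal sketch of a single-process non-blocking polling server loop.
 * Assumes listen_fd was created with socket()/bind()/listen(). */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Put an existing socket into non-blocking mode. */
static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* Poll in a loop: try to accept a client, then try to read from it. */
void poll_loop(int listen_fd)
{
    int client_fd = -1;
    char buf[1024];

    set_nonblocking(listen_fd);

    for (;;) {
        /* Non-blocking accept(): returns at once if no client is pending,
         * with errno set to EAGAIN/EWOULDBLOCK. */
        int fd = accept(listen_fd, NULL, NULL);
        if (fd >= 0) {
            set_nonblocking(fd);
            client_fd = fd;
        } else if (errno != EAGAIN && errno != EWOULDBLOCK) {
            break;                          /* real error on the listener */
        }

        if (client_fd >= 0) {
            /* Non-blocking recv(): returns at once if no data has arrived. */
            ssize_t n = recv(client_fd, buf, sizeof(buf), 0);
            if (n > 0) {
                send(client_fd, buf, n, 0); /* echo the data back */
            } else if (n == 0) {
                close(client_fd);           /* peer closed the connection */
                client_fd = -1;
            } else if (errno != EAGAIN && errno != EWOULDBLOCK) {
                close(client_fd);
                client_fd = -1;
            }
        }

        usleep(10 * 1000);                  /* avoid burning 100% CPU */
    }
}

Because accept() and recv() on a non-blocking socket return immediately with EAGAIN/EWOULDBLOCK when nothing is ready, a single process can keep cycling through its sockets instead of being stuck in one blocking call; the trade-off is that the loop keeps the CPU busy polling even when there is nothing to do.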


Origin: blog.csdn.net/weixin_40209493/article/details/129183607