A brief summary of how Node.js handles a TCP connection

This article walks through the core flow of how Node.js handles a TCP connection, explained in detail with example code. It should be a useful reference for study or work.

A few days ago I discussed some details of epoll and request handling in Node.js with a friend, so today let's briefly walk through how Node.js handles a request. We start with the listen function.

int uv_tcp_listen(uv_tcp_t* tcp, int backlog, uv_connection_cb cb) {
 // Decide the accept strategy, see the analysis below
 if (single_accept == -1) {
  const char* val = getenv("UV_TCP_SINGLE_ACCEPT");
  single_accept = (val != NULL && atoi(val) != 0); /* Off by default. */
 }
 if (single_accept)
  tcp->flags |= UV_HANDLE_TCP_SINGLE_ACCEPT;
 // Execute bind or set flags
 err = maybe_new_socket(tcp, AF_INET, flags);
 // Start listening
 if (listen(tcp->io_watcher.fd, backlog))
  return UV__ERR(errno);
 // Set the connection callback
 tcp->connection_cb = cb;
 tcp->flags |= UV_HANDLE_BOUND;
 // Set the io watcher callback, executed when epoll reports an incoming connection
 tcp->io_watcher.cb = uv__server_io;
 // Insert into the watcher queue. It is not added to epoll yet; the poll io phase
 // traverses the watcher queue and does the registration (epoll_ctl)
 uv__io_start(tcp->loop, &tcp->io_watcher, POLLIN);

 return 0;
}
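
For reference, the path above is what a plain js server drives. A minimal sketch (the port 8124 is arbitrary), where listen() ends up reaching the uv_tcp_listen shown above:

const net = require('net');

// createServer only creates the server object; listen() goes down through
// the C++ layer and finally reaches uv_tcp_listen, which registers the
// listening fd with the event loop.
const server = net.createServer();
server.listen(8124);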

We can see that at the Libuv layer, createServer follows the logic of traditional network programming; at this point our service is up. In the poll io phase, the listening file descriptor and its context (events of interest, callback, and so on) are registered in epoll, where the loop normally blocks. So what happens when a TCP connection arrives? epoll_wait returns, Libuv traverses the fds that triggered events, and for each one executes the callback stored in its context, which here is uv__server_io. Let's take a look at uv__server_io.

void uv__server_io(uv_loop_t* loop, uv__io_t* w, unsigned int events) {
 // Get the stream (the server) that owns this io watcher
 uv_stream_t* stream = container_of(w, uv_stream_t, io_watcher);
 int err;
 // Loop as long as the server fd is valid; uv__stream_fd(stream) is the fd of the server
 while (uv__stream_fd(stream) != -1) {
  // accept a fd for communicating with the client; note that this fd is
  // different from the server's fd
  err = uv__accept(uv__stream_fd(stream));
  // The server fd is non-blocking; this error means there is no connection
  // left to accept, so return directly
  if (err < 0) {
   if (err == UV_EAGAIN || err == UV__ERR(EWOULDBLOCK))
    return;
  }
  // Record it
  stream->accepted_fd = err;
  // Execute the callback
  stream->connection_cb(stream, 0);
  /*
   stream->accepted_fd == -1 means accepted_fd has already been consumed in
   the connection_cb callback; otherwise cancel the read event of the
   server's fd in epoll first and re-register it once the fd is consumed,
   i.e. stop accepting new connections for now
  */
  if (stream->accepted_fd != -1) {
   uv__io_stop(loop, &stream->io_watcher, POLLIN);
   return;
  }
  /*
   ok, accepted_fd has been consumed; should we keep accepting new fds?
   If UV_HANDLE_TCP_SINGLE_ACCEPT is set, only one connection is handled at
   a time, then we sleep for a while to give other processes a chance to
   accept (in a multi-process architecture). Without a multi-process
   architecture, setting this only delays connection handling
  */
  if (stream->type == UV_TCP &&
    (stream->flags & UV_HANDLE_TCP_SINGLE_ACCEPT)) {
   struct timespec timeout = {0, 1};
   nanosleep(&timeout, NULL);
  }
 }
}
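
The UV_HANDLE_TCP_SINGLE_ACCEPT branch above only matters when several processes accept on the same listening fd. As a hedged sketch of such a setup (not from the original article), the cluster module can be put into SCHED_NONE mode so each worker accepts on the shared socket itself; starting the program with UV_TCP_SINGLE_ACCEPT=1 in the environment then makes each worker take one connection per wakeup and briefly yield to the others. It assumes a Node version where cluster.isPrimary exists; worker count and port are arbitrary:

const cluster = require('cluster');
const net = require('net');

// SCHED_NONE must be set before forking: the workers then share the
// listening socket and call accept() themselves, which is the
// multi-process case the comment in uv__server_io refers to.
cluster.schedulingPolicy = cluster.SCHED_NONE;

if (cluster.isPrimary) {
  for (let i = 0; i < 2; i++) cluster.fork();
} else {
  net.createServer((socket) => {
    socket.end(`handled by worker ${process.pid}\n`);
  }).listen(8124);
}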

From uv__server_io we know that Libuv keeps accepting new fds in a loop and then executes the callback. Normally the callback consumes the fd, and so on until there are no connections left to process. Next, let's focus on how the fd is consumed in the callback, and whether a large number of loop iterations could take so long that Libuv's event loop is blocked for a while. The TCP callback is OnConnection in the C++ layer.

// Callback triggered when there is a connection
template <typename WrapType, typename UVType>
void ConnectionWrap<WrapType, UVType>::OnConnection(uv_stream_t* handle,
                                                    int status) {
 // Get the C++ layer object corresponding to the Libuv structure
 WrapType* wrap_data = static_cast<WrapType*>(handle->data);
 CHECK_EQ(&wrap_data->handle_, reinterpret_cast<UVType*>(handle));

 Environment* env = wrap_data->env();
 HandleScope handle_scope(env->isolate());
 Context::Scope context_scope(env->context());

 // The object used to communicate with the client
 Local<Value> client_handle;

 if (status == 0) {
  // Instantiate the client javascript object and handle.
  // Create a js layer object
  Local<Object> client_obj;
  if (!WrapType::Instantiate(env, wrap_data, WrapType::SOCKET)
           .ToLocal(&client_obj))
   return;

  // Unwrap the client javascript object.
  WrapType* wrap;
  // Store in wrap the C++ layer object corresponding to client_obj, the object used by the js layer
  ASSIGN_OR_RETURN_UNWRAP(&wrap, client_obj);
  // Get the corresponding handle
  uv_stream_t* client = reinterpret_cast<uv_stream_t*>(&wrap->handle_);
  // Take the fd just accepted out of handle and save it into client; client can then communicate with the peer
  if (uv_accept(handle, client))
   return;
  client_handle = client_obj;
 } else {
  client_handle = Undefined(env->isolate());
 }

 // Call back into js; client_handle is effectively the result of new TCP
 Local<Value> argv[] = { Integer::New(env->isolate(), status), client_handle };
 wrap_data->MakeCallback(env->onconnection_string(), arraysize(argv), argv);
}

The code looks complicated, but we only need to focus on uv_accept. Its first parameter is the handle corresponding to the server, and the second is the object used to communicate with the client.

int uv_accept(uv_stream_t* server, uv_stream_t* client) {
 int err;

 switch (client->type) {
  case UV_NAMED_PIPE:
  case UV_TCP:
   // Set the fd into the client
   err = uv__stream_open(client,
              server->accepted_fd,
              UV_HANDLE_READABLE | UV_HANDLE_WRITABLE);
   break;
 // ...
 }

 client->flags |= UV_HANDLE_BOUND;
 // Mark that the fd has been consumed
 server->accepted_fd = -1;
 return err;
}

uv_accept does two main things: it sets the fd used for communicating with the client into the client structure, and it marks the accepted fd as consumed, which drives the while loop mentioned above to continue. To the upper layers, what we get is an object for talking to the client: a structure at the Libuv layer, a C++ object at the C++ layer, and a js object at the js layer. The three are wrapped and linked layer by layer, and the core is the fd inside Libuv's client structure, which is the underlying ticket for communicating with the client. The final callback into the js layer executes onconnection in net.js. onconnection wraps a Socket object for communicating with the client; it holds the C++ layer object, the C++ layer object holds the Libuv structure, and the Libuv structure holds the fd.

const socket = new Socket({
  handle: clientHandle,
  allowHalfOpen: self.allowHalfOpen,
  pauseOnCreate: self.pauseOnConnect,
  readable: true,
  writable: true
 });
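
At the js layer, this Socket is what the server's 'connection' event hands to user code, which is the end of the chain described above. A small usage sketch (an echo server; the port 8124 is again arbitrary):

const net = require('net');

const server = net.createServer();

// socket is the net.Socket built in onconnection; its handle wraps the
// C++ object, which in turn owns the Libuv structure and the fd.
server.on('connection', (socket) => {
  socket.pipe(socket); // echo everything back to the client
});

server.listen(8124);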

That concludes this article on the core process of Node.js handling TCP connections. Thanks for reading and for your support.

Origin: blog.csdn.net/yaxuan88521/article/details/114686264