Netty Authoritative Guide Reading Notes (2)

1. CS model
An interaction between two processes.
Server: provides the location information (a bound IP address plus a listening port).
Client: initiates a connection operation against the address the server is listening on. The connection is established through the TCP three-way handshake; once it succeeds, the two sides communicate through the socket.
2. Synchronous blocking model
ServerSocket: responsible for binding the IP address and listening on the port.
Socket: responsible for initiating the connection operation. After the connection is established, the two sides communicate synchronously through the socket's input and output streams.
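A minimal sketch of the synchronous blocking model above, assuming a loopback connection and OS-assigned port (the class and method names are illustrative): ServerSocket binds and listens, Socket initiates the connection, and after the handshake succeeds both sides talk through blocking streams.

```java
import java.io.*;
import java.net.*;

// Minimal sketch of the synchronous blocking (BIO) model:
// ServerSocket binds and listens; Socket initiates the connection;
// after the connection succeeds, both sides talk through blocking streams.
public class BioBasics {
    public static String demo() throws Exception {
        // Port 0 lets the OS pick a free port (for illustration only).
        ServerSocket server = new ServerSocket(0);
        Thread acceptor = new Thread(() -> {
            try (Socket client = server.accept();   // blocks until a client connects
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                out.println("HELLO " + in.readLine()); // blocking read, then reply
            } catch (IOException ignored) { }
        });
        acceptor.start();

        String reply;
        try (Socket socket = new Socket("127.0.0.1", server.getLocalPort());
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("CLIENT");
            reply = in.readLine();                  // blocks until the server replies
        }
        acceptor.join();
        server.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "HELLO CLIENT"
    }
}
```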
3. BIO communication model (one request and one response model)
Server: an independent Acceptor thread is responsible for listening for client connections. Each time it receives a connection request from a client, it creates a new thread to handle the link. After processing, a response is returned to the client through the output stream and the thread is destroyed.
Disadvantage: no elastic scaling. When the number of concurrent clients is large, server threads and concurrent client connections grow 1:1, because one thread can only handle one client connection.
Threads are a precious resource of the Java virtual machine. As the thread count balloons, system performance drops sharply; eventually thread stack overflows occur, new threads fail to be created, and the process crashes and can no longer provide service.
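The one-thread-per-connection model above can be sketched as follows (a loopback demo under assumed names, not production code): a single acceptor thread blocks in accept(), and every accepted socket gets its own handler thread, giving the 1:1 thread-to-connection ratio described.

```java
import java.io.*;
import java.net.*;
import java.util.*;

// Sketch of the one-thread-per-connection BIO server described above:
// one acceptor thread blocks in accept(), and each accepted socket gets
// its own handler thread (the 1:1 thread-to-connection model).
public class ThreadPerConnectionServer {
    public static Set<String> demo(int clients) throws Exception {
        ServerSocket server = new ServerSocket(0);
        Set<String> handlerThreads = Collections.synchronizedSet(new HashSet<>());

        Thread acceptor = new Thread(() -> {
            try {
                for (int i = 0; i < clients; i++) {
                    Socket socket = server.accept();          // blocks per client
                    new Thread(() -> {                        // one new thread per link
                        handlerThreads.add(Thread.currentThread().getName());
                        try (Socket s = socket;
                             BufferedReader in = new BufferedReader(
                                     new InputStreamReader(s.getInputStream()));
                             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                            out.println("ECHO " + in.readLine());
                        } catch (IOException ignored) { }
                    }).start();                               // thread dies after the response
                }
            } catch (IOException ignored) { }
        });
        acceptor.start();

        for (int i = 0; i < clients; i++) {
            try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.println("c" + i);
                in.readLine();
            }
        }
        acceptor.join();
        server.close();
        return handlerThreads; // one distinct handler thread per client connection
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo(3).size()); // 3 connections -> 3 handler threads
    }
}
```

Running it with 3 clients shows 3 distinct handler thread names, which is exactly the 1:1 growth the notes warn about.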
4. Thread pool tuning
When a new client connects, wrap its socket in a task (implementing Runnable) and hand it to a back-end thread pool for processing. The pool's message queue buffers requests, which avoids creating a new thread for every request and keeps resources under control.
Advantage: both the thread pool and the message queue are bounded, so no matter how high the concurrency, the thread count cannot balloon and the queue cannot grow until it overflows memory.
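A sketch of this "pseudo-asynchronous" tuning, assuming a ThreadPoolExecutor with an ArrayBlockingQueue (the sizes and names here are illustrative): each accepted socket is wrapped in a Runnable and submitted to a bounded pool, so threads and queued work stay controllable.

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

// Sketch of the thread-pool tuning described above: each accepted socket
// is wrapped in a Runnable and submitted to a bounded thread pool backed
// by a bounded ArrayBlockingQueue, keeping resources controllable.
public class BoundedPoolServer {
    public static String demo() throws Exception {
        // Bounded pool: 2 core/max threads, at most 4 tasks waiting in the queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(4));
        ServerSocket server = new ServerSocket(0);

        Thread acceptor = new Thread(() -> {
            try {
                Socket socket = server.accept();
                pool.execute(() -> {          // a task, not a thread, per connection
                    try (Socket s = socket;
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(s.getInputStream()));
                         PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                        out.println("POOLED " + in.readLine());
                    } catch (IOException ignored) { }
                });
            } catch (IOException ignored) { }
        });
        acceptor.start();

        String reply;
        try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println("req");
            reply = in.readLine();
        }
        acceptor.join();
        pool.shutdown();
        server.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "POOLED req"
    }
}
```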
5. Essential problems
read method: reading from the socket input stream blocks until one of the following occurs: there is data to read, all available data has been read, or a null pointer or IO exception is thrown. When the peer sends its request or response slowly, or network transmission is slow, the reading side's communication thread blocks for the whole time. If the sender takes 60s to send, the reader's IO thread is synchronously blocked for 60s, during which any other messages for that thread can only wait in the message queue.
write method: calling the write method of OutputStream also blocks until all bytes to be sent are written or an exception occurs. When the receiver processes slowly and cannot drain data from the TCP buffer in time, the sender's TCP window keeps shrinking until it reaches 0. With both sides in the Keep-Alive state, the sender can no longer write messages into the TCP send buffer, and the write operation blocks until the window size becomes greater than 0 or an IO exception occurs.
Thread pool problem: if all available threads are blocked, subsequent IO tasks pile up in the blocking queue; once the queue is full, further attempts to enqueue block as well. Because there is only a single Acceptor thread at the front end, once it blocks on the pool's full synchronous blocking queue, new client requests are rejected and large numbers of connection timeouts appear on the client side. When all connections have timed out, callers conclude the system has crashed and can no longer get new requests through.
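The blocking-read problem can be demonstrated with a small loopback experiment (illustrative names; the delay is an assumption standing in for the "sender needs 60s" case): the reader's thread sits inside read() for as long as the slow peer delays, so a slow sender directly stalls the reader's IO thread.

```java
import java.io.*;
import java.net.*;

// Demonstration of the blocking-read problem above: the reader's thread
// is stuck inside read() for as long as the sender delays, so a slow
// peer directly stalls the reader's IO thread.
public class BlockingReadDemo {
    public static long demo(long senderDelayMillis) throws Exception {
        ServerSocket server = new ServerSocket(0);
        Thread slowSender = new Thread(() -> {
            try (Socket s = server.accept()) {
                Thread.sleep(senderDelayMillis);       // the slow peer
                s.getOutputStream().write('x');
                s.getOutputStream().flush();
            } catch (Exception ignored) { }
        });
        slowSender.start();

        long blockedMillis;
        try (Socket s = new Socket("127.0.0.1", server.getLocalPort())) {
            long start = System.nanoTime();
            s.getInputStream().read();                 // blocks until the peer sends
            blockedMillis = (System.nanoTime() - start) / 1_000_000;
        }
        slowSender.join();
        server.close();
        return blockedMillis; // roughly senderDelayMillis
    }

    public static void main(String[] args) throws Exception {
        // The reader is blocked for approximately the sender's delay.
        System.out.println(demo(200));
    }
}
```

Scale the delay up to 60 seconds and the demo reproduces the scenario in the notes: one slow peer pins one server thread for a full minute.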

Reprinted from: http://blog.csdn.net/xxcupid/article/details/50492194
