Analysis of Polling and Push (Long-Polling) Services

The dilemma of real-time web applications
Information exchange in a web application usually works like this: the client sends a request through the browser, the server receives and validates the request, processes it, and returns the result, and the client browser then presents the information. This mechanism is adequate for applications whose information does not change very frequently, but for applications with strong real-time requirements, such as online games, online securities trading, device monitoring, live news feeds, and RSS subscription pushes, the information may already be stale on the server by the time the client browser is ready to render it. Keeping client-side and server-side information in sync is therefore a key element of real-time web applications, and a challenge for web developers. Before the WebSocket specification appeared, developers who wanted to build these real-time applications had to adopt compromise solutions, the most common being polling and Comet. Comet is essentially an improvement on polling and comes in two implementations: the long-polling mechanism and the so-called streaming technique. Let's briefly introduce these technologies:
Polling:
This is the earliest solution for real-time web applications. The client sends requests to the server at fixed time intervals and keeps the client and server in sync through frequent requests. The biggest problem with this scheme is that when the client requests at a fixed frequency, the data on the server may not have been updated, which produces a lot of unnecessary network traffic, so it is a very inefficient approach to real time.
Polling means that, regardless of whether there is an update on the server side, the client (usually the browser) sends a query request at regular intervals. The result of a poll may be a new update from the server, or nothing at all, in which case the server just returns an empty message. Either way, after processing the response the client starts the next round of polling at the next scheduled point.
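As a minimal sketch of this idea in TypeScript, assume a hypothetical /api/updates endpoint that returns either new data or an empty payload; the client asks on a fixed timer whether or not anything changed:

```typescript
// Polling sketch (the /api/updates endpoint and its response shape are assumptions).
const POLL_INTERVAL_MS = 2000;

async function pollOnce(): Promise<void> {
  const response = await fetch("/api/updates");
  const payload = await response.json();
  if (payload && payload.hasUpdate) {
    render(payload.data); // apply the new data to the page
  }
  // If there was no update, this round trip was wasted.
}

function render(data: unknown): void {
  console.log("new data:", data); // placeholder for a real UI update
}

// Fire a request every 2 seconds, whether or not the server has news.
setInterval(() => {
  pollOnce().catch((err) => console.error("poll failed:", err));
}, POLL_INTERVAL_MS);
```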
The client of a push (long-polling) service does not poll. After issuing a request, the client immediately suspends and waits, and the server actively pushes information to the client only once it has an update. During the interval before the server has anything to push, the client issues no redundant requests, and the server does nothing for that client beyond keeping the most basic connection state. Once the server has an update, it is pushed to the client, the client handles it accordingly, and then the client initiates the next round of requests. Push is divided into two types, long polling and streaming:
Long polling:
Long polling is a refinement of timed polling whose purpose is to reduce useless network traffic. When there is no data update on the server side, the connection is held open for a period of time, until the data or state changes or the time expires. This mechanism reduces pointless interaction between the client and the server. Of course, if the server-side data changes very frequently, this mechanism offers no real performance improvement over regular polling.
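A long-polling client can be sketched as a loop that re-issues its request only after the previous one completes; the server is assumed to hold the request open until it has data or its own timeout fires. The /api/poll endpoint and status codes here are illustrative assumptions:

```typescript
// Long-polling sketch (hypothetical /api/poll endpoint that blocks server-side
// until there is an update or the server's own timeout expires).
async function longPollLoop(): Promise<void> {
  while (true) {
    try {
      const response = await fetch("/api/poll"); // stays open until the server answers
      if (response.status === 200) {
        handleUpdate(await response.json());     // process the pushed data
      }
      // A 204 (or timed-out) response means "nothing new"; just loop and reconnect.
    } catch (err) {
      console.error("long poll failed, retrying shortly:", err);
      await new Promise<void>((resolve) => setTimeout(resolve, 1000)); // brief back-off
    }
  }
}

function handleUpdate(update: unknown): void {
  console.log("server pushed:", update); // placeholder for real handling
}

longPollLoop();
```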
Streaming:
The streaming solution usually uses a hidden window (iframe) on the client's page to send a long-lived request to the server. After receiving this request, the server responds and keeps updating the connection state so that the connection between client and server does not expire. Through this mechanism, information can be continuously pushed from the server to the client. The mechanism has some user-experience problems, and different workarounds have to be designed for different browsers. Moreover, under high concurrency this mechanism places great strain on server-side resources.
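The classic hidden-iframe implementation is awkward to show briefly, so the sketch below illustrates the same push-stream idea with a different, modern mechanism: reading one long-lived chunked response incrementally via the fetch ReadableStream API. The /api/stream endpoint is an assumption for illustration:

```typescript
// Streaming sketch: one long-lived response that the server keeps writing to.
// This uses fetch's ReadableStream as a stand-in for the classic hidden-iframe
// technique; the /api/stream endpoint is hypothetical.
async function consumeStream(): Promise<void> {
  const response = await fetch("/api/stream");
  if (!response.body) {
    throw new Error("streaming responses not supported");
  }
  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;                                 // server closed the connection
    const chunk = decoder.decode(value, { stream: true });
    console.log("server pushed chunk:", chunk);      // placeholder for real handling
  }
}

consumeStream().catch((err) => console.error("stream failed:", err));
```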
A concrete example makes the difference clear:

In polling mode, suppose the client polls every 2 seconds; then the client sends a request every 2 seconds and the server answers one request from that client every 2 seconds. In reality, the server may have an update after 1 second, or only after 1 minute. For an update that arrives within 1 second, the client sees it with a delay of at least 1 second; for an update that arrives only after 1 minute, only the last query is meaningful, and all the polling during that minute is unnecessary, wasting resources on both the server and the client.
In push mode, the client suspends after sending a request and waits for the server to respond, which may take 1 second, 10 seconds, or 1 minute. If the server has an update after 1 second, the client receives it immediately at that point. If the update only comes after 1 minute, the client makes just one request for the whole minute and the server responds just once. The difference from polling should now be clear.
Looking at these solutions together, you will find that the so-called real-time techniques we currently use are not truly real-time; they merely use Ajax to simulate a real-time effect. Every interaction is still a full HTTP request/response cycle, and each request and response carries complete HTTP headers, which increases the amount of data transferred each time. The client- and server-side implementations of these schemes are also fairly complicated. In practice, to simulate a more convincing real-time effect, developers often have to construct two HTTP connections to emulate two-way communication between client and server: one connection handles data from the client to the server, and the other handles data from the server to the client. This inevitably increases programming complexity, adds load on the server, and limits the scalability of the application.
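A rough sketch of that two-connection pattern, under the assumption of hypothetical /api/poll and /api/send endpoints: a long-poll loop carries data from server to client, while ordinary POST requests carry data from client to server.

```typescript
// Two-connection sketch: downstream via long polling, upstream via plain POSTs.
// Both endpoint names are assumptions for illustration.

// Connection 1: server-to-client, reusing the long-poll loop idea.
async function downstream(onMessage: (msg: unknown) => void): Promise<void> {
  while (true) {
    const response = await fetch("/api/poll");  // held open by the server
    if (response.status === 200) {
      onMessage(await response.json());
    }
  }
}

// Connection 2: client-to-server, one short-lived request per outgoing message.
async function upstream(message: unknown): Promise<void> {
  await fetch("/api/send", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(message),
  });
}

// Usage: start listening, then send something.
downstream((msg) => console.log("received:", msg)).catch(console.error);
upstream({ text: "hello" }).catch(console.error);
```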
