Difference Between Concurrency and Parallelism

 Concurrency and parallelism are macro-level concepts describing how multiple requests are processed at the same time, but there is a difference between them. Parallelism means that two or more events occur at the same instant in time; concurrency means that two or more events occur within the same time interval.

    In the operating system, concurrency means that within a given period of time there are several programs that have been started but have not yet finished, all runnable on the same processor, yet at any single point in time only one of them is actually running on that processor.

① There is no one-to-one correspondence between programs and computations; one copy of a program can give rise to multiple computations.
② Concurrent programs constrain one another. Direct constraints arise when one program needs the computed results of another; indirect constraints arise when multiple programs compete for the same resource, such as a processor or a buffer.
③ A concurrent program runs in a stop-and-go fashion, advancing intermittently (the sketch after this list makes the interleaving visible).
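
As a minimal sketch of point ③, consider two threads in Java (the class and thread names here are illustrative, not from the original text). On a single-core machine the operating system time-slices them, so their output interleaves even though only one thread runs at any instant:

    // Two "programs" sharing one processor: each advances intermittently.
    public class InterleavingDemo {
        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                String name = Thread.currentThread().getName();
                for (int i = 0; i < 5; i++) {
                    System.out.println(name + " step " + i);
                    Thread.yield(); // hint to the scheduler: let the other thread advance
                }
            };
            Thread a = new Thread(task, "program-A");
            Thread b = new Thread(task, "program-B");
            a.start();
            b.start();
            a.join();
            b.join();
        }
    }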

    On a network server, concurrency refers to the number of connections being handled at the same time. For example, if a server has established 1000 TCP connections, i.e., it maintains 1000 sockets simultaneously, its concurrency is 1000. The server itself may have a single core, or 8 cores, 16 cores, and so on; either way, those 1000 socket connections are serviced in a time-shared fashion. If servicing each socket takes 1 s, the server completes roughly 1000 requests per second; if each takes 100 ms, roughly 10000 requests per second (in steady state, requests per second = connections in flight ÷ time per request, which is Little's law).
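
To make this concrete, here is a minimal sketch of how a single thread can maintain many sockets at once using Java NIO; the port number and the echo behavior are illustrative assumptions, not details from the original text:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.*;
    import java.util.Iterator;

    // One thread multiplexing many TCP connections:
    // concurrency can reach the thousands while parallelism stays at 1.
    public class EchoSelectorServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080)); // illustrative port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buf = ByteBuffer.allocate(1024);
            while (true) {
                selector.select(); // block until at least one socket is ready
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buf.clear();
                        if (client.read(buf) == -1) { client.close(); continue; }
                        buf.flip();
                        client.write(buf); // echo the bytes back, then move on
                    }
                }
            }
        }
    }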

 

    Let us first introduce a few concepts. Once these are clear, concurrency and parallelism become straightforward to distinguish.

    Session: when we work at a computer and open a window or a web page, each of these can be called a "session". Extending this to a web server: to sustain page visits from many users, the server can be thought of as managing multiple "sessions".

    Number of concurrent connections: a website sometimes returns the error "HTTP Error 503. The service is unavailable". If this happens only occasionally while the site is otherwise normal, it has most likely exceeded the site's maximum number of concurrent connections. The concurrent connection count describes the capacity of a network traffic-management device or proxy server for its business traffic: the maximum number of point-to-point connections it can handle at the same time. It reflects the device's ability to control access over multiple connections and to track connection state, and its size directly determines the maximum number of endpoints the device can support.
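
A cap like this can be sketched with a counting semaphore; the class below is a hypothetical illustration, with the 503 message being the only detail taken from the text above:

    import java.util.concurrent.Semaphore;

    // Sketch of a connection cap: requests beyond the limit are turned away,
    // much as a busy site answers "HTTP Error 503. The service is unavailable".
    public class ConnectionLimiter {
        private final Semaphore permits;

        public ConnectionLimiter(int maxConcurrentConnections) {
            this.permits = new Semaphore(maxConcurrentConnections);
        }

        public String handle(Runnable request) {
            if (!permits.tryAcquire()) {
                return "HTTP Error 503. The service is unavailable"; // over the cap
            }
            try {
                request.run();
                return "200 OK";
            } finally {
                permits.release();
            }
        }
    }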

    Concurrency, then, can be understood as the maximum number of sessions the server maintains. Parallelism is different: it is the number of sessions actually executing at the same instant. If there are two server processes, the parallelism may be 2 while the concurrency is 1000. A comparable pair of concepts is throughput versus bandwidth.
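
The distinction is easy to reproduce with a thread pool; in this hypothetical sketch, 1000 submitted tasks stand in for sessions while only 2 threads do the work:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // 1000 tasks (sessions) in flight, but only 2 worker threads:
    // concurrency is 1000, parallelism is at most 2.
    public class PoolDemo {
        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(2); // parallelism = 2
            for (int i = 0; i < 1000; i++) {
                final int id = i;
                pool.submit(() ->
                    System.out.println(Thread.currentThread().getName() + " handles session " + id));
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }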

   The distinction between throughput and bandwidth: throughput and bandwidth are easily confused, since both are measured in Mbps. Consider the terms themselves: bandwidth is sometimes glossed as the maximum net bitrate. When discussing the bandwidth of a communication link, we generally mean the number of bits per second that can be transmitted on the link, which depends on the link's clock rate and channel coding; in computer networking this is also called the wire speed. In that sense we can say the bandwidth of classic Ethernet is 10 Mbps. But the available bandwidth of a link must be distinguished from the number of bits per second that can actually be transferred over it, which is the throughput. The term "throughput" is usually preferred when referring to the measured performance of a system. Thus a pair of nodes connected by a link with a bandwidth of 10 Mbps may achieve a throughput of only 2 Mbps, because the implementation suffers from various inefficiencies; this means an application on one host can send data to the other host at 2 Mbps.

   Bandwidth can be likened to parallelism: 10 M bits (0s and 1s) can be on the line at the same time. Throughput is more like concurrency: the host actually gets 2 M bits through per second. The metaphor is not perfect, but on reflection the two pairs of concepts do resemble each other.
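
A toy calculation shows how far the two can diverge; the 1 MB file size is an assumed example, while the 10 Mbps and 2 Mbps figures come from the paragraph above:

    // Time to send an 8 Mbit (1 MB) file at nominal bandwidth vs. achieved throughput.
    public class TransferTime {
        public static void main(String[] args) {
            double fileMbits = 8.0;       // 1 MB = 8 Mbit (assumed file size)
            double bandwidthMbps = 10.0;  // wire speed of the link
            double throughputMbps = 2.0;  // what the system actually achieves
            System.out.printf("At bandwidth:  %.1f s%n", fileMbits / bandwidthMbps);  // 0.8 s
            System.out.printf("At throughput: %.1f s%n", fileMbits / throughputMbps); // 4.0 s
        }
    }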

 
