Some basic concepts of concurrent programming

Speaking of concurrent programming, the most fundamental concepts to understand thoroughly are the process and the thread. For these two concepts, you can refer to my earlier article:
What is a process and what is a thread

The relationship between the number of CPU cores and threads

Multi-core: also called a chip multiprocessor (Chip Multiprocessors, CMP). CMP was proposed by Stanford University; the idea is to integrate the processors of a massively parallel SMP (symmetric multiprocessor) system onto a single chip, with each processor executing different processes in parallel. Relying on multiple CPUs to run multiple programs simultaneously is an important direction for ultra-high-speed computing, known as parallel processing.

Multithreading: Simultaneous Multithreading (SMT) allows multiple threads on a single processor to execute simultaneously and share the processor's execution resources.

Number of cores and threads: mainstream CPUs today are multi-core. Increasing the number of cores is done to increase the number of hardware threads, because the operating system executes tasks through threads. In general the two have a 1:1 correspondence, i.e. a quad-core CPU generally has four hardware threads. After Intel introduced Hyper-Threading technology, however, cores and threads form a 1:2 relationship.
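You can check the number of logical CPUs (hardware threads) your own machine reports with a one-liner; this is a minimal sketch using Python's standard library:

```python
import os

# Number of logical CPUs ("threads") the OS sees; on a machine with
# Hyper-Threading this is typically twice the number of physical cores.
logical_cpus = os.cpu_count()
print(logical_cpus)
```

On a quad-core machine with Hyper-Threading this prints 8; without it, 4.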

CPU time slice rotation mechanism

During everyday development we usually don't feel limited by the number of CPU cores: we start as many threads as we like, even on a single-core CPU. Why? Because the operating system provides a CPU round-robin (time-slice) mechanism.

Round-robin scheduling is one of the oldest, simplest, fairest and most widely used scheduling algorithms, also known as RR scheduling. Each process is assigned a time interval, called its time slice, which is the amount of time the process is allowed to run.

Baidu Encyclopedia explains the principle of the CPU time-slice round-robin mechanism as follows:

If a process is still running when its time slice ends, the CPU is taken away from it and assigned to another process. If the process blocks or finishes before its time slice ends, the CPU switches immediately. All the scheduler has to do is maintain a ready queue of processes; when a process uses up its time slice, it is moved to the end of the queue.

The only interesting question about round-robin scheduling is the length of the time slice. Switching from one process to another takes a certain amount of time, including saving and loading register values and memory maps, and updating various tables and queues. Suppose a process switch (process switch, sometimes called a context switch) takes 5 ms, and the time slice is set to 20 ms; then after every 20 ms of useful work, the CPU spends 5 ms on process switching, so 20% of CPU time is wasted on administrative overhead.

To improve CPU efficiency, we could set the time slice to 5000 ms; then only about 0.1% of CPU time is wasted. But consider a time-sharing system: what happens if ten interactive users press the Enter key at almost the same moment? Assuming all other processes use their full time slices, the last unlucky process has to wait 5 s before it gets a chance to run. Most users cannot tolerate a 5 s response to a short command; the same problem occurs on a personal computer running multiple programs.

The conclusion can be summarized as follows: setting the time slice too short causes excessive process switching and reduces CPU efficiency; setting it too long may degrade the response to short interactive requests. A time slice of about 100 ms is usually a reasonable compromise.
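The scheduling loop described above (run a process for one quantum; preempt it and move it to the back of the queue if it is not done) can be sketched in a few lines. The process names and burst times below are made up for illustration:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling.

    burst_times: dict mapping process name -> total CPU time it needs.
    quantum: length of the time slice.
    Returns the order in which processes finish.
    """
    ready = deque(burst_times.items())  # the ready queue
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining > quantum:
            # Time slice used up: preempt and move to the end of the queue.
            ready.append((name, remaining - quantum))
        else:
            # Process finishes within this slice.
            finished.append(name)
    return finished

order = round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=4)
print(order)  # ['P2', 'P1', 'P3']
```

Note that P2 finishes first even though P1 was queued first: P1's burst (5) does not fit in one 4-unit slice, so it is preempted and must wait for another turn.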

This mechanism also explains why, when a runaway program pegs the CPU at 100%, we usually still get a chance to kill it without restarting the computer: the scheduler keeps granting time slices to other processes, including the shell or task manager we use to kill it.

Clarifying parallelism and concurrency

Take an example: suppose highway A has eight lanes side by side. Then at most 8 vehicles can travel abreast at once; as long as no more than 8 vehicles are on highway A, they can all run in parallel. A CPU works on the same principle: one CPU corresponds to one highway A, and its core (or hardware thread) count corresponds to the number of lanes; multiple CPUs are like multiple highways, each with several parallel lanes.

When talking about concurrency, always attach a unit of time: concurrency means how much work is handled per unit of time. Without a unit of time, the number is meaningless.

As the saying goes, one mind cannot attend to two things at once. It is the same with a computer: in principle, one CPU core can be assigned to only one process at a time in order to run it. Computers used to have only one CPU, that is, only one "heart"; to make it multitask and run multiple processes at the same time, concurrency techniques are required. Concurrency technology is quite complex, and the easiest part to understand is the round-robin process scheduling algorithm described above.

In summary:

Concurrency: the application can alternate between different tasks. For example, multithreaded execution on a single CPU core does not truly perform multiple tasks at the same time; if two threads are running, the CPU switches between the two tasks at a speed that is almost impossible to perceive, achieving the appearance of simultaneous execution. It is not truly simultaneous, but the computer is so fast that we cannot notice the switching.
Parallelism: the application can perform different tasks at the same moment. For example, while eating you can also talk on the phone; the two things are executed simultaneously.
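A minimal sketch of concurrency: two threads each count to the same total, and the OS (plus, in CPython, the interpreter's GIL) interleaves their execution. Even on a single core both tasks make progress and complete, which is exactly the "appearance of simultaneity" described above. The task names are made up for illustration:

```python
import threading

results = []

def task(name, n):
    # Each thread counts on its own; on a single core the OS interleaves
    # the two threads via time slices, so both make progress "at once".
    total = 0
    for _ in range(n):
        total += 1
    results.append((name, total))

t1 = threading.Thread(target=task, args=("task-1", 100_000))
t2 = threading.Thread(target=task, args=("task-2", 100_000))
t1.start()
t2.start()
t1.join()
t2.join()
print(sorted(results))
```

Which task finishes first is not deterministic; that is the scheduler's decision, not the program's.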

The meaning, benefits, and considerations of high-concurrency programming

Since the birth of multi-core CPUs, multithreaded, high-concurrency programming has attracted more and more attention. Multithreading can bring the following benefits to a program:

(1) Make full use of CPU resources
From the description of CPUs above, you can see that almost no CPU on the market today is single-core, and servers often have more than one CPU. If you still think in terms of single-threaded design, you are clearly behind the times. The basic scheduling unit of a program is the thread, and a single thread can only run on one core of one CPU at a time. If you have an i3 CPU, at worst a dual-core, 4-thread part, a single-threaded program wastes 3/4 of its computing power. A well-designed multithreaded program, by contrast, can run threads on multiple cores of multiple CPUs at the same time, fully utilizing the CPU, reducing its idle time, exploiting its computing power, and improving concurrency.

It is like taking the subway: many people read seriously during the ride, rather than riding first and only reading after they get home, which effectively doubles their time. This is one reason some people have plenty of time while others always say they have none. At work, too, we can sometimes do several things concurrently and make full use of our time. The CPU is the same: it should be fully used.
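As a sketch of dividing work among workers, the example below splits a summation into four chunks and hands each chunk to a worker thread. (Note a hedge for Python specifically: because of CPython's GIL, threads only overlap well for I/O-bound work; truly CPU-bound work would use a `ProcessPoolExecutor` instead. The chunking pattern is the same either way.)

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    # Each worker sums its own slice of the range.
    return sum(range(lo, hi))

# Split 0..1_000_000 into 4 chunks, one per worker.
chunks = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(lambda c: partial_sum(*c), chunks))
print(total)  # 499999500000
```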

(2) Speed up response time for users
For example, download tools like Thunder (Xunlei) open several threads to download; nobody wants to download with a single thread. Why? The answer is simple: downloading with multiple threads is faster.
We should do the same in program development, especially in Internet projects: shaving 1 s off a web page's response time can noticeably increase conversions when traffic is heavy. Anyone who has done web front-end performance tuning knows you should serve static resources from two or three sub-domains. Why? Because for each extra sub-domain, the browser opens several more connections to load your page's resources, improving the site's response speed. Multithreading and high concurrency really are everywhere.
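The multi-threaded download idea can be sketched as follows. To keep the example self-contained, `fetch` is a stand-in that sleeps instead of performing a real network request, and the URLs are made up; four 0.2 s "requests" issued concurrently finish in roughly 0.2 s instead of the 0.8 s a sequential loop would take:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for a network request: sleep to simulate I/O latency.
    time.sleep(0.2)
    return f"body of {url}"

urls = [f"https://example.com/part{i}" for i in range(4)]

start = time.time()
with ThreadPoolExecutor(max_workers=4) as pool:
    # All four "requests" are in flight at the same time.
    bodies = list(pool.map(fetch, urls))
elapsed = time.time() - start
print(len(bodies), round(elapsed, 1))
```

While one thread is blocked waiting on I/O, the others proceed, which is why threads help here even under CPython's GIL.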

(3) Make your code modular, asynchronous, and simpler
For example, when implementing the order flow of an e-commerce system, the steps that send an SMS and an email to the user can be split off: make "send SMS" and "send email" two separate modules and hand them to other threads to execute. This not only makes the operations asynchronous and improves system performance, but also makes the program modular, clearer, and simpler.
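The split described above might look like the following sketch, where `send_sms` and `send_email` are hypothetical stand-in modules (here they just record what they would send) and the notification work is submitted to a thread pool so the order flow does not wait on it:

```python
from concurrent.futures import ThreadPoolExecutor

notifications = []

# Hypothetical notification modules, separated from the order flow.
def send_sms(order_id):
    notifications.append(("sms", order_id))

def send_email(order_id):
    notifications.append(("email", order_id))

def place_order(order_id, pool):
    # The main flow hands the notifications off to worker threads and
    # returns immediately instead of waiting for them to finish.
    return [pool.submit(send_sms, order_id),
            pool.submit(send_email, order_id)]

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = place_order("order-42", pool)
    for f in futures:
        f.result()  # wait here only so the demo output is deterministic
print(sorted(notifications))
```

In a real system the futures would normally not be awaited in the request path; failures would be handled by the notification modules themselves (e.g. with retries).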

Origin blog.csdn.net/qiangzi1103/article/details/104480083