Sorting out multithreading-related concepts (personal understanding)

I have come across many multithreading-related concepts in bits and pieces. This post summarizes and organizes them, looking from a macro perspective at how multithreading-related problems arise and how they are solved.

The meaning of multithreading

The most commonly heard claim is that "multithreading improves efficiency". But why does it improve efficiency, and does multithreading always improve efficiency?

Let's discuss the two cases separately:

On a single-core processor, the main significance of multithreading is the ability to handle multiple tasks concurrently. In most cases it does not improve efficiency, because context switching itself has overhead. Sometimes, however, it does help: if one thread is blocked on an I/O operation, switching to another thread clearly improves overall efficiency, though this is not the main purpose.
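A minimal sketch of the I/O point: two threads whose "I/O" is simulated with Thread.sleep overlap their waits, so the wall-clock time is close to one wait, not two. The class name and the 100 ms figure are illustrative choices, not from the original post.

```java
// Sketch: overlapping I/O waits with threads shortens wall-clock time,
// even when only one core does the actual computing.
public class IoOverlap {
    static void blockingIo() {
        try {
            Thread.sleep(100); // stands in for a blocking disk or network read
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread t1 = new Thread(IoOverlap::blockingIo);
        Thread t2 = new Thread(IoOverlap::blockingIo);
        t1.start(); t2.start();
        t1.join(); t2.join();
        long ms = (System.nanoTime() - start) / 1_000_000;
        // The two 100 ms waits overlap, so this is close to 100 ms, not 200 ms.
        System.out.println("elapsed ~ " + ms + " ms");
    }
}
```

Run sequentially instead, and the same two calls would take about 200 ms.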

On a multi-core processor, multiple cores execute instructions in parallel, so multithreading can genuinely improve efficiency.
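As a sketch of the multi-core case, CPU-bound work can be split across threads so the cores run in parallel; the two-way split and the class name here are illustrative assumptions.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: split a CPU-bound summation across two threads.
// On a multi-core machine the halves run in parallel.
public class ParallelSum {
    // Sum of the half-open range [from, to).
    static long sum(long from, long to) {
        long s = 0;
        for (long i = from; i < to; i++) s += i;
        return s;
    }

    public static void main(String[] args) throws InterruptedException {
        final long N = 10_000_000L;
        AtomicLong total = new AtomicLong();
        Thread t1 = new Thread(() -> total.addAndGet(sum(0, N / 2)));
        Thread t2 = new Thread(() -> total.addAndGet(sum(N / 2, N)));
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Same result as the sequential sum 0 + 1 + ... + (N-1).
        System.out.println(total.get() == N * (N - 1) / 2); // prints "true"
    }
}
```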

Problems caused by multithreading

Multithreading is convenient and often necessary, but it also makes system design harder, because it introduces thread-safety problems, that is, data inconsistency.
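The classic symptom is the lost update. A minimal sketch (class and counts are my own illustration): two threads increment a shared counter without any synchronization, and because `counter++` is a read-modify-write sequence rather than a single atomic step, some increments are lost.

```java
// Sketch of the lost-update race: counter++ compiles to read, add, write,
// and two threads can interleave those steps and overwrite each other.
public class LostUpdate {
    static int counter = 0; // shared, unprotected

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) counter++; // not atomic
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Expected 200000, but typically less because updates are lost.
        System.out.println(counter);
    }
}
```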

At its root, data inconsistency arises because the CPU usually copies the variables a thread reads and writes into its cache first, so the thread operates on the cached copy rather than directly on main memory, and different threads can therefore see inconsistent values. This is also why immutable classes have no multithreading problems: an instance of such a class never changes after construction, and any "modification" must return a new object. For example, Java's String class is immutable and so has no multithreading problems.
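A small sketch of what immutability means for String: "modifying" it produces a new object while the original is untouched, so a shared String can never be seen in a half-changed state.

```java
// Sketch: String is immutable, so concat builds a new object
// and the original stays unchanged, safe to share across threads.
public class ImmutableDemo {
    public static void main(String[] args) {
        String s = "hello";
        String t = s.concat(" world"); // does not touch s
        System.out.println(s); // prints "hello"
        System.out.println(t); // prints "hello world"
    }
}
```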

Locks in multithreading

To solve the problems caused by multithreading, the concept of the lock appeared. Locking is ultimately supported by instructions at the processor level. For example, on x86-architecture chips, instructions with the lock prefix perform atomic operations: they cannot be interrupted, and they lock the bus or the relevant cache line while executing.
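A minimal sketch of locking from the user's side: the same kind of shared counter, now protected by a mutual-exclusion lock. In Java the `synchronized` keyword provides this; the class and iteration counts are illustrative.

```java
// Sketch: a shared counter protected by a mutex. Only one thread at a time
// may execute the synchronized block, so no increments are lost.
public class SafeCounter {
    static int counter = 0;
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (lock) { counter++; } // mutual exclusion
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter); // prints 200000 every run
    }
}
```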

On this basis, relying on these instructions, the operating system implements locks for users, such as mutexes and spin locks, and the standard libraries of programming languages wrap them so that we can easily use these ready-made locks.

But you can also use other library facilities, such as the java.util.concurrent.atomic package, to ensure thread safety yourself. This is what is often called lock-free programming. Lock-free programming does not mean there are no locks at all; it means the locks provided by the operating system are not used. Compared with lock-based programming, it has no deadlock problem, and efficiency improves to a certain extent (the operating system does not have to arbitrate, so there is, for example, no mutex context-switching cost). However, it also brings new problems, such as the ABA problem, and lock-free code is usually harder to implement correctly.
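A minimal sketch of the lock-free style, using the java.util.concurrent.atomic package the post mentions: a compare-and-swap (CAS) retry loop increments a counter without ever taking an OS lock. The retry-loop structure is the standard idiom; the class name is my own.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of lock-free programming: increment via a CAS retry loop
// instead of a mutex. No thread ever blocks; a losing thread just retries.
public class CasCounter {
    static final AtomicInteger counter = new AtomicInteger(0);

    static void increment() {
        while (true) {
            int old = counter.get();
            // compareAndSet succeeds only if the value is still `old`;
            // if another thread slipped in, reread and retry.
            if (counter.compareAndSet(old, old + 1)) return;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> { for (int i = 0; i < 100_000; i++) increment(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // prints 200000
    }
}
```

The ABA problem mentioned above shows up exactly here: a plain CAS cannot tell that the value went A→B→A between the read and the swap. The JDK's AtomicStampedReference pairs the value with a version stamp to detect that case.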

Reference links

  1. In C++, is std::atomic a real "atom"?
  2. The realization principle of the bottom layer of the lock

Origin blog.csdn.net/weixin_55658418/article/details/129433678