C++11 Multithreaded Programming 2: Inter-Thread Communication, Thread Synchronization, and Locks

 

C++11 Multithreaded Programming 1: Overview of Multithreading

C++11 Multithreaded Programming 2: Inter-Thread Communication, Thread Synchronization, and Locks

C++11 Multithreaded Programming 3: Lock Resource Management and Condition Variables

C/C++ Basics: Boost Thread Creation and Thread Synchronization


2.0 Overview

        Thread synchronization is a mechanism for protecting shared data. Shared data is a resource that multiple threads access together, that is, one and the same piece of memory. Suppose threads A and B write to it at the same time, A writing 100 and B writing 200, while thread C simultaneously reads that memory: what value does C read? An error is bound to occur, because three threads are using the same memory at once. Thread synchronization therefore does not mean letting multiple threads do something at the same time; it means that when multiple threads touch the same data, they execute in sequence. In other words, synchronized threads do not operate on the shared data in parallel but serially, and this is what keeps the data safe.
 
        Suppose there are four threads A, B, C, and D. While thread A is accessing a shared resource in memory, threads B, C, and D cannot operate on that memory until A has finished its access. Then only one of B, C, and D can access the memory; the remaining two must keep blocking and waiting, and so on, until every thread has completed its operation on that memory.

Example:

#include <iostream>
#include <thread>
//#include <mutex>
//Linux make:  g++ -o main main3.c -lpthread
using namespace std;
//static mutex mut;

int i=0;

void thread_1(int n){
    while(n--){
       //mut.lock();
        i++;
       //mut.unlock();
    }
}

int main(){
	// the larger n is, the larger the error in i would be without the lock
    int n=100000;
    thread th1(thread_1,n);
    thread th2(thread_1,n);
    th1.join();
    th2.join();
    cout<< i << endl;
    return 0;
}

 

        If you run this program several times, you will see different results. If nothing went wrong, the output should be the same every time. Clearly the two threads are operating on the same memory (the global variable i) at the same time, and that is what causes the error.

        Multiple threads time-share the CPU's time slices: thread A must grab a time slice, and whichever thread grabs one gets to run. When the CPU computes, where does the data come from? It is read from physical memory: the data is loaded into the CPU's registers and processed there. Between physical memory and the CPU there is usually a cache, typically three levels, so data travels physical memory -> L3 cache -> L2 cache -> L1 cache -> CPU registers. The cache exists for speed: registers are processed far faster than physical memory, and the cache hierarchy improves data-processing efficiency. This also means i++ is really three steps, a load, an increment in a register, and a store back, and two threads can interleave those steps.

Synchronization

        Because data becomes corrupted when multiple threads access shared resources, thread synchronization is required. Four thread-synchronization mechanisms are commonly used: mutexes, read-write locks, condition variables, and semaphores. A shared resource is a variable accessed by multiple threads, usually a global variable or a heap variable; such shared resources are also called critical resources.

         Within a locked region, only one thread can run at a time; threads cannot run it concurrently. Only after the current thread leaves the region and unlocks can the other threads be unblocked. If three threads are blocked, they compete for the lock, and whichever one grabs it is the one that proceeds.


2.1 Analysis of thread states and the switching process

Thread status description:

        Initialization (Init): the thread is being created. (That is, the thread object has been created and its callback set; the thread is in the initialization state while its memory space and so on are being initialized. Little of our code intervenes here, but real time is consumed between initialization and the ready state, which is why a thread pool is used later to reduce that cost. Once the memory is ready, the thread becomes ready.)
        Ready: the thread is in the ready list waiting for CPU scheduling; it does not mean it can run immediately.
        Running: the thread is running, scheduled by the CPU.
        Blocked: the thread is blocked and suspended. Blocked states include pend (blocked on a lock, event, semaphore, etc.), suspend (actively suspended), delay (delay blocking), and pendtime (waiting for a lock, event, or semaphore with a timeout). A blocked thread gives up CPU scheduling, so no resources are wasted on it.
        Exit: the thread has ended and is waiting for the parent thread to reclaim its control-block resources.


2.2 Race conditions and critical sections: introducing the mutex

Race Condition: multiple threads read and write shared data at the same time.
Critical Section: the code fragment that reads and writes shared data.
The strategy for avoiding race conditions is to protect the critical section: only one thread may be inside the critical section at a time.


To address the issue shown in the 2.0 overview:

At this point you need to add a mutex (std::mutex), which requires the header #include <mutex>.
static mutex mut; declares the mutex variable. Once the resource is locked, other threads effectively wait in line, that is, they block.
mut.lock() is a lock at the operating-system level, and there is only this one mutex. Before a thread can operate on the data it must first grab the lock resource, so threads compete for the lock just as they compete for CPU time slices.

#include <iostream>
#include <thread>
#include <mutex>	// header required for the mutex
//Linux make:  g++ -o main main3.c -lpthread
using namespace std;

static mutex mut;	// add the mutex variable

int i=0;

void thread_1(int n){
    while(n--){
       mut.lock();	// acquire the lock; block and wait if it is unavailable
        i++;
       mut.unlock();	// release the lock
    }
}

int main(){
	// the larger n is, the larger the error in i would be without the lock
    int n=100000;
    thread th1(thread_1,n);
    thread th2(thread_1,n);
    th1.join();
    th2.join();
    cout<< i << endl;
    return 0;
}

 

Across multiple runs there are no more errors: the output is the same every time, and thread synchronization has succeeded.


2.3 A mutex pitfall: why the other threads cannot grab the resource

Ideally, when a thread releases the lock resource, subsequent threads will queue up to acquire the lock resource. However, in practice, sometimes one thread always occupies the resource, and other threads queue up and never get the resource.

#include <thread>
#include <iostream>
#include <string>
#include <mutex>
//Linux -lpthread
using namespace std;
static mutex mux;
 
void ThreadMainMux(int i)
{
    for (;;)
    {
        mux.lock();
        cout << i << "[in]" << endl;
		this_thread::sleep_for(100ms);
        mux.unlock();
    }
}
int main(int argc, char* argv[])
{
    for (int i = 0; i < 3; i++)
    {
        thread th(ThreadMainMux, i + 1);
        th.detach();
    }
 
    getchar();
    return 0;
}

 

You will find that one thread keeps entering, and not all threads get to run as they ideally should. This is not the result we want. The reason is as follows:

        In this code, while thread 1 holds the lock, threads 2 and 3 are blocked. When thread 1 unlocks, logically one of threads 2 and 3 should acquire the lock immediately. But thread 1, right after unlocking, loops around and applies for the lock again. Whether the lock resource is free is determined by the operating-system kernel, and when thread 1 unlocks, the kernel does not necessarily hand the resource over instantly: ours is not a real-time operating system, the gap between unlocking and relocking may be mere microseconds, and the scheduler only notices the change after some time. So when thread 1 relocks immediately after unlocking, before the system has had time to react, it grabs the lock again without queuing. That is where the pitfall lies. Therefore a short delay should be added between unlocking and relocking, to give the operating system time to hand the lock to another thread.

#include <thread>
#include <iostream>
#include <string>
#include <mutex>
//Linux -lpthread
using namespace std;
static mutex mux;
 
void ThreadMainMux(int i)
{
    for (;;)
    {
        mux.lock();
        cout << i << "[in]" << endl;
        //std::this_thread::sleep_for and C's sleep are much the same: both put the
        //current thread to sleep for the given time, during which it does not compete
        //with other threads for the CPU. One is a C function and one is C++,
        //from the headers <unistd.h> and <thread> respectively.
        this_thread::sleep_for(100ms);
        mux.unlock();
        this_thread::sleep_for(1ms);
    }
}
int main(int argc, char* argv[])
{
    for (int i = 0; i < 3; i++)
    {
        thread th(ThreadMainMux, i + 1);
        th.detach();
    }
 
    getchar();
    return 0;
}


2.4 Timeout lock timed_mutex (to avoid long-term deadlock) and recursive lock recursive_mutex

        By default a mutex has no timeout: while one thread holds it, the others block indefinitely. That keeps the code concise but makes debugging harder. Suppose we accidentally write a deadlock: how do we find it while debugging? Logging before every lock to see whether it was entered is costly and easy to miss. With the timed variant, timeouts are supported: if the lock is not acquired within the given time, the call returns false and the thread can report the failure and move on.

#include <thread>
#include <iostream>
#include <string>
#include <mutex>
//Linux -lpthread
using namespace std;
timed_mutex tmux;
 
void ThreadMainTime(int i){
    for (;;){
        if (!tmux.try_lock_for(chrono::milliseconds(500))){  // times out if the wait exceeds 500 ms
            cout << i << " try_lock_for timeout" << endl;
            continue;
        }
        cout << i << "[in]" << endl;
        this_thread::sleep_for(2000ms);  // simulated duration of the work being handled
        tmux.unlock();
        this_thread::sleep_for(1ms);     // prevent one thread from monopolizing the lock
    }
}
 
int main(int argc, char* argv[]){
    for (int i = 0; i < 3; i++){
        thread th(ThreadMainTime, i+1);
        th.detach();
    }
    getchar();
    return 0;
}

  

        Many pieces of business logic may use the same lock, and along one call path the same lock may be taken multiple times. With an ordinary mutex, locking it a second time from the same thread is undefined behavior and typically deadlocks or throws; if an exception is thrown and not caught, the program crashes. A recursive lock can be locked multiple times by the same thread: each lock increments the thread's lock count without changing the locked state, and there must be as many unlocks as there were locks. The mutex is only released when the count reaches zero, which avoids this kind of unnecessary deadlock.

#include <thread>
#include <iostream>
#include <string>
#include <mutex>
//Linux -lpthread
using namespace std;
recursive_mutex rmux;

void Task1(){
    rmux.lock();
    cout << "task1 [in]" << endl;
    rmux.unlock();
}
void Task2(){
    rmux.lock();
    cout << "task2 [in]" << endl;
    rmux.unlock();
}
void ThreadMainRec(int i){
    for(;;){
        rmux.lock();
        Task1();
        cout << i << "[in]" << endl;
        this_thread::sleep_for(500ms);
        Task2();
        rmux.unlock();
        this_thread::sleep_for(1ms);
    }
}
int main(int argc, char* argv[]){
    for (int i = 0; i < 3; i++){
        thread th(ThreadMainRec, i + 1);
        th.detach();
    }
    getchar();
    return 0;
}


2.5 Shared lock shared_mutex solves read and write problems

        While the current thread is writing data, it must be exclusive: other threads can neither write nor read. While the current thread is reading, other threads may also read but may not write.
        This involves two locks: a read (shared) lock and a write (exclusive) lock.
        If a thread only reads, it takes the read lock. When a thread needs to modify the data, it first takes the read lock to read, releases it, then takes the write lock, makes the modification, and releases the write lock afterwards. This is the pattern a shared mutex supports.
        The shared mutex contains two locks: the shared lock and the exclusive (mutex) lock. As long as no one holds the exclusive lock, a shared-lock request returns immediately; once someone holds the exclusive lock, neither the shared lock nor another exclusive lock can be entered.
C++14: shared timed mutex, shared_timed_mutex (generally supported from C++14).
C++17: shared mutex, shared_mutex.
If mutual exclusion is only needed when writing and not when reading, what happens with an ordinary lock?
With a plain mutex, only one thread at a time can enter even just to read, so in many business scenarios CPU resources are not fully utilized.

        The principle: if one thread is writing, no other thread may read or write. If one thread is reading, other threads may also read but may not write; a writer must wait for all readers to finish before writing. This ensures the resource is never written by more than one thread at a time, so no errors occur.
        This is the shared lock we are going to use. The read lock must be released before the exclusive lock can be entered; while any read lock is held, the exclusive lock cannot be taken. Once the exclusive lock is entered, all reader threads wait, other writers' locks also wait, and only one thread can write at a time.

#include <thread>
#include <iostream>
#include <string>
#include <mutex>
#include <shared_mutex>
//Linux make: g++ -std=c++14 -o main main8.c -lpthread
using namespace std;

//shared_mutex smux;		// C++17 shared lock
shared_timed_mutex stmux;	// C++14 shared lock
void ThreadRead(int i){
    for(;;){
        //lock_shared(): as long as no thread holds the exclusive lock, all readers may enter.
        stmux.lock_shared();
        cout << i << " Read" << endl;
        this_thread::sleep_for(500ms);
        stmux.unlock_shared();
        this_thread::sleep_for(1ms);
    }
}
void ThreadWrite(int i){
    for(;;){
        stmux.lock_shared();
        // read the data
        stmux.unlock_shared();
        stmux.lock(); // exclusive lock for writing
        cout << i << " Write" << endl;
        this_thread::sleep_for(300ms);
        stmux.unlock();
        this_thread::sleep_for(1ms);
    }
}
int main(int argc, char* argv[]){
    for(int i = 0; i < 3; i++){
        thread th(ThreadWrite, i + 1);
        th.detach();
    }
    for(int i = 0; i < 3; i++){
        thread th(ThreadRead, i + 1);
        th.detach();
    }
    getchar();
    return 0;
}


Origin blog.csdn.net/qq_34761779/article/details/129226464