"Linux" Lecture Nine: Detailed Explanation of Linux Multithreading (3) _ Thread Mutual Exclusion | Thread Synchronization

The "Preface" article is about the knowledge of Linux multithreading. The previous article is  the detailed explanation of Linux multithreading (2) . Today's article is the detailed explanation of Linux multithreading (3). The content is roughly about thread mutual exclusion and thread synchronization. Let's start with the explanation !

"Belonging column" Linux system programming

"Author" Mr. Maple Leaf (fy)

"Motto" Cultivate myself on the way forward

"Mr. Maple Leaf is a little intellectually ill"

"One sentence per article"

A hall full of flowers intoxicates three thousand guests,

One sword's frosty chill spans fourteen prefectures.

——Guanxiu, "Presented to Lord Qian"

Table of contents

Four, Linux thread mutual exclusion

4.1 Mutual exclusion related concepts between process threads

4.2 Mutex mutex

4.3 Mutex interface function

4.4 Mutex implementation principle

Five, Reentrancy and thread safety

5.1 Concept

5.2 Common thread unsafe situations

5.3 Common thread-safe situations

5.4 Common non-reentrant situations

5.5 Common reentrant situations

5.6 Reentrancy and thread safety connection

5.7 The difference between reentrancy and thread safety

Six, Deadlock

6.1 Concept

6.2 Four necessary conditions for deadlock

6.3 Avoiding Deadlocks

Seven, Linux thread synchronization

7.1 Synchronization concepts and race conditions

7.2 Condition variables

7.3 Condition variable related functions


Four, Linux thread mutual exclusion

4.1 Mutual exclusion related concepts between process threads

  • Critical resource: a resource shared by multiple execution flows (threads) is called a critical resource
  • Critical section: inside each thread, the code that accesses the critical resource is called the critical section
  • Mutual exclusion: at any moment, mutual exclusion guarantees that one and only one execution flow enters the critical section to access the critical resource; it is usually how critical resources are protected
  • Atomicity: an operation that cannot be interrupted by any scheduling mechanism; it has only two states, either completed or not completed

Critical Resources && Critical Sections

How should we understand critical resources and critical sections?

In the earlier articles on inter-process communication, communicating processes had to rely on third-party resources, because processes are independent of each other; such third-party resources include pipes, shared memory, and so on. In inter-process communication those third-party resources are the critical resources, and the code that accesses them is the critical section. Third-party resources are also called shared resources.

Communication between threads is comparatively simple, because most resources are shared between threads. For example, define a global variable ticket: this ticket is a shared resource, and every thread can see it.

Suppose ticket represents a movie ticket in a ticketing system, with 1000 tickets in total, and two threads competing to grab them. The code is as follows:

#include <iostream>
#include <string>
#include <pthread.h>
#include <unistd.h>
using namespace std;

// tickets -- the shared resource
int tickets = 1000;

void* getTicket(void* args)
{
    string username = static_cast<const char*>(args);
    while(1)
    {
        if(tickets > 0)
        {
            cout << username << ": grabbing ticket "  << tickets-- << endl;
            sleep(1); // simulate the time the grab takes
        }
        else
        {
            break;
        }
    }
    return nullptr;
}

int main()
{
    pthread_t tid1, tid2;
    pthread_create(&tid1, nullptr, getTicket, (void*)"thread 1");
    pthread_create(&tid2, nullptr, getTicket, (void*)"thread 2");

    pthread_join(tid1, nullptr);
    pthread_join(tid2, nullptr);

    return 0;
}

Compile and run: both threads can grab tickets.

In the above example, the global variable tickets is a critical resource, and the code in each thread that accesses the critical resource is the critical section.

Atomicity and Mutex

With multiple threads, if these execution flows each operate on a critical resource on their own, data inconsistency can occur, giving rise to thread safety problems.

For example, have multiple threads grab tickets: the main thread creates 5 threads to grab tickets, each thread ends on its own once the ticket count reaches 0, and the main thread only needs to join them

#include <iostream>
#include <string>
#include <pthread.h>
#include <unistd.h>
using namespace std;

// tickets -- the shared resource
int tickets = 1000;

void* getTicket(void* args)
{
    string username = static_cast<const char*>(args);
    while(1)
    {
        if(tickets > 0)
        {
            // simulate the time the grab takes
            usleep(12345); // microseconds
            cout << username << ": grabbing ticket "  << tickets-- << endl;
        }
        else
        {
            break;
        }
    }
    return nullptr;
}

int main()
{
    pthread_t tid1, tid2, tid3, tid4, tid5;
    pthread_create(&tid1, nullptr, getTicket, (void*)"thread 1");
    pthread_create(&tid2, nullptr, getTicket, (void*)"thread 2");
    pthread_create(&tid3, nullptr, getTicket, (void*)"thread 3");
    pthread_create(&tid4, nullptr, getTicket, (void*)"thread 4");
    pthread_create(&tid5, nullptr, getTicket, (void*)"thread 5");

    pthread_join(tid1, nullptr);
    pthread_join(tid2, nullptr);
    pthread_join(tid3, nullptr);
    pthread_join(tid4, nullptr);
    pthread_join(tid5, nullptr);
    return 0;
}

Compile and run it several times, and we find that the ticket count actually goes negative. There were only 1000 tickets, yet the 1001st, 1002nd, 1003rd, and 1004th tickets were also sold, which is obviously unreasonable. This is the thread safety problem caused by multiple threads accessing the same resource at the same time.

  • The threads above are executing in an interleaved fashion
  • The essence of this demonstration of interleaving: make the scheduler schedule and switch threads as frequently as possible
  • A thread is generally switched out when its time slice expires, when a higher-priority thread arrives, or when the thread blocks and waits
  • When does the switch actually happen? When returning from kernel mode to user mode, the thread checks the scheduling state, and if a switch is due, it takes place right there

tickets in the above code is a critical resource, because it is accessed by multiple execution flows at the same time. Checking whether tickets is greater than 0, printing the remaining count, and performing -- are all critical-section code, because all of it accesses the critical resource.

Reasons the remaining ticket count goes negative:

  • The critical section can be entered concurrently (simultaneously) by multiple threads; the critical resource is unprotected
  • The usleep simulates a lengthy piece of business logic, and during this lengthy stretch many threads may enter the same code segment
  • The tickets-- operation is not atomic

How can the critical section be protected?

Mutual exclusion: its function is to guarantee that at any time, only one execution flow (thread) enters the critical section and accesses the critical resource.

Why is the tickets-- operation not atomic?

Atomicity refers to an operation that will not be interrupted by any scheduling mechanism. The operation has only two states, either completed or not completed

The -- operation is not a single atomic operation; it corresponds to three assembly instructions:

  1. load: load the shared variable tickets from memory into a register
  2. update: update the value in the register, performing the -1
  3. store: write the new value from the register back to the memory address of the shared variable tickets

If an operation is atomic, it must complete in a single step, not several. That is, if an operation on a resource can be completed with a single assembly instruction, it is atomic.

At the assembly level, the -- operation is thus divided into three steps.
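A rough sketch of what tickets-- compiles to (x86-64, AT&T syntax; an assumed illustration, since the exact instructions depend on the compiler and architecture):

movl tickets(%rip), %eax   # load: read tickets from memory into a register
subl $1, %eax              # update: perform the -1 inside the register
movl %eax, tickets(%rip)   # store: write the new value back to tickets in memory

A thread can be switched off the CPU between any two of these instructions, which is exactly where the trouble starts.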

Analyzing why the ticket count goes negative

Suppose thread 1 has just executed the if(tickets > 0) check when it is switched off the CPU, i.e., stripped from it. The thread's data must be saved when it is switched out, because a CPU has only one set of registers: a thread switch must save the thread's context. Suppose there is 1 ticket left at this moment.

Now a thread 2 arrives. After thread 2 executes the if(tickets > 0) check, it also completes the -- operation: (1) read tickets from memory into a CPU register, (2) perform the arithmetic in the register, (3) write the updated tickets back to memory.

Now the CPU executes thread 1 again, reloading thread 1's context into the registers, and thread 1 continues from its original code: it is about to print and to perform --. Thread 1 now executes the --: (1) read tickets from memory into the CPU register; tickets is 0 at this point; (2) perform the arithmetic in the register; after -1, tickets becomes -1; (3) write the updated tickets back to memory. The ticket count has now gone negative.

This is just one of the possible cases. Suppose instead that thread 1 is switched off the CPU during the first of the three steps of the -- operation (this cannot easily be reproduced, the CPU is too fast, so it can only be narrated), and suppose the ticket count is 1000.

Since the -- operation takes three steps, it is possible that thread1 is switched out, i.e., stripped from the CPU, just after reading the value of tickets into the CPU. Suppose the value thread1 read is 1000. When thread1 is switched out, the 1000 in the register is part of thread1's context, so it is saved, and thread1 is suspended.

Suppose thread2 is scheduled next. Since thread1 only performed the first step of the -- operation, thread2 still sees the value of tickets as 1000. The system may also give thread2 a generous time slice, letting thread2 run the -- 500 times in a row before it is switched out, reducing tickets from 1000 to 500.

Now the CPU restores thread1. Restoring essentially means continuing to execute thread1's code with thread1's previous context restored, so the value in the register is the restored 1000. thread1 then continues with the second and third steps of the -- operation and finally writes 999 back to memory, which is simply unreasonable.

At this point the data has become inconsistent and data safety is broken. This is the thread safety problem brought about by multithreading.

Therefore the -- operation on a variable is not atomic. Although tickets-- is a single line of code, after compilation that line is essentially three assembly instructions. The corresponding ++ operation is likewise not atomic.

How to solve these problems? With a mutex (mutual exclusion lock).

4.2 Mutex (mutual exclusion lock)

  • In most cases, the data a thread uses consists of local variables, whose addresses lie within the thread's own stack space. In this case the variable belongs to a single thread
  • But sometimes many variables need to be shared between threads. Such variables are called shared variables, and threads can interact through this shared data
  • Multiple threads operating concurrently on shared variables brings thread safety problems, as in the example above

To solve the above problems, three things need to be done:

  1. The code must have mutual exclusion behavior: when the code enters the execution of the critical section, other threads are not allowed to enter the critical section.
  2. If multiple threads request the execution of the code in the critical section at the same time, and no thread is executing in the critical section, only one thread is allowed to enter the critical section.
  3. If a thread is not executing in a critical section, that thread cannot prevent other threads from entering the critical section

To achieve these three points, what is essentially needed is a lock. The lock provided on Linux is called a mutex, i.e., a mutual exclusion lock.

4.3 Mutex interface function

Initialize the mutex

The mutex needs to be initialized before it can be used. There are two ways to initialize the mutex: static allocation and dynamic allocation.

(1) Static allocation

pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

(2) Dynamic allocation

The mutex initialization function is pthread_mutex_init; view it with man 3 pthread_mutex_init:

Function: pthread_mutex_init

Header: #include <pthread.h>

Prototype:
        int pthread_mutex_init(pthread_mutex_t *restrict mutex,
              const pthread_mutexattr_t *restrict attr);

Parameters:
    mutex: the mutex to initialize
    attr: attributes for initializing the mutex; usually pass NULL

Return value:
    Returns 0 if the mutex is initialized successfully; on failure, returns an error number

Destroy the mutex

A mutex must be destroyed once it is no longer in use. The mutex destruction function is pthread_mutex_destroy:

Function: pthread_mutex_destroy

Header: #include <pthread.h>

Prototype:
        int pthread_mutex_destroy(pthread_mutex_t *mutex);

Parameter:
    mutex: the mutex to destroy

Return value:
    Returns 0 if the mutex is destroyed successfully; on failure, returns an error number

When destroying a mutex, you need to pay attention to:

  • Mutexes initialized with PTHREAD_MUTEX_INITIALIZER do not need to be destroyed
  • Do not destroy a locked mutex
  • After a mutex has been destroyed, make sure no thread will try to lock it again

Lock the mutex

The mutex locking function is called pthread_mutex_lock; view it with man 3 pthread_mutex_lock:

Function: pthread_mutex_lock

Header: #include <pthread.h>

Prototype:
        int pthread_mutex_lock(pthread_mutex_t *mutex);

Parameter:
    mutex: the mutex to lock

Return value:
    Returns 0 if the mutex is locked successfully; on failure, returns an error number

Unlock the mutex

The mutex unlocking function is called pthread_mutex_unlock:

Function: pthread_mutex_unlock

Header: #include <pthread.h>

Prototype:
        int pthread_mutex_unlock(pthread_mutex_t *mutex);

Parameter:
    mutex: the mutex to unlock

Return value:
    Returns 0 if the mutex is unlocked successfully; on failure, returns an error number

Note that a call to pthread_mutex_lock may run into the following situations:

  • The mutex is unlocked: the function locks the mutex and returns success
  • When the call is made, another thread has already locked the mutex, or other threads are applying for the mutex at the same moment and this thread loses the competition: the pthread_mutex_lock call blocks (the execution flow is suspended), waiting for the mutex to be unlocked

Let's improve the example from 4.1 above:

#include <iostream>
#include <string>
#include <pthread.h>
#include <unistd.h>
using namespace std;

// define the mutex globally so every thread can see it
pthread_mutex_t mutex;

// tickets -- the shared resource
int tickets = 1000;

void* getTicket(void* args)
{
    string username = static_cast<const char*>(args);
    while(1)
    {
        pthread_mutex_lock(&mutex); // lock
        if(tickets > 0)
        {
            // simulate the time the grab takes
            usleep(12345); // microseconds
            cout << username << ": grabbing ticket "  << tickets-- << endl;
            pthread_mutex_unlock(&mutex); // unlock
        }
        else{
            pthread_mutex_unlock(&mutex); // unlock
            break;
        }
    }
    return nullptr;
}

int main()
{
    pthread_mutex_init(&mutex, nullptr); // initialize the mutex
    pthread_t tid1, tid2, tid3, tid4, tid5;
    pthread_create(&tid1, nullptr, getTicket, (void*)"thread 1");
    pthread_create(&tid2, nullptr, getTicket, (void*)"thread 2");
    pthread_create(&tid3, nullptr, getTicket, (void*)"thread 3");
    pthread_create(&tid4, nullptr, getTicket, (void*)"thread 4");
    pthread_create(&tid5, nullptr, getTicket, (void*)"thread 5");

    pthread_join(tid1, nullptr);
    pthread_join(tid2, nullptr);
    pthread_join(tid3, nullptr);
    pthread_join(tid4, nullptr);
    pthread_join(tid5, nullptr);
    pthread_mutex_destroy(&mutex); // done with it, destroy the mutex
    return 0;
}

The critical section is the code between the lock and the unlock: the tickets > 0 check, the print, and the --.

Compile and run: negative numbers no longer appear.

Let's look at the run:

  • Execution has slowed down, because with the mutex the stretch from locking to unlocking is serialized across the threads (only one thread can execute it at a time)
  • We also find that a single thread grabs almost all of the tickets (this will be solved later with thread synchronization)
  • A mutex only stipulates mutually exclusive access; it says nothing about who must apply first. Who gets the lock is the outcome of competition among the execution flows

Now let each thread do other things after grabbing a ticket, such as generating an order.
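A minimal sketch of the modified getTicket (the usleep after the unlock is a stand-in for the order-generation work, assumed here for illustration):

void* getTicket(void* args)
{
    string username = static_cast<const char*>(args);
    while(1)
    {
        pthread_mutex_lock(&mutex); // lock
        if(tickets > 0)
        {
            cout << username << ": grabbing ticket " << tickets-- << endl;
            pthread_mutex_unlock(&mutex); // unlock
            usleep(12345); // follow-up work outside the critical section, e.g.
                           // generating an order; while this thread is busy
                           // here, the other threads can compete for the lock
        }
        else
        {
            pthread_mutex_unlock(&mutex); // unlock
            break;
        }
    }
    return nullptr;
}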

Compile and run: it is no longer the case that only one thread grabs all the tickets (proper ordering still needs thread synchronization, covered later).

4.4 Mutex implementation principle

How to treat mutexes?

  • A global variable is a critical resource and needs protection; locks are what protect critical resources
  • A lock is itself a global resource, so the lock itself is also a critical resource. Does the lock need protection? Being a critical resource it must be protected, yet the lock is the very thing that protects critical resources. So who protects the lock?
  • The lock actually protects itself. We only need to guarantee that applying for the lock is atomic, and then the lock is safe
  • pthread_mutex_lock, pthread_mutex_unlock: the locking and unlocking processes are in fact atomic. If the application succeeds, execution continues onward; if it does not succeed for the moment, the execution flow blocks. Whoever holds the lock enters the critical section

Will a thread executing in a critical section undergo a thread switch?

  • A thread inside the critical section can certainly be switched out, but even while it is switched away, no other thread can enter the critical section to access the resource
  • That is because the thread is switched out while still holding the lock. As long as the lock is not released, no other thread can acquire it, and so none can enter the critical section
  • Any other thread that wants to enter the critical section must wait for that thread to finish the critical-section code and release the lock, and only after acquiring the lock itself can it enter

How is the atomicity of the mutex reflected?

To other threads, the lock has only two meaningful states: (1) before the lock is acquired, and (2) after the lock is released. From the perspective of other threads, the current thread's holding of the lock is atomic.

How is atomicity guaranteed in the locking and unlocking process? (the principle of the mutual exclusion lock)

To implement mutex operations, most architectures provide a swap or exchange instruction, whose job is to exchange the contents of a register and a memory unit. Since this is a single instruction, atomicity is guaranteed. Even on multi-processor platforms, the bus cycles for accessing memory are serialized: while the swap instruction on one processor is executing, the swap instruction of another processor can only wait for the bus cycle.

The pseudo code for locking and unlocking is as follows:

%al is a CPU register (the name differs between architectures). lock: the assembly for locking (applying); unlock: the assembly for unlocking. All of it is pseudocode for ease of understanding, and assume the initial value of mutex is 1.
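A sketch of that pseudocode, reconstructed from the steps described below (the standard swap-based scheme, not the literal pthreads source):

lock:
    movb $0, %al          ; clear the %al register to 0
    xchgb %al, mutex      ; a single instruction: swap %al with mutex in memory
    if (contents of %al > 0) {
        return 0;         ; lock acquired, enter the critical section
    } else {
        suspend and wait; ; lost the competition, wait for the lock
    }
    goto lock;

unlock:
    movb $1, mutex        ; set mutex in memory back to 1
    wake up the threads waiting on mutex;
    return 0;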

The process of applying for the lock:

  1. First move 0 into the %al register, i.e., clear it
  2. The mutex variable is the mutex we defined, and it lives in memory
  3. xchgb %al, mutex exchanges the value in the %al register with mutex; this instruction exchanges data between a register and a memory unit, and the exchange completes in a single assembly instruction
  4. Then check whether the content of the register is greater than 0. If it is, the lock application succeeded, and the thread may enter the critical section to access the critical resource
  5. If the content of the register is not greater than 0, the application failed; the thread is suspended and waits, competing for the lock again once it is released

The process of releasing the lock:

  1. Set the value of mutex in memory to 1
  2. Then wake up the suspended threads, i.e., the threads waiting for mutex

The processes of applying for and releasing the lock do not fear being switched off the CPU: the exchanged lock value travels in the thread's register context, which is saved and restored across the switch, so locking and unlocking remain thread-safe.

Five, Reentrancy and thread safety

5.1 Concept

  • Thread safety: when multiple threads execute the same piece of code concurrently, the results never diverge. Problems commonly arise when global or static variables are operated on without lock protection
  • Reentrancy: the same function may be called by different execution flows, and another flow may enter it before the current invocation has finished; this is called reentering. If a function, when reentered, produces no differing results and no problems whatsoever, it is called a reentrant function; otherwise it is a non-reentrant function

5.2 Common thread unsafe situations

  • Functions that do not protect shared variables
  • Functions whose state changes as they are called
  • Functions that return a pointer to a static variable
  • Functions that call thread-unsafe functions

5.3 Common thread-safe situations

  • Every thread only reads global or static variables and never writes them; such cases are generally thread safe
  • Classes or interfaces whose operations are atomic with respect to threads
  • Interfaces whose results are unambiguous no matter how the threads are switched between

5.4 Common non-reentrant situations

  • Functions that call malloc/free, because malloc manages the heap with a global linked list
  • Functions that call standard I/O library functions; many implementations of the standard I/O library use global data structures in a non-reentrant way
  • Functions whose bodies use a static data structure

5.5 Common reentrant situations

  • Does not use global or static variables
  • Does not use memory allocated with malloc or new
  • Does not call non-reentrant functions
  • Does not return static or global data; all data is provided by the function's caller
  • Uses only local data, or protects global data by making a local copy of it
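As an illustration, here is a sketch contrasting a non-reentrant function (static buffer) with a reentrant rewrite (caller-provided buffer); the function names are made up for the example:

#include <cstdio>
#include <cstddef>

// Non-reentrant: the result lives in a static buffer shared by all callers;
// a second concurrent call overwrites the first caller's result.
const char* format_ticket_bad(int n)
{
    static char buf[32];                           // static data => non-reentrant
    std::snprintf(buf, sizeof(buf), "ticket-%d", n);
    return buf;                                    // returns a pointer to static data
}

// Reentrant: no static or global state; all data is provided by the caller.
void format_ticket_good(int n, char* out, std::size_t outlen)
{
    std::snprintf(out, outlen, "ticket-%d", n);
}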

5.6 Reentrancy and thread safety connection

  • If a function is reentrant, it is thread safe
  • If a function is not reentrant, it cannot be used by multiple threads, or thread safety problems may arise
  • If a function operates on global variables without protection, it is neither thread-safe nor reentrant

5.7 The difference between reentrancy and thread safety

  • Reentrant functions are one kind of thread-safe function
  • Thread-safe functions are not necessarily reentrant, but reentrant functions must be thread safe
  • If access to the critical resource is locked, the function is thread-safe; but if such a function is reentered before it has released its lock, a deadlock results, so it is not reentrant

Six, Deadlock

6.1 Concept

Deadlock refers to a permanent waiting state in which each thread in a group holds resources it will not release while requesting resources that are held, and will not be released, by the others.

  • Later we may use several locks at once. Suppose thread A holds its own lock without releasing it while also demanding thread B's lock, thread B does the same, and so on for threads C, D, E...; deadlock arises easily
  • A single execution flow can also deadlock itself. If an execution flow applies for the same lock twice in a row (a mistake in the code, as sketched below), it hangs: the first application succeeds, but the second fails because the lock is already taken, so the flow is suspended until the lock is released. Yet the lock is in its own hands, and while suspended it has no chance to release it, so it will never be woken up; the execution flow is now deadlocked
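A minimal sketch of that self-deadlock (assumed code, for illustration only):

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void* worker(void*)
{
    pthread_mutex_lock(&lock); // first application: succeeds
    pthread_mutex_lock(&lock); // second application: the lock is already held
                               // (by this very thread), so with the default
                               // mutex type on Linux this blocks forever
    // ... never reached ...
    pthread_mutex_unlock(&lock);
    return nullptr;
}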

6.2 Four necessary conditions for deadlock

  1. Mutual exclusion: a resource can be used by only one execution flow at a time
  2. Hold and wait: an execution flow blocked while requesting resources holds on to the resources it has already obtained
  3. No preemption: a resource an execution flow has obtained cannot be forcibly taken away before it is done with it
  4. Circular wait: several execution flows form a head-to-tail circular chain, each waiting for a resource held by the next

Note: Deadlock will only occur if these four conditions are met at the same time

6.3 Avoiding Deadlocks

  • Break any one of the four necessary conditions for deadlock
  • Acquire locks in a consistent order
  • Avoid scenarios where a lock is never released
  • Allocate resources in one shot

In addition, there are some algorithms to avoid deadlock, such as deadlock detection algorithm and banker's algorithm.
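As an illustration of consistent lock ordering (a sketch; lockA, lockB, and the ordering rule are made up for the example):

#include <pthread.h>

pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

// Every thread that needs both locks takes them in the same order: A, then B.
// If one thread took A-then-B while another took B-then-A, each could end up
// holding one lock while waiting for the other: the circular wait of a deadlock.
void* worker(void*)
{
    pthread_mutex_lock(&lockA);   // always A first...
    pthread_mutex_lock(&lockB);   // ...then B, so no circular wait can form
    // ... access the resources protected by both locks ...
    pthread_mutex_unlock(&lockB); // release in reverse order
    pthread_mutex_unlock(&lockA);
    return nullptr;
}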

Seven, Linux thread synchronization

7.1 Synchronization concepts and race conditions

  • Synchronization: on the premise that data safety is guaranteed, letting threads access critical resources in a specific order, thereby effectively avoiding starvation, is called synchronization
  • Race condition: a program misbehaving because of timing problems is called a race condition

Synchronization is explained as follows: 

  • With plain locking alone there can still be problems. Suppose one thread is extremely competitive and wins the lock every single time, yet does nothing after acquiring it. In our eyes this thread is forever applying for and releasing the lock, which may leave the other threads unable to win the lock for a long time: a starvation problem
  • Plain locking is not wrong: it guarantees that only one thread is in the critical section at any moment. But it is not reasonable: it does not let each thread use the critical resource efficiently
  • Now add a rule: when a thread releases the lock, it may not immediately apply for it again; it must queue up at the tail of the lock's resource waiting queue
  • With this rule added, the next thread to get the lock must be the one at the head of the waiting queue. This is thread synchronization

In order to support thread synchronization, condition variables need to be used

7.2 Condition variables

The concept of condition variables :

A condition variable is a synchronization mechanism used for communication between threads. It lets one thread wait until another thread makes some condition true before continuing execution. Condition variables are usually used together with a mutex for thread safety, and they provide an efficient way to achieve synchronization and communication between threads.

An example to help understand condition variables:

  • Suppose there is an interview room with one interviewer inside. The company sends interview notices to the candidates, and a crowd of candidates arrives and waits outside. Whenever one candidate's interview finishes and the interviewer is about to see the next one, the interviewer finds the doorway packed with waiting candidates
  • The interviewer has no idea whose turn it is, so he simply calls in whichever candidate is standing closest
  • If the candidate who is called in arrived later than the others but goes in first merely because he stood closest, does that follow the rules? It does, but it is unreasonable
  • Later a manager comes to organize the candidates. The manager puts up a sign: to be interviewed, you must first queue up, and interviews proceed strictly in queue order
  • The sign is equivalent to a condition variable: you are allowed to interview only when you satisfy the condition
  • The candidates correspond to individual threads, and the interview room is the public, i.e. critical, resource that all of them want to access
  • If a thread wants to access this critical resource, it must satisfy the condition variable; otherwise it can only wait under the condition variable

Translated into thread terms:

  1. The condition variable has its own internal "queueing" wait queue
  2. Whoever calls the condition variable's wait function joins that queue
  3. When a thread receives the "you may enter the interview room" signal, it is taken off the head of the queue and allowed to access the critical resource, the "interview room"

7.3 Condition variable related functions

Initialize condition variable

Like mutexes, condition variables need to be initialized. The initialization function is pthread_cond_init; see man 3 pthread_cond_init:

Function: pthread_cond_init

Header: #include <pthread.h>

Prototype:
         int pthread_cond_init(pthread_cond_t *restrict cond,
              const pthread_condattr_t *restrict attr);

Parameters:
    cond: the condition variable to initialize
    attr: attributes for initializing the condition variable; usually pass NULL

Return value:
    Returns 0 if the condition variable is initialized successfully; on failure, returns an error number

Calling the pthread_cond_init function to initialize a condition variable is called dynamic allocation. Alternatively, a condition variable can be initialized in the following way, called static allocation:

 pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

Destroy the condition variable

The function that destroys the condition variable is called pthread_cond_destroy

Function: pthread_cond_destroy

Header: #include <pthread.h>

Prototype:
        int pthread_cond_destroy(pthread_cond_t *cond);

Parameter:
    cond: the condition variable to destroy

Return value:
    Returns 0 if the condition variable is destroyed successfully; on failure, returns an error number

Note: condition variables initialized with PTHREAD_COND_INITIALIZER do not need to be destroyed

Wait for a condition variable to be satisfied

The function that waits for a condition variable to be satisfied is called pthread_cond_wait; view it with man 3 pthread_cond_wait:

Function: pthread_cond_wait

Header: #include <pthread.h>

Prototype:
        int pthread_cond_wait(pthread_cond_t *restrict cond,
              pthread_mutex_t *restrict mutex);

Parameters:
    cond: the condition variable to wait on
    mutex: the mutex for the critical section the current thread is in

Return value:
    Returns 0 on success; on failure, returns an error number

Wake up waiting threads

There are two functions that wake up waiting threads: pthread_cond_signal and pthread_cond_broadcast; see man 3:

Functions: pthread_cond_broadcast and pthread_cond_signal

Header: #include <pthread.h>

Prototypes:
        int pthread_cond_broadcast(pthread_cond_t *cond);
        int pthread_cond_signal(pthread_cond_t *cond);

Parameter:
    cond: wake up threads waiting on the condition variable cond

Return value:
    Returns 0 on success; on failure, returns an error number

The difference:

  • The pthread_cond_signal function is used to wake up the first thread in the waiting queue
  • The pthread_cond_broadcast function is used to wake up all threads in the waiting queue

The process of using a condition variable:

  1. When a thread needs to wait for a condition, it calls the condition variable's wait function, which releases the mutex while the thread waits
  2. When another thread makes the condition true, it signals the waiting thread
  3. After the waiting thread receives the signal, it reacquires the mutex and checks whether the condition holds; if so, it continues executing, otherwise it keeps waiting

Example: the ticket-grabbing scenario again.

The main thread creates 4 new threads and controls them: after the 4 new threads are created, they all wait under the condition variable, and a thread does not run until the main thread wakes one of the waiters.

#include <iostream>
#include <string>
#include <pthread.h>
#include <unistd.h>
using namespace std;

// tickets -- the shared resource
int tickets = 1000;

// global mutex -- every thread can see it
pthread_mutex_t mutex;
// global condition variable
pthread_cond_t cond;

void* getTicket(void* args)
{
    string username = static_cast<const char*>(args);
    while(1)
    {
        pthread_mutex_lock(&mutex); // lock
        pthread_cond_wait(&cond, &mutex); // the thread blocks here until it is woken
        if(tickets > 0)
        {
            // note: no sleep here this time
            cout << username << ": grabbing ticket "  << tickets-- << endl;
            pthread_mutex_unlock(&mutex); // unlock
        }
        else{
            pthread_mutex_unlock(&mutex); // unlock
            break;
        }
    }
    return nullptr;
}

int main()
{
    pthread_mutex_init(&mutex, nullptr); // initialize the mutex
    pthread_cond_init(&cond, nullptr);   // initialize the condition variable

    pthread_t tid1, tid2, tid3, tid4;
    pthread_create(&tid1, nullptr, getTicket, (void*)"thread 1");
    pthread_create(&tid2, nullptr, getTicket, (void*)"thread 2");
    pthread_create(&tid3, nullptr, getTicket, (void*)"thread 3");
    pthread_create(&tid4, nullptr, getTicket, (void*)"thread 4");

    while(1)
    {
        pthread_cond_signal(&cond); // every second, wake one waiting thread
        cout << "main thread wakeup one thread..." << endl;
        sleep(1);
    }
    // note: the loop above never exits, so in this demo the code below
    // is never reached

    pthread_join(tid1, nullptr);
    pthread_join(tid2, nullptr);
    pthread_join(tid3, nullptr);
    pthread_join(tid4, nullptr);

    pthread_mutex_destroy(&mutex); // done with it, destroy the mutex
    pthread_cond_destroy(&cond);   // destroy the condition variable
    return 0;
}

Compile and run

Observing the output, we find an obvious order in how the four threads are woken. The root cause: when the threads start, they all go to wait under the condition variable; each wake-up wakes the thread at the head of that wait queue; and after a thread finishes its print, it goes back to wait at the tail of the queue. Hence the rotation we observe.

If we want each wake-up to wake all of the threads waiting under the condition variable, change the pthread_cond_signal in the code to pthread_cond_broadcast.

Compile and run: each wake-up now wakes every thread waiting under the condition variable.

Why does the second parameter of pthread_cond_wait need to be a mutex?

  • When calling the pthread_cond_wait function, the mutex must already be locked (lock first, then wait on the condition variable). The mutex is passed to the function as a parameter, and inside the function the mutex is unlocked while the thread waits for the condition variable to be signaled
  • If no mutex were passed, there would be no way to guarantee mutually exclusive access to the shared resource around the wait, potentially causing data races, program errors, or deadlocks
  • Therefore, to keep operations between threads safe, the mutex must be passed when calling pthread_cond_wait. The canonical usage pattern is sketched below
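A minimal sketch of that canonical pattern, using the mutex and cond from the example above (condition_is_ready stands for whatever predicate the thread waits on; it is a placeholder, not a real API):

pthread_mutex_lock(&mutex);
while (!condition_is_ready)           // re-check in a loop: wake-ups can be
{                                     // spurious, or another thread may have
    pthread_cond_wait(&cond, &mutex); // consumed the condition first;
}                                     // pthread_cond_wait atomically releases
                                      // the mutex while sleeping and reacquires
                                      // it before returning
// ... operate on the shared resource under the lock ...
pthread_mutex_unlock(&mutex);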

Thread mutual exclusion and synchronization are now complete; the next article moves on to the producer-consumer model.

--------------------- END ----------------------

「 Author 」 Mr. Maple Leaf (枫叶先生)
「 Updated 」 2023.5.3
「 Statement 」 My learning is shallow, so omissions and even errors in what I write
              are hard to avoid; readers are kindly invited to point them out.
