Producer consumer model [Linux]

1. Concept

1.1 Introduction

Supermarkets, manufacturers, and customers make a good analogy. A manufacturer can be seen as a producer, which produces goods and ships them to supermarkets. A supermarket can be viewed as a buffer, which stores the goods produced by manufacturers. A customer can be seen as a consumer, who buys goods from the supermarket.

When supermarkets are well stocked, manufacturers don't need to ship more merchandise. However, when supermarkets run out of stock, manufacturers need to produce more goods and ship them to supermarkets. Likewise, when supermarkets have enough items, customers can buy them. However, when supermarkets run out of stock, customers need to wait for manufacturers to ship more products.

The basic idea of the producer-consumer model: the producer generates data and puts it into a buffer, while the consumer takes data out of the buffer and processes it. When the buffer is empty, the consumer must wait for the producer to generate new data; when the buffer is full, the producer must wait for the consumer to fetch data.

1.2 Concept

The producer-consumer model is a common multi-threaded design pattern used to solve the synchronization problem between producers and consumers. In this model, the producer thread generates data and puts it into a buffer, while the consumer thread takes data out of the buffer and processes it, thereby solving the strong-coupling problem between producers and consumers.

Since the producer and consumer threads share the buffer, a synchronization mechanism is required to ensure thread safety. Typically, a mutex can be used to protect a buffer from simultaneous access by multiple threads. Additionally, condition variables can be used to achieve synchronization between producers and consumers. When the buffer is empty, the consumer thread waits on the condition variable until the producer thread adds new data to the buffer and wakes it up. Likewise, when the buffer is full, the producer thread waits on the condition variable until the consumer thread fetches data from the buffer and wakes it up.

Why are the producer and the consumer strongly coupled when there is no buffer?

This is because without a buffer, the producer has to pass data directly to the consumer, and the consumer has to get the data directly from the producer.

In this way, a tight dependency relationship is formed between producers and consumers. The producer must wait for the consumer to be ready to receive data, and the consumer must wait for the producer to generate new data. This dependency leads to increased coupling between producers and consumers, making their interactions more complex.

On the contrary, if there is a buffer, then the producer and the consumer can be decoupled through the buffer. The producer only needs to put data into the buffer, and does not need to care when the consumer gets the data. Similarly, consumers only need to fetch data from the buffer, and do not need to care about when the producer generates new data. In this way, producers and consumers can operate independently, and the coupling between them will be reduced.

In reality, there may be more than one producer and consumer. In the computer, will there be a similar competitive relationship between producers and consumers?

Yes. The threads' operations must be made mutually exclusive and synchronized to keep the data safe.

In real life, two brands of ham may sit on the same shelf at the same time; in a computer, the analogous situation causes data corruption. The supermarket shelf corresponds to a resource shared by threads: producers compete for it, and so do consumers. For an extreme real-life example, when supplies run short, consumers compete fiercely with one another. Similarly, if production and consumption are not restricted, data can be lost: if the consumer takes data out while the producer is still in the middle of writing it, the data will be incomplete; conversely, the producer may keep producing while the consumer is still fetching.

This is what mutexes and condition variables are for:

  • Mutex: prevents race conditions caused by multiple threads accessing the shared data area at the same time, so threads access it one at a time.
  • Condition variables: keep the data safe by marking the two states of the buffer.
    • When the buffer is full, the producer thread should wait. It uses a condition variable to wait for the consumer to fetch data from the buffer.
    • When the consumer fetches data from the buffer, it uses a condition variable to notify the producer to continue producing.
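The bullet points above can be sketched as the canonical lock-plus-two-condition-variables pattern. This is a minimal illustration, not the article's implementation (the names `g_buf`, `g_notEmpty`, `g_notFull`, `produce`, and `consume` are invented for the example):

```cpp
#include <pthread.h>
#include <queue>
#include <cstddef>

std::queue<int> g_buf;                    // shared buffer
const std::size_t g_cap = 5;              // buffer capacity
pthread_mutex_t g_mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t g_notEmpty = PTHREAD_COND_INITIALIZER;
pthread_cond_t g_notFull  = PTHREAD_COND_INITIALIZER;

void produce(int v)
{
    pthread_mutex_lock(&g_mtx);
    while (g_buf.size() == g_cap)           // buffer full: producer waits
        pthread_cond_wait(&g_notFull, &g_mtx);
    g_buf.push(v);
    pthread_mutex_unlock(&g_mtx);
    pthread_cond_signal(&g_notEmpty);       // notify a consumer
}

int consume()
{
    pthread_mutex_lock(&g_mtx);
    while (g_buf.empty())                   // buffer empty: consumer waits
        pthread_cond_wait(&g_notEmpty, &g_mtx);
    int v = g_buf.front();
    g_buf.pop();
    pthread_mutex_unlock(&g_mtx);
    pthread_cond_signal(&g_notFull);        // notify the producer
    return v;
}
```

Each side checks its own state of the buffer under the lock and signals the other side after changing it.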

2. Features

We can use the "123 principle" to remember several characteristics of the producer-consumer model:

  • 1 space: producers and consumers share one storage space; producers put data into it, and consumers take data out of it.
  • 2 roles: producer and consumer.
  • 3 relationships
    • Between producers: mutual exclusion
    • Between Consumers: Mutual Exclusion
    • Producers and Consumers: Mutex and Synchronization

In the producer-consumer pattern, the producer and the consumer are roles played by threads: both are independent threads, so they can execute concurrently. The shared space is a buffer represented by some data structure, and the "goods" are data in the computer.

The operations of producers and consumers must be mutually exclusive: a piece of data has only two visible states, not yet produced and fully produced. Only then can the producer and the consumer execute concurrently: the producer need not wait for the consumer to finish consuming before producing more data, and the consumer need not wait for the producer to finish producing before consuming more.

The advantage is improved concurrency and performance: better throughput and response time for the program.

Which data structure is this buffer implemented by?

A queue. The buffer can be implemented as a fixed-size array treated as a queue, which follows the first-in-first-out (FIFO) principle: producers add data at the tail, and consumers take data from the head.

Of course, the buffer can also be implemented by other data structures, such as linked lists, stacks, etc. Which data structure to use depends on the specific application scenario. This article will use queues as an example.
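The FIFO behavior can be seen in one small example (the helper name is invented for illustration): the first element pushed is the first one popped.

```cpp
#include <queue>

// The first element pushed into the buffer is the first one taken out.
int frontAfterPushes()
{
    std::queue<int> buf;     // the buffer, here a std::queue
    buf.push(10);            // producer adds data...
    buf.push(20);
    buf.push(30);
    int first = buf.front(); // consumer takes from the front (FIFO)
    buf.pop();
    return first;            // 10: first in, first out
}
```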

Who knows best whether there are new products in the supermarket? – the producer

Who knows best whether there is room for more merchandise? – the consumer

The above two conditions can form a logical loop:

  1. When the consumer takes out the product, the consumer can wake up the producer to continue production;
  2. When the producer finishes production and puts the product in the supermarket, the consumer can be awakened to continue taking it out.

For the consumer, the "condition" used to wake the producer is that the buffer is not full; for the producer, the "condition" used to wake the consumer is that the buffer is not empty. These "conditions" are exactly what the condition variables mark.

Note: the process of production and consumption includes not only the producer storing data into the buffer and the consumer taking data out, but also where the producer's data comes from (network, disk, ...) and how the consumer uses it afterwards. Those parts are the more time-consuming ones.

3. Blocking Queue

3.1 Introduction

Blocking Queue is often used in the producer-consumer model. As a buffer between producers and consumers, producers add data to Blocking Queue, and consumers take data from Blocking Queue:

  • When the Blocking Queue is full, the producer thread will be blocked;
  • When the Blocking Queue is empty, the consumer thread will be blocked.

[Figure: producers and consumers exchanging data through a blocking queue]

Image source: https://math.hws.edu/eck/cs124/javanotes7/c12/producer-consumer.png

The main difference between Blocking Queue and ordinary queue is that it has blocking function. This is similar to pipes, where pipes may block under certain circumstances:

  • When there is no data to read in the pipeline, the operation of reading data from the pipeline will be blocked until new data is available;
  • When the pipe is full, the operation of writing data to the pipe will block until a free space is available in the pipe.

Blocking applies to both ends: reads block when the buffer is empty, and writes block when it is full.
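The pipe analogy can be observed directly on Linux. With `O_NONBLOCK` set, a read from an empty pipe returns -1 with `EAGAIN` instead of sleeping — exactly the point where a blocking read would block. This is a sketch with an invented helper name:

```cpp
#include <unistd.h>
#include <fcntl.h>
#include <cerrno>

// With O_NONBLOCK, reading an empty pipe fails with EAGAIN/EWOULDBLOCK,
// marking the spot where a blocking read would sleep.
bool emptyPipeWouldBlock()
{
    int fds[2];
    if (pipe(fds) != 0) return false;
    fcntl(fds[0], F_SETFL, O_NONBLOCK); // make the read end non-blocking
    char c;
    ssize_t n = read(fds[0], &c, 1);    // the pipe is empty
    bool wouldBlock = (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK));
    close(fds[0]);
    close(fds[1]);
    return wouldBlock;
}
```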

3.2 Simulation to realize Blocking Queue

For the convenience of implementation, there is only one producer and consumer in the following example.

basic framework

Thread creation and joining, along with the producer and consumer functions, will be implemented in the source file ProdCons.cc.

#include "BlockQueue.hpp"
#include <pthread.h>

void* productor(void* args)
{
	return nullptr;
}
void* consumer(void* args)
{
	return nullptr;
}
int main()
{
	pthread_t cons, prod;
	pthread_create(&cons, nullptr, consumer, nullptr);
	pthread_create(&prod, nullptr, productor, nullptr);

	pthread_join(cons, nullptr);
	pthread_join(prod, nullptr);

	return 0;
}

This framework will be expanded below. The blocking queue itself is implemented in BlockQueue.hpp.

blocking queue

STL is not thread-safe because it was not designed to support concurrent access by multiple threads. The implementation of STL containers and algorithms has no built-in mechanism to prevent race conditions caused by multiple threads accessing the same container or algorithm at the same time. The issue of thread safety needs to be implemented at the user level.

The design philosophy of the STL is to provide efficient, general-purpose, and reusable components, rather than providing built-in support for every possible usage scenario. This allows programmers to choose the most suitable synchronization mechanism according to specific application scenarios, instead of forcing programmers to use STL's built-in synchronization mechanism.

Here, we can use the queue container and the mutex in the pthread library to ensure thread safety.

At the same time, two condition variables will be used to represent the "conditions" for producers and consumers to wake up each other, namely:

  • The condition for the producer to wake the consumer: goods have been put in -> the buffer is not empty;
  • The condition for the consumer to wake the producer: goods have been taken out -> the buffer is not full.

This is the significance of the mutex. It ensures that only one thread can access the container at the same time, preventing both of them from operating data in the buffer at the same time, thereby avoiding race conditions.

To judge whether the buffer is empty or full, a counter must record the current size, and this too is implemented at the user level. Templates are used to generalize over the element type. Below is the skeleton of the BlockQueue class.

#pragma once

#include <iostream>
#include <queue>
#include <pthread.h>

using namespace std;

const int gDefaultCap = 5; // queue capacity

template<class T>
class BlockQueue
{
public:
	BlockQueue(int capacity = gDefaultCap)
	: _capacity(capacity)
	{}
	void push()
	{}
	void pop()
	{}
	~BlockQueue()
	{}
private:
	queue<T> _bq;			// blocking queue
	int _capacity;			// capacity
	pthread_mutex_t _mtx;	// mutex
	pthread_cond_t _empty;	// condition variable: queue empty
	pthread_cond_t _full;	// condition variable: queue full
};

Thread Function Framework

Back in ProdCons.cc, write the logic of the producer and consumer thread functions.

First of all, the last parameter of pthread_create passes an argument to the thread; its type is void*, so we can pass a BlockQueue* cast to void*, then cast it back to its real type inside the thread function to access the queue in that object. Constraining what the thread can see constrains its behavior, which is how the blocking queue controls the threads.

int main()
{
	BlockQueue<int>* bqueue = new BlockQueue<int>();

	pthread_t cons, prod;
	pthread_create(&cons, nullptr, consumer, bqueue);
	pthread_create(&prod, nullptr, productor, bqueue);

	pthread_join(cons, nullptr);
	pthread_join(prod, nullptr);

	delete bqueue;

	return 0;
}

It is worth noting that the fourth parameter of pthread_create will pass in an object of BlockQueue* type, which is equivalent to passing a data packet to the thread.

For blocking queues, the two most important interfaces are pop and push, which correspond to consumers and producers respectively. Now, you can write some simple logic for the producer and consumer thread functions: the producer simply writes data, and the consumer simply reads data.

#include <unistd.h> // for sleep

void* productor(void* args)
{
	BlockQueue<int>* bqueue = (BlockQueue<int>*)args;
	int a = 1;
	while(1)
	{
		bqueue->push(a);
		cout << "Produced: " << a << endl;
		a++;
	}
	return nullptr;
}
void* consumer(void* args)
{
	BlockQueue<int>* bqueue = (BlockQueue<int>*)args;
	while(1)
	{
		sleep(1);
		int a = -1;
		bqueue->pop(&a); // output parameter
		cout << "Consumed: " << a << endl;
	}
	return nullptr;
}

In order to observe the phenomenon conveniently, a print statement is added in the thread function.

push and pop framework

Before filling in push and pop, the other member functions of the BlockQueue class need to be completed:

template<class T>
class BlockQueue
{
public:
	BlockQueue(int capacity = gDefaultCap)
	: _capacity(capacity)
	{
		pthread_mutex_init(&_mtx, nullptr);
		pthread_cond_init(&_empty, nullptr);
		pthread_cond_init(&_full, nullptr);
	}
	void push(const T& in)
	{}
	void pop(T* out)
	{}
	~BlockQueue()
	{
		pthread_mutex_destroy(&_mtx);
		pthread_cond_destroy(&_empty);
		pthread_cond_destroy(&_full);
	}
private:
	queue<T> _bq;			// blocking queue
	int _capacity;			// capacity
	pthread_mutex_t _mtx;	// mutex
	pthread_cond_t _empty;	// condition variable: queue empty
	pthread_cond_t _full;	// condition variable: queue full
};

The initialization and destruction of the mutex and condition variables are placed in the constructor and destructor of BlockQueue. This is a common idiom: RAII (Resource Acquisition Is Initialization). Simply put, resource acquisition and release are tied to the lifetime of an object (C++11 smart pointers use the same idea).

It's worth noting that mutexes and condition variables come into play here:

  • Mutex: lock before and unlock after push and pop, so the two operations cannot interleave and the data stays safe.
  • Condition variable: before push and pop, check whether the critical resource satisfies the access condition; only then may it be accessed.

The logic of push: when the queue is full, the condition is not satisfied, so the producer thread waits on the condition variable (and symmetrically for pop when the queue is empty).

Why is the second parameter of pthread_cond_wait() the address of the lock?

Because the check of the critical resource must itself happen inside the critical section (the code between lock and unlock). If the wait call blocked the thread without releasing the lock, no other thread could ever enter the critical section: when the consumer later calls pop, which also performs a check, it could never acquire the lock.

Passing the lock's address as the second parameter solves this: when the thread blocks inside pthread_cond_wait, the function releases the lock automatically, so a blocked thread never holds it.
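That pthread_cond_wait releases the mutex while blocked can be demonstrated directly: the main thread can lock the same mutex even while the waiter is asleep inside the wait. This is a standalone sketch with invented names (`waiter`, `wakeWaiter`), not the article's code:

```cpp
#include <pthread.h>

pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv  = PTHREAD_COND_INITIALIZER;
bool ready = false;

void* waiter(void*)
{
    pthread_mutex_lock(&mtx);
    while (!ready)
        pthread_cond_wait(&cv, &mtx); // atomically releases mtx and sleeps
    pthread_mutex_unlock(&mtx);
    return nullptr;
}

bool wakeWaiter()
{
    pthread_t t;
    pthread_create(&t, nullptr, waiter, nullptr);
    // This lock succeeds even if the waiter is already blocked inside
    // pthread_cond_wait, showing the mutex was released.
    pthread_mutex_lock(&mtx);
    ready = true;
    pthread_mutex_unlock(&mtx);
    pthread_cond_signal(&cv);
    pthread_join(t, nullptr);
    return true;
}
```

If pthread_cond_wait did not release the lock, the lock in `wakeWaiter` would deadlock whenever the waiter blocked first.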

When the producer thread is later woken up because the condition is satisfied, where does it resume?

At the very line where it blocked: pthread_cond_wait returns with the lock re-acquired, and execution continues from there.

The following is the logic of pop and push. They use the push() and pop() interfaces of the queue container, and the logic for judging whether the queue is empty or full is wrapped with a function, which also encapsulates the size() interface.

// duplicated members omitted
const int gDefaultCap = 5; 	// queue capacity

template<class T>
class BlockQueue
{
private:
    bool isQueueEmpty()
    {
        return _bq.size() == 0;
    }
    bool isQueueFull()
    {
        return _bq.size() == (size_t)_capacity;
    }
public:
	void push(const T& in)
	{
		pthread_mutex_lock(&_mtx);				// lock
		if(isQueueFull()) 
			pthread_cond_wait(&_full, &_mtx);	// queue full: producer waits
		_bq.push(in);							// queue not full: produce
		pthread_mutex_unlock(&_mtx);			// unlock
		pthread_cond_signal(&_empty);			// wake the consumer
	}
	void pop(T* out)
	{
		pthread_mutex_lock(&_mtx);				// lock
		if(isQueueEmpty())
			pthread_cond_wait(&_empty, &_mtx);	// queue empty: consumer waits
		*out = _bq.front();						// fill the output parameter
		_bq.pop();								// queue not empty: consume
		pthread_mutex_unlock(&_mtx);			// unlock
		pthread_cond_signal(&_full);			// wake the producer
	}
private:
	queue<T> _bq;			// blocking queue
	int _capacity;			// capacity
	pthread_mutex_t _mtx;	// mutex
	pthread_cond_t _empty;	// condition variable: queue empty
	pthread_cond_t _full;	// condition variable: queue full
};

To make the printed output tidier, the consumer thread function sleeps for 1 second before consuming. Note: to obtain the value of the element at the head of the queue (for printing), the encapsulated pop interface takes an output parameter out, which corresponds to the output parameter in the consumer thread function.

void* consumer(void* args)
{
	BlockQueue<int>* bqueue = (BlockQueue<int>*)args;
	while(1)
	{
		sleep(1);
		int a = -1;
		bqueue->pop(&a);
		cout << "Consumed: " << a << endl;
	}
	return nullptr;
}

test 1

[Screenshot: the producer instantly produces 5 items, then production and consumption alternate]

This is the simplest implementation of the producer-consumer model. While the internal queue has not reached its capacity (5), the queue is not full; since push is not throttled by sleep, the producer thread instantly produces 5 items (in fact, all within one run of the producer thread). Once the queue is full, the if(isQueueFull()) branch in push fires and the producer blocks. The consumer thread then runs; the queue is not empty, so it pops directly, leaving 4 elements. The next time the producer runs, it pushes one more item, and the cycle repeats.

This is why the figure shows 5 items produced first, and then production and consumption alternating one by one. The relative speed of production and consumption can be controlled with sleep:

[Screenshot: output with production and consumption paced by sleep]

But this is inefficient: only one item is produced or taken at a time. If this were package delivery, the delivery cost per item would be too high. When the queue capacity is large, one strategy is to let the consumer start consuming only once the queue is more than half full:

void push(const T& in)
{
	// ...
	// wake the consumer only once the queue is at least half full
	if(_bq.size() >= _capacity / 2)
		pthread_cond_signal(&_empty);	// wake the consumer
}

[Screenshot: the consumer starts consuming only after several items have accumulated]

From this result, this strategy successfully controls the behavior of production and consumption.

In the logic of critical resource operations (push and pop), is there any limit to the sequence of unlocking and waking up?

After the work on the critical resource is complete, the order is not strictly mandated, but normally you should unlock first and then wake the waiting thread. This avoids the woken thread immediately blocking again on the still-held lock.

Unlocking before waking avoids wasted CPU. If the waiting thread is signaled first, it may wake, immediately try to acquire the lock, find it still held, and block again; that costs an extra context switch. (Signaling while holding the lock is still correct — the woken thread simply waits for the lock — it is just potentially less efficient.) Therefore, unlock first, then wake the waiting thread.

What if you wake up a thread before unlocking it, and the lock has not been released yet?

Before being signaled, the thread is blocked waiting on the condition variable. If signaled before the unlock, it wakes and immediately tries to acquire the lock, so its wait on the condition turns into a wait on the lock; as soon as the lock is released it can proceed. Until then it blocks again, wasting a context switch — which is the reason for unlocking first and then waking.

If there are multiple threads waiting for the unreleased lock, and only one thread can acquire the lock once the lock is released, what should the other threads do?

They keep waiting. The remaining threads compete for the same lock, and the operating system's scheduler decides which one acquires it. Threads that fail keep waiting until they can acquire the lock. Over time every thread gets its turn; as long as lock acquisition and release follow the rules, the data stays safe, and it does not matter which particular thread produces or consumes a given item.

Condition variable usage specification

In the code above, the condition variable marks the action of "checking whether the critical resource is ready", and that check itself happens inside the critical section. Since the condition variable is usually a shared (often global) object, disciplined use of the condition variable — always waiting on it under the lock — disciplines access to the critical resource.

Will the pthread_cond_wait() function fail?

Any function can fail. If pthread_cond_wait fails, the thread was not actually blocked, and execution falls through to the statements after the call, which may then access an empty or full queue out of bounds.

Is it possible that the condition variable is not representing the actual condition?

A thread may be woken even though its condition is not satisfied — a "spurious wakeup". After such a wakeup, the code must not barrel ahead; it must re-check the condition. The fix is to change if to while, so the thread re-tests the predicate every time it wakes and proceeds only when the resource is truly ready:

while(isQueueFull()) 
    pthread_cond_wait(&_full, &_mtx);	// queue full: producer waits

while(isQueueEmpty())
    pthread_cond_wait(&_empty, &_mtx);	// queue empty: consumer waits

Does the order in which producer and consumer threads are scheduled after they are initially created affect the outcome?

No. The condition variables constrain their behavior: whichever thread runs first and finds its condition unsatisfied simply blocks. What matters is writing the checks and waits according to the rules and handling concurrency carefully.

about efficiency

From the above code, we can know that the movement of data in the producer->blocking queue->consumer may be copied, and the copy will reduce the efficiency. So what is the significance of this mode?

Looking only at data moving through the two ends of the queue, copying does cost something. But the producer-consumer model is about decoupling. In the larger picture, this queue is only a small segment of the data's journey — a small wooden bridge. On the producer side, the data may come from the network or disk, which takes far longer than the queue transfer; on the consumer side, IO and processing also take time.

Decoupling is the real source of the efficiency gain. A producer writing data does not have to stop to serve the consumer (like a supermarket clerk who is stocking shelves being asked to hand over the item he just put up), and the consumer likewise works undisturbed. So the blocking queue improves overall efficiency not by making its own small segment of data transfer especially fast, but by decoupling producers and consumers so that each can focus on its own work.

Supplement:
In a blocking queue, whether the data is copied depends on the implementation. In some implementations the data is copied into the buffer; in others it is moved (move semantics) to avoid copying. Copying adds overhead that can affect efficiency; when the payload is large, avoiding the copy brings significant performance gains.
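The move-semantics point can be sketched in a few lines (the helper name is invented for the example): moving a std::string through a std::queue transfers its internal buffer instead of duplicating it.

```cpp
#include <queue>
#include <string>
#include <utility>

// Move a payload into and out of the queue instead of copying it.
std::string passThroughQueue(std::string data)
{
    std::queue<std::string> buf;
    buf.push(std::move(data));                // move into the buffer
    std::string out = std::move(buf.front()); // move out of the buffer
    buf.pop();
    return out;
}
```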

Supplementary data processing

Above, we built the small wooden bridge itself. Now let's briefly simulate consumers processing data. There is nothing fancy — sleep stands in for real work; the point is to understand the bridge's role in the whole.

These can be defined in an additional file, task.hpp.

#pragma once

#include <iostream>
#include <functional>
using namespace std;

typedef function<int(int, int)> func_t;
class Task
{
public:
	Task()
	{}
	Task(int x, int y, func_t func)
	: _x(x)
	, _y(y)
	, _func(func)
	{}
	int operator()()
	{
		return _func(_x, _y);
	}
public:
	int _x;
	int _y;
	func_t _func;
};

In the Task class, a default constructor is provided and operator() is overloaded. The former silences a compiler warning (strictly, the members should be private, but writing get/set interfaces here would be redundant); the latter lets the consumer and producer thread functions invoke the task directly.

Among them, func_t is a function object type.
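As a quick illustration (helper names invented for the example), a func_t can hold anything callable with the signature int(int, int) — a plain function, a lambda, or a functor:

```cpp
#include <functional>

// func_t accepts any callable taking (int, int) and returning int.
typedef std::function<int(int, int)> func_t;

int apply(func_t f, int x, int y)
{
    return f(x, y); // invoke whatever callable is stored
}

int demoLambda()
{
    // A lambda works just as well as a named function.
    return apply([](int a, int b) { return a * b; }, 3, 4);
}
```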

Simple addition and subtraction functions serve as the threads' data-processing tasks; they are stored in an array:

#define SOL_NUM 2

typedef function<int(int, int)> func_t;

int Add(int x, int y)
{
	return x + y;
}
int Sub(int x, int y)
{
	return x - y;
}

func_t sol[SOL_NUM] = { Add, Sub };

Stored in the array are the addresses of the two functions.

The thread functions must change accordingly: the BlockQueue element type was int in the original example, and now becomes the custom Task class. To avoid typing input by hand, random values supply the two integers x and y. A global variable opt, whose value is 0 or 1, holds the array index of the task the consumer should execute.

Different operations are chosen by subscript, which the print statements make visible.

int opt = -1;
void* productor(void* args)
{
	BlockQueue<Task>* bqueue = (BlockQueue<Task>*)args;
	while(1)
	{
		opt = rand() % 2;
		int x = rand() % 10 + 1;
		usleep(rand() % 1000);
		int y = rand() % 5 + 1;
		Task t(x, y, sol[opt]);	// pick the task by subscript
		bqueue->push(t);
		if(opt) cout << "Producer thread: " << t._x << " - " << t._y << " = _?_" << endl;
		else 	cout << "Producer thread: " << t._x << " + " << t._y << " = _?_" << endl;
	}
	return nullptr;
}
void* consumer(void* args)
{
	BlockQueue<Task>* bqueue = (BlockQueue<Task>*)args;
	while(1)
	{
		sleep(1);
		Task t;			 // fetch a task
		bqueue->pop(&t); // output parameter
		if(opt) cout << "Consumer thread: " << t._x << " - " << t._y << " = " << sol[opt](t._x, t._y) << endl;
		else	cout << "Consumer thread: " << t._x << " + " << t._y << " = " << sol[opt](t._x, t._y) << endl;
	}
	return nullptr;
}

[Screenshot: the producer prints each expression, the consumer prints the computed result]

Note that when calling through the function object, the arguments must be passed explicitly: sol[opt](t._x, t._y).

This example has only one producer and one consumer. Does the code support multiple producers and consumers?

  • Yes. The mutex guarantees correctness with more threads, because all threads must compete for the same lock. With one producer and multiple consumers, this is essentially a thread pool.

Mutex Design

With many threads, calling the lock and unlock interfaces by hand everywhere is inconvenient, and an unlock is sometimes forgotten. So RAII can also be used to bind the acquisition and release of the mutex to an object's lifetime.

Below is the Mutex class. It stores the mutex's address as a member and wraps the lock and unlock calls as methods:

class Mutex
{
public:
    Mutex(pthread_mutex_t* mtx)
    : _pmtx(mtx)
    {}
    void lock() 
    {
        pthread_mutex_lock(_pmtx);
    }
    void unlock()
    {
        pthread_mutex_unlock(_pmtx);
    }
    ~Mutex()
    {}
private:
    pthread_mutex_t* _pmtx;
};

The lock can be passed around by pointer or by reference.

Then add another layer that performs the locking and unlocking:

class lockGuard
{
public:
    lockGuard(pthread_mutex_t* mtx, string msg)
    : _mtx(mtx)
    , _msg(msg)
    {
        _mtx.lock();
        cout << _msg << "---locked---" << endl;
    }
    ~lockGuard()
    {
        _mtx.unlock();
        cout << _msg << "---unlocked---" << endl;
    }
private:
    Mutex _mtx;
    string _msg;
};

To make the behavior observable, a string member carries a message passed in from the thread functions. push and pop then become more concise, and lock acquisition and release are guaranteed to follow the rules.

[Screenshot: lock and unlock messages always appear in pairs]

The results show that locking and unlocking always appear in pairs: the constructor locks when the guard object is defined, and when the code block is exited the destructor runs and releases the lock.
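The guard idea can be exercised end to end. This usage sketch reproduces the two classes without the debug prints (and, as an assumption for the example, uses a plain shared counter rather than the article's queue): two threads each increment the counter 10,000 times, every increment protected by a guard whose constructor locks and whose destructor unlocks.

```cpp
#include <pthread.h>

// Minimal, print-free versions of the Mutex and lockGuard classes above.
class Mutex
{
public:
    Mutex(pthread_mutex_t* mtx) : _pmtx(mtx) {}
    void lock()   { pthread_mutex_lock(_pmtx); }
    void unlock() { pthread_mutex_unlock(_pmtx); }
private:
    pthread_mutex_t* _pmtx;
};

class lockGuard
{
public:
    lockGuard(pthread_mutex_t* mtx) : _mtx(mtx) { _mtx.lock(); } // lock on entry
    ~lockGuard() { _mtx.unlock(); }                              // unlock on exit
private:
    Mutex _mtx;
};

int g_count = 0;
pthread_mutex_t g_mtx = PTHREAD_MUTEX_INITIALIZER;

void* worker(void*)
{
    for (int i = 0; i < 10000; ++i)
    {
        lockGuard guard(&g_mtx); // critical section begins
        ++g_count;
    }                            // guard destroyed: lock released
    return nullptr;
}

int runWorkers()
{
    pthread_t a, b;
    pthread_create(&a, nullptr, worker, nullptr);
    pthread_create(&b, nullptr, worker, nullptr);
    pthread_join(a, nullptr);
    pthread_join(b, nullptr);
    return g_count; // 20000 when the guard serializes the increments
}
```

Without the guard, the two unsynchronized increments would race and the final count could fall short.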


Origin blog.csdn.net/m0_63312733/article/details/130191224