C++ data interaction between multiple threads

Organized and adapted from the blog https://blog.csdn.net/hai008007/article/details/80246437.

Data interaction between threads in the same process is inevitable. Queues and shared data are the common ways to exchange data between threads. An encapsulated queue is relatively hard to misuse, while shared data is the most basic and the most error-prone approach, because it leads to data races: more than one thread tries to use the same resource at the same time, for example reading and writing the same block of memory, as in the following example:

#include <iostream>
#include <thread>
 
using namespace std;
 
#define COUNT 10000
 
void inc(int *p){
    for(int i = 0; i < COUNT; i++){
        (*p)++;
    }
}

int main()
{
    int a = 0;

    thread ta(inc, &a);
    thread tb(inc, &a);

    ta.join();
    tb.join();

    cout << " a = " << a << endl;
    return 0;
}

The above example is a simple data exchange. The two threads read and write the memory at address &a at the same time. On the surface, after both threads finish, the final value of a should be COUNT * 2, but in practice it usually is not, because the increment is not atomic: a thread can be interrupted between reading and writing the value. To solve this problem, for simple fundamental types such as characters, integers and pointers, C++ provides the class template std::atomic; for complex objects it provides the usual locking mechanisms, such as the mutex class std::mutex, the scoped lock std::lock_guard, the unique lock std::unique_lock, the condition variable std::condition_variable, and so on.

std::atomic

For threads, an atomic type is "resource-like" data: multiple threads can only ever operate on a single copy of it.
Therefore, in C++11, an atomic type can only be constructed from a value of its template parameter type.
The standard does not allow atomic types to be copy-constructed, move-constructed or assigned with operator=; in fact, these operations are deleted by default in the atomic class template.
However, a variable of the template parameter type T can be constructed from an atomic variable, because atomic<T> always defines a conversion function to T. When needed, the compiler implicitly converts the atomic type to its corresponding template parameter type.
The C++11 header <atomic> also defines atomic typedefs corresponding to the built-in types:

atomic typedef      built-in type
atomic_bool bool
atomic_char char
atomic_schar signed char
atomic_uchar unsigned char
atomic_int int
atomic_uint unsigned int
atomic_short short
atomic_ushort unsigned short
atomic_long long
atomic_ulong unsigned long
atomic_llong long long
atomic_ullong unsigned long long
atomic_char16_t char16_t
atomic_char32_t char32_t
atomic_wchar_t wchar_t

Here is the first example rewritten with std::atomic:

#include <iostream>
#include <thread>
#include <atomic>
 
using namespace std;
 
#define COUNT 10000
 
void inc(atomic<int> *p){
    for(int i = 0; i < COUNT; i++){
        (*p)++;
    }
}

int main()
{
    atomic<int> a{0};

    thread ta(inc, &a);
    thread tb(inc, &a);

    ta.join();
    tb.join();

    cout << " a = " << a << endl;
    return 0;
}

std::lock_guard

Let's take a small example first:

mutex m;
m.lock();
sharedVariable= getVar();
m.unlock();

In this code, the mutex m ensures that access to the critical section sharedVariable = getVar(); is serialized.
Serialized means: in this particular case, each thread gets access to the critical section in turn.
The code is simple, but it is prone to trouble: if the critical section throws an exception, or the programmer simply forgets to unlock the mutex, the mutex is never released and every other thread waiting on it blocks forever.

Using std::lock_guard, we can do it more elegantly:

{
  std::mutex m;
  std::lock_guard<std::mutex> lockGuard(m);
  sharedVariable = getVar();
}

Simple. But what are the opening brace { and closing brace } for?
They ensure that the lifetime of the std::lock_guard is limited to this scope.
In other words, its lifetime ends when control leaves the critical section.
At that point the destructor of std::lock_guard is called and, yes, the mutex is released. The process is fully automatic. Moreover, the mutex is also released if getVar() throws an exception in sharedVariable = getVar(). Of course, a function body or loop body limits the lifetime of an object in the same way.

#include <iostream>
#include <thread>
#include <mutex>
 
using namespace std;
 
#define COUNT 10000

static mutex g_mutex;
 
void inc(int *p){
    for(int i = 0; i < COUNT; i++){
        lock_guard<mutex> lck(g_mutex);
        (*p)++;
    }
}

int main()
{
    int a{0};

    thread ta(inc, &a);
    thread tb(inc, &a);

    ta.join();
    tb.join();

    cout << " a = " << a << endl;
    return 0;
}

In addition, std::unique_lock can also be used.
unique_lock is a class template. In practice, lock_guard is generally recommended; lock_guard replaces mutex's explicit lock() and unlock() calls. unique_lock is much more flexible than lock_guard, but slightly less efficient and uses a little more memory. The details of unique_lock are explained separately, so they are not repeated here.

std::mutex

std::mutex is the most basic mutex in C++11. A std::mutex object provides exclusive ownership and does not support recursive locking, while a std::recursive_mutex object can be locked recursively by the same thread.

#include <iostream>
#include <thread>
#include <mutex>
 
using namespace std;
 
#define COUNT 10000

static mutex g_mutex;
 
void inc(int *p){
    for(int i = 0; i < COUNT; i++){
        g_mutex.lock();
        (*p)++;
        g_mutex.unlock();
    }
}

int main()
{
    int a{0};

    thread ta(inc, &a);
    thread tb(inc, &a);

    ta.join();
    tb.join();

    cout << " a = " << a << endl;
    return 0;
}

std::condition_variable

For event notification between threads, C++11 provides the condition variable class condition_variable (which can be regarded as an encapsulation of pthread_cond_t). A condition variable lets one thread wait for notifications from other threads (wait, wait_for, wait_until) and lets it send notifications to other threads (notify_one, notify_all). A condition variable must be used together with a lock. Because waiting internally unlocks and re-locks the mutex, the lock must be one that can be manually unlocked and locked while waiting, such as unique_lock; lock_guard cannot be used. An example:

#include <thread>
#include <iostream>
#include <mutex>
#include <condition_variable>

#define THREAD_COUNT 10

using namespace std;
mutex m;
condition_variable cv;

int main(void){
    thread** t = new thread*[THREAD_COUNT];
    int i;
    for(i = 0; i < THREAD_COUNT; i++){
        t[i] = new thread( [](int index){
            unique_lock<mutex> lck(m);
            cv.wait_for(lck, chrono::hours(1000));
            cout << index << endl;
        }, i );
        this_thread::sleep_for( chrono::milliseconds(50) );
    }

    for(i = 0; i < THREAD_COUNT; i++){
        lock_guard<mutex> _(m);
        cv.notify_one();
    }

    for(i = 0; i < THREAD_COUNT; i++){
        t[i]->join();
        delete t[i];
    }
    delete[] t;

    return 0;
}

Compiling and running the program shows that condition variables give no ordering guarantee: the thread that calls wait first is not necessarily the one awakened first.

std::promise/future

promise/future can be used for simple data interaction between threads without worrying about locks. Thread A stores data in a promise variable, and another thread B can obtain the value through the promise's get_future(). If thread A has not yet assigned a value to the promise, thread B can wait for the assignment:

#include <thread>
#include <iostream>
#include <future>

using namespace std;

promise<string> val;

int main(void){
    thread ta([](){
        future<string> fu = val.get_future();
        cout << "waiting promise->future" << endl;
        cout << fu.get() << endl;
    });

    thread tb([](){
        this_thread::sleep_for( chrono::milliseconds(5000) );
        val.set_value("promise is set");
    });

    ta.join();
    tb.join();

    return 0;
}

A future variable can only have get() called on it once. If get() needs to be called multiple times, use shared_future. Exceptions can also be passed between threads through promise/future.

std::packaged_task

Combining a callable object with a promise gives packaged_task, which simplifies the operation further:

#include <thread>
#include <iostream>
#include <mutex>
#include <future>

using namespace std;

static mutex g_mutex;

int main(void){
    auto run = [=](int index){
        {
            lock_guard<mutex> lck(g_mutex);
            cout << "tasklet " << index << endl;
        }
        this_thread::sleep_for( chrono::seconds(5) );
        return index * 1000;
    };

    packaged_task<int(int)> pt1(run);
    packaged_task<int(int)> pt2(run);
    thread t1( [&](){ pt1(2); } );
    thread t2( [&](){ pt2(3); } );

    int f1 = pt1.get_future().get();
    int f2 = pt2.get_future().get();
    cout << "task result=" << f1 << endl;
    cout << "task result=" << f2 << endl;

    t1.join();
    t2.join();

    return 0;
}

std::async

You can also combine a packaged_task with a thread, which is what the async() function does. async() starts executing code and returns a future object that holds the code's return value. We do not need to create and destroy threads explicitly; the C++11 library implementation decides when to create and destroy threads and how many to create. An example:

#include <thread>
#include <iostream>
#include <mutex>
#include <future>
#include <vector>

#define COUNT 1000000

using namespace std;

static long do_sum(vector<long> *arr, size_t start, size_t count){
    static mutex m;
    long sum = 0;

    for(size_t i = 0; i < count; i++){
        sum += (*arr)[start + i];
    }

    {
        lock_guard<mutex> lck(m);
        cout << "thread " << this_thread::get_id() << ", count=" << count
            << ", start=" << start << ", sum=" << sum << endl;
    }
    return sum;
}

int main(void){
    vector<long> data(COUNT);
    for(size_t i = 0; i < COUNT; i++){
        data[i] = random() & 0xff;
    }

    vector< future<long> > result;

    size_t ptc = thread::hardware_concurrency() * 2;
    size_t each = COUNT / ptc;
    for(size_t batch = 0; batch < ptc; batch++) {
        size_t start = batch * each;
        // the last batch also takes the remainder left over by COUNT / ptc
        size_t batch_each = (batch == ptc - 1) ? COUNT - start : each;
        result.push_back(async(do_sum, &data, start, batch_each));
    }

    long total = 0;
    for(size_t batch = 0; batch < ptc; batch++) {
        total += result[batch].get();
    }
    cout << "total=" << total << endl;

    return 0;
}

In summary, these are several different approaches to multithreaded data interaction, and the examples used here are deliberately very simple.
The details of each approach are worth studying further.

Origin blog.csdn.net/qq_24649627/article/details/112557135