Linux Multithreading (2): C++ Multithreading Programming Foundations

1. About C++ multithreading

Process: the basic unit by which the operating system allocates and schedules resources such as system memory and CPU time slices; it provides the running environment for an application.

Thread: the smallest unit of execution that the operating system/CPU can schedule. A thread is contained within a process, and a process contains one or more threads.

Multithreading: a means of achieving concurrency/parallelism, i.e., multiple threads executing at the same time. Roughly speaking, a process can be seen as a complete solution to one task; multithreading splits that task into multiple sub-steps and then runs those sub-steps simultaneously.

C++ multithreading: multiple functions implement different pieces of work and execute at the same time. (Threads may be subject to some ordering constraints, but in general they can be regarded as executing simultaneously.)

2. C++ Multithreading Basics

2.1 Create thread

Include the header file #include&lt;thread&gt; (C++11), which defines the thread class. Creating a thread means instantiating an object of this class. The constructor takes the function to run as its argument: thread th1(proc1). If the function itself takes parameters, list them after the function name when instantiating the object: thread th1(proc1, a, b).

Thread blocking (synchronization) methods:

  • join()
  • detach()

th1.join(): the thread that executes this statement (e.g., the main thread, when the call is written in main()) blocks until the specified thread th1 finishes, then resumes execution. In other words, the current thread is suspended, and continues only after the specified thread completes.

th1.detach(): the thread runs independently in the background; the calling thread does not wait for it and can no longer join it.

The purpose of blocking on a thread is to control the execution order of the threads.

#include&lt;iostream&gt;
#include&lt;thread&gt;
using namespace std;
void proc(int a)
{
    cout << "I am the child thread; the argument passed in is " << a << endl;
    cout << "Child thread id shown inside the child thread: " << this_thread::get_id() << endl;
}
int main()
{
    cout << "I am the main thread" << endl;
    int a = 9;
    thread th2(proc, a);
    cout << "Child thread id shown in the main thread: " << th2.get_id() << endl;
    th2.join();
    return 0;
}

2.2 Mutex usage

Suppose your office has one printer (shared data a). You need to use it (thread 1 operates on data a), and your colleague Lao Wang also needs to use it (thread 2 also operates on data a), but only one person can use the printer at a time. The rule is: whoever wants to use the printer must first apply to the leader for the single permit (lock), and must return the permit (unlock) after finishing. While someone holds the permit, everyone else must wait until it is returned before applying (blocking: once thread 1 locks the mutex, other threads cannot lock it, and can lock it only after thread 1 unlocks). The permit is the mutex. The mutex guarantees that the process of using the printer is not interrupted.

The program instantiates a mutex object m. When a thread calls the member function m.lock(), one of two things happens:

  • If the mutex is currently unlocked, the calling thread locks the mutex and holds the lock until unlock() is called.
  • If the mutex is currently locked, the calling thread is blocked until the mutex is unlocked.

Using a mutex first requires the header

#include&lt;mutex&gt;

lock() and unlock():

#include&lt;iostream&gt;
#include&lt;thread&gt;
#include&lt;mutex&gt;
using namespace std;
mutex m; // instantiate the mutex object m (not an ordinary variable definition)
void proc1(int a)
{
    m.lock();
    cout << "proc1 is modifying a" << endl;
    cout << "original a: " << a << endl;
    cout << "a is now: " << a + 2 << endl;
    m.unlock();
}
 
void proc2(int a)
{
    m.lock();
    cout << "proc2 is modifying a" << endl;
    cout << "original a: " << a << endl;
    cout << "a is now: " << a + 1 << endl;
    m.unlock();
}
int main()
{
    int a = 0;
    thread th1(proc1, a);
    thread th2(proc2, a);
    th1.join();
    th2.join();
    return 0;
}

Using lock_guard or unique_lock avoids the problem of forgetting to unlock.

  • lock_guard

The principle: declare a local lock_guard object; its constructor locks the mutex and its destructor unlocks it. The net effect: locked on creation, automatically unlocked when the object goes out of scope. lock_guard can therefore replace explicit lock() and unlock() calls.
The code between locking and unlocking a mutex is called a critical section (code that requires mutually exclusive access to a shared resource). A critical section should be kept as small as possible, i.e., the mutex should be unlocked as soon as possible after it is locked. By introducing a {} block to limit the lock_guard's scope, the mutex m can be unlocked at the appropriate place:

#include&lt;iostream&gt;
#include&lt;thread&gt;
#include&lt;mutex&gt;
using namespace std;
mutex m; // instantiate the mutex object m (not an ordinary variable definition)
void proc1(int a)
{
    lock_guard<mutex> g1(m); // replaces m.lock(); with one argument (the mutex), the lock_guard constructor locks m
    cout << "proc1 is modifying a" << endl;
    cout << "original a: " << a << endl;
    cout << "a is now: " << a + 2 << endl;
} // no m.unlock() needed: g1 goes out of scope, its destructor runs, and m is unlocked
 
void proc2(int a)
{
    {
        lock_guard<mutex> g2(m);
        cout << "proc2 is modifying a" << endl;
        cout << "original a: " << a << endl;
        cout << "a is now: " << a + 1 << endl;
    } // the {} block limits the scope so that m is unlocked at the right place
    cout << "outside the scope: line 3" << endl;
    cout << "outside the scope: line 4" << endl;
    cout << "outside the scope: line 5" << endl;
}
int main()
{
    int a = 0;
    thread th1(proc1, a); // renamed from proc1/proc2: a thread object must not shadow the function name, or the code will not compile
    thread th2(proc2, a);
    th1.join();
    th2.join();
    return 0;
}

lock_guard can also take a second argument. When that argument is the adopt_lock tag, it means the mutex is already locked, so the constructor does not lock it again; in this case the mutex must be locked manually beforehand:

#include&lt;iostream&gt;
#include&lt;thread&gt;
#include&lt;mutex&gt;
using namespace std;
mutex m; // instantiate the mutex object m
void proc1(int a)
{
    m.lock(); // lock manually first
    lock_guard<mutex> g1(m, adopt_lock); // adopt the already-held lock; the constructor does not lock again
    cout << "proc1 is modifying a" << endl;
    cout << "original a: " << a << endl;
    cout << "a is now: " << a + 2 << endl;
} // unlocked automatically
 
void proc2(int a)
{
    lock_guard<mutex> g2(m); // locks automatically
    cout << "proc2 is modifying a" << endl;
    cout << "original a: " << a << endl;
    cout << "a is now: " << a + 1 << endl;
} // unlocked automatically
int main()
{
    int a = 0;
    thread th1(proc1, a); // renamed so the thread objects do not shadow the functions
    thread th2(proc2, a);
    th1.join();
    th2.join();
    return 0;
}

unique_lock:

unique_lock is similar to lock_guard but more flexible: it supports everything lock_guard does, plus more.
With lock_guard you cannot call lock() and unlock() manually; with unique_lock you can.
Besides adopt_lock, the second argument of unique_lock can also be try_to_lock or defer_lock:

  • try_to_lock: try to lock the mutex (which must not already be locked by this thread) using the mutex's try-lock; if locking fails, the constructor returns immediately instead of blocking.
  • defer_lock: construct the unique_lock without locking the mutex.

#include&lt;iostream&gt;
#include&lt;thread&gt;
#include&lt;mutex&gt;
using namespace std;
mutex m;
void proc1(int a)
{
    unique_lock<mutex> g1(m, defer_lock); // construct without locking the mutex
    cout << "blah blah blah" << endl;
    g1.lock(); // lock manually; note that this is g1.lock(), NOT m.lock()
    cout << "proc1 is modifying a" << endl;
    cout << "original a: " << a << endl;
    cout << "a is now: " << a + 2 << endl;
    g1.unlock(); // unlock temporarily
    cout << "blah blah blah" << endl;
    g1.lock();
    cout << "blah blah blah" << endl;
} // unlocked automatically
 
void proc2(int a)
{
    unique_lock<mutex> g2(m, try_to_lock); // try to lock; on failure, return immediately without blocking
    if (!g2.owns_lock()) return; // check that the lock was actually acquired before touching shared state
    cout << "proc2 is modifying a" << endl;
    cout << "original a: " << a << endl;
    cout << "a is now: " << a + 1 << endl;
} // unlocked automatically
int main()
{
    int a = 0;
    thread th1(proc1, a); // renamed so the thread objects do not shadow the functions
    thread th2(proc2, a);
    th1.join();
    th2.join();
    return 0;
}

unique_lock ownership transfer

mutex m;
{  
    unique_lock<mutex> g2(m, defer_lock);
    unique_lock<mutex> g3(move(g2)); // ownership transferred: g3 now manages the mutex m
    g3.lock();
    g3.unlock();
    g3.lock();
}

2.3 Asynchronous threads

Add the header file:

#include<future>

  • async and future

async is a function template used to launch an asynchronous task. It returns an object of the future class template; the future acts as a placeholder. A freshly created future holds no value yet, but when you call the future object's get() member function, the calling thread blocks until the asynchronous task finishes and the return value is delivered to the future. In other words, the function's return value is obtained via FutureObject.get(). (Note: by default, async may either run the task on a new thread or defer it to run lazily on the calling thread; passing launch::async as the first argument forces a new thread.)

It is like handling paperwork at a government office: you (the main thread) hand your documents to the front desk, the front desk assigns someone to process them (async creates a child thread) and gives you a receipt (the future object), telling you that your request is being processed (the child thread is running). Later you return with the receipt to pick up the result. If the result is not ready yet (the child thread has not returned), you wait at the desk (block), and you leave only once you have the result (get() returns and you are no longer blocked).

#include <iostream>
#include <thread>
#include <mutex>
#include<future>
using namespace std;
double t1(const double a, const double b)
{
	double c = a + b;
	return c;
}
 
int main() 
{
	double a = 2.3;
	double b = 6.7;
	future<double> fu = async(t1, a, b); // create the asynchronous task; fu is a placeholder for its result
	cout << "computing..." << endl;
	cout << "the result will be ready shortly, please wait" << endl;
	cout << "result: " << fu.get() << endl; // blocks the main thread until the asynchronous task returns
        //cout << "result: " << fu.get() << endl; // uncommenting this line causes an error: a future's get() can only be called once
	return 0;
}

shared_future

Both future and shared_future act as placeholders for a result, but they differ:
future's get() transfers ownership of the data; shared_future's get() copies the data.
Therefore:

  • A future's get() can be called only once; multiple threads cannot wait on the same asynchronous task, because once one thread has taken the return value, the others can no longer obtain it.
  • A shared_future's get() can be called multiple times; multiple threads can wait on the same asynchronous task, and each can obtain its return value.

2.4 Atomic types (atomic)

  • An atomic operation is an indivisible, minimal unit of work that cannot be interleaved.

This means that even with multiple threads, operations on an atomic object behave as if serialized, which saves the overhead of locking and unlocking a mutex.

// requires C++11 or later (std::atomic, std::thread)
#include <iostream>
#include <thread>
#include <atomic>
using namespace std;
atomic_int n(0); // atomic counter shared by all threads
void count10000() {
	for (int i = 1; i <= 10000; i++) {
		n++; // atomic increment: no mutex needed
	}
}
int main() {
	thread th[100];
	for (thread &x : th)
		x = thread(count10000);
	for (thread &x : th)
		x.join();
	cout << n << endl;
	return 0;
}

3. Thread pool

Without a thread pool:

Create a thread -> the thread executes the task -> destroy the thread once the task finishes. Even when a large number of threads is needed, every single thread goes through this create/execute/destroy cycle.

Although the time spent creating and destroying a thread is much less than the thread's execution time, for workloads that frequently create large numbers of threads, the time and CPU resources spent on creation and destruction become a significant fraction of the total.

In order to reduce the time consumption and resource consumption caused by creating and destroying threads, the thread pool strategy is adopted:

After the program starts, a certain number of threads are created in advance and placed in an idle queue. These threads are blocked: they consume essentially no CPU and occupy only a small amount of memory.

When a task arrives, the thread pool selects an idle thread to execute it.

After the task finishes, the thread is not destroyed; it returns to the pool and waits for the next task.

Problems solved by thread pool:

(1) When large numbers of threads would otherwise be created and destroyed frequently, the pool reduces the time overhead and CPU usage of creation and destruction. (saves time and resources)

(2) For latency-sensitive workloads: because the threads are created in advance, a task can be handed to a pooled thread immediately on arrival, skipping the thread-creation step and improving responsiveness. (real-time)

Reference link: C++ Multithreading Basic Tutorial - zizbee - Blog Park (cnblogs.com)

Origin blog.csdn.net/weixin_46267139/article/details/131087034