Summary of the use of std::async

C++98 had no thread library in the standard. It was not until C++11 that the standard finally provided a multi-threading library, with classes for managing threads, protecting shared data, synchronizing operations between threads, and performing atomic operations. The header file for the thread library is #include <thread> and the class name is std::thread.

However, threads are close to the operating system and are still not very convenient to use; in particular, thread synchronization and retrieving a thread's result are cumbersome. We cannot simply get the result from thread.join(); we must define a shared variable to pass the result out, and we must also handle mutual exclusion between threads. Fortunately, C++11 also provides a much simpler asynchronous interface, std::async, through which you can easily create a thread and obtain its result via std::future. In the past I used to wrap threads to implement my own async; now that a standard cross-platform interface exists, C++ multithreaded programming is much more convenient.

First, look at the function prototypes of std::async:

// (since C++11) (until C++17)
template< class Function, class... Args>
std::future<std::result_of_t<std::decay_t<Function>(std::decay_t<Args>...)>>
    async( Function&& f, Args&&... args );

// (since C++11) (until C++17)
template< class Function, class... Args >
std::future<std::result_of_t<std::decay_t<Function>(std::decay_t<Args>...)>>
    async( std::launch policy, Function&& f, Args&&... args );  

The first parameter is the launch policy. There are two policies to choose from:

  • std::launch::async: the thread is created as soon as async is called.
  • std::launch::deferred: lazy execution. No thread is created when async is called; the function does not run (on the calling thread) until get or wait is called on the returned future.

The default policy is std::launch::async | std::launch::deferred, i.e. the combination of the two. It is discussed in detail later.

The second parameter is the thread function

The thread function can be a function pointer, a lambda expression, a bind expression, or any other function object.

The third parameter is the parameter of the thread function

These are simply forwarded to the thread function, so they need no further explanation.

The return value is a std::future

std::future is a class template that provides a mechanism for accessing the result of an asynchronous operation. The name is quite apt: the result is not available immediately, but can be retrieved synchronously at some point in the future. We can observe the state of the asynchronous operation by querying the future's status. std::future_status has three values:

  • deferred: the asynchronous operation has not started yet
  • ready: the asynchronous operation has completed
  • timeout: the asynchronous operation timed out; mainly seen with std::future::wait_for()

Example:

// Query the future's status
std::future_status status;
do {
    status = future.wait_for(std::chrono::seconds(1));
    if (status == std::future_status::deferred) {
        std::cout << "deferred" << std::endl;
    } else if (status == std::future_status::timeout) {
        std::cout << "timeout" << std::endl;
    } else if (status == std::future_status::ready) {
        std::cout << "ready!" << std::endl;
    }
} while (status != std::future_status::ready); 

std::future provides three ways to wait for the result:

  • get: wait for the asynchronous operation to finish and return the result
  • wait: wait for the asynchronous operation to finish, with no return value
  • wait_for: wait with a timeout and return the status; the example above shows its use

Now that we have introduced the function prototype of std::async, how should it be used?

Basic usage of std::async (example adapted from the cppreference documentation):

#include <iostream>
#include <vector>
#include <algorithm>
#include <numeric>
#include <future>
#include <string>
#include <mutex>

std::mutex m;
struct X {
    void foo(int i, const std::string& str) {
        std::lock_guard<std::mutex> lk(m);
        std::cout << str << ' ' << i << '\n';
    }
    void bar(const std::string& str) {
        std::lock_guard<std::mutex> lk(m);
        std::cout << str << '\n';
    }
    int operator()(int i) {
        std::lock_guard<std::mutex> lk(m);
        std::cout << i << '\n';
        return i + 10;
    }
};

template <typename RandomIt>
int parallel_sum(RandomIt beg, RandomIt end) {
    auto len = end - beg;
    if (len < 1000)
        return std::accumulate(beg, end, 0);

    RandomIt mid = beg + len/2;
    auto handle = std::async(std::launch::async,
                             parallel_sum<RandomIt>, mid, end);
    int sum = parallel_sum(beg, mid);
    return sum + handle.get();
}

int main(){
    std::vector<int> v(10000, 1);
    std::cout << "The sum is " << parallel_sum(v.begin(), v.end()) << '\n';

    X x;
    // Call x.foo(42, "Hello") with the default policy:
    // may print "Hello 42" concurrently, or defer execution
    auto a1 = std::async(&X::foo, &x, 42, "Hello");
    // Call x.bar("world!") with the deferred policy:
    // prints "world!" when a2.get() or a2.wait() is called
    auto a2 = std::async(std::launch::deferred, &X::bar, x, "world!");
    // Call X()(43) with the async policy:
    // prints "43" concurrently
    auto a3 = std::async(std::launch::async, X(), 43);
    a2.wait();                     // prints "world!"
    std::cout << a3.get() << '\n'; // prints "53"
} // if a1 has not completed by this point, a1's destructor prints "Hello 42" here

One possible output:

The sum is 10000
43
world!
53
Hello 42

As you can see, std::async is a good encapsulation of asynchronous operations: we can easily obtain the status and result of asynchronous execution without worrying about the details of thread creation, and we can also specify the launch policy.

A deeper look at the launch policies

  • The std::launch::async policy means the function must be executed asynchronously, that is, on another thread.
  • The std::launch::deferred policy means the function may only be executed when get or wait is called on the future returned by std::async; execution is postponed until one of those calls occurs. When get or wait is called, the function runs synchronously, i.e. the caller blocks until the function finishes. If neither get nor wait is ever called, the function never executes.

Both policies are clear enough, but the default policy of the function is interesting. It is what you get when you do not explicitly specify a policy, i.e. when you use the first overload above, and it is std::launch::async | std::launch::deferred. The C++ standard says of it:

Whether execution is asynchronous or lazily evaluated depends on the implementation.

auto future = std::async(func);        // run func with the default launch policy

With this policy we have no way to predict which thread func will run on, or even whether it will run at all, because func may be scheduled for deferred execution, i.e. run only when get or wait is called; and whether get or wait is called, and on which thread, may itself be unpredictable.

At the same time, this flexibility muddies the use of thread_local variables: if func reads or writes thread-local storage (TLS), we cannot predict which thread's local variables it will touch.

It also affects timeout-based wait loops: if the task was deferred, calling wait_for or wait_until returns std::future_status::deferred. This means the following loop, which looks as though it will eventually stop, may actually run forever:

using namespace std::chrono_literals;

void func()                          // func returns after sleeping for 1 second
{
    std::this_thread::sleep_for(1s);
}

auto fut = std::async(func);         // (conceptually) run func asynchronously
while (fut.wait_for(100ms) !=        // loop until func has finished running
       std::future_status::ready)    // but this may never happen
{
    ...
}

To avoid falling into an infinite loop, we must check whether the future's task was deferred. However, a future has no member that directly reports whether its task was deferred; a good technique is to call wait_for with a zero timeout and check whether the returned status is std::future_status::deferred:

auto fut = std::async(func);         // (conceptually) run func asynchronously
if (fut.wait_for(0s) == std::future_status::deferred)  // if the task was deferred
{
    ...        // use fut.get() or fut.wait() to call func synchronously
} else {       // the task was not deferred
    while (fut.wait_for(100ms) != std::future_status::ready) { // cannot loop forever
        ...    // the task is neither deferred nor ready, so do some
               // concurrent work until it is ready
    }
    ...        // fut is ready
}

Some may ask: with so many pitfalls, why use the default policy at all? Because the issues above are edge cases. Sometimes we do not care whether the task runs concurrently or synchronously, we do not need to worry about thread_local variables, and we can accept that the task may never execute; in those situations the default policy is a convenient and efficient choice.

In summary, we conclude the following points:

  • The default launch policy of std::async allows a task to run either asynchronously or deferred (synchronously).
  • The flexibility of the default policy leads to uncertainty when using thread_local variables, implies that the task may never execute, and affects program logic built on timeout-based wait calls.
  • If asynchronous execution is required, specify the std::launch::async launch strategy.



Origin blog.csdn.net/weixin_41191739/article/details/113115847