Wenpan Rust -- binding a tokio application to CPU cores in practice

tokio is a popular asynchronous runtime in the Rust ecosystem. In real production deployments, what should we do if we want a tokio application to be bound to specific CPU cores? That is the topic of this installment.

First, let's write a simple multitasking program.

use tokio::runtime;

pub fn main() {
    let rt = runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();

    rt.block_on(async {
        for i in 0..8 {
            println!("num {}", i);
            tokio::spawn(async move {
                loop {
                    let mut sum: i32 = 0;
                    for i in 0..100000000 {
                        sum = sum.overflowing_add(i).0;
                    }
                    println!("sum {}", sum);
                }
            });
        }
    });
}

The program is very simple. First we build a tokio runtime, then we spawn several tokio tasks, each of which runs an infinite loop of overflowing_add calls. overflowing_add returns a tuple containing the sum and a boolean indicating whether an arithmetic overflow occurred; if it did overflow, the wrapped-around value is returned. We take the first element of the tuple and print it.
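As a quick aside (this snippet is not from the original post, just an illustration of the standard library method), here is what the tuple returned by overflowing_add looks like:

fn main() {
    // Normal addition: no overflow, so the flag is false.
    assert_eq!(5i32.overflowing_add(3), (8, false));
    // i32::MAX + 1 wraps around to i32::MIN and the flag reports the overflow.
    assert_eq!(i32::MAX.overflowing_add(1), (i32::MIN, true));
    // The example program keeps only the wrapped value, i.e. element .0 of the tuple.
    println!("wrapped = {}", i32::MAX.overflowing_add(1).0); // prints -2147483648
}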

The program was run on Ubuntu 20 with a 4-core CPU. Monitoring it with nmon shows that every core carries part of the load.

To bind the load to a specific core, we use core_affinity_rs (https://github.com/Elzair/core_affinity_rs), a Rust crate for managing CPU affinity. It currently supports Linux, Mac OS X, and Windows. Although the crate claims multi-platform support, I have only tested it on Linux.
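Before wiring it into tokio, here is a minimal sketch of my own (not from the original post) showing the two crate calls we will rely on, get_core_ids and set_for_current, used with a plain std::thread; it assumes core_affinity has been added to Cargo.toml:

use std::thread;

pub fn main() {
    // Enumerate the cores the OS exposes to this process.
    let core_ids = core_affinity::get_core_ids().unwrap();
    println!("available cores: {}", core_ids.len());

    // Pin an ordinary OS thread to the first core and burn some CPU there.
    let core_id = core_ids[0];
    let handle = thread::spawn(move || {
        let ok = core_affinity::set_for_current(core_id);
        println!("pinned to the first core: {}", ok);
        let mut sum: i64 = 0;
        for i in 0..1_000_000_000i64 {
            sum = sum.wrapping_add(i);
        }
        println!("sum {}", sum);
    });
    handle.join().unwrap();
}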

Let's modify the code a bit:

use tokio::runtime;

pub fn main() {
    let core_ids = core_affinity::get_core_ids().unwrap();
    println!("core num {}", core_ids.len());
    let core_id = core_ids[1];

    let rt = runtime::Builder::new_multi_thread()
        .on_thread_start(move || {
            core_affinity::set_for_current(core_id.clone());
        })
        .enable_all()
        .build()
        .unwrap();

    rt.block_on(async {
        for i in 0..8 {
            println!("num {}", i);
            tokio::spawn(async move {
                loop {
                    let mut sum: i32 = 0;
                    for i in 0..100000000 {
                        sum = sum.overflowing_add(i).0;
                    }
                    println!("sum {}", sum);
                }
            });
        }
    });
}

When building the multi-threaded runtime, we set the CPU affinity inside the on_thread_start callback, so every worker thread pins itself to the chosen core as soon as it starts. Monitoring again, you can see that all of the load is now bound to the specified core.
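If you want to double-check the pinning from inside the program rather than from nmon, on Linux you can read each thread's affinity mask from /proc. The helper below is a minimal Linux-only sketch of my own (not part of the original post) that uses only the standard library; calling it after the runtime has started its workers prints the Cpus_allowed_list of every thread in the process:

use std::fs;

// Linux only: print the allowed-CPU list of every thread in the current process.
fn print_thread_affinities() {
    for entry in fs::read_dir("/proc/self/task").unwrap() {
        let path = entry.unwrap().path();
        let tid = path.file_name().unwrap().to_string_lossy().into_owned();
        let status = fs::read_to_string(path.join("status")).unwrap();
        for line in status.lines() {
            if line.starts_with("Cpus_allowed_list:") {
                println!("thread {} -> {}", tid, line);
            }
        }
    }
}

On the pinned runtime above you would expect every tokio worker thread to report the same single CPU in its Cpus_allowed_list.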

The code above binds all of the load to a single core. What should we do if we want to spread the load across several specific cores?

Let's look at the following code:

use tokio::runtime;

pub fn main() {
    let core_ids = core_affinity::get_core_ids().unwrap();
    println!("core num {}", core_ids.len());

    let rt = runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();

    let mut idx = 2;
    rt.block_on(async {
        for i in 0..8 {
            println!("num {}", i);
            let core_id = core_ids[idx];
            if idx.eq(&(core_ids.len() - 1)) {
                idx = 2;
            } else {
                idx += 1;
            }

            tokio::spawn(async move {
                let res = core_affinity::set_for_current(core_id);
                println!("{}", res);
                loop {
                    let mut sum: i32 = 0;
                    for i in 0..100000000 {
                        sum = sum.overflowing_add(i).0;
                    }
                    println!("sum {}", sum);
                }
            });
        }
    });
}

This code ties all of the load to the third and fourth cores (indices 2 and 3 in core_ids). The idea is to move the core_affinity call into each spawned task: by cycling idx over the desired core indices, the spawned tasks are bound evenly to the specified cores. Monitoring the running program confirms that only those two cores carry the load.
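A variation worth mentioning (my own sketch, not from the original post) keeps the pinning at the thread level instead: distribute the runtime's worker threads across a chosen set of cores round-robin inside on_thread_start, using an atomic counter shared by the callbacks. CPU-bound tasks spawned on that runtime can then only ever run on those cores:

use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use tokio::runtime;

pub fn main() {
    let core_ids = core_affinity::get_core_ids().unwrap();
    // Assumption: a 4-core machine, and we want the last two cores (indices 2 and 3),
    // matching the example above.
    let pinned: Arc<Vec<core_affinity::CoreId>> =
        Arc::new(core_ids.into_iter().skip(2).collect());
    let counter = Arc::new(AtomicUsize::new(0));

    let rt = runtime::Builder::new_multi_thread()
        .on_thread_start({
            let pinned = pinned.clone();
            let counter = counter.clone();
            move || {
                // Each new runtime thread takes the next core in round-robin order.
                let n = counter.fetch_add(1, Ordering::Relaxed);
                core_affinity::set_for_current(pinned[n % pinned.len()].clone());
            }
        })
        .enable_all()
        .build()
        .unwrap();

    rt.block_on(async {
        // Spawn finite CPU-bound tasks and wait for them, so this example terminates.
        let handles: Vec<_> = (0..8)
            .map(|_| {
                tokio::spawn(async {
                    let mut sum: i32 = 0;
                    for i in 0..100000000 {
                        sum = sum.overflowing_add(i).0;
                    }
                    sum
                })
            })
            .collect();
        for h in handles {
            println!("sum {}", h.await.unwrap());
        }
    });
}

Compared with setting affinity inside each task, this keeps the task code unaware of affinity, at the cost of pinning every thread the runtime starts, including blocking-pool threads.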

That's all for this installment on CPU affinity. See you in the next one.

-end-

This article is shared from the WeChat official account 京东云开发者 (JDT_Developers).


Origin my.oschina.net/u/4090830/blog/9908565