In real-world development we sometimes need to write to a file from many concurrent tasks. If this is not handled properly, the file content can end up interleaved and out of order. Below we reproduce the problem with a sample program and then show a fix.
use std::{
    fs::{self, File, OpenOptions},
    io::Write,
    time::{SystemTime, UNIX_EPOCH},
};
use tokio::task::JoinSet;

fn main() {
    println!("parallel write file!");
    let max_tasks = 200;
    let _ = fs::remove_file("/tmp/parallel");
    let mut set: JoinSet<()> = JoinSet::new();
    let rt = tokio::runtime::Runtime::new().unwrap();
    rt.block_on(async {
        // runs until the process is killed
        loop {
            // cap the number of in-flight tasks
            while set.len() >= max_tasks {
                set.join_next().await;
            }
            // no write mutex here: every task gets its own file handle
            let mut file_ref = OpenOptions::new()
                .create(true)
                .write(true)
                .append(true)
                .open("/tmp/parallel")
                .unwrap();
            set.spawn(async move { write_line(&mut file_ref) });
        }
    });
}

fn write_line(file: &mut File) {
    for i in 0..1000 {
        let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
        let mut content = now.as_secs().to_string();
        content.push('_');
        content.push_str(&i.to_string());
        file.write_all(content.as_bytes()).unwrap();
        // two newlines per record make interleaving easy to spot
        file.write_all("\n".as_bytes()).unwrap();
        file.write_all("\n".as_bytes()).unwrap();
    }
}
The code is not complicated: tokio provides the concurrent runtime, and the writing function writes a timestamp plus a counter. To make out-of-order output easy to spot, each record is followed by two newlines.
The output text looks like this:
1691287258_979
1691287258_7931691287258_301
1691287258_7431691287258_603
1691287258_8941691287258_47
1691287258_895
1691287258_553
1691287258_950
1691287258_980
1691287258_48
1691287258_302
1691287258_896
1691287258_744
1691287258_6041691287258_554
Clearly the writes did not meet expectations: records from different tasks interleave, and even the three writes inside a single loop iteration are split apart by writes from other tasks.
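Why does this happen? Each record is emitted as three separate write_all calls, and the scheduler is free to run another task between any two of them. One way to avoid torn records without any locking (a sketch, not the article's approach; the path, thread count, and iteration count below are made up) is to build the whole record, trailing newlines included, in one buffer and emit it with a single write_all on an append-mode handle: on POSIX systems each append-mode write(2) repositions and writes as one atomic step.

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::thread;

// Build the whole record (payload plus the two trailing newlines) in one
// buffer, so it reaches the kernel in a single write call.
fn make_record(tag: u64, i: u32) -> String {
    format!("{}_{}\n\n", tag, i)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("parallel_single_write");
    let _ = std::fs::remove_file(&path);
    let mut handles = Vec::new();
    for t in 0..4u64 {
        let path = path.clone();
        handles.push(thread::spawn(move || -> std::io::Result<()> {
            // every writer opens its own append-mode handle
            let mut f = OpenOptions::new().create(true).append(true).open(&path)?;
            for i in 0..100u32 {
                // one write call per record instead of three
                f.write_all(make_record(t, i).as_bytes())?;
            }
            Ok(())
        }));
    }
    for h in handles {
        h.join().unwrap()?;
    }
    Ok(())
}
```

Each thread here plays the role of one tokio task; after a run, every non-empty line in the file is a complete `tag_i` record.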
Let's modify the above program:
use std::{
    fs::{self, File, OpenOptions},
    io::Write,
    sync::Arc,
    time::{SystemTime, UNIX_EPOCH},
};
use tokio::sync::Mutex;
use tokio::task::JoinSet;

fn main() {
    println!("parallel write file!");
    let max_tasks = 200;
    let _ = fs::remove_file("/tmp/parallel");
    let file_ref = OpenOptions::new()
        .create(true)
        .write(true)
        .append(true)
        .open("/tmp/parallel")
        .unwrap();
    // one shared handle, guarded by an async-aware mutex
    let f = Arc::new(Mutex::new(file_ref));
    let mut set: JoinSet<()> = JoinSet::new();
    let rt = tokio::runtime::Runtime::new().unwrap();
    rt.block_on(async {
        loop {
            while set.len() >= max_tasks {
                set.join_next().await;
            }
            let file = Arc::clone(&f);
            set.spawn(async move { write_line_mutex(&file).await });
        }
    });
}

async fn write_line_mutex(mutex_file: &Arc<Mutex<File>>) {
    for i in 0..1000 {
        // hold the lock for the whole record: all three writes finish
        // before any other task can touch the file
        let mut f = mutex_file.lock().await;
        let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
        let mut content = now.as_secs().to_string();
        content.push('_');
        content.push_str(&i.to_string());
        f.write_all(content.as_bytes()).unwrap();
        f.write_all("\n".as_bytes()).unwrap();
        f.write_all("\n".as_bytes()).unwrap();
    }
}
This time we use tokio::sync::Mutex: write_line_mutex acquires the lock before writing each record and releases it at the end of the iteration, so no other task can write in between.
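As an aside, the same locking discipline can be sketched without tokio at all: the file writes here are blocking anyway, so plain threads plus std::sync::Mutex give the same guarantee. The sketch below is an illustration with made-up path, thread count, and iteration count, not the article's code:

```rust
use std::fs::{File, OpenOptions};
use std::io::Write;
use std::sync::{Arc, Mutex};
use std::thread;

// Take the lock once per record: all three writes for that record finish
// before any other thread can touch the file.
fn write_record(file: &Arc<Mutex<File>>, tag: u64) {
    for i in 0..100u32 {
        let mut f = file.lock().unwrap();
        f.write_all(format!("{}_{}", tag, i).as_bytes()).unwrap();
        f.write_all(b"\n").unwrap();
        f.write_all(b"\n").unwrap();
        // lock released here, at the end of the iteration
    }
}

fn main() {
    let path = std::env::temp_dir().join("parallel_mutex_demo");
    let _ = std::fs::remove_file(&path);
    let file = OpenOptions::new()
        .create(true)
        .append(true)
        .open(&path)
        .unwrap();
    let file = Arc::new(Mutex::new(file));
    let handles: Vec<_> = (0..4u64)
        .map(|t| {
            let file = Arc::clone(&file);
            thread::spawn(move || write_record(&file, t))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```

tokio::sync::Mutex is the right choice inside async tasks because its lock() can be awaited without blocking the runtime's worker threads; with plain threads, std::sync::Mutex suffices.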
Take a look at the file content this time:
1691288040_374
1691288040_374
1691288040_374
1691288040_375
1691288040_374
1691288040_374
1691288040_374
1691288040_374
1691288040_374
1691288040_374
1691288040_374
1691288040_374
1691288040_374
1691288040_374
1691288040_375
1691288040_375
1691288040_374
1691288040_375
1691288040_375
1691288040_375
1691288040_375
1691288040_375
1691288040_375
1691288040_375
1691288040_375
1691288040_375
1691288040_375
The format is now correct: every record is written in full before another task gets the file, so the three writes of one iteration are never interleaved.
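A mutex is not the only possible design. Another common pattern is to funnel all records through a channel to a single writer thread, so producers never touch the file and there is no lock on the write path. A minimal std::sync::mpsc sketch of this idea (path and counts are illustrative, not from the article):

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::sync::mpsc;
use std::thread;

fn main() {
    let path = std::env::temp_dir().join("parallel_channel_demo");
    let _ = std::fs::remove_file(&path);
    let (tx, rx) = mpsc::channel::<String>();

    // single writer: the only thread that ever holds the file handle
    let writer = {
        let path = path.clone();
        thread::spawn(move || {
            let mut f = OpenOptions::new().create(true).append(true).open(path).unwrap();
            // this loop ends once every sender has been dropped
            for record in rx {
                f.write_all(record.as_bytes()).unwrap();
            }
        })
    };

    // producers only format records and send them down the channel
    let producers: Vec<_> = (0..4u64)
        .map(|t| {
            let tx = tx.clone();
            thread::spawn(move || {
                for i in 0..100u32 {
                    tx.send(format!("{}_{}\n\n", t, i)).unwrap();
                }
            })
        })
        .collect();
    drop(tx); // drop the original sender so the writer can terminate

    for p in producers {
        p.join().unwrap();
    }
    writer.join().unwrap();
}
```

Records are serialized by the channel rather than by a lock, which also keeps all I/O errors in one place.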
That's all for today on mutual exclusion when writing files.
Complete source code: https://github.com/jiashiwen/wenpanrust/tree/main/parallel_write_file
This article is shared from the WeChat public account - JD Cloud Developers (JDT_Developers).