Paper translation - STUN: Reinforcement-Learning-Based Optimization of Kernel Scheduler Parameters 5 (3)

Continued from the previous article: Paper translation and reading - STUN: Reinforcement-Learning-Based Optimization of Kernel Scheduler Parameters 5 (2)

5. Evaluation

5.3 Micro-Benchmark Analysis

To demonstrate the impact of STUN on performance, we used STUN to optimize Hackbench, a micro-benchmark for kernel schedulers. Hackbench was run with 120 processes and 1,000 iterations. The filtering step selected the "normal" scheduler policy and chose the parameters kernel.sched_latency_ns and kernel.sched_wakeup_granularity_ns as the optimization variables. The optimal values found by learning are as follows (a minimal sketch for applying and verifying them is shown after the list):

  • Scheduler policy: normal;
  • kernel.sched_latency_ns = 20,100,000;
  • kernel.sched_wakeup_granularity_ns = 190,000,000.
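
The sketch below is not part of the paper; it shows one way to apply these values and time the same workload on a Linux machine. It assumes a kernel that still exposes these knobs as sysctl entries, that the hackbench binary is on the PATH, and that the paper's "120 processes and 1,000 iterations" maps onto hackbench's -g and -l options; all of these are assumptions, and root privileges are needed to change the sysctls.

```python
import subprocess
import time

# Optimal values reported above (in nanoseconds).
OPTIMAL = {
    "kernel.sched_latency_ns": 20_100_000,
    "kernel.sched_wakeup_granularity_ns": 190_000_000,
}

def apply_sysctl(params):
    """Write each scheduler parameter with sysctl -w (requires root)."""
    for name, value in params.items():
        subprocess.run(["sysctl", "-w", f"{name}={value}"], check=True)

def run_hackbench(groups=120, loops=1000):
    """Run hackbench and return the wall-clock time in seconds.
    Mapping the paper's workload onto -g/-l is an assumption."""
    start = time.monotonic()
    subprocess.run(["hackbench", "-g", str(groups), "-l", str(loops)],
                   check=True, stdout=subprocess.DEVNULL)
    return time.monotonic() - start

if __name__ == "__main__":
    baseline = run_hackbench()    # time with the current (default) settings
    apply_sysctl(OPTIMAL)         # switch to the values found by STUN
    tuned = run_hackbench()
    print(f"default: {baseline:.2f} s, tuned: {tuned:.2f} s")
```

Timing the benchmark once before and once after apply_sysctl gives the baseline and tuned measurements to compare against the improvement reported below.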

With the learned settings, the execution time of Hackbench is reduced by 27.7% compared with the default configuration, from 2.72 s to 1.95 s. Figure 6 shows the Hackbench execution time at each step of the learning process.

During learning, the Hackbench results rise and fall erratically, but after 16,000 steps the performance stabilizes. Figure 7 shows how the two parameters being optimized change over the course of learning.

Although each parameter is initially increased and decreased at random during learning, STUN eventually converges to the optimal values.
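
This explore-then-stabilize behavior can be illustrated with a simplified search loop. The sketch below is not the paper's reinforcement-learning agent; it is a hypothetical epsilon-greedy random search over the same two parameters, in which the exploration probability shrinks over time so the values fluctuate at first and then settle, mirroring Figures 6 and 7. The measure callback, the search ranges, and the step size are all assumptions for illustration; measure is expected to run the workload (for example by wrapping the helpers in the previous sketch) and return its execution time.

```python
import random

# Hypothetical search ranges for the two parameters (nanoseconds).
RANGES = {
    "kernel.sched_latency_ns": (1_000_000, 100_000_000),
    "kernel.sched_wakeup_granularity_ns": (1_000_000, 200_000_000),
}
STEP = 1_000_000           # how far a single action moves one parameter
EPSILON_DECAY = 0.9995     # exploration probability shrinks each step

def search(measure, steps=20_000):
    """Perturb one parameter per step; keep the change only if the
    measured execution time improves. Because exploration decays,
    the values fluctuate early on and stabilize later."""
    state = {name: sum(r) // 2 for name, r in RANGES.items()}  # start mid-range
    best_time = measure(state)
    epsilon = 1.0
    for _ in range(steps):
        name = random.choice(list(RANGES))
        lo, hi = RANGES[name]
        delta = random.choice((-STEP, STEP)) if random.random() < epsilon else 0
        candidate = dict(state)
        candidate[name] = min(hi, max(lo, state[name] + delta))
        t = measure(candidate)
        if t < best_time:              # lower execution time is better
            state, best_time = candidate, t
        epsilon *= EPSILON_DECAY
    return state, best_time
```

With a measure function that applies the candidate values and times Hackbench, the sequence of measurements over the steps would trace out the kind of curves shown in Figures 6 and 7.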


Origin blog.csdn.net/phmatthaus/article/details/131458645