Optimizing storage performance? Pay attention to these Linux I/O scheduler options

Summary: To optimize Linux performance, IT teams should examine the I/O schedulers currently in use and evaluate alternatives such as deadline and Completely Fair Queuing.

If a Linux server is underperforming, the cause usually lies in the storage channel. Decades ago, this was relatively easy to analyze: a server had a RAID array, partitions on top of the RAID array, and an Ext2 file system on top of the partitions. In today's data centers, however, analyzing the storage channel is not so easy.

Many modern data center Linux servers run on top of the VMware hypervisor and are connected to different types of Storage Area Network (SAN) systems. This means that there are many factors to consider during the Linux storage optimization process.

Common sense says that when you run Linux on top of a hypervisor, you don't need to do anything about storage optimization, but in many cases that assumption is wrong. Storage performance depends on many factors, and one of them, the Linux I/O scheduler, can have a decisive impact when tuned correctly.

Understanding the Different Linux I/O Scheduler Types

The I/O scheduler is the kernel process that determines how I/O requests are ordered. There are several types of schedulers, such as deadline, Completely Fair Queuing (CFQ), and noop (no-operation). Earlier kernel versions also had an anticipatory scheduler.
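
To see which schedulers a kernel offers for a particular disk, you can read that disk's scheduler attribute in sysfs. A minimal check, assuming a disk named sda; the active scheduler is shown in square brackets:

# List the schedulers available for /dev/sda; the one in brackets is active
cat /sys/block/sda/queue/scheduler
# Typical output on an older kernel: noop deadline [cfq]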

The default Linux I/O scheduler on most systems is Completely Fair Queuing (CFQ). With this scheduler, the Linux kernel tries to distribute read and write requests evenly before handing them to the storage channel. Most hypervisors and SAN products already do the same thing, so on specific workloads CFQ is more likely to cause a small performance drop than an improvement. Still, it is the safest option, which is why many distributions use it as the default.

Many IT professionals believe that the noop scheduler provides the best performance when smart storage is in use. With this scheduler, the Linux kernel passes read and write requests straight through to the storage channel without reordering them. The noop scheduler therefore tends to perform best where hypervisors, SSDs, or SANs are in use. However, this is not always the case: under a heavy write load, the deadline scheduler may be more helpful to the underlying storage channel.

The deadline I/O scheduler optimizes write requests by reordering them in the most efficient way, which eases the load passed to the underlying hypervisor layer. If your server does a lot of writing, the deadline I/O scheduler is worth testing.

Finally, you may also encounter the anticipatory scheduler. It was used in older Linux kernels and is now uncommon. On those kernels, it optimized read requests by performing read-ahead when reading blocks from the file store.

Set the Linux I/O Scheduler

Administrators can set the I/O scheduler for specific disks or for the entire server. To set it for the entire server, modify the GRUB configuration file /etc/default/grub. In this file, find the line that contains the kernel boot parameters; it starts with linux, and in some versions linux may be followed by a number. Add elevator=setting to this line, replacing setting with the name of the I/O scheduler you want to use. After changing the GRUB configuration file, run grub2-mkconfig -o /boot/grub2/grub.cfg to write the new settings to the system, then reboot.
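
As a sketch of the whole-server change, assuming a RHEL/CentOS-style system where the kernel boot parameters live on the GRUB_CMDLINE_LINUX line of /etc/default/grub and deadline is the desired scheduler:

# In /etc/default/grub, append elevator=deadline to the existing kernel boot parameters:
# GRUB_CMDLINE_LINUX="<existing parameters> elevator=deadline"

# Write the new settings to the system, then reboot
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot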

While changing the system-wide Linux I/O scheduler can help certain workloads, consider changing the per-disk I/O scheduler settings as an alternative. If the server handles different storage loads, with different load types writing to different devices, it is worth testing these per-disk settings.

Each disk device has an interface file named /sys/block/device/queue/scheduler. You can echo the requested scheduler setting into this file to make it take effect immediately, for example: echo deadline > /sys/block/sda/queue/scheduler. Settings changed this way do not survive a reboot, and Linux does not provide a standard configuration file for them, so you need to add the command to a system startup script so it runs automatically.
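
A minimal sketch of the per-disk change, assuming the disk is sda, the desired scheduler is deadline, and that the system still runs /etc/rc.d/rc.local at boot (the startup-script location is an assumption; use whatever mechanism your distribution provides):

# Switch sda to the deadline scheduler immediately (run as root)
echo deadline > /sys/block/sda/queue/scheduler

# The change is lost at reboot, so add the same command to a startup script,
# for example /etc/rc.d/rc.local, and make that script executable:
#   echo deadline > /sys/block/sda/queue/scheduler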
