Releases Found:
Red Hat Enterprise Linux 4, Red Hat Enterprise Linux 5
Resolution:
In virtualized environments, it is often not beneficial to schedule I/O at both the host and guest layers. If multiple guests use storage on a filesystem or block device managed by the host operating system, the host may be able to schedule I/O more efficiently because it is aware of requests from all guests and knows the physical layout of storage, which may not map linearly to the guests' virtual storage. Red Hat Enterprise Linux 4 and Red Hat Enterprise Linux 5 guests can use the "noop" I/O scheduler to allow the host to optimize I/O requests.
Guests using storage accessed by iSCSI or physical device pass-through should not use the noop scheduler, since these methods do not allow the host to optimize I/O requests to the underlying physical device.
When using Red Hat Enterprise Linux 4 or Red Hat Enterprise Linux 5 as a host for Xen or VMware guests, the default cfq scheduler is usually ideal, since it performs well on nearly all workloads. If minimizing I/O latency is more important than maximizing I/O throughput on the guest workloads, it may be beneficial to use the deadline scheduler on the host instead.
The I/O scheduler can be selected at boot time using the "elevator" kernel parameter. In the following example grub.conf stanza, the system has been configured to use the noop scheduler:
title Red Hat Enterprise Linux Server (2.6.18-8.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/vg0/lv0 elevator=noop
        initrd /initrd-2.6.18-8.el5.img
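After rebooting, the requested scheduler can be confirmed by inspecting the kernel command line. The following is a minimal sketch; the helper name boot_elevator is illustrative, and it defaults to reading /proc/cmdline but accepts an alternate file for testing:

```shell
# Sketch: report which I/O scheduler the kernel command line requests.
# boot_elevator is a hypothetical helper; it reads /proc/cmdline unless
# another file is given as the first argument.
boot_elevator() {
    cmdline_file="${1:-/proc/cmdline}"
    if grep -q 'elevator=[a-z]*' "$cmdline_file"; then
        # print only the scheduler name, e.g. "noop"
        grep -o 'elevator=[a-z]*' "$cmdline_file" | head -n 1 | cut -d= -f2
    else
        # no elevator= parameter: the kernel default scheduler is in use
        echo "default"
    fi
}

# Usage: boot_elevator          # inspect the running kernel's command line
```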
In Red Hat Enterprise Linux 5, it is also possible to change the I/O scheduler for a particular disk after the system has been booted, which makes it possible to use different I/O schedulers for different disks. Note that changes made through sysfs in this way do not persist across reboots.
# cat /sys/block/hda/queue/scheduler
noop anticipatory deadline [cfq]
# echo 'noop' > /sys/block/hda/queue/scheduler
# cat /sys/block/hda/queue/scheduler
[noop] anticipatory deadline cfq
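To apply the setting to every disk in a guest at once, the per-device sysfs writes shown above can be wrapped in a loop. This is a minimal sketch; the function name set_noop_all is illustrative, and the sysfs root is parameterized only so the logic can be exercised outside a live system:

```shell
# Sketch: write "noop" to the scheduler file of every block device.
# set_noop_all is a hypothetical helper; it operates on /sys unless an
# alternate sysfs root is given as the first argument.
set_noop_all() {
    sysroot="${1:-/sys}"
    for sched in "$sysroot"/block/*/queue/scheduler; do
        # skip devices that do not expose a writable scheduler file
        [ -w "$sched" ] || continue
        echo noop > "$sched"
    done
}

# Usage (as root): set_noop_all
```

Because sysfs settings are lost on reboot, such a loop would typically be invoked from an init script such as /etc/rc.local.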
All scheduler tuning should be tested under normal operating conditions, as synthetic benchmarks typically do not accurately reflect the performance of systems sharing resources in virtual environments.
http://kbase.redhat.com/faq/docs/DOC-5428