VMware performance optimization for SUSE Linux

SUSE Linux was designed with VMware performance in mind. But these performance optimization tips will help you squeeze even more power out of virtual machines.

With some simple VMware performance optimization methods, you can achieve substantial performance gains for virtual machines running SUSE Linux Enterprise Server.

VMware performance optimization involves selecting the right drivers and kernel modules, and the Linux scheduler and file system journal also come into play. The following VMware performance optimization techniques enable virtual machines (VMs) and an entire virtualized infrastructure to run better.

VMware performance optimization with disk drivers
Optimizing VMware performance for SUSE Linux Enterprise Server (SLES) is actually quite easy. If possible, start with the most recent version, SLES 11. The SLES 11 kernel better integrates with VMware, which means improved performance out of the box.

For further performance optimization, change the disk driver. When you create a SUSE VM, VMware configures an LSI Logic disk controller by default. This controller presents a virtualized disk device to the VM: it traps the disk I/O that the virtual machine generates and emulates a physical controller.

You should replace the LSI Logic disk controller with the VMware paravirtual SCSI (PVSCSI) driver, which is optimized for virtualized I/O. You'll notice a big performance improvement after applying this driver, because it eliminates the trap-and-emulate steps. As a result, a VM can communicate with the hard disk much more efficiently.
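The controller type lives in the VM's configuration file. As a rough sketch, the relevant lines in the VM's .vmx file would look like the following (the scsi0 device name is an assumption; check which controller your VM actually uses before editing):

```
# Hypothetical .vmx fragment: switch the first SCSI controller
# from the default LSI Logic device to the paravirtual driver.
scsi0.present = "TRUE"
scsi0.virtualDev = "pvscsi"
```

Keep in mind that the guest needs the paravirtual SCSI kernel module before it can boot from a paravirtual disk, so be careful when changing the controller of a boot disk.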

Installing VMware Tools can also improve VMware performance when running SUSE Linux VMs. VMware Tools further optimizes the VM and I/O channel, and it maximizes performance on the VMware side.

For further VMware performance optimization, you can tweak the SUSE Linux VMs themselves.

SUSE Linux performance optimization
In Linux operating systems, you can configure the behavior of the I/O scheduler to improve performance. The I/O scheduler determines the order in which read and write requests are handed to the disk controller, thus setting priorities when delivering data to disk. There are three settings:

  • Completely Fair Queuing (CFQ): The controller uses settings that work well in a system where read/write requests are more or less balanced and the bandwidth is equally distributed among processes. It's the default setting, and it performs well on an average system.
  • Deadline: With this setting, the I/O scheduler waits as long as possible before writing buffers to the disk, but guarantees that every request is handled within a fixed deadline. Because the scheduler behaves much like a real-time scheduler, this option is recommended for database environments.
  • Noop: The scheduler does no reordering at all and leaves optimization to the storage hardware. Many modern disk controllers, such as RAID controllers or storage area network (SAN) devices, take care of read/write optimization themselves, so it's worth trying this option to find out whether your storage back end optimizes I/O requests better than the kernel can. In many VMware environments with high availability, the storage back end is on a SAN, and the SAN typically handles this optimization.

The I/O scheduler settings are exposed through the sysfs file system. In the /sys/block directory, there is a subdirectory for every storage device on your server. Within that directory, you'll find the file queue/scheduler. Use the cat command to display the current setting, as shown below:


root@lassen:/# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

In the example above, the I/O scheduler is set to Completely Fair Queuing. To find out whether you can benefit from one of the other parameters, just echo the new parameter to the configuration file:


echo deadline > /sys/block/sda/queue/scheduler

Next, test the VMware performance. It's a good idea to try each of the three schedulers to find out which one best fits your server's workload. Then, include the new setting in the boot procedure, for instance, by adding the echo command to the /etc/init.d/boot.local file. This ensures the setting is applied every time your server reboots.
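The echo-and-test cycle above can be wrapped in a small shell function. This is a minimal sketch (the set_scheduler name is my own, not a standard tool); it takes the path to a queue/scheduler file as a parameter, so on a real system you would pass /sys/block/sda/queue/scheduler and run it as root:

```shell
#!/bin/sh
# Minimal sketch: switch the I/O scheduler for a block device,
# but only if the kernel actually offers the requested scheduler.
# Arguments: path to a queue/scheduler file, scheduler name.
set_scheduler() {
    file="$1"
    sched="$2"
    if grep -qw "$sched" "$file"; then
        # sysfs accepts the scheduler name via a plain write
        echo "$sched" > "$file"
    else
        echo "scheduler '$sched' not available in $file" >&2
        return 1
    fi
}

# Example (as root): set_scheduler /sys/block/sda/queue/scheduler deadline
```

Guarding the write this way avoids silently failing echoes in boot.local when a kernel update changes the list of available schedulers.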

VMware performance optimization with the file system journal
All modern Linux file systems use a journal to make data recovery easy after a server crash, but the benefits of file system journals are generally overrated.

Before data is committed to a file, the transaction is logged in the journal. So if the server fails, the file system can quickly be brought back to a consistent state. If you have a write-intensive server, however, the default journal setting may hurt write performance. If your server is read-intensive, on the other hand, you don't need to worry about journal settings, because the journal is not involved in read transactions.

On modern systems with battery-backed storage, it often makes sense to make the journal as lightweight as possible. (The hardware takes care of data integrity, anyway.) This approach also makes sense with a heavily write-oriented server.

To minimize the journal's effect on VMware performance, you can use the data=writeback option when mounting the file system through fstab. With this option, only metadata is journaled, and the file system no longer waits for file data to reach the disk before committing the metadata. The following shows what a line in /etc/fstab could look like:


/dev/sdg1 / ext3 user_xattr,data=writeback 1 1

After applying this change, reboot your server to activate the new mount option. (The data mode of a mounted ext3 file system can't be changed with a simple remount.)
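To confirm the new data mode after the reboot, you can inspect the matching line in /proc/mounts. A small helper along these lines (journal_mode is a hypothetical name, not a standard tool) pulls the data= option out of the mount options field:

```shell
#!/bin/sh
# Hypothetical helper: given one line from /proc/mounts or /etc/fstab,
# print the data= journal mode found in the mount options (4th field).
journal_mode() {
    printf '%s\n' "$1" | awk '{print $4}' | tr ',' '\n' | sed -n 's/^data=//p'
}

# Example: journal_mode "$(grep ' / ' /proc/mounts)"
```

If the function prints nothing, no data= option is set and the file system is running with its default journal mode.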


Sander van Vugt, Contributor  

Sander van Vugt is an independent trainer and consultant based in the Netherlands. Van Vugt is an expert in Linux high availability, virtualization and performance and has completed several projects that implement all three. He is also the writer of various Linux-related books, such as Beginning the Linux Command Line, Beginning Ubuntu Server Administration and Pro Ubuntu Server Administration.


This was last published in January 2011

