Using VMware Distributed Power Management: The basics

VMware Distributed Power Management, part of VMware Distributed Resource Scheduler, can reduce energy use by putting idle host servers into a standby state. But be aware of some High Availability and system monitoring caveats.

In today's economic climate, green IT and cost savings are relevant for any IT organization. Power savings were an early driver of virtualization adoption. Since then, VMware engineers have developed VMware Distributed Power Management (DPM), which can reduce energy consumption in VMware environments by putting idle host servers into standby. In this article, we'll review how DPM works and how to use it in your environment.

What is VMware Distributed Power Management?
DPM was introduced as an experimental feature with VMware ESX 3.5 and vCenter Server 2.5. A video created by VMware's engineers prior to VMworld 2008 demonstrates an example of how VMware Distributed Power Management works.

In a nutshell, when a host server is idle, vCenter suspends it to save power and, when the workload warrants additional resources, resumes it. This functionality is intriguing, but there are plenty of planning issues and potential snags. DPM is part of Distributed Resource Scheduler (DRS), a component of VMware Infrastructure 3 (VI3). DRS is available in the Enterprise edition of VI3 or can be purchased separately. Once DPM is enabled in the cluster, its configuration behaves much like that of other VI3 and DRS components. Figure 1 illustrates the cluster settings screen and the VMware DRS and Power Management settings. In this case, I've set DPM to automatic.


[Figure 1: Cluster settings screen with VMware DRS Power Management set to automatic]
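If you prefer to script this setting rather than click through the VI Client, the same configuration is exposed through the VMware Infrastructure API. The following is a minimal sketch using the open-source pyVmomi Python bindings for that API; the vCenter address, credentials and cluster name are placeholders, so substitute your own:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details; use your own vCenter and credentials.
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Find the target cluster in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Production")
    view.DestroyView()

    # Enable DPM on the cluster in fully automated mode.
    spec = vim.cluster.ConfigSpecEx()
    spec.dpmConfig = vim.cluster.DpmConfigInfo(
        enabled=True, defaultDpmBehavior="automated")
    cluster.ReconfigureComputeResource_Task(spec, modify=True)

    Disconnect(si)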

Once DPM is configured, administrators should take time to learn its behavior to avoid surprises later. It goes without saying that DPM should be introduced in a test and development environment before being used in production.

How does DPM actually work?
Suspending a host is relatively straightforward: vCenter Server can put a host server into standby directly, but only another host (a peer) can resume it from that state. This is because the resume operation uses a peer host's VMkernel VMotion interface to send a standard Wake on LAN (WOL) packet, which instructs the standby host to wake up. By default, DPM keeps at least one host powered on to send the WOL packet. Additionally, if VMware High Availability (HA) rules are in place to accommodate host failures, more than one host is kept online. You can also tailor DPM's behavior within a DRS cluster. One way is to configure a host override, which makes an individual host eligible for DPM or excludes it from DPM entirely. This option, along with the advanced options explained later, lets administrators configure DPM to match their comfort level with the technology.
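Exercising the standby and resume operations by hand is a good way to learn this behavior before trusting DPM's automation. Here is a sketch, again using pyVmomi and assuming the imports and ServiceInstance si from the earlier example; the host name is a placeholder:

    from pyVmomi import vim

    # Assumes an existing ServiceInstance "si", as in the earlier sketch.
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esx01.example.com")
    view.DestroyView()

    # Send the host to standby. evacuatePoweredOffVms also migrates
    # powered-off and suspended VMs off the host before it goes down.
    host.PowerDownHostToStandBy_Task(timeoutSec=300,
                                     evacuatePoweredOffVms=True)

    # Resume later: vCenter directs a powered-on peer host to send the
    # WOL packet over its VMotion interface to wake this host.
    host.PowerUpHostFromStandBy_Task(timeoutSec=300)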

Once DPM sends a host into a standby state, the host is powered off and thus consumes less power. Within the VI Client, however, the host server's status is shown as standby mode, with a red status indicator signaling that it is offline. The image below shows a simple cluster with one host in standby mode:


[Figure: A simple cluster with one host in standby mode]

Other considerations
When the server is resumed, either by DRS or by user intervention, a traditional boot occurs. Note that Fibre Channel host bus adapters (HBAs) are down during the standby state, so make sure your storage team is aware of your DPM practice; otherwise, they may notice frequent, seemingly random port drops on the storage network. Finally, when a host returns to the cluster, its ESX uptime counter resets.

Supported interfaces
If you're going to use DPM, understand how it will work on the host equipment. Most current servers have several onboard network interfaces that support WOL. If you add interfaces to your hosts, ensure that your VMkernel interface is assigned to an adapter that supports WOL, even if you don't immediately plan to implement DPM.

Not all ports on servers with added interfaces support WOL, especially the subsequent ports of multi-interface cards, so review the network interfaces section of the host configuration to verify WOL support. Figure 2 shows a host with many interfaces, some of which do not support WOL:


[Figure 2: A host with many network interfaces, some of which lack WOL support]
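Rather than clicking through each host, you can audit WOL support across the environment, since the capability is reported per physical NIC by the API. A short sketch, with the same pyVmomi assumptions as before:

    from pyVmomi import vim

    # Assumes an existing ServiceInstance "si", as in the earlier sketches.
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    # List every physical NIC on every host and whether it supports WOL.
    for host in view.view:
        for pnic in host.config.network.pnic:
            print("%s  %s  WOL: %s" %
                  (host.name, pnic.device, pnic.wakeOnLanSupported))
    view.DestroyView()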

Monitoring concerns
By now, many system administrators may wonder how to monitor a DPM-suspended host. The short answer is that it's difficult. When a host is suspended via DPM, it will not respond to pings on either its service console or its VMkernel interfaces. That alone will cause most enterprise monitoring systems to generate false host-down alerts. Further, in DPM's current state, a power-on failure is an unaccounted-for scenario. To be fair, DPM is still experimental, and VMware has indicated that future releases will provide power-on failure notifications. DPM events and host status are accessible via the VMware Infrastructure API 2.5 and can feed into monitoring systems, but this is an advanced exercise, and proven examples are sparse.
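One pragmatic workaround is to have your monitoring system ask vCenter about a host's power state before raising an alarm, since a host in DPM standby reports a distinct state from one that has actually failed. A sketch of such a check, with the same pyVmomi assumptions as above (the function name is mine):

    def host_is_deliberately_offline(host):
        """Return True if the host is down on purpose (DPM standby or
        maintenance mode) rather than genuinely failed."""
        # powerState is one of 'poweredOn', 'poweredOff', 'standBy', 'unknown'.
        return (host.runtime.powerState == "standBy"
                or host.runtime.inMaintenanceMode)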

Workloads, DRS and HA
When engaged, DPM maintains HA and DRS rules as long as host capacity permits. For most organizations, the key to identifying a DPM candidate is a workload that varies in intensity over the course of the day. A good example is a VI3 environment that provides virtual desktop infrastructure to an office that operates a single shift: the workload is busy during office hours and otherwise idle. Many VMware capacity tools give insight into workload flow and whether a workload is a good DPM candidate. Figure 3 shows an example workload, charted with VKernel, that is a good candidate for DPM.


[Figure 3: An example workload whose daily peaks and idle periods make it a good DPM candidate]

In this case, the host provides virtual machines that act as terminal servers for fixed-duration access, a pretty clear-cut case for DPM on the hosts in this cluster.

Wrap-up and further reading
Like many aspects of VMware Infrastructure 3, DPM's advanced configuration elements extend beyond what I have described here. With the advanced options, you can prescribe a DPM behavior pattern that best suits your needs. One such option is DemandCapacityTargetRatio, which sets, for each host, a utilization target that determines whether to send the host to standby mode. The default is 63% for both CPU and memory utilization. A conservative approach is to back this value down to a lower number to keep hosts available for immediate resource needs. VMware DPM is well documented in the VMware white paper VMware Distributed Power Management Concepts and Use, so be sure to read that as well.
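Advanced options such as this one are stored as key/value pairs on the cluster's DRS configuration; the ratio governs each host's utilization target but, as far as I can tell, is applied at the cluster level. A sketch of lowering it to 50%, again with pyVmomi and the cluster object located as in the first example:

    from pyVmomi import vim

    # Assumes "cluster" was located as in the first sketch.
    # Lower DemandCapacityTargetRatio from its default of 63 to 50;
    # the value is passed as a string.
    spec = vim.cluster.ConfigSpecEx()
    spec.drsConfig = vim.cluster.DrsConfigInfo()
    spec.drsConfig.option = [vim.option.OptionValue(
        key="DemandCapacityTargetRatio", value="50")]
    cluster.ReconfigureComputeResource_Task(spec, modify=True)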

VMware will soon fully support DPM functionality, and the power savings can be significant. If you've been looking for techniques to green your data center, become familiar with Distributed Power Management, determine whether it makes sense for your IT shop, and then test it in your environment.

ABOUT THE AUTHOR: Rick Vanover (MCTS, MCSA) is a systems administrator for Safelite AutoGlass in Columbus, Ohio. Vanover has more than 12 years of IT experience and focuses on virtualization, Windows-based server administration and system hardware.

This was first published in March 2009
