Q

Should you migrate vCenter from a physical to virtual server?

Is the effort to migrate vCenter onto a virtual server something that can help your data center in a pinch?

We have an ESX HA cluster with about 200 virtual machines, and we use two dedicated physical machines in HA for the vCenter server. We want to move these existing physical servers to virtual machines. Besides some cost savings, are there other advantages to migrating vCenter to a virtual server? Are there any disadvantages?

The benefit of having a physical machine to run vCenter Server is that you don't need the ESX infrastructure to be available before you can access vCenter. That seems like an advantage, but it's of limited use.

If the ESX cluster isn't available but the vCenter server still is, that vCenter server is of little use on its own. You might as well install the vCenter server as a virtual machine -- or a virtual appliance -- to increase the flexibility of your data center. This will be especially helpful if you ever need to fail over an entire site, when it would be beneficial to have all resources available as virtual resources.
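One practical point behind that advice: even when vCenter itself is down, every ESXi host can still be managed directly, so running vCenter as a VM doesn't leave you locked out. As a minimal sketch (not part of the original answer; the hostname and credentials are placeholders), a pyVmomi script could connect straight to one host and list the VMs registered on it:

    # Minimal sketch: connect directly to a single ESXi host while vCenter is down.
    # The hostname and credentials are placeholders, not values from the article.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()            # lab-only: skip certificate checks
    si = SmartConnect(host="esxi01.example.com",      # an individual ESXi host, not vCenter
                      user="root", pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:                          # every VM registered on this host
            print(vm.name, vm.runtime.powerState)
        view.Destroy()
    finally:
        Disconnect(si)

The host's own management agent answers this connection, which is why a virtual vCenter can itself be located and restarted this way after an outage.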

This was last published in October 2013



2 comments


Is the VC (vCenter Server) a critical piece of infrastructure? Well, that's a yes and a no.
It's not required for your guests to operate. It's essentially required to deploy, manage and report on the virtual infrastructure, but if it were down for an hour or two, is it really an issue?
Physical servers from reputable brands are very robust and redundant in their own right. It's rare for an HP enterprise server to fail; in practice they just don't. Human error is another consideration.
There are a number of situations where having the VC (vCenter Server) running as a physical server is of considerable advantage.

Let's look at the situation where you have had an entire datacentre outage. Of course, we all want to believe this will never occur because we have one (or many) enterprise datacentres. But hey, it happens.
In the past 20 years I have seen it occur five times, always due to something unanticipated, like a capacitor failure that triggers the VESDA and cuts the power (which is exactly what should happen when the VESDA is activated).
Back to the point.
You have just had a power outage due to a simple puff of smoke from a failed UPS. You can very quickly restore power to the DC. Cool.

You have your team ready.
Power is restored. Environmentals come online.
DataComms confirm they are up and running.
Next is the storage team. All OK there.

Now you want to restart your extensive virtual environment.
You power up your ESXi hosts, but none of the guests power up.
Hmmm.
The vCenter Server is responsible for this task. If it's physical, you can put a finger on a button and power it up, and then bring back your hundreds of guests in a controlled fashion.
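As a rough illustration of that controlled restart (not from the original comment; the vCenter address, credentials and boot order are placeholders), a pyVmomi script along these lines could power guests on in priority order once vCenter is reachable again:

    # Rough sketch: staged guest power-on through vCenter after a full outage.
    # The vCenter address, credentials and priority list are placeholders.
    import ssl
    import time
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    PRIORITY = ["dc01", "sql01", "app01"]             # hypothetical boot order: AD, DB, apps

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vms = {vm.name: vm for vm in view.view}
        view.Destroy()
        for name in PRIORITY:
            vm = vms.get(name)
            if vm and vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
                vm.PowerOnVM_Task()                   # returns a task; poll it in real use
                time.sleep(30)                        # crude pacing between waves
    finally:
        Disconnect(si)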

Let's look at how that differs when it's virtual.
ESXi is up. Now, which one of my 50 hosts has the VC on it?
You could have a rule that forces the VC onto only two or three of the hosts. This would limit the number of hosts you would have to point the VI Client at, but it's a rule you would need to manage as hosts come and go, the VC is rebuilt, and so on.
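As a rough illustration of that approach (not from the original comment; all hostnames, credentials and the VM name are placeholders), assuming the VC is pinned to a short list of candidate hosts, a script can connect to each host directly, look for the vCenter VM by name, and power it on:

    # Sketch: locate a virtual vCenter on a short list of candidate ESXi hosts
    # and power it on by talking to each host directly. Names and credentials
    # are placeholders for illustration.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    CANDIDATE_HOSTS = ["esxi01.example.com", "esxi02.example.com", "esxi03.example.com"]
    VCENTER_VM_NAME = "vcenter01"

    ctx = ssl._create_unverified_context()
    found = False
    for host in CANDIDATE_HOSTS:
        si = SmartConnect(host=host, user="root", pwd="secret", sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            for vm in view.view:
                if vm.name == VCENTER_VM_NAME:
                    print("Found %s on %s" % (vm.name, host))
                    if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
                        vm.PowerOnVM_Task()           # returns a task; poll it in real use
                    found = True
                    break
            view.Destroy()
        finally:
            Disconnect(si)
        if found:
            break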

When you look at it, there are advantages to having the VC physical and advantages to having it virtual. I don't feel either outweighs the other.
My advice:
Have your primary VC physical, and have your second VC, in Linked Mode managing the DR site, virtual.
As an administrator who has managed environments with multiple unconnected networks, each with a complete VMware infrastructure, I feel your answer is short-sighted and not backed by real-world experience.

Having personally experienced issues in which the SAN caused the ESX hosts to panic and basically brought all of the virtual machines to their knees, I can say that having a physical vCenter was key to discovering the problem and recovering from it. One of the incidents revolved around someone incorrectly clearing the config from one of the B-side SAN switches, which resulted in a blank zoning structure getting replicated throughout the B-side SAN.

Yes, this is something that could have been prevented, shouldn't have happened, etc. But this is the real world. People make mistakes. Stuff breaks. Mankind has an uncanny ability to inadvertently find the weakness in anything.

As I mentioned, I oversaw multiple networks with full VMware infrastructures. We tried to run virtual vCenter servers; heck, even the VMware rep said we should be virtual. But when an electrician plugged in a power strip that was mis-wired and shorted the power circuit, causing a spike that damaged a lot of equipment, a virtual vCenter became a liability.

With our View infrastructure we implemented Heartbeat. While it was a valiant attempt by VMware to put some resiliency behind the cornerstone of their product set, it fell flat. Flat enough that VMware pulled it from its offering.

I am disappointed that for version 6 their answer for resiliency in vCenter is HA or Fault Tolerance. Again, they rely on their own product to be infallible, which it is not. To me it is the best product on the market, but it has weaknesses. According to VMware support, if a path to your storage goes dead, even if you recover the path, you should reboot all of your hosts at some point.

In the instance where the B-side SAN took down my environment, when the zoning on the B side vanished, the switches quit allowing the hosts to talk to storage via that path. There was a perfectly good A-side path still operational, but over time the hosts kept trying to reconnect the B path, and eventually the systems became unresponsive because they were overloaded with path retries. The SAN switches saw no issue; they had been directed to no longer grant access. They didn't report a down path to the hosts, because the path didn't go down, it just went away. I understand VMware has built in as much robustness as possible, but there are scenarios that can bring your entire world to its knees.

Having a physical vCenter with SQL and DNS installed saved my team's job more than once, whether the fault was ours or someone else's. A vCenter server with a DNS zone specifically for your virtual architecture removes the reliance on any outside resource. vSphere has become more dependent on DNS over the last few versions; without it you lose functionality.

I trust vSphere to host every workload we run, from Active Directory/DNS to VDI to virtual Cisco Call Manager, but as I said, stuff happens. Providing yourself with the tools to find and resolve problems no matter what happens will make your life easier. When everyone is panicking and management is screaming for updates is not the time to discover you cannot see into the environment you have built.

Just my hard-earned two cents.
