
Is your organization ready to virtualize SQL Server?

Storage is bought for capacity, but virtualized applications run better when the right type of storage is in place.

Storage is valued for its ability to hold data, but if it isn't properly tuned for your applications, you may not get the results you expect once those applications are virtualized.

Michael Webster is one of the authors of Virtualizing SQL Server With VMware, which guides administrators in the steps required to virtualize the popular database application. One section of the book centers on how performance of these virtualized programs hinges on proper storage architecture.

SearchVMware talked with Michael Webster about why storage is so important and also about when an organization knows it's ready to virtualize SQL Server.


Why is it important to design storage for performance before capacity?

Michael Webster: Storage performance, especially where databases are concerned, causes 80% of problems in virtualized environments. In order to achieve the performance, you need enough storage devices, and that will take care of the capacity. If you only take care of capacity, you end up with far more space than you can effectively use, because the performance isn't there behind it.
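
To make that arithmetic concrete, here is a rough sizing sketch. The disk IOPS, disk sizes and workload figures are assumed example values, not numbers from Webster or the book; the point is only that sizing for performance tends to deliver capacity as a by-product.

```python
import math

# Illustrative sizing sketch: design for IOPS first, then check capacity.
# All figures (IOPS per disk, disk size, workload numbers) are assumed
# example values, not recommendations from the interview or the book.

required_iops = 20_000       # peak read/write IOPS the SQL Server workload needs
required_capacity_tb = 10.0  # usable capacity the databases need, in TB

disk_iops = 180              # assumed IOPS per spindle
disk_capacity_tb = 1.2       # assumed usable capacity per spindle, in TB

disks_for_iops = math.ceil(required_iops / disk_iops)                    # 112
disks_for_capacity = math.ceil(required_capacity_tb / disk_capacity_tb)  # 9
disks_needed = max(disks_for_iops, disks_for_capacity)

print(f"Disks for performance: {disks_for_iops}")
print(f"Disks for capacity:    {disks_for_capacity}")
print(f"Disks to deploy:       {disks_needed}")

# Sizing for the 112-disk performance requirement leaves roughly 134 TB of
# capacity -- far more than the 10 TB actually needed -- which is Webster's
# point: take care of performance and capacity takes care of itself.
```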

How do you know when an organization is ready to virtualize SQL Server?

Webster: With the way hypervisors have developed, there are very few databases that can't be virtualized. When customers are looking to improve service levels, increase availability, performance, reliability, improve development and test life cycles -- these are places where virtualization can come in. A database doesn't know it's virtualized and, as my co-author Michael Corey says, you don't need to tell it. The barriers to virtualizing databases are gone. There are no technical issues, [but] training and operational processes may need adjustment. This is more of the consideration around virtualizing databases now. 

Which resource – CPU, memory or networking – has the biggest impact on storage architecture and performance, and why?

Webster: Memory has the biggest impact for a SQL database. The more memory you have and assign to the buffer cache, the less read I/O you generate and the more optimized reads and writes can be. The buffer cache is just a big cache sitting in front of the storage. Therefore, memory has the biggest impact on storage architecture, performance and investments.

When talking about memory, we should also look at flash, which is similar to nearline memory. It is having a massive impact on how databases are deployed, as are the in-memory capabilities of SQL Server 2014.
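
To put rough numbers on why buffer cache memory drives down the read I/O that reaches the storage, here is a back-of-the-envelope sketch. The workload figure and hit ratios are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope sketch: how the buffer cache hit ratio changes the
# read I/O that actually reaches storage. The workload figure and hit
# ratios are assumed for illustration; they are not from the interview.

logical_reads_per_sec = 50_000   # reads the database engine issues

for hit_ratio in (0.90, 0.95, 0.99):
    physical_reads = logical_reads_per_sec * (1 - hit_ratio)
    print(f"Buffer cache hit ratio {hit_ratio:.0%}: "
          f"~{physical_reads:,.0f} physical reads/sec reach the storage")

# Moving from a 90% to a 99% hit ratio (for example, by assigning more
# memory to the buffer pool) cuts physical read I/O tenfold, which is why
# memory sizing has such a large effect on the storage design.
```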


What are some possible issues that could arise if you don't standardize design across all the VM templates?

Webster: The biggest impact is complexity and increased operational costs. You really want to keep things simple and as standard as possible, while meeting all of the business requirements, of course. If you have too much variation, your whole operational model will be more complex. You will need to modify too many things during deployment, and [then there are] too many things to track during changes. There is a higher risk of configuration error in a non-standardized environment and also a higher risk of performance issues that can become harder to troubleshoot. Having everything built into a standardized template means you know you'll get all your best practices applied consistently every time. 
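
One lightweight way to keep that consistency is to describe the standard template in code and check deployed VMs against it. The sketch below uses hypothetical setting names and values simply to illustrate the idea; it is not an actual VMware template schema.

```python
# Minimal sketch of checking a VM's settings against a standard template
# definition. Setting names and values are hypothetical examples, not an
# actual VMware template schema.

STANDARD_SQL_TEMPLATE = {
    "scsi_controller": "pvscsi",
    "nic_type": "vmxnet3",
    "disk_layout": ["os", "data", "log", "tempdb"],
    "memory_reservation": True,
}

def find_drift(vm_settings: dict) -> dict:
    """Return the settings that differ from the standard template."""
    return {
        key: (expected, vm_settings.get(key))
        for key, expected in STANDARD_SQL_TEMPLATE.items()
        if vm_settings.get(key) != expected
    }

# Example: a hand-built VM that skipped the memory reservation.
drift = find_drift({
    "scsi_controller": "pvscsi",
    "nic_type": "vmxnet3",
    "disk_layout": ["os", "data", "log", "tempdb"],
    "memory_reservation": False,
})
print(drift)  # {'memory_reservation': (True, False)}
```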


How has hyper-converged infrastructure changed how IT shops implement storage? Is that change good or bad for virtualizing applications?

Webster: Hyper-convergence really simplifies how customers deploy software-defined data centers. It removes the infrastructure complexity from the equation so that admins can focus more on the applications and fulfilling application requirements, without having to worry as much about the storage underneath. It removes a lot of the moving parts and reduces operating expense and total cost of ownership. You can scale out as and when you need, support lots of databases and VMs with consistent performance, and know that this will keep pace as things change and expand. The biggest change is how fast it can be implemented, so it offers much more agility and faster time to market. Hyper-convergence will be the predominant way that most applications are deployed inside organizations in the future, as it offers a cloud-like experience -- infrastructure on demand, but under the complete control of the organization -- while still offering extension to cloud services. Over time, SANs will just be there to manage physical systems, Unix and mainframes, but that transition will be gradual, and it will be a very different landscape in 10 years.

