SSDs are designed to emulate conventional magnetic hard drives, even using standard physical drive interfaces such as Serial Attached SCSI (SAS), Fibre Channel (FC) and the older Serial ATA (SATA). This means you can install an SSD in a local server or shared storage array just like a magnetic hard drive, format it with VMware's Virtual Machine File System (VMFS) and use it as a datastore.
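As a sketch of that workflow, the general command-line steps on an ESXi host look like the following. This is an illustrative CLI fragment, not a verified recipe: the `naa.*` device identifier and the datastore label are placeholders, and exact output fields can vary by ESXi version.

```shell
# List attached storage devices; SSDs are flagged in the output
# (look for the "Is SSD" attribute on each device).
esxcli storage core device list

# Create a VMFS-5 datastore on the SSD's first partition.
# The device path and the "SSD-datastore" label are hypothetical examples.
vmkfstools -C vmfs5 -S SSD-datastore /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1
```

Once formatted, the new datastore can be selected for VM placement like any other, though as the next section notes, mixing it into the same tiers as magnetic disks squanders its performance.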
In practice, however, SSD performance differs so radically from that of magnetic hard drives that SSDs are not deployed in the same disk or RAID groups -- or even the same storage tiers -- as HDDs. Instead, SSDs are typically segregated into their own groups or tiers, where performance won't be encumbered by slower magnetic media. As a result, SSDs are best used to support the VMs that exhibit the most storage activity or are most sensitive to storage performance and latency.
SSDs are also particularly useful for cache storage in virtualized environments. One example is the swap cache. The ESXi hypervisor can use SSD space to swap content between memory and storage -- much like a page swap file -- as a way to over-commit memory on the host server. In many cases, ESXi techniques like page sharing and memory compression can enable some level of memory over-commitment without much impact on VM performance. When there simply isn't enough physical memory to go around, page swapping uses disk space as supplemental memory. Swapping imposes a huge performance hit on VMs -- or any application -- but the solid-state memory in SSDs makes the swap process much faster and eases some of the performance penalty. ESXi allows administrators to select the datastore and set the desired size of the SSD swap space.
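The mechanics of over-commitment can be sketched in a few lines of Python. This is a hypothetical model, not VMware code: `ram` stands in for the host's physical page frames and `swap` for the SSD-backed swap space, with the least recently used page evicted when physical memory runs out.

```python
from collections import OrderedDict

class OvercommittedMemory:
    """Toy model of memory over-commitment with LRU page swapping."""

    def __init__(self, ram_pages):
        self.ram_pages = ram_pages     # physical page frames available
        self.ram = OrderedDict()       # page id -> data, in LRU order
        self.swap = {}                 # swapped-out pages (SSD-backed space)

    def touch(self, page, data=None):
        """Access a page, swapping in/out as needed; returns its data."""
        if page in self.ram:
            self.ram.move_to_end(page)          # mark most recently used
        else:
            if page in self.swap:
                data = self.swap.pop(page)      # swap in from SSD
            if len(self.ram) >= self.ram_pages:
                victim, vdata = self.ram.popitem(last=False)
                self.swap[victim] = vdata       # swap out LRU page
            self.ram[page] = data
        return self.ram[page]
```

Every trip through `self.swap` models a disk access, which is why the medium backing the swap space matters: the eviction logic is identical, but each swap-in completes far faster on flash than on a spinning disk.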
SSDs can also be employed as a flash read cache formatted with VMware's Virtual Flash File System (VFFS). This allows the SSD to serve both as a swap cache and as a dedicated write-through read cache for VMs whose virtual disks may reside on conventional magnetic hard drives. Caches are normally discarded when a VM is suspended or powered off. Caches can also migrate with VMs if the source and destination systems have similar local HDD and SSD drives; if not, the cache is discarded during VM migration and a new cache is created on the destination system. Remember that the benefit of flash read cache will depend on the workload: read-intensive VMs often benefit the most, and caching their reads locally can lower read I/O demands on shared storage (such as the SAN).
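The write-through behavior described above can be sketched as follows. This is a minimal conceptual model, not VMware's implementation: reads are served from the fast flash copy when possible, while writes always go to backing storage and update the cache in step, so discarding the cache never loses data.

```python
class WriteThroughCache:
    """Toy model of a write-through read cache in front of slow storage."""

    def __init__(self, backing):
        self.backing = backing   # slow shared storage, e.g. a SAN LUN
        self.cache = {}          # fast SSD-resident copies of blocks
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1                               # served from flash
        else:
            self.misses += 1
            self.cache[block] = self.backing[block]      # fill from the SAN
        return self.cache[block]

    def write(self, block, data):
        self.backing[block] = data   # write-through: backing store first
        self.cache[block] = data     # keep the cached copy consistent

    def discard(self):
        self.cache.clear()           # e.g. on VM power-off or migration
```

Because the backing store always holds the authoritative copy, dropping the cache on suspend, power-off or migration is safe; the only cost is re-warming it with fresh read misses on the destination host.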
Finally, SSDs are intended to integrate with VMware's Virtual SAN (VSAN) technology in ESXi 5.5 and later. VSAN allows the local storage of host servers to be pooled and provisioned to VMs based on quality-of-service demands. This means SSDs can also be pooled and provisioned to performance-sensitive workloads. VSAN imposes different rules on storage and SSDs, however. For example, an SSD claimed by VSAN cannot also serve the flash read cache, and VSAN-claimed SSDs are not formatted with VMFS or any conventional file system. This can throw a wrench into existing SSD-based flash configurations that move to VSAN deployments.