
How are SSD garbage collection, wear leveling and TRIM different?

While using solid-state drives in a vSphere environment can give a boost to VMs that require high performance, it's important to know how certain longevity features function in the storage hardware.

Solid-state drives have become an essential asset for high-performance storage in virtualized servers in VMware shops, but it's important for IT planners to understand the best uses for SSD investments and to recognize the limitations that can influence SSD working life and reliability.

These considerations are even more vital when multiple workloads depend on SSD availability and performance. Wear leveling, garbage collection and TRIM are three technologies used to extend the life of flash memory devices and related products such as solid-state drives (SSDs).

Wear leveling

Magnetic storage media has an indefinite working life because the platter coating does not wear and the read/write heads never contact the media. As storage techniques emerged, there was no problem writing and over-writing the same places on magnetic media when data changed frequently. These so-called "hot spots" had no real impact on magnetic disk reliability.

However, flash memory cells have a finite working life and can fail after several thousand program/erase (P/E) cycles. This poses a problem for SSDs because allowing write-intensive applications to erase and rewrite the same series of memory blocks -- while other memory blocks remain relatively untouched -- causes flash memory cell failures far sooner. The technique of wear leveling spreads new P/E cycles across the entire space of the flash chip. Wear leveling doesn't make flash chips any more reliable, but spreading the usage helps avoid the storage hot spots that would otherwise cause premature failures.
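The idea can be sketched in a few lines. This is a minimal simulation, not real firmware: the block count, the counters and the selection function are all hypothetical, and real flash translation layers use far more sophisticated policies.

```python
# Minimal sketch of wear leveling, assuming a hypothetical
# flash translation layer managing BLOCK_COUNT erase blocks.
BLOCK_COUNT = 8

erase_counts = [0] * BLOCK_COUNT  # P/E cycles consumed per block

def pick_block_for_write():
    """Direct the next program/erase cycle to the least-worn block."""
    return min(range(BLOCK_COUNT), key=lambda b: erase_counts[b])

def program_erase(block):
    erase_counts[block] += 1

# Simulate a write-intensive workload: 80 rewrites are spread
# evenly across all blocks instead of hammering one hot spot.
for _ in range(80):
    program_erase(pick_block_for_write())

print(erase_counts)  # every block absorbed 10 P/E cycles
```

Without the least-worn selection, a hot workload would push one block toward its P/E limit while the others sat idle; with it, wear is distributed evenly.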

Garbage collection

Flash memory is organized into blocks, each composed of a series of pages. Data can be written to a page at any time as long as the page is unwritten or erased. However, flash memory cannot erase individual pages within a block; the entire block must be erased before the pages within the block are freed for re-use. This means that changed data winds up being written to subsequent pages within the same block.

To free the old pages and preserve the updated pages, the current pages are first copied to another available block, while the old or unneeded pages -- the "garbage data pages" -- are discarded. So the newly written block winds up holding just the current pages and the prior block can be erased and freed for re-use. The SSD garbage collection process in flash memory is almost always implemented in concert with wear leveling.
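The copy-then-erase step described above can be illustrated with a small sketch. The block layout and page representation here are hypothetical simplifications: each block is a fixed-size list of (data, valid) pairs, with None standing in for an erased page.

```python
# Minimal sketch of SSD garbage collection, assuming a hypothetical
# layout of PAGES_PER_BLOCK pages per erase block.
PAGES_PER_BLOCK = 4

# Pages B and D have been superseded -- they are the "garbage data pages".
old_block = [("A", True), ("B", False), ("C", True), ("D", False)]
free_block = [None] * PAGES_PER_BLOCK

def collect_garbage(source, target):
    """Copy only valid pages to the target block, then erase the source."""
    i = 0
    for data, valid in source:
        if valid:                # garbage pages are simply discarded
            target[i] = (data, True)
            i += 1
    source[:] = [None] * PAGES_PER_BLOCK  # whole-block erase frees it

collect_garbage(old_block, free_block)
print(free_block)  # only the current pages A and C survive
print(old_block)   # fully erased block, ready for re-use
```

Note that the erase happens at block granularity, never per page, which is exactly why the valid pages must be relocated first.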

TRIM

There is a slight disconnect between operating systems and storage devices. An operating system "deletes" an HDD file by noting its clusters as free in a table. The OS doesn't need to tell the HDD anything about the deletion -- the HDD will overwrite the freed clusters on the disk as needed.

Flash memory and SSDs don't work the same way. Flash only recognizes that a page is stale when a new write is attempted; only then is the old data marked for discard and the new data written to that location. In other words, if you delete a file stored in flash memory, the OS may think that space is free, but the SSD will continue to hold and move the pages of the old file until a new file tries to use that space. Until then, the SSD incurs extra erase cycles and slower writes because it is still carrying that old data.

The TRIM function allows an operating system to tell the flash controller that certain data pages are outdated or invalid, and the garbage collection process can skip the old data instead of retaining it. TRIM allows the SSD to recognize freed space much sooner, recover that freed space earlier, collect garbage more effectively, and run more efficiently.
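Conceptually, TRIM is just an invalidation hint from the OS to the flash controller. The sketch below assumes a hypothetical page map keyed by logical page number; the map, the trim function and the file names are all illustrative, not a real ATA TRIM implementation.

```python
# Minimal sketch of TRIM, assuming a hypothetical page map on the SSD.
# Without TRIM, the controller would still treat the deleted file's
# pages as valid and keep copying them during garbage collection.
page_map = {0: ("file.txt part 1", True), 1: ("file.txt part 2", True)}

def trim(pages):
    """OS tells the flash controller these logical pages are invalid."""
    for lpn in pages:
        data, _ = page_map[lpn]
        page_map[lpn] = (data, False)  # garbage collection can now skip them

# The OS deletes file.txt and issues TRIM for its two pages.
trim([0, 1])
live_pages = [lpn for lpn, (_, valid) in page_map.items() if valid]
print(live_pages)  # nothing left for garbage collection to relocate
```

Because the pages are flagged invalid immediately, the next garbage-collection pass can discard them instead of copying them forward, which is the efficiency gain the article describes.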


This was last published in January 2015


