SSD’s on Storage Spaces are killing your VM’s performance

We’re wrapping up a project that involved Windows Storage Spaces on Server 2012 R2. I was very excited to get my hands on new SSDs and test out Tiered Storage Spaces with Hyper-V. As it turns out, the newest technology in SSD drives, combined with the default configuration of Storage Spaces, is killing the performance of your VMs.

First, it’s important to understand sector sizes on physical disks, as this is the crux of the issue. The sector size is the amount of data the controller inside your hard disk actually reads from or writes to the storage medium in a single operation. Since the invention of the hard disk, the sector size has been 512 bytes, and many other aspects of storage are built on that premise. Until recently this did not pose an issue, but as disks grew larger and larger it created capacity problems. In fact, the 512-byte sector is the reason for the 2.2TB limit on MBR partitions.
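The arithmetic behind that limit is simple: the MBR partition table stores sector counts as 32-bit values, so with 512-byte sectors the largest addressable capacity is 2^32 x 512 bytes. A quick sanity check in PowerShell:

$maxBytes = [math]::Pow(2, 32) * 512              # 2^32 addressable sectors x 512 bytes each
"{0:N0} bytes (~{1:N1} TB)" -f $maxBytes, ($maxBytes / 1e12)
# 2,199,023,255,552 bytes (~2.2 TB)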

Disk manufacturers realized that 512-byte sector drives would not be sustainable at larger capacities, and started introducing 4k sector, aka Advanced Format, disks beginning in 2007. In order to ensure compatibility, they utilized something called 512-byte emulation, aka 512e, where the disk controller accepts reads and writes of 512 bytes but uses a physical sector size of 4k. To do this, an internal cache temporarily holds the 4k of data from the physical medium, and the disk controller manipulates the 512 bytes of data appropriately before writing the full sector back to disk or returning the requested 512 bytes to the system. Manufacturers took this additional processing into account when spec’ing the performance of these drives. There are also 4k native drives, which use a physical sector size of 4k and do not support this 512-byte translation in the disk controller – instead, they expect the system to send 4k blocks to disk.

The key thing to understand is that since SSDs were first released, they have always had a physical sector size of 4k – even if they advertise 512 bytes. They are, by definition, either 512e or 4k native drives. Additionally, Windows accommodates 4k native drives by performing the same Read-Modify-Write (RMW) operation at the OS level that is normally performed inside the disk controller on 512e disks. This means that if the OS sees you’re using a disk with a 4k sector size but receives a 512-byte write, it will read the full 4k of data from disk into memory, replace the 512 bytes of data in memory, then flush the 4k of data from memory back down to disk.
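To make the RMW cycle concrete, here is a rough conceptual sketch – not real storage code; Read-Sector and Write-Sector are hypothetical stand-ins for what the 512e controller firmware (or Windows itself, on a 4k native disk) does internally when a 512-byte write lands on a 4k sector:

function Write-512ByteBlock {
    param($Disk, [long]$Lba512, [byte[]]$Data512)

    $sector4k = [long][math]::Floor($Lba512 / 8)   # eight 512-byte slots fit in one 4k physical sector
    $offset   = ($Lba512 % 8) * 512                # where this 512-byte slice sits within the 4k sector

    $buffer = Read-Sector -Disk $Disk -Sector $sector4k         # Read: pull the whole 4k sector into cache
    [Array]::Copy($Data512, 0, $buffer, $offset, 512)           # Modify: overwrite only the 512 bytes
    Write-Sector -Disk $Disk -Sector $sector4k -Data $buffer    # Write: flush the entire 4k sector back
}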

Enter Storage Spaces and Hyper-V. Storage Spaces understands that physical disks may have 512-byte or 4k sector sizes, and because it’s virtualizing storage, it too has a sector size associated with each virtual disk. Using PowerShell, we can see these sector sizes:

Get-PhysicalDisk | Sort-Object SlotNumber | Select-Object SlotNumber, FriendlyName, Manufacturer, Model, PhysicalSectorSize, LogicalSectorSize | Format-Table

[Screenshot: Get-PhysicalDisk output showing PhysicalSectorSize and LogicalSectorSize for each disk]

Any disk whose PhysicalSectorSize is 4k but whose LogicalSectorSize is 512b is a 512e disk; a disk with a PhysicalSectorSize and LogicalSectorSize of 4k is a 4k native disk; and any disk with 512b for both PhysicalSectorSize and LogicalSectorSize is a standard 512-byte native HDD.
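If you have a shelf full of disks, a calculated property makes that classification easier to read at a glance; a small sketch along those lines:

Get-PhysicalDisk | Sort-Object SlotNumber |
    Select-Object SlotNumber, FriendlyName, PhysicalSectorSize, LogicalSectorSize,
        @{Name='SectorType'; Expression={
            if     ($_.PhysicalSectorSize -eq 4096 -and $_.LogicalSectorSize -eq 512)  { '512e' }
            elseif ($_.PhysicalSectorSize -eq 4096 -and $_.LogicalSectorSize -eq 4096) { '4k native' }
            elseif ($_.PhysicalSectorSize -eq 512  -and $_.LogicalSectorSize -eq 512)  { '512 native' }
            else                                                                       { 'other' }
        }} |
    Format-Table -AutoSize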

The problem with all of this is that when creating a virtual disk with Storage Spaces, if you did not specify a logical sector size when the storage pool was created (the -LogicalSectorSizeDefault parameter of the New-StoragePool cmdlet), the virtual disk will be created with a LogicalSectorSize equal to the greatest PhysicalSectorSize of any disk in the pool. This means that if you have SSDs in your pool and you created everything using the GUI, your virtual disk will have a 4k LogicalSectorSize. If a 512-byte write is sent to a virtual disk with a 4k LogicalSectorSize, the OS performs the RMW – and if your physical disks are actually 512e, they too will have to perform RMW in the disk controller for each 512 bytes of the 4k write received from the OS. That’s a serious performance hit, and it can cause you to see about 1/4 of the advertised write speeds and 8x the IO latency.
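The way around this is to pin the logical sector size when the pool itself is created. A minimal sketch, assuming you want 512-byte logical sectors; pool and virtual disk names and sizes are placeholders, and note that the sector-size knob lives on New-StoragePool, not New-VirtualDisk:

# Create the pool with a 512-byte default logical sector size
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'Pool01' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks `
    -LogicalSectorSizeDefault 512

# Virtual disks carved from this pool inherit the 512-byte logical sector size
New-VirtualDisk -StoragePoolFriendlyName 'Pool01' -FriendlyName 'VDisk01' `
    -Size 1TB -ResiliencySettingName Mirror -ProvisioningType Fixed
Get-VirtualDisk -FriendlyName 'VDisk01' |
    Select-Object FriendlyName, LogicalSectorSize, PhysicalSectorSize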

Why does this matter with Hyper-V? Unless you’ve specifically created your VHDx files with 4k sectors, they are likely presenting 512-byte sectors to the guest, meaning every write to a VHDx stored on a Storage Spaces virtual disk with a 4k LogicalSectorSize performs this RMW operation in memory at the OS level and then again at the disk controller.
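You can check what sector size an existing VHDx presents to the guest, and create new ones with 4k sectors, using the Hyper-V PowerShell module. A quick sketch with placeholder paths – just remember the guest OS has to support 4k sectors before you go that route:

# Inspect the sector sizes an existing VHDx reports
Get-VHD -Path 'D:\VMs\guest01.vhdx' | Select-Object Path, LogicalSectorSize, PhysicalSectorSize

# Create a new VHDx that presents 4k sectors, avoiding the OS-level RMW on a 4k virtual disk
New-VHD -Path 'D:\VMs\guest02.vhdx' -SizeBytes 100GB -Dynamic `
    -LogicalSectorSizeBytes 4096 -PhysicalSectorSizeBytes 4096

The proof is in the IOMeter tests: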

32K Request, 65% Read, 65% Random

Virtual Disk with 4k LogicalSectorSize

[IOMeter results screenshot: RMW-IOMeter]

Virtual Disk with 512b LogicalSectorSize

[IOMeter results screenshot: 512-IOMeter]

32 thoughts on “SSD’s on Storage Spaces are killing your VM’s performance”

  1. Very interesting! I thought I would check my own setup, but apparently my HDDs AND my SSDs report that they are both physically and logically 512b. But to my surprise, every storage pool is set to 512 logical and 4k physical.

    So why the 4k physical? And wouldn’t that give a problem similar to what you describe (just the other way around)? The processor having to convert every 512b operation to 4k and then back again before hitting the physical disk?

  2. I created a storage tier from a pool with 2012 R2 that includes a 250GB SSD and 3TB WD RED. The LogicalSectorSize is 512 which matches my BOOT SSD (also 250GB).

    CrystalDiskMark shows the tiered pool exceeding the BOOT SSD in all tests.

    Also, I can’t find any reference to how a LogicalSectorSize can be defined in the link to New-VirtualDisk. Can you clarify your results and explain the optimum sector size please?

  3. Pingback: Shared VHDX on Storage Spaces - Do you IT?

  4. The new-virtualdisk cmdlet doesn’t let me specify the logical sector size. How do you specify a LogicalSectorSize for a virtual disk in a storage pool? Did you mean to link to the new-vhd cmdlet instead?

  5. Hi,

    How did you format your virtual disks with 512b logical sector sizes?
    I have looked at the new-virtualdisk cmdlet but it does not appear to have any option to specify this.
    There is the ability to specify a default logical sector size when creating a new storage pool. Is this how you managed to do this?

  6. I have a GUI workaround to get the LogicalSectorSizeDefault set to 512 that may also work for you if you have some 512 disks in your JBOD. Create your storage pool with only the 512 disks selected, then add the SSDs afterwards using the Add Physical Disk feature.

    I was having trouble getting the PowerShell command just right.

  7. Hi Jeff,

    Thanks for confirming that it is set using LogicalSectorSizeDefault.

    Robert:

    Are you saying that if I have a storage pool already with 512b disks then adding SSD’s after will continue to create 512b virtual disks?

    • No. This gets set when first creating the storage pool. Adding SSDs after should not impact the logical sector size.

  8. Hi, very interesting!
    But I can’t create the pool with PowerShell, though I can in the GUI. The error is “One of the physical disks specified is not supported by this operation”.
    Do you know anything about this?

  9. Do you mind elaborating on what IOMeter settings you used in order to obtain these benchmark differences? I set up a similar comparison on our tiered storage spaces and obtained very similar results regardless of the logical sector size of the storage pool, contrary to what you had found.

  10. Hi!

    Did you run IOMeter on the virtual disk directly or within a VM? I did some tests too, but cannot confirm the performance impact found here. I have a mix of 4k and 512b drives and tested with pools of 4k and 512b sector size – no difference.

  11. Robert, Jeff,
    I am confused about the solution, as some of my drives are Advanced Format (512e). Can you confirm that the solution is as follows, for the non-experts among us?
    1. The storage space is on the host. Check the physical sector size with Get-PhysicalDisk before creating the space.
    2. Create the pool with HDDs only, using a 512b sector size if that is the HDD default.
    3. Add the SSDs via the GUI after creating the space above; this forces the SSDs to use 512b (is this correct?).
    4. If the HDD default is 4k, don’t mix it with 512b HDDs, or create the 512b pool first as above.
    5. If the HDDs are 4k physical, they are fine with SSDs as those are also 4k.
    Thanks

    • You’ll need to add ALL of the disks to the storage server BEFORE creating the pool. Then use the Get-PhysicalDisk command to check the physical and logical sector sizes. If any of them have a logical sector size of 512, set your storage pool logical sector size to 512 – the disks using 512e can already handle the RMW operation in firmware. If ALL of your disks are 4k logical sector size, you’ll need to set the storage pool to 4k and ensure your virtual disks are using 4k sectors.

  12. So are you recommending that the storage pool/virtual disk be created with 512 for VMs?
    PS C:\Windows\system32> get-virtualdisk -friendlyname VD5 | select LogicalSectorSize

    LogicalSectorSize
    —————–
    512

    • Let me clarify: to avoid the RMW penalty when using tiered storage, the storage pool was created using just the HDDs (512b) and the SSDs (4k) were added afterwards, resulting in a native 512b storage pool. The virtual disk created from that pool is then 512b as well.

  13. Pingback: Microsoft Storage Spaces 2016: Storage Tiering NVMe + SSD Mirror + HDD Parity – Get-SysadminBlog

  14. Pingback: True Cloud Storage – Storage Spaces Direct | Jeff's Blog

    • That’s based on the profile that was developed from capturing traffic of actual VM workloads. Each application has different IO patterns, and as a service provider, we needed an “average” profile to determine how many customers can be supported by the underlying hardware.

  15. Hi Jeff,
    So it makes sense then to always use 4k sector VHDX as the basis for Storage Spaces, since they align perfectly on 512-byte block size drives as well.
    Would that work?

    • Yes, as long as your guests support 4k sector size, it’s the recommended configuration. We had to support down-level OSes that did not recognize 4k sector size on OS disk.

  16. Keep in mind that if, at a later point in time, you install a native 4k disk, then you will need to create the virtual disk with a LogicalSectorSize of 4k.

  17. So my SSDs have a physical and logical sector size of 512, and my HDDs have physical and logical sector size of 4K. I have created tiered virtual disks and they have a Logical Sector Size of 4K. Will I be affected by the RMW penalty?

  18. Great article.
    You might consider changing the title from: SSD’s on Storage Spaces are killing your VM’s performance
    to something like: Misconfigured SSD’s on Storage Spaces are killing your VM’s performance
