Microsoft Storage Spaces 2016 Performance: NVMe + SSD + Parity

By tiering storage we get acceptable performance out of a parity volume, whose write speeds are normally unusable.


I thought I would share with you the real-world performance of Microsoft Storage Spaces 2016. As a reminder, my setup for this test is 1x NVMe SSD acting as cache, 2x Samsung 850 EVO drives in a mirror tier, and 3x WD Red drives + 3x WD RE3 drives in a parity space. The volume comprises 200GB of SSD space, 5.4TB of HDD space, and 30GB of write cache.
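For context, a tiered layout like this is typically built along these lines. This is a hedged sketch, not the exact commands I used for this box: the pool, tier, and volume names are placeholders, and the sizes are rounded.

```powershell
# Illustrative only: create a pool, a mirrored SSD tier, and a parity HDD
# tier, then carve a tiered volume with a write-back cache. Names and sizes
# here are placeholders, not the exact ones from this system.
$disks = Get-PhysicalDisk -CanPool $true
$pool  = New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName "*" -PhysicalDisks $disks

$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" `
    -MediaType SSD -ResiliencySettingName Mirror
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" `
    -MediaType HDD -ResiliencySettingName Parity

New-Volume -StoragePool $pool -FriendlyName "TieredVol" -FileSystem ReFS `
    -DriveLetter E -StorageTiers $ssdTier, $hddTier `
    -StorageTierSizes 200GB, 5TB -WriteCacheSize 30GB
```

The write-back cache is what hides the parity tier's poor write latency: small writes land on fast media first and are destaged to the parity tier in larger sequential chunks.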

I ran this test twice: once directly on the hypervisor, and once in the guest VM. The guest VM uses a dynamically expanding VHDX disk. Prior to running the test I ran the command

Optimize-Volume -DriveLetter E -TierOptimize

to ensure that the volume was in a good state. I did not pin the IOMeter test file to either the HDD or SSD tier. The test was run using the 512 B 75% read preset and the 4 KiB 75% read preset, each set to run for 3 minutes against a 100GB test file. Here are the results.
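For reference, if you did want to pin the test file to a specific tier, Storage Spaces supports that with Set-FileStorageTier. A minimal sketch, assuming a tier named "SSDTier" and the default IOMeter test file name, neither of which is from my actual run:

```powershell
# Hypothetical: pin a file to the SSD tier so it never destages to HDD.
# Tier and file names are placeholder assumptions.
$tier = Get-StorageTier -FriendlyName "SSDTier"
Set-FileStorageTier -FilePath "E:\iobw.tst" -DesiredStorageTier $tier

# The pin takes effect on the next tier optimization pass:
Optimize-Volume -DriveLetter E -TierOptimize

# To remove the pin later:
Clear-FileStorageTier -FilePath "E:\iobw.tst"
```

Leaving the file unpinned, as I did, lets the heat map decide placement, which better reflects day-to-day behavior.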

SERVER TYPE: Storage Spaces Host
CPU TYPE / NUMBER: i7 920
HOST TYPE: Server 2016
STORAGE TYPE / DISK NUMBER / RAID LEVEL: SSD + HDD Tiered Parity

Test name                    Avg latency (ms)  Avg IOPS  Avg MBps  CPU load
512 B; 75% read; 0% random   0.04              23643     11        15%
4 KiB; 75% read; 0% random   0.23              4314      16        10%

SERVER TYPE: Windows Hyper-V Guest
CPU TYPE / NUMBER: i7 920 (4 virtual cores)
HOST TYPE: Server 2012 R2 Virtual
STORAGE TYPE / DISK NUMBER / RAID LEVEL: SSD + HDD Tiered Parity

Test name                    Avg latency (ms)  Avg IOPS  Avg MBps  CPU load
512 B; 75% read; 0% random   0.12              8071      3         0%
4 KiB; 75% read; 0% random   1.22              816       3         0%

As you can see, the performance difference between the host and the VM is pretty apparent. This performance is perfectly fine for the Plex file server I run on this system; however, if I were running something more IO-heavy, like SQL Server, this wouldn't be good enough, and I would probably need to look at a mirrored space instead.
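For what it's worth, moving to a mirrored space is mostly a matter of the resiliency setting chosen when the tier is created. A sketch, with a placeholder pool and tier name:

```powershell
# Illustrative: an HDD tier with Mirror resiliency instead of Parity trades
# usable capacity for much better write IOPS. Names are placeholders.
New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDD_Mirror" `
    -MediaType HDD -ResiliencySettingName Mirror
```

The cost is capacity: a two-way mirror gives you half the raw HDD space, versus roughly two-thirds for single parity across six drives.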