Performance of Microsoft Storage Spaces 2016 on a Dell PERC H700 RAID controller.

Before we get into this post, let me be up front and say that this is a completely unsupported configuration as far as Microsoft is concerned. Much like UnRAID or ZFS, Storage Spaces wants direct access to the disks to work properly. You can force Storage Spaces to work with RAID volumes, but if you have problems, Microsoft support will not assist. Your biggest issue will be handling failures, since Storage Spaces can't read SMART data through the RAID controller and therefore won't be able to accurately predict drive failures.

Now, with that out of the way, here is the setup: a Dell T710 as the server, a PERC H700 with 512MB BBWC, 2x Samsung 850 EVO + 2x 500GB Seagate Constellation ES SATA drives in a tiered mirror space, and 4x 500GB Seagate Constellation ES + 4x 1TB WD Black SATA disks in a parity space with a 30GB SSD write cache. I wanted to compare how drives configured on a RAID controller with its own built-in write cache would stack up against my configuration.
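For reference, a tiered space along these lines is built with the storage cmdlets. Here is a minimal sketch of that process; the pool, tier, and friendly names (and the tier sizes) are illustrative placeholders, not the exact values used on this box.

# Minimal sketch: build a tiered mirror space with a write-back cache.
# Pool, tier, and disk names (and sizes) here are illustrative placeholders.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# One storage tier per media type.
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

# A mirrored, tiered virtual disk with a 30GB write-back cache.
# It then gets initialized, partitioned, and formatted like any other disk.
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredMirror" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 500GB `
    -ResiliencySettingName Mirror -WriteCacheSize 30GB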

We set up this test the same as the earlier test on my homelab server: a 100GB test file in IOMeter, using the 512 B 75% read and 4 KiB 75% read presets. I only tested against the host, not the VM, and I tested both the tiered mirror drives and the parity array. Here are the results.

SERVER TYPE: Dell T710
CPU TYPE / NUMBER: 2x Xeon X5560
HOST TYPE: Server 2016
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Storage Spaces Tiered Mirror
Test name | Latency (ms) | Avg IOPS | Avg MBps | CPU load
512 B; 75% Read; 0% random | 0.12 | 8601 | 4 | 0%
4 KiB; 75% Read; 0% random | 0.15 | 6742 | 26 | 0%

SERVER TYPE: Dell T710
CPU TYPE / NUMBER: 2x Xeon X5560
HOST TYPE: Server 2016
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Storage Spaces Parity
Test name | Latency (ms) | Avg IOPS | Avg MBps | CPU load
512 B; 75% Read; 0% random | 0.11 | 8638 | 4 | 0%
4 KiB; 75% Read; 0% random | 0.16 | 6169 | 24 | 0%

When we compare these figures to my setup, we see a couple of things. First, the CPU load on this host is considerably lower: I was seeing between 10-15% CPU utilization during my tests, because my CPU has less compute power than a single one of these Xeons, let alone two. Next, we notice that on average my latency was lower, which comes down to my using an NVMe cache instead of just SATA SSD cache for my tiers. Lastly, looking at average IOPS and throughput, which again are significantly higher on my system, we can see the NVMe cache really helping things out.

Conclusions:
A RAID controller with its own dedicated cache on the card is not only an unsupported configuration, it also really isn't a substitute for an NVMe write cache. Additionally, parity spaces greatly benefit from more powerful CPUs in terms of overall system performance. Replacing my stock i7-920 with an i7-965 and its faster 6.4 GT/s QPI bus should help cut the CPU overhead down some, and it's about a $75 upgrade these days.


Microsoft Storage Spaces 2016 Performance: NVMe + SSD + Parity.

I thought I would share with you the real-world performance of Microsoft Storage Spaces 2016. As a reminder, my setup for this test is 1x NVMe SSD acting as cache, 2x Samsung 850 EVO drives in a mirror tier, and 3x WD Red drives + 3x WD RE3 drives in a parity space. The volume is made up of 200GB of SSD space, 5.4TB of HDD space, and 30GB of write cache.
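If you want to verify that layout on your own pool, the tier and write-cache sizes are visible straight from the storage cmdlets (a quick sketch, nothing specific to this box):

# Show the configured tiers and the write-back cache size on each virtual disk.
Get-StorageTier | Select-Object FriendlyName, MediaType, Size
Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, WriteCacheSize, Size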

I ran this test twice: once directly on the hypervisor, and once in the guest VM. The guest VM uses a dynamically expanding VHDX disk. Prior to running the test I ran the following command

Optimize-Volume -DriveLetter E -TierOptimize

to ensure that the disk was in a good state. I did not pin the IOMeter test file to either the HDD or SSD tier. The test was run using the 512 B 75% read preset and the 4 KiB 75% read preset, each set to run for 3 minutes against a 100GB test file. Here are the results.

SERVER TYPE: Storage Spaces Host
CPU TYPE / NUMBER: i7-920
HOST TYPE: Server 2016
STORAGE TYPE / DISK NUMBER / RAID LEVEL: SSD + HDD Tiered Parity
Test name | Latency (ms) | Avg IOPS | Avg MBps | CPU load
512 B; 75% Read; 0% random | 0.04 | 23643 | 11 | 15%
4 KiB; 75% Read; 0% random | 0.23 | 4314 | 16 | 10%

SERVER TYPE: Windows Hyper-V Guest
CPU TYPE / NUMBER: i7-920 (4 virtual cores)
HOST TYPE: Server 2012 R2 Virtual
STORAGE TYPE / DISK NUMBER / RAID LEVEL: SSD + HDD Tiered Parity
Test name | Latency (ms) | Avg IOPS | Avg MBps | CPU load
512 B; 75% Read; 0% random | 0.12 | 8071 | 3 | 0%
4 KiB; 75% Read; 0% random | 1.22 | 816 | 3 | 0%

As you can see, the performance difference between host and VM is pretty apparent. This performance is perfectly fine for the Plex file server that I run off this system, but if I were running something more IO-heavy like SQL, it wouldn't be good enough and I would probably need to look at a mirrored space instead.

 

Set-FileStorageTier fails on a Microsoft ReFS-formatted volume.

In my last two posts I discussed my conversion to Storage Spaces 2016 and some of the issues I had along the way. Today we will discuss an issue I had when trying to use the Set-FileStorageTier command to pin my VHDX files to my SSD tier.

The specific error I kept getting when trying to do this was

Set-FileStorageTier : The specified volume does not support storage tiers.

[Screenshot: Set-FileStorageTier failing with "The specified volume does not support storage tiers"]
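For context, the call that kept failing looks roughly like this; the VHDX path and tier name below are illustrative, not my exact ones.

# Roughly the pinning call that kept failing (path and tier name are illustrative).
$ssdTier = Get-StorageTier -FriendlyName "SSDTier"
Set-FileStorageTier -FilePath "E:\VMs\fileserver.vhdx" -DesiredStorageTier $ssdTier

# The pin only takes effect on the next tier optimization pass.
Optimize-Volume -DriveLetter E -TierOptimize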

This seemed odd to me, given that we had already proven the volume had two tiers, that tiering was enabled, and that I could absolutely see data being written to my SSD tier at 200+ MBps. So what the hell was going on?

Let’s run a few commands and check some things.
First things first, let’s make sure the disk is healthy. Which it is.
[Screenshot: the disk reporting as healthy]

Next let’s see if we can just TierOptimize the disk.
[Screenshot: Optimize-Volume -TierOptimize failing against the volume]

Nope, can’t do that. Some googling later tells us that we may have to run a defrag operation first. So let’s try that.
[Screenshot: defrag reporting that the hardware isn't supported for tiering]

What do you mean, "Hardware isn't supported for tiering"? OK, fine. Let's just see if there are any tiers recognized by the disk to begin with.
[Screenshot: querying the volume returns no storage tiers]

So no tiers, can’t optimize, can’t do anything. What the hell?
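For anyone who wants to repeat these checks, they map to roughly the following commands (a sketch; the drive letter is illustrative):

# Roughly the sequence of checks above (drive letter is illustrative).
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, HealthStatus   # is everything healthy?
Optimize-Volume -DriveLetter E -TierOptimize                             # fails on the ReFS volume
defrag E: /G                                                             # /G optimizes storage tiers; same complaint
Get-FileStorageTier -VolumeDriveLetter E                                 # does the volume report any tiered/pinned files?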

Well, some googling later I came across multiple forum posts describing the same issue on 2012 R2: users with ReFS volumes were NOT able to tier-optimize their disks, even with a single SSD as the only disk, which also means the new Optimize-Volume options for TRIM weren't going to work on ReFS. Since I didn't have anything to lose, I migrated all my data back off the volume and followed the same process I did last time, except THIS time I formatted the volume NTFS instead of ReFS and…

[Screenshots: the same tiering commands now completing successfully on the NTFS volume]

…the results speak for themselves. So here is just one more thing ReFS can't do, even in 2016. This made me really sad, because I was really excited about all the enhancements ReFS would bring to my VHDX files. But alas, I guess we will have to keep waiting for Microsoft to get everything figured out here.
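For anyone repeating this, the fix boils down to recreating the volume with NTFS instead of ReFS. A minimal sketch, using illustrative pool, tier, and volume names; this wipes the volume, so migrate your data off first.

# Recreate the tiered volume as NTFS instead of ReFS.
# Names, sizes, and drive letter are illustrative; this destroys the existing volume.
Remove-VirtualDisk -FriendlyName "TieredParity"
New-Volume -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredParity" `
    -FileSystem NTFS -DriveLetter E `
    -StorageTierFriendlyNames "SSDTier", "HDDTier" -StorageTierSizes 200GB, 5TB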

Performance of NTFS-formatted tiered storage.

On the hypervisor
[Screenshot: IOMeter results on the hypervisor]

In the guest VM
[Screenshot: IOMeter results in the guest VM]

The disk latency of over 100 ms still concerns me; however, during transfers it is considerably lower than the 1,000 ms+ latency we were seeing before. Additionally, I am seeing a much more stable 100+ MBps during transfers, whereas before it would only hit that briefly and then drop once the cache filled.

The other benefit of running an NTFS volume is that I'm able to dedupe it using 2016's virtual machine-aware deduplication, which we will play with more; I'm sure there will be more blog posts about that.
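Enabling that looks something like the following; a quick sketch, with an illustrative drive letter and using the HyperV usage type for VM workloads.

# Sketch: enable Server 2016's VM-aware dedup on the NTFS volume (drive letter illustrative).
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "E:" -UsageType HyperV

# Kick off an initial optimization job and check the savings afterwards.
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupStatus -Volume "E:"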