Enabling users for ActiveSync based on group membership using Exchange PowerShell.

I recently had a task that required me to enable or disable users’ ActiveSync access nightly based on their membership in a group. I wrote a simple PowerShell script and tied it to a scheduled PowerShell job that runs at midnight.


#####################################################
#   Disable ActiveSync for all users except Group   #
#   Created by - Cameron Joyce                      #
#   Last Modified - Feb 24 2017                     #
#####################################################
# This script will disable ActiveSync in Exchange for all users except those in a specified security group.

# Import Exchange Modules
Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn;

# Variables
$AsMembers = @(Get-DistributionGroupMember -Identity 'ActiveSync Users' | Select-Object -ExpandProperty Name) # Insert the names of all users in the ActiveSync Users group into an array.
$Mailboxes = Get-Mailbox -ResultSize Unlimited # Get all mailboxes in the Exchange organization.

# For each mailbox, check whether the mailbox user is a member of the ActiveSync Users group. If so, enable ActiveSync and OWA for Devices; if not, disable them.
Foreach($Mailbox in $Mailboxes){
    $IsMember = $false # Default to not a member.
    $Name = $Mailbox.Name # Pull the Name property out as a string.
    If($AsMembers -contains $Name){ # If the name of the mailbox is found in the array of ActiveSync Users, flip the flag to $true.
        $IsMember = $true
    }
    If($IsMember){ # The mailbox is in the group, so enable ActiveSync and OWA for Devices.
        Write-Host "$Name is an ActiveSync user and is being enabled"
        Set-CASMailbox $Name -ActiveSyncEnabled $true
        $AStatus = Get-CASMailbox $Name | Select-Object Name, ActiveSyncEnabled
        if(-not $AStatus.ActiveSyncEnabled){
            Write-Host "Failure occurred setting ActiveSync policy on the following mailbox"
            Write-Output $AStatus
        }
        Set-CASMailbox $Name -OWAforDevicesEnabled $true
        $OStatus = Get-CASMailbox $Name | Select-Object Name, OWAforDevicesEnabled
        if(-not $OStatus.OWAforDevicesEnabled){
            Write-Host "Failure occurred setting OWA for Devices policy on the following mailbox"
            Write-Output $OStatus
        }
    }
    Else{ # The mailbox is not in the group, so disable ActiveSync and OWA for Devices.
        Write-Host "$Name is not an ActiveSync user and is being disabled"
        Set-CASMailbox $Name -ActiveSyncEnabled $false
        Set-CASMailbox $Name -OWAforDevicesEnabled $false
    }
}
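The post mentions tying this script to a nightly job that runs at midnight, but the scheduling itself isn't shown. A minimal sketch using the built-in PSScheduledJob module would look something like this (the job name and script path are hypothetical; point it at wherever you saved the script above):

# Register a scheduled job that runs the ActiveSync script every night at midnight.
# C:\Scripts\Set-ActiveSyncByGroup.ps1 is a placeholder path for the script above.
$Trigger = New-JobTrigger -Daily -At "12:00 AM"
Register-ScheduledJob -Name "Nightly-ActiveSync" -FilePath "C:\Scripts\Set-ActiveSyncByGroup.ps1" -Trigger $Trigger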


Performance of Microsoft Storage Spaces 2016 on a Dell PERC H700 RAID controller.

Before we get into this post, let me be up front and say that this is a configuration Microsoft does not support. Much like UnRAID or ZFS, Storage Spaces wants direct access to the disks to work properly. You can force Storage Spaces to work with RAID volumes, but if you have problems, Microsoft support will not assist. Your biggest issue will be handling failures, as Storage Spaces will not be able to see SMART data and predict disk failures accurately.
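If you're curious how the RAID controller presents its virtual disks to Windows before you try to pool them, a quick check like this (a sketch, not from the original post) shows the reported bus type and whether each disk is eligible for pooling:

# PERC virtual disks typically report BusType RAID and expose little or no SMART/media
# information, unlike directly attached SATA/SAS disks.
Get-PhysicalDisk | Format-Table FriendlyName, BusType, MediaType, CanPool, Size -AutoSize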

Now, with that out of the way, here is the setup: a Dell T710 as the server, a PERC H700 with 512MB BBWC, 2x Samsung 850 EVO + 2x 500GB Seagate Constellation ES SATA drives in a tiered mirror space, and 4x 500GB Seagate Constellation ES + 4x 1TB WD Black SATA disks in a parity space with a 30GB SSD write cache. I wanted to compare how drives configured on a RAID controller with its own built-in write cache would stack up against my configuration.

We set up the test the same as the earlier test on my homelab server: a 100GB IOMeter test file run with the 512 B 75% read and 4 KiB 75% read presets. I only tested against the host, not a VM, and I tested both the mirrored tiered drives and the parity array. Here are the results.

SERVER TYPE: Dell T710
CPU TYPE / NUMBER: Xeon X5560 x2
HOST TYPE: Server 2016
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Storage Spaces Tiered Mirror

Test name                   Latency (ms)  Avg IOPS  Avg MBps  CPU load
512 B; 75% Read; 0% random  0.12          8601      4         0%
4 KiB; 75% Read; 0% random  0.15          6742      26        0%

SERVER TYPE: Dell T710
CPU TYPE / NUMBER: Xeon X5560 x2
HOST TYPE: Server 2016
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Storage Spaces Parity

Test name                   Latency (ms)  Avg IOPS  Avg MBps  CPU load
512 B; 75% Read; 0% random  0.11          8638      4         0%
4 KiB; 75% Read; 0% random  0.16          6169      24        0%

When we compare these figures to my setup, we see a couple of things. First, the CPU load on the T710 host is considerably lower; I was seeing 10-15% CPU utilization during my tests because my CPU has less compute power than a single one of these Xeons, let alone two. Next, we notice that on average my latency was lower, because I am using an NVMe cache instead of just a SATA SSD cache for my tiers. Lastly, average IOPS and throughput are again significantly higher on my system, because the NVMe cache really helps things out.

Conclusions:
A RAID controller with its own dedicated cache on the card is not only an unsupported configuration, it also really isn’t a substitute for an NVMe write cache. Additionally, parity spaces greatly benefit from more powerful CPUs in terms of overall system performance. Replacing my stock i7-920 with an i7-965, which has a faster QPI bus (6.4 GT/s), should help cut the CPU overhead down some, and is about a $75 upgrade these days.

Microsoft Storage Spaces 2016 Performance: NVMe + SSD + Parity.

I thought I would share with you the real-world performance of Microsoft Storage Spaces 2016. As a reminder, my setup for this test is 1x NVMe SSD acting as cache, 2x Samsung 850 Evo drives in a mirror tier, and 3x WD Red + 3x WD RE3 drives in a parity space. The volume is composed of 200GB of SSD space, 5.4TB of HDD space, and 30GB of write cache.

I ran this test twice: once directly on the hypervisor, and once in the guest VM. The guest VM uses a dynamically expanding VHDX disk. Prior to running the test I ran the command

Optimize-Volume -DriveLetter E -TierOptimize

to ensure that the disk was in a good state. I did not pin the IOMeter test file to the HDD or SSD tier. The test was run using the 512 B 75% read and 4 KiB 75% read presets, each set to run for 3 minutes against a 100GB test file. Here are the results.

SERVER TYPE: Storage Spaces Host
CPU TYPE / NUMBER: i7-920
HOST TYPE: Server 2016
STORAGE TYPE / DISK NUMBER / RAID LEVEL: SSD + HDD Tiered Parity

Test name                   Latency (ms)  Avg IOPS  Avg MBps  CPU load
512 B; 75% Read; 0% random  0.04          23643     11        15%
4 KiB; 75% Read; 0% random  0.23          4314      16        10%

SERVER TYPE: Windows Hyper-V Guest
CPU TYPE / NUMBER: i7-920 (4 virtual cores)
HOST TYPE: Server 2012 R2 Virtual
STORAGE TYPE / DISK NUMBER / RAID LEVEL: SSD + HDD Tiered Parity

Test name                   Latency (ms)  Avg IOPS  Avg MBps  CPU load
512 B; 75% Read; 0% random  0.12          8071      3         0%
4 KiB; 75% Read; 0% random  1.22          816       3         0%

As you can see, the performance difference between the host and the VM is pretty apparent. This performance is perfectly fine for the Plex file server that I run off this system; however, if I were running something more IO-heavy like SQL, it wouldn't be good enough and I would probably need to look at a mirrored space instead.

 

IOMeter testing of Server 2016 Deduplicated Volumes.

In the process of generating some test data for another post, I managed to completely fill my 5.6TB storage space in 5 minutes, all due to one simple thing: IOMeter. Long story short, I had my storage space running the Hyper-V-aware deduplication so I could test its performance. At the same time I was running IOMeter on that volume to generate performance statistics for another post. What I didn’t realize was that Windows was deduplicating the test file that IOMeter had generated and was randomly reading from and writing to. The result: 2.1TB of chunk store, and all of my VMs going Paused-Critical.

[Screenshot: screen-shot-2016-12-09-at-8-54-25-pm]

Thankfully I was able to expand the volume a little bit, get my VMs online again, and then run the following commands to get everything back under control.

Start-DedupJob -Volume E: -Type Scrubbing -Priority High -Memory 50
Start-DedupJob -Volume E: -Type GarbageCollection -Priority High -Memory 50

The Scrubbing job verifies the chunks, and the GarbageCollection job deletes the chunks that are no longer needed.

This process will take a few hours and is pretty I/O intensive on the disk, but it is the only safe way to properly clear that store without destroying your existing data.
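If you want to keep an eye on the chunk store while those jobs run, the built-in dedup cmdlets will show you the savings and job progress. A quick sketch, assuming the volume is E: as above:

# Show overall dedup savings and file counts for the volume.
Get-DedupStatus -Volume E: | Format-List Volume, FreeSpace, SavedSpace, OptimizedFilesCount

# Watch the progress of the running Scrubbing and GarbageCollection jobs.
Get-DedupJob | Format-Table Type, State, Progress, Volume -AutoSize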

Using a Dell PERC H310 on an EVGA X58 motherboard to provide 8 SATA III ports.

Continuing on my venture of rebuilding my Hyper-V box into a super hyperconverged storage and compute box, I realized that I needed to be able to add more SATA drives to my system, because 7 was just not going to be enough. I decided I would purchase an HBA to do the job, and went to Reddit /r/datahoarder to ask what they thought would be the best option for this venture. The two cards recommended were the IBM M1015 and the Dell PERC H310. Both are dual-port SAS RAID cards that are really LSI cards underneath and can be cross-flashed to the LSI IT firmware to allow for straight passthrough of disks (which is exactly what I needed; I’ll explain RAID disks vs. passthrough ATA disks in Storage Spaces in another post). I found a PERC H310 on eBay for $44 shipped, so I went that direction. I also bought two SAS breakout cables for $15 combined and then waited for everything to show up in the mail.

Once everything arrived and was checked over to make sure it was OK, I got on with flashing the card using this fantastic guide by Vladan Seget. This proved extremely difficult in my situation, and all I can say is that I’m glad I have friends. The specific motherboard I am using is the 132-BL-E758 (EVGA X58 SLI) running BIOS version 83.

Issue Number 1: No Boot.
After plugging in the card, the system wouldn’t even POST; it just kept throwing POST code “86”, which EVGA’s manual lists as “Reserved”, so there is absolutely no help there. I reset the CMOS hoping that would fix it, and thankfully that was all it took to get the system to POST. Unfortunately I still ran into an issue where, even after the machine showed all the system info and the JMicron AHCI controller info, it just went to a black screen. I couldn’t get into the BIOS, and Windows didn’t boot. I assumed there was something wrong with the card, as removing the controller allowed booting to Windows without issue. I tried different slots, multiple CMOS resets, cold boots, everything I could think of. Eventually I just waited at the black screen and, lo and behold, about 2 minutes later I finally saw a “0 Virtual Drives Handled by BIOS” message, the pre-boot completed, and I was able to select my boot device! Woo hoo!

Issue Number 2: Megarec sees that the card is installed, but refuses to flash the card to an empty ROM.
So this was annoying. I could run all the utilities to pull info about the card and see that it was healthy, attached, and what firmware it was running, but I couldn’t get Megarec.exe to actually flash the card; it would just launch the .exe and sit at a blinking cursor. Again, I tried multiple slots, different versions of FreeDOS, and different USB keys, and no dice. The solution was to put the card into a spare Dell R610 server that a friend had and flash it there. This worked successfully, and I was able to get the base IT firmware loaded.

Issue Number 3: Random system resets when the card was installed.
This one took a while to diagnose. When the PERC was installed (again, it didn’t matter which slot), the system would randomly restart. Removing the PERC resolved the issue, but then I can’t use my drives. So what the hell? Well, a bunch of googling later I found Yannick’s Tech Blog, which explained that masking pins B5 and B6 blocks the SMBus signals from the card; those signals are what cause the boot issues and, for me, were causing the system to reboot. After masking off the pins with electrical tape, everything is working as it should. In the last 24 hours I haven’t had any issues with the performance or stability of the system.

Once all issues were resolved I have been able to cleanly boot the system multiple times, and now that the card is in pure IT passthrough mode there is no longer the boot delay.

Set-FileStorageTier fails on Microsoft ReFS formatted volume.

In my last two posts I discussed my conversion to Storage Spaces 2016 and some of the issues I had along the way. Today we will discuss an issue I had when trying to use the Set-FileStorageTier cmdlet to pin my VHDX files to my SSD tier.

The specific error I kept getting when trying to do this was:

Set-FileStorageTier : The specified volume does not support storage tiers.

[Screenshot: screen-shot-2016-12-01-at-3-42-48-pm]

This seemed odd to me, given that we had already proved we had two tiers with tiering enabled, and I could absolutely see data being written to my SSD tier at 200+MBps. So what the hell was going on?

Let’s run a few commands and check some things.
First things first, let’s make sure the disk is healthy. Which it is.
[Screenshot: screen-shot-2016-12-01-at-3-43-41-pm]

Next let’s see if we can just TierOptimize the disk.
[Screenshot: screen-shot-2016-12-01-at-3-43-21-pm]

Nope, can’t do that. Some googling later tells us that we may have to run a defrag operation first. So let’s try that.
[Screenshot: screen-shot-2016-12-01-at-3-43-34-pm]

What do you mean “Hardware isn’t supported for tiering?” Ok fine. Let’s just see if there are any tiers recognized by the disk to begin with.
[Screenshot: screen-shot-2016-12-01-at-3-43-10-pm]

So no tiers, can’t optimize, can’t do anything. What the hell?
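For reference (since the commands themselves only live in the screenshots), the checks above were roughly along these lines, assuming the tiered volume is E:; the exact commands and output in the screenshots may differ slightly:

# Confirm the virtual disk backing the volume is healthy.
Get-VirtualDisk | Format-Table FriendlyName, OperationalStatus, HealthStatus -AutoSize

# Try to tier-optimize the volume directly.
Optimize-Volume -DriveLetter E -TierOptimize

# Run a plain defrag pass first, as the forum posts suggest.
Optimize-Volume -DriveLetter E -Defrag

# See whether any storage tiers are reported at all.
Get-StorageTier | Format-Table FriendlyName, MediaType, Size -AutoSize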

Well, some googling around later, I came across multiple forum posts describing the same issue on 2012 R2: users with ReFS volumes were NOT able to tier-optimize their disks, even with a single SSD as the only disk. Which means all of the new Optimize-Volume options for TRIM weren’t going to work on ReFS. Since I didn’t have anything to lose, I migrated all my data back off the volume and followed the same process I did last time, however THIS time I formatted NTFS, not ReFS, and…

[Screenshots: screen-shot-2016-12-02-at-11-14-28-am, screen-shot-2016-12-02-at-11-15-08-am, screen-shot-2016-12-02-at-11-15-28-am]

…the results speak for themselves. So here is just one more thing ReFS can’t do, even in 2016. This made me really sad, because I was really excited for all the enhancements ReFS would bring to my VHDX files. But alas, I guess we will have to continue to wait for Microsoft to get everything figured out here.
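With the volume back on NTFS, pinning the VHDX files to the SSD tier finally works. A rough sketch of what that looks like (the file path here is a placeholder, and the tier name matches the VM_SSDTier from my tiering build; yours may differ):

# Pin a VHDX to the SSD tier, then kick off tier optimization to move it there.
$SsdTier = Get-StorageTier -FriendlyName "VM_SSDTier"
Set-FileStorageTier -FilePath "E:\VMs\ExampleVM.vhdx" -DesiredStorageTier $SsdTier
Optimize-Volume -DriveLetter E -TierOptimize

# Confirm the file is pinned and placed on the desired tier.
Get-FileStorageTier -VolumeDriveLetter E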

Performance of NTFS formatted tiered Storage.

On the hypervisor:
[Screenshot: screen-shot-2016-12-02-at-11-31-29-am]

In the guest VM:
[Screenshot: screen-shot-2016-12-02-at-11-33-29-am]

The disk latency of over 100ms still concerns me; however, during transfers it is considerably lower than the 1,000ms+ latency we were seeing before. Additionally, I am seeing a much more stable 100+MBps during transfers, whereas before it would only hit that briefly and then drop once the cache filled.

The other benefit of running an NTFS volume is that I’m able to dedupe it using 2016’s virtual-machine-aware deduplication, which we will play with more; I’m sure there will be more blog posts about that.
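Enabling that is just a feature install plus a one-liner; a minimal sketch, assuming the volume is E::

# Install the deduplication feature, then enable dedup on the volume using the
# Hyper-V usage type added in Server 2016 for virtualization workloads.
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume E: -UsageType HyperV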

Microsoft Storage Spaces 2016: Storage Tiering NVMe + SSD Mirror + HDD Parity

So, as we discovered, a straight parity space with an NVMe cache wasn’t going to work. Just straight up, it wasn’t going to happen; the performance was abysmal and I couldn’t deal. I decided I would spend today getting a tiered parity space working, and in the quest to make sure all my storage tiers could survive a failure, I went out and purchased two Samsung 850 Evo drives. So now I have NVMe + SSD + HDD tiered storage.

In case someone from the googleverse finds this while searching for exactly what I did, here is your start-to-finish copy/paste PowerShell guide for getting this working.

First, create your storage pool. A few things to note: -LogicalSectorSizeDefault 512 is needed to ensure that you end up with a 512e disk (explained here), and -FaultDomainAwarenessDefault PhysicalDisk is necessary to prevent an error later when creating the volume.

New-StoragePool -StoragePoolFriendlyName "Pool1" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks (Get-PhysicalDisk -CanPool $true) -LogicalSectorSizeDefault 512 -FaultDomainAwarenessDefault PhysicalDisk

Next, we set the resiliency settings. This isn't something you should have to do, but I kept running into an error, so better safe than sorry. Note that you should change the number of parity columns to match the number of drives you have in that tier (up to 8).

Get-Storagepool Pool1 | Set-ResiliencySetting -Name Mirror -NumberOfColumnsDefault 1
Get-Storagepool Pool1 | Set-ResiliencySetting -Name Parity -NumberOfColumnsDefault 3

Now we create our storage tiers. One SSD, one HDD.

New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName SSDTier -MediaType SSD -ResiliencySettingName Mirror -NumberOfColumns 1 -PhysicalDiskRedundancy 1 -FaultDomainAwareness PhysicalDisk
New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName HDDTier -MediaType HDD -ResiliencySettingName Parity -NumberOfColumns 3 -PhysicalDiskRedundancy 1 -FaultDomainAwareness PhysicalDisk

Let us now actually create our Volume! Because this is a tiered volume we want to use ReFS.

New-Volume -StoragePoolFriendlyName Pool1 -FriendlyName VM -FileSystem ReFS -StorageTierFriendlyNames SSDTier, HDDTier -StorageTierSizes 200GB, 3.5TB

Voila! We have a volume! Now let’s make sure that it was actually created properly.

Get-StorageTier | FT FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy, FaultDomainAwareness, NumberOfDataCopies

This returns

FriendlyName ResiliencySettingName PhysicalDiskRedundancy FaultDomainAwareness NumberOfDataCopies
------------ --------------------- ---------------------- -------------------- ------------------
SSDTier      Mirror                1                      PhysicalDisk         2
VM_HDDTier   Parity                1                      PhysicalDisk         1
HDDTier      Parity                1                      PhysicalDisk         1
VM_SSDTier   Mirror                1                      PhysicalDisk         2

Excellent! But I know that I still have space. So how do we expand?

Resize-StorageTier -InputObject (Get-StorageTier -FriendlyName "VM_HDDTier") -Size 3.6TB

And lastly, since I am actually on a UPS, we run this:

Set-StoragePool -FriendlyName Pool1 -IsPowerProtected $True

So there you have it: I now have a three-tier setup on a single host. Adding drives to the parity tier is pretty easy, as is expanding the capacity.
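For completeness, growing the pool later looks roughly like this (a sketch that isn't part of the original build; the new tier size and the E: drive letter are examples):

# Add any newly installed, poolable disks to the existing pool.
Add-PhysicalDisk -StoragePoolFriendlyName Pool1 -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Grow the HDD parity tier, then grow the partition into the new space.
Resize-StorageTier -InputObject (Get-StorageTier -FriendlyName "VM_HDDTier") -Size 5TB
Resize-Partition -DriveLetter E -Size (Get-PartitionSupportedSize -DriveLetter E).SizeMax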

ScreenShot Proof!

And now the performance figures.

[Screenshot: CPU and disk in the VM]
[Screenshot: CPU and disk from the hypervisor OS]

I am now seeing writes in the 170MBps range; however, there is still a good amount of latency inside the VM. That being said, there is no longer crazy latency at the hypervisor level, and I will definitely take the increase in sustained write speed.