Multi-tenant Azure Site Recovery E2A using Virtual Machine Manager

Azure Site Recovery (ASR) is a simple, automated protection and disaster recovery service delivered from the cloud. It enables replication and failover of workloads between datacenters, and from on-premises to Azure. ASR supports physical machines, VMware, and Hyper-V environments. ASR integrates with Windows Azure Pack (WAP), enabling service providers to offer managed disaster recovery for IaaS workloads through the Cloud Solution Provider (CSP) program. I’ll run through configuring ASR to support your multi-tenant cloud, and point out several important caveats in the configuration.

The CSP program enables service providers to resell Microsoft first-party cloud services like Azure, while owning the customer relationship and enabling value-add services. Azure subscriptions provisioned under the CSP program are single-tenant, which presents a challenge when configuring ASR with WAP and Virtual Machine Manager (SCVMM). In order to enable ASR, you must first register a SCVMM server with a Recovery Services vault within an Azure subscription. This will allow ASR to query the SCVMM server and retrieve metadata such as names of virtual machines and networks. In most service provider configurations, a single SCVMM server supports multiple tenants, and as such, you need to register SCVMM to a “master” vault in a subscription owned by the service provider. SCVMM can only be registered to a single vault, which also means that if you are using Azure as a DR target, you can only fail VM’s to a single Azure region. While the SCVMM server can only be registered to a single subscription, we can configure per-cloud protection policies specifying compute and storage targets in other subscriptions. This is an important distinction, as it means that the service provider will need to create separate clouds in VMM (and therefore separate plans in WAP) for EACH tenant. This enables a hoster to provide managed disaster recovery for IaaS workloads in a multi-tenant SCVMM environment. The topology is illustrated below.


While the initial configuration of the Recovery Services vault can now be done in the Azure portal, configuring ASR to support multi-tenancy requires PowerShell. You’ll need at least version 0.8.10 of Azure PowerShell, but I recommend using the Web Platform Installer to get the latest.

If you are using the Recovery Services cmdlets for the first time in your subscription, you must register the Azure provider for Recovery Services. Before you can do this, enable access to the Recovery Services provider on your subscription by running the following commands. NOTE: It may take up to an hour to enable access to Recovery Services on your subscription. Attempts to register the provider might fail in the interim.

Register-AzureRmProviderFeature -FeatureName betaAccess -ProviderNamespace Microsoft.RecoveryServices
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.RecoveryServices

Then, let’s setup some constants we’ll use later.

$vaultRgName = "WAPRecoveryGroup"
$location = "westus"
$vaultName = "WAPRecoveryVault"
$vmmCloud = "AcmeCorp Cloud"
$policyName = "AcmeCorp-Policy"
$serverName = ""
$networkName = "YellowNetwork"
$vmName = "VM01"

Next, connect to your service provider subscription (this can be any direct subscription – EA/Open/PAYG).

$UserName = ""
$Password = "password"
$SecurePassword = ConvertTo-SecureString -AsPlainText $Password -Force
$Cred = New-Object System.Management.Automation.PSCredential -ArgumentList $UserName, $SecurePassword
Login-AzureRmAccount -Credential $Cred

If you have access to multiple subscriptions, you’ll need to set the subscription context.

#Switch to the service provider's subscription
Select-AzureRmSubscription -TenantId 00000000-0000-0000-0000-000000000000 -SubscriptionId 00000000-0000-0000-0000-000000000000

Now we can create the resource group and vault.

#Create the Resource Group for the vault
New-AzureRmResourceGroup -Name $vaultRgName -Location $location
#Create the Recovery Services vault
$vault = New-AzureRmRecoveryServicesVault -Name $vaultName -ResourceGroupName $vaultRgName -Location $location
#Set the vault context
Set-AzureRmSiteRecoveryVaultSettings -ARSVault $vault
#Download vault settings file
Get-AzureRmRecoveryServicesVaultSettingsFile -Vault $vault

At this point, you’ll need to download the Azure Site Recovery provider and run the installation on your SCVMM server, then register the SCVMM server with the vault using the settings file you just downloaded. Additionally, you’ll need to install (but do not configure) the Microsoft Azure Site Recovery agent on each of the Hyper-V servers. Screenshots can be found here.

Now that SCVMM has been registered with the vault, and the agents have been installed, we can create the storage account and virtual network in the tenant subscription.

#Switch to the tenant's subscription
Select-AzureRmSubscription -TenantId 00000000-0000-0000-0000-000000000000 -SubscriptionId 00000000-0000-0000-0000-000000000000
#Storage account must be in the same region as the vault
$storageAccountName = "drstorageacct1"
$tenantRgName =  "AcmeCorpRecoveryGroup" 
#Create the resource group to hold the storage account and virtual network
New-AzureRmResourceGroup -Name $tenantRgName -Location $location
#Create the storage account
$recoveryStorageAccount = New-AzureRmStorageAccount -ResourceGroupName $tenantRgName -Name $storageAccountName -Type "Standard_GRS" -Location $location
#Create the virtual network and subnet
$subnet1 = New-AzureRmVirtualNetworkSubnetConfig -Name "Subnet1" -AddressPrefix ""
$vnet = New-AzureRmVirtualNetwork -Name $networkName -ResourceGroupName $tenantRgName -Location $location -AddressPrefix "" -Subnet $subnet1

We’re ready to create the protection policy and associate it to the SCVMM cloud.

#Switch to the service provider's subscription
Select-AzureRmSubscription -TenantId 00000000-0000-0000-0000-000000000000 -SubscriptionId 00000000-0000-0000-0000-000000000000
#Create the policy referencing the storage account id from the tenant's subscription
$policyResult = New-AzureRmSiteRecoveryPolicy -Name $policyName -ReplicationProvider HyperVReplicaAzure -ReplicationFrequencyInSeconds 900 -RecoveryPoints 1 -ApplicationConsistentSnapshotFrequencyInHours 1 -RecoveryAzureStorageAccountId $recoveryStorageAccount.Id
$policy = Get-AzureRmSiteRecoveryPolicy -FriendlyName $policyname
#Associate the policy with the SCVMM cloud
$container = Get-AzureRmSiteRecoveryProtectionContainer -FriendlyName $vmmCloud
Start-AzureRmSiteRecoveryPolicyAssociationJob -Policy $policy -PrimaryProtectionContainer $container

Once the policy has been associated with the cloud, we can configure network mapping.

#Retrieve the on-premises network
$server = Get-AzureRmSiteRecoveryServer -FriendlyName $serverName
$network = Get-AzureRmSiteRecoveryNetwork -Server $server -FriendlyName $networkName
#Create the network mapping referencing the virtual network in the tenant's subscription
New-AzureRmSiteRecoveryNetworkMapping -PrimaryNetwork $network -AzureVMNetworkId $vnet.Id

Lastly, we enable protection on the virtual machine.

#Get the VM metadata
$vm = Get-AzureRmSiteRecoveryProtectionEntity -ProtectionContainer $container -FriendlyName $vmName
#Enable protection. You must specify the storage account again
Set-AzureRmSiteRecoveryProtectionEntity -ProtectionEntity $vm -Protection Enable –Force -Policy $policy -RecoveryAzureStorageAccountId $recoveryStorageAccount.Id

You can now monitor protection and perform failovers for virtual machines in a multi-tenant SCVMM environment, failing over to a tenant’s subscription in Azure from the Recovery Services vault in the provider’s Azure subscription.
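As a rough sketch of what that looks like from the provider’s subscription (cmdlet names are from the same AzureRM Site Recovery module used above; verify the parameters against your module version before relying on this):

#Check the status of Site Recovery jobs (initial replication, failovers, etc.)
Get-AzureRmSiteRecoveryJob | Select-Object DisplayName, State, StateDescription
#Run a test failover of the protected VM toward the tenant's subscription
Start-AzureRmSiteRecoveryTestFailoverJob -ProtectionEntity $vm -Direction PrimaryToRecovery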


Adding additional VIP to Azure Load Balancer

Recently, a partner needed guidance on adding an additional VIP to an Azure Load Balancer. This is a typical scenario where multiple SSL-based websites are running on a pair of servers and clients may not have SNI support, necessitating dedicated public IP’s for each website. Azure Load Balancer in Azure Resource Manager does support multiple VIP’s, just not via the portal. Not to worry, Powershell to the rescue. The Azure documentation site has a great article describing the process of deploying a two-node web farm and internet facing load balancer. These commands assume you’ve already deployed the load balancer and are just adding a second VIP:

Select-AzureRmSubscription -SubscriptionId 00000000-0000-0000-0000-000000000000
#Get the Resource Group
$rg = Get-AzureRmResourceGroup -Name "MultiVIPLBRG"
#Get the Load Balancer
$slb = Get-AzureRmLoadBalancer -Name "MultiVIPLB" -ResourceGroupName $rg.ResourceGroupName
#Create new public VIP
$vip2 = New-AzureRmPublicIpAddress -Name "PublicVIP2" -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location -AllocationMethod Dynamic
#Create new Frontend IP Configuration using new VIP
$feipconfig2 = New-AzureRmLoadBalancerFrontendIpConfig -Name "MultiVIPLB-FE2" -PublicIpAddress $vip2
$slb | Add-AzureRmLoadBalancerFrontendIpConfig -Name "MultiVIPLB-FE2" -PublicIpAddress $vip2
#Get Backend Pool
$bepool = $slb | Get-AzureRmLoadBalancerBackendAddressPoolConfig
#Create new Probe
$probe2 = New-AzureRmLoadBalancerProbeConfig -Name "Probe2" -RequestPath "/" -Protocol http -Port 81 -IntervalInSeconds 5 -ProbeCount 2
$slb | Add-AzureRmLoadBalancerProbeConfig -Name "Probe2" -RequestPath "/" -Protocol http -Port 81 -IntervalInSeconds 5 -ProbeCount 2
#Create Load Balancing Rule
$slb | Add-AzureRmLoadBalancerRuleConfig -Name Rule2 -FrontendIpConfiguration $feipconfig2 -BackendAddressPool $bepool -Probe $probe2 -Protocol TCP -FrontendPort 80 -BackendPort 81
#Save the configuration
$slb | Set-AzureRmLoadBalancer
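
After the configuration is saved, you can pull back the address that was allocated to the new VIP (the names match those used above):

#Retrieve the allocated public IP for the second VIP
Get-AzureRmPublicIpAddress -Name "PublicVIP2" -ResourceGroupName $rg.ResourceGroupName | Select-Object Name, IpAddress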

Azure Pack Connector

In my role as Cloud Technology Strategist with Microsoft over the past 18 months, I’ve been working closely with service providers of all types in making hybrid cloud a reality. Microsoft is uniquely positioned to be able to deliver on the 3 key pillars of cloud – on-premises, hosted, and public – via the Microsoft Cloud Platform. Service providers understand the value of hybrid and, with the release of Azure Pack Connector, have a tool they can use to provide a unified experience for managing public and private cloud.

Azure Pack was released in the fall of 2013 as a free add-on for Windows Server and System Center. It extended the private cloud technology delivered in Virtual Machine Manager to provide self-service multi-tenant Infrastructure as a Service (IaaS) through Hyper-V, in a manner that is consistent with IaaS in public Azure. As more and more enterprises see the value in distributed cloud, service providers are looking to extend their managed services to be able to provision and manage workloads not only running in their data center via Azure Pack, but also IaaS workloads running in public Azure. While Azure Pack ensures the portal and API are consistent, administrators were still left with two separate management experiences. Azure Pack Connector bridges that gap by enabling provisioning and management of IaaS in public Azure, through Azure Pack.

Azure Pack Connector

The solution was originally developed by Microsoft IT for internal use to enable various development teams to self-service on public Azure IaaS. Azure Pack Connector was born out of collaboration with the Microsoft Hosting and Cloud Services team to bring the MSIT solution to service providers as open source software released under the MIT license. Azure Pack Connector was developed specifically with Cloud Solution Provider partners in mind, supporting the Azure Resource Manager API and including tools to configure Azure subscriptions provisioned in the CSP portal or via the CREST API for use with Azure Pack Connector.

The solution consists of 4 main components:

  • Compute Management Service – A Windows service that orchestrates the provisioning and de-provisioning of Azure VM’s.
  • Compute Management API – A backend API supporting UI components and enabling management of Azure VM’s.
  • Admin Extension – UI extension for Azure Pack that enables on-boarding and management of Azure subscriptions.
  • Tenant Extension – UI extension for Azure Pack that enables tenant self-service provisioning and management of Azure VM’s.

The Azure Pack Connector subscription model uses a 1-to-1 mapping of Azure Pack plans to Azure Subscriptions, allowing the administrator to control VM operating systems and sizes on a per plan basis and Azure regions globally. Once a user has a subscription to an Azure Pack plan that has an attached Azure subscription, they can provision and manage Azure VM’s through the Azure Pack tenant portal.

Azure Pack Connector Dashboard

This video walkthrough will help explain the features and demonstrate how to use Azure Pack Connector:


The Azure Pack Connector solution is published on GitHub:

Head on over and grab the binaries to expand your Azure Pack installation today!


True Cloud Storage – Storage Spaces Direct

Let me start off by saying this is one of the most exciting new features in the next version of Windows Server. Prior to joining Microsoft, as a customer, I had been begging for this feature for years. Well, the product team has delivered – and in a big way. All of the testing I have performed on this has shown that this is a solid technology, with minimal performance overhead. You can expect to see near line-rate network speeds and full utilization of disk performance. It is as resilient as you would expect, and as flexible as you will need. Recently announced at Ignite 2015, Storage Spaces Direct (S2D) is available in the current Server Technical Preview 2 for testing.

The current version of Storage Spaces relies on a shared SAS JBOD being connected to all storage nodes. While this helps reduce the cost of storage by replacing expensive SANs with less expensive SAS JBOD’s, it is still a higher cost than leveraging DAS for a few reasons: dual-port SAS drives are required in order to provide redundant connectivity to the SAS backplane, the JBOD itself has additional cost, and there are datacenter costs associated with the rack space, power, and cabling for the JBOD shelf. Storage Spaces Direct overcomes these hurdles by leveraging local disks in the server chassis, thereby creating cloud storage out of the lowest cost hardware. You can finally leverage SATA SSD’s with Storage Spaces and still maintain redundancy and failover capabilities. Data is synchronously replicated to disks on other nodes of the cluster leveraging SMB3, meaning you can take advantage of SMB Multichannel and RDMA-capable NICs for maximum performance.

Here’s a quick look at a Storage Spaces Direct deployment in my lab:

Shared Nothing Storage Spaces

Storage Spaces Direct

Clustering creates virtual enclosures using the local disks in each storage node. Data is then automatically replicated and tiered according to the policy of the virtual disk. There are several recommendations for this release:

  • A minimum of 4 nodes is required to ensure availability of data in servicing and failure scenarios
  • 10Gbps network with RDMA is the recommended backplane between nodes
  • Three-way mirrored virtual disks are recommended
  • The server HBA must be capable of pass-through or JBOD mode

All of the other features you know and love about Storage Spaces work in conjunction with this feature. So how do we configure this? Powershell of course. Standard cluster configuration applies, and it’s assumed you have a working cluster, network configured, and disks ready to go. First, let’s take a look at the cluster properties:

Shared Nothing Storage Spaces Cluster Properties

Storage Spaces Direct Cluster Properties

The main setting that enables this behavior is DASModeEnabled; you can see that it is already enabled on my cluster, where it is set to 1. Additionally, you can control which storage bus types are allowed: for SAS/SATA, use 0xc00 (hint: while not supported, 0x100 works for RAID disks). You’ll also want to enable the algorithm that helps provide better fairness in disk access for workloads that have different IO characteristics, including rebuild operations, by setting DASModeOptimizations to 1.
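
For reference, here’s a minimal sketch of setting those two properties with PowerShell (only the property names called out above are shown; as always with a Technical Preview, verify them in your build):

#Enable Storage Spaces Direct (DAS mode) on the cluster
(Get-Cluster).DASModeEnabled = 1
#Enable the fairness optimization for mixed IO and rebuild workloads
(Get-Cluster).DASModeOptimizations = 1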


Each node will then create a virtual storage enclosure using available internal disks. You can view these enclosures and disks using the Get-StorageEnclosure and Get-PhysicalDisk cmdlets (in my test lab, I’m using VM’s with VHDX files attached, so disks appear as Msft Virtual Disks):

Enclosure and Disks

Enclosure and Disks
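
If you prefer PowerShell output to screenshots, a quick way to list the same information (the property selection is just a suggestion):

#List the virtual enclosures created from each node's internal disks
Get-StorageEnclosure | Format-Table FriendlyName, HealthStatus
#List the physical disks behind them
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, CanPool, Size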

The enclosures are now visible in failover clustering manager:

Failover Clustering Manager Enclosures

Failover Clustering Manager Enclosures

Next, you’ll want to create the storage pool. If using 512e drives, make sure to set the logical sector size to 512 (see my post on SSD’s and Storage Spaces Performance):

$ss = Get-StorageSubSystem | ?{$_.Model -match "Clustered Storage Spaces"}
$disks = (Get-PhysicalDisk -CanPool $true)

New-StoragePool -StorageSubSystemFriendlyName $ss.FriendlyName -FriendlyName pool1 -WriteCacheSizeDefault 0 -FaultDomainAwarenessDefault StorageScaleUnit -ProvisioningTypeDefault Fixed -ResiliencySettingNameDefault Mirror -PhysicalDisk $disks -LogicalSectorSizeDefault 512

If the media type of your disks shows up as “Unspecified”, you can set them manually to HDD/SSD now that they are in a storage pool, like so:

Get-StoragePool pool1 | Get-PhysicalDisk | Set-PhysicalDisk -MediaType HDD

In this Technical Preview, we’ll want to use SSD’s for caching:

Get-StoragePool -StoragePoolFriendlyName pool1 | Get-PhysicalDisk |? MediaType -eq SSD | Set-PhysicalDisk -Usage Journal

Lastly, we’ll create a virtual disk (you can do this through the GUI as well). You’ll want to tweak the number of columns for your environment (hint: for number of columns, it’s best not to exceed the number of local disks per virtual enclosure):

New-Volume -StoragePoolFriendlyName "pool1" -FriendlyName "vm-storage1" -PhysicalDiskRedundancy 2 -FileSystem CSVFS_NTFS -ResiliencySettingName Mirror -Size 250GB

Voila! We have a mirrored virtual disk that’s been created using Storage Spaces Direct:

Virtual Disk

Virtual Disk

Rebooting a node will cause the pool and virtual disk to enter a degraded state. When the node comes back online, it will automatically begin a background Regeneration storage job. The fairness algorithm we set earlier will ensure sufficient disk IO access to workloads while regenerating the data as quickly as possible – this will saturate the network bandwidth, which, in this case, is a good thing. You can view the status of the running job by using the Get-StorageJob cmdlet. Disk failures are handled the same as they were previously: retire and then remove the disk.
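
As a rough sketch, monitoring the job and replacing a failed disk looks something like this (the pool name matches the one created above; filtering on HealthStatus is just one way to identify the failed disk):

#View the status of running storage jobs, such as the background regeneration
Get-StorageJob
#Retire the failed disk and remove it from the pool
$failed = Get-StoragePool pool1 | Get-PhysicalDisk | ?{$_.HealthStatus -ne "Healthy"}
$failed | Set-PhysicalDisk -Usage Retired
Remove-PhysicalDisk -PhysicalDisks $failed -StoragePoolFriendlyName pool1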

Regarding resiliency, I ran several IO tests while performing various simulated failures:

Node Reboot – 33% performance drop for 20s then full performance, 10% performance drop during regeneration

Node Crash (hard power cycle) – Full IO pause for 20s then full performance, 10% performance drop during regeneration

Disk Surprise Removal – Full IO pause for 4s then full performance, 10% performance drop during regeneration

Note that when using Hyper-V over SMB, the IO pause does not impact running VM’s. They will see increased latency, but the VM’s will continue to run.

Automating VMWare to Hyper-V Migrations using MVMC

There are several tools for migrating VMware VM’s to Hyper-V. The free tool from Microsoft is the Microsoft Virtual Machine Converter (MVMC), which allows you to convert from VMware to Hyper-V or Azure, and from physical to virtual, via a GUI or PowerShell cmdlets for automation. Microsoft also has the Migration Automation Toolkit, which can help automate this process. If you have NetApp, definitely check out MAT4SHIFT, which is by far the fastest and easiest method for converting VMware VM’s to Hyper-V. MVMC works fairly well; however, there are a few things the tool doesn’t handle natively when converting from VMware to Hyper-V.

First, it requires credentials to the guest VM in order to remove the VMware Tools. In a service provider environment, you may not have access to the guest OS, so this could be an issue. Second, the migration will inherently cause a change in hardware, which in turn can cause the guest OS to lose its network configuration. This script accounts for that by pulling the network configuration from the guest registry and restoring it after the migration. Lastly, MVMC may slightly alter other hardware specifications (dynamic memory, MAC address), and this script aims to keep them as close as possible, with the exception of the disk configuration due to Gen 1 boot limitations in Hyper-V.
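
To illustrate the general idea behind preserving the network configuration, here’s a simplified, hypothetical sketch of reading a static IP configuration from the guest’s offline SYSTEM hive (the E:\ mount path and the $ifGuid interface GUID are placeholders; the actual script handles this for you):

#Load the guest's offline SYSTEM hive (guest system volume mounted at E:\ in this example)
reg.exe load HKLM\GuestSystem "E:\Windows\System32\config\SYSTEM" | Out-Null
#Read the static IP settings for the interface identified by $ifGuid
Get-ItemProperty "HKLM:\GuestSystem\ControlSet001\Services\Tcpip\Parameters\Interfaces\$ifGuid" |
    Select-Object IPAddress, SubnetMask, DefaultGateway, NameServer
#Unload the hive when done
reg.exe unload HKLM\GuestSystem | Out-Null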

This script relies on several 3rd party components:

You’ll need to install MVMC, the HV PS module, and VMware PowerCLI on your “helper” server – the server where you’ll be running this script, which will perform the conversion. Devcon, the HV IS components, VMware Tools, and NSSM will need to be extracted into the appropriate folders:


I’ve included a sample kick-off script (migrate.ps1) that will perform a migration:

$esxhost = ""
$username = "root"
$password = ConvertTo-SecureString "p@ssWord1" -AsPlainText -Force
$cred = New-Object -Typename System.Management.Automation.PSCredential -Argumentlist "root", $password
$viserver = @{Server=$esxhost;Credential=$cred}
$destloc = "\\\vm-storage1"
$vmhost = "HV03"
$vmname = "MYSERVER01"
cd C:\vmware-to-hyperv-convert
. .\vmware-to-hyperv.ps1 -viserver $viserver -VMHost $vmhost -destLoc $destloc -VerboseMode
$vms = VMware.VimAutomation.Core\Get-VM -Server $script:viconnection
$vmwarevm = $vms | ?{$_.Name -eq $vmname}
$vm = Get-VMDetails $vmwarevm
Migrate-VMWareVM $vm

Several notes about MVMC and these scripts:

  • This is an offline migration – the VM will be unavailable during the migration. The total amount of downtime depends on the size of the VMDK(s) to be migrated.
  • The script will only migrate a single server. You could wrap this into powershell tasks to migrate several servers simultaneously.
  • Hyper-V Gen1 servers only support booting from IDE. This script will search for the boot disk and attach it to IDE0, all other disks will be attached to a SCSI controller regardless of the source VM disk configuration.
  • Linux VM’s were not in scope as there is no reliable way to gain write access to LVM volumes on Windows. Tests of CentOS6, Ubuntu12 and Ubuntu14 were successful. CentOS5 required IS components to be pre-installed and modifications made to the boot configuration. CentOS7 was unsuccessful due to its disk configuration. The recommended way of migrating Linux VM’s is to pre-install IS, remove VMware Tools, and modify the boot configuration before migrating.
  • These scripts were tested running from a Server 2012 R2 VM migrating Server 2003 and Server 2012 R2 VM’s – other versions should work but have not been tested.
  • ESXi 5.5+ requires a connection to a vCenter server as storage SDK service is unavailable on the free version.

Control Scale-Out File Server Cluster Access Point Networks

Scale-out File Server (SOFS) is a feature that allows you to create file shares that are continuously available and load balanced for application storage. Examples of usage are Hyper-V over SMB and SQL over SMB. SOFS works by creating a Distributed Network Name (DNN) that is registered in DNS with all IP’s assigned to the server on cluster networks marked as “Cluster and Client” (and, if UseClientAccessNetworksForSharedVolumes is set to 1, additionally networks marked as “Cluster Only”). DNS natively serves up responses in a round robin fashion and SMB Multichannel ensures link load balancing.

If you’ve followed best practices, your cluster likely has a Management network where you’ve defined the Cluster Name Object (CNO) and you likely want to ensure that no SOFS traffic uses those adapters. If you change the setting of that Management network to Cluster Only or None, registration of the CNO in DNS will fail. So how do you exclude the Management network from use on the SOFS? You could use firewall rules, SMB Multichannel Constraints, or the preferred method – the ExcludeNetworks parameter on the DNN cluster object. Powershell to the rescue.

First, let’s take a look at how the SOFS is configured by default:

Scale-out file server cluster access point

Scale-out file server cluster access point

In my configuration, is my management network and is my storage network where I’d like to keep all SOFS traffic. The CNO is configured on the network and you can see that both networks are currently configured for Cluster and Client use:

Cluster networks

Cluster networks

Next, let’s take a look at the advanced parameters of the object in powershell. “SOFS” is the name of my Scale-out File Server in failover clustering. We’ll use the Get-ClusterResource and Get-ClusterParameter cmdlets to expose these advanced parameters:

Powershell cluster access point

Powershell cluster access point
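
In text form, the equivalent query is simply:

#View the advanced parameters of the SOFS distributed network name resource
Get-ClusterResource -Name "SOFS" | Get-ClusterParameter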

You’ll notice a string parameter named ExcludeNetworks which is currently empty. We’ll set that to the Id of our management network (use a semicolon to separate multiple Id’s if necessary). First, use the Get-ClusterNetwork cmdlet to get the Id of the “Management” network, and then the Set-ClusterParameter cmdlet to update it:

Powershell cluster set-parameter

Powershell cluster set-parameter
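
Scripted, that looks roughly like this (assuming the cluster network is actually named "Management", as in my environment):

#Get the Id of the Management network and exclude it from the SOFS access point
$mgmtNetwork = Get-ClusterNetwork -Name "Management"
Get-ClusterResource -Name "SOFS" | Set-ClusterParameter -Name ExcludeNetworks -Value $mgmtNetwork.Id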

You’ll need to stop and start the resource in cluster manager for it to pick up the change. It should show only the Storage network after doing so:

Scoped cluster access point

Scoped cluster access point

Only IP’s in the Storage network should now be registered in DNS:

PS C:\Users\Administrator.CONTOSO> nslookup sofs
Server:  UnKnown


Managed Service Accounts in Server 2012 R2

Managed Service Accounts were first introduced in Server 2008 R2. They are a clever way to ensure lifecycle management of user principals for Windows services in a domain environment. Passwords for these accounts are maintained in Active Directory and updated automatically. Additionally, they simplify SPN management for the services leveraging these accounts. In Server 2012 and above, these can also be configured as Group Managed Service Accounts, which are useful for server farms. A common scenario for using a managed service account may be to run the SQL Server service in SQL Server 2012.

There are a few steps involved in creating these managed service accounts on Server 2012 R2. First, there is a dependency on the Key Distribution Service starting with Server 2012 (in order to support group managed service accounts, though it’s now required for all managed service accounts). You must configure a KDS Root Key. In a production environment, you must wait 10 hours for replication to complete after creating the key, but in lab scenarios with single domain controllers, you can force it to take effect immediately:

Add-KdsRootKey -EffectiveTime ((get-date).addhours(-10))

Once the key has been created, you can create a managed service account from a domain controller. You will need to import the AD PowerShell module. We’ll create an MSA named SQL01MSSQL in the domain for use on a server named SQL01:

Import-Module ActiveDirectory

New-ADServiceAccount -Name SQL01MSSQL -Enable $true -DNSHostName

Next, you’ll need to specify which computers have access to the managed service account.

Set-ADServiceAccount -Identity SQL01MSSQL -PrincipalsAllowedToRetrieveManagedPassword SQL01$

Lastly, the account needs to be installed on the computer accessing the MSA. You’ll need to do this as a domain admin, with the AD PowerShell module installed and loaded there as well:

Enable-WindowsOptionalFeature -FeatureName ActiveDirectory-Powershell -Online -All

Import-Module ActiveDirectory

Install-ADServiceAccount SQL01MSSQL

You can now use the MSA in the format of DOMAINNAME\ACCOUNTNAME$ with a blank password when configuring a service.
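
For example, assuming the default MSSQLSERVER instance on SQL01 in a domain named CONTOSO (both names are placeholders), you could verify the account and assign it to the service like so:

#Verify the MSA can be used on this computer
Test-ADServiceAccount SQL01MSSQL
#Configure the SQL Server service to run as the MSA (note the trailing $ and the blank password)
sc.exe config MSSQLSERVER obj= 'CONTOSO\SQL01MSSQL$' password= ""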


WMI Bug with Scale Out File Server

During the build out of our Windows Azure Pack infrastructure, I uncovered what I believe is a bug with WMI and Scale Out File Server. For us, the issue bubbled up in Virtual Machine Manager where deployments of VM templates from a library on a SOFS share would randomly fail with the following error:

Error (12710)

VMM does not have appropriate permissions to access the Windows Remote Management resources on the server (

Unknown error (0x80338105)

This issue was intermittent, and rebooting the SOFS nodes always seemed to clear up the problem. Upon tracing the process, I found BITS was getting an Access Denied error when attempting to create the URL in wsman. Furthermore, VMM was effectively saying the path specified did not exist. From the VMM trace:

ConvertUNCPathToPhysicalPath (catch CarmineException) [[(CarmineException#f0912a) { Microsoft.VirtualManager.Utils.CarmineException: The specified path is not a valid share path on  Specify a valid share path on to the virtual machine to be saved, and then try the operation again.

With further testing, I found I got mixed results when querying cluster share properties via WMI:

PS C:\Users\jeff> gwmi Win32_ClusterShare -ComputerName CLOUD-LIBRARY01


PS C:\Users\jeff> gwmi Win32_ClusterShare -ComputerName CLOUD-LIBRARY01

Name                                    Path                                    Description
----                                    ----                                    -----------
\\CLOUD-VMMLIB\ClusterStorage$          C:\ClusterStorage                       Cluster Shared Volumes Default Share
\\CLOUD-LIBRARY\ClusterStorage$         C:\ClusterStorage                       Cluster Shared Volumes Default Share
\\CLOUD-VMMLIB\MSSCVMMLibrary           C:\ClusterStorage\Volume1\Shares\MSS…

Finally, here’s what procmon showed while performing the WMI queries:

A success:

Date & Time:  6/3/2014 3:56:20 PM
Event Class:   File System
Operation:     CreateFile
Path:   \\CLOUD-VMMLIB\PIPE\srvsvc
TID:    996
Duration:       0.0006634
Desired Access:        Generic Read/Write
Disposition:    Open
Options:        Non-Directory File, Open No Recall
Attributes:     n/a
ShareMode:   Read, Write
AllocationSize: n/a
Impersonating:         S-1-5-21-xxxx
OpenResult:   Opened

A failure:

Date & Time:  6/3/2014 3:56:57 PM
Event Class:   File System
Operation:     CreateFile
Path:   \\CLOUD-VMMLIB\PIPE\srvsvc
TID:    996
Duration:       0.0032664
Desired Access:        Generic Read/Write
Disposition:    Open
Options:        Non-Directory File, Open No Recall
Attributes:     n/a
ShareMode:   Read, Write
AllocationSize: n/a
Impersonating:         S-1-5-21-xxx

What’s happening here is that WMI is attempting to access the named pipe of the server service on the SOFS cluster object. Because we’re using SOFS, the DNS entry for the SOFS cluster object contains IP’s for every server in the cluster. The WMI call attempts to connect using the cluster object name, but because of DNS round robin, that may or may not be the local node. It would have appropriate access to that named pipe for the local server, but it will not for other servers in the cluster.

There are two workarounds for this issue. First, you can add a local hosts file entry on each of the cluster nodes containing the SOFS cluster object pointing back to localhost, or second, you can add the computer account(s) of each cluster node to the local Administrators group of all other cluster nodes. We chose to implement the first workaround until the issue can be corrected by Microsoft.
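
For reference, the first workaround boils down to a single line run on each SOFS node (CLOUD-VMMLIB being the SOFS cluster object name from the traces above):

#Point the SOFS cluster object name back at the local node
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "127.0.0.1`tCLOUD-VMMLIB"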

SSD’s on Storage Spaces are killing your VM’s performance

We’re wrapping up a project that involved Windows Storage Spaces on Server 2012 R2. I was very excited to get my hands on new SSDs and test out Tiered Storage Spaces with Hyper-V. As it turns out, the newest technology in SSD drives combined with the default configuration of Storage Spaces is killing performance of VM’s.

First, it’s important to understand sector sizes on physical disks, as this is the crux of the issue. The sector size is the amount of data the physical disk controller inside your hard disk actually writes to the storage medium. Since the invention of the hard disk, sector sizes have been 512 bytes, and many other aspects of storage are built on this premise. Up until recently, this did not pose an issue. However, with larger and larger disks, it caused capacity problems. In fact, the 512-byte sector is the reason for the 2.2TB limit with MBR partitions.

Disk manufacturers realized that 512-byte sector drives would not be sustainable at larger capacities, and started introducing 4k sector, aka Advanced Format, disks beginning in 2007. In order to ensure compatibility, they utilized something called 512-byte emulation, aka 512e, where the disk controller accepts reads/writes of 512 bytes but uses a physical sector size of 4k. To do this, an internal cache temporarily stores the 4k of data from the physical medium, and the disk controller manipulates the 512 bytes of data appropriately before writing back to disk or sending the 512 bytes of data to the system. Manufacturers took this additional processing into account when spec’ing performance of drives. There are also 4k native drives which use a physical sector size of 4k and do not support this 512-byte translation in the disk controller – instead they expect the system to send 4k blocks to disk.

The key thing to understand is that since SSD’s were first released, they’ve always had a physical sector size of 4k – even if they advertise 512 bytes. They are by definition either 512e or 4k native drives. Additionally, Windows accommodates 4k native drives by performing the same Read-Modify-Write, aka RMW, functions at the OS level that are normally performed inside the disk controller on 512e disks. This means that if the OS sees you’re using a disk with a 4k sector size but receives a 512-byte write, it will read the full 4k of data from disk into memory, replace the 512 bytes of data in memory, and then flush the 4k of data from memory back down to disk.

Enter Storage Spaces and Hyper-V. Storage Spaces understands that physical disks may have 512-byte or 4k sector sizes and because it’s virtualizing storage, it too has a sector size associated with the virtual disk. Using powershell, we can see these sector sizes:

Get-PhysicalDisk | sort-object SlotNumber | select SlotNumber, FriendlyName, Manufacturer, Model, PhysicalSectorSize, LogicalSectorSize | ft


Any disk whose PhysicalSectorSize is 4k, but LogicalSectorSize is 512b is a 512e disk, a disk with a PhysicalSectorSize and LogicalSectorSize of 4k is a 4k native disk, and any disk with 512b for both PhysicalSectorSize and LogicalSectorSize is a standard HDD.

The problem with all of this is that when creating a virtual disk with Storage Spaces, if you do not specify a LogicalSectorSize via the PowerShell cmdlet, the system will create a virtual disk with a LogicalSectorSize equal to the greatest PhysicalSectorSize of any disk in the pool. This means if you have SSD’s in your pool and you created the virtual disk using the GUI, your virtual disk will have a 4k LogicalSectorSize. If a 512-byte write is sent to a virtual disk with a 4k LogicalSectorSize, it will perform the RMW at the OS level – and if your physical disks are actually 512e, then they too will have to perform RMW at the disk controller for each 512 bytes of the 4k write received from the OS. That’s a bit of a performance hit, and can cause you to see about 1/4th of the advertised write speeds and 8x the IO latency.
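
The fix is to specify the logical sector size yourself when creating the pool, so the virtual disks inherit 512 bytes. A minimal sketch, assuming a standalone Storage Spaces subsystem and an illustrative pool name:

#Create the pool with a 512-byte default logical sector size, even if the pool contains 512e SSD's
$disks = Get-PhysicalDisk -CanPool $true
$ss = Get-StorageSubSystem -FriendlyName "Storage Spaces*"
New-StoragePool -FriendlyName pool1 -StorageSubSystemFriendlyName $ss.FriendlyName -PhysicalDisks $disks -LogicalSectorSizeDefault 512
#Verify what new virtual disks will inherit
Get-VirtualDisk | Select-Object FriendlyName, LogicalSectorSize, PhysicalSectorSize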

Why does this matter with Hyper-V? Unless you’ve specifically formatted your VHDx files using 4k sectors, they are likely using 512-byte sectors, meaning every write to a VHDx stored on a Storage Spaces virtual disk is performing this RMW operation in memory at the OS level and then again at the disk controller. The proof is in the IOMeter tests:

32K Request, 65% Read, 65% Random

Virtual Disk 4k LogicalSectorSize


Virtual Disk 512b LogicalSectorSize
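
Alternatively, if you’re already stuck with a 4k LogicalSectorSize virtual disk, you can create VHDX files with 4k logical sectors so guest writes align with it. A hedged sketch (path and size are placeholders):

#Create a dynamic VHDX that uses 4k logical and physical sectors
New-VHD -Path "C:\ClusterStorage\Volume1\vm01.vhdx" -SizeBytes 100GB -Dynamic -LogicalSectorSizeBytes 4096 -PhysicalSectorSizeBytes 4096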




Modifying IE Compatibility View Settings with Powershell

I recently upgraded my workstation to Windows 8.1 and as such, am now using Internet Explorer 11. While there are some welcomed improvements, there are several changes that have made day-to-day administration activities a bit challenging. For instance, all of the Dell hardware we use has a Remote Access Controller installed that allows us to perform various remote administration tasks. Unfortunately, the current version of firmware for these DRACs is not compatible with IE 11. However, running in IE 7 compatibility mode allows the UI of the DRACs to function properly.

The problem is, we access all of these directly by private IP and adding hundreds of IP addresses to the IE compatibility view settings on multiple workstations is a bit of a pain. Thankfully, these compatibility view exceptions can be set with Group Policy, but the workstations I use are in a Workgroup and do not belong to a domain. I set out to find a way to programmatically add these exceptions using powershell.

First, it’s important to note that the registry keys that control this behavior changed in IE 11. In previous versions of IE, the setting was exclusively maintained under HKCU(HKLM)\Software\[Wow6432Node]\Policies\Microsoft\Internet Explorer\BrowserEmulation\PolicyList. Starting with IE 11, there’s an additional key under HKCU\Software\Microsoft\Internet Explorer\BrowserEmulation\ClearableListData\UserFilter that controls compatibility view settings. Unfortunately, this new key is stored in binary format and there’s not much information regarding it. I was able to find a stackoverflow post where a user attempted to decipher the data, but I found that some of the assumptions they made did not hold true. Via a process of trial-and-error, I was able to come up with a script that can set this value. However, because IE 11 still supports the previous registry key, I HIGHLY recommend using the other method described later in this post. While this seems to work, there are several values I was not able to decode.

The script will pipe the values in the $domains array into the UserFilter registry key. It accepts either top-level domains, IP addresses or Subnets in CIDR notation.

$key = "HKCU:\Software\Microsoft\Internet Explorer\BrowserEmulation\ClearableListData"
$item = "UserFilter"
. .\Get-IPrange.ps1
$cidr = "^((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)\.){3}(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)/(3[0-2]|[1-2]?[0-9])$"
[byte[]] $regbinary = @()
#This seems constant
[byte[]] $header = 0x41,0x1F,0x00,0x00,0x53,0x08,0xAD,0xBA
#This appears to be some internal value delimiter
[byte[]] $delim_a = 0x01,0x00,0x00,0x00
#This appears to separate entries
[byte[]] $delim_b = 0x0C,0x00,0x00,0x00
#This is some sort of checksum, but this value seems to work
[byte[]] $checksum = 0xFF,0xFF,0xFF,0xFF
#This could be some sort of timestamp for each entry ending with 0x01, but setting to this value seems to work
[byte[]] $filler = 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x01
$domains = @("","")
function Get-DomainEntry($domain) {
    [byte[]] $tmpbinary = @()
    [byte[]] $length = [BitConverter]::GetBytes([int16]$domain.Length)
    [byte[]] $data = [System.Text.Encoding]::Unicode.GetBytes($domain)
    $tmpbinary += $delim_b
    $tmpbinary += $filler
    $tmpbinary += $delim_a
    $tmpbinary += $length
    $tmpbinary += $data
    return $tmpbinary
}
if($domains.Length -gt 0) {
    [int32] $count = $domains.Length
    [byte[]] $entries = @()
    foreach($domain in $domains) {
        if($domain -match $cidr) {
            $network = $domain.Split("/")[0]
            $subnet = $domain.Split("/")[1]
            $ips = Get-IPrange -ip $network -cidr $subnet
            $ips | %{$entries += Get-DomainEntry $_}
            $count = $count - 1 + $ips.Length
        }
        else {
            $entries += Get-DomainEntry $domain
        }
    }
    $regbinary = $header
    $regbinary += [byte[]] [BitConverter]::GetBytes($count)
    $regbinary += $checksum
    $regbinary += $delim_a
    $regbinary += [byte[]] [BitConverter]::GetBytes($count)
    $regbinary += $entries
    Set-ItemProperty -Path $key -Name $item -Value $regbinary
}

You’ll need the Get-IPrange.ps1 script from the technet gallery and you can download the above script here: IE11_CV

IE 11 still supports the older registry key, and that is the preferred method – not only because the above is a hack, but also because the older key stores its data as strings and supports specific hosts instead of only top-level domains. Again, this script supports hosts, domains, IP addresses and subnets in CIDR notation.

$key = "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft"
if(-Not (Test-Path "$key\Internet Explorer")) {
New-Item -Path $key -Name "Internet Explorer" | Out-Null
if(-Not (Test-Path "$key\Internet Explorer\BrowserEmulation")) {
New-Item -Path "$key\Internet Explorer" -Name "BrowserEmulation" | Out-Null
if(-Not (Test-Path "$key\Internet Explorer\BrowserEmulation\PolicyList")) {
New-Item -Path "$key\Internet Explorer\BrowserEmulation" -Name "PolicyList" | Out-Null
. .\Get-IPrange.ps1
$cidr = "^((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)\.){3}(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)/(3[0-2]|[1-2]?[0-9])$"
$domains = @("","")
$regkey = "$key\Internet Explorer\BrowserEmulation\PolicyList"
foreach($domain in $domains) {
if($domain -match $cidr) {
$network = $domain.Split("/")[0]
$subnet = $domain.Split("/")[1]
$ips = Get-IPrange -ip $network -cidr $subnet
$ips | %{$val = New-ItemProperty -Path $regkey -Name $_ -Value $_ -PropertyType String | Out-Null}
$count = $count - 1 + $ips.Length
else {
New-ItemProperty -Path $regkey -Name $domain -Value $domain -PropertyType String | Out-Null

Again, you’ll need the Get-IPrange.ps1 script from the technet gallery and you can download the above script here: IE_CV