Features on Demand in Server 2012

Windows 8 and Server 2012 introduce a new concept called Features on Demand, whereby each installation contains only basic components. Adding new components requires the OS to gather source files from an external location. This is a move away from Server 2008 and Server 2008 R2, where an installation contained everything necessary to service itself; when adding new features, Windows would simply use its locally cached installation sources.

This was a great concept for home users, who no longer needed to worry about having source media, and it resolved a lot of problems for IT administrators dealing with missing files during patching. The problem was that it drastically increased the size of every installation in the datacenter, chewing up tons of unneeded space. With Server 2012, Microsoft has found a decent balance between the two – provided you configure your environment appropriately.

By default, Server 2012 will go out to Windows Update any time it’s looking for a feature for which it does not have the source files. For example, a common feature that many programs still use, and for which Windows does not cache installation files locally, is the .NET Framework 3.5. Under normal circumstances, you’ll simply use Server Manager to add the new features and never notice the difference. However, if you have a WSUS server configured or if the server does not have Internet access, you might see the following error:

Update NetFx3 of Package Microsoft .NET Framework 3.0 failed to be turned on. Status: 0x800f0906.

The error means that Windows was unable to find appropriate source installation files to add the feature. This is because WSUS doesn’t currently support the new Features on Demand functionality in Server 2012. There are a few ways to work around this issue.

First, you can specify a source from the command line using the /Enable-Feature and /Source switches of the DISM tool (or the -Source parameter of the Install-WindowsFeature PowerShell cmdlet). You can then point to the X:\Sources\SxS directory of your installation media to proceed with the installation. The GUI will also allow you to specify this alternate source in the Add Roles and Features Wizard via a yellow warning banner on the confirmation screen.
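
For example, assuming the installation media is mounted as D: (the drive letter is an assumption), either of the following should work for .NET 3.5:

DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /Source:D:\Sources\SxS /LimitAccess

Install-WindowsFeature NET-Framework-Core -Source D:\Sources\SxS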

Second, you can use a GPO or modify the registry to tell Windows to bypass your WSUS server and go directly to Windows Update when servicing your Server 2012 installation. There are two REG_DWORD values that control this behavior, located under [HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Servicing]. The first is UseWindowsUpdate – setting this to 2 tells Windows to NEVER go to Windows Update for enabling features. The second is RepairContentServerSource – setting this to 2 tells Windows to go to Windows Update for repair content only (it does not affect servicing). Both of these can be controlled via GPO (the new Servicing.admx template) under Computer Configuration > Administrative Templates > System > Specify settings for optional component installation and component repair.
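
As a quick sketch, here’s how you would set both values directly from an elevated prompt (the GPO is the cleaner route for more than a handful of servers):

reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Servicing" /v UseWindowsUpdate /t REG_DWORD /d 2 /f
reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Servicing" /v RepairContentServerSource /t REG_DWORD /d 2 /f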

Last, you can use a GPO or modify the registry to point Windows to a list of installation source locations – similar to functionality in Server 2003 and XP. These can either be a copy of the X:\Sources\SxS directory from installation media, or an actual WIM file (either install.wim from installation media or a copy of your company’s customized WIM image). Again, the value is located under [HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Servicing]. Setting the REG_EXPAND_SZ value named LocalSourcePath allows you to specify one or more installation source locations (separated by semicolons). To point to a WIM file, use the following format:

WIM:[path to wim]:[index]

You must specify the index so Windows knows where to find the appropriate installation files for your Server 2012 edition. The nice thing about using a WIM file is that you can perform offline servicing of the image to ensure that it always contains the latest patches and updates.
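
For example, a LocalSourcePath pointing at a WIM on a file share might look like WIM:\\fileserver\images\install.wim:2 (the share path and index here are placeholders). To list the images in a WIM and find the index matching your edition, you can use DISM:

DISM /Get-WimInfo /WimFile:D:\sources\install.wim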

Error creating Server 2012 cluster on 2008 domain

We worked with Microsoft extensively throughout the Server 2012 TAP program providing feedback in various areas. We encountered an issue late in the program when creating a Server 2012 cluster on our production domain. We did not see this same issue in any lab environment, or in our development environment which mirrors production.

When attempting to create the cluster, the wizard would fail when creating the AD objects. We confirmed that the user had proper permissions and did not encounter the same issues creating 2008 or 2008 R2 clusters on our production domain. After working with Microsoft, we were advised to install KB 976424 on our production domain controllers. After installing this hotfix and rebooting all of the domain controllers, we were able to create a Server 2012 cluster on our 2008 domain without issue.

Cisco VPN Client on Windows 8

Just upgraded my late-2007 MacBook Pro Boot Camp partition to Windows 8 RTM and was in the process of re-installing several apps. The Cisco VPN Client we use to connect to our corporate network was a bit finicky. There are a few workarounds to get it running on Windows 8.

First, you need to fix the following registry value to resolve error 442, “Unable to enable virtual adapter”:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CVirtA\DisplayName

It will be set to something like “@oem8.inf,%CVirtA_Desc%;Cisco Systems VPN Adapter for 64-bit Windows” – drop everything before “Cisco Systems” from that value.
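
Here’s a quick PowerShell sketch of that edit (it assumes the value actually contains the string “Cisco”, as above):

$path = 'HKLM:\SYSTEM\CurrentControlSet\Services\CVirtA'
$name = (Get-ItemProperty -Path $path).DisplayName
# Keep everything from 'Cisco' onward, dropping the @oem#.inf prefix
Set-ItemProperty -Path $path -Name DisplayName -Value $name.Substring($name.IndexOf('Cisco'))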

Next, when using certificates, you cannot use a certificate from the current user store. Rather, import the certificate into the local computer store and delete it from your user store. This should resolve error 403, “Unable to contact the security gateway”.
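
On Windows 8, the import can be scripted as well (the .pfx path here is hypothetical):

Import-PfxCertificate -FilePath C:\Temp\vpncert.pfx -CertStoreLocation Cert:\LocalMachine\My -Password (Read-Host "PFX password" -AsSecureString)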

Patching Server 2012 using SolarWinds Patch Manager

UPDATE: Microsoft released KB 2734608 on August 24th, 2012, which describes a patch allowing WSUS 3.0 SP2 to support Windows 8 and Server 2012 clients. That patch makes this procedure unnecessary unless you want to take advantage of new features supported in WSUS on Server 2012.

NOTE: The configuration described here is not supported by SolarWinds.

With Server 2012 RTM around the corner, we’re working diligently to ensure that our infrastructure is configured appropriately to support it. Part of the excellent service OrcsWeb provides is managed Windows patching for all of the systems on our network. We pride ourselves on ensuring the best possible hosting experience, so deploying Microsoft critical and security patches in a timely manner is a must.

In order to patch a Server 2012 system, you must use a Server 2012 system running the WSUS role – WSUS 3.0 SP2 will not work with Server 2012 because of an incompatibility between the Windows Update client on Server 2012 and the WSUS server. That being said, a Server 2012 system running the WSUS role can provide updates to Windows Server 2003, Server 2008, Server 2008 R2, and Server 2012 clients – provided they have the KB 2720211 client update. Using GPOs, you can then configure your systems to connect to your WSUS server and patch themselves during an appropriate window. We use SolarWinds Patch Manager (formerly EminentWare Extension Pack) to complement WSUS in our environment. It provides additional functionality like the ability to publish third-party updates via WSUS (Dell firmware and drivers, or Adobe updates, for instance) and to push patches during discrete windows – something our customers have asked for time and again.

Unfortunately, even the current beta version of Patch Manager (version 1.8) cannot be installed on Server 2012 (at the time of this writing), so that leaves us without a fully supported way of patching our Server 2012 systems. That being said, anyone in the IT industry knows *supported* and *it works* are two completely different concepts. Using a decentralized architecture, we are able to leverage a Server 2012 system running the WSUS role and Patch Manager 1.8 beta running on Server 2008 R2 with the WSUS 3.0 SP2 console to successfully patch Server 2012 clients.

I’ve outlined the steps below to accomplish this:

  1. Install the WSUS role on a Server 2012 system (we’ll call this WSUS-SERVER).
  2. Configure WSUS-SERVER to synchronize updates and arrange computers into groups like you would in previous versions of WSUS.
  3. Configure a GPO for domain clients to use WSUS-SERVER to receive updates (the equivalent client-side registry values are sketched just after this list).
  4. Install the WSUS 3.0 SP2 console on a Server 2008 R2 SP1 system (we’ll call this PATCH-SERVER).
  5. Connect to WSUS-SERVER from the WSUS 3.0 SP2 console on PATCH-SERVER.
  6. Install Patch Manager 1.8 Beta on PATCH-SERVER.
  7. During the configuration of Patch Manager, select WSUS-SERVER as your WSUS server and DO NOT configure 3rd party updates (unfortunately, 3rd party update publishing does not work because of the console version mismatch).
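
For step 3, these are the client-side registry values the GPO ends up setting (the server name is a placeholder; 8530 is the default WSUS port on Server 2012):

reg add "HKLM\Software\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer /t REG_SZ /d "http://WSUS-SERVER:8530" /f
reg add "HKLM\Software\Policies\Microsoft\Windows\WindowsUpdate" /v WUStatusServer /t REG_SZ /d "http://WSUS-SERVER:8530" /f
reg add "HKLM\Software\Policies\Microsoft\Windows\WindowsUpdate\AU" /v UseWUServer /t REG_DWORD /d 1 /f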

The WSUS 3.0 SP2 and Patch Manager consoles will incorrectly report the Operating System version as Windows Server 2003 x64 Edition, even though the systems are running Server 2012 RC (8400).


The WSUS console on Server 2012 will show the correct OS version.

Server 2012 clients can be included in standard Patch Manager jobs like any other client.

Happy Patching!


WSUS and update download failure 0x80244017

Recently configured WSUS on Server 2012 RC for a lab environment in preparation for RTM and ran into a configuration problem. Clients were failing to download updates and reporting error 0x80244017. After ensuring my RC installation was updated with KB 2627818 and that the clients had KB 2720211, I double-checked that clients could access the WSUS site at https://wsusserver:8531/ClientWebService/client.asmx (you’ll receive a YSOD .NET error if you load that in a web browser; that’s normal). The C:\Windows\WindowsUpdate.log file contained the following error information:

WARNING: Download job failed because of proxy auth or server auth.
Error 0x80244017 occurred while downloading update; notifying dependent calls.

After some brief troubleshooting, I came across a post that suggested it was an authentication problem, but anonymous authentication had been configured appropriately in IIS. I compared NTFS permissions on the WSUS folder with a working installation in our production environment and found that the local Users group did not have permissions. After granting the Users group Read permissions, clients were able to successfully download updates.
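
A sketch of that grant with icacls (the content path is specific to your installation):

icacls "D:\WSUS\WsusContent" /grant "Users:(OI)(CI)R" /T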

Fix Windows Update 800F0902 Error

Recently worked on a 2008 SP2 server that was receiving error 800F0902 when trying to check for updates. I confirmed access to the WSUS server manually via Internet Explorer and also confirmed no proxy settings were coming into play. I tried the age-old trick of stopping the Windows Update service, renaming the C:\Windows\SoftwareDistribution folder to SoftwareDistribution.old, and restarting the service, which regenerates the Windows Update configuration for you. This stubborn error survived that reset, so I finally came across a Microsoft Fix It KB article for Windows Update problems. I’m quite skeptical of using these, and I prefer to know exactly what’s being fixed, but without any other options, I decided to give it a try. No surprise: it said it found a configuration problem and fixed it, but the error persisted. A few other posts suggested an issue with trustedinstaller.exe (the Windows Modules Installer service), so I gave that a restart and started receiving 80080005 errors. Another post suggested that a reboot cleared this error for that user. Sure enough, a reboot solved the problem for me as well.
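
For reference, the folder-rename reset looks like this from an elevated prompt:

net stop wuauserv
ren C:\Windows\SoftwareDistribution SoftwareDistribution.old
net start wuauserv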

Troubleshooting ARR 502.3 Errors

Load balancing is critical for any highly available application. In the case of websites, a webfarm fronted by a load balancer can help distribute the load across multiple servers to increase scale and ensure that your application remains online during planned maintenance or in the event of a server or application failure on a particular node. Microsoft provides a free IIS extension called Application Request Routing (ARR) that can perform load balancing of HTTP and HTTPS traffic. At OrcsWeb, we use a cluster of ARR servers to load balance our production sites.

There are plenty of resources available that describe how Application Request Routing works, so I won’t go into detail here, but how do you troubleshoot when something goes wrong? One of the more common errors ARR can generate is the 502 error code, which has two substatus codes: 502.3 and 502.4.

The 502.4 error is considerably easier to troubleshoot, as it generally means there were no available content nodes to route the request to. This typically occurs when you have a health check configured for the content nodes and it is failing for all of them – thus, there are no healthy content nodes to which ARR can route the request. Obviously, at this point, the easiest solution is to fix whatever’s causing the health check to fail on the content nodes. Additionally, ARR has the concept of minimum servers. This value can help prevent a health check from taking too many nodes out of rotation; setting it to at least 1 ensures that users don’t receive a 502.4 error (though they may still see errors returned by the backend content node).

The 502.3 error can be a little more difficult to troubleshoot. It effectively means there was a communication issue between the ARR node and the content node. Most of the time it is a timeout due to a long-running request on the content node. This is easy to spot by looking at the web logs. I recommend using LogParser to analyze the web logs, looking for any request with a time-taken value that exceeds the proxy timeout setting configured for the webfarm. You can either increase the value of the proxy timeout, or troubleshoot the web application to find out why the request is taking so long to process. Replace W3SVC0 with the site ID of your website, and replace *.log with the specific name of a log file if your web logs are large, to help speed up processing:

LogParser.exe "select date, c-ip, cs-method, cs-uri-stem, cs-uri-query, sc-status, sc-substatus, time-taken from C:\inetpub\logs\logfiles\w3svc0\*.log where time-taken > 25000" -i:IISW3C -o:DATAGRID

The 502.3 error can also appear when something else is happening, and when it does, it’s time to get into deep troubleshooting. The first thing to do is enable Failed Request Tracing in IIS on the ARR node, then create a rule for all content that trips on 502.3 response codes. It’s important to note that only certain modules have tracing enabled by default. To capture tracing information from the URL Rewrite and Application Request Routing modules, open up your applicationHost.config file and add Rewrite and RequestRouting entries to the WWW Server provider under traceProviderDefinitions:

<traceProviderDefinitions>
    <add name="WWW Server" guid="{3a2a4e84-4c21-4981-ae10-3fda0d9b0f83}">
        <areas>
            <clear />
            <add name="Authentication" value="2" />
            <add name="Security" value="4" />
            <add name="Filter" value="8" />
            <add name="StaticFile" value="16" />
            <add name="CGI" value="32" />
            <add name="Compression" value="64" />
            <add name="Cache" value="128" />
            <add name="RequestNotifications" value="256" />
            <add name="Module" value="512" />
            <add name="FastCGI" value="4096" />
            <add name="Rewrite" value="1024" />
            <add name="RequestRouting" value="2048" />
        </areas>
    </add>
    <!-- truncated for readability -->

When creating your rule, ensure that the new provider areas of WWW Server are selected.

Once you’ve done that, attempt to reproduce the issue and a log file will be generated in C:\inetpub\FailedReqLogFiles\W3SVC0 (where 0 is the site ID). This file can help tell you where in the IIS pipeline the request is failing – look for warnings or errors returned by modules. For example, here’s a log file showing a 0x80070057 error from the ApplicationRequestRouting module.

The underlying error from the ARR module is “There was a connection error while trying to route the request.” So how do we find out what that means? Well, we need to look a little deeper into ARR to understand. ARR will proxy requests on behalf of the client to the content nodes. This means that the request from the client is actually regenerated into a new request by ARR and sent to the content node. Once the content node responds, ARR then repackages the response to send back to the client. To facilitate this, ARR uses the WinHTTP interface. In Server 2008 R2, you can enable WinHTTP tracing via netsh. Run this command to enable tracing:

netsh winhttp set tracing trace-file-prefix="C:\Temp\WinHttpLog" level=verbose format=hex state=enabled

Then recycle the application pool to start logging. To disable tracing, run this command:

netsh winhttp set tracing state=disabled

You will find a log file in the C:\Temp directory named WinHttpLog-w3wp.exe-<pid>.<datetime>.log. Open this file and you will be able to see details of what ARR submitted to WinHTTP when generating the proxied request to send to the content node. You’ll want to search this file for the error mentioned in the Failed Request Tracing log. From the above example, you’ll see the error logged by ARR is 0x80070057 with an error message of “The parameter is incorrect.” Looking through our sample WinHTTP trace file, we find this:

15:15:51.551 ::WinHttpSendRequest(0x164d9a0, “…”, 696, 0x0, 0, 0, 164d740)
15:15:51.551 ::WinHttpAddRequestHeaders(0x164d9a0, “…”, 696, 0x20000000)
15:15:51.551 ::WinHttpAddRequestHeaders: error 87 [ERROR_INVALID_PARAMETER]
15:15:51.551 ::WinHttpAddRequestHeaders() returning FALSE
15:15:51.551 ::WinHttpSendRequest: error 87 [ERROR_INVALID_PARAMETER]
15:15:51.551 ::WinHttpSendRequest() returning FALSE
15:15:51.551 ::WinHttpCloseHandle(0x164d9a0)
15:15:51.551 ::usr-req 0163D520 is shutting down

I replaced the actual header value with “…” in the sample above, but we can see that WinHTTP is failing when trying to put together the request headers for the proxied request to the content node. Further investigation found that this was due to Internet Explorer passing unencoded non-ASCII characters in the Referer header, which violates RFC 5987. To resolve this specific issue, we can either fix the source HTML to encode the characters, or we can modify the routing URL rewrite rule to always encode the Referer header:

<rule name="www.orcsweb.com">
    <match url=".*" />
    <serverVariables>
        <set name="HTTP_REFERER" value="{UrlEncode:{HTTP_REFERER}}" />
    </serverVariables>
    <action type="Rewrite" url="http://www.orcsweb.com/{R:0}" />
</rule>

Read & Write .NET machine keys with PowerShell

We’re putting together webfarms all the time at OrcsWeb, and one of the cardinal rules with webfarms is that all systems need to have matching settings. To help automate this process, I put together a quick PowerShell script that will read or write the machineKey element in the root machine.config files for all versions of the .NET Framework.

machineKeys.ps1

# Machine Key Script
#
# Version 1.0
# 6/5/2012
#
# Jeff Graves
#
param ($readWrite = "read", $allkeys = $true, $version, $validationKey, $decryptionkey, $validation)
 
function GenKey ([int] $keylen) {
	$buff = new-object "System.Byte[]" $keylen
	$rnd = new-object System.Security.Cryptography.RNGCryptoServiceProvider
	$rnd.GetBytes($buff)
	$result =""
	for($i=0; $i -lt $keylen; $i++)	{
		$result += [System.String]::Format("{0:X2}",$buff[$i])
	}
	$result
}
 
function SetKey ([string] $version, [string] $validationKey, [string] $decryptionKey, [string] $validation) {
    write-host "Setting machineKey for $version"
    $currentDate = (get-date).tostring("MM_dd_yyyy-hh_mm_ss") # month_day_year - hours_mins_seconds
 
    $machineConfig = $netfx[$version]
 
    if (Test-Path $machineConfig) {
        $xml = [xml](get-content $machineConfig)
        $xml.Save($machineConfig + "_$currentDate")
        $root = $xml.get_DocumentElement()
        $system_web = $root."system.web"
        if ($system_web.machineKey -eq $null) { 
        	$machineKey = $xml.CreateElement("machineKey") 
        	$a = $system_web.AppendChild($machineKey)
        }
        $system_web.SelectSingleNode("machineKey").SetAttribute("validationKey","$validationKey")
        $system_web.SelectSingleNode("machineKey").SetAttribute("decryptionKey","$decryptionKey")
        $system_web.SelectSingleNode("machineKey").SetAttribute("validation","$validation")
        $a = $xml.Save($machineConfig)
    }
    else { write-host "$version is not installed on this machine" -fore yellow }
}
 
function GetKey ([string] $version) { 
    write-host "Getting machineKey for $version"
    $machineConfig = $netfx[$version]
 
    if (Test-Path $machineConfig) { 
        $machineConfig = $netfx.Get_Item($version)
        $xml = [xml](get-content $machineConfig)
        $root = $xml.get_DocumentElement()
        $system_web = $root."system.web"
        if ($system_web.machineKey -eq $null) { 
        	write-host "machineKey is null for $version" -fore red
        }
        else {
            write-host "Validation Key: $($system_web.SelectSingleNode("machineKey").GetAttribute("validationKey"))" -fore green
    	    write-host "Decryption Key: $($system_web.SelectSingleNode("machineKey").GetAttribute("decryptionKey"))" -fore green
            write-host "Validation: $($system_web.SelectSingleNode("machineKey").GetAttribute("validation"))" -fore green
        }
    }
    else { write-host "$version is not installed on this machine" -fore yellow }
}
 
$global:netfx = @{"1.1x86" = "C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\CONFIG\machine.config"; `
           "2.0x86" = "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config"; `
           "4.0x86" = "C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\CONFIG\machine.config"; `
           "2.0x64" = "C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727\CONFIG\machine.config"; `
           "4.0x64" = "C:\WINDOWS\Microsoft.NET\Framework64\v4.0.30319\CONFIG\machine.config"}
if(!$allkeys)
{
    # Prompt until a valid version key is entered. Note: $input is a reserved
    # automatic variable in PowerShell, so we use our own variable name here.
    while(!$version) {
        $answer = read-host "Version (1.1x86, 2.0x86, 4.0x86, 2.0x64, 4.0x64)"
        if ($netfx.ContainsKey($answer)) { $version = $answer }
    }
}
 
if ($readWrite -eq "read")
{
    if($allkeys) {
        foreach ($key in $netfx.Keys) { GetKey -version $key }
    }
    else {
        GetKey -version $version
    }
}
elseif ($readWrite -eq "write")
{   
    if (!$validationkey) {
    	$validationkey = GenKey -keylen 64
    	write-host "Validation Key: $validationKey" -fore green
    }
 
    if (!$decryptionkey) {
    	$decryptionKey = GenKey -keylen 24
    	write-host "Decryption Key: $decryptionKey" -fore green
    }
 
    if (!$validation) {
    	$validation = "SHA1"
    	write-host "Validation: $validation" -fore green
    }
 
    if($allkeys) {
        foreach ($key in $netfx.Keys) { SetKey -version $key -validationkey $validationkey -decryptionKey $decryptionKey -validation $validation}
    }
    else {
        SetKey -version $version -validationkey $validationkey -decryptionKey $decryptionKey -validation $validation
    }
}
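
A few usage examples (run from an elevated prompt so the script can write to the machine.config files):

.\machineKeys.ps1
.\machineKeys.ps1 -readWrite write
.\machineKeys.ps1 -readWrite write -allkeys $false -version "4.0x64"

The first reads the machineKey from every installed framework version, the second generates a key pair and writes it to every version, and the third writes to a single version. To keep a webfarm in sync, run the write variant on one node and pass the generated keys to the remaining nodes via the -validationKey and -decryptionkey parameters.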

 

Using SQL Database Audits for change detection

There are several reasons you may want to audit the data in your SQL databases. It’s helpful when trying to track down a bug in software, trying to piece together the source of unexpected data, or working to meet compliance requirements. Depending upon your needs, there are several ways to accomplish this. If you need auditing in order to track and revert changes to data, trigger-based table auditing is probably best.

However, when you need to generate audit trails for compliance reasons, Microsoft SQL Server has a few built-in ways of doing so. C2 auditing is supported; however, the amount of data generated makes it a non-viable option for most installations. Starting with SQL Server 2008, Microsoft introduced a new high-performance feature called SQL Server Audit. This functions at both the server level (included with all editions) and the database level (Enterprise edition). It will allow you to audit a myriad of server- and database-level functions and can help ensure your SQL footprint is in compliance.

SQL Server Audit is composed of a few different pieces. At the server level, you define an audit that writes to a file, the Windows Application log, or the Windows Security log. Logging to a file has the least performance overhead, and writing to the Security log requires special permissions. From there, you can configure server-level auditing and/or database-level auditing. The auditing rules can be fine-tuned to apply to specific objects in the database and to specific user principals. For this example, I will configure auditing of Update and Delete events on all tables in a database for all users – a scenario that can effectively meet a requirement commonly known as File Integrity Monitoring or Change Detection for audit trails.

First, we’ll set up an audit that logs events to the Application Log. Under Security, right-click Audits and select New Audit. Give the audit an appropriate name and select Application Log as the destination. Click OK, then right-click the audit that is created and select Enable.

Next, go to the database you would like to audit. Under Security, right-click Database Audit Specifications and select New Database Audit Specification. Name the database audit and select the appropriate audit configuration (AppLog). Under Actions, select the actions you would like to audit, one at a time. Set the Object Class to Database, select the database in the Object Name column, and select the [public] database role in the Principal Name column to audit all users. Even though the pop-up window will allow you to select multiple items, each row can contain only one object and principal. Click OK, then right-click the database audit specification that is created and select Enable.
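
If you’d rather script the same configuration, here’s a minimal T-SQL sketch (the audit, specification, and database names are placeholders):

USE [master];
CREATE SERVER AUDIT [AppLog] TO APPLICATION_LOG;
ALTER SERVER AUDIT [AppLog] WITH (STATE = ON);
GO
USE [YourDatabase];
CREATE DATABASE AUDIT SPECIFICATION [ChangeDetection]
FOR SERVER AUDIT [AppLog]
ADD (UPDATE ON DATABASE::[YourDatabase] BY [public]),
ADD (DELETE ON DATABASE::[YourDatabase] BY [public])
WITH (STATE = ON);
GO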

Once enabled, you can test your audit configuration by updating or deleting data in any table in the audited database. Right-click the audit at the server level and select View Audit Logs to see the generated audit events.


Install SCVMM 2012 Console on non-domain machine

Since I work remotely, my workstation is not joined to the corporate domain. This presents various issues for administrative consoles. Some use integrated authentication to communicate with their server counterparts; others allow you to specify the credentials to use when connecting. The worst part is that there does not seem to be any consistency – even among products of the same suite from the same company.

Take SCVMM 2012, for instance. A feature they added based on feedback we provided was the ability to specify the domain credentials the console uses when connecting to the server – similar to what SCOM 2007 R2 allowed. Unfortunately, they still require that the workstation be joined to a domain in order to install the console. Notice I said “a domain” and not “the domain” – it doesn’t matter whether your workstation is part of your corporate domain; Microsoft arbitrarily decided to require any domain-joined workstation as a prerequisite. The worst part is, the console functions just fine on systems that are not domain-joined.

With that rant out of the way, here’s how you can bypass the domain check at installation. Browse to the proper bitness folder for your workstation on the installation media (D:\amd64 or D:\i386). Under the Setup>MSI>Client folder, you’ll find the AdminConsole.msi file. Just double-click it to run. Once installed, the console will allow you to specify your domain credentials when connecting to the VMM Server.
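
Or, from a command prompt (assuming the media layout above on a 64-bit workstation):

msiexec /i D:\amd64\Setup\MSI\Client\AdminConsole.msi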