Using CheckSUR to repair Windows file corruption

Microsoft has developed a System Update Readiness tool that can identify and repair Windows system file corruption which may prevent other updates from being installed. The tool is available for all editions of Windows since Vista/Server 2008 and is published under KB947821. The utility installs as an MSU package on older platforms and is built into Windows 8 and Server 2012 as part of the DISM utility. It is often referred to as CheckSUR – short for Check System Update Readiness.

Once installed, a log is generated under %windir%\Logs\CBS\CheckSUR.log. If CheckSUR is able to repair files automatically, it will do so and report this in the log file. Corruption that cannot be repaired automatically can still be fixed manually with the utility's help. You will find the KB article numbers of the files that could not be repaired in the CheckSUR log file:

=================================
Checking System Update Readiness.
Binary Version 6.1.7601.21645
Package Version 15.0
2012-07-06 13:57

Checking Windows Servicing Packages

Checking Package Manifests and Catalogs
(f)    CBS MUM Corrupt    0x00000000    servicing\Packages\Package_2_for_KB2685939~31bf3856ad364e35~amd64~~6.1.1.2.mum        Expected file name Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7600.16385.mum does not match the actual file name

Checking Package Watchlist
Checking Component Watchlist
Checking Packages
Checking Component Store

Summary:
Seconds executed: 109
Found 1 errors
CBS MUM Corrupt Total count: 1

Unavailable repair files:    servicing\packages\Package_2_for_KB2685939~31bf3856ad364e35~amd64~~6.1.1.2.mum   servicing\packages\Package_2_for_KB2685939~31bf3856ad364e35~amd64~~6.1.1.2.cat

From this log, we can see the corrupt files are part of KB2685939. To repair them, follow these instructions.

1. Download the appropriate update package for KB2685939 for the target system from the Microsoft Download Center.

2. Expand the package using the expand command (this assumes the package was downloaded to C:\temp and that we’re expanding to C:\temp\KB2685939):

expand C:\temp\Windows6.1-KB2685939-x64.msu /f:* C:\temp\KB2685939

3. Expand the cab file extracted from the package to the same directory:

expand C:\temp\KB2685939\Windows6.1-KB2685939-x64.cab /f:* C:\temp\KB2685939

4. Copy the expanded *.mum and *.cat files to %windir%\Temp\CheckSUR\servicing\packages:

copy C:\temp\KB2685939\*.mum %windir%\Temp\CheckSUR\servicing\packages\
copy C:\temp\KB2685939\*.cat %windir%\Temp\CheckSUR\servicing\packages\

5. Re-run the System Update Readiness tool which will use the files in the %windir%\Temp\CheckSUR\servicing\packages folder to repair the corrupt or missing files.
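If you have many servers to check, pulling the affected KB numbers out of the log can be scripted. Here's a minimal Python sketch (an illustration, not part of the tool itself) that extracts KB numbers from the "Unavailable repair files" section of a CheckSUR.log, assuming the log format shown above:

```python
import re

def kbs_needing_repair(checksur_log_text):
    """Return the sorted KB numbers of packages CheckSUR could not repair.

    Scans from the 'Unavailable repair files:' line onward and collects
    every KBnnnnnn token, assuming the log format shown above.
    """
    kbs = set()
    in_summary = False
    for line in checksur_log_text.splitlines():
        if line.startswith("Unavailable repair files:"):
            in_summary = True
        if in_summary:
            kbs.update(re.findall(r"KB\d+", line))
    return sorted(kbs)
```

You would then download and expand each listed KB as described in the steps above.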

Configure VMM 2012 SP1 Network Virtualization for use with Service Management Portal

With the RTM release of the Service Management Portal from Microsoft, hosters can configure VMM 2012 SP1 to allow self-service tenants to create NVGRE networks for use with VMs deployed through the portal. The VMM Engineering Blog has a great post that provides a basis for understanding how Network Virtualization is configured in VMM 2012 SP1.

The process can be summarized as follows:

  1. Create a Logical Network with a Network Site & Subnet for use as the Provider Address.
  2. Create an IP Pool on the Logical Network for the Provider Address space.
  3. Create a Host Port Profile linked to the Network Site created in step 1.
  4. Optional: Create a port classification and profile for the virtual adapter. This will allow you to enable DHCP and Router guard on your templates and hardware profiles.
  5. Create the Logical Switch referencing the Host Port Profile (and Virtual Port Classification and Profile if you created them).
  6. Assign the Logical Switch to your Hyper-V hosts.
  7. Assign the Logical Network to your Cloud.
  8. Create a default VM Network for use with templates and hardware profiles.

To create the logical network, in VMM, go to Fabric > Networking > Logical Network and select Create Logical Network from the ribbon menu. Give the network a name (this is what will appear in the Katal portal), select the “Allow new VM networks created on this logical network to use network virtualization” checkbox, then click Next.

Add a new network site to be used as the Provider Address network. This is what the Hyper-V hosts will use to communicate with one another.

Now that a Logical Network and Site have been created, we’ll need to create an IP Pool for the Provider Addresses. Right-click on your logical network, and select Create IP Pool.

Associate the Pool with the Network Site we created in the previous step.

You can leave the default range and specify gateway and DNS settings if your Hyper-V hosts span multiple subnets. Next, we’ll want to create a Host Port Profile and associate it with the network site. Right-click Fabric > Networking > Native Port Profiles and select Create Native Port Profile. Name it appropriately and change the type to Uplink port profile.

Associate the Port Profile with the Network Site we created on the Logical Network and check the checkbox to Enable Windows Network Virtualization. Click Next and Finish.

Optionally, you can create a virtual port classification and profile. This will allow you to enable/disable virtual adapter features or create tiers of service. Next, we can create the Logical Switch. From Fabric > Networking > Logical Switches select Create Logical Switch in the ribbon. Give the Switch a name and specify extensions as necessary. Associate the Uplink port profile we created in the previous step.

Add your virtual port profiles if you created them, then click Finish to create the switch. We’ll now need to associate the switch with our host(s). Find your host under Fabric > Servers > All Hosts > Hostname, right-click, and select Properties. Click Virtual Switches and then click New Virtual Switch > New Logical Switch. If you have multiple Logical Switches, select the switch we created in the previous step, then select the appropriate adapter(s) and the Uplink Port Profile we created previously. Click OK to assign the logical switch.

Once the job completes, we’ll be able to associate our Logical Network with our cloud which will allow it to show up in the Service Management Portal. Under VMs and Services > Clouds, right-click on the name of your cloud and select Properties. Click Logical Networks, and select the checkbox next to the name of the Logical Network we created in the first step. Click OK.


You can now create VM Networks in the Service Management Portal that are bound to the Logical Network using NVGRE.

The last step is to create a default VM Network to associate with our templates and hardware profiles. Select VMs and Services > VM Networks and click Create VM Network from the ribbon. Give it a name and associate it with the Logical Network we created in step 1.

Choose the option to Isolate using Hyper-V network virtualization with IPv4 addresses for VM and logical networks.

Specify a subnet for the VM Network (it will not actually be used). Select No connectivity on the External connectivity screen and click Finish to create the VM Network. Configure your templates and hardware profiles to use this VM Network in order for them to work properly in the Service Management Portal.

SQL Server Reporting Services error installing DPM 2012 SP1 with remote SQL 2012 database

We use Microsoft Data Protection Manager in our environment to protect our Windows workloads. Recently, DPM 2012 SP1 was released and we have begun the process of upgrading each of our DPM servers to this version, but encountered a problem with the latest server to be upgraded. Though the prerequisite check was successful, DPM would fail to install citing an error with SQL Server Reporting Services on our remote SQL 2012 server:

DPM Setup cannot query the SQL Server Reporting Services configuration

Viewing the error log, we can see the following error attempting to query the SSRS configuration via WMI:

[3/4/2013 12:05:44 PM] Information : Getting the reporting secure connection level for DPMSQL01/MSSQLSERVER
[3/4/2013 12:05:44 PM] Information : Querying WMI Namespace: \\DPMSQL01\root\Microsoft\SqlServer\ReportServer\RS_MSSQLSERVER\v10\admin for query: SELECT * FROM MSReportServer_ConfigurationSetting WHERE InstanceName='MSSQLSERVER'
[3/4/2013 12:05:44 PM] * Exception : => System.Management.ManagementException: Provider load failure

DPM is using WMI to get information about the SSRS installation, and is getting a “Provider load failure” error message. The natural troubleshooting technique here is to attempt to run this query manually via wbemtest from the SQL server itself, and sure enough, we end up with a 0x80041013 “Provider Load Failure” error message:

0x80041013 Provider Load Failure

The SQL Server was originally deployed as SQL 2008 R2 and then upgraded to SQL 2012 SP1. Though there is a KB article describing this issue, there is no update for SQL 2012 SP1. You’ll also notice that the WMI namespace in the log includes v10 – which refers to SQL 2008. So, it seems the underlying problem is an issue with the upgrade from SQL 2008 R2 to SQL 2012 and the WMI namespace.

Rather than open a PSS case to find the root cause, we decided it was probably faster to uninstall SQL entirely, install a fresh instance of SQL 2012, and restore the DPM databases. If you choose to go this route, be sure to take a backup of your SSRS encryption key, DPM databases, master db, msdb, and the SSRS databases. If you don’t, you’ll spend hours reconfiguring reports and setting up SQL security, and you’ll have to run DPMSync to recreate the SQL jobs.

ASP.NET MVC 4 Custom Validation for CIDR Subnet

I recently worked on a rWhois project that required writing an ASP.NET MVC 4 based admin portal. One of the last things I tackled was adding validation to inputs. Adding validation to an MVC model is super-simple with Data Annotations. By adding these annotations to your model, both client- and server-side validation come to life. There’s a great tutorial describing how to accomplish this on the asp.net site:

http://www.asp.net/mvc/tutorials/mvc-4/getting-started-with-aspnet-mvc4/adding-validation-to-the-model

The in-box DataAnnotations class provides numerous built-in validation attributes that can be applied to any property. Things like strings of a specific length, integers within a range, or even regex matching are supported using the built-in validation attributes – and they are automatically wired up when you use Visual Studio to add views with scaffolding. But what about when the built-in attributes don’t quite do what you need?

I needed to create validation that confirms the input is both in proper CIDR format (e.g. 10.0.0.0/8 and not “some text”) and that the supplied CIDR subnet is valid (e.g. 192.168.100.0/24 is a proper network/bitmask, but 192.168.100.50/24 is not). Validating that the input is in the proper format can easily be done with regex. Just add the following annotation to the appropriate property on your model:

[RegularExpression(@"^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(\d|[1-2]\d|3[0-2]))$", ErrorMessage="Not valid CIDR format")]
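As a quick sanity check on what that pattern accepts and rejects, here is the same regex exercised outside the MVC project, in Python (illustrative only):

```python
import re

# the same pattern used in the RegularExpression attribute above
CIDR_FORMAT = re.compile(
    r"^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}"
    r"([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])"
    r"(\/(\d|[1-2]\d|3[0-2]))$"
)

def looks_like_cidr(s):
    """True when s is dotted-quad/prefix with octets 0-255 and prefix 0-32."""
    return CIDR_FORMAT.match(s) is not None
```

Note that the format check alone says nothing about whether the address is actually the network address for that prefix – that's what the custom attribute below is for.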

To check that the supplied input is a valid network/bitmask CIDR combination, I needed custom validation. You can easily extend the ValidationAttribute and implement IClientValidatable to add your custom validation. First, you’ll want to create a new class and reference the DataAnnotations and Mvc namespaces:

using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;
 
namespace MvcApplication1.Validation
{
    public class CIDRSubnet : ValidationAttribute, IClientValidatable
    {
 
    }
}

Next, we’ll want to create the appropriate server-side validation by overriding the IsValid method and adding the CIDR subnet check:

    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        //validate that the value is a valid CIDR subnet
        string[] cidr = ((string)value).Split(new char[] { '/' });
        string[] ip = cidr[0].Split(new char[] { '.' });
        int i = (Convert.ToInt32(ip[0]) << 24) | (Convert.ToInt32(ip[1]) << 16) | (Convert.ToInt32(ip[2]) << 8) | (Convert.ToInt32(ip[3]));
        int bits = Convert.ToInt32(cidr[1]);
        int mask = bits == 0 ? 0 : ~((1 << (32 - bits)) - 1);
        int network = i & mask;
 
        if (i != network) return new ValidationResult("Not a valid CIDR Subnet");
        return ValidationResult.Success;
    }
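For illustration, the same network/bitmask logic can be expressed outside of C#. Here's a Python sketch (not part of the MVC project) that assumes the format regex has already passed:

```python
def is_valid_cidr_subnet(value):
    """Return True when the address portion is the network address for
    the given prefix length: 192.168.100.0/24 passes, 192.168.100.50/24
    does not. Assumes 'value' already matched the CIDR format regex."""
    base, bits = value.split("/")
    octets = [int(o) for o in base.split(".")]
    ip = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    bits = int(bits)
    # build the netmask; a /0 prefix masks nothing
    mask = 0 if bits == 0 else (~((1 << (32 - bits)) - 1)) & 0xFFFFFFFF
    # valid only when applying the mask leaves the address unchanged
    return ip & mask == ip
```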

Now we want to create our client-side validation function. Create a new JavaScript file and make sure it’s referenced in your View:

function ipv4checker(input) {
    //confirm cidr is valid network
    var cidr = input.split('/');
    var base = cidr[0];
    var bits = cidr[1];
    var ip = base.split('.');
    var i = (ip[0] << 24 | ip[1] << 16 | ip[2] << 8 | ip[3]);
    var mask = bits == 0 ? 0 : ((1 << (32 - bits)) - 1) ^ 0xFFFFFFFF;
    var network = i & mask;
 
    if (i == network) { return true; }
    else { return false; }
}

Assuming the jQuery and jQuery unobtrusive validation libraries have already been added to your project, you can then add a method to the validator and wire it up. I’m not passing any parameters here, but you could also include parameters:

 
//custom validation rule - check IPv4
$.validator.addMethod("checkcidr",
    function (value, element, param) {
    return ipv4checker(value);
    });
 
//wire up the unobtrusive validation
$.validator.unobtrusive.adapters.add
    ("checkcidr", function (options) {
        options.rules["checkcidr"] = true;
        options.messages["checkcidr"] = options.message;
    });

Then, you’ll need to implement the GetClientValidationRules method in your custom validation class:

    public IEnumerable<ModelClientValidationRule> GetClientValidationRules(ModelMetadata metadata, ControllerContext context)
    {
        ModelClientValidationRule rule = new ModelClientValidationRule();
        rule.ValidationType = "checkcidr";
        rule.ErrorMessage = "Not a valid CIDR Subnet";
        return new List<ModelClientValidationRule> { rule };
    }

Lastly, add a reference to your custom validation class in the model and add the custom validation attribute to your property:

using System.ComponentModel.DataAnnotations;
using MvcApplication1.Validation;
 
namespace MvcApplication1.Models
{
    public class Allocation
    {
        [Required]
        [RegularExpression(@"^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(\d|[1-2]\d|3[0-2]))$", ErrorMessage="Not valid CIDR format")]
        [CIDRSubnet(ErrorMessage="Not a valid CIDR Subnet")]
        public string IPNetwork { get; set; }
    }
}

Microsoft Service Management Portal User Account Password cannot be reset

We’ve been working with Microsoft for quite some time on their Windows Azure for Windows Server project. Microsoft is bringing Azure technology to their Server platform for hosters to take advantage of. The environment consists of a portal and several providers. The portal uses standard ASP.NET membership for users.

Recently, we had a user who forgot his password. After enabling password reset functionality in the portal, the user received an email containing a link to reset the password, but encountered an error when trying to perform the reset. The following error was logged on the portal server:

Error:MembershipPasswordException: The user account has been locked out.

The aspnet_Membership table in Microsoft.MgmtSvc.PortalConfigStore contains an IsLockedOut field. The value was set to 1 for this user because of the number of incorrect login attempts. Setting it back to 0 allowed the user to update his password:

USE [Microsoft.MgmtSvc.PortalConfigStore]
UPDATE aspnet_Membership
SET IsLockedOut = 0
WHERE email = 'user@domain.org'


Windows Server 2008 Firewall Block rule prevents RPC communication

We recently opened a PSS case regarding an issue we discovered with the Windows Firewall with Advanced Security on Server 2008 SP2. As a web host, we have many customer web servers with various ports open to the Internet. From time to time, nefarious users will test a server’s security. Part of the standard response is to block all access to the server from the offending IP. This is realized by creating a Windows Firewall with Advanced Security rule that blocks traffic on all ports, for all services, with the remote IP scope set to the IP in question.

The problem was uncovered when we noticed backups were failing. The backup program in use leverages a dynamic RPC endpoint for communication, and with the block rule in place, the communication between the customer’s server and the backup server was failing – even though the scope of the block rule was configured to use only the attacker’s IP address. Furthermore, there was a rule specifically allowing communication from the backup server on the dynamic RPC endpoint.

Block Rule:

Rule Name: Block Hacker
———————————————————————-
Enabled: Yes
Direction: In
Profiles: Domain,Private,Public
Grouping:
LocalIP: Any
RemoteIP: 1.2.3.4/255.255.255.255
Protocol: Any
Edge traversal: No
Action: Block

Backup server rule:

Rule Name: Allow Backups
———————————————————————-
Enabled: Yes
Direction: In
Profiles: Domain,Private,Public
Grouping:
LocalIP: Any
RemoteIP: 10.1.1.0/255.255.255.0
Protocol: TCP
LocalPort: RPC
RemotePort: Any
Edge traversal: No
Action: Allow

According to PSS, this is a known issue with Server 2008 SP2. It was fixed in 2008 R2 but apparently will not be fixed in Server 2008. Luckily, there is a workaround. By creating a rule with an action of Secure, allowing it to override block rules, and selecting the computer account of the server in question, we can ensure proper communication:

Rule Name: Fix Server 2008 firewall bug
———————————————————————-
Enabled: Yes
Direction: In
Profiles: Domain,Private,Public
Grouping:
LocalIP: Any
RemoteIP: Any
Protocol: TCP
LocalPort: RPC
RemotePort: Any
Edge traversal: No
InterfaceTypes: Any
RemoteComputerGroup: D:(A;;CC;;;S-1-5-21-2041841331-1236329097-1724550537-522200)
Security: Authenticate
Action: Bypass

The reason this works has to do with the order in which Windows Firewall applies rules – that process is described in detail here: http://technet.microsoft.com/en-us/library/cc755191(v=ws.10).aspx. This also seems to be the reason the communication is blocked – block rules are processed before allow rules, and rules with a broader scope before those with a narrower scope.

Cannot find Newtonsoft.Json error when deploying

We recently revamped our source control, continuous integration, and bug tracking solution at OrcsWeb to take advantage of Git. Part of the solution includes a build server that automatically runs tests against projects and, assuming they pass, packages and deploys the application using Web Deploy. Several of the projects include NuGet packages, and a feature of NuGet (as of 1.6 & 2.0) allows you to exclude these packages when committing to source control. By enabling package restore, NuGet will automatically download any packages listed in packages.config on any system where they’re missing.

After uploading a new MVC 4 project that was using this NuGet Package Restore functionality, I was receiving the YSOD that ASP.NET “Could not load file or assembly ‘Newtonsoft.Json, Version=4.5.0.0’”. I confirmed that NuGet was downloading the packages on the build server, but for some reason the assembly wasn’t being included in the deployment. BlackSpy’s solution for the Newtonsoft.Json error on stackoverflow.com fixed the issue for me as well.

I removed the entry for Newtonsoft.Json version 4.5.6 from packages.config, saved and built the project, then re-added Newtonsoft.Json 4.5.11 from NuGet and committed the changes. The build server picked up the updates, downloaded the missing package, and deployed it to the server.


Running a MVC 4 application on IIS7

Recently tried to get a new MVC 4 application running on IIS7 and ran into an issue that required a hotfix. The IIS7 server was 2008 SP2 fully patched and had both .NET 2.0 and .NET 4.5 installed, however, whenever I attempted to load the application, IIS would return a 403.14 error message about a directory listing being denied.

Many solutions suggested that ASP.NET wasn’t registered properly, and I confirmed that my application pool settings were correct (you need to run Integrated mode or configure ASP.NET to handle all traffic so requests are routed to the controllers). Others recommended code solutions, but Sean Anderson’s solution on this stackoverflow.com post was the resolution for me.

There’s an extensionless URL hotfix (I blogged about this hotfix in relation to 404 errors) for IIS7 that is required: http://support.microsoft.com/kb/980368.

Linking spam sent through shared IIS SMTP server to a user

As a web host, one of the most time-consuming processes is investigating spam sent through mail servers. Many legit websites have forms and other functions that send email to users. If left unchecked, spammers can leverage these to send unsolicited mail.

In our environment, we enable the IIS SMTP role on our web servers and configure them to allow relaying only from localhost with basic authentication. This means that only local sites hosted in IIS can send mail, and they have to provide a username and password to do so. Unfortunately, the IIS SMTP service does not log that username – it’s long been a point of contention with the IIS SMTP service. Most administrators recommend using another service, such as SmarterMail. However, there is a way to extract the authenticated username sending spam.

In order to use this method, you’ll need to capture a packet trace while spam is being sent. This will allow you to see the entire SMTP transaction between client and server. The catch here is that we are using localhost and most packet capture utilities cannot capture loopback traffic. Wireshark has an article that goes into detail about why it can’t capture loopback traffic. There is a utility that we can use called RawCap that will capture this local traffic at the socket level and output it into a format that Wireshark can parse. So, depending on the source of the spam, you’ll either want to use Wireshark (for remote) or RawCap (for local) to capture network data.

RawCap has an interactive prompt to guide you through the capture process.

Once you’ve captured sufficient traffic, you can cancel the capture by hitting Ctrl+C and then open the resulting file in Wireshark for analysis. You’ll likely have a lot of network “noise” that you’ll want to filter out by applying a filter of “smtp”.

From here, we can drill down to the AUTH LOGIN command sent by the client and the 334 response from the server.

To explain what’s happening here: after the EHLO command, the server responds with the verbs it supports. The client then issues the AUTH LOGIN command and the server responds with “334 VXNlcm5hbWU6”, where “VXNlcm5hbWU6” is the BASE64-encoded string “Username:”. The client then responds with the BASE64-encoded username. We can decode this value on base64decode.org to find the username sending spam.
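You don't need a website to decode these values – any BASE64 decoder will do. For example, a short Python sketch (illustrative):

```python
import base64

def decode_auth_token(token):
    """Decode one BASE64-encoded line from an SMTP AUTH LOGIN exchange."""
    return base64.b64decode(token).decode("utf-8")

# The server's 334 challenge decodes to the prompt "Username:";
# the client's reply decodes to the authenticated account name.
```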

Compare Two Directories with Powershell

We use DFS to keep webfarm nodes’ content in sync and ran across a problem where a directory with thousands of files and folders had one missing file on a replica. Here’s a quick script we used to find out what files were missing:

$dir1 = "\\server1\c$\folder1"
$dir2 = "\\server2\c$\folder1"
$d1 = Get-ChildItem -Path $dir1 -Recurse
$d2 = Get-ChildItem -Path $dir2 -Recurse
$results = @(Compare-Object $d1 $d2)

foreach($result in $results)
{
$result.InputObject
}


Directory: \\server1\c$\folder1\subfolder

Mode LastWriteTime Length Name
—- ————- —— —-
-a— 2/16/2012 4:08 PM 162 ~$README.txt

This will output all of the files and directories that exist in only one location.
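For comparison, the same idea can be expressed in Python (an illustrative sketch, not what we ran): walk both trees, collect relative paths, and report entries that exist under only one root. The UNC paths above are just examples; pass whatever two directories you need to compare.

```python
import os

def relative_entries(root):
    """Set of all file and directory paths under root, relative to it."""
    entries = set()
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            full = os.path.join(dirpath, name)
            entries.add(os.path.relpath(full, root))
    return entries

def one_sided(dir1, dir2):
    """Sorted list of paths present under only one of the two trees."""
    a, b = relative_entries(dir1), relative_entries(dir2)
    return sorted(a ^ b)  # symmetric difference
```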