Resolving 404 error using Web Deploy on IIS

I recently used the Web Platform Installer to install Web Deploy 3 on IIS 8 so I could remotely develop a PHP site in WebMatrix. After installing and setting up a publishing profile in WebMatrix, a quick test showed IIS returning a 404 error when attempting to connect with Web Deploy. I confirmed that the IIS Web Management Service (WMSvc) was started and configured properly, and that the user I was connecting with had the proper IIS Manager permissions. WMSvc tracing showed IIS was returning a 404 when attempting to access https://servername:8172/MsDeploy.axd – the Web Deploy handler in IIS. After much troubleshooting, I stumbled across a Stack Overflow post that described the fix. Even though I had installed Web Deploy via the Web Platform Installer, it appears it didn't actually install the Web Deploy handler. To resolve this, I manually ran the MSI installer for Web Deploy and selected the appropriate components. After restarting WMSvc, I was able to successfully connect to IIS from WebMatrix via Web Deploy.

The Web Deploy Windows Installer can be downloaded here:

http://technet.microsoft.com/en-us/library/dd569059(v=ws.10).aspx
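
If you prefer to do the reinstall unattended rather than clicking through the MSI, the standard Windows Installer ADDLOCAL property can pull in every feature, handler included. A rough sketch from an elevated PowerShell prompt (the MSI filename is whatever you downloaded from the link above, so adjust it to match):

# Install all Web Deploy features, including the IIS deployment handler (filename is an example)
msiexec /i WebDeploy_amd64_en-US.msi ADDLOCAL=ALL

# Restart the Web Management Service so it picks up the handler
net stop wmsvc
net start wmsvc

If the handler installed correctly, browsing to https://servername:8172/MsDeploy.axd should now challenge for credentials rather than return a 404.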

Linking spam sent through shared IIS SMTP server to a user

As a web host, one of our most time-consuming tasks is investigating spam sent through our mail servers. Many legitimate websites have forms and other functions that send email to users. If left unchecked, spammers can leverage these to send unsolicited mail.

In our environment, we enable the IIS SMTP role on our web servers and configure it to allow relaying only from localhost with basic authentication. This means that only local sites hosted in IIS can send mail, and they have to provide a username and password to do so. Unfortunately, the IIS SMTP service does not log that username – this has long been a point of contention with the IIS SMTP service. Most administrators recommend using another product, such as SmarterMail. However, there is a way to extract the authenticated username sending spam.

In order to use this method, you'll need to capture a packet trace while spam is being sent. This will allow you to see the entire SMTP transaction between client and server. The catch here is that we are using localhost, and most packet capture utilities cannot capture loopback traffic. Wireshark has an article that goes into detail about why it can't capture loopback traffic. There is a utility called RawCap that will capture this local traffic at the socket level and output it into a format that Wireshark can parse. So, depending on the source of the spam, you'll either want to use Wireshark (for remote traffic) or RawCap (for local traffic) to capture network data.

RawCap has an interactive prompt to guide you through the capture process:
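
RawCap can also be driven non-interactively. The exact arguments vary by version, so treat this as a sketch and check RawCap's own usage output, but an invocation along these lines (loopback address plus an output file of your choosing) is typical:

# Capture loopback traffic at the socket level and write it to a pcap file for Wireshark
RawCap.exe 127.0.0.1 C:\Temp\smtp-loopback.pcap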

Once you've captured sufficient traffic, you can cancel the capture by hitting Ctrl+C and then open the resulting file in Wireshark for analysis. You'll likely have a lot of network "noise" that you'll want to filter out by using a display filter of "smtp":

From here, we can drill down to the AUTH LOGIN command sent by the client, and a 334 response from the server:

To explain what's happening here: after the EHLO command, the server responds with the verbs it supports. The client then issues the AUTH LOGIN command, and the server responds with "334 VXNlcm5hbWU6", where "VXNlcm5hbWU6" is the Base64-encoded string "Username:". The client then responds with the Base64-encoded username. We can decode this value on base64decode.org to find the username sending spam.
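
The same decoding can also be done locally with a couple of lines of PowerShell, which is handy if you'd rather not paste anything credential-related into a third-party site (shown here with the "Username:" prompt as the sample value):

# Decode a Base64 string captured from the SMTP AUTH exchange
$encoded = "VXNlcm5hbWU6"   # replace with the value the client sent
[System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String($encoded))
# Output: Username: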

Server-side workaround for BEAST SSL vulnerability on IIS

Recently, a vulnerability was discovered in the CBC cipher suites used by the SSL and TLS protocols that could allow an attacker to gain access to encrypted information. While the attack is not easily implemented, it will show up on compliance audits, and auditors don't like that. Fortunately, there is a server-side fix for Server 2008 and above that can be easily implemented without breaking compatibility with clients.

More information about the attack and workarounds can be found here: http://blogs.msdn.com/b/kaushal/archive/2011/10/03/taming-the-beast-browser-exploit-against-ssl-tls.aspx.

The workaround is to enable TLS 1.1 and/or 1.2 on servers that support them, and to prioritize cipher suites so RC4 takes precedence over CBC. Server 2008 R2 and above support TLS 1.1 and 1.2 – you can enable those protocols by following the instructions in KB 2588513. You'll also want to change the priority of cipher suites on all Server 2008 and above systems using group policy (either a local group policy object for a single server, or by modifying domain policy in an AD environment).
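
KB 2588513 covers the exact registry changes, but they essentially boil down to creating the protocol keys under SCHANNEL and flipping the Enabled/DisabledByDefault values. A sketch of what that looks like from an elevated prompt – verify it against the KB before running it on production servers, and note that a reboot is required afterward:

# Enable TLS 1.1 and TLS 1.2 for the server side of SCHANNEL (reboot required)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server" /v Enabled /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server" /v DisabledByDefault /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v Enabled /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server" /v DisabledByDefault /t REG_DWORD /d 0 /f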

1. Open Group Policy Editor (locally, Start>Run>gpedit.msc).
2. Browse to Computer Configuration>Administrative Templates>Network>SSL Configuration Settings.
3. Modify SSL Cipher Suite Order: set it to Enabled and enter a comma-delimited list of cipher suites. I recommend the following:

TLS_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_RC4_128_MD5,SSL_CK_RC4_128_WITH_MD5,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P521,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P384,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P521,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P521,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P521,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_256_CBC_SHA,TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA,SSL_CK_DES_192_EDE3_CBC_WITH_MD5,SSL_CK_DES_64_CBC_WITH_MD5

4. Reboot the server for the setting to take effect.
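
As I understand it, the SSL Cipher Suite Order policy ends up as a Functions value under the Cryptography policy key, so once the server is back up you can confirm the new order took effect with a quick registry query (the path reflects my understanding of where the policy lands, so verify it in your environment):

# After the reboot, the cipher suite list from step 3 should appear here
reg query "HKLM\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002" /v Functions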

Maintain protocol in URL Rewrite Rules

The URL Rewrite 2.0 module for IIS7+ is a very powerful tool for manipulating requests to an IIS server. We use it quite heavily with Application Request Routing load balancers in our environment. The combination allows us to perform L7 load balancing of requests. One of the great features of ARR is the ability to perform SSL offloading, which effectively terminates the SSL connection at the ARR node. Accomplishing this is quite simple – you create your rule and use HTTP:// as the scheme to route to the appropriate server farm. However, there are times when you will want to pass through the protocol to the backend servers.

There are a few ways to accomplish this. First, you could create two rules with a condition tracking the HTTPS server variable and route appropriately; however, this doubles the number of rules to maintain. Second, you could use a condition on the CACHE_URL variable and a back reference in the rewritten URL. The drawback there is that you then need to match all of the conditions, which is an issue if your rule depends on a logical "or" match. Lastly, my preference involves using a rewrite map on the HTTPS server variable.

The idea is that we create a rewrite map named MapProtocol that contains two key value pairs – ON = https and OFF = http (I also prefer to set the default value for the rewrite map to http in the off chance the HTTPS variable does not contain a value). Then, we use that rewrite map in the Action url against the HTTPS server variable. The rule will look something like this:

<rule name="ARR Maintain protocol" enabled="true" stopProcessing="true">
  <match url=".*" />
  <conditions>
    <add input="{LOCAL_ADDR}" pattern="10\.1\.1\.10" />
  </conditions>
  <action type="Rewrite" url="{MapProtocol:{HTTPS}}://Webfarm1/{R:0}" />
</rule>
 
<rewriteMap name="MapProtocol" defaultValue="http">
  <add key="ON" value="https" />
  <add key="OFF" value="http" />
</rewriteMap>
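
If you're editing applicationHost.config by hand rather than using the IIS Manager UI, note that the two snippets above live in different collections under the server-level <rewrite> section – routing rules in <globalRules> and maps in <rewriteMaps>. A rough sketch of the surrounding structure (based on my reading of the URL Rewrite schema, so compare it against your own config):

<system.webServer>
  <rewrite>
    <globalRules>
      <!-- ARR routing rule from above goes here -->
    </globalRules>
    <rewriteMaps>
      <!-- MapProtocol rewrite map from above goes here -->
    </rewriteMaps>
  </rewrite>
</system.webServer>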

Troubleshooting ARR 502.3 Errors

Load balancing is critical for any highly available application. In the case of websites, a webfarm fronted by a load balancer can help distribute the load across multiple servers to increase scale and ensure that your application remains online during planned maintenance or in the event of a server or application failure on a particular node. Microsoft provides a free IIS extension called Application Request Routing (ARR) that can perform load balancing of HTTP and HTTPS traffic. At OrcsWeb, we use a cluster of ARR servers to load balance our production sites.

There are plenty of resources available that describe how Application Request Routing works, so I won't go into detail about it here, but how do you troubleshoot when something goes wrong? One of the more common errors generated by ARR is the 502 error code. There are two substatus codes: 502.3 and 502.4.

The 502.4 error is considerably easier to troubleshoot, as it generally means there were no available content nodes to route the request to. This typically occurs when you have a health check configured for the content nodes and it is failing for all of them – thus, there are no healthy content nodes to which ARR can route the request. Obviously, at this point, the easiest solution is to fix whatever's causing the health check to fail on the content nodes. Additionally, ARR has the concept of minimum servers, which can help prevent a health check from taking too many nodes out of rotation. Setting this to at least 1 ensures that users don't receive a 502.4 error (though they may still see errors returned by the backend content node).

The 502.3 error can be a little more difficult to troubleshoot. It effectively means there was a communication issue between the ARR node and the content node. Most of the time it is a timeout due to a long-running request on the content node. This is easy to spot by looking at the web logs. I recommend using LogParser to analyze the web logs, looking for any request with a time-taken value that exceeds the proxy timeout setting configured for the webfarm. You can either increase the value of the proxy timeout, or troubleshoot the web application to find out why the request is taking so long to process. In the query below, replace W3SVC0 with your site's id, and replace *.log with the name of a specific log file if your web logs are large, to help speed up processing:

LogParser.exe "select date, c-ip, cs-method, cs-uri-stem, cs-uri-query, sc-status, sc-substatus, time-taken from C:\inetpub\logs\logfiles\w3svc0\*.log where time-taken > 25000" -i:IISW3C -o:DATAGRID

The 502.3 error can also appear when something else is happening, and when this occurs, it's time to get into deeper troubleshooting. The first thing to do is enable Failed Request Tracing in IIS on the ARR node, then create a rule for all content that trips on 502.3 response codes. It's important to note that only certain modules have tracing enabled by default. To capture tracing information from the URL Rewrite and Application Request Routing modules, open your applicationHost.config file and add the Rewrite and RequestRouting areas to the WWW Server provider under traceProviderDefinitions:

<traceProviderDefinitions>
  <add name="WWW Server" guid="{3a2a4e84-4c21-4981-ae10-3fda0d9b0f83}">
    <areas>
      <clear />
      <add name="Authentication" value="2" />
      <add name="Security" value="4" />
      <add name="Filter" value="8" />
      <add name="StaticFile" value="16" />
      <add name="CGI" value="32" />
      <add name="Compression" value="64" />
      <add name="Cache" value="128" />
      <add name="RequestNotifications" value="256" />
      <add name="Module" value="512" />
      <add name="FastCGI" value="4096" />
      <add name="Rewrite" value="1024" />
      <add name="RequestRouting" value="2048" />
    </areas>
  </add>
  <!-- Truncated for readability -->

When you create your rule, ensure that the new provider areas of WWW Server are selected:

Once you've done that, attempt to reproduce the issue, and a log file will be generated in C:\inetpub\FailedReqLogFiles\W3SVC0 (where 0 is the site id). This file can help tell you where in the IIS pipeline the request is failing – look for warnings or errors returned by modules. For example, here's a log file showing a 0x80070057 error from the ApplicationRequestRouting module:

The underlying error from the ARR module is “There was a connection error while trying to route the request.” So how do we find out what that means? Well, we need to look a little deeper into ARR to understand. ARR will proxy requests on behalf of the client to the content nodes. This means that the request from the client is actually regenerated into a new request by ARR and sent to the content node. Once the content node responds, ARR then repackages the response to send back to the client. To facilitate this, ARR uses the WinHTTP interface. In Server 2008 R2, you can enable WinHTTP tracing via netsh. Run this command to enable tracing:

netsh winhttp set tracing trace-file-prefix="C:\Temp\WinHttpLog" level=verbose format=hex state=enabled

Then recycle the application pool to start logging. To disable tracing, run this command:

netsh winhttp set tracing state=disabled

You will find a log file in the C:\Temp directory named WinHttpLog-w3wp.exe-<pid>.<datetime>.log. Open this file and you will be able to see details of what ARR submitted to WinHTTP when generating the proxied request to send to the content node. You’ll want to search this file for the error mentioned in the Failed Request Tracing log. From the above example, you’ll see the error logged by ARR is 0x80070057 with an error message of “The parameter is incorrect.” Looking through our sample WinHTTP trace file, we find this:

15:15:51.551 ::WinHttpSendRequest(0x164d9a0, “…”, 696, 0x0, 0, 0, 164d740)
15:15:51.551 ::WinHttpAddRequestHeaders(0x164d9a0, “…”, 696, 0x20000000)
15:15:51.551 ::WinHttpAddRequestHeaders: error 87 [ERROR_INVALID_PARAMETER]
15:15:51.551 ::WinHttpAddRequestHeaders() returning FALSE
15:15:51.551 ::WinHttpSendRequest: error 87 [ERROR_INVALID_PARAMETER]
15:15:51.551 ::WinHttpSendRequest() returning FALSE
15:15:51.551 ::WinHttpCloseHandle(0x164d9a0)
15:15:51.551 ::usr-req 0163D520 is shutting down

I replaced the actual header value with "…" in the sample above, but we can see that WinHTTP is failing while trying to put together the request headers for the proxied request to the content node. Further investigation found that this was due to Internet Explorer passing unencoded non-ASCII characters in the Referer header, which violates RFC 5987. To resolve this specific issue, we can either fix the source HTML to encode the characters, or we can modify the routing URL Rewrite rule to always encode the Referer header:

<rule name="www.orcsweb.com">
  <match url=".*" />
  <serverVariables>
    <set name="HTTP_REFERER" value="{UrlEncode:{HTTP_REFERER}}" />
  </serverVariables>
  <action type="Rewrite" url="http://www.orcsweb.com/{R:0}" />
</rule>
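
One gotcha with rules that set server variables: URL Rewrite won't set a variable that hasn't been explicitly allowed. If the rule above throws an error, make sure HTTP_REFERER is listed in the allowedServerVariables collection in applicationHost.config, along these lines:

<system.webServer>
  <rewrite>
    <allowedServerVariables>
      <add name="HTTP_REFERER" />
    </allowedServerVariables>
  </rewrite>
</system.webServer>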

Razor ASP.NET web pages and CSHTML Forbidden errors

Recently, we had a support request come through for Cytanium’s ASP.NET 4.5 beta from a user trying to access an app written for ASP.NET web pages with Razor syntax. After publishing the files, the user was receiving the following YSOD:

Server Error in ‘/’ Application.


This type of page is not served.

Description: The type of page you have requested is not served because it has been explicitly forbidden.  The extension ‘.cshtml’ may be incorrect.   Please review the URL below and make sure that it is spelled correctly.
Requested URL: /testpage.cshtml

Normally, this is indicative of incorrect application pool settings. Razor syntax only works with ASP.NET 4.0 and requires the Integrated Pipeline to function properly. However, you also need the appropriate ASP.NET MVC files on the server – either in the GAC or deployed to your local /bin folder. Most people have ASP.NET MVC GAC'd on their development systems, so the application will work locally without having the appropriate DLLs in the /bin folder of the web application. But that's not necessarily the case on the server side. Per Microsoft's recommendation, ASP.NET MVC is not GAC'd on the servers, as version conflicts could have a wide impact on all sites running on a shared host. Rather, it is recommended to bin-deploy the ASP.NET MVC DLLs to each site. Once the appropriate DLLs are in the /bin folder, and the app is running under ASP.NET 4.0 with the Integrated Pipeline, IIS will serve files written with Razor syntax.
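
For the application pool half of the equation, the settings can be checked or corrected from the command line as well. A quick sketch using appcmd, where "MyAppPool" is a placeholder for the pool the site actually runs under:

# Run the pool on .NET 4.0 with the integrated pipeline ("MyAppPool" is a placeholder)
& "$env:windir\System32\inetsrv\appcmd.exe" set apppool "MyAppPool" /managedRuntimeVersion:v4.0 /managedPipelineMode:Integrated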

Using IIS Debug Diagnostics to troubleshoot Worker Process CPU usage in IIS6

Failed request tracing in IIS7 can help track down many performance issues with websites, but we still have a broad customer base on IIS6. Troubleshooting performance issues in IIS6 was quite difficult until Microsoft released a set of tools that give greater insight through stack trace analysis.

The IIS Debug Diagnostics Tool can help track down CPU and memory issues from a worker process. Microsoft has a nice kb article that goes over the basics as well: http://support.microsoft.com/kb/919791.

1. Install the IIS Debug Diagnostics locally on the system.

2. Open the Debug Diagnostics Tool under Start > Programs > IIS Diagnostics > Debug Diagnostics Tool > Debug Diagnostics Tool.

3. Click Tools > Options And Settings > Performance Log tab. Select the Enable Performance Counter Data Logging option. Click OK.

4. Use task manager to find the PID of the worker process.

5. Select the Processes tab and find the process in the list.

6. Right-click on the process and select Create Full Userdump. This will take a few minutes and a box will pop-up giving you the path to the dump file.

7. Select the Advanced Analysis tab and click the Add Data Files button. Browse to the dump file that was just created and click OK.

8. Select Crash/Hang Analyzers from the Available Analysis Scripts box for CPU Performance and crash analysis. Click Start Analysis.

After a few minutes, a report should be generated containing stack trace information as well as information about any requests executing for longer than 90 seconds. Note that the memory dump will use a few hundred megabytes of space, so be sure to install the tool on a drive with sufficient space for debugging. Also, if the box is under heavy load, you can create the user dump on the system, copy the file to your workstation, and perform the analysis locally.

IIS6: 404 Error serving content with .com in URL

We ran into an issue today where a customer was having problems serving content from a folder named "example.com" – IIS6 was simply returning a 404 error. I immediately suspected something like URLScan, but I eventually found it was due to the execute permissions configured on the parent virtual directory. When the customer configured the virtual directory, they set the execute permissions to "Scripts and executables". This means that IIS will try to run any CGI-compliant executables (.com and .exe files by default) in the virtual directory. In order to run such an application, the executable also needs to be authorized in Web Service Extensions.

However, in this case, the URL simply contained "example.com" as a path segment – http://server/example.com/images/image1.jpg – and we were not trying to run an application. IIS saw the "example.com" in the URL, assumed it was a CGI executable, and attempted to run it. Since the file "example.com" did not exist, IIS returned a 404 error. To correct the issue, we simply set the execute permissions to "None" since the customer was serving static content, though "Scripts only" would also work.

The key here is that there does not need to be a specific mapping for executables. IIS6 will attempt to run any executable if the vdir is configured with "Scripts and executables" permissions.
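
For reference, the same change can be scripted against the IIS6 metabase with adsutil.vbs (in its default location under AdminScripts). The metabase path below is a made-up example (site id 1, a vdir named "myvdir"), so substitute your own:

# Drop the vdir out of "Scripts and executables" by turning off execute permissions (example path)
cscript C:\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ROOT/myvdir/AccessExecute FALSE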