Using SQL Database Audits for change detection

There are several reasons you may want to audit the data in your SQL databases: tracking down a bug in software, piecing together the source of unexpected data, or meeting compliance requirements. Depending upon your needs, there are several ways to accomplish this. If you need auditing in order to track and revert changes to data, trigger-based table auditing is probably best.

However, when you need to generate audit trails for compliance reasons, Microsoft’s SQL Server has a few built-in ways of doing so. C2 auditing is supported, but the amount of data it generates makes it a non-viable option for most installations. Starting with SQL Server 2008, Microsoft introduced a new high-performance feature called SQL Server Audit. It functions at both the server level (included with all editions) and the database level (Enterprise Edition). It allows you to audit a myriad of server- and database-level actions, and can help ensure your SQL footprint is in compliance.

SQL Server Audit is composed of a few different pieces. At the server level, you define an audit that writes to a file, the Windows Application Log, or the Windows Security Log. Logging to a file has the least performance overhead, while writing to the Security Log requires special permissions. From there, you can configure server-level and/or database-level audit specifications. The auditing rules can be fine-tuned to apply to specific objects in the database and to specific user principals. For this example, I will configure auditing of Update and Delete events for all tables in a database for all users – a scenario that can effectively meet a requirement commonly known as File Integrity Monitoring or Change Detection for audit trails.

First, we’ll set up an audit to log events to the Application Log. Under Security, right-click Audits and select New Audit. Give the audit an appropriate name and select Application Log as the destination. Click OK. Then right-click the audit that is created and select Enable.
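
If you prefer to script it, the same audit can be created with T-SQL. Here’s a rough equivalent run from PowerShell – this assumes the SQL Server PowerShell tools (Invoke-Sqlcmd) are available, and the audit name AppLog is just a placeholder:

# Create a server-level audit that writes to the Application Log, then enable it.
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
USE master;
CREATE SERVER AUDIT [AppLog] TO APPLICATION_LOG;
GO
ALTER SERVER AUDIT [AppLog] WITH (STATE = ON);
GO
"@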

Next, go to the database you would like to audit. Under Security, right-click Database Audit Specifications and select New Database Audit Specification. Name the database audit specification and select the appropriate audit (AppLog). Under Actions, select the actions you would like to audit, one at a time. Set the Object Class to Database, select the database in the Object Name column, and select the [public] database role in the Principal Name column to audit all users. Even though the pop-up window will allow you to select multiple items, each row can contain only one object and one principal. Click OK. Then right-click the database audit specification that is created and select Enable.
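
Again, this can be scripted. A minimal sketch, assuming the audit above is named AppLog and the database is called MyDatabase (both placeholders):

# Audit UPDATE and DELETE against the whole database, for all users (public).
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
USE [MyDatabase];
CREATE DATABASE AUDIT SPECIFICATION [AuditChanges]
    FOR SERVER AUDIT [AppLog]
    ADD (UPDATE ON DATABASE::[MyDatabase] BY [public]),
    ADD (DELETE ON DATABASE::[MyDatabase] BY [public])
    WITH (STATE = ON);
"@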

Once enabled, you can test your audit configuration by updating or deleting data in any table in the audited database. Right-click the audit at the server level and select View Audit Logs to see the generated audit events.
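
Since the audit writes to the Application Log, you can also pull the events from PowerShell. A quick check might look like this – SQL Server audit records are typically written under event ID 33205, and the source is MSSQLSERVER for a default instance (named instances use MSSQL$InstanceName):

# Show the most recent SQL Server audit events from the Application Log.
Get-EventLog -LogName Application -Source "MSSQLSERVER" -Newest 50 |
    Where-Object { $_.EventID -eq 33205 } |
    Select-Object TimeGenerated, Message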

 

Install SCVMM 2012 Console on non-domain machine

Since I work remotely, my workstation is not joined to the corporate domain. This presents various issues for administrative consoles. Some use integrated authentication to communicate with their server counterparts, while others allow you to specify the credentials to use when connecting. The worst part is that there does not seem to be any consistency – even among products of the same suite from the same company.

Take SCVMM 2012, for instance. A feature they added based on feedback we provided was the ability to specify the domain credentials the console uses when connecting to the server – similar to what SCOM 2007 R2 allowed. Unfortunately, they still require that the workstation be joined to a domain in order to install the console. Notice I said “a domain” and not “the domain” – it doesn’t matter whether your workstation is part of your corporate domain; Microsoft arbitrarily decided to require any domain-joined workstation as a prerequisite. The worst part is, the console functions just fine on systems that are not domain-joined.

With that rant out of the way, here’s how you can bypass the domain check at installation. Browse to the folder on the installation media that matches your workstation’s architecture (D:\amd64 or D:\i386). Under the Setup\msi\Client folder, you’ll find the AdminConsole.msi file. Just double-click it to run. Once installed, the console will allow you to specify your domain credentials when connecting to the VMM Server:
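
If you’d rather kick it off from a prompt (for example, to capture a verbose install log), something like this should work – the path assumes the x64 media is mounted as drive D:, and the log switch is optional:

# Install the VMM console directly from the MSI, skipping setup.exe's prerequisite checks.
Start-Process msiexec.exe -ArgumentList '/i "D:\amd64\Setup\msi\Client\AdminConsole.msi" /l*v vmmconsole.log' -Wait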

Back Up a Database using MySQL Workbench

It’s possible to back up a MySQL database remotely even if you do not have administrative privileges. Most articles describe running mysqldump directly on the server, but that’s not always possible. Fortunately, you can do this from MySQL Workbench. I was able to back up the WordPress database from my Cytanium Shared Windows Hosting account.

First, you’ll want to download and install the latest version of MySQL Workbench (I used 5.2.39). After installing, you’ll need to configure a Server Administration connection:

Follow the instructions in the wizard by entering the remote host address, username, password and default schema. If your account does not have root privileges, you will want to select “Do not use remote management.” Once complete, double-click on the new connection listed under Server Administration. Under Data Export / Restore, select Data Export:

Select your database, export to a self-contained file, and dump stored routines. Click Start Export when ready. MySQL Workbench will then export the database schema and data to a .sql file you can use to restore your database.
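
Under the covers this is just a remote mysqldump, so you can also run the copy bundled with Workbench yourself. A rough example – the host, user, and database names are placeholders, and the Workbench install path may differ on your machine:

# Dump schema, data, and stored routines from a remote MySQL server to a local file.
& "C:\Program Files\MySQL\MySQL Workbench 5.2 CE\mysqldump.exe" `
    --host=mysql.example.com --user=dbuser -p `
    --routines --single-transaction `
    --result-file=wordpress_backup.sql wordpress_db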

Razor ASP.NET Web Pages and CSHTML Forbidden errors

Recently, we had a support request come through for Cytanium’s ASP.NET 4.5 beta from a user trying to access an app written for ASP.NET Web Pages with Razor syntax. After publishing the files, the user was receiving the following YSOD:

Server Error in ‘/’ Application.


This type of page is not served.

Description: The type of page you have requested is not served because it has been explicitly forbidden.  The extension ‘.cshtml’ may be incorrect.   Please review the URL below and make sure that it is spelled correctly.
Requested URL: /testpage.cshtml

Normally, this is indicative of incorrect application pool settings. Razor syntax only works with ASP.NET 4.0 and requires the Integrated Pipeline to function properly. However, you also need the appropriate ASP.NET MVC/Web Pages assemblies on the server – either in the GAC or deployed to your local /bin folder. Most people have ASP.NET MVC GAC’d on their development systems, so the application will work locally without having the appropriate DLLs in the /bin folder of the web application. But that’s not necessarily the case on the server side. Per Microsoft’s recommendation, ASP.NET MVC is not GAC’d on the servers, as version conflicts could have a wide impact on all sites running on a shared host. Rather, it is recommended to bin-deploy the ASP.NET MVC DLLs to each site. Once the appropriate DLLs are in the /bin folder and the app is running under the ASP.NET 4.0 Integrated Pipeline, IIS will serve files written with Razor syntax.
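
For reference, bin-deploying usually just means copying the Razor/Web Pages assemblies into the site’s /bin folder (Visual Studio 2010 SP1’s “Add Deployable Dependencies” option does essentially the same thing). Here’s a rough sketch – the source path and assembly list below are the typical ASP.NET Web Pages 1.0 / MVC 3 set and may differ for your version:

# Copy the Razor/Web Pages assemblies from a dev machine into the site's /bin folder.
$source = 'C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies'
$dlls = 'System.Web.Razor.dll', 'System.Web.WebPages.dll', 'System.Web.WebPages.Razor.dll',
        'System.Web.WebPages.Deployment.dll', 'System.Web.Helpers.dll',
        'Microsoft.Web.Infrastructure.dll', 'WebMatrix.Data.dll', 'WebMatrix.WebData.dll'
foreach ($dll in $dlls) {
    Copy-Item -Path (Join-Path $source $dll) -Destination '.\bin'
}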

Server 2008 R2 SP1 Hyper-V Dynamic Memory Settings

While working on a recent project for Cytanium Windows VPS Servers, I uncovered a little-documented feature that I thought was new for Windows 8 Hyper-V, but was actually implemented in 2008 R2 SP1. It has to do with the minimum and maximum values for VMs using Dynamic Memory in Hyper-V. The GUI exposes the concepts of startup memory and maximum memory, where startup is both the amount exposed to the VM while booting and the minimum amount of RAM the hypervisor will allocate to the VM, and maximum is the most the VM will consume.

While working through the WMI API, I stumbled across this:

http://msdn.microsoft.com/en-us/library/cc136856(v=vs.85).aspx

Limit – The maximum amount of memory that may be consumed by the virtual system. For a virtual system with dynamic memory enabled, this represents the maximum memory setting.

Reservation – Specifies the amount of memory guaranteed to be available for this VM. For a virtual system with dynamic memory enabled, this represents the minimum memory setting.

VirtualQuantity – The total amount of RAM in the virtual system, as seen by the guest operating system. For a virtual system with dynamic memory enabled, this represents the initial memory available at startup.

So there are actually three settings, where VirtualQuantity and Limit map to the startup and maximum values in the GUI. But what about Reservation? This is actually the minimum amount of memory the hypervisor will allocate for a VM. When you configure startup memory in the GUI or via SCVMM, it’s actually setting VirtualQuantity and Reservation to the same value. The reasoning behind this is simple – Microsoft wants to protect you from yourself. By setting VirtualQuantity to something larger than Reservation, you could potentially hit a scenario where a VM reboots and the host does not have enough memory to satisfy it, forcing the host to power the VM down. This is a non-issue in Windows 8 because of Smart Paging.

On the flip side, the value specified in VirtualQuantity is also the amount of memory reported in the VM during boot. This can confuse some users, because the VM reports only the VirtualQuantity at startup and thereafter only the high watermark of RAM allocated – which is typically less than the maximum available to the VM. To prevent this, we can set VirtualQuantity to the same value as Limit, and then set Reservation to the minimum required to run the operating system. This ensures that the VM always reports the maximum amount of memory available to it, while still allowing the hypervisor to dynamically allocate only what’s necessary to run the workload.

Ben Armstrong has a great post outlining how this can be done via WMI:

http://blogs.msdn.com/b/virtual_pc_guy/archive/2010/09/15/scripting-dynamic-memory-part-5-changing-minimum-memory.aspx
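
The sketch below follows the same idea (it is not his exact script). It targets the root\virtualization (v1) WMI namespace used by 2008 R2 SP1; the VM name and memory values are placeholders, and all values are in MB:

# Set Reservation below VirtualQuantity/Limit for a VM using Dynamic Memory.
$ns = 'root\virtualization'
$vm = Get-WmiObject -Namespace $ns -Class Msvm_ComputerSystem -Filter "ElementName='TestVM'"

# The active (non-snapshot) settings are linked via the Msvm_SettingsDefineState association.
$vssd = $vm.GetRelated('Msvm_VirtualSystemSettingData', 'Msvm_SettingsDefineState',
                       $null, $null, $null, $null, $false, $null) | Select-Object -First 1
$mem  = $vssd.GetRelated('Msvm_MemorySettingData') | Select-Object -First 1

$mem.Reservation     = 512    # minimum the hypervisor will always allocate
$mem.VirtualQuantity = 2048   # what the guest sees at boot
$mem.Limit           = 2048   # dynamic memory maximum

$svc = Get-WmiObject -Namespace $ns -Class Msvm_VirtualSystemManagementService
$svc.ModifyVirtualSystemResources($vm.__PATH, @($mem.GetText(1))) | Out-Null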

Once you change these values, the GUI actually recognizes the change and warns that modifying the settings will revert to default behavior:

But the actual values behind the scenes:

Limit:                  2048
Reservation:       512
VirtualQuantity:  2048

Shortly after booting, Hyper-V recoups the unused RAM:

But the VM still reports the high watermark:

One potential downside to doing this is that the amount of in-use RAM can be reported incorrectly inside the VM. However, based on my testing, this occurs when using Dynamic Memory via the traditional settings as well. The problem is that Windows calculates in-use RAM by subtracting available RAM from total RAM. So, for the above VM, the amount of in-use RAM is reported as ~1.8GB, rather than the ~600MB that’s actually in use by the VM at startup. Do note, however, that this occurs any time a VM using Dynamic Memory bursts above its startup value – the VM always reports the high watermark and encounters the same miscalculation of available memory if memory demand later decreases.