On the Twelfth Day…

Bacon wrapped frog legs (twelve of them) for Christmas.  No more drumming for these guys!!

What could be better than bacon wrapped frog legs?  Oh yeah, more Virtual lab setup.

We will delve into setting up a SQL cluster today.  We will also cover some high-level tips for dealing with VirtualBox.  This will be good information and the type of stuff I would have liked to have known heading into setting up a virtual lab.

Seasonal cleaning first.

On the Twelfth Day of pre-Christmas…

My DBA brought to me a virtual SQL cluster.  And with that cluster, we have a few tidbits for using VirtualBox.

The first tidbit is an administration aid.  Occasionally it is good to have similar machines grouped together.  It is also handy to be able to start multiple virtual machines at the same time.  Both are accomplished through groups in VirtualBox.

Here you can see some of the groups that I have created.  If I right-click on a machine name, I will be presented with a menu that has the Group option.

Once I have a group created, highlighting the group name presents a few different options, as shown in the following image.

The notable options here are “Ungroup”, “Rename Group”, and “Add Machine.”  Another option is “Start.”  Though this option is also present on the machine menu, the behavior here is different: it starts the entire group.  This can be a handy tool when dealing with a cluster, for instance.

The next handy tidbit is the snapshot.  A snapshot allows a point-in-time image of the VM to be taken so different configurations can be tested – and then quickly reverted if necessary.  Here is what I have for one of my VMs in the snapshot manager.

From this very same screen you can also see one of the many methods available to create a clone of a virtual machine.  The clone icon is the little button above the right-hand pane that looks like a sheep.  Cloning a VM is a quick way to create several machines for various purposes.  As you will hear from many people – you should build a base image first, then run sysprep against it.  Sysprep is necessary to help prevent problems (such as duplicate machine SIDs) down the road.

The next tidbit for today is in regard to the file locations for virtual machines and virtual disks.  I recommend changing the default path for the VM files.  This can be done through the Preferences option on the File menu.  Shown in the attachment is what it may look like if you have not changed it.  Notice that the default path goes to your user profile directory.

Ignore the red text on this page for now.  We will not be discussing the Proxy.

The last tip is in the network settings within the preferences that we already have open.  In the network settings, we can find an option to configure DHCP settings for the Host-Only Ethernet Adapter.  These are settings you may want to configure to ensure you have more control over the environment.  It is also helpful when looking to configure those IP settings for the FreeNAS that we have already discussed.

As I wrap up these tidbits, I have decided that this is a lot of information to soak in at this point.  So in the spirit of Christmas, I have decided to finish off the clustering information in a 13th day post.  This final post may or may not be available on Christmas day.  Worst case it will be available on the 26th.

Part of that reason is I want to rebuild my lab following the instructions I will be posting and I need time to test it.  I want the instructions to be of great use.

Please stay tuned as we conclude this series very soon.

On the Eleventh Day…

Yesterday we had an introduction into setting up a virtual lab to help the DBA learn and test new technologies while improving his/her own skill set.

Today we will continue to discuss the building of a virtual lab.  We will get a little closer to the SQL portion of things as we install a familiar operating system for SQL Server.

The operating system will be Windows Server 2008, and the version of SQL Server will be 2008 R2.  I chose these specifically because at the time I built out my lab, I was setting up the environment to help me study for the MCM exams.

As a sidebar, I was just informed by a friend of another blog series that is also currently discussing setting up Virtual Machines in Virtual Box.  Fortunately, his series is based on Windows 2012 and SQL 2012 – so there is a bit of a difference.  The author of that series is Matt Velic and you can read his articles on the topic here.

I’ll be honest, upon hearing that news I had to go check out his articles to make sure I wasn’t doing the exact same thing.  And while there may be a little overlap, it looks like we have different things that we are covering.

And now that brings us to recap time.

On the Eleventh Day of pre-Christmas…

The next prerequisite for this lab is to install a domain controller and Active Directory.  For this domain controller, I have the following VirtualBox settings.

  • A single Dynamic Virtual Disk of 20GB
  • 2 Network Adapters (1 NAT and 1 Internal)
  • 1024 MB memory

To install the operating system, we will mount the ISO image the same as we did for the FreeNAS in yesterday’s post.  This is a standard Windows setup, and I will not cover it.

Once you have installed the operating system, the first thing to do is to install the guest additions for Virtual Box.

With guest additions installed, next we will turn to the network adapters.  I have two adapters installed for good reason.  One adapter is visible to the virtual network and will be used for the VMs to talk to each other.  The second adapter is installed so I can get Windows validated and so patches can be downloaded and installed.

Speaking of patches, this is where we want to make sure the operating system is patched.  Run Windows Update, finish all of the requisite reboots, and then come back to the network control panel.  Prior to installing the domain, disable the external NIC.  We do this to limit the potential for errors when joining the subsequent machines to the domain.

For the Internal adapter, I will also configure a static IP address as shown here.

Let’s now set up the domain and domain controller on this machine.  From Server Manager, right-click Roles and select Add Roles.  From the new screen, select Active Directory Domain Services and DNS Server.

You are now ready to configure your domain.  I am going to allow you to use your favorite resource for the directions on configuring a domain in Windows 2008.  After the domain has been configured, then enable the external network adapter.

The final step is to configure DNS.  The main concern in DNS is to configure the reverse lookup zones.  I have three subnets (network address ranges) that I will configure.  The relevance of these three zones will become apparent in the final article of the lab setup mini-series.  The configurations will be along the lines of this next screenshot.

This gets us to where we can start building our SQL Cluster.  We will cover that in the next installment.

On the Tenth Day…

Silver and Gold have a way of really bringing the look and feel of the Christmas season.

Silver and Gold also seem to represent something of greater value.

We are now into the final three articles of the 12 Days of pre-Christmas.  And with these three articles, I hope to bring something that is of more value than anything shared so far.

Of course, the value of these articles is subjective.  I have my opinion as to why these are more valuable.  I hope to convey that opinion as best as possible to help bring out as much value as can be garnered from these articles.

Let’s first recap what we have to date.

On the Tenth Day of pre-Christmas…

My DBA gave me an education.  Sure, every day so far in this series could possibly be an education.  This is an education via a lab.  Every DBA should have a lab to be able to test features and hone skills.  A lab is a better place to do some of the testing that needs to be done than the DEV, QA, or even production environments.

Think about it, do we really want to be testing the setup of clustering in the DEV environment and potentially impact the development cycle?  I’d dare say no.

Unfortunately, reality does not always allow for a lab environment to be accessible to the DBA.  So the DBA needs to make do with other means.  It is due to these types of constraints that I am devoting the next three days to the setup of a lab.  This lab can even be created on a laptop.  I created this lab on my laptop with only 8GB of RAM.  I was quite pleased to see that it performed well enough for my testing purposes.

We will begin with an introduction to the technology used – VirtualBox.  I will also discuss the creation of enough virtual machines to create a SQL cluster (domain controller, two SQL boxes, and a NAS) along with the configuration steps to ensure it will work.

For this lab, we will be using VirtualBox.  You can download VirtualBox here.  And yes, the tool is one that is provided by Oracle.  Two of the reasons I want to use VirtualBox are the ability to install multiple operating systems and the fact that the tool is currently free.  Another benefit is that I can easily import virtual machines created in VMware as well as Microsoft Virtual Server/Virtual PC (I have not tested any created in Hyper-V).

While you are downloading the Virtual Box app, download the Extension Pack as well.  Links are provided for the extension pack on the same page as the application download.  Be sure to download the Extension Pack for the version of Virtual Box you download.

The version of VirtualBox I will be using for this article is 4.2.2.  As of the writing of this article a new version has been released – 4.2.6.  The differences in versions may cause the instructions in these articles to be inaccurate for 4.2.6.  You can use whichever version you deem appropriate.  I just won’t be covering version 4.2.6 and don’t know if the screens or the settings are different.

You can check your version in the Help > About menu.

For this lab, we have a few things that will be required prior to setting up the SQL Cluster.  Two big components of this infrastructure are Storage and a Domain.  We are going to simulate shared storage through the use of FreeNAS.  We will be discussing FreeNAS today.

For starters, we can download FreeNAS from here.  You might be able to find a few configuration guides online for FreeNAS.  Most of them seemed to be for really old versions and were less than usable for the version that I had downloaded.  All settings to be discussed today are pertinent to FreeNAS-8.3.0-RELEASE-x64 (r12701M).

To set up FreeNAS, we will need to have a virtual machine configured with the following settings.

  • A BSD VM with FreeBSD as the version.
  • Ensure that “Enable IO APIC” is the only motherboard setting checked.
  • Three Virtual Disks (1 for NAS OS, 1 for SAN Storage, and another for a Quorum)
  • 512 MB memory
  • 2 Network Adapters (1 Internal and 1 connected to the Host-Only Adapter)

Despite FreeNAS’s actual disk requirements being rather small, any fixed disk size less than 2GB causes mount errors.  Any amount of memory less than 512MB also causes a mount problem.  These settings are the minimum configurations to save the hair on your head.

The two network adapters are more of a strong suggestion.  I was able to get it to work with only one adapter, but it was more hassle than it was worth.  I found it easier to configure for use by the cluster later if I had two adapters.  The two-adapter configuration also allows easier administration from within the VM environment as well as from the host machine.

One other thing to do is mount the downloaded FreeNAS ISO to the CD drive that is created by default with the VM.  I mount the ISO before booting by opening the settings for the VM within VirtualBox.  On the storage screen, highlight the “Empty” CD icon in the middle, then click on the CD menu icon on the far right as shown below.

Navigate to the folder where the FreeNAS ISO is saved and then click ok until you are back at the Virtual Box manager screen.  You are now ready to start the machine and finish the install and then configure.

Once powered on, you should eventually come to the following screen.

Select to Install/Upgrade.  From here, you will see a few more prompts such as the next screen to select the installation location.

The installation options should be pretty straightforward for the IT professional, so I will not cover all of the installation prompts.  Once the install is finished, you will need to reboot the VM and unmount the installation media.  The system will then come to the following screen.

Now that we are at the console screen, the next step is to configure the Network Interfaces.  You can see that I have already done this based on the IP addresses seen at the bottom of the screen.  I will leave the configuration of the IP addresses to you.  Both the internal network and the host-only network will need to be configured.  The host network should be the second adapter.  Keep track of the IP addresses that have been configured.  We will need to use them later.

In a browser window we will now start configuring the storage to be used by our lab.  In the address bar, input the address that was configured for the host-only network adapter.  When that page loads, the first thing we need to do is change the admin password.

The default password is empty.  Pick a password you will remember and that is appropriate.  With that done, we can configure the storage.

The next setting I want to configure is iSCSI.  In order to use the volumes that we create, we must enable the iSCSI service.  In the top section, click the Services button.  This will open a new tab in the web browser.  On the Services tab, we need to toggle the slider for iSCSI to the “ON” position as shown in the image.

Once toggled, we can configure the iSCSI settings for the volumes we will now create.  From here, we click on the storage tab.  Next, click on the Volume Manager Button.  In order for the disks to be imported, we have to use volume manager.  The Import Volume and Auto Import Volume must serve other purposes – but they don’t work for importing a new volume.  Here is a screenshot demonstrating what needs to be configured.

With the Volume created, a ZFS volume must next be created from within the storage management.  We do this by clicking the “Create ZFS Volume” icon next to the main volume we just created.  This icon is illustrated as the icon on the far right in the next image.

Once that icon is clicked, you will be presented with a new dialog.  The dialog is demonstrated in the above image.  Give the Volume a Name and then give it a size.  Note that you must specify a storage unit (m or g for example) or you will receive a pretty red error message.

Now go back to the Services tab where we enabled iSCSI.  There is a wrench icon next to the toggle to enable the service.  Click on this wrench and a new tab will be opened (again within the FreeNAS webgui) and the focus will be switched to this new tab.  On the “Target Global Configuration” ensure that Discovery Auth Method is set to “Auto.”  If it is not, make the change and click save at the bottom.

Next is the Portals.  The portals should be empty so we will need to add a portal.  By default, only one IP address is displayed for configuration on a new Portal entry.  We want to configure two IP addresses.  First, select from the IP Address drop down on the new window that opened when clicking on “Add Portal.”  Then select “Add extra Portal IP”.

Next is to configure an Initiator.  For this lab, I created one Initiator, specifying ALL for both the Initiators and the Authorized Network as shown here.

With an initiator and a portal in place, we now proceed to the configuration of the Targets.  I have configured three targets and the main difference is in the name.  They should be configured as shown here.

Almost done with the setup for the storage.  It will all be well worth it when we are done.  We need to configure Device Extents and then Associate the targets, then we will be done.

Like with the Targets, I have three device extents configured.  The configuration for each is the same process.  I want to give each a name that is meaningful and then associate the extent to a disk that we imported earlier.

Last for this setup is the Target-to-Extent association.  This is a pretty straightforward configuration.  I named my targets the same as the extents so there was no confusion as to which should go with which.

That wraps up the configurations needed to get the storage working so we can configure a cluster later on.  Just getting through this configuration is a pretty big step in getting the lab created for use in your studies and career enhancement.

Next up in this series is to show how to configure (in limited detail) a domain and DNS, and then to install and configure a cluster.  Stay tuned and I will even throw in a few tidbits here and there about VirtualBox.

I didn’t include every screenshot possible throughout the setup of FreeNAS and the configuration of iSCSI.  Part of the fun and education of a lab is troubleshooting and learning as you go.  If you run into issues, I encourage you to troubleshoot and research.  It will definitely strengthen your skill-set.

On the Ninth Day…

It’s the end of the world as we know it.  And as the song goes…I feel fine!  But hey, we didn’t start the fire.

Those are a couple of songs that pop into my head every time somebody starts talking doomsday and doomsday prophecies.

If you are reading this, I dare say that the world is still turning.  And that is a good thing because that gives us a chance to talk about the 12 days of pre-Christmas.

Today we will be talking about a tool that can be at the DBA’s disposal to help in tracking performance as well as problems.

First there are a couple of items of housekeeping.  The first item is that I only realized with this post that the post on the first day was wrong.  I had miscalculated my twelve days to end on Christmas day.  That is wrong!  Counting down from that first post on the 13th means the 12th day will end up on December 24th.  Maybe I will throw in a bonus 13th-day post, or delay a post for a day, or just go with it.  It will be a mystery!

Second item is naturally the daily recap of the 12 days to date.

On the Ninth Day of pre-Christmas…

My DBA gave to me a Data Collection!

If only my DBA had told me that I would need to have Management Data Warehouse (MDW) preconfigured.  Well, that is not a problem.  We can handle that too.  For a tutorial on how to setup MDW, I am going to provide a link to one that has been very well written by Kalen Delaney.

The article written by Kalen covers the topic of MDW very well all the way from setting up the MDW, to setting up the Data Collectors, and all the way down to the default reports you can run for MDW.  The MDW and canned Data Collectors can provide some really useful information for your environment.

What I want to share though is a means to add custom data collections to your MDW.  To create a custom collection, we need to rely on two stored procedures provided to us by Microsoft.  Those stored procedures are: sp_syscollector_create_collection_item and sp_syscollector_create_collection_set.  Both of these stored procedures are found in the, you guessed it, msdb database.

Each of these stored procedures has a number of parameters to help in the creation of an appropriate data collection.  When creating a data collection, you will first create the collection set, and then you add collection items to that set.

There are a few notable parameters for each stored procedure that I will cover.  Otherwise, you can refer back to the links for each of the stored procedures to see more information about the parameters.

Starting with the sp_syscollector_create_collection_set stored procedure, I want to point out the @schedule_name, @name, and @collection_mode parameters.  The name is pretty obvious – make sure you have a distinguishable name that is descriptive (my opinion) or at least have good documentation.  The collection mode has two values.  As noted in the documentation, you will want to choose one value over the other depending on the intended use of this data collector.  If running continuously, just remember to run in cached mode.  And lastly is the schedule name.  This will help determine how frequently the job runs.

Unfortunately, the schedule names are not found in the documentation; rather, you are directed to query the sysschedules table.  To help you find those schedules, here is a quick query.
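A minimal sketch of such a query – simply listing what msdb exposes – might look like this:

```sql
-- List the shared schedules available for data collection sets
SELECT schedule_id,
       schedule_uid,
       name,
       enabled
FROM msdb.dbo.sysschedules
ORDER BY name;
```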

Now on to the sp_syscollector_create_collection_item stored procedure.  There are three parameters that I want to lightly touch on.  For the rest, you can refer back to the documentation.  The parameters of interest here are @parameters, @frequency and @collector_type_uid.  Starting with the frequency parameter, this tells the collector how often to upload the data to the MDW if running in cached mode.  Be careful here to select an appropriate interval.  Next is the parameters parameter, which is really the workhorse of the collection item.  In the case of the custom data collector that I will show in a bit, this is where the T-SQL query will go.

Last parameter to discuss is the collector type uid.  Like the schedule for the previous proc, the documentation for this one essentially refers you to a system view – syscollector_collector_types.  Here is a quick query to see the different collector types.
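A sketch of that lookup is just a select against the system view:

```sql
-- List the available collector types and their UIDs
SELECT collector_type_uid,
       name
FROM msdb.dbo.syscollector_collector_types;
```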

The collector type that I will be using for this example is Generic T-SQL Query Collector Type.  A discussion on the four types of collectors can be reserved for another time.

Let’s move on to the example now.  This custom data collector is designed to help troubleshoot deadlock problems.  I want to accomplish this by querying the system_health extended event session.

I can query for deadlock information direct to the system_health session using a query like the following.
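A sketch of such a query follows.  It reads the ring_buffer target of the system_health session; note that depending on your build, the deadlock XML may need to be pulled with .value rather than .query:

```sql
-- Pull xml_deadlock_report events from the system_health ring buffer
SELECT CONVERT(VARCHAR(4000),
           xed.event_data.query('(data/value/deadlock)[1]')) AS DeadlockGraph
FROM (SELECT CAST(st.target_data AS XML) AS target_data
      FROM sys.dm_xe_sessions s
      INNER JOIN sys.dm_xe_session_targets st
              ON s.address = st.event_session_address
      WHERE s.name = 'system_health'
        AND st.target_name = 'ring_buffer') AS tab
CROSS APPLY tab.target_data.nodes(
       'RingBufferTarget/event[@name="xml_deadlock_report"]') AS xed(event_data);
```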

You may notice that I have converted to varchar(4000) from XML.  This is in large part to make sure the results will play nicely with the data collector.  Now to convert that to a query that can be used in the @parameters parameter, we get the following.

With this query, we are loading the necessary schema nodes that correlate to the Data Collector Type that we chose.  Since this parameter is XML, the schema must match or you will get an error.  We are now ready to generate a script that can create a deadlock data collector.
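A sketch of that creation script is below.  The names, schedule, and retention values are my own illustrative choices, and the collector type UID shown is the one used in Books Online examples for the Generic T-SQL Query Collector Type – verify it against syscollector_collector_types on your own instance:

```sql
-- Sketch: create a collection set and a Generic T-SQL Query collection item
DECLARE @collection_set_id INT,
        @collection_set_uid UNIQUEIDENTIFIER,
        @collection_item_id INT;

EXEC msdb.dbo.sp_syscollector_create_collection_set
     @name = N'SystemHealthDeadlock',
     @collection_mode = 1,              -- 1 = non-cached, upload on schedule
     @days_until_expiration = 14,       -- retention in the MDW
     @schedule_name = N'CollectorSchedule_Every_15min',
     @collection_set_id = @collection_set_id OUTPUT,
     @collection_set_uid = @collection_set_uid OUTPUT;

-- The @parameters XML must match the schema for the chosen collector type
DECLARE @params XML = N'
<ns:TSQLQueryCollector xmlns:ns="DataCollectorType">
  <Query>
    <Value>/* the system_health deadlock query goes here */</Value>
    <OutputTable>systemhealthdeadlock</OutputTable>
  </Query>
</ns:TSQLQueryCollector>';

EXEC msdb.dbo.sp_syscollector_create_collection_item
     @collection_set_id = @collection_set_id,
     @collector_type_uid = N'302E93D1-3424-4BE7-AA8E-84813ECF2419',
     @name = N'SystemHealthDeadlock',
     @frequency = 900,                  -- seconds between uploads (cached mode)
     @parameters = @params,
     @collection_item_id = @collection_item_id OUTPUT;
```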

Upon creation, this will create a SQL Agent job with the defined schedule.  Since this is a non-cached data collector set, the Agent job will adhere to the schedule specified and upload data on that interval.

Now all we need to do is generate a deadlock to see if it is working.  It is also a good idea to introduce you to the table that will be created due to this data collector.  Once we create this collector set, a new table will be created in the MDW database.  In the case of this collector set, we have a table with the schema and name of custom_snapshots.systemhealthdeadlock.

This new table will have three columns.  One column represents the DeadlockGraph as we retrieved from the query we provided to the @parameters parameter.  The remaining columns are data collector columns for the collection date and the snapshot id.

Now that we have covered all of that, your favorite deadlock query has had enough time to finally fall victim to a deadlock.  We should also have some information recorded in the custom_snapshots.systemhealthdeadlock table relevant to the deadlock information (if not, it will be there once the agent job runs, or you can run a snapshot from SSMS of the data collector).  With a quick query, we can start looking into the deadlock problem.
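A minimal sketch of that query, assuming the MDW database is named MDW (the bookkeeping column names may vary slightly by version):

```sql
-- Review collected deadlock graphs from the custom collector table
SELECT snapshot_id,
       collection_time,
       CAST(DeadlockGraph AS XML) AS DeadlockGraph
FROM MDW.custom_snapshots.systemhealthdeadlock
ORDER BY collection_time DESC;
```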

This query will give me a few entries (since I went overkill and created a bunch of deadlocks).  If I click the DeadlockGraph cell in the result sets, I can then view the XML of the DeadlockGraph, as in the following.

Code to generate deadlock courtesy of “SQL Server 2012 T-SQL Recipes: A Problem-Solution Approach” by Jason Brimhall, Wayne Sheffield et al (Chapter 12, pages 267-268).  If you examine the deadlock graph you will see the code that generated the deadlock.

Since this is being pulled from the ring_buffer target of the system_health session, it can prove useful to store that data into a table such as I have done.  The reason being that the ring buffer can be overwritten; with good timing on the data collector, we can preserve this information for later retrieval and troubleshooting.  Deadlocks don’t always happen at the most opportune time, and they are even less likely to occur when we are staring at the screen waiting for them to happen.

As you read more about the stored procedures used to create a data collector, you will see that there is a retention parameter.  This helps prevent the table from getting too large on us.  We can also ensure that an appropriate retention is stored for these custom collectors.


Creating a custom data collector can be very handy for a DBA, especially in times of troubleshooting.  These collectors are also quite useful for trending and analysis.  Consider this a tool in the chest for the poor man. 😉

Enjoy and stay tuned!

All scripts and references were for SQL 2008 R2.  The Deadlock script was pulled from the 2012 book, but the script runs the same for the 2008 version of the AdventureWorks database.

On the Eighth Day…

Today’s post is merely an illusion.  The illusion being that we have finally stopped talking about the msdb database.  I’ll explain about that later in this post.

This should be a good addition to the script toolbox for those Mere Mortal DBAs out there supporting their corporate SSRS environment.  Everybody could use a script now and then that helps them better support their environment and perform their DBA duties, right?

No reading ahead now.  We’ll get to the script soon enough.  First, we have a bit of business to cover just as we normally do.

We need to quickly recap the first seven days thus far (after all, the song does a recap with each day).


  1. Runaway Jobs – 7th Day
  2. Maintenance Plan Gravage – 6th Day
  3. Table Compression – 5th Day
  4. Exercise for msdb – 4th Day
  5. Backup, Job and Mail History Cleanup – 3rd Day
  6. Service Broker Out of Control – 2nd Day
  7. Maint Plan Logs – 1st Day

On the Eighth Day of pre-Christmas…

My DBA gave to me a means to see Report Subscriptions and their schedules.

One of the intriguing points that we find with having a reporting environment is that we also need to report on that reporting environment.  And one of the nuisances of dealing with a Reporting Services Environment is that data like report subscription schedules is not very human friendly.

Part of the issue is that you need to be fluent with math.  Another part of the issue is that you need to be a little familiar with bitwise operations in SQL Server.  That said, it is possible to get by without understanding both very well.  And as a last resort, there is always the manual method of using Report Manager to check the subscriptions for each of the reports that have been deployed to that server.  Though, I think you will find this to be a bit tedious if you have a large number of reports.

I have seen more than one script that provides the schedule information for the subscriptions without using math and just relying on the bitwise operations.  This tends to produce a lot of repetitive code.  The method works, I’m just not that interested in the repetitive nature employed.

Within SQL Server you should notice that several tables, views, and processes employ the powers of 2 – the base-2, or binary, number system.  This is natural since binary is so integral to computer science in general.  Powers of 2 translate to binary fairly easily and then integrate well with bitwise operations.

The following table demonstrates the powers of 2 and conversion to binary.

Power of 2    Value    Binary
0             1        1
1             2        10
2             4        100
3             8        1000
4             16       10000
5             32       100000
6             64       1000000
7             128      10000000
8             256      100000000

Getting to the values between the powers of 2 listed above is a matter of addition: we add one power of 2 to another.  So if I need a value of 7, then I need 2^0 + 2^1 + 2^2 = 1 + 2 + 4.  This results in a binary value of 0111.  This is where bit comparisons come into play: we can use bitwise operations (read more here) to quickly figure out which powers of 2 were summed to reach an end value of 7 (so I don’t really need to know a lot of math there 😉).

How does this Apply to Schedules?

This background has everything to do with scheduling in SSRS.  Within the ReportServer database, there is a table called Schedule in the dbo schema.  This table has multiple columns that store pieces of the Subscription Schedule.  The three key columns are DaysofWeek, DaysofMonth and Month.  The values stored in these columns are all sums of the powers of 2 necessary to represent multiple days or months.

For instance, you may see the following values

DaysOfWeek    DaysOfMonth    Month
62            135283073      2575

These values are not friendly to normal every day mere mortal DBAs.  The values from the preceding table are shown below with the human friendly data they represent.

DaysOfWeek     62           Monday, Tuesday, Wednesday, Thursday, Friday
DaysOfMonth    135283073    1, 8, 9, 15, 21, 28
Month          2575         January, February, March, April, October, December

That is better for us to understand, but not necessarily better to store in the database.  So, I hope you can see the value of storing it in a numeric representation that is easily deciphered through math and TSQL.
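As a small illustration of the bitwise approach – assuming the SSRS convention of Sunday = 1 through Saturday = 64 – the DaysOfWeek value of 62 can be unpacked like this:

```sql
-- Decode DaysOfWeek = 62 using bitwise AND against the powers of 2
SELECT dw.DayName
FROM (VALUES (1, 'Sunday'), (2, 'Monday'), (4, 'Tuesday'),
             (8, 'Wednesday'), (16, 'Thursday'), (32, 'Friday'),
             (64, 'Saturday')) AS dw(DayValue, DayName)
WHERE 62 & dw.DayValue = dw.DayValue;
-- Returns Monday through Friday
```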

Without further ado, we have a script to report on these schedules without too much repetition.

Consider this as V1 of the script with expected changes coming forth.

I have set this up so a specific report name can be provided or not.  If not provided, the query will return all scheduling information for all reports.

Through the use of a numbers table (done through the CTE), I have been able to create a map table for each of the necessary values to be parsed from the schedule later in the script.  In the creation of that map table, note the use of the power function.  This map table was the critical piece in my opinion to create a script that could quickly decipher the values in the schedule and provide something readable to the DBA.
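The shape of that map table can be sketched like this – a simplified version of the idea, not the full script:

```sql
-- Sketch: numbers CTE feeding POWER(2, n) to build a bit-to-day map
;WITH Nums AS (
    SELECT TOP (31) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
    FROM master.sys.all_columns
)
SELECT n AS BitPosition,
       POWER(2, n) AS BitValue,   -- the value stored when this day is selected
       n + 1 AS DayOfMonth
FROM Nums;
```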


I did this script because I feel it important to know what reports are running and when they are running.  Management also likes to know that information, so there is value in it.  The scripts I found on the web used the bitwise operation piece but relied on a lot of repetitive code to determine each weekday and month.

An alternative would be to perform a query against the msdb database since Scheduled reports are done via a SQL Agent job.  I hope you find this report useful and that you can put it to use.

On the Seventh Day…

So far this series has been a bit of fun.  We have discussed a fair amount of useful things to date.  And here we are only on the seventh day.

Today we will be reverting our attention back to the msdb database.  This time not as a means of maintaining the under-carriage but rather to help us try and become better DBAs.

Sometimes to be a better DBA, we have to be a bit proactive and less reactive.

We’ll get to that shortly.  As we have done just about every day so far though, we need to recap real quick what we have to date in the 12 Days of pre-Christmas.


  1. Maintenance Plan Gravage – Day 6
  2. Table Compression – Day 5
  3. Exercise for msdb – Day 4
  4. Backup, Job and Mail History Cleanup – 3rd Day
  5. Service Broker Out of Control – 2nd Day
  6. Maint Plan Logs – 1st Day

On the Seventh Day of pre-Christmas…

My DBA gave to me an early nuclear fallout detection and warning system.  Ok, maybe not quite that extensive – but it could sure feel like it if something were to slip through the cracks and the business management started breathing down your neck.

Have you ever had a SQL Job run longer than it should have?  Has the job run long enough that it ran past the next scheduled start time?  These are both symptoms that you might want to monitor for and track in your environment.  If you are watching for it – you should be able to preemptively strike with warning flares sent out to the interested parties.  Then you would be more of the hero than the scapegoat.  And we all want to be on the winning side of that conversation without the confrontation from time to time ;).

Today, I am going to share a quick script that can be used to help monitor for jobs that have waltzed well beyond the next scheduled start time.  First though, there is a script available out there for the jobs that have run beyond normal run times.  You can check out this article by Thomas LaRock to get a good script to check and log long running jobs.

Though the criteria are similar – we do have two different needs to report on and monitor.  This is why we have this second script.  If you have a job that should run every 15 minutes but find that it has been running non-stop for 45 minutes, that is quite a problem.  And many times we should alert on something like that.

So here is the quick script.

[codesyntax lang=”tsql”]
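The script itself did not survive in this copy of the post, so here is a minimal sketch of the same idea built on msdb.dbo.sysjobactivity.  The column names are standard; treat the filter logic as an approximation of the original:

[codesyntax lang="tsql"]
SELECT j.name AS JobName,
       ja.start_execution_date,
       ja.next_scheduled_run_date
FROM msdb.dbo.sysjobactivity AS ja
JOIN msdb.dbo.sysjobs AS j
    ON ja.job_id = j.job_id
WHERE ja.session_id = (SELECT MAX(session_id)
                         FROM msdb.dbo.syssessions)  -- current Agent session
  AND ja.start_execution_date IS NOT NULL            -- the job has started
  AND ja.stop_execution_date IS NULL                 -- and is still running
  AND ja.next_scheduled_run_date < GETDATE();        -- past its next start time
[/codesyntax]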


From here, it is a matter of creating a SQL Agent job to run this script.  The schedule depends on job frequency and how closely you need to monitor.  It can very easily be set to run every five minutes.

Once the job is created then it is a matter of setting up a mechanism to alert.  There are any number of ways to do that as well.  I leave that to any who use the script to determine how best to implement for their own environment.


This script will catch jobs that are running and have exceeded the next scheduled start time.  This is accomplished through the filters in the where clause.  With a couple of date comparisons as well as filtering on job_status we can get the desired results.


On the Sixth Day…

What better way to kick off the sixth day of pre-Christmas than with six slices of foie gras?  No animals have been hurt in the making of this post!

This could be a pretty festive dish thanks in part to those geese.  I enjoy a little foie gras every now and again.

No, this was not just a clever ploy to bring fat into another post.  Rather, I feel that foie gras is more of a delicacy than eggs.

Today, we will be talking of a delicacy in the database world.  It is a feature that should help every DBA sleep better at night.  Sadly, this delicacy can be tainted by another feature that may just give you heartburn instead.

First though, let’s recap.

  1. Table Compression – Day 5
  2. Exercise for msdb – Day 4
  3. Backup, Job and Mail History Cleanup – 3rd Day
  4. Service Broker Out of Control – 2nd Day
  5. Maint Plan Logs – 1st Day

On the Sixth Day of pre-Christmas…

Picture Courtesy of AllCreatures.org

My DBA gave to me a way to ensure my databases are not corrupt.

Sadly, the method chosen is to force feed (gavage) the checkdb through an SSIS style maintenance plan.  Have you ever tried to run checkdb through a maintenance plan?  It doesn’t always work so well.

Many times, when running checkdb, you may run into this pretty little error message.

Msg 5808, Level 16, State 1, Line 2
Ad hoc update to system catalogs is not supported.

Well, that is just downright wrong.  If I am provided a tool to create maintenance plans, then the tool should not generate an error such as that when I want to run consistency checks against the database.  This is akin to force feeding us something that isn’t all that good for our health.  There was even a connect item filed for this behavior, here.

So, what is it that causes this behavior?

Well, there are a couple of things that contribute to this behavior, and it can be reproduced from T-SQL as well.  To find the cause, I turned to a more reliable tool.  To recreate the failure, I created a small test database and then created a maintenance plan to run consistency checks for that database.  The reliable tool I used is Profiler.

Next up is to run Profiler with a filter for the test database and then start the maintenance plan.  It shouldn’t take too long for the maintenance plan to complete.  When it completes, it is time to investigate the TSQL captured by Profiler, and it should become apparent pretty quickly which TSQL run during the maintenance plan causes checkdb to fail with the above-mentioned error.

Are you ready?  This code is pretty complex and perplexing.  You better sit down so you don’t fall from the surprise and shock.

[codesyntax lang=”tsql”]
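The captured snippet is missing from this copy of the post.  Based on what a trace of this behavior shows, the check integrity task emits something along these lines before running checkdb (treat the exact option name as an assumption):

[codesyntax lang="tsql"]
-- Emitted by the maintenance plan ahead of the integrity check.
-- A plain RECONFIGURE fails with Msg 5808 when 'allow updates' = 1.
EXEC sys.sp_configure N'user options', 0;
RECONFIGURE;
[/codesyntax]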


Why would a maintenance plan need to run that snippet every time that a checkdb is performed?

Now, there is another piece to this puzzle.  This error is thrown when another configuration setting is present.  If we change that setting, the error no longer happens.  Soooo, a quick fix would be to change that setting.  The setting in question is “Allow Updates”.  It has a value of 1 and must be changed to a value of 0.  Since SQL 2005, we want it to be a 0 anyway.

[codesyntax lang=”tsql”]
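A sketch of that quick fix:

[codesyntax lang="tsql"]
-- Since SQL 2005 'allow updates' has no effect anyway; set it back to 0
EXEC sys.sp_configure N'allow updates', 0;
RECONFIGURE WITH OVERRIDE;
[/codesyntax]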


Now, another option for this issue would be that the maintenance plan not run the sp_configure at all.  Or, if it is deemed necessary, that it be changed to the following.

[codesyntax lang=”tsql”]
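In sketch form, the change is simply to use the WITH OVERRIDE clause so that RECONFIGURE skips the validation check that trips over 'allow updates':

[codesyntax lang="tsql"]
EXEC sys.sp_configure N'user options', 0;
RECONFIGURE WITH OVERRIDE;  -- no Msg 5808, even with 'allow updates' = 1
[/codesyntax]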


The best option in my opinion is that a maintenance plan not be used for consistency checks.  I’d prefer to see a custom built routine, a routine using sp_msforeachdb, or the maintenance solution developed by Ola Hallengren that can be found here.  All of these methods require that a little more thought be put into the consistency checking of your database, but that means you will get better sleep through the night.

On the Fifth Day…

Today we will have a quick change of pace.  This will be less about maintenance and more about internals.

Today’s topic is one of particular interest to me as well.  The topic is a small section of a presentation I like to give at User Groups and at SQL Saturdays.

Before we take this diversion from the under-carriage to something more related to the engine, let’s have a brief recap of the previous four days of pre-Christmas.

So far the gifts of pre-Christmas have included the following articles.

  1. Exercise for msdb – Day 4
  2. Backup, Job and Mail History Cleanup – 3rd Day
  3. Service Broker Out of Control – 2nd Day
  4. Maint Plan Logs – 1st Day

On the Fifth Day of pre-Christmas…

My DBA gave to me a much more compact database.  Surprisingly, it can be as little as 20% of the original size.

I know, I know.  I’ve heard it before, but this is not the DoubleSpace and DriveSpace compression that we were given many years ago.  This really is much better.

And yes I have heard about the performance factor too.  That is a topic for another discussion.  As an aside, when it comes to performance, I always tell people that they must test for themselves because mileage will vary.

No, what I want to talk about is much different.  I want to talk about the CD array (at the page level) and the new data types specific to compression that you may encounter within the CD array.

CD Array Data Types

SQL Server introduces us to 13 data types that are used within the CD Array when Compression has been enabled.  Twelve of these data types can be seen when Row Compression has been enabled on an object.  The thirteenth data type is only applicable when page compression has been implemented.

There is no guarantee that any or all of these data types will be present on a page related to an object that has been compressed using either Row Compression or Page Compression.  If you want to find these data types on a compressed page, you may have to do a little hunting.

To demonstrate that these data types exist and that they can be found, I have a sample script.

[codesyntax lang=”tsql”]
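The sample script is missing from this copy, so here is a minimal sketch of the setup.  The database, table, and column names are illustrative (chosen to line up with the map that follows), and the inserted values are sized to land in the different short data types:

[codesyntax lang="tsql"]
CREATE TABLE dbo.CDTypes (
    SomeNull  INT    NULL,
    SomeBit   BIT,
    Some1Byte INT,     -- value small enough to fit 1 byte
    Some2Byte INT,     -- fits 2 bytes
    Some4Byte INT,     -- fits 4 bytes
    Some5Byte BIGINT,  -- fits 5 bytes
    Some6Byte BIGINT,  -- fits 6 bytes
    SomeLong  BIGINT,  -- too large to shorten
    SomeBit2  BIT
);

INSERT INTO dbo.CDTypes
VALUES (NULL, 0, 5, 300, 2000000000,
        5000000000, 1099511627776, 9223372036854775807, 1);

-- Row compress the table, then locate and inspect one of its pages.
ALTER TABLE dbo.CDTypes REBUILD WITH (DATA_COMPRESSION = ROW);

DBCC IND ('TestDB', 'dbo.CDTypes', 1);  -- lists the table's pages
DBCC TRACEON (3604);                    -- route DBCC PAGE output to the client
DBCC PAGE ('TestDB', 1, 20392, 3);      -- use a data page id from DBCC IND
[/codesyntax]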


Now let’s take a look at the different data types, starting with the 12 available with Row Compression.

Value  Description      Column
0x00   NULL             SomeNull
0x01   EMPTY            SomeBit
0x02   ONE_BYTE_SHORT   Some1Byte
0x03   TWO_BYTE_SHORT   Some2Byte
0x05   FOUR_BYTE_SHORT  Some4Byte
0x06   FIVE_BYTE_SHORT  Some5Byte
0x07   SIX_BYTE_SHORT   Some6Byte
0x0a   LONG             SomeLong
0x0b   BIT_COLUMN       SomeBit2

If we look at page 20392 (yours will likely be different), we will find all of these data types present.  We will also note that this page should show (COMPRESSED) PRIMARY_RECORD – which indicates that the page is Row Compressed.  When hunting for these data types, it is a good idea to make sure the page is compressed first.  In the supplied table, you can see what data type matches to which column in the table we created via the script.  The table also provides a short description of what that data type represents (as you would see in the CD Array).

If we now want to explore and find the 13th data type, we need to look at the second table we created in the attached script – CDTypes2.  Notice that this table has been page compressed.  I even did that twice.  I did this to make sure the data was page compressed (occasionally when testing I could not easily find a page compressed page unless I page compressed a second time).

Much the same as was done with Row Compression, we need to verify that a page is Page Compressed before searching up and down trying to find this 13th type.  To find a Page Compressed page, we need to look in the CompressionInfo section of the page for CI_HAS_DICTIONARY.  If this notation is present, the page is Page Compressed.

Here is a page snippet with the pertinent data type.

The CD array entry for Column 4 is what we are interested in here.  In this particular record, we see the thirteenth data type of 0x0c (ONE_BYTE_PAGE_SYMBOL).  This data type can appear for any of the columns in the CD array and can be different for each record on the page.  This is due to how Page Compression works (which is also a different discussion).

For this last example, I want to backtrack a bit and point out that I used page 24889 to query for the Symbol data type in the CD array.  This page was different every time I re-ran the test script.  One thing that was consistent with how I set up the exercise is that the page compressed page containing this data type was always the second page of type 1 (data page) in the results from DBCC IND.


I hope this helped demystify compression in SQL Server – a little.  This is only one piece of compression but an interesting one in my opinion.  Use the provided script and play with it a little bit to get some familiarity.

Stay tuned for the 6th day of pre-Christmas.

On the Fourth Day…

We have come to the fourth day in this mini-series.  So far I have laid down some under-carriage maintenance for one of our favorite databases.  That database being the msdb database.  The maintenance discussed so far has covered a few different tables and processes to try and keep things tidy (at a base level).

Today I want to discuss some under-carriage work for the database of id 4.  Yep, that’s right, we will talk some more about the msdb database today.  What we discuss today, though, may be applicable beyond just the msdb database.  First, let’s recap the previous three days.

  1. Backup, Job and Mail History Cleanup – 3rd Day
  2. Service Broker Out of Control – 2nd Day
  3. Maint Plan Logs – 1st Day

On the Fourth Day of pre-Christmas…

My DBA gave to me an exercise program to help me trim the fat!!

Somehow, that seems like it would be more appealing on bacon.  But the amount of fat shown here is just nasty.  And that is what I think of a table consuming multiple gigabytes in a database while holding 0 records.

Just deleting the records we have spoken of over the past three days is not the end of the road for maintenance of the msdb database.  Oftentimes the pages allocated to these tables don’t deallocate right away.  Sometimes, the pages don’t deallocate for quite some time.

When these pages are cleaned up depends in large part on the ghost cleanup process. Of course, the ghost cleanup is not going to do anything on the tables where records were removed unless it knows that there are some ghost records (records that have been deleted) in the first place.  This doesn’t happen until a scan operation occurs.  You can read a much more detailed explanation on the ghost cleanup process from Paul Randal, here.

Because of this, you can try running a query against the table to help move the process along.  But what if you can’t get a scan operation to occur against the table?

Could you try updating statistics?  Maybe try running updateusage?  How about forcing a rebuild of the Clustered Index?  Or even running Index Rebuilds with LOB_Compaction on?  Or even creating a Clustered Index where one doesn’t exist?  Maybe you could try DBCC CLEANTABLE and see if that works.  Or you could try running DBCC CheckDb on the msdb database and try to get all of the pages read.
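For reference, here is roughly what those attempts look like in T-SQL.  The table name is illustrative; substitute whichever bloated table you are fighting:

[codesyntax lang="tsql"]
USE msdb;
GO
-- Each of these may (or may not) produce the scan needed to flag
-- and clean the ghost records.
UPDATE STATISTICS dbo.sysmaintplan_logdetail WITH FULLSCAN;
DBCC UPDATEUSAGE (msdb, 'dbo.sysmaintplan_logdetail');
ALTER INDEX ALL ON dbo.sysmaintplan_logdetail REBUILD;
DBCC CLEANTABLE (msdb, 'dbo.sysmaintplan_logdetail');
GO
DBCC CHECKDB (msdb);
[/codesyntax]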

While working to reduce the table size for the tables in the following image, I tried several different things to try and force a scan of the pages so I could reduce the table sizes (0 records really should not consume several GB of space).


Even after letting things simmer for a few days to see if backup operations could help, I still see the following.

This is better than it was, but still seems a bit over the top.  Something that both of these tables have in common is that they have BLOB columns.  To find that for the sysxmitqueue table, you would need to connect to the server using a DAC connection.  If you were to use a DAC connection, you would find that the msgbody column is a varbinary(max).
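A sketch of that lookup over the DAC (connect with the admin: prefix or sqlcmd -A; sysxmitqueue is only visible there):

[codesyntax lang="tsql"]
-- Requires a Dedicated Administrator Connection
SELECT c.name,
       TYPE_NAME(c.user_type_id) AS data_type,
       c.max_length              -- -1 indicates (max)
FROM msdb.sys.columns AS c
WHERE c.object_id = OBJECT_ID('msdb.sys.sysxmitqueue');
[/codesyntax]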

What I found to work for this particular case was to use DBCC Shrinkfile.  As a one time (hopefully)  maintenance task, it should be ok.  I would never recommend this as a regular maintenance task.  The used space for these tables does not decrease until DBCC Shrinkfile reaches the DbccFilesCompact stage.  This is the second phase of ShrinkFile and may take some time to complete.  You can read more on Shrinkfile here.
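A sketch of that one-time task (the logical file name and target size are illustrative; check sys.database_files in msdb for your own file names):

[codesyntax lang="tsql"]
USE msdb;
GO
SELECT name, size FROM sys.database_files;  -- confirm the logical name first
DBCC SHRINKFILE (MSDBData, 512);            -- target size in MB
[/codesyntax]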


Is it necessary to go to these lengths to reclaim space in your msdb database from tables that are excessively large with very few records?  That is up to you and your environment.  In many cases, I would say it is a nice to have but not a necessity.

If you do go to these lengths to reduce space in msdb, then you need to understand that there are costs associated with it.  There will be additional IO as well as index fragmentation that you should be prepared to handle as a consequence of shrinking.

Stay tuned for the next installment.  We will be taking a look into some internals related to Compression.

On the Third Day…

Mais Oui!

But of course, we must have some French hens for today!  Today we have the third day of pre-Christmas.  And as was mentioned, we have more undercarriage work in store.

So far we have talked about the following:

  1. Maintenance Plan Logs on the First Day
  2. Service Broker lost Messages on the Second Day

On the Third Day of Pre-Christmas…

My DBA gave to me more old fat.  Like the past few days, we are looking at more tables that have too much fat.  The kind of fat that has been sitting around for too long and is far beyond stale.

Here, I am going to cover two more tables that can get large.  This is very much similar to how Nicholas Cain (twitter) has covered the topic, though I do it slightly differently.  You can read what he has done here. *

The biggest difference comes in how we determine large tables.  You can read my method for finding large tables here.

The key here is that there are tables in the msdb database that are in use for maintaining mail history and backup history.  These tables need to be given TLC.

To maintain these tables, I do much the same as Nic does.  To maintain these two tables, I create a SQL Job to run scripts.  The scripts are as follows.
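The two scripts did not survive in this copy of the post.  Minimal sketches using the documented msdb cleanup procedures would look like this (the retention window is illustrative):

[codesyntax lang="tsql"]
DECLARE @CutOff DATETIME = DATEADD(MONTH, -3, GETDATE());

-- Trim backup and restore history
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @CutOff;

-- Trim Database Mail history
EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @CutOff;
[/codesyntax]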


Now, this is the third day of pre-Christmas so I have one more routine that Nic didn’t discuss in that post.  This one is to clean out Agent Job history.  This is done because I have seen the sysjobhistory table get rather large like the other tables I have discussed so far in this series.  To run maintenance for this table, it is rather similar to the prior two scripts from today.

The stored procedure being used can take different parameters.  It allows you to purge either for a specific job or by date or everything if no parameter is supplied.  You can read more of that from the documentation here.

Another reason I like to do this is to make it easier to read the agent job history when I am troubleshooting.  When there is a lot of history, it can be rather slow and even possibly cause timeouts trying to get in to troubleshoot a failed job.

Running this procedure comes in very handy when somebody has forgotten to set the SQL Agent properties for job history retention.  Then again, maybe they didn’t forget.
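A minimal sketch of that blanket cleanup (the retention window is illustrative):

[codesyntax lang="tsql"]
DECLARE @CutOff DATETIME = DATEADD(MONTH, -1, GETDATE());

-- No @job_name or @job_id supplied: purges history for all jobs
-- older than the cutoff date.
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @CutOff;
[/codesyntax]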

This type of setting of course is a blanket example.  As I alluded to earlier, this procedure can take different parameters.  And if somebody has not set the history retention properties for SQL Agent, then we have some flexibility.

With this flexibility, you can set a custom purge cycle for each of several different jobs.  Maybe some jobs only need 2-3 days of retention but another needs to have 90 days.  This method allows you to do that.
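Sketched out, that per-job flexibility looks like this (job names and retention windows are hypothetical):

[codesyntax lang="tsql"]
DECLARE @ThreeDays  DATETIME = DATEADD(DAY, -3, GETDATE());
DECLARE @NinetyDays DATETIME = DATEADD(DAY, -90, GETDATE());

EXEC msdb.dbo.sp_purge_jobhistory
     @job_name    = N'Transaction Log Backups',  -- hypothetical job
     @oldest_date = @ThreeDays;

EXEC msdb.dbo.sp_purge_jobhistory
     @job_name    = N'Monthly Partition Maintenance',  -- hypothetical job
     @oldest_date = @NinetyDays;
[/codesyntax]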


So far we have worked to maintain several different large tables within the msdb database.  In the next article, we will discuss how to release the space these tables continue to hold after removing the large amounts of data that accumulated through a lack of maintenance and TLC.


* The page previously linked to the blog for Nicholas Cain appears to now be broken. Due to that break, here are the indexes that were referenced in that blog post.
