Activity Monitor and Profiler

Published on: July 28, 2011

Today I came across a link to a neat little script for SQL 2005 / 2008 to help derive the head blocker in a blocking chain.  That script can be found here.  I was looking at the script and thought it looked pretty cool and also wondered why it might look a little familiar.  Not too big of a concern since many admin type scripts can look familiar.

Then I noticed that the title contained the words Activity Monitor.  I wondered to myself, was this taken from Activity Monitor?  So I decided to play with Profiler for a bit to see if I could generate the same query.  The process really isn’t too difficult.  Here is what I did.

  1. Open Activity Monitor.  Simple enough to do.  Right click the instance (in SSMS) you wish to monitor, select Activity Monitor from the context menu.
  2. Open Profiler.  Again from within SSMS, click the tools menu and then select SQL Server Profiler.
  3. Start a new trace.  When starting the trace, ensure that you are connecting to the same instance on which you started Activity Monitor.  Select the TSQL-Replay template (other TSQL templates will probably work; this is the one that I used).  Go to the Events Selection tab and click the Column Filters button.  Select “LoginName” and then expand “Like” on the right-hand side.  In the new text box, type the login you are using to authenticate to the SQL instance in question (domain\user or SQL login).
  4. Click run and browse through the results for a bit.  It shouldn’t take too long for that query to pop up.
  5. Once you find the query, make sure you stop the trace.
What is the point of this?  Well, as explained in that other blog, sometimes it is just better to run a query than to run Activity Monitor.  Furthermore, you can use this method to trap many of the queries that Microsoft built into that tool.  You can use them for learning or for troubleshooting.  This is just one more tool to have in your arsenal in order to become better at your job.
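The query captured in the trace is not reproduced here, but the core idea behind it can be sketched.  The following is a minimal illustration of my own (not the Activity Monitor query itself): a head blocker is a session that is blocking others while not being blocked by anything.

```sql
-- Sketch: find head blockers via the DMVs (SQL 2005+).
-- A head blocker appears as somebody's blocking_session_id
-- but has no blocker of its own.
SELECT DISTINCT r.blocking_session_id AS HeadBlockerSessionId
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0                 -- this request is blocked
  AND r.blocking_session_id NOT IN               -- ...by a session that is
      (SELECT r2.session_id                      --    itself not blocked
       FROM sys.dm_exec_requests r2
       WHERE r2.blocking_session_id <> 0);
```

The script linked above (and the one Activity Monitor runs) does considerably more work than this, but the blocking chain logic reduces to the same test.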

S3OLV July 2011 Reminder

Categories: News, Professional, SSC
Published on: July 13, 2011

This is just a quick note to remind anybody who may be interested of the upcoming Las Vegas User Group meeting.

As I posted here, we will be learning OLAP for the OLTP pro.  This will be presented by Charley, the chapter president.

Also note that this meeting is available via LiveMeeting.  We hope to have a great many of you present for this meeting.  The meeting will start at 6:30 PM PDT on Thursday, July 14, 2011.

Foreign Key Hierarchy Update

Categories: News, Professional, Scripts, SSC
Published on: July 11, 2011

Today I would like to revisit a rather old post of mine.  More precisely, the script in that post needs revisiting.  This is one of my favorite scripts, and I still have more ideas to implement with it.  The post/script in question can be found here.

In revisiting this script, I simplified it a bit.  I also had to work on a problem that occurs in hierarchies with circular dependencies.  Quite frankly, that was a huge pain in the butt.  There are some tricks out there to try to help with it – but I was having a hard time getting any of them to work in this scenario.  I also updated the script to better handle self-referencing objects.

When you have circular references and are trying to recurse the tree via a CTE, an instant blocker comes into play: you can only reference the anchor of a recursive CTE once.  Fixing a circular reference would be many times easier if you could reference the anchor twice.

In the end, the biggest hint to getting this to work came from this post.  For it to work, I needed to find which combination of fields would work best.  I finally settled on using the Object_ID to help reduce my pain, and used the following in the anchor:

[codesyntax lang=”tsql”]


and the following in the recursive definition.

[codesyntax lang=”tsql”]


You can see that I am concatenating into a string for this column.  This seems to work well for the purpose of eliminating those circular references.
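The exact anchor and recursive member are shown in the snippets above, but the path-string technique itself can be illustrated in miniature.  The following is a hypothetical sketch of my own (the CTE shape and column aliases here are invented for illustration, not lifted from the script): the anchor seeds a delimited path with the starting object id, and the recursive member refuses to visit any table whose id already appears in the accumulated path.

```sql
-- Sketch of cycle avoidance in a recursive CTE over foreign keys.
;WITH Hierarchy AS (
    -- Anchor: every FK edge, with the path seeded from the child table's id
    SELECT fk.referenced_object_id AS ParentId,
           fk.parent_object_id     AS ChildId,
           CAST('|' + CAST(fk.parent_object_id AS VARCHAR(20)) + '|'
                AS VARCHAR(MAX))   AS ObjectPath
    FROM sys.foreign_keys fk
    UNION ALL
    -- Recursive member: follow FKs hanging off the current child, skipping
    -- any table already present in the path (this breaks circular chains)
    SELECT fk.referenced_object_id,
           fk.parent_object_id,
           h.ObjectPath + CAST(fk.parent_object_id AS VARCHAR(20)) + '|'
    FROM sys.foreign_keys fk
    INNER JOIN Hierarchy h
        ON fk.referenced_object_id = h.ChildId
    WHERE h.ObjectPath NOT LIKE
          '%|' + CAST(fk.parent_object_id AS VARCHAR(20)) + '|%'
)
SELECT ParentId, ChildId, ObjectPath
FROM Hierarchy;
```

Because the path check lives in the WHERE clause of the recursive member, the anchor is only referenced once – which is exactly the constraint discussed above.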

Other adjustments to the script are not quite as significant but there is a small performance gain to be seen by these subtle changes.  The most notable is the change to remove the two joins out to sys.columns in order to get the column names of the Parent and Child objects.  In lieu of these joins, I am using the COL_NAME() function.  This little change came to me thanks to a little work done last week on my statistics script that you can read here.
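As a quick illustration of that change, the two patterns below return the same column names; the second simply avoids the extra joins.  (The FK column view shown here is what the joins would have targeted; the alias names are mine.)

```sql
-- Before: two joins to sys.columns to resolve names (shape only)
-- After:  COL_NAME() resolves each name from object_id + column_id
SELECT fkc.constraint_object_id,
       COL_NAME(fkc.parent_object_id,     fkc.parent_column_id)     AS ChildColumn,
       COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS ParentColumn
FROM sys.foreign_key_columns fkc;
```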

The final notable change comes in the naming of the CTEs in this script.  I decided to rename the CTEs to something a bit more meaningful.  In this case, Hierarchy and Ancestry are much more appropriate.

Without further ado, here is the next major revision of that script.

[codesyntax lang=”tsql”]


I hope you will play with this script, test it out, and make recommendations – or even find bugs and let me know.

Meme Monday – Horoscope

Categories: Meme Monday, News, Professional, SSC
Published on: July 11, 2011

Did you know that your sign may have changed?  I didn’t – at least not until reading the Meme Monday announcement.  Most people probably will see no difference in their astrological sign.  However, if you believe that there is a new all-powerful 13th sign and your sign is Scorpio – your sign probably changed.  In order to squeeze in that baker’s-dozenth sign, Scorpio got cut down to a whopping six days long.  In tribute to Scorpio, my Meme Monday horoscope will be for that astrological sign.

You are a born performer, with an incredible transaction list to commit today.  A hard-working instance, you tend to do your work with barely a spike in resources.  This is a great time to be clustered with others both local and geographically dispersed.  This ability to run batches, run TSQL, and always be available is the key to pleasing your DBA and avoiding the scrap heap.  There are DB systems (RDBMS and NoSQL alike) springing up all around you which always seem to challenge your status quo – be prepared.  You must decide whether the risk is worth the change it will bring about in your production environment.  New ways to alert or an easy interface will make monitoring and troubleshooting go well.  You may find yourself more resource taxed and burdened than usual.  You are appreciated for your ability to multi-task.

July 2011 S3OLV Meeting

Published on: July 6, 2011

We have had a bit of a lag between meetings for the User Group of Las Vegas.  In June, a meeting was scheduled but did not happen: we were locked out once again by our meeting place.  That should not happen this month – nor anytime in the near future.  The venue has changed for our little group this month.

Starting with our July meeting, we will be holding our gatherings at M Staff Solutions & Training.  They have been gracious enough to bring us on, and they have provided us with a key.  Woot!!

Our topic for July will be the same as it was supposed to be for June.

For any who are interested, The S3OLV meeting will be held Thursday July 14 at 18h30 PDT.

This month Charley Jones will be test driving a new presentation.  He has been working feverishly to learn about cubes and how to create them.  He will be teaching us this month how to do this from the mind-set of an OLTP DBA in his presentation titled “OLTP Moving to OLAP.”  I think this will be a great opportunity for those of us who are weak or strong in OLAP to learn a bit and see how this transition may make sense from an OLTP perspective.

This month we will be conducting this meeting in person and also via LiveMeeting.  Here are the LiveMeeting details.

Attendee URL:
Meeting ID: Z8DZWK

In other news, we have a newsletter that we are publishing thanks to the service.  You can find that newsletter here.  Beyond the distribution via twitter, we are looking to distribute weekly newsletters to the group.

Lessons from Out of the Blue

Categories: News, Professional, SSC
Tags: ,
Published on: July 6, 2011

This past Fourth of July weekend I had the opportunity to do a few different things and thought I would share some experiences.  None of these experiences were job related but the lessons from these experiences could be applied to SQL and databases.  I will explain that a little bit later.

Hanging Drywall

Putting up drywall is not too terribly difficult.  It is time consuming and can be a little demanding physically.  When working in a team (for the non-professional) this is a relatively easy task.  That said, the process can be easier and the team requirement can be reduced.  We have found a few tricks over the years to making this process easier.  These tricks involve the following items:  chalk-line, roto-zip, joint compound, and a dry-wall jack.

Each of these tools has its purpose and can make the job a lot easier and faster.  We use the chalk-line when we need a straight line marked on a piece of sheet-rock longer than four feet.  Just measure each end to the desired length, then run the chalk-line and snap the string.  The roto-zip and joint compound actually work well together.  Most uses of joint compound come after hanging the drywall, but we also use it to mark pieces that need to be cut out for junction boxes and outlets.  Just put some joint compound on the box edges, press the drywall up against it (in the desired position), pull the board away, and you can easily cut out that section using a roto-zip.  The last tool, I think, is pretty straightforward.  Save your back and arms – leave the heavy lifting to a machine.

These simple tips can reduce the fatigue and increase productivity.  It’s like using the right tool for the job.  More on that in a bit.

Inflatable Fun

As a part of our festivities this weekend, we rented an inflatable water slide for the children.  By children, I mean anybody with two legs and the desire to get wet.

Pictured is a two-lane inflatable slip-n-slide.  Huge bonus that it was inflatable.  It’s like a slip-n-slide on a pillow.  Big children can hit the slide at full speed and not worry about hitting the hard ground and getting bruised.  We played on this thing for a good hard six hours.

That was a lot of fun.

Apply to SQL

In the first experience, I shared the use of tools to make the job easier.  This correlates very easily to SQL.  Use the right tool for the job.  With experience, you begin to learn better tools for the job and how to become more efficient.  This comes through practice and effort.  Get to know the tools available such as DMVs and the DBCC commands.  Find better tools that have been made available through the community and don’t be afraid to ask around and try a few new things.  It could be a huge time saver!

In the second experience, we had a blast.  We were diving head first into this big cushy pillow of air.  What we did not expect was to find bruises, scrapes, and general soreness the next day.  Even if things look nice and cushy with your databases – are they?  Are you prepared for the bumps and bruises?  This is another case where experience lends a hand and helps us be better prepared for seemingly “easy” days on the job.

SQL Statistics – another Spin

Published on: July 1, 2011

I was reading a blog by Pinal Dave that was about using catalog views to obtain information about stats for the entire database.  While reading the blog, I was reminded about an interesting tidbit of information I had seen once upon a time concerning statistics naming in SQL Server.

This got me looking for that information and burrowing down the rabbit hole.  I found the reference for the naming convention concerning auto generated statistics.  That reference is from Paul Randal and can be found here.  In that article, Paul outlines the five parts of the name of an auto-created statistic – with each part being separated by an underscore.  This got the wheels churning a bit more.

I had to go do a bit more research in order to put all the pieces together.  Based on the naming convention, I knew I needed to convert hex to integer so I could see the object id (column or table).  You might think this would be straightforward, but the value in the name of the statistic is not a true hex value.  It is a hex string, and it needs a bit of manipulation in order to be converted from string to integer.

After some searching, I found the solution.  Peter Debetta created this handy little function for this very purpose.  I found that script here.  After some testing, and comparing results I was pleased with how well it worked.  Now that I had the pieces necessary, I could venture further down that little rabbit hole.
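Peter Debetta’s function is linked above rather than reproduced here, but the shape of the problem can be sketched.  In a name like `_WA_Sys_00000004_0728DC35`, the last part is the table’s object_id as a hex string.  On SQL Server 2008 and later, one way (my sketch, not Peter’s function) is to prepend `0x` and lean on CONVERT style 1, which interprets a string of hex digits as binary:

```sql
-- Turn the hex *string* from a stats name into an integer object id.
-- '0728DC35' is a made-up example value for illustration.
DECLARE @HexString VARCHAR(10) = '0728DC35';

SELECT CAST(CONVERT(VARBINARY(4), '0x' + @HexString, 1) AS INT) AS ObjectID;
```

A plain `CAST(@HexString AS INT)` fails precisely because the value is a character string, not binary – hence the manipulation mentioned above.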

First though, there is one more piece that I want to discuss.  Remember the naming scheme for auto-created statistics?  Each element of the name is separated by an underscore.  I decided to use that underscore as my string delimiter and employ a string-split function to break out the name parts.  Any number of splitter functions would work; I chose to use the function written about by Jeff Moden here.

Now all of the pieces have been referenced and are in play.  Let’s take a look at the query and some of those catalog views.

[codesyntax lang=”tsql”]


I started this little script out with three quick CTEs.  The first is simply to gather the auto created stats.  It is in this first CTE that the string splitter is utilized.  The next two help me to separate out the data so I can work with it.  The second CTE is to manage the data related to the column part of the statistics name.  In this CTE, you will note that I employ the use of that Hex function.  The third CTE is just like the second, but it treats the data related to the table part of the statistics name.

When splitting the data in that first CTE, due to the naming format, the column is always the fourth part of the name and the table is the fifth part.  The first part is an empty string, because the name leads off with an underscore.

Once the data is grouped into those CTEs, I can then return it in a friendlier result set.  Here I use more functions that are built into SQL Server.  This reduces the number of catalog views to which I need to JOIN.  And since the CTEs are only working with statistics that are auto generated, I need to UNION in another query to pull in those statistics that are not auto created.  Note that this query only looks at two catalog views.
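For a feel of the simpler half of the approach, here is a minimal sketch of my own (aliases and column selection are illustrative) that touches only the two catalog views and lets the built-in functions supply the names:

```sql
-- Statistics per table/column using only sys.stats and sys.stats_columns;
-- OBJECT_NAME() and COL_NAME() replace joins to other catalog views.
SELECT OBJECT_NAME(s.object_id)              AS TableName,
       s.name                                AS StatName,
       COL_NAME(sc.object_id, sc.column_id)  AS ColumnName,
       s.auto_created
FROM sys.stats s
INNER JOIN sys.stats_columns sc
    ON  s.object_id = sc.object_id
    AND s.stats_id  = sc.stats_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1;
```

The elaborate CTE version arrives at the same rows for the auto-created statistics; the consistency between the two is the point being proven.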

What is the point of such an elaborate query in the first part to get this information when I can simply use the second part to do all of it?  Not a whole lot of point other than to prove a point.  There is consistency between the queries.  There is consistency between the naming of those statistics and the object data that can be found in those catalog views.

Any of these methods will return the desired results concerning statistics in the database.  You have essentially two methods displayed by me, and then the method employed by Pinal.  Have fun with it!!


