System Base Tables

Comments: No Comments
Published on: January 30, 2012

On January 19th, I published a post about the Dedicated Administrator Connection.  I spoke very briefly of the system base tables in that article.  Today, I want to dive into these tables a little bit more.

First, let’s get the Microsoft definition for these tables.  “System base tables are the underlying tables that actually store the metadata for a specific database.”

Have you ever queried sys.objects from the master database and wondered about some of the results?  You can see all of the System base tables when querying the sys.objects view.  These tables are denoted in sys.objects by type of ‘S’ and a type_desc of ‘SYSTEM_TABLE’.

Here is a simple query to take a quick peek at these tables.

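A minimal sketch of such a query, run from the master database, looks like this (the filter values come straight from the sys.objects columns described above):

```sql
-- List the System Base Tables visible in the master database
SELECT name, object_id, type, type_desc
FROM master.sys.objects
WHERE type = 'S'            -- type 'S' = SYSTEM_TABLE
ORDER BY name;
```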

There is no need for a Dedicated Administrator Connection in order for this query to work.  You can view these results over a non-DAC connection to the instance, so long as you have adequate permissions to query sys.objects.  That said, not all objects returned by that query are System Base Tables.  Furthermore, it appears that the list from MSDN is not comprehensive.  One example is sys.sysserrefs, which does not appear to exist in SQL 2008 R2; another is the System Base Table sys.sysbrickfiles, which is missing from the list even though sysaltfiles uses it (as shown in this execution plan).

If I try to query the sysbrickfiles table (as an example) without connecting via DAC, I will get an error message like this:

This is normal behavior.  You cannot query the system base tables without first connecting via DAC.  Having said that, the obligatory warning is required.  As explained on MSDN, these tables are intended for use by Microsoft.  Proceed at your own risk and please make sure you have backups.

In addition to these System Base tables, you will find tables that are neither mentioned in the article nor present in the master database.  These System Base tables are found within the Resource database.  The Resource database does contain most of the tables mentioned in that article, but there are some differences.  I will leave that discovery exercise to the reader.

There is plenty about SQL Server that many of us take for granted.  Under the hood, there is much more to learn.  Taking a peek at the System Base tables is one of those areas that will help you to learn more about SQL Server.  My question is this: How far are you willing to explore to learn more about SQL Server?

Slammer, Alive…Barely

Categories: News, Professional, SSC, SSSOLV
Comments: 1 Comment
Published on: January 24, 2012

Slammer

By now you must have heard of the SQL Slammer worm.  It was quite an infectious little nuisance.  The harm it caused came largely due to unpatched, unprotected SQL Servers.

We are now 9 years out from the initial discovery of this worm.  The worm has made its way onto the endangered species list – but it is not yet extinct.  I don’t know if I should be surprised by that.

My initial reaction is, “No way that worm is still causing problems.  Everybody knows about it.”  And yet, I just caught several infection attempts from remote hosts that were infected by Slammer.  When I take a step back, I recall that many people out there are still running unpatched servers.  I know of many places that are still running SQL 2000.  I know of a large pool of servers, across different versions and editions, that are not patched.  I even know of a few places that are still running SQL 6.5.

When I take all of that into account, finding that Slammer is still active does not surprise me – but it should.

So for fun, here is what I was able to trap from the recent attempts at my machine with SQL Slammer.

When I trace that IP back to its source, I get the host name of the machine.  If I search on the host name of the IP address, I find this page.  If I were a hacker, I would now have a lot of valuable information.  I can also assume that this particular host carries many viruses.

This entire little foray has made me wonder how many people out there are concerned about security.  Do you know what the patch level is of your server?  Is your AV software up to date?  Are you running any form of HIPS?  If you are in IT and your focus is Data, you may want to check those things.  After all, our focus is to protect the data.
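If you are unsure of your own patch level, a quick check via SERVERPROPERTY is a good start.  This is a generic sketch, not tied to any particular environment:

```sql
-- Version, service pack level, and edition of the instance
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')  AS ProductLevel,   -- e.g. RTM, SP2
       SERVERPROPERTY('Edition')       AS Edition;
```

Compare the result against the latest service pack or cumulative update available for your version.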

S3OLV February 2012

Categories: News, Professional, SSC, SSSOLV
Comments: 1 Comment
Published on: January 23, 2012

Do you recognize this person?

 

If you are from the Colorado Springs area, you probably do.  This is:

Troy Ketsdever (twitter)

Troy will be presenting to the Las Vegas SQL User Group on February 9, 2012 @ 6:30 Pacific.  Here is his bio:

Troy Ketsdever is a data architect with over 15 years of commercial software development experience, and has maintained a love/hate relationship with SQL Server since version 4.2. In addition to his “day job”, Troy enjoys writing articles and presenting at user groups on a variety of database design and implementation topics.

His main objective and vision is “making the right information available to the right people at the right time”.

The topic that Troy has chosen for this meeting is titled: Zero to “MERGE” in 60 minutes.  And here is the abstract for that presentation.

Description: SQL Server 2008 saw the introduction of the new MERGE DML statement. In this session, we’ll take a look at the basic syntax and capabilities of the command. Once we have reviewed some simple examples, we’ll dive into some of the more advanced uses (abuses?) of the command, reinforcing our understanding by looking at more complex examples.
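To give a flavor of what the session will cover, here is a minimal MERGE sketch.  The table and column names are made up purely for illustration:

```sql
-- Synchronize a target table with a source table in one statement
MERGE dbo.TargetTbl AS t
USING dbo.SourceTbl AS s
    ON t.ID = s.ID
WHEN MATCHED THEN
    UPDATE SET t.Amount = s.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Amount) VALUES (s.ID, s.Amount)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```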

Bring your questions.  Bring your ugly code.  If you are remote, bring your own PIZZA.  Yes, this meeting will be both virtual and in person.

Virtual Meeting Info

Attendee URL: https://www.livemeeting.com/cc/UserGroups/join?id=H3ZGRQ&role=attend

Meeting ID:  H3ZGRQ

Physical Meeting Info

M Staff Solutions & Training / 2620 Regatta Drive Suite 102 Las Vegas, NV 89128

S3OLV – Jan 2012 Meeting Recap

Categories: News, Professional, SSC, SSSOLV
Comments: 1 Comment
Published on: January 20, 2012

Last week (Jan 12, 2012), we held the user group meeting for the SQL Server Society of Las Vegas (a.k.a S3OLV or SSSOLV).

Presenting at that meeting was Josh Lewis (Twitter).  Josh presented on a pretty tough topic in my opinion.  He chose to present to us the topic of XML for the DBA.  You can read his abstract here.

We got the meeting rolling a little bit late.  Traffic must have been a bear down in LV.  Nonetheless, it got rolling and was a good meeting.

During this meeting we had our second installment of “Crap Code.”  Unintentionally, the crap code was a perfect segue into the presentation.  The crap code demonstrated extracting elements from XML related to the blocked process report.
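To give an idea of what that crap code was doing, here is a small sketch of shredding blocked process report XML with the nodes() and value() methods.  The sample XML is trimmed down to two attributes for brevity:

```sql
DECLARE @bpr XML = N'<blocked-process-report>
  <blocked-process>
    <process spid="52" waittime="12345" />
  </blocked-process>
</blocked-process-report>';

-- Shred the XML attributes into relational columns
SELECT bp.pr.value('@spid', 'INT')     AS BlockedSpid,
       bp.pr.value('@waittime', 'INT') AS WaitTimeMs
FROM @bpr.nodes('/blocked-process-report/blocked-process/process') AS bp(pr);
```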

Josh did a great job on the topic.  We recorded the meeting and you can view that here.  Check it out.  Get a little free learning on a difficult topic (for some of us, like myself).

Stay tuned, the February announcement is coming soon.

 

Dedicated Administrator Connection

Categories: News, Professional, SSC
Comments: 3 Comments
Published on: January 19, 2012

Recently you may have read my article about some hidden functions in SQL Server.  In that article you learned that those functions were in some DMOs and that you could get at them through the resource database.

Today I found myself learning more about the resource database.  Due to what I had learned in my prior foray into the resource database, I was curious if certain other functions might call some hidden functions in that database.

Sadly – they did not.  But in my travels I did happen across something else in that database.  Those items are called system base tables.  Unlike the trio of functions from the last article – you can get to these, but it is STRONGLY advised not to.

Naturally, I want to check these tables out – especially since the MSDN article does say how to get to them.  I will write about some adventures into looking at these tables in the future.  I already found one interesting thing that seemed odd – but first I will need to login using the DAC and start testing to confirm a hypothesis.

For now, I want to cover how to create a Dedicated Administrator Connection.  This should be something that DBAs know how to do.  It isn’t difficult, and I will only cover one method and leave the other method to the Microsoft documentation.

You can create a DAC through either SSMS or through SQLCMD.  You can create one remotely, but you will need to enable that option since it is disabled by default.  You can find the method for creating this connection via SQLCMD here.
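For reference, enabling the remote DAC option is a quick sp_configure change (it requires ALTER SETTINGS permission on the instance):

```sql
-- Allow DAC connections from remote machines (off by default)
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;
```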

Creating a connection through SSMS is rather easy as well.  You simply add the (case insensitive) prefix “admin:” to the beginning of your server name, as shown in this image.

In order for this to work, you will need to have the browser service running.  If it is not running, you will get an error message.  This error message is informative if you read it.  It will provide a clue to look at the browser service.

Once you have successfully created this connection, you can now use it when necessary to perform administrative tasks or for some learning opportunities.  If you open a query using this connection you will see something like this next image in your query tab.

You can see in the query tab that there is the label “ADMIN:”.  This is your DAC connection.  You are limited to one of these at a time – period.

If you try to create a second connection, you will get a nasty message.  The message is not entirely informative – just understand that you are getting it because you already have a DAC open.

It is a good idea to become familiar with how to connect via the DAC.  I have a connection saved for quick access.  Luckily, I have a development server on which I can test and pursue learning opportunities.  As the warning on MSDN states: “Access to system base tables by using DAC is designed only for Microsoft personnel, and it is not a supported customer scenario.”  If you venture into the system base tables via the DAC – Microsoft will not support it if you break it.

Blackout

Categories: News, Professional, SSC, SSSOLV
Comments: No Comments
Published on: January 18, 2012

What more can I say?  I disagree with the kind of legislation that is being presented via SOPA and PIPA.

In support of the community, my site will be dark 18 Jan 2012 between 10:30 and 21:30 GMT-8.

Normal services will return after that.

 

You can see support of this from some more reputable sites as well.

Wikipedia

 

Steve Jones @ SqlServerCentral

Grant Fritchey

Gail Shaw

 

A Trio of Functions

Categories: News, Professional, Scripts, SSC
Comments: 2 Comments
Published on: January 17, 2012

I found myself perusing an execution plan the other day.  I know, big surprise there.  This execution plan showed me some interesting things I had never really paid much attention to in the past.  When I started paying attention to these things, I found myself jumping down a rabbit hole.

It all started with a bit of curiosity to see if I could make an “admin” script perform a bit better.  The execution plans started showing some table valued functions that I knew I hadn’t included in the query.  Subsequently, I found myself wondering – what is that?

The items that made me curious were all table valued functions.  There were three of them (all different) in this particular plan.  I started looking hither and thither to find these functions.  It didn’t take long to figure out that I could find them in the mssqlsystemresource database.  So I proceeded to make a copy of the database and attach that copy for further learning opportunities.

The three functions are:

SYSSESSIONS

FNGETSQL

SYSCONNECTIONS

Knowing the query, and based on these names, I began looking in the appropriate DMOs to see what I could find.  Here are the scripts for each of those DMOs.

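The scripts themselves are too long to reproduce in full, but you can pull the definitions yourself.  Assuming the three functions back the sessions and connections DMOs (my inference from the names, not a documented mapping), something along these lines will show them:

```sql
-- Either approach returns the view definitions
EXEC sp_helptext 'sys.dm_exec_sessions';
EXEC sp_helptext 'sys.dm_exec_connections';

SELECT OBJECT_DEFINITION(OBJECT_ID('sys.dm_exec_sessions')) AS SessionsDefinition;
```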

Cool.  I can now see the internals of each of the DMOs – sort of.  You see, there is an OPENROWSET call in each of these objects.  Each call uses an undocumented feature called TABLE.  This is an internal command used by the engine and you won’t find much on it (mostly people asking what it is and Microsoft saying they won’t tell).

Here is the fun part.  If you try to run that code outside of querying the DMO, you will receive error messages.  If you try to create a new view utilizing the OPENROWSET, it will fail.  It is reserved for internal usage.  With that said, just continue to use the DMO and you will be fine.  Personally, I was curious to find out how it worked, so I poked at it for a while.

So there you have it.  If you are curious about the internal makings of these DMOs, you can script them from the resource database.  Alternatively, you could run sp_helptext.  I like to check these things from the resource database.  It feels more like an adventure.  Have fun with it and see what you learn.

Meme15 Twitter

Categories: Meme15, News, Professional, SSC, SSSOLV
Comments: No Comments
Published on: January 16, 2012

Twitter and Your Career

With the new blog party on the block, we have Jason Strate (blog | twitter) asking us these two questions this month:

  1. Why should average Jane or Joe professional consider using twitter?
  2. What benefit have you seen in your career because of twitter?

But first, a little background.  This blog party is an experiment in exploring the use of social networking and other media (such as blogs) to enhance your career and professional development.  You can read about that in the link from this month’s invitation.

For myself, it will be very useful to participate as I explore and answer these kinds of questions.  So, let’s get to the two questions at hand for this month’s topic.

Why should average Jane or Joe professional consider using twitter?

This question is one I had to explore back when I first started using twitter.  I even wrote a blog or two about it.

At first, I had a difficult time justifying it for myself.  I wrote about that here.  But a little while later, I started to see that there was some worth to it and decided to take the plunge.

There are many benefits to twitter.  One benefit is that twitter is one method to announce information relevant to the local User Group.  Another top notch reason is that there are many very intelligent people watching twitter to help answer questions.  There are hashtags for SSRS, SSIS, SQLHelp and other topics.

If you are in a crunch and having a problem – twitter is often a very fast way to get a solid answer.  Call it an online helpdesk with quality and personality rating very high on the list.

What benefit have you seen in your career because of twitter?

Personally, I enjoy the benefit of the new friends and SQLFamily that twitter has brought.  I sometimes find the time to lurk on twitter and enjoy the conversation that is taking place.

Through the conversations that are occurring on twitter I find that it gives me a nice break from the work of the day.  It also gives me access to find interesting topics and articles that others have read or written.  But the best part boils back down to SQLFamily.

Check it out sometime – I think you will find that it is worthwhile.

Missing Indexes

Comments: 11 Comments
Published on: January 12, 2012

SQL Server has means built into it to track possible missing indexes.  This used to be found through the use of the Index Tuning Wizard.  The process has improved over time (you can sort of see that from my April Fools post).

As luck would have it, I was recently asked to help fix a query that was somewhat related to the whole process.  You see, since SQL Server 2005, there are DMOs that help track metadata related to column and index usage.  And if a query without a good matching index is repeated often enough, the engine may decide that a new index is needed.  This potential index information is recorded and becomes visible via the DMOs.

The query that I was asked to help fix was a dynamic query within a cursor that read information from the DMOs in order to generate some missing index information.  That particular query was failing for a couple of reasons, but by the same token it gave me an idea to modify and adapt the query into something more in line with what I might use.  After all, the queries that I used were in need of updating, and this gets me started in that direction.

First, a little on why the query was failing.  A common problem with dynamic queries is the placement of quotes and having enough quotes in all required locations.  When you start nesting more levels into a dynamic query, the more confusing the quotes can get.  When running into something like this, I like to print the statement that I am trying to build dynamically.  If it doesn’t look right, then adjust the quotes until it looks right.

The second reason it was failing was a simple oversight.  Whether building dynamic queries or just using variables in your code, make sure you use properly sized variables.  In this case, the dynamic query variable was substantially inadequate.  The use of a print statement also helps to catch these types of errors rather quickly.
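Putting those two lessons together, the debugging pattern I rely on looks something like this sketch (the database name is just a placeholder):

```sql
DECLARE @DBName SYSNAME = N'AdventureWorks';   -- placeholder database name
DECLARE @sql    NVARCHAR(MAX);                 -- adequately sized variable

-- Note the doubled quotes needed inside the dynamic string
SET @sql = N'SELECT name, object_id
FROM ' + QUOTENAME(@DBName) + N'.sys.objects
WHERE type_desc = N''SYSTEM_TABLE'';';

PRINT @sql;                  -- inspect the statement before executing it
-- EXEC sys.sp_executesql @sql;
```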

There were also a few things that would cause me to not use the original query in any environment.  The first problem is that the script contains a column which is the create statement for each proposed index.  In this create statement, all indexes were given the same name.  That would be a bit of a problem.

The next issue is my concern with the creation of indexes without ensuring that each index will provide greater benefit than cost.  Better stated: creating these indexes just because the script spewed them out is no better than creating all of the indexes proposed by the Database Engine Tuning Advisor.  For this, I added a cautionary statement next to every index create statement.

So with these tweaks, as well as other less significant tweaks, here is the query.

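The full script is a fair bit longer, but its core is the standard join across the missing index DMOs.  A trimmed-down sketch of that core follows; the scoring formula shown here is one common convention, not the only option:

```sql
-- Potential missing indexes, highest estimated impact first
SELECT DB_NAME(mid.database_id)                                   AS DatabaseName,
       mid.statement                                              AS TableName,
       migs.avg_user_impact * (migs.user_seeks + migs.user_scans) AS Score,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON mig.index_group_handle = migs.group_handle
ORDER BY Score DESC;   -- begin analysis with the highest scores
```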

As I post this message, I am (as I tend to do) looking for ways to improve upon the query and make it better.  This script should only be used with caution.  It provides insight into potential missing indexes in each database.  A score is assigned to each potential index, and it is with the highest-scoring indexes that I typically begin analysis to improve performance.  I typically start performance tuning from a query and its execution plan, though there are times when an alternative starting point is necessary.  This script is a tool for those times.  Please heed the warning that these indexes should be created with extreme caution.

TSQL Tuesday #26 or #23 – Identity Crisis

Comments: 2 Comments
Published on: January 10, 2012

The first opportunity of this New Year to participate in TSQLTuesday, we have been invited by David Howard (blog) to take a second shot at a previous TSQLTuesday.

This second shot is giving me fits.  I have no clue if it is TSQLTuesday 26 or if it is TSQLTuesday 23.  Is it some sort of amoeba of both TSQLTuesdays combined?

While I try to figure that out, how about we look at some pictures of what the New Year might have looked like as it was rung in.  Let’s begin with Paris.

Oooh…Aaaaahhh…Those are quite impressive.

Next up on our tour is where a twin resides for a famous lady.  How did they ring in the New Year in New York City?

 

 

 

And now, let’s DBCC Timewarp to the other side of the world.  Here is what you might have seen if you were in Sydney Australia.

Ahhh.  Yes, that did the trick.  This little diversion sure gave me enough time to think about which TSQLTuesday this is.  And thanks to Stuart Ainsworth (Blog | Twitter).  The subject he proposed is the one I will talk about this month.  In the TSQLTuesday he hosted, he invited us to talk about JOINs.

I completely missed that opportunity to talk about joins by participating in TSQLTuesday.  You see, that time TSQLTuesday 23 was held during the first week of the month instead of the second.  And now, I will finish this article in English.  I was going to write the whole thing in French, but will save that for another time.  I should have a second chance to do that someday.

This second chance, gives me the opportunity to finally talk about a topic that has been on my to-blog list for quite some time.  I hope this post will show some different ways of joining in TSQL.  They are certainly methods I had never considered – until it was required.

Business Requirement

I have some tables that I need to query.  One of the tables holds lookup information with a bitmask applied to the id field.  Another table references this table, but the ids can be either a direct match or an indirect match to the id of the lookup table.  For this case, when the two values do not directly match, they will only ever differ by a value of 1.  No other match possibility is considered for this example.

Based on this need, the solution dictates some sort of bitwise math.  I have several examples of how a join could be written to accomplish this primary objective.  I am only going to show the possible ways of performing this.  In my environment these all yield the same results and the data is unique and large enough (4.2 million records).  I will compare performance of these different queries in a later post as I demonstrate a query tuning method to drive the query down from nine seconds to 100ms or less.  For the record, I would choose any of queries 5, 6, or 7 for this particular need based on plan cost and performance.

The JOINS

First up is the ANSI style INNER Join using addition in one of the conditions as well as an OR to fulfill the second portion of the business requirement.

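A sketch of that query follows.  Only RumorView comes from the actual environment; the lookup table and column names are stand-ins, since the shape of the JOIN is the point:

```sql
SELECT r.SourceID, l.SomeValue
FROM dbo.RumorView AS r
INNER JOIN dbo.LookupTbl AS l
    ON l.SourceID = r.SourceID        -- direct match
    OR l.SourceID = r.SourceID + 1;   -- indirect match, off by one
```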

This is probably the easiest to understand and it performs well enough.  Until running into this business requirement, I hadn’t considered putting an OR in the JOIN conditions.  But it makes sense considering that an AND can be used there.

Next is a NON-ANSI style of JOIN.

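The same logic in the old comma-separated FROM clause style, again with stand-in table names:

```sql
SELECT r.SourceID, l.SomeValue
FROM dbo.RumorView AS r, dbo.LookupTbl AS l
WHERE l.SourceID = r.SourceID
   OR l.SourceID = r.SourceID + 1;
```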

Through 2008 R2, this works just as well as the ANSI JOIN already shown.  I haven’t tested it in SQL 2012, but I do know that the NON-ANSI syntax of *= (for example) no longer works there.  I am not a big fan of this style of JOIN because it is far too easy to end up with a Cartesian product.

Another type of JOIN that I like is the use of APPLY.

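A sketch of the APPLY form, with the same stand-in names:

```sql
SELECT r.SourceID, l.SomeValue
FROM dbo.RumorView AS r
CROSS APPLY (
    SELECT lk.SomeValue
    FROM dbo.LookupTbl AS lk
    WHERE lk.SourceID = r.SourceID
       OR lk.SourceID = r.SourceID + 1
) AS l;
```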

This particular code segment is the equivalent of the first query shown.  This is the last in the set of using basic math and an OR in the JOIN conditions.  The remaining queries all rely on bitwise operations to perform the JOIN.  Again, until this particular need, I had never even considered using a bitwise operation in a JOIN.  First in this series is the NON-ANSI style JOIN.

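A sketch of that query, showing the COALESCE and the bitwise OR on both sides of the equality (table names other than RumorView are stand-ins):

```sql
SELECT r.SourceID, l.SomeValue
FROM dbo.RumorView AS r, dbo.LookupTbl AS l
WHERE COALESCE(1 | r.SourceID, r.SourceID) = 1 | l.SourceID;
```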

The big change here is in the where clause.  Notice the use of COALESCE and the first comparison value in that COALESCE.  This is called a BITWISE OR.  From MSDN: “The bits in the result are set to 1 if either or both bits (for the current bit being resolved) in the input expressions have a value of 1; if neither bit in the input expressions is 1, the bit in the result is set to 0.”

So I am comparing the bit values of 1 and the SourceID.  The SourceID from RumorView will create a match meeting the requirements put forth, thanks in large part to the BIT OR operation being performed on both sides of the equality in the WHERE clause.  It is also worth mentioning that the COALESCE is completely unnecessary in this query, but I am leaving it as a pseudo reference point for the performance tuning article that will be based on these same queries.

Next on tap is the CROSS Apply version.

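The bitwise version of the APPLY query looks like this sketch:

```sql
SELECT r.SourceID, l.SomeValue
FROM dbo.RumorView AS r
CROSS APPLY (
    SELECT lk.SomeValue
    FROM dbo.LookupTbl AS lk
    WHERE 1 | lk.SourceID = 1 | r.SourceID
) AS l;
```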

And here are the last two queries, which the optimizer equates to the same query.

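Sketches of the final pair, one with ISNULL and one without:

```sql
-- With ISNULL (stripped out by the optimizer)
SELECT r.SourceID, l.SomeValue
FROM dbo.RumorView AS r
INNER JOIN dbo.LookupTbl AS l
    ON ISNULL(1 | r.SourceID, r.SourceID) = 1 | l.SourceID;

-- Without ISNULL
SELECT r.SourceID, l.SomeValue
FROM dbo.RumorView AS r
INNER JOIN dbo.LookupTbl AS l
    ON 1 | r.SourceID = 1 | l.SourceID;
```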

The query optimizer in this case is smart and eliminates the ISNULL.  These two queries use the same exact plan, have the same cost and the same execution statistics.  The version with COALESCE is considered more expensive and takes longer to run than these queries.  It is also important to note that the Cross Apply Join also produces the exact same plan as these two queries.

Conclusion

So there you have it.  Many different ways to write the JOIN for this little query.  Performance and results may vary.  It is good to have a few different ways of writing this particular JOIN.  During my testing, it was evident that various methods performed better under different circumstances (such as how the indexes were configured – which will be discussed in the follow-up article).

Notes: Names have been concealed to protect the innocent ;).  Also, the fireworks images are all links to external sites.  I have no affiliation with those sites…disclaimer yada yada yada…I am not responsible for content on those sites but they can have the credit for the images.
