Scaring a Database Near You

Comments: No Comments
Published on: October 31, 2013

[Image: a pumpkin carved to look like R2-D2]

Something I have a hard time passing by is a good R2 unit.  I have R2 units in so many different forms, including a standing R2 cake one year for my birthday.  So when I came across this R2 unit, I just had to share it.

That is a pumpkin carved into the likeness of R2-D2.  I think it is a mighty fine job too.  It’s amazing how many good Star Wars-related pumpkin carvings there are out there.  You probably wouldn’t have too difficult a time finding three or four hundred if you tried a little google-fu.

Each year I try to have something for the Halloween holiday, such as this one or this one.  I failed to provide something in 2012, and this post is getting back on the right track.

Despite how easy it is to find haunting Halloween effects related to SQL Server, I am amazed at how few people have even heard of “Halloween Protection,” which stems from the “Halloween Problem.”

I am not going to dive into the problem or the protection for it.  I think that has been covered plenty, and even quite masterfully, by Paul White (blog | twitter).  I recommend that you read his four-part series on the topic, starting here.

With all of the cosplay going on here in the States, I find some of the scarier things to be about stuff I have previously fixed, or about which I have written or tweeted, or all of the above.

Take, for instance, this article about the blatant disregard by some vendors and clients with regard to security.  I still can’t figure out a legitimate reason why the public role would ever need to be dbo for a database.

Or we can take the recent time I tweeted about a cursor that I fixed.  I took that scary cursor down from a 30+ hour run time to a mere 50 seconds.  Here is a segment of the execution plan (the plan is roughly 4 MB in size, to give a little scale) zoomed out to 1/5th.

[Image: a zoomed-out segment of the optimized execution plan]

The query was much uglier than that originally.  Imagine that beast looping through on your server for 30 hours – and that is not even the entire plan.  It is little wonder things started to drag on the server.

Another scary item I like is the effect of implicit conversions.  That is a topic that can be explored pretty easily through the use of google-fu.  Here is a short demo on the implications of implicit conversions.

[codesyntax lang="tsql"]
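/* A reconstruction sketch - the original demo code was not preserved in this
   post.  It is pieced together from the description below: three simple temp
   tables, 10,000 rows each inserted via a while loop, and a deliberately
   mismatched column name/datatype in #T3.  Names and shapes are assumptions. */
SET NOCOUNT ON;

CREATE TABLE #T1 (SomeInt INT);
CREATE TABLE #T2 (SomeInt INT);
CREATE TABLE #T3 (SomeReal NVARCHAR(10));  -- name says real, datatype says otherwise

DECLARE @i INT = 1;

WHILE @i <= 10000
BEGIN
    INSERT INTO #T1 (SomeInt) VALUES (@i);
    INSERT INTO #T2 (SomeInt) VALUES (@i);
    INSERT INTO #T3 (SomeReal) VALUES (@i);  -- the INT is implicitly converted to NVARCHAR
    SET @i += 1;
END;

SET STATISTICS TIME ON;

-- Implicit-conversion-free query: INT joined to INT
SELECT COUNT(*)
FROM #T1 AS t1
JOIN #T2 AS t2 ON t2.SomeInt = t1.SomeInt;

-- Query replete with an implicit conversion: NVARCHAR compared to INT
SELECT COUNT(*)
FROM #T1 AS t1
JOIN #T3 AS t3 ON t3.SomeReal = t1.SomeInt;

SET STATISTICS TIME OFF;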

[/codesyntax]

In this demo I have created three temp tables.  Each is pretty simple in nature, and each is to receive 10,000 records.  The insert statements just insert an integer into each field of each table through the while loop.  Notice that I intentionally named a column in #T3 SomeReal, but the datatype is NVARCHAR.  This is to underscore a pet peeve of mine that I have seen over and over again – naming a field after a datatype when the field’s datatype doesn’t even match.

When this query runs, I get the following timing results.

[Image: timing results for the two queries]

The thing that stands out to me is the huge difference in time between the implicit-conversion-free query and the query replete with an implicit conversion.  The implicit conversion query was about 930 times slower than the query free of implicit conversions.  Granted, that query was against a cold cache, so let’s see what happens with an average of five runs each against a warm cache.

[Image: grim reaper]

With a warm cache, I see an average of 51ms for the implicit-conversion-free query.  On the other hand, the implicit conversion query runs at an average of 84,525ms.  That equates to about 1,644 times slower.

Sure, this was a contrived example.  But keep in mind the table sizes, the datatypes, and the number of records in each table.  Had this been a more true-to-life example, with larger tables and millions of records, we could be seeing a query that is far more devastating due to the implicit conversions.  Let’s just call it the grim reaper of your database. (Grim Reaper from www.mysticalpassage.com/grim_reaper.html)

With these horrifying things haunting your database, I leave you with this new hope as you battle the dark side and the grim reaper in your database.

[Image: a Star Wars battle scene]

Database In Recovery

Comments: 4 Comments
Published on: June 4, 2012

What do we do?

Have you ever run into a database that is in the “In Recovery” state?

If that has happened, have the bosses and/or end users come to you asking “What do we do?” or “When will it be done?”  They probably have – it is inevitable.

The question is, what do you do when you run into a database that is in this state?

We all know that it doesn’t help much if we panic about the issue – that just feeds the already growing anxiety.  If you feel anxiety, that’s OK; just don’t show it to the end users or to the boss.  You need to show them that you are on top of the issue.

While trying to keep everybody calm and apprised of the situation, you would probably like some assurance for yourself that the database is progressing to a usable state.  That is what I want to share today – a little query that I wrote for this very situation.

Anxiety Tranquilizer

[codesyntax lang="tsql"]
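/* A sketch of the idea - the original query was not preserved in this post.
   It is reconstructed from the description below: watch the transactions
   involved in recovery, with the LEFT OUTER JOIN to
   sys.dm_tran_active_transactions as the key piece.  The database name in
   the WHERE clause is a placeholder. */
SELECT  DB_NAME(dtdt.database_id) AS DatabaseName,
        dtdt.transaction_id,
        dtat.name AS TransactionName,
        dtdt.database_transaction_begin_time,
        dtdt.database_transaction_log_record_count,
        dtdt.database_transaction_next_undo_lsn
FROM    sys.dm_tran_database_transactions AS dtdt
LEFT OUTER JOIN sys.dm_tran_active_transactions AS dtat
        ON dtat.transaction_id = dtdt.transaction_id
WHERE   DB_NAME(dtdt.database_id) = 'YourRecoveringDB';  -- hypothetical name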

[/codesyntax]

Unfortunately, this query does not show the time remaining for the rollback, nor the percent complete – those still require querying the error log.  They would be awesome additions if you know how to get them some other way (and if you do, let me know).  Thanks to a blog post by Tim Loqua for the base info on querying the error log for the percent complete.

I think the key component of this query is the LEFT OUTER JOIN to sys.dm_tran_active_transactions.  This is essential since the recovery shows up as two transactions.  One transaction is numbered and is the placeholder for the un-numbered transaction where the work is actually being done.  For the numbered transaction, you should see a transaction name of “Recovery Allocation Locks”; for the unnumbered transaction, you will see nothing.

Now, unnumbered is not entirely accurate, because that transaction has an id of 0 – but you will not find a correlating transaction for it in the sys.dm_tran_active_transactions DMV.

The transactions will be displayed here until recovery is complete.  That also means that, if you really wanted to, you could create a table to log the recovery process by inserting the results from this query into it.  Then you could revisit the table and examine in closer detail what happened during recovery.

The anxiety killer in this query is watching two columns in the unnumbered transaction: database_transaction_log_record_count and database_transaction_next_undo_lsn.  I rerun the query multiple times throughout the recovery process and check those columns to ensure the data in them is changing.  Changing values in those fields means recovery is progressing, and seeing actual progress provides some comfort (even though we know in the back of our heads that it is progressing).

Move the T-log file of the Mirror Database

Categories: News, Professional, SSC, SSSOLV
Comments: 2 Comments
Published on: April 2, 2012

On more than one occasion, I have run into a case where a large database with a large transaction log was being mirrored.  Then a hiccup comes along – the transaction log on the primary server grows to the point where the transaction log on the secondary server fills the drive.  Now you have a problem: you need to reclaim some space or you need to find more disk.

Well, just because the database is mirrored and the drive is out of space doesn’t mean there is nothing that can be done without significantly impacting the primary server – or the mirror.

SQLCAT has a nice write-up on what can be done.  I have used this method a few times and felt that it needed to be brought up.  You can read the SQLCAT article here.

In short (here are the steps from that article), you can do the following:

  1. On the secondary server, use ALTER DATABASE … MODIFY FILE to move the log file (a concrete sketch follows this list):

     ALTER DATABASE <db_name> MODIFY FILE (NAME = <logical_log_name>, FILENAME = 'new location');

  2. Stop the SQL Server service for the instance which hosts the mirrored (secondary) database.
  3. Move the log file to the new location specified in the MODIFY FILE script already run.
  4. Start the SQL Server service for the instance which hosts the mirrored (secondary) database.
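As a concrete illustration of step 1, here is a hypothetical example (the database name, logical log file name, and path are all placeholders for your own):

[codesyntax lang="tsql"]
-- Hypothetical names and path; run this on the mirror (secondary) instance.
ALTER DATABASE MirroredDB
MODIFY FILE (NAME = MirroredDB_log, FILENAME = N'E:\NewLogPath\MirroredDB_log.ldf');
[/codesyntax]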

Performing these steps can be just the trick needed to save the day.  Moving the file this way has saved me an outage on more than one occasion.  It has also saved me the hours of work that come along with having to break and rebuild the mirror.

System Base Tables

Comments: No Comments
Published on: January 30, 2012

On January 19th, I published a post about the Dedicated Administrator Connection.  I spoke very briefly of the system base tables in that article.  Today, I want to dive into these tables a little bit more.

First, let’s get the Microsoft definition for these tables.  “System base tables are the underlying tables that actually store the metadata for a specific database.”

Have you ever queried sys.objects from the master database and wondered about some of the results?  You can see all of the System Base tables when querying the sys.objects view.  These tables are denoted in sys.objects by a type of ‘S’ and a type_desc of ‘SYSTEM_TABLE’.

Here is a simple query to take a quick peek at these tables.

[codesyntax lang="tsql"]
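/* A simple sketch - the original query was not preserved in this post.
   System base tables show up in sys.objects with a type of 'S'. */
USE master;
GO
SELECT  name,
        object_id,
        type,
        type_desc,
        create_date
FROM    sys.objects
WHERE   type = 'S'
ORDER BY name;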

[/codesyntax]

There is no need for a Dedicated Administrator Connection in order for this query to work.  You can view these results over a non-DAC connection to the instance, so long as you have adequate permissions to query sys.objects.  That said, not all objects returned by that query are System Base tables.  Furthermore, it appears that the list on MSDN is not comprehensive.  One example is the reference to sys.sysserrefs, which does not appear to exist in SQL 2008 R2; another is the System Base table sys.sysbrickfiles (which is used by sysaltfiles, as shown in this execution plan), which is missing from the list.

If I try to query the sysbrickfiles table (as an example) without connecting via the DAC, I will get an error message like this:
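[codesyntax lang="tsql"]
-- Attempted over a normal (non-DAC) connection; the base table is simply
-- not visible, so the response should be something along the lines of:
-- Msg 208, Level 16, State 1 - Invalid object name 'sys.sysbrickfiles'.
SELECT * FROM sys.sysbrickfiles;
[/codesyntax]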

This is normal behavior.  You cannot query the System Base tables without first connecting via the DAC.  Having said that, the obligatory warning is required: as explained on MSDN, these tables are intended for use by Microsoft.  Proceed at your own risk, and please make sure you have backups.

In addition to these System Base tables, you will find tables that are not mentioned in the article and that do not appear in the master database.  These System Base tables are found within the Resource database.  The Resource database does contain most of the tables mentioned in that article, but there are some differences.  I will leave that discovery exercise to the reader.

There is plenty about SQL Server that many of us take for granted.  Under the hood, there is much more to learn.  Taking a peek at the System Base tables is one of those areas that will help you to learn more about SQL Server.  My question is this: How far are you willing to explore to learn more about SQL Server?

Haunting a Database Near You

Comments: 6 Comments
Published on: October 31, 2011

Today, we have a special Halloween edition.  For me, Halloween and computer geekery go quite well together.  Thinking about it, I wanted to better understand whether there was a correlation.  As a DBA, have you wondered the same thing?

Well, I have a short list of five things that may help you correlate your affinity for Halloween with your love for databases.

Tombstones

Did you know that a tombstone is a legitimate thing in SQL Server?

Tombstones are replica-related.  They are deleted items retained in the replica, used to make sure the deleted item doesn’t inadvertently get put back into the replica.

You can read a lot more about tombstones in the MSDN article here.

Tombstones are not unique to SQL Server.  These are commonplace in Active Directory as well.

Ghosts

Not all rows that are deleted move on to the afterlife quickly like they should.  Some like to hang around due to unfinished business.

The unfinished business, in this case, is waiting for the server to run a cleanup thread.  This happens when the server is not too busy and has enough free resources to help these records move on to the afterlife.

You can see the evidence of these ghosts with specialized equipment.  Through the use of a DMO, we can see the ghost record count on a per-index basis.  The DMO is sys.dm_db_index_physical_stats; take a look at the ghost_record_count column in the returned record set.
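Here is a quick sketch of that ghost hunt (run it in the database of interest; note that ghost_record_count is only populated in the SAMPLED or DETAILED scan modes):

[codesyntax lang="tsql"]
-- Report indexes in the current database that still have ghost records.
SELECT  OBJECT_NAME(ips.object_id) AS TableName,
        ips.index_id,
        ips.ghost_record_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ips
WHERE   ips.ghost_record_count > 0;
[/codesyntax]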

With more specialized equipment, the engine takes care of the cleanup and removal of these ghosts.  Here is an in-depth foray into the world of SQL ghost hunting.  Whatever you do, don’t cross the streams.

Zombies

It’s alive!!

No, I killed it!!

It can’t be…How is it still alive?

The transaction will not commit and may get rolled back.  The zombie has reared its ugly head.  A transaction that cannot commit but keeps going (or rolls back) due to an unrecoverable error is a zombie transaction.

From MSDN, here is a more specific definition of a zombie.

Rowsets can become zombies if the internal resource on which they depend goes away because a transaction aborts.

Spawn

The spawn of SQL Server is not so much like the Spawn character of the comics.  Nor is it much like the spawn shown to the right.

That is, unless it is not managed very well.  SQL Server can spawn multiple threads if the optimizer deems it necessary for a query.  This is also known as parallelism.

Parallelism can be a good thing or a bad thing.  Understanding it can help keep it on the good side.  You might want to check out some of Paul White’s articles on the topic.
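If the spawn gets out of hand for a single query, you can rein it in with a hint (a sketch; the table and column names are hypothetical):

[codesyntax lang="tsql"]
-- Hypothetical table; the hint caps this query at a single worker thread,
-- overriding the instance-wide max degree of parallelism setting.
SELECT  SomeColumn, COUNT(*) AS RowsPerValue
FROM    dbo.SomeBigTable
GROUP BY SomeColumn
OPTION (MAXDOP 1);
[/codesyntax]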

Children of the Corn

Well, this one is not really something in SQL Server.  That said, every time I think of orphaned users in SQL Server, Children of the Corn comes to mind.

An orphaned user is one whose database user SID no longer matches a login SID, for one reason or another.  This makes it so that the user can no longer log in to SQL Server.
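A quick way to find these kids (a sketch using the old standby procedure; the database name is a placeholder):

[codesyntax lang="tsql"]
USE SomeUserDatabase;  -- hypothetical database name
GO
-- Lists database users whose SIDs have no matching server login.
EXEC sp_change_users_login @Action = 'Report';
[/codesyntax]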

If you don’t know about these kids, they can really make for a frustrating day.  Read more here.

Bonus

I have just covered five things in SQL Server that correlate quite closely to Halloween.  But this is by no means an exhaustive list.  For instance, an obvious correlation is the “KILL” command.  Another good one is the monster known as the Blob (read more about that monster here and here).

With the opportunity to have Halloween every day, it’s no wonder I like being a DBA.

Happy Halloween

Seize the Moment

Categories: News, Professional, SSC
Comments: 2 Comments
Published on: April 29, 2011

Today I had a bit of regret slap me in the face.  That face slap came from participating in a SQL quiz on twitter hosted by Paul Randal (Blog | Twitter).  The questions being thrown out there were deep, technical, internals-type questions.  These weren’t necessarily the type of questions you would see in an interview; they were for fun.

I say it was a bit of a face slap because I had an opportunity to attend an Internals training session presented by SQLSkills in Dallas but was unable to make it.  It made me wonder how much more I would have been able to answer had I actually attended the course.  If you have an opportunity to attend such an event – DO IT!

From the set of questions today, I learned quite a bit.  The knowledge and wealth of information that you can gain by attending one of these events has got to be substantially more than what is presented in the measly ten questions posed in these Pop Quizzes that Paul has conducted.

Now I need to find my way into the Bellevue course.

Physical Row Location

Categories: News, Professional, SSC
Comments: No Comments
Published on: April 29, 2011

SQL Server 2008 has presented us with a couple of options to aid us in becoming better DBAs.  You can see this evidenced in many ways in the product.  A couple of the things that make me think this is the case boil down to two functions that are new in SQL 2008.  I learned about these while trying to learn how to do something else.  It just so happens that these functions could possibly help me in that other process (I’ll write more about that later, when I have finished it).

These new functions are sys.fn_PhysLocFormatter and sys.fn_PhysLocCracker.  The two functions are really very similar.  The first does as the name implies and formats the physical location, while the second provides a table output of the location.  If you look at the sp_helptext of both, you can see that they have only minor differences.

[codesyntax lang="tsql" title="physlocformat"]
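-- The post originally showed the function text here; you can pull it
-- yourself on a SQL 2008 (or later) instance:
EXEC sys.sp_helptext N'sys.fn_PhysLocFormatter';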

[/codesyntax]

and

[codesyntax lang="tsql" title="physloccracker"]
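-- And likewise for the cracker:
EXEC sys.sp_helptext N'sys.fn_PhysLocCracker';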

[/codesyntax]

When you look at these two functions, you can easily see that they are similar right up until the end, where they diverge in functionality.  The first casts the data into the “formatted” version, while the cracker simply outputs to a table.

Use of these functions is also quite easy.

[codesyntax lang="tsql" title="usage"]
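-- A sketch of typical usage; dbo.SomeTable is a hypothetical table.
-- %%physloc%% is the virtual column exposing the physical row locator.
SELECT  sys.fn_PhysLocFormatter(%%physloc%%) AS RowLocation,  -- (file:page:slot)
        t.*
FROM    dbo.SomeTable AS t;

SELECT  plc.file_id,
        plc.page_id,
        plc.slot_id,
        t.*
FROM    dbo.SomeTable AS t
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS plc;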

[/codesyntax]

These functions can prove to be very helpful in your troubleshooting or dives into Internals.  Check them out and enjoy.

SQL Server Startup

From time to time I see a question asking how to determine the last time a SQL Server was restarted, or when the last time was that a database was used.  For the first question there is a pretty common answer that works for SQL 2000 and up.  In answering both questions, though, I am going to show you three quick, similar scripts that can be used to find this information.  The catch is that these queries will not work on SQL 2000.  On the flip side, they should give you an idea of what can be done to find the same info in SQL 2000.

TempDB

[codesyntax lang="tsql"]
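-- A sketch of the common answer (SQL 2005 and up):
SELECT  create_date AS SQLServerStartTime
FROM    sys.databases
WHERE   name = 'tempdb';

-- The SQL 2000 equivalent mentioned below would use the older catalog:
-- SELECT crdate FROM master..sysdatabases WHERE name = 'tempdb';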

[/codesyntax]

A common answer to the question of when SQL Server was last started is to check the create_date of the tempdb database.  Since tempdb is recreated every time SQL Server is restarted, it is a pretty accurate timestamp of the server start.  This query was written to work for SQL 2005 and above, but you can still find the creation date of tempdb in SQL 2000 by using a different query.

sys.dm_exec_requests

[codesyntax lang="tsql"]
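/* A reconstruction sketch based on the description below: the CTE pulls the
   most recent read and write per database from sys.dm_db_index_usage_stats,
   and the start time comes from the first process id in sys.dm_exec_requests. */
WITH LastActivity AS (
    SELECT  database_id,
            MAX(COALESCE(last_user_seek, last_user_scan, last_user_lookup)) AS LastRead,
            MAX(last_user_update) AS LastWrite
    FROM    sys.dm_db_index_usage_stats
    GROUP BY database_id
)
SELECT  (SELECT start_time
         FROM   sys.dm_exec_requests
         WHERE  session_id = 1) AS SQLServerStartTime,
        DB_NAME(la.database_id) AS DatabaseName,
        la.LastRead,
        la.LastWrite
FROM    LastActivity AS la;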

[/codesyntax]

In this query we access one of the Dynamic Management Views available in SQL 2005 and up.  In this case, we look at the first process id, similar to what one may have done with sysprocesses in SQL 2000.  Now is probably a good time to also note something that is happening inside the CTE.  Notice the COALESCE that is being used?  I am taking the MAX from the fields that are associated with read-type activities in a database.  This information is being pulled from a DMV called sys.dm_db_index_usage_stats.  This query will get me the most recent activity for reads and writes in each database (notice the GROUP BY).

Default Trace

[codesyntax lang="tsql"]
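/* A reconstruction sketch per the description below: grab the default trace
   path into a variable, then read the file with fn_trace_gettable. */
DECLARE @TracePath NVARCHAR(260);

SELECT  @TracePath = [path]
FROM    sys.traces
WHERE   is_default = 1;

SELECT  StartTime AS SQLServerStartTime
FROM    sys.fn_trace_gettable(@TracePath, DEFAULT)
WHERE   EventSequence = 1
        AND IsSystem = 1;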

[/codesyntax]

The last one makes use of the default trace in SQL 2005, 2008, and 2008 R2.  The default trace records the time of certain events that occur on the instance, and that includes when the server is starting up.  To find the server startup time in this case, I check the EventSequence and IsSystem fields for a value of 1.  As you can see, I also dump the file path of the default trace file into a variable and use that in the function to get the data.

When comparing the performance of these three options, they pretty much came out even.  Each took its turn performing faster – with regard to elapsed time.  However, the default trace method did return a more expensive execution plan every single time.  With how fast these queries run, I’m not really sure that is very conclusive, nor that it would be a heavy concern.  These queries are designed more for the occasional run by the DBA than to be executed millions of times a day.  You decide which will be best for you.  Given the use of a function to read the default trace info, I was mildly surprised that that query performed as well as it did.

Another thing to note is that each of these methods returns a slightly different timestamp.  For instance, the tempdb timestamp gives me 3:21:28, sys.dm_exec_requests produces a timestamp of 3:21:36, and the trace file shows a timestamp of 3:21:24.  I don’t have any hard foundation for why that is – just speculation that seems to make sense.  I will just leave it at that, though.

In addition to these methods there is also the option of checking the system event logs to determine the SQL Service startup time.  This information should help you when investigating your databases and server and would be something handy to keep in your toolbox.

SQL Resource

Comments: No Comments
Published on: March 22, 2011

By now, I am sure that you are well aware of the Resource database – a new system database made available starting with SQL Server 2005.  In SQL 2005 you could move the Resource database, but in 2008 and beyond you cannot.  If you plan your install accordingly, your databases should install where you would like them (i.e., the program files directory that is the default installation directory can be changed during setup).

That is all well and good, but what is in this database, and how can we take a look into it?  I thought about this as I was reading a blog by Thomas LaRock where he discusses system tables and system views.  I started thinking about the underpinnings of these views and whether we could see them in the Resource database.  Taking a peek into the Resource database is rather trivial.  I will show you how to copy that database and attach the copy as an alternate database.

The first step in taking a peek is to locate the directory where the data and log files for the hidden database are stored.  You can find the path by referring back to the SQL 2005 and 2008 links earlier in this post.  In my case (I am using SQL 2008 SP2), the path is displayed in the next image.

Take the two highlighted files, copy them, and place them into a new directory.  I copied and renamed the files – placing them into my “Data” directory as follows.

As you can see, the database is rather small – and you could probably do this procedure multiple times ;).

Once you have the files copied to the new locations, you are now ready to attach the database.  I am just using the GUI to attach the database – for illustrative purposes.  While attaching, I think there is an interesting thing that should stand out.

Now look closely at the image just above – pay attention to the current file path.  This is the screen you will see after you browse to your new directory location and choose to add the mdf file for the Resource database.  You will see that this file path does not match either of the file paths that I have already shown.  Take a closer look:

You can see that the file path references e:\sql10_katmai_t\sql…  I find that pretty interesting; I don’t know the reason for that path persisting like that.  The file paths shown need to be changed to the appropriate path where we placed the copied files – as follows.

There is one more thing that I think you should change.  This really is a matter of preference, and it is good housekeeping: change the name of the database in the Attach As field.  I appended _test to the end of the database name.  This should make it obvious that this database is NOT the system database.
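If you prefer T-SQL to the GUI, the attach boils down to something like this (a sketch; the paths are whatever new directory you copied the files into):

[codesyntax lang="tsql"]
-- A sketch of the GUI steps in script form; adjust the paths to match
-- wherever you copied and renamed the files.
CREATE DATABASE mssqlsystemresource_test
ON (FILENAME = N'D:\Data\mssqlsystemresource_test.mdf'),
   (FILENAME = N'D:\Data\mssqlsystemresource_test.ldf')
FOR ATTACH;
[/codesyntax]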

With these few steps, you now have the Resource database attached as a user database.  With that, you can now take a look at the inner makings of some of the system views.  Take the following queries, for example:

[codesyntax lang="tsql"]
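-- The two queries being compared - the real system view, and the same
-- view in our attached copy of the Resource database:
SELECT * FROM master.sys.sysdatabases;

SELECT * FROM mssqlsystemresource_test.sys.sysdatabases;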

[/codesyntax]

Both of these queries return exactly the same thing; both are provided only as a means of backwards compatibility.  You can see master.sys.sysdatabases (just like mssqlsystemresource_test.sys.sysdatabases).  One thing you can do, though, is script out that view and see what the source of the data is.  Well… with the Resource database attached as described, you can now script it out.  When you script out the view, make sure you script the correct version of it.  Just like any user database, you will get a category of system tables and one of system views; the objects under those categories cannot be scripted.  The objects, however, are also represented in this database as “user” objects – which can be scripted.

After you script out the object sys.sysdatabases, you will see that the source of the data is the following:

[codesyntax lang="tsql"]
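/* An abbreviated sketch of what the scripted "user" copy of sys.sysdatabases
   looks like - the full definition carries many more compatibility columns.
   The point is the FROM clause: the view is built on sys.databases and
   sys.master_files$. */
CREATE VIEW sys.sysdatabases AS
SELECT  d.name,
        d.database_id AS dbid,
        -- ...remaining backwards-compatibility columns elided...
        f.physical_name AS filename
FROM    sys.databases AS d
LEFT JOIN sys.master_files$ AS f
        ON f.database_id = d.database_id
        AND f.file_id = 1;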

[/codesyntax]

There you have it – this view is predicated on the newer view called sys.databases.  As has been said over and over again, it exists only as a backwards compatibility view, until Microsoft decides to remove it.  Also note that the other part of the foundation for the sysdatabases view is the sys.master_files$ view.  You can now see this in the system views collection inside the Resource database that we attached.  Unfortunately you can’t script it, but at least you can see the columns it would return.  You also can’t query that view directly – you have to reach it through the system views.
