Minor ailments and Healthy SQL

Comments: 1 Comment
Published on: January 13, 2015

One of the things that DBAs love to do is keep their servers running and healthy.  A healthy server, after all, is your ticket to a stress-free day and a full night’s sleep.  Granted, this is not a guarantee, but it sure helps make life easier.

We are always looking for the big ticket items to keep the servers tuned and purring.  But from time to time, like with humans, it’s not Ebola that takes us down but the little sniffles and minor aches and pains that keep us from doing our best.

This month as a part of the TSQL Tuesday party, Robert Pearl (blog | twitter) has asked us to write about Healthy SQL and the things that make SQL go yum.

Nagging Cough

Like that tickle in the back of the throat, that makes you cough now and again, there is an occasional tickle that can bug SQL Server and cause some pain here and there.  I have written about this nagging cough in the past.

The problem with this ailment is that, since it is not always a cold, or always present, it is oftentimes missed and frequently hard to diagnose.  I talked about this previously as one of those things that should be checked from time to time.  This is that nagging synonym* cough.  You can read about it here.
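
For a quick look at what might be lurking, sys.synonyms shows each synonym and what it points at.  This is merely a starting point, not the full check from the linked article:

[codesyntax lang="tsql"]
-- Quick inventory: a synonym with a four-part base_object_name
-- is leaning on a linked server
SELECT name, base_object_name
FROM sys.synonyms;
[/codesyntax]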

That’s great, we can take a little cough syrup, fix some synonyms and feel a lot better about SQL Server in the morning.

Aching Joints

Ever have one of those trick knees that decides to give out on you out of the blue?  You might be walking up the stairs (or down the stairs) and suddenly the knee is gone and you end up flat on your face.  It could be fine most of the time; it’s just that once in a while the knee decides to give a little twinge of pain, fold up, and drop you to the floor.

SQL Server has a similar problem with this next one.  On a frequent basis, I come across a bad knee within SQL Server in the form of a linked server.  Sure, I can hear you saying that linked servers are always bad.  Fair enough!  I have seen linked servers work wonderfully, but most of the time they cause pain.

The type of pain with a linked server I would like to share today is around more of an edge case (like standing on the edge of the stairs with your knee about to give way).

As any good DBA would, you may want to restore your databases to a test environment on a routine basis to ensure that the backups worked and that you have a functioning recovery point for your databases in the event of a disaster.  I was working on just that sort of thing for a client recently when I ran into this beautifully pain-filled knee.

This customer had not one, not two, not even three instances of this problem.  They had a glorious 492 instances of this problem.  The vendor for this client decided the best thing to do would be to replicate the production database to a separate database to help offload performance.  This other database happens to be on the same server (so no performance offload).  While that is not the most intelligent thing to do, it is not the end of this ailment.

In addition to the replication, there were 492 views that utilized linked servers to UNION ALL data between the two databases.  The data in each table between the two databases (in this case) is the same.  So we have a linked server to UNION ALL this data between two databases on the same server that is replicated.  Wowza!

Now, due to this proliferation of linked servers in views that reach the data the long way around, there is a lot of cleanup work to be done when restoring this database to a test environment.  After all, the restore of the database is only one piece of the healthy backup puzzle.  You would want to test the data and the application against it.

Thankfully, this kind of cleanup can be made easy with a simple search and replace when querying sys.sql_modules to find any views or stored procedures that need to be updated to work in the test environment.  Here is an example of such a script to help fix that problem.
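
What follows is a minimal sketch of such a script.  The linked server name ([ProdServer]) is a hypothetical stand-in, and the CREATE-to-ALTER swap shown only covers views; stored procedures would get the same treatment with their own swap:

[codesyntax lang="tsql"]
/* Minimal sketch: generate ALTERed module code with the linked server
   reference stripped.  [ProdServer] is a hypothetical name - use your own. */
SELECT  OBJECT_SCHEMA_NAME(sm.object_id) AS SchemaName
       ,OBJECT_NAME(sm.object_id) AS ObjectName
       ,o.type_desc AS ObjectType
       ,REPLACE(
            REPLACE(sm.definition, 'CREATE VIEW', 'ALTER VIEW')
           ,'[ProdServer].', '')                  -- strip the linked server prefix
        + CHAR(13) + CHAR(10) + 'GO' AS ModCode   -- batch separator for pasting
FROM sys.sql_modules sm
    INNER JOIN sys.objects o
        ON sm.object_id = o.object_id
WHERE sm.definition LIKE '%[[]ProdServer].%';     -- [[] escapes the bracket
[/codesyntax]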

With the results from a query such as that in hand, I can either change the views en masse or copy all of the results from the ModCode column and paste them into a new window.  From there, using a regex (to replace each GO statement with a GO plus a newline) or something like SQL Prompt to prettify the code makes it easy to turn into something more useful.


Of course, this only helps address the issue with the linked servers in the views.  The same problem would exist with stored procedures.  It is up to the DBA to know which ones need to be changed and which should not be changed.  Just understand that linked servers present yet another nagging symptom that keeps your server from being healthy and, worse, they help keep you from being healthy (remember the lack of sleep they cause).

If you are interested, I also have this article to help you find those pesky linked servers before you start diving down the restore rabbit hole.  The article will help evaluate the scope of linked server use and has a query to help identify linked servers.
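
The quickest first pass, if you just want a list, is the catalog view itself:

[codesyntax lang="tsql"]
-- First pass: enumerate the linked servers on the instance
SELECT name, product, provider, data_source
FROM sys.servers
WHERE is_linked = 1;
[/codesyntax]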

*Funny afterthought: both of these niggles that can decrease the health of your SQL Server have ties back to linked servers.  If you read the links, you will see what I mean.

Supported Compatibility Levels in SQL Server

Categories: News, Professional, Scripts, SSC
Comments: 1 Comment
Published on: May 21, 2014

It has been well documented and is well known that SQL Server supports certain older versions of SQL Server in a compatibility mode.  This setting is something that can be configured on the database properties level.  You can quickly change to an older compatibility level or revert the change to a newer compatibility level.
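
Outside of the GUI, the change itself is a one-liner (the database name here is a placeholder):

[codesyntax lang="tsql"]
-- Placeholder database name; pick a level the instance actually supports
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 110;
[/codesyntax]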

Changing the compatibility level is sometimes necessary.  Knowing what compatibility modes are available for each database is also somewhat necessary.  The common rule of thumb has been the current version and two prior versions.  But even with that, sometimes it is warm and fuzzy to be able to see the supported versions in some sort of format other than through the GUI for database properties.

Sure, one could go and check Books Online.  You can find that information there.  Or you could fire up this script and run with the output (as a guideline).
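
What follows is a sketch in that spirit, built from the “current version and two prior” rule of thumb rather than from the original script, so treat the mapping as a guideline:

[codesyntax lang="tsql"]
/* Sketch: list compatibility levels for this instance per the
   "current version and two prior" rule of thumb. */
DECLARE @MajorVersion int =
    CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS varchar(20)), 4) AS int);

SELECT v.CompatLevel
FROM (VALUES  ( 9,  70), ( 9,  80), ( 9,  90)
            , (10,  80), (10,  90), (10, 100)
            , (11,  90), (11, 100), (11, 110)
            , (12, 100), (12, 110), (12, 120)
     ) v (MajorVersion, CompatLevel)
WHERE v.MajorVersion = @MajorVersion
ORDER BY v.CompatLevel;
[/codesyntax]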

 

This script will return results such as the following.

[Screenshot: script output showing the supported compatibility levels]

And if we wanted to see the results for a SQL Server 2014 installation, we would see the following.

[Screenshot: script output for a SQL Server 2014 installation]

The output is displayed in the same format you might see it if you were to use the Database Properties GUI.  That said, if you are using the GUI in SQL Server 2014, you might run into the following.

[Screenshot: the SQL Server 2014 Database Properties GUI listing compatibility level SQL Server 2005 (90)]

Notice the additional compatibility level for SQL 2005?  If you check the documentation, you will probably find that compatibility level 90 is not supported in SQL 2014.  In fact, it says that if a database is at compatibility level 90, it will be upgraded to 100 (SQL 2008) automatically.  You can find all of that and more here.

If you try to select compatibility level 90, you might end up with an error.  (If you are on 2014 CTP2, you will probably be able to change the compatibility level without error.)

Anyway, this is the message you might see when trying to change to compatibility 90.

[Screenshot: the error message returned when attempting to set compatibility level 90]

They sometimes say that “seeing is believing.”  Well, in this case you may be seeing a compatibility level in the 2014 GUI that just isn’t valid.  Keep that in mind when using the GUI or trying to change compatibility modes.

Missing Indexes Script v2

Comments: 10 Comments
Published on: May 15, 2014


Frequently we hear the analogy that <insert item here> is like opinions, everybody has one and not all of them are good (some may stink).

Well, this may just be another one of those <items>.  Whether it stinks or not may depend on your mileage.

I had shared a similar script back in January 2012 and wanted to share something a little more current.  As is the case for many DB professionals, I am always tweaking (not twerking) and refining the script to try and make it more robust and a little more accurate.

This version does a couple of things differently than the previous version.  For one, this is a single database at a time (the prior version looped through all of the databases with a less refined query).  Another significant difference is that this query is designed to try and pull information from multiple places about the missing indexes and execution statistics.  I felt this could prove more advantageous and useful than to just pull the information from one place.

Here is the current working script.

A note on the script as displayed on this page: the display alters the code, showing n.VALUE where the code actually reads n.value.  If copying from this page, please change the upper-cased “VALUE” in the XML segment to “value” so it will work.  A download of the script has been added at the end.

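In condensed form, the core of it looks like this.  This is a sketch, not the verbatim original (grab the download for that); the download also works the plan cache XML, which is where the extra execution detail comes from:

[codesyntax lang="tsql"]
/* Condensed sketch only - the downloadable script does considerably more
   (including the XML work against the plan cache).  Current database only. */
SELECT  mid.statement AS TableName
       ,mid.equality_columns
       ,mid.inequality_columns
       ,mid.included_columns
       ,migs.user_seeks
       ,migs.user_scans
       ,migs.last_user_seek
       ,migs.unique_compiles
       ,CAST(migs.avg_user_impact * migs.avg_total_user_cost
             * (migs.user_seeks + migs.user_scans) AS bigint) AS Impact
FROM sys.dm_db_missing_index_details mid
    INNER JOIN sys.dm_db_missing_index_groups mig
        ON mid.index_handle = mig.index_handle
    INNER JOIN sys.dm_db_missing_index_group_stats migs
        ON mig.index_group_handle = migs.group_handle
WHERE mid.database_id = DB_ID()
ORDER BY Impact DESC;
[/codesyntax]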

As has been the case in the past, this is not the end query.  I have a backlog of updates I want to make to the query and try to create something even more useful from it.

As a bit of fair caution, just because an index shows up as a recommended implementation from this script, it does not mean you should blanket implement it.  Also, like the previous script, the higher the impact value, the more probable the index will have a positive impact on your system.

In addition to the impact, I also look at the “Similar Queries” and “Similar Plans” numbers from the result set.  These values can sometimes lead to an understanding that there are other queries or other plans that could also benefit from the creation of the index.  Your mileage may vary.

Download script Missing_Index.

Table Hierarchy updated

Recently a need resurfaced to explore the foreign key tree (hierarchy/genealogy) as it related to a specific table within a database.  As I pulled out the script from the repository, I realized there was some unfinished work to be done.  But there was also some polish that needed to be added.  This is an update to the most recent posting of that script.  You can see several revisions in the series at this link or the group here.

Some of the changes involve formatting and labeling.  I added a new column called “Direction” to help show the relationship of the key to the table in question.  I also changed up the FKGenealogy column (formerly called SortCol) to reflect the source table more accurately in the case when the key comes from an ancestor rather than a descendant.  The Level of the FK was also modified to help show how far away the ancestor is from the origin table.

A final adjustment comes in the Genealogy attribute.  Ancestors were all starting at the wrong point in the lineage.  I adjusted that so the lineage is displayed from the point in the tree where the ancestor relates, rather than as a root straight off the origin table.

All in all, this script should make more sense to the user than the previous versions.
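
For anyone curious about the mechanics, the heart of such a script is a recursive CTE over sys.foreign_keys.  Here is a bare-bones sketch of the descendant half; the starting table is a placeholder, and the full script linked above also walks the ancestors and fleshes out Direction and FKGenealogy:

[codesyntax lang="tsql"]
/* Bare-bones sketch of the descendant walk.  @StartTable is a placeholder. */
DECLARE @StartTable sysname = N'dbo.MyTable';

;WITH FKTree AS (
    SELECT  fk.parent_object_id AS ChildObjId
           ,fk.referenced_object_id AS ParentObjId
           ,1 AS FKLevel
           ,CAST(OBJECT_NAME(fk.referenced_object_id) AS nvarchar(max)) AS FKGenealogy
    FROM sys.foreign_keys fk
    WHERE fk.referenced_object_id = OBJECT_ID(@StartTable)
    UNION ALL
    SELECT  fk.parent_object_id
           ,fk.referenced_object_id
           ,t.FKLevel + 1
           ,t.FKGenealogy + N' > ' + OBJECT_NAME(fk.referenced_object_id)
    FROM sys.foreign_keys fk
        INNER JOIN FKTree t
            ON fk.referenced_object_id = t.ChildObjId
)
SELECT  N'Descendant' AS Direction
       ,OBJECT_NAME(ParentObjId) AS ParentTable
       ,OBJECT_NAME(ChildObjId) AS ChildTable
       ,FKLevel
       ,FKGenealogy
FROM FKTree
OPTION (MAXRECURSION 500);   -- abort runaway recursion from circular keys
[/codesyntax]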

T-SQL Tuesday #050: Automating Database Restores

Comments: 1 Comment
Published on: January 14, 2014

Here it is, time for the party of the month for the SQL Server acolytes, and I was running a bit behind.  Why?  Well, that was due in part to me rushing around trying to do some of what this month’s topic is about.  Some would call that pretty good timing.

Hemanth.D (blog|twitter) is hosting the T-SQL Tuesday blogging party this month, and he wants all of us to post about Automation. As Hemanth.D put it on his blog in his own words:

You could write about, what options you would consider when automating something? Where do you draw the line? What are our preferred tools for automation? T-SQL, PowerShell, VBScript or Batch files(?) or maybe just share something that you automated in the last couple of years.

You can read the invite he posted here.

History

As Hemanth.D mentioned in his invitation, this is not the first time this topic has come up for TSQLTuesday.  As it would happen, I also participated in the first go around with my contribution about sizing databases on limited information.  You can read that here.

This time around, I have a little bit of a different topic to approach.  I hadn’t considered it until after reading what Wayne Sheffield wrote about his efforts to verify backup files via script (automation).  You can read what Wayne wrote at this link.

Having read that, it seemed painfully obvious to me that I should go ahead and write about my efforts to automate backup restores.  After all, if you are verifying the backup files’ existence, you might also want to test the backups to ensure they are good.  Besides, we all need to test our backups anyway, right?

Present

I have a few different methods I have used over the years to automate restores.  In one iteration, the restore operations were hard-coded in a procedure that was scheduled to run on a nightly or weekly basis.  It was probably also hard-coded to a specific database.  That kind of setup is not super useful except for that one database.

With that in mind, I worked on several iterations to help create a script for myself that would automate the restores of any database, with any number of backup files, to a different server, and not have file paths/names/anything hard-coded.  Well – there ended up being one thing hard-coded but that can be changed easily enough.

I decided on a script that would read the backup path for the most recent backup file from the msdb database of the server where the database was backed up.  I do this via a linked server that can be created/destroyed in the proc or that can reside permanently (I prefer to leave it in place).  The script takes the file path of that backup file and restores it to the destination server.  All of this via T-SQL.

Now, a caveat with this solution is that the backup path works best if it is a UNC path.  Even if you are backing up the database to the local server, backing up to a UNC path means that the restore operation can just grab that path and not encounter errors due to drive mappings (e.g., backed up to the D drive, but D on the restore server is the CD-ROM drive).

What if you don’t want to restore the source database with the same name to the new server?  Well, that has also been considered and a parameter can be passed to the stored procedure to allow for a new database name.  What if the default file paths are different?  That consideration has been made too!  All of that said, more testing is always welcome.
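
To give a flavor of the approach, here is a heavily trimmed sketch rather than the full procedure.  The linked server name ([SourceServer]) and the database name are placeholders:

[codesyntax lang="tsql"]
/* Heavily trimmed sketch.  [SourceServer] stands in for the linked server
   pointing at the instance that took the backups. */
DECLARE  @DBName sysname = N'MyDatabase'
        ,@BackupFile nvarchar(260)
        ,@SQL nvarchar(max);

-- Most recent full backup path from the source server's msdb
SELECT TOP (1) @BackupFile = bmf.physical_device_name
FROM [SourceServer].msdb.dbo.backupset bs
    INNER JOIN [SourceServer].msdb.dbo.backupmediafamily bmf
        ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = @DBName
    AND bs.type = 'D'                 -- full backups only
ORDER BY bs.backup_finish_date DESC;

-- Works best when physical_device_name is a UNC path (see the caveat above)
SET @SQL = N'RESTORE DATABASE ' + QUOTENAME(@DBName)
         + N' FROM DISK = N''' + @BackupFile + N''' WITH REPLACE;';

PRINT @SQL;              -- sanity check the statement before running it
EXEC sp_executesql @SQL;
[/codesyntax]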

The script can be evaluated from here.

Next steps:

With the script, the next things to do would be to create SQL Agent jobs to run the script on a routine basis.  Test the script and verify it.

User of this script assumes all risk.

Where have all the Orphans gone?

Categories: Corner, News, Professional, SSC, SSSOLV
Comments: 2 Comments
Published on: October 3, 2012

Do your database users appear to be having an out of body experience?  Have they lost all ties to their login?

Does it feel like maybe what this baby rhino might be experiencing – just trying to find their place in your database?

Well, have we got a little snippet for you then.  This cure-all elixir … er script may be just what you need for those ghostly ghastly things called orphaned users.

Everybody has heard of them.  Everybody seems to want a quick fix for them.  But before you can fix your orphanage / database of these orphans – you have to know that they are there.

 

[Image: cute baby rhino]

Well, not really – looks more like a hairless rat at this age.  But that is ok – I like rhinos of all ages.  But that is really beside the point.  I also like quick little scripts that will help me get a better picture of any databases that I may have inherited or that I must work on (which I had not previously worked on).

This script is naturally another one of those tools for the admin toolbox.  It is cursor based – but that is ok.  I might update it later to use a set based method and that new foreachdb that I have blogged about previously.

The reason for the looping is to run this quickly against all databases on an instance and immediately know where the orphans might be hiding.  This script is merely to report – it does not fix the orphans.  That is for you to work through.  Also of interest is that this script is not designed to work on SQL 2000 (it’s getting kinda up there in age anyway).

So here it is…another tool for the toolbox to help you become a better, more efficient DBA.

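In sketch form, the approach looks like this — cursor through the databases and report any user whose SID has no matching server login (report only, as promised):

[codesyntax lang="tsql"]
/* Sketch: loop the databases and report users whose SID has no matching
   server login.  Nothing gets fixed here. */
DECLARE @DBName sysname, @SQL nvarchar(max);

DECLARE dbcursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE state_desc = N'ONLINE';

OPEN dbcursor;
FETCH NEXT FROM dbcursor INTO @DBName;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @SQL = N'SELECT ' + QUOTENAME(@DBName, '''') + N' AS DBName
            ,dp.name AS OrphanedUser
            ,dp.type_desc
        FROM ' + QUOTENAME(@DBName) + N'.sys.database_principals dp
            LEFT OUTER JOIN sys.server_principals sp
                ON dp.sid = sp.sid
        WHERE sp.sid IS NULL
            AND dp.type IN (''S'', ''U'', ''G'')  -- SQL, Windows user, Windows group
            AND dp.principal_id > 4;'             -- skip dbo, guest, sys, INFORMATION_SCHEMA
    EXEC sp_executesql @SQL;

    FETCH NEXT FROM dbcursor INTO @DBName;
END

CLOSE dbcursor;
DEALLOCATE dbcursor;
[/codesyntax]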

SQL Hide ‘n Seek

Categories: Corner, News, Professional, SSC
Comments: 1 Comment
Published on: July 11, 2012

When was the last time you had to find something within your database?  Did it feel more like a child’s game or an arduous task?

[Image: child's game or arduous task?]

What would you say if it could be more like child’s play (like the baby elephant is portraying) than a chore?

[Image: baby elephant — child’s play]

The simplest solution may be to use a tool from a respectable vendor.  One such tool that comes to mind is SQL Search from RedGate.  You can find out more about that tool here.

SQL Search does require that an application be installed on the machine on which you will be using the search feature.  The tool is fine and I will leave it up to you to use or not use it.

Alternative

Instead of installing an application onto your machine, you could always write your own script.  The information is readily available within SQL Server for you to find the objects you seek.

For example, if I wanted to find any tables that had a particular column, I could use the following.

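Along these lines — a sketch, with a placeholder column name:

[codesyntax lang="tsql"]
/* Sketch: find every table carrying a given column, along with the column's
   definition.  The column name is a placeholder - swap in your own. */
DECLARE @ColName sysname = N'AccountId';

SELECT  s.name AS SchemaName
       ,t.name AS TableName
       ,c.name AS ColumnName
       ,ty.name AS DataType
       ,c.max_length
       ,c.is_nullable
FROM sys.columns c
    INNER JOIN sys.tables t
        ON c.object_id = t.object_id
    INNER JOIN sys.schemas s
        ON t.schema_id = s.schema_id
    INNER JOIN sys.types ty
        ON c.user_type_id = ty.user_type_id
WHERE c.name = @ColName
ORDER BY s.name, t.name;
[/codesyntax]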

Granted, this query returns a bit more information than you require.  Personally, I like to see the additional information related to the columns as I am doing a search through a database.  I always find it interesting to find columns of the same name and intent but to have a different definition within the database.

And if you desire to find code within the database that contains a particular column name, then something like the following could be helpful.

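For instance, this sketch (same placeholder column):

[codesyntax lang="tsql"]
/* Sketch: find any module (proc, view, function, trigger) whose definition
   mentions the column name. */
DECLARE @ColName sysname = N'AccountId';

SELECT  OBJECT_SCHEMA_NAME(sm.object_id) AS SchemaName
       ,OBJECT_NAME(sm.object_id) AS ObjectName
       ,o.type_desc AS ObjectType
FROM sys.sql_modules sm
    INNER JOIN sys.objects o
        ON sm.object_id = o.object_id
WHERE sm.definition LIKE N'%' + @ColName + N'%';
[/codesyntax]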

Now, what if I want to search code and tables at the same time for a particular column name usage?  Well, I could take advantage of the following.

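Something like the following sketch (placeholder column again); it pairs the column search with the module search:

[codesyntax lang="tsql"]
/* Sketch: one row per table/column/module combination where the module's
   code references a column of the given name. */
DECLARE @ColName sysname = N'AccountId';

SELECT  s.name AS SchemaName
       ,t.name AS TableName
       ,c.name AS ColumnName
       ,OBJECT_NAME(sm.object_id) AS ModuleName
       ,o.type_desc AS ModuleType
FROM sys.columns c
    INNER JOIN sys.tables t
        ON c.object_id = t.object_id
    INNER JOIN sys.schemas s
        ON t.schema_id = s.schema_id
    INNER JOIN sys.sql_modules sm
        ON sm.definition LIKE N'%' + c.name + N'%'   -- non-equi join on the name
    INNER JOIN sys.objects o
        ON sm.object_id = o.object_id
WHERE c.name = @ColName
ORDER BY s.name, t.name, ModuleName;
[/codesyntax]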

Conclusion

Now, I have a script that will return a row for each time a column appears in a proc.  I can correlate which table and column matches to the proc and get the results I need quickly.

Now, you can take this and have a little fun with it.

A Trio of Functions

Categories: News, Professional, Scripts, SSC
Comments: 2 Comments
Published on: January 17, 2012

I found myself perusing an execution plan the other day.  I know, big surprise there.  This execution plan showed me some interesting things I had never really paid much attention to in the past.  When I started paying attention to these things, I found myself jumping down a rabbit hole.

It all started with a bit of curiosity to see if I could make an “admin” script perform a bit better.  The execution plans started showing some table valued functions that I knew I hadn’t included in the query.  Subsequently, I found myself wondering – what is that?

The items that made me curious were all table valued functions.  There were three different ones in this particular plan.  I started looking hither and thither to find these functions.  It didn’t take long to figure out that I could find them in the mssqlsystemresource database.  So I proceeded to make a copy of the database and attach that copy for further learning opportunities.

The three functions are:

SYSSESSIONS

FNGETSQL

SYSCONNECTIONS

Knowing the query and based on these names, I began looking in the appropriate DMOs to see what I could find.  Here is how you can pull the script for each of those DMOs.

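This is a sketch of the retrieval, assuming — as the names and the plan suggested to me — that these internal TVFs back sys.dm_exec_sessions, sys.dm_exec_connections, and sys.dm_exec_sql_text:

[codesyntax lang="tsql"]
/* Pull the definitions for yourself.  The DMO mapping is my reading of the
   internal TVF names in the plan. */
SELECT OBJECT_DEFINITION(OBJECT_ID(N'sys.dm_exec_sessions'))    AS SessionsDef;    -- SYSSESSIONS
SELECT OBJECT_DEFINITION(OBJECT_ID(N'sys.dm_exec_connections')) AS ConnectionsDef; -- SYSCONNECTIONS
SELECT OBJECT_DEFINITION(OBJECT_ID(N'sys.dm_exec_sql_text'))    AS GetSqlDef;      -- FNGETSQL
[/codesyntax]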

Cool.  I can now see the internals of each of the DMOs – sort of.  You see, there is an OPENROWSET call in each of these objects.  Each call uses an undocumented feature called TABLE.  This is an internal command used by the engine and you won’t find much on it (mostly people asking what it is and Microsoft saying they won’t tell).

Here is the fun part.  If you try to run that code outside of querying the DMO, you will receive error messages.  If you try to create a new view utilizing the Openrowset, it will fail.  It is reserved for internal usage.  With that said, just continue to use the DMO and you will be fine.  Personally, I was curious to find out how it worked so I tried a bit to find it.

So there you have it.  If you are curious what is the internal makings of these DMOs, you can script them from the resource database.  Alternatively, you could also run sp_helptext.  I like to check these things from the resource database.  It feels more like an adventure.  Have fun with it and see what you will learn.

Missing Indexes

Comments: 11 Comments
Published on: January 12, 2012

SQL Server has means built into it to track possible missing indexes.  This used to be found through the use of the Index Tuning Wizard.  The process has improved over time (you can sort of see that from my April Fools post).

As luck would have it, I was recently asked to help fix a query that was somewhat related to the whole process.  You see, since SQL Server 2005, there are DMOs that help track metadata related to column and index usage.  And if a query without a good matching index is repeated often enough, the engine may decide that a new index is needed.  This potential index information is recorded and becomes visible via the DMOs.

The query that I was asked to help fix was a dynamic query within a cursor that read information from the DMOs in order to generate some missing index information.  That particular query was failing for a couple of reasons, but on the same token it gave me an idea to modify and adapt the query to something more in line with what I might use.  After all, the queries that I used were in need of updating and this gets me started in that direction.

First, a little on why the query was failing.  A common problem with dynamic queries is the placement of quotes and having enough quotes in all required locations.  When you start nesting more levels into a dynamic query, the more confusing the quotes can get.  When running into something like this, I like to print the statement that I am trying to build dynamically.  If it doesn’t look right, then adjust the quotes until it looks right.

The second reason it was failing was a simple oversight.  Whether building dynamic queries or just using variables in your code, make sure you use properly sized variables.  In this case, the dynamic query variable was substantially inadequate.  The use of a print statement also helps to catch these types of errors rather quickly.
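
As a contrived illustration of both habits — a max-sized variable and a print before execution:

[codesyntax lang="tsql"]
/* Contrived illustration: size the variable generously and print before
   executing, so quoting problems show up immediately. */
DECLARE @SQL nvarchar(max);

SET @SQL = N'SELECT name FROM ' + QUOTENAME(DB_NAME()) + N'.sys.tables;';

PRINT @SQL;              -- eyeball the generated statement first
EXEC sp_executesql @SQL;
[/codesyntax]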

There were also a few things that would cause me to not use the original query in any environment.  The first problem is that the script contains a column which is the create statement for each proposed index.  In this create statement, all indexes were given the same name.  That would be a bit of a problem.

The next issue is my concern with creating indexes without ensuring that each index will provide greater benefit than cost.  Better stated: creating these indexes just because the script spewed them out is no better than creating every index proposed by the Database Engine Tuning Advisor.  For this, I added a cautionary statement next to every index create statement.

So with these tweaks, as well as other less significant tweaks, here is the query.

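In sketch form, condensed to the current database (the version described above loops all databases through a cursor), with the unique names and the cautionary note baked into each create statement:

[codesyntax lang="tsql"]
/* Condensed sketch: current database only.  index_handle keeps the
   generated index names unique. */
SELECT  mid.statement AS TableName
       ,CAST(migs.avg_user_impact * migs.avg_total_user_cost
             * (migs.user_seeks + migs.user_scans) AS bigint) AS Score
       ,N'/* CAUTION: verify benefit vs. cost before creating! */ '
        + N'CREATE NONCLUSTERED INDEX '
        + QUOTENAME(N'IX_Missing_' + CAST(mid.index_handle AS nvarchar(10)))
        + N' ON ' + mid.statement
        + N' (' + ISNULL(mid.equality_columns, N'')
        + CASE WHEN mid.equality_columns IS NOT NULL
                    AND mid.inequality_columns IS NOT NULL
               THEN N', ' ELSE N'' END
        + ISNULL(mid.inequality_columns, N'') + N')'
        + ISNULL(N' INCLUDE (' + mid.included_columns + N')', N'') AS CreateStatement
FROM sys.dm_db_missing_index_details mid
    INNER JOIN sys.dm_db_missing_index_groups mig
        ON mid.index_handle = mig.index_handle
    INNER JOIN sys.dm_db_missing_index_group_stats migs
        ON mig.index_group_handle = migs.group_handle
WHERE mid.database_id = DB_ID()
ORDER BY Score DESC;
[/codesyntax]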

As I post this message, I am, as I tend to do, looking for ways to improve the query and make it better.  This script should only be used with caution.  It provides insight into potential missing indexes in each database.  A score is assigned to each potential index, and it is with the highest-scoring indexes that I typically begin analysis to improve performance.  I typically start performance tuning from a query and its execution plan; there are times when an alternative starting point is necessary, and this script is a tool for those times.  Please heed the warning that these indexes should be created with extreme caution.

Database Data and Log Size Info

Categories: News, Professional, Scripts, SSC
Comments: 6 Comments
Published on: December 5, 2011

Have you ever wanted or needed to get the data and log file size for all of the databases in your server?

I have seen scripts that would get this information in the past.  All of the scripts seemed to use either a cursor or the proc from Microsoft called sp_MSforeachdb.  Having seen some of these scripts, I wondered if there might be a different way of doing it (that was also different than you see in my TableSpace Scripts).

After putting some thought into it, I decided on a plan of attack and went to work on building a query that would do the trick.  But before I continue, I will advise that running a script like this on large databases may take a while to execute.

Keeping these things in mind (potential slowness when run on large db and wanting to try something different), I came up with this method (tested on Instances with small databases as well as Instances with 2TB Databases).

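A condensed sketch of the counter-pivot portion; the full method layers in the file-level stats, the NULL handling, and the large-database branch discussed below:

[codesyntax lang="tsql"]
/* Condensed sketch of the PIVOT over the perfmon counters.  The full script
   adds used-space math, NULL handling, and the large-database branch. */
SELECT  p.DBName
       ,CAST(p.[Data File(s) Size (KB)] AS float) / 1024 AS DataFileSizeMB
       ,CAST(p.[Log File(s) Size (KB)] AS float) / 1024 AS LogFileSizeMB
       ,CAST(p.[Log File(s) Used Size (KB)] AS float) / 1024 AS LogUsedMB
FROM (
    SELECT  RTRIM(instance_name) AS DBName
           ,RTRIM(counter_name) AS counter_name
           ,cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE N'%:Databases%'      -- works for named instances too
        AND counter_name IN ( N'Data File(s) Size (KB)'
                             ,N'Log File(s) Size (KB)'
                             ,N'Log File(s) Used Size (KB)')
        AND instance_name <> N'_Total'
) src
PIVOT ( MAX(cntr_value)
        FOR counter_name IN ( [Data File(s) Size (KB)]
                             ,[Log File(s) Size (KB)]
                             ,[Log File(s) Used Size (KB)]) ) p;
[/codesyntax]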

You will see that I did not entirely eliminate the looping mechanism.  Truth be told, it is so much faster on servers with Large Databases.

Also take note of the DMV that is in use in this query.  I am taking advantage of the performance monitor stats that are exposed through the DMV sys.dm_os_performance_counters.  One caveat to this method, is that this DMV shows us the size of the resource database as well.  I think it is fine to report that information back – but it won’t change much over time.  It is for that purpose that I use the Left Join in the query.

The other trick that I utilized is to PIVOT that performance monitor data.  I think this works better than writing a bunch of sub-queries to generate the same sort of data layout.

You have probably also noted that I have chosen 250GB as the tipping point in this query.  There is no particular reason for that size – just a large enough database size to make the point.  For some, the tipping point may be a much smaller database size.  Feel free to change this value to suit your needs.

Some other caveats.  The perfmon cntr_value data is presented as an Int.  For more accurate math operations, I chose to cast many of these to a Float.

I also ran into a case where the perfmon data was reporting 0 as the Data File Size for one of my databases.  I had to throw a case statement in for the Data File Used Percent in order to avoid a Divide by 0 error.

I also wrapped the DataUsedMB in an ISNULL.  This was due to the model and mssqlsystemresource databases not having data in the sys.dm_db_index_physical_stats function.  I could have left those as NULL, but wanted to show something for them both.

Check it out and Enjoy!
