Default Sessions – Back to Basics

Comments: No Comments
Published on: September 1, 2016

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Default Sessions

SQL Server is full of some pretty cool stuff. Some of it is rather basic. Some of it is a bit complex. Whether basic or complex, there is a lot that seems to get overlooked because it just might be hidden from plain sight or maybe it is overlooked just because it is already done for you.

Today, I am going to talk about some of the stuff that is already done for you. It is pre-baked and ready to roll. Since it is pre-baked, it may be a good idea to get familiar with it and understand what kinds of data it may offer to you when troubleshooting.

It is still somewhat surprising to me to discover how many people have never heard of the default trace in SQL Server. Slightly less surprising is how many people have never heard of the default Extended Events system_health session. What if I told you there were a bunch more default sessions than the default trace or system_health? I wrote about some of those default sessions previously.

With the evolution of SQL Server, there is an evolution in Extended Events. New features in SQL Server also mean there may be some additional default sessions. I want to touch on some of these other default sessions in brief.


Phone Home

First, let's start with an easy-to-find default session – telemetry_xevents. This session is a visible session and behaves like a user-defined session, with the caveat that it is a session installed with SQL Server 2016 and it is a system default session. I have another post queued up to discuss this session in a bit more detail, so I will keep it brief here.

This session is there to basically capture specific data and phone it home to Microsoft. You can presume that based on the definition of the word telemetry

“is an automated communications process by which measurements and other data are collected at remote or inaccessible points and transmitted to receiving equipment for monitoring.”

Before you get your tin-foil hat in a crumple over sending data back to the mother-ship, understand that this data is not sensitive data. The type of data being gathered is usage metrics. Think of it in terms of gathering data about how many times a specific feature was used or how many times a group of errors occurred within the environment. This is in an effort to help improve the product.

While the session data is designed to be sent back to Microsoft, it could also be of use to the Enterprising DBA who is looking to get a better grasp of the environment or to troubleshoot various issues.

It’s a Stretch

With the release of SQL Server 2016 there is another default session that gets installed. This session is probably the most difficult to find because it is installed as a private, hidden session. In addition, it is not present by default with SQL 2016 RTM, though it was present by default with various CTP versions. The session is specific to the new stretch feature and is called rdaxesession.

Breaking down this particular session reveals pretty easily what feature the session is designed to support. The name rdaxesession breaks down to the following: rda = remote data archive, and then the rest is self-explanatory. If I investigate the metadata for the session, I will discover that the following events are traced in the session:
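If you want to poke at that metadata yourself, a quick query along these lines will list the events tied to the session (a minimal sketch – and, as noted below, the session only shows up on builds and configurations where it actually exists):

SELECT es.name AS SessionName
		, ese.package AS EventPackage
		, ese.name AS EventName
	FROM sys.server_event_sessions es
		INNER JOIN sys.server_event_session_events ese
			ON es.event_session_id = ese.event_session_id
	WHERE es.name = 'rdaxesession';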


Well, that certainly clears it up! This session is all about the stretch feature. If you are on RTM and don’t use the stretch feature, then you won’t see this session even if you do query the metadata.

Don’t be a HADR

The last of the default sessions I want to cover can be combined into a single category about HA and DR – loosely at least. One of the sessions deals with backups and the other deals with clusters.

When dealing with a cluster, it is important to know about the hidden trace file that records events related to the cluster. The session target data can be found in “%PROGRAMFILES%\Microsoft SQL Server\MSSQLxx.MSSQLSERVER\MSSQL\Log” by default and cannot be seen within SQL Server metadata at all (not that I have been able to find). For a few details about how this trace works, I recommend reading this article. This session can be called the SQLDiag session.

The session related to backups is actually broken down into two log files and is similar in nature to the SQLDiag session. Like the SQLDiag session, it is hidden, it is found in the same directory, it is read in the same fashion, and it is equally invisible from within SQL Server. The backup session deals specifically with managed backup.
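Since both of these hidden sessions write .xel files to the Log directory, the data can be read with the Extended Events file reader function. Here is a minimal sketch; the path and file-name pattern are assumptions on my part, so adjust them to match your instance:

SELECT CAST(xf.event_data AS XML) AS EventData
	FROM sys.fn_xe_file_target_read_file(
			'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Log\*_SQLDIAG_*.xel'
			, NULL, NULL, NULL) xf;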

What is managed backup? Well, it is a type of backup performed to Azure that relies upon the “smart_admin” procedures, functions and processes.

If you don’t perform managed backups or you don’t have a cluster configured, you will not need either of these sessions (and potentially not see the smart_admin files depending on SQL Server version). If you have either of these features, you will definitely want to ensure you become acquainted with them and how to retrieve the data.


There are plenty of mechanisms in place to help troubleshoot some of the more complex features of SQL Server. Getting to know the tools will prepare you for that moment when problems arise and you are under the gun.

I did not dive deep into any of these sessions, holding that for later articles. I wanted to keep this article at an introductory level. If you are interested in the more detailed articles, please stay tuned for those upcoming articles.

If you are interested in reading more about Extended Events, I recommend reading my series which can be found here.

SQL Server CTE – Back to Basics

Categories: News, Professional, SSC
Comments: 2 Comments
Published on: July 14, 2016

Quick Flashback

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s just call this first post in the challenge to be my official acceptance.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

SQL Server CTE

Of all the fundamental concepts within SQL Server, this one drives me a bit batty from time to time. Think about that statement for just a moment. I am not going to go in depth about the syntax of a CTE.

I am not really interested in breaking down the semantics and uses of the CTE. Rather, I want to break down something commonly associated to the CTE that is far more basic. I would classify this as more of a myth and misconception than anything else.

The topic du jour happens to be around punctuation. More specifically, do we use terminators or do we use beginninators? Yes that is a made-up term, but you get the drift. Where exactly should the punctuation be placed? Better yet, is it mandatory to include that punctuation every single time?

Those are the items I wish to discuss and hopefully demystify with a few examples (if that many are even necessary).


Punctuation shouldn't be too big of a deal, right? There is so little punctuation within T-SQL that it really should be very basic and easy to understand. That said, there is this perpetual myth that a semi-colon is required at the beginning of a CTE. Furthermore, and somewhat more accurate, is the claim that a semi-colon is a requirement in order for a CTE to work. Let's take a look at both of those scenarios.

Case the First

Let’s just take a look at the most basic of scripts. A simple select statement without a CTE.
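Something along these lines (a sketch on my part – the table is purely an illustrative choice):

USE AdventureWorks2014
GO

SELECT sp.BusinessEntityID
		, sp.SalesYTD
	FROM Sales.SalesPerson sp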

Straight away, we can see that I am accessing the AdventureWorks2014 database. This happens to be on my 2014 instance as well. I have not terminated the select statement with a semi-colon. It really is somewhat of a preference here since the statement will work with or without the semi-colon just the same (and despite the previous notion that terminating all statements with a semi-colon would become mandatory, don't believe it until it actually happens).

Here is the SQL Server 2016 version:
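Essentially the same statement, simply pointed at the 2016 instance (the sample database name here is an assumption):

USE AdventureWorks2016
GO

SELECT sp.BusinessEntityID
		, sp.SalesYTD
	FROM Sales.SalesPerson sp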

Now let’s go with a basic CTE example written the same way that we commonly see CTEs written on the internet.
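A sketch of that common internet pattern, using the same illustrative table:

USE AdventureWorks2014
GO

;WITH SalesCTE AS (SELECT sp.BusinessEntityID, sp.SalesYTD FROM Sales.SalesPerson sp)
SELECT s.BusinessEntityID, s.SalesYTD FROM SalesCTE s;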

Notice that I am using the semi-colon as a statement beginninator and terminator all in the same statement. The statement runs perfectly fine – that much is true. The usual reasoning for the leading semi-colon is that the CTE has to be preceded by a semi-colon. Let me see how true that is with a minor tweak!
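The tweak is simply to drop the leading semi-colon:

USE AdventureWorks2014
GO

WITH SalesCTE AS (SELECT sp.BusinessEntityID, sp.SalesYTD FROM Sales.SalesPerson sp)
SELECT s.BusinessEntityID, s.SalesYTD FROM SalesCTE s;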

What do you expect to happen from the execution of this statement? Well, as it stands the statement will execute without error just the same as it did in the previous example.


What? That worked? Yes. Yes it did. This is quite frankly due to the reality that a semi-colon is not required to precede a CTE. This statement was the only statement in the batch.

Case the Second

Based on those results, that must mean that a semi-colon is absolutely required when there is a statement preceding the CTE. Therefore, what is really required is the statement that immediately precedes the CTE must be terminated by a semi-colon. Not so fast there. Let’s test this one too!

Let’s use the following basic example.
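Something like this, with a statement ahead of the CTE and every statement dutifully terminated (again a sketch built on my illustrative table):

USE AdventureWorks2014;
GO

SELECT sp.BusinessEntityID, sp.SalesYTD
	FROM Sales.SalesPerson sp;
GO

WITH SalesCTE AS (SELECT sp.BusinessEntityID, sp.SalesYTD FROM Sales.SalesPerson sp)
SELECT s.BusinessEntityID, s.SalesYTD FROM SalesCTE s;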

That query works perfectly fine again. That must be due to the preceding semi-colon, right? Hold on to that thought. How about a rewrite as follows:
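In this rewrite the only change is that the batch separator itself gets “terminated” too:

USE AdventureWorks2014;
GO

SELECT sp.BusinessEntityID, sp.SalesYTD
	FROM Sales.SalesPerson sp;
GO;	-- the batch separator now carries a semi-colon

WITH SalesCTE AS (SELECT sp.BusinessEntityID, sp.SalesYTD FROM Sales.SalesPerson sp)
SELECT s.BusinessEntityID, s.SalesYTD FROM SalesCTE s;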

Here we tweaked the query to represent the notion that all previous statements should be terminated with a semi-colon. What do you think will happen? This will result in an error. GO is a batch terminator and therefore does not require the semi-colon. Surely this reinforces the theory that the semi-colon should precede the CTE though, right? Let's rewrite that query again.

I removed the semi-colon and now the code segment will execute without error. Wait there is more!
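Back to a bare GO, with the rest of the script unchanged:

USE AdventureWorks2014;
GO

SELECT sp.BusinessEntityID, sp.SalesYTD
	FROM Sales.SalesPerson sp;
GO	-- no semi-colon on the batch separator this time

WITH SalesCTE AS (SELECT sp.BusinessEntityID, sp.SalesYTD FROM Sales.SalesPerson sp)
SELECT s.BusinessEntityID, s.SalesYTD FROM SalesCTE s;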

I want to continue tweaking this query. Let’s try multiple selects along with this CTE.
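For instance, a select that is not terminated at all, followed in the next batch by the CTE:

USE AdventureWorks2014;
GO

SELECT sp.BusinessEntityID, sp.SalesYTD
	FROM Sales.SalesPerson sp	-- no terminator on this select
GO

WITH SalesCTE AS (SELECT sp.BusinessEntityID, sp.SalesYTD FROM Sales.SalesPerson sp)
SELECT s.BusinessEntityID, s.SalesYTD FROM SalesCTE s;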

Now I have a query that has a preceding select that is not terminated with a semi-colon. I have a CTE that is created right after that select and then I select from that CTE. What could ever possibly happen with this particular query?

The short of that story is that the query will execute without error and I will end up with two result sets.

Wait! Hold on two seconds there! Surely the semi-colon is an absolute requirement because we see it stated everywhere that it is mandatory.

The reality is that the semi-colon requirement is not really entirely accurate. If the CTE happens to be in the same batch, then the previous statement in the batch must be terminated by the semi-colon. Take the following change for example:
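Here the preceding select lives in the same batch as the CTE, wrapped in a BEGIN…END block that is terminated with a semi-colon:

USE AdventureWorks2014;
GO

BEGIN
	SELECT sp.BusinessEntityID, sp.SalesYTD
		FROM Sales.SalesPerson sp
END;	-- this terminator is what keeps the CTE happy

WITH SalesCTE AS (SELECT sp.BusinessEntityID, sp.SalesYTD FROM Sales.SalesPerson sp)
SELECT s.BusinessEntityID, s.SalesYTD FROM SalesCTE s;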

If I remove the semi-colon from after the “END” statement, then my query will terminate in an error. If I remove the BEGIN and END from around the first select statement, then the query will fail with an error. The error that I receive is part of the reason for the huge misconception about when a semi-colon is required:

Msg 319, Level 15, State 1, Line 38
Incorrect syntax near the keyword ‘with’. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.

As I have already shown though, this is not accurate. Start thinking on a bigger scale than just semi-colons. Personally, I would prefer to see the CTEs used in examples on the internet encapsulated with a BEGIN…END instead of relying on a preceding semi-colon (beginninator).

Has this behavior changed with SQL Server 2016? That is easily shown by the following example:
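Running essentially the same batch-separated test against a 2016 instance (sample database name assumed) produces the same outcome:

USE AdventureWorks2016;
GO

SELECT sp.BusinessEntityID, sp.SalesYTD
	FROM Sales.SalesPerson sp
GO

WITH SalesCTE AS (SELECT sp.BusinessEntityID, sp.SalesYTD FROM Sales.SalesPerson sp)
SELECT s.BusinessEntityID, s.SalesYTD FROM SalesCTE s;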


These examples clearly demonstrate the inaccuracy of the myth that a semi-colon is required to precede a CTE. Despite the statement made in the error message, it is not the full story. A semi-colon is necessary to terminate the preceding statement when that statement is in the same batch as the CTE, but it is not required every time. If there is a batch separation between the preceding statement and the CTE, then no semi-colon is required. And a semi-colon is never required to precede a CTE as such – when it does appear, it is there to terminate the previous statement within the same batch.

Please don’t be led astray by some notions on the web that dictate a CTE must be preceded by a semi-colon.

SQL Server Select – Back to Basics

Comments: No Comments
Published on: April 1, 2016

Prelude in SQL Minor

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s just call this first post in the challenge to be my official acceptance.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics.

SQL Server Select

Of all the fundamental concepts within SQL Server, nothing is potentially more basic and fundamental than the often overlooked Select statement. Think about that statement for just a moment.

How often is the select statement overlooked and under-analyzed? Yet, this tiny little word is a rather large part of the transactions that occur within each instance of SQL Server on a routine basis.

Today, I want to break down the SQL Select statement and show the components of the statement along with the logical processing order of those components.

Let’s put on our thinking caps for a moment and think of all the possible components of this super fundamental piece of the TSQL language – the Select statement.

Anatomy of the Select

The first important piece of information with the Select is to understand what it does. What is a Select statement? What is the purpose of a Select? Really, the purpose of this statement should make sense based on the word alone – to select. In other words, to get or retrieve something that is of interest to you at that moment. In our case, that means to retrieve some piece of data from the database to meet our needs. Very simple, right? You issue a SELECT statement to get something out of the database.

Thinking through the Select statement, certain common elements probably stick out pretty easily. These elements might include the following:

  • Select
  • From
  • Join
  • Where

Then with a slightly more complex query, one might see the following components associated with a Select statement.

  • Top
  • Distinct
  • Group By
  • Order By

Then stepping it up to another level again, one might see the following:

  • Having
  • For
  • Into
  • With (Cube or Rollup)

Soak that in for a moment. Suddenly this fundamental piece of TSQL is starting to look a little more like black voodoo magic. It can get rather complex with this fundamental statement. Now soak in what the purpose of the SELECT is, as was stated just a bit ago. That purpose is to retrieve data that you need. One cannot always retrieve the desired data without some options to go with the SELECTion criteria. Suddenly, this complexity renders itself more as raw DBA power.

Knowing the various aspects of the Select, and not diving too far into what each does, you may be wondering how these pieces fit together into a query. Let’s take a quick look at a few different examples illustrating how these puzzle pieces fit together into a SELECT statement.

Here is an example showing a SELECT that utilizes the FROM, WHERE, GROUP BY, and ORDER BY clauses. When writing a SELECT statement, we write the clauses in the order shown. Using these clauses in a different location doesn't read very well to humans and will throw a syntax error.
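A sketch of such a query (the tables and filter values are simply illustrative):

USE AdventureWorks2014;
GO

SELECT soh.SalesPersonID
		, SUM(soh.TotalDue) AS TotalSales
	FROM Sales.SalesOrderHeader soh
	WHERE soh.OrderDate >= '20130101'
	GROUP BY soh.SalesPersonID
	ORDER BY TotalSales DESC;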

In this next example, I have used a few different clauses:
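Again, a sketch built on illustrative tables:

SELECT DISTINCT TOP (10) sp.BusinessEntityID
		, e.JobTitle
		, sp.SalesYTD
	FROM Sales.SalesPerson sp
		INNER JOIN HumanResources.Employee e
			ON sp.BusinessEntityID = e.BusinessEntityID
	ORDER BY sp.SalesYTD DESC;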

The real differences here being that I have used the DISTINCT, TOP and JOIN clauses. Again, trying to write this query with the TOP preceding the DISTINCT, or maybe with the JOIN preceding the FROM, doesn't really make much sense when reading it. Nor would it work if we tried to execute the query due to syntax errors.

This next example uses another option to output the results into an XML document using the FOR clause:
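For instance, tacking a FOR XML clause onto the previous sketch:

SELECT TOP (10) sp.BusinessEntityID
		, e.JobTitle
		, sp.SalesYTD
	FROM Sales.SalesPerson sp
		INNER JOIN HumanResources.Employee e
			ON sp.BusinessEntityID = e.BusinessEntityID
	ORDER BY sp.SalesYTD DESC
	FOR XML PATH('SalesPerson'), ROOT('TopSellers');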

And in this last example, I take the previous example to output the results into a temp table.
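Roughly like so, swapping the XML output for an INTO clause that lands the rows in a temp table:

SELECT TOP (10) sp.BusinessEntityID
		, e.JobTitle
		, sp.SalesYTD
	INTO #TopSellers
	FROM Sales.SalesPerson sp
		INNER JOIN HumanResources.Employee e
			ON sp.BusinessEntityID = e.BusinessEntityID
	ORDER BY sp.SalesYTD DESC;

SELECT * FROM #TopSellers;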

These are all examples of the possibilities that can be used with the SELECT statement to retrieve data from the database in order to meet our needs. Each one illustrates different variations and some level of complexities between each of the examples.

Now the question is, did you know that SQL Server does not process these statements in the same syntactic sequence or English friendly forms that we write these SELECT statements? You may be surprised to learn that there is a different processing order to these pieces of the SELECT. This processing order is called the LOGICAL Processing order.

Let’s take one of the queries already shown and compare.

The query as we may write it will look like this:


The query as SQL Server will process it will look like this:
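In rough terms, the documented logical processing order lines up like this:

-- Logical processing order of the SELECT clauses (roughly, per the documentation):
--  1. FROM             5. GROUP BY            9. DISTINCT
--  2. ON               6. WITH CUBE/ROLLUP   10. ORDER BY
--  3. JOIN             7. HAVING             11. TOP
--  4. WHERE            8. SELECT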


Or in other words, we may see it as shown in this video:

That is significantly different than we might think, but it makes sense if you think about it long enough.

How does one know that SQL Server processes these steps in this order? Well, besides the documentation in BOL that supports this, one can dive into the internals and generate a tree that will show this in action. For instance, let’s use the same query as just shown and see what the processing tree would look like.
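One way to peek at that tree is with a couple of undocumented trace flags (a sketch only – these flags are unsupported, so treat them as a curiosity for a test instance rather than a production tool):

DBCC TRACEON (3604);	-- send trace output to the client
GO

SELECT DISTINCT TOP (10) sp.BusinessEntityID
		, e.JobTitle
		, sp.SalesYTD
	FROM Sales.SalesPerson sp
		INNER JOIN HumanResources.Employee e
			ON sp.BusinessEntityID = e.BusinessEntityID
	ORDER BY sp.SalesYTD DESC
	OPTION (RECOMPILE, QUERYTRACEON 8605);	-- undocumented flag that outputs the converted tree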


Besides looking very colorful and almost as if it were in Greek, this processing tree is somewhat revealing in how SQL Server breaks down that SELECT statement we have been using. One more pic, and then a quick explanation.


This last image was generated using the SHOWPLAN_TEXT setting. I want to show this because it helps to reinforce a concept from the processing tree. In this last image, note that the OUTER REFERENCE is to the SalesPerson table. This also means (without stating it) that the inner reference is to the Employee table. I have added the INNER and OUTER table references to the image of the processing tree. Why is this important? Well, when trying to interpret the tree it is important to understand that one starts from the innermost node and then reads from bottom to top, right to left (as you ascend the nodes).

In this case, we will see that the Employee table is first referenced and processed, then the SalesPerson table, then the ON clause is evaluated. In the tree, I highlighted the ON that pertains to the join in bright green. The vertical dotted lines on the left are to help visually keep the nodes aligned as you step in/out from one level to the next. As we climb the tree to the top left corner, we will see that the last piece of the select to be processed in this case is the TOP statement. We can also see that the top is dependent on the ORDER BY (highlighted in RED) in order to get the correct desired rows.


This article jumps through the SELECT statement very briskly. The SELECT statement is a powerful tool within SQL Server. Gaining an understanding of how the SELECT statement is processed can help you write an appropriate SELECT and even pseudo-code it: the engine first determines the source of the data and then breaks down the additional requirements that have been sent with the SELECT. The more options that are thrown in with the SELECT statement, the larger and more complex the processing tree becomes for that statement.

Used well, a SELECT will become your ally and a handy tool in the toolbox. This has been an introduction to the SELECT and how it is processed behind the scenes. If you would like to see more information about the different clauses related to the select statement, check out the BOL entry here.

As an aside, the tree that was displayed is called the “converted tree.” Here is a listing of the other possible trees.


Compressing Outcomes

Categories: News, Professional, SSC
Comments: No Comments
Published on: August 24, 2015

You find yourself cruising along, just doing your thing as a database administrator. Then the CIO plops down into a spare chair in your cubicle to talk about the weather.

Well, not entirely true there. The CIO dropped in because the alerts for disk space have been blowing up everybody’s inbox. He informs you that the situation isn’t going to change anytime soon because new shelves for the SAN just can’t be ordered at the moment. In the end, you are given a directive – just without so many words. Fix the problem, and do it without more disk space.

Fantastic, you think to yourself. What to do now? Then you get the Gru light-bulb and it is on!

Enter compression. Ok, compression isn't something entirely new. It has been a feature shipped with SQL Server since 2008 Enterprise edition and can be just the thing to save the day (and everybody's inbox before they start ignoring all alerts).

The decision has been made and the biggest tables in the database are now compressed using page compression. But you are concerned that it didn’t work because the space savings isn’t necessarily what you had hoped it would be. Lucky for you, you are running SQL Server 2012 and have the ability to start diving into the internals to confirm whether your worry is fact or fiction.

Using this handy dandy query that you are about to copy and paste from the internet, you can quickly investigate to see if there are pages in the tables that were unaffected by your attempts to compress the table.

Thanks to an undocumented function called dm_db_database_page_allocations, we can investigate quickly whether there are pages that failed to compress. We can also see which compression state they are in – if they are compressed at all.
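A sketch of such a query against a single table (the function is undocumented, so the column names and behavior can change between builds):

USE AdventureWorks2014;
GO

SELECT OBJECT_NAME(dpa.object_id) AS TableName
		, dpa.index_id
		, dpa.page_type_desc
		, dpa.is_page_compressed
		, COUNT(*) AS PageCnt
	FROM sys.dm_db_database_page_allocations(DB_ID(), OBJECT_ID('Sales.SalesOrderDetail'), NULL, NULL, 'DETAILED') dpa
	WHERE dpa.is_allocated = 1
		AND dpa.page_type_desc = 'DATA_PAGE'
	GROUP BY dpa.object_id, dpa.index_id, dpa.page_type_desc, dpa.is_page_compressed;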

Adding a script like this to your repository can be an easy aid in the struggle to ensure your expected results either match or don’t match. This would save a bit of time and allow you to move on to bigger and badder things – like 2048.

In addition to looking at the compression status for each page, I have thrown in a little extra. Call it the “considerations” column. Based on activity hitting the table or index, you may want to consider a different level of compression. This additional data on the end of the output will help start you in that direction.

Tables, Columns and Defaults Oh My!

Comments: No Comments
Published on: August 19, 2015

Picture this if you could. You inherit a new database either through the change of a job, or finding a black box database server, or maybe due to the acquisition of a new application.

No matter the cause, you are now responsible for maintaining that database and you also really need to become acquainted with it. Some of the acquaintance that is needed is learning how things are related and figuring out what data goes where. Not just what data, but what type of data.

I know that one of the things I am frequently asked is to figure out what data type belongs with which column. Furthermore, does this same column exist in more than one table? It's nice when things are named consistently and you are able to tie things together by column name.

There are also times when that consistently named column needs to have its data type confirmed across each of the implementations. I know I have run into cases where the column was defined differently in different tables.

So, what to do to tie this all together and get that intimate acquaintance with your database?

Luckily, I have a script that will help you figure out some of these connections.

First, the script.
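What follows is a minimal sketch along those lines rather than the full script – it takes a column name via @ColName and reports every user table carrying that column along with its declared data type:

DECLARE @ColName SYSNAME = 'AccountNumber';

SELECT OBJECT_SCHEMA_NAME(c.object_id) AS SchemaName
		, OBJECT_NAME(c.object_id) AS TableName
		, c.name AS ColumnName
		, t.name AS DataType
		, c.max_length
		, c.precision
		, c.scale
	FROM sys.columns c
		INNER JOIN sys.types t
			ON c.user_type_id = t.user_type_id
	WHERE c.name = @ColName
		AND OBJECTPROPERTY(c.object_id, 'IsUserTable') = 1
	ORDER BY SchemaName, TableName;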

Now, let’s give it a trial run. See if you can spot the difference by running that against the AdventureWorks database using the value “AccountNumber” for the @ColName parameter.



Maybe there is a legitimate reason for that column to have a different data type specification than the other two. Maybe there isn't. One thing is for certain: you will be able to walk away with some questions to ask and hopefully gain a better understanding!

Table Size Script Update for 2015

Categories: News, Professional, Scripts, SSC
Comments: 2 Comments
Published on: August 3, 2015

As was recently promised, I have an update to an old tried and true script for calculating the size of an object.

That promise was doled out in the previous article when I discussed the internal table types. You can read the article here. Information from that article was used in this script – so it could be of use prior to taking a gander at this new revision.

And of course, previous releases can be found through the link chain here.

The script was in need of an update the last time around due to the internal table types that had been updated in SQL Server. Those internal tables haven't really changed in the latest release (at least as far as calculating space is concerned). But, for one thing, I didn't like needing to update multiple places in the script. For another, I did not like that it was difficult to know what exactly the internal table numbers actually represented. So I added that documentation to this script.

In addition to that little change, I also wanted to figure out if a table might have been spread across multiple filegroups. Due to this, I added in a delimited list of filegroups.

And the last minor update is that the script now has an option to exclude MS Shipped objects, or to include them, or to look just at the MS Shipped objects.

All of this is included in the effort of trying to get a more complete view of the size of the objects within a database and to be able to do so without needing to loop through the objects.

Note: A quick update to move the index creation outside of the table creation instead of inline since that is a feature not supported in older versions of SQL Server.

Effects of sp_rename on Stored Procedures

There comes a time when mistakes are made.  Sometimes those mistakes can be as annoying as a spelling mistake during the creation of a stored procedure.  When a mistake such as that happens, we are given a few choices.  One could either rename the stored procedure, drop and recreate the stored procedure or simply leave the mistake alone.

When choosing to rename the stored procedure, one may quickly reach for the stored procedure that can be readily used for renaming various objects.  That procedure was provided by Microsoft after all and is named sp_rename.  Reaching for that tool, however, might be a mistake.  Here is what is documented about the use of sp_rename to rename a stored procedure.  That documentation can be read at this link on MSDN.

We recommend you do not use this statement to rename stored procedures, triggers, user-defined functions, or views; instead, drop the object and re-create it with the new name.

And later in the same documentation, one can read the following.

Renaming a stored procedure, function, view, or trigger will not change the name of the corresponding object name in the definition column of the sys.sql_modules catalog view. Therefore, we recommend that sp_rename not be used to rename these object types. Instead, drop and re-create the object with its new name.

Now, a chief complaint against dropping and recreating the stored procedure, as recommended, is that process can cause permissions issues.  I am less concerned about the permissions issues and see that as more of a nuisance that is easily overcome due to great documentation and a few quick script executions to restore the permissions.  Despite that, I think we might have a means to address the rename and permissions issue that will be shared later in this article.

Using sp_rename

When using sp_rename, it would be good to understand what happens and what one might expect to see.  Let’s use the following script to create a stored procedure to step through an exercise to rename a stored procedure and evaluate the results.
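A condensed sketch of such an exercise (the procedure name and definition here are hypothetical):

USE AdventureWorks2014;
GO

-- hypothetical procedure with a deliberate spelling mistake in its name
CREATE PROCEDURE dbo.GetSalesPersun
AS
	SELECT TOP (5) sp.BusinessEntityID, sp.SalesYTD FROM Sales.SalesPerson sp;
	SELECT COUNT(*) AS SalesPeople FROM Sales.SalesPerson;
	SELECT GETDATE() AS RunTime;
GO

-- rename the procedure with sp_rename
EXEC sp_rename 'dbo.GetSalesPersun', 'GetSalesPerson';
GO

-- compare the object name to the definition held in metadata
SELECT o.name AS ObjectName
		, m.definition
	FROM sys.objects o
		INNER JOIN sys.sql_modules m
			ON o.object_id = m.object_id
	WHERE o.name LIKE 'GetSalesPers%';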

When I execute that series of batches, I will get an output that matches the following.



When looking at the results we can see that the use of sp_rename does indeed change the name of the stored procedure as it is represented via sys.objects and metadata.  We can also see that the definition of the stored procedure does not change as it is held within the metadata.

If I choose to check the definition through the use of OBJECT_DEFINITION()  instead of sys.sql_modules, you will be pleased to know that sys.sql_modules calls OBJECT_DEFINITION() to produce the definition that is seen in the catalog view.

Well, that does pose a potential problem.  We see that the object definition is unchanged and may report the name as being different than what the object name truly is.  What happens if I execute the stored procedure?  Better yet, if I can execute the stored procedure and then capture the sql text associated to that plan, what would I see?

Yes!  The renamed stored procedure does indeed execute properly.  I even get three results back for that execution.  Better yet, I get an execution plan which I can pull a plan_hash from in order to evaluate the sql text associated to the plan.  In case you are wondering, the execution plan does contain the statement text of the procedure.  But for this case, I want to look at the entire definition associated to the plan rather than the text stored in the plan.  In this particular scenario, I only see the body of the procedure and not the create statement that is held in metadata.


For this particular execution and plan, I can see a plan_hash of 0xE701AFB2D865FA71.  I can now take this and provide it to the following query to find the full proc definition from metadata.
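A minimal sketch of such a query, using the hash from this execution:

SELECT qs.query_plan_hash
		, st.text AS FullBatchText
	FROM sys.dm_exec_query_stats qs
		CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
	WHERE qs.query_plan_hash = 0xE701AFB2D865FA71;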

And after executing that query, I can see results similar to the following.



Now is that because in some way the query that was just run was also running OBJECT_DEFINITION()?  Let’s look at the execution plan for both OBJECT_DEFINITION() and the query that was just run.



Looking at the XML for that particular plan, we see xml supporting that plan.  There is no further function callout and the plan is extremely simple.

Now looking at the plan for the query involving the query_plan_hash we will see the following.



Looking at this graphical plan, we can see that we are calling FNGETSQL.  Looking at the XML for this plan, we can verify that FNGETSQL is the only function call to retrieve the full sql text associated to this plan execution.  FNGETSQL is an internal function for SQL Server used to build internal tables that might be used by various DMOs.  You can read just a bit more about that here.

What now?

After all of that, it really looks pessimistic for sp_rename.  The procedure renames the object but does not properly handle metadata and stored procedure definitions.  So does that mean we are stuck with drop and create, as the Microsoft documentation suggests?

If you have access to the full procedure definition you could issue an alter statement.  In the little example that I have been using, I could issue the following statement.
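Continuing with the hypothetical procedure from the earlier sketch, that statement would look something like this:

ALTER PROCEDURE dbo.GetSalesPerson
AS
	SELECT TOP (5) sp.BusinessEntityID, sp.SalesYTD FROM Sales.SalesPerson sp;
	SELECT COUNT(*) AS SalesPeople FROM Sales.SalesPerson;
	SELECT GETDATE() AS RunTime;
GO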

After executing that script, I could check sys.sql_modules once again and find a more desirable result.

And my results…



If you don't have the text to create the proc, you could use SSMS to script it out for you.  It is as simple as right-clicking the proc in question, selecting modify and then executing the script.  It should script it out with the correct proc name (the beauty of SMO) and then you can get the metadata all up to snuff in your database.

Of course, if you prefer, you could just drop and recreate the procedure.  Then reapply all of the pertinent permissions.  That is pretty straight forward too.

Using Synonyms to Extend SSIS

Comments: No Comments
Published on: July 3, 2014

There are a million and one uses for synonyms.  There are at least that many uses for SSIS.  The reality is, not all of those uses are good for you, your data, or your database.

Recently I wrote an article about some good and some bad with synonyms.  You can read that article from my work blog site here.  Today, I just want to expand on that article a little bit.  I glossed over some things pretty quickly in that article that I thought might be fun to explain with a little more detail.

The Bad – a little closer look

First, let's take a little closer look at the bad.  More specifically, in the example I used, there was a vendor that created a synonym for everything.  I really do mean everything.  This is one case where using the word “literally” would be accurate.  On the client database, I could run a quick query such as the following and see over 7300 synonyms.

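A sketch of that sort of query – the derived total-count column described below is just a window COUNT:

SELECT s.name AS SynonymName
		, s.base_object_name
		, COUNT(*) OVER () AS TotalSynonyms	-- derived column: total synonym count repeated on each record
	FROM sys.synonyms s
	ORDER BY s.name;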


In the case of this client and this vendor, 7300+ synonyms is far too many.  It just led to mass confusion.  If you were to run that query, you might see something like the following image.


I added a “derived” column to show the total count of synonyms and the record name as it relates to that total.  That is a crazy amount of synonyms.  That just makes me think that somebody got into the krazy kool-aid one day, was bored and gave life to a synonym beast.

The Good – With more detail

On the flip side, in the aforementioned article, I talked about synonyms as a means to tweak performance in SSIS.  Normally I would not tout a synonym as a performance tweak.  So I want to caution that the performance gains are specific to SSIS and a handful of those millions of uses for SSIS.

Let’s just begin with a little bit of background.  For that background, some pictures will be really handy.  So here goes.


In the preceding image we see a very simple segment of a data flow.

The data source uses a sql command to fetch the data required for the data flow.  In the beginning, it is very straight forward.  You probably have some package lying around with something similar to this.

In the following image, we see what the SQL Command was for that data source circled in red in the previous image.


In the next image we see a slight tweak to the query – this time to include a reference to a table that is defined/obfuscated by a synonym.
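A sketch of what such a source query might look like – the synonym, table, and column names here are hypothetical stand-ins:

SELECT cs.ClientCode
		, cs.OrderID
		, cs.OrderAmount
	FROM syn.ClientStage cs	-- synonym pointing at a table in the staging database
		INNER JOIN dbo.Client c	-- destination-side table holding the valid ClientCode values
			ON cs.ClientCode = c.ClientCode;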


At this point I can hear some of you saying, “Ok, I see what he is doing,” while many others are wondering why I just made the query more complex than the previous example.

Well, as luck would have it, this change serves a few purposes.  1) The data has been staged in a separate database.  That database has a different name in every environment (recall the aforementioned article), so the synonym minimizes code changes when deploying the package.  2) The synonym allows us to confirm that there is data in the stage table and that the data matches a ClientCode in the destination table.  3) Lastly, the synonym reduces my dataset, which reduces memory requirements and also gets the data loaded faster (because it is smaller).

In addition to this minor tweak, I can also do something like the following.


In the preceding image, we see two datasources.  Each datasource is subsequently sorted and then eventually joined.  Much like the previous example, due to naming standards and an effort to try and minimize code changes during deployments, at least one datasource is pulling in too much data.  The data is filtered down due to the Join transformation, but this is not very efficient.


Through the use of a synonym, the datasources can be reduced to a single datasource.  This eliminates the need for the Sort transformations and the Join transformation.  Removing those three transformations reduces memory requirements.  And like the previous example, since we can trim down the number of records, the data flow will run a little bit faster too.
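Defining the synonym itself is equally unremarkable.  A hypothetical example – the FOR clause is the only piece that changes between environments:

CREATE SCHEMA syn AUTHORIZATION dbo;
GO

-- hypothetical staging database name; this is the only reference that changes per environment
CREATE SYNONYM syn.ClientStage
	FOR StageDB_Dev.dbo.ClientStage;
GO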


As you can see, the code is simple.  It's not a super TSQL trick or anything to add a synonym into an existing query.  It just gets referenced like any other table.  Once again, in this case, the synonym is pointing to a table in a staging database.  That table has been loaded as a part of an ETL process and now needs to be manipulated a little bit through some other transformations and then eventually inserted into a “target” database.


As with tuning stored procedures or any TSQL, a similar technique was used here: reducing the datasets to contain JUST the data that is needed for the flow.  To facilitate that reduction down to just the essential data, I employed synonyms.

The reasons for using a synonym in this case were to: 1) restrict data to precisely what was needed, 2) ensure data being loaded was “constrained” by data in the destination table (e.g. only load for a specific client that does exist), and 3) minimize code changes during deployments.

When dealing with databases that serve the same purpose but follow some absurd naming standard that changes the name between environments, it can become cumbersome to maintain code during deployments.  This is particularly true when dealing with cross database joins or lookups.

Auditing and Event SubClasses

Categories: News, Professional, Scripts, SSC
Comments: 2 Comments
Published on: May 28, 2014

A recent discussion got me to thinking about Auditing.  To be honest, it got started with a complaint about some documentation that seemed overly light about the various fields related to auditing as it stands in SQL Server.

In talking to the person who raised the valid concern about the lack of good documentation, I was curious why he suddenly had so many questions about auditing and its functionality within SQL Server.  Reflecting on the answers, it seems that it made good sense and it all kind of fell into place with the whole Audit Life Cycle.  I hadn’t previously considered the Life Cycle, but it makes sense to map it out.  Here is a simple rendition of what an audit Life Cycle might entail.






In order to audit anything, it is necessary to know what you really want to audit, why you want to audit it and how to achieve those goals within the tools given to you.  In that vein, it makes sense that one would need to study up on the topic to figure out what different things meant within the tool.

Of course, once you start collecting that data, then you also need to figure out how to measure it and then to determine if adjustments to the auditing plan need to be made.  In the end, it boils down to what is the data to be collected, what are you doing with that data and what does that data represent.

In our simple discussion, the data trying to be understood was related to the Event Subclass field in this View (sys.trace_subclass_values) and in this Microsoft document (one of several).  The beauty of this field is that it is not just tied to Auditing, but you will also find it in Profiler, server side traces, and Extended Events.

With so little information to help understand what the field values represent, maybe it is better to just turn to the data to help understand what the values might represent or how to interpret them.  To do this, we can query a few catalog views as in the following query.

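A sketch of that sort of query against the trace catalog views:

SELECT c.name AS CategoryName
		, e.name AS EventName
		, sv.subclass_name
		, sv.subclass_value
	FROM sys.trace_categories c
		INNER JOIN sys.trace_events e
			ON c.category_id = e.category_id
		INNER JOIN sys.trace_subclass_values sv
			ON e.trace_event_id = sv.trace_event_id
	WHERE e.name LIKE '%Audit%'
	ORDER BY CategoryName, EventName, sv.subclass_value;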


With the above query, I can filter down to just the Event Types that have Audit in the name.  Or I could add a different filter so I can try and better understand the different subclasses in a more focused effort.

I hope this helps in your efforts to provide a better auditing or “profiling” type of experience in your environment.


Missing Indexes Script v2

Comments: 11 Comments
Published on: May 15, 2014


Frequently we hear the analogy that <insert item here> is like opinions, everybody has one and not all of them are good (some may stink).

Well, this may just be another one of those <items>.  Whether it stinks or not may depend on your mileage.

I had shared a similar script back in January 2012 and wanted to share something a little more current.  As is the case for many DB professionals, I am always tweaking (not twerking) and refining the script to try and make it more robust and a little more accurate.

This version does a couple of things differently than the previous version.  For one, this is a single database at a time (the prior version looped through all of the databases with a less refined query).  Another significant difference is that this query is designed to try and pull information from multiple places about the missing indexes and execution statistics.  I felt this could prove more advantageous and useful than to just pull the information from one place.

Here is the current working script.

The following script gets altered on display.  n.VALUE is displayed but in the code it is actually n.value.  The code display is wrong but it is correct in the code as presented in the editor.  If copying from this page, please change the U-cased “VALUE” in the XML segment to “value” so it will work.  A download of the script has been added at the end.

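The complete script is available from the download at the end of this post.  As a stand-in, here is a simplified sketch of the missing index DMV side of it; the full version pulls from more than one place and includes additional columns such as the “Similar Queries” and “Similar Plans” values mentioned below, so don't confuse this abbreviated form with the real thing:

SELECT DB_NAME(mid.database_id) AS DatabaseName
		, OBJECT_NAME(mid.object_id, mid.database_id) AS TableName
		, mid.equality_columns
		, mid.inequality_columns
		, mid.included_columns
		, migs.user_seeks
		, migs.user_scans
		, migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) AS ImpactEstimate
	FROM sys.dm_db_missing_index_details mid
		INNER JOIN sys.dm_db_missing_index_groups mig
			ON mid.index_handle = mig.index_handle
		INNER JOIN sys.dm_db_missing_index_group_stats migs
			ON mig.index_group_handle = migs.group_handle
	WHERE mid.database_id = DB_ID()
	ORDER BY ImpactEstimate DESC;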


As has been the case in the past, this is not the end query.  I have a backlog of updates I want to make to the query and try to create something even more useful from it.

As a bit of fair caution, just because an index shows up as a recommended implementation from this script, it does not mean you should blanket implement it.  Also, like the previous script, the higher the impact value, the more probable the index will have a positive impact on your system.

In addition to the impact, I also look at the “Similar Queries” and “Similar Plans” numbers from the result set.  These values can sometimes lead to an understanding that there are other queries or other plans that could also benefit from the creation of the index.  Your mileage may vary.

Download script Missing_Index.
