SQL Server's Black Box Recorder – Def Trace

Across many professions and industries there is often a need for some sort of device that "audits" everything that happens with a machine or process. We are probably all quite familiar with the infamous black box recorders used by the passenger airline and train industries. It is also fairly common knowledge that many pacemakers perform the same sort of function – on a smaller scale. Various metrics and data points are registered and recorded somewhere, whether to a local storage device or by phoning home to some remote location.

We often pontificate about the what-ifs for a black box recorder within SQL Server, right? We wish and dream and go about creating our own little recorder to suit the requirements we deem necessary for successful monitoring of the server for the just-in-case scenario. Well, the truth of the matter is that we really don’t need to go to such great lengths to create a “black box recorder” because Microsoft has already done all of that for us.

Wait, what? Yes, that’s right! Truth be told this shouldn’t be much news for most Senior Level data professionals working with SQL Server for the past few years. But if you are new to the product, this might be good news for you. Then again, it might just be bad news depending on your take.

Very much like what you may find with the airline industry, the black box recorder in SQL Server is not just a single method (device) implemented to capture all of the desired data points. On a passenger jet, you may find that there are three or more such devices that contribute to the capture and recording of the in-flight data. In SQL Server, there are three major processes that help capture our in-flight data. Over the next few articles I will discuss each of these processes. These processes include:

  • Default Trace
  • system_health Extended Event Session
  • sp_server_diagnostics procedure

This multi-tiered approach does provide a sort of fail-safe system. If one should be disabled, there still remain up to two more processes that may be running. That said, each of these can be disabled but it does take a bit of an overt effort. And since it does require somebody to put out effort to try and disable each of the black box components, you could potentially capture the culprit via audits from the other components or via an explicit audit that you create for these types of purposes.

Since this will be something to discuss over a few articles, I will break out each process into an individual article. For today, I will discuss the default trace.

Default Trace

The default trace by itself is something that can be turned off via a configuration option. There may be good reason to disable the default trace. Before disabling the default trace, please consider what can be captured by it. I will use a query to demonstrate the events and categories that are configured for capture in the default trace.
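The query from the original post is not reproduced in this copy, so the following is a close approximation (the column choices are my own) that lists the categories, events, and columns registered in the default trace, along with any filters defined against them:

DECLARE @TraceID INT;

SELECT @TraceID = id
FROM sys.traces
WHERE is_default = 1;

SELECT tc.name AS category_name
    , te.name AS event_name
    , col.name AS column_name
    , fi.columnid AS filtered_column   -- NULL means no filter on that column
FROM sys.fn_trace_geteventinfo(@TraceID) ei
INNER JOIN sys.trace_events te ON ei.eventid = te.trace_event_id
INNER JOIN sys.trace_categories tc ON te.category_id = tc.category_id
INNER JOIN sys.trace_columns col ON ei.columnid = col.trace_column_id
LEFT OUTER JOIN sys.fn_trace_getfilterinfo(@TraceID) fi ON ei.columnid = fi.columnid
ORDER BY tc.name, te.name, col.name;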

In this query, I have requested a few more data points than strictly necessary. That is to help illustrate an additional point: the default trace isn't filtering out any data for these events. If the event fires (in this trace), it is recorded. Let's divert right back to the events and categories for now. The results of that query produce the following sample list of events for me on SQL Server 2014:

That is quite a range of events covered by this particular recording device. From changes to objects down to security related events and even errors and warnings. If somebody drops an object, the default trace can catch it. If one of the various DBCC statements is executed, it will be trapped in this trace.

There is one thing that is captured by this trace that is not overly obvious. In fact, it is rather annoying in how it is recorded, in my opinion. Server configuration settings such as "cost threshold for parallelism" or the lesser utilized "user options" are not registered as an "Object:Alter" event but rather as an ErrorLog event. I wouldn't call this type of "change" an error and don't necessarily like seeing these registered with other legitimate errors. That said, it would be nice to see these logged differently (they are currently logged the same way with Extended Events). So, if somebody is dorking around with server configurations, it becomes a bit more convoluted to figure it out, but we can get there. Let's see how that works:
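Again, the original query is not included in this copy; a reasonable stand-in reads the default trace file and pulls back the ErrorLog entries produced by configuration changes (the TextData filter is an assumption about how those messages begin):

DECLARE @path NVARCHAR(260);

SELECT @path = path
FROM sys.traces
WHERE is_default = 1;

SELECT gt.StartTime
    , gt.HostName
    , gt.ApplicationName
    , gt.LoginName
    , gt.TextData
FROM sys.fn_trace_gettable(@path, DEFAULT) gt
INNER JOIN sys.trace_events te ON gt.EventClass = te.trace_event_id
WHERE te.name = 'ErrorLog'
    AND gt.TextData LIKE 'Configuration option%'
ORDER BY gt.StartTime DESC;

Keep in mind this reads from the currently active trace file forward; to include data from earlier rollover files, point the path at an earlier log_N.trc file.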

Looking at my system, I get the following sample results:

You can see there are plenty of settings that I have been playing with captured by the default trace. If you happen to have a junior DBA or maybe a cowboy DBA that loves to shoot from the hip and make changes, this will help you find all the necessary details to bring to that DBA for discussion.

The default trace comprises one third of the black box recorder. I have shown a quick use for the default trace and have also shown some of the various events that are captured from this trace. I recommend getting to know your default trace just a little bit better. You never know when you may need to resort to the flight recorder data held in the default trace. Being familiar with the default trace before you need to use it will help improve your comfort level when under the stress of trying to figure out what happened just before the server went belly up!

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

This has been the fourth article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

How to: File Target use in Extended Events

I have been doing a lot of posts of late about extended events but have failed to share some of the essential building blocks. I am not looking to do a series of definitions, but just share a couple of quick tips and examples here and there that will make the use of extended events a little easier to undertake.

The first one to tackle is the use of the file target. Using a file target is not difficult from a creation point of view (event session creation). But, some basics are critical or you could end up with fewer follicles.

Let’s take a quick peek at some of these tips.

File System Prep

This first little tip comes from a painful experience. It is common sense to only try and create files in a directory that exists, but sometimes that directory has to be different on different systems. Then comes a little copy and paste of the last code used that worked. You think you are golden but forgot that one little tweak for the directory to be used. Oops.

Take this example session. When you run this code, you will not see an error (don’t change anything).
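The session definition is not included in this copy of the post; a minimal sketch that is consistent with the error messages below (the session name and file path are taken from those messages, the event choice is an assumption) would be:

CREATE EVENT SESSION FileTargetDemo
ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file
    (SET filename = N'C:\Database\XEs\FileTargetDemo.xel')
WITH (STARTUP_STATE = OFF);

The CREATE succeeds because nothing touches the file system yet; the file is only created when the session starts.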

However, if you try to start the session, you will receive a pretty little error. Here is the code to try and start the session, along with its associated error.
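Starting the session is nothing more than the standard ALTER statement (session name taken from the error message below):

ALTER EVENT SESSION FileTargetDemo
ON SERVER
STATE = START;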

Msg 25602, Level 17, State 22, Line 25
The target, "5B2DA06D-898A-43C8-9309-39BBBE93EBBD.package0.event_file", encountered a configuration error during initialization. Object cannot be added to the event session. The operating system returned error 3: 'The system cannot find the path specified.' while creating the file 'C:\Database\XEs\FileTargetDemo_0_130670484928250000.xel'.

If you double check in the SSMS GUI, you will see that the session is there. It is just stopped. You can also check that the session exists if you recall my query from a previous post where I show the status of the session.
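That earlier query is not repeated here, but a simple equivalent compares the session metadata against the currently running sessions:

SELECT es.name
    , CASE WHEN rs.name IS NULL THEN 'Stopped' ELSE 'Running' END AS session_state
FROM sys.server_event_sessions es
LEFT OUTER JOIN sys.dm_xe_sessions rs ON es.name = rs.name
WHERE es.name = 'FileTargetDemo';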

If you take it a step further and try to read the log file that should have been created, you will be greeted once more with a reminder that there is a problem. Try this (again, without changing anything).
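Something along these lines (the wildcard path matches the one in the error message that follows):

SELECT CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file(N'C:\Database\XEs\FileTargetDemo*.xel', NULL, NULL, NULL);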

And you will receive the following message:

Msg 25718, Level 16, State 3, Line 31
The log file name “C:\Database\XEs\FileTargetDemo*.xel” is invalid. Verify that the file exists and that the SQL Server service account has access to it.

The beauty here is that all is not lost. These are easy problems to get around and resolve. All it takes is to adjust the file path to be the correct path or a path that exists (to which you have permissions).

Read the Log File

If you have been reading this article, then this might seem like a bit of redundancy. That is by design. DBAs love redundancy – well, at least when that redundancy provides a means of recovery.

Unlike reading from the ring_buffer, reading from the file target requires different code. The file target is stored on disk in a binary format, which a system function converts to XML for human readability. So to read a file target, you would need to do something like the following.
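A sketch of that pattern, reusing the FileTargetDemo path from earlier (the "statement" field name is an assumption based on the events being captured):

SELECT x.event_data.value('(event/@name)[1]', 'varchar(60)') AS event_name
    , x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
    , x.event_data.value('(event/data[@name="statement"]/value)[1]', 'varchar(max)') AS statement_text
FROM (
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XEs\FileTargetDemo*.xel', NULL, NULL, NULL)
) x;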

And you definitely do not want to try the following with a file target. That is unless of course you are looking to go bald at an earlier than expected age in this profession.
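That approach would be the ring_buffer style of read pointed at the event_file target, along the lines of:

SELECT CAST(t.target_data AS XML) AS target_data
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets t ON s.address = t.event_session_address
WHERE s.name = 'FileTargetDemo'
    AND t.target_name = 'event_file';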

If you find yourself doing this version, all you will get for results is a short xml string similar to the following:

This is not what you would be looking for when trying to read the event session data. That said, it at least provides some information that could be of use. You will see that buffers are tracked and the file name is tracked. For this last demo, I corrected the FileTargetDemo session to use a directory on the file system that existed and to which I had the appropriate level of permissions.

I just covered two quick tips to help make life a little bit easier when dealing with file targets and extended event sessions. I will try to continue little tips like this over the coming months. I hope you have found this useful and that you will indeed be able to put it to use.

 

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in an article demonstrating the power of XE over Profiler? Check this one out!

This has been the third article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

Upgrading From SQL Server Profiler

I just shared an article describing how to use Extended Events to perform your SQL Server Profiler duties. If you missed it, you can read all about it here.

In that article, I showed one common use-case for SQL Server Profiler and how to achieve the same result via Extended Events. What I didn’t show in that article was how to correlate all of your favorite trace/profiler options into the shiny new XE equivalents. Is there even a way to do that?

As luck would have it, there is a means to correlate trace/profiler events to Extended Event events (yeah that sounds weird).  With the release of SQL Server 2012, Microsoft introduced a couple of catalog views to help with this correlation effort. Those views have a pretty straightforward naming convention. For example, one of the views is named sys.trace_xe_event_map while the other is sys.trace_xe_action_map. For this article, I will be focusing on the former.

Making the Upgrade

When looking to finally make the upgrade away from trace/profiler, a big key is to figure out if your favorite trace events are even an option at the next level. I have talked about finding events in XE in previous articles by trying to search for a specific topic that might apply to your current condition.

But if you are already familiar with specific trace events, you may just want/need to know what the new name is in XE. This is where those catalog views come into play. And in support of that, here is a query that can help in that upgrade effort:
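The original query is not reproduced in this copy; a straightforward version that shows every trace event alongside its Extended Events equivalent (or a NULL where no mapping exists) looks like this:

SELECT te.trace_event_id
    , tc.name AS trace_category
    , te.name AS trace_event_name
    , m.package_name AS xe_package
    , m.xe_event_name
FROM sys.trace_events te
INNER JOIN sys.trace_categories tc ON te.category_id = tc.category_id
LEFT OUTER JOIN sys.trace_xe_event_map m ON te.trace_event_id = m.trace_event_id
ORDER BY tc.name, te.name;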

Looking through the results, one would notice that not every trace event maps to an Extended Event event. The events that don't map, in general, deal with audits (which are actually driven by Extended Events).

This is a really good start to getting you on your way to that much needed upgrade away from profiler/trace. What if there are several traces that are already in use or scripted for various reasons in the environment?

Decision Time

If you happen to have traces that are already deployed in the environment or in script form, it can be a bit of a tedious pain to convert those to Extended Events. Do you manually recreate the traces as XE sessions? Do you abandon the upgrade due to the effort and annoyance it will create? Or do you find some automated means of performing the conversion?

Of those three options, only two are valid options. Those options involve performing the conversion of the traces to XE sessions. There are pros and cons for each. You may opt to take your time and learn more about Extended Events by performing the manual upgrade, or you may choose to save time by using an automated routine.

Should you decide to try the automated routine, there is one already out and available to help you on your way. Jonathan Kehayias wrote the script and you can download it here.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in another article demonstrating the power of XE over Profiler? Check this one out!

This has been the second article in the 2018 “12 Days of Christmas” series. For a full listing of the articles, visit this page.

How To: XEvents as Profiler

A common excuse for not delving into extended events seems to be the jump from Profiler to extended events.  There appears to be an inherent fear of how to use extended events.  In addition to that fear is the fear that extended events cannot do what Profiler does so easily.

Today, I would like to share a short tutorial on how to use extended events as if it were a profiling session.  I am only going to show one of the many possible means to profile via extended events.  Just understand that this means you have flexibility, and you can do more with this simple example than what I am going to share.

Requirement

You need to capture TSQL queries that are occurring against your instance in order to try and determine the code that is being passed to SQL Server from the application.

How-To

Rather than jump to Profiler, you are going to need to jump to a fresh, clean query window in SSMS.  Just like in Profiler, you will want to capture certain “statement” oriented events.  For this session, I want to start with sp_statement_starting and sql_statement_starting.  I am not going to use the _completed forms of those events because I want to capture as much as I can – without capturing too much noise.  Sometimes, there may be a _starting without a coordinated _completed and that could throw a wrench in the works.

With the basic information in hand, we are ready to implement some of the things learned in previous XEvents posts (here and here).  Let’s go ahead and start setting up the session that will capture the information we seek.
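The session script itself did not survive in this copy of the post, so here is a sketch of what it describes; the session name, file path, dispatch latency, and specific predicates are illustrative assumptions rather than the original values:

CREATE EVENT SESSION ProfilerLite
ON SERVER
ADD EVENT sqlserver.sp_statement_starting (
    SET collect_object_name = (1)
    ACTION (sqlserver.database_name, sqlserver.client_hostname, sqlserver.client_app_name, sqlserver.sql_text)
    WHERE sqlserver.database_name = N'AdventureWorks2014'
),
ADD EVENT sqlserver.sql_statement_starting (
    ACTION (sqlserver.database_name, sqlserver.client_hostname, sqlserver.client_app_name, sqlserver.sql_text)
    WHERE sqlserver.client_app_name <> N'Microsoft SQL Server Management Studio - Transact-SQL IntelliSense'
)
ADD TARGET package0.event_file (SET filename = N'C:\Database\XEs\ProfilerLite.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = OFF);
GO

ALTER EVENT SESSION ProfilerLite ON SERVER STATE = START;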

One caveat to consider: in the sp_statement_starting event, I have specified that a specific set option (collect_object_name) be used.

This particular option is disabled by default.  If you want to know the object_name of code that is being used (triggers, stored procedures, etc), then this option must be enabled.

I have specified the same actions to be used for each of the events I am trapping, but this is purely up to you.  If you do not wish to capture these actions, then you can remove them or add different actions as necessary.  I have also specified a different predicate for each of the events.  This is something that works so much better in XEvents than Profiler – the filtering that can be applied is much more flexible.

Once the session is started, then it is merely a matter of figuring out how to look at the data.  I generally use TSQL to parse the data from the file, but you could fiddle with the GUI as well.  You can access that by right-clicking the event session in Management Studio and then selecting "Watch Live Data".  If everything is configured properly, then you will start to see data after the first event is triggered.  From within the GUI, you can pick and choose which columns to display in your view.  You can pause the display of the session data, or you can even "erase" the data from the current view (just like in Profiler).  You can even filter, aggregate or group the data in the view from the GUI.

The use of the GUI is up to you.  I still prefer to use the script route.  With that route, here is a sample of what you may need to write in order to display the data from the session data file that has been captured.
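A sketch of that script, again assuming the hypothetical ProfilerLite session and path from above:

SELECT x.event_data.value('(event/@name)[1]', 'varchar(60)') AS event_name
    , x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
    , x.event_data.value('(event/data[@name="object_name"]/value)[1]', 'nvarchar(128)') AS object_name
    , x.event_data.value('(event/data[@name="statement"]/value)[1]', 'varchar(max)') AS statement_text
    , x.event_data.value('(event/action[@name="database_name"]/value)[1]', 'nvarchar(128)') AS database_name
    , x.event_data.value('(event/action[@name="client_app_name"]/value)[1]', 'varchar(max)') AS client_app
FROM (
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XEs\ProfilerLite*.xel', NULL, NULL, NULL)
) x
ORDER BY event_time;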

It really is that simple.  Now you have a means to run a Profiler-like trace on your server without the impact of Profiler.  XEvents provides the means to run a more evolved Profiler session on your data to capture things like queries from an application.  XEvents does all of this without the severe penalty of Profiler and with many additional bonuses.  Try it out and enjoy!

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in another article demonstrating the power of XE over Profiler? Check this one out!

This is the first article in the 2018 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Quick and Easy XE for Azure DB

The Cloud

It has been a minute since I gave much love or attention to Extended Events in Azure SQL DB. Things have changed a touch since then. We will see about some of those changes in a future article. As for this time, let’s focus on how to get a session in Azure SQL DB up and running really quick and easy.

Create a Session

I am going to keep this as easy as possible for creating a session and thus I will demonstrate how to do it from the GUI. And for those more in tune with their scripting side, there is a future article on that as well as a glimpse of a script near the end of this article.

Recall from the previous article that things in Azure SQL DB are different for Extended Events. XE is database scoped rather than server scoped (as is the case with your on-premises servers). Due to this change, finding the GUI for XE is a little different.

In order to find the GUI for XE, you must drill down into the database and then you will see “Extended Events.” (Side note, there is no XE Profiler for Azure SQL DB as of this writing.) If you right-click Sessions, you will get a menu with the option to create a New Session.

After clicking “New Session…” the familiar window for a new session will appear.

Here, you see a session that I started creating already. For this session, I opted to go with the template called “Query Detail Tracking” (more on templates for Azure SQL DB in a future article). I also like to enable “Causality tracking” so that checkbox is ticked in this example. The template has the events I wish to capture already and I am only going to use the ring buffer (file target has some additional requirements for an Azure SQL DB session and should be treated in an entirely separate article) so I will skip those screens straight to the advanced screen before completing this session.

From the advanced screen, all I want to do for now is to decrease the dispatch latency from the default 30 seconds down to 10 seconds. For this session, I just want to capture the events quickly in the target and be able to review them as soon as possible. I won’t be able to utilize the “Watch Live data” option for this session (as shown below), so a quick dispatch is essential.

After I have selected all of the configurations I desire for this session, then I click the script button at the top of the window. From there, I will select “New Query Editor Window.” Once scripted, I will receive the following script for this particular session.
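The scripted output is not reproduced in this copy. Its general shape is the following; the exact event list comes from the Query Detail Tracking template, so treat this as an approximation rather than the template's literal contents:

CREATE EVENT SESSION [QueryDetailTracking] ON DATABASE
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.client_app_name, sqlserver.database_name, sqlserver.sql_text)),
ADD EVENT sqlserver.rpc_completed (
    ACTION (sqlserver.client_app_name, sqlserver.database_name, sqlserver.sql_text)),
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.client_app_name, sqlserver.database_name, sqlserver.sql_text))
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 10 SECONDS, TRACK_CAUSALITY = ON, STARTUP_STATE = OFF);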

Notice that there is nothing in that script about starting the session; starting it is a separate step.
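Starting it requires a separate command, issued at the database scope:

ALTER EVENT SESSION [QueryDetailTracking] ON DATABASE
STATE = START;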

Note that “on database” has been specified instead of the traditional “on server” for the session state command. In order to stop the session, it is just as easy as follows.
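A minimal example, using the same assumed session name:

ALTER EVENT SESSION [QueryDetailTracking] ON DATABASE
STATE = STOP;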

Conclusion

This has been a very simple introduction into the creation of an Extended Event session using a template for Azure SQL DB. I demonstrated the use of the GUI to configure the session quickly and then to subsequently script that configuration before creating the session.

For more uses of Extended Events, I recommend my series of articles designed to help you learn XE little by little.

Interested in seeing the power of XE over Profiler? Check this one out!

This is a quick precursor to the 2018 "12 Days of Christmas" series. The series will begin on December 25th (the first day of Christmas) and you can easily follow it via this page.

File Maintenance – Cleaning Up Old Files

Published on: December 21, 2018

Using SSIS to Maintain the File System

We have all run into a need or a desire to clean up old stale files from the file system, whether it be to remove old backup files or flat files that are created from one process or another.  And based on this need/desire, we have all come up with a method to help achieve that goal.

Some of the methods might be to use a cleanup option or flag within a maintenance plan.  Other methods may be to use a SQL script employing xp_cmdshell and delete statements.  Yet another may utilize the sp_oa stored procs and DMO.  And still others may have ventured into powershell to accomplish the same task.  The point is, there are many methods.

I am adding yet another method to the mix.  Why?  I didn't much like the option of using the sp_oa method or the xp_cmdshell route.  I am very novice with powershell and it would take a lot more tinkering to get the script working properly.  Also, I felt pretty comfortable with SSIS and had approval to try and get this done using that method.  And just because I am a novice with powershell does not mean that I will not circle back around to try and accomplish this task via that means.

Note: This article was originally written in 2011 and got stuck in an unpublished state. Things have changed since then so I will definitely be circling back around for a powershell version.

Requirements

The method employed needs to be able to do the following:

  1. Remove multiple file types
  2. Be configurable
  3. Clean out files from numerous directories
  4. Remove files older than a specified number of days.

Setup

The solution I chose utilizes SSIS.  It also requires that there be a table in a database that helps to drive the package.

The table looks like the following.
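The original table definition is not included in this copy; a sketch that matches the columns described below (the ID column and data type lengths are assumptions) could be:

CREATE TABLE dbo.FilePaths (
    FilePathID INT IDENTITY(1,1) NOT NULL PRIMARY KEY
    , FilePath VARCHAR(256) NOT NULL   -- directory to clean, local or UNC
    , Process VARCHAR(50) NOT NULL     -- what the path is used for, e.g. MaintainDirectory
);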

The filepath column holds the file system path for each directory that needs to be cleaned.  Paths that are supported are local (e.g. C:\temp) and UNC paths (e.g. \\machine\c$\temp).  I set this attribute to a length of 256, but if you have a longer path, you will want to adjust the length.

The Process column will hold a value describing what that path relates to, such as MaintainDirectory.  In my example, I am using MaintainDirectory to control which directories hold files that potentially need to be deleted.

Here is an example of the contents of that table I am using currently.
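The screenshot of that data did not survive here; hypothetical rows illustrating the idea (paths borrowed from the examples above) would be:

INSERT INTO dbo.FilePaths (FilePath, Process)
VALUES ('C:\temp', 'MaintainDirectory')
    , ('\\machine\c$\temp', 'MaintainDirectory');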

The last piece of the setup before we start working on the SSIS package is the need for a string splitting function.  Pick the string splitter of your liking.  I have one that I like and am sure you have one that you prefer.  The SSIS package relies on the return field from the splitter being named “Item.”  If it is named something else, please make the adjustments in the package as necessary.

The Package

The package I created has been created in SSIS 2008.  To meet the requirements already set forth, I utilized the following objects: ADO.Net Data Source, 2 Execute SQL Tasks, 2 ForEach Loop Containers, a Script Task, and 8 variables.  Let’s take a look at these starting with the variables.

Variables

  • SQLServerName – The value held here is used in an Expression for the Data Source.  This will overwrite the ServerName value in the Data Source.
  • DatabaseName – Used alongside the SQLServerName variable in an Expression for the Data Source.  This value will overwrite the InitialCatalog value in the Data Source.  This should be the name of the database where the FilePaths table and String Split function exist.
  • DaysToKeep – This value is the cutoff point for which files to keep and which files will be deleted.  This variable is used as a ReadOnly variable in the Script Task.
  • obj_FileExtension – This object variable is used to store the result set from one of the Execute SQL tasks and the results of the string split function from the FileExtensionList variable.
  • FileExtensionList – This is a delimited list of file extensions that need to be evaluated for deletion.  It is important to note that the file extensions that are to be processed are case sensitive.  The extension must appear in this list as it appears in the file system.
  • FileExtension – To be used in one of the ForEach loops.  This variable will receive the FileExtension from the obj_FileExtension variable one at a time.
  • obj_ListOfDirectories – This variable will receive the result set of an Execute SQL Task to be later consumed by one of the ForEach loops.
  • DirectoryToMaintain – Receives, one at a time, the directory to process for file deletion.  The ForEach loop stores a value from obj_ListOfDirectories in this variable for processing.

Execute SQL Tasks

The two Execute SQL Tasks are simple in function.  One is to get the list of directories to maintain from the FilePaths table.  The other is strictly to split the string for the FileExtensionList variable.

The first is named “Get Directory List” and should receive the Full Result Set from the following query.
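The query is not shown in this copy; based on the table and the MaintainDirectory convention described earlier, it amounts to:

SELECT FilePath
FROM dbo.FilePaths
WHERE Process = 'MaintainDirectory';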

The Result Set tab of this task also needs to be modified.  It should look like this.

From this task, we flow to the next Execute SQL Task named “Split FileList.”  The setup of this task is very much like the previous task.  We want to receive the full result set.  We have a configuration to make on the result set tab.  We also need to map a parameter.  Let’s take a look at those real quick.

Parameter Mapping

Result Set

And this is the query that we will be executing.
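The original statement is not reproduced here; the shape of it is a call to your string splitter with the mapped parameter (the function and parameter names below are placeholders, and the splitter must return a column named Item as noted earlier):

SELECT Item
FROM dbo.fn_SplitString(@FileExtensionList, ',');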

Notice that the Parameter we named in the Parameter Mapping tab is being used in the function call.  I chose this method because I could see and understand how it works better.

ForEach Loops

The next stop in the flow is the ForEach Loop – Directory object.  As the name implies, this ForEach Loop is designed to work with the obj_ListOfDirectories variable/array.

With this first Loop container, we have two tabs that need to be configured in the properties.  Both Loop containers are similar in that they need the same tabs to be configured.  First, let’s talk about the Collection tab.

On the Collection tab, we need to set the Enumerator option to “ForEach ADO Enumerator.”  Then we need to select the obj_ListOfDirectories from the drop down labeled “ADO Source Object Variable.”  Your screen should look like the following image.

With this tab configured, we can focus our attention to the quick changes that need to be made on the Variable Mappings tab.  On this tab, we are telling the enumerator how to handle the data from the object variable.  We are mapping columns from the result set to variables for further consumption.  When configured, it should look like the following.

Inside of this ForEach loop container, we have another ForEach loop container.  This second ForEach loop container handles the file extensions that we listed out in delimited fashion in the FileExtensionList variable.  I have called this container “ForEach Loop – FileExtension” (just keeping it simple).

The collection tab follows the same configuration setup.  The difference of course being that this container will use the obj_FileExtension object from the source variable dropdown menu.

The variable mapping tab is also slightly different.  We will be mapping column 0 of the object to the FileExtension variable.  The explanation for the different number between the two loop container variable mappings is simple.  In obj_ListOfDirectories, we have multiple columns being returned.  In obj_FileExtension, we have but one single column being returned.

This inner Loop container will loop through each of the extensions for each of the directories that have been returned to the outer loop container.  The inner loop container has the remainder of the workload in it via the Script Task.

Script Task

It is via the script task that we actually get to start deleting files.  This was the most difficult piece of the entire package – though the script is not very large.

For the script task, I chose to implement it via the Visual Basic option (instead of C#).  I have three ReadOnlyVariables employed by the script.  Those variables are: User::DaysToKeep, User::DirectoryToMaintain, and User::FileExtension.

Once you have set those on the script tab, the next step is to click the Edit Script… button where we need to place the following script.

An important note of interest is the need for the Try…Catch.  Without this block as it is, you could run into an issue where the file (such as those pesky temp files) may be in use by some process and cause the package to error.  The Try…Catch will move past that nasty error and delete the files that it can.

Inside this script, you will see that I am comparing the LastWriteTime to the PurgeDays and ensuring that the file extension matches one that is in the list.  Then we move into the try…catch and either delete the file that matches those criteria or throw an exception and move on to the next file.

When all is said and done, your package should look something like this.

You should also have a variable list that looks like this.

Each variable that is not an Object has a value assigned to it at this point.  These values will be overwritten where applicable.

Next Steps

Having this package is a good start.  But unless you are prepared to manually run this on a daily basis, it needs to be added to a job and scheduled.  There are two ways to go about scheduling this package.

The first option is to configure the FileExtensionList and DaysToKeep variables and save the package with those values.  Then run this package through SQL Agent with those values every time.  The drawback to this method is that if you need to add or remove a file extension (as an example) then you need to edit the package and re-save it.

The alternative option is to pass the values through the job to overwrite those variables as the job runs.  Should you need to remove or add a file extension, it would just be done at the job definition level.

Let’s take a look at this second option.  I will skip past how to create the job as an SSIS job in SQL Server and we will look directly how to modify those variables from the job properties.

To configure these variables directly from the SQL Agent job, open the Job properties and click on the Set Values tab (assuming you have defined this job as an SSIS Job type).  You should get a screen similar to this (devoid of the set values shown in the pic).  Just add the parameters (variables) we have discussed to this point with appropriate values to fit your needs/environment.

I have chosen to only include the four variables shown above since the remaining variables are either objects or get overwritten in the ForEach loops during processing.  The only thing remaining now is to set the schedule for the job.  Once set, the job (and package) will take care of the rest.

Conclusion

I have now shown you how to maintain some of the directories on your system through the use of SSIS and SQL Server.  There are many methods to accomplish this goal; it is up to each of us to choose the best method for our environment and comfort level (by means of supporting the chosen solution).

If you would like to read more interesting stuff concerning SSIS, you might want to check any of these articles: lost password, expected range errors, and synonyms extending SSIS.

T-SQL Tuesday Participation Over the Years

Published on: December 19, 2018

Loads of TSQL Tuesdays


Years ago, the TSQL Tuesday party was started by Adam Machanic (b|t). The premise of the monthly event is to give people a topic each month, to ease some of the difficulty of writing (figuring out what to write about) and to get more people involved in the community. The party started in December 2009 and is now embarking on its 10th year. The party is running strong and will likely continue for many years to come.

I have personally participated in quite a large number of these events. I am far from having a perfect participation record like Rob Farley (b | t), but I do participate when I can. Every now and again I think about what I might have participated in or what I have written in the past for a TSQL Tuesday. This post will serve as a table of contents for all of those articles, to make them easier for me to find (as well as give me a chance to review and fix problems from really old posts). This post will provide a quick synopsis of my article (if one exists) along with the topic and links to the original invite and roundups for the monthly party.

T-SQL Tuesday #001: Date/Time Tricks (Dec. 2009)

Invite and roundup

My Article: N/A

T-SQL Tuesday #002: A Puzzling Situation (January 2010)

Invite and roundup

My Article: TSQL Tuesday – But I was late

This was the first time I participated in TSQL Tuesday. I was very new to the blogging concept – and it shows. The puzzling problem I encountered in the article was a fun little job that kept giving different results than running the same code from SSMS. Check it out!

T-SQL Tuesday #003: RELATIONSHIPS (February 2010)

Invitation and summary

My Article: Relationships

I covered all sorts of different relationships that affect SQL Server from work, to marital, to table relationships in a database. It all ties together in requiring an individual to constantly tune the different types of relationships.

T-SQL Tuesday #004: IO (March 2010)

Invitation and roundup.

My Article: IO IO IO

What if we could create a numbers table without IO?  What if we could perform several different kinds of queries without IO?  Itzik Ben-Gan proposed a cascading CTE solution that does this sort of thing. This article shows that method in use to help improve certain IO conditions.

T-SQL Tuesday #005: Reporting (April 2010)

Invitation and roundup.

My Article: IP and Default Trace…T-SQL Tuesday #005

Having been introduced to a requirement to report on IP addresses of connections, I dive into a solution that will help show hostname, ip address and a few more tidbits.

When it is necessary to provide reports on activity occurring on the server, it pays to do a little prep work.  Be Prepared.  It is not an easy task to be able to go back in time and report on data that isn’t captured.  The little prep work that one may need to do is well worth the effort in the end.

My 2nd Article: BLOB Report T-SQL Tuesday #005 – Reporting

Learning how to decipher individual object sizes broken down by type and filegroup, then report on the data that was retrieved from the DMOs.

T-SQL Tuesday #006: “What About BLOB?” (May 2010)

Invitation and roundup.

My Article: T-SQL Tuesday #006: A Blobbing We Will Go

In the DMVs for SQL 2005 and SQL 2008, there is more than one place you can find information about the size of your LOB data.  And it goes without saying that there is more than one way to find information about LOBs in your database.

T-SQL Tuesday #007 – Summertime in the SQL (June 2010)

Invitation and roundup.

My Article: Data Compression

I see database compression, as offered with SQL 2008, to be more like these file compression utilities than DriveSpace.  Data compression in SQL 2008 is not an all or none implementation.  You get to pick and choose what gets compressed.  That is a big time bonus for me.

T-SQL Tuesday #008: Gettin’ Schooled (July 2010)

Invitation and roundup.

My Article: Gettin’ Skewled

I am learning that learning is not just formalized education in a classroom or in specific settings.  There are things to be learned from all aspects of life.  These things can be learned with only a little observation.

T-SQL Tuesday #009: Beach Time (August 2010)

Invitation and roundup.

I hosted this particular event.

My Article: R & R

I find it completely useless to go on vacation if I am going to be checking email or project statuses every 10 minutes.  There is no rest or relaxation in doing those things while I am supposed to be doing something else.  Vacation should be fun and enjoyable.  Thus, if I am to enjoy vacation, I need to do a few extra things in the office prior to leaving.

T-SQL Tuesday #010 – Indexes (September 2010)

Invitation and roundup.

My Article: TSQL Tuesday Indexes and Blobs

How does one find which columns in the database are LOB columns?  I knew I had some past blog posts about various aspects of LOBs, but I had never broken it down to find the LOB columns themselves.  Even better, I wanted to know which of those BLOB columns were in which indexes.

T-SQL Tuesday #011 – Misconceptions in SQL Server (October 2010)

Invitation and roundup.

My Article: A Haunting TSQL Tuesday Tale

I chose the myth that truncate is unrecoverable. Check it out and see how a truncate is most certainly capable of a rollback or being recoverable.

T-SQL Tuesday #012 – Why are DBA skills necessary? (November 2010)

Invitation and roundup.

My Article: T-SQL Tuesday #012 – Skills

As a DBA, we occasionally have the opportunity of attracting a new client or a new job or a new database.  It seems that more often than not, some of the skills requisite (at least they should be) of owning a database are missing. (Nunchuku skills could come in handy from time to time too!)

T-SQL Tuesday #13 – What the Business Says Is Not What the Business Wants (December 2010)

Invitation and roundup.

My Article: T-SQL Tuesday #13 – Business Requirements

I think a common area that is easily overlooked when it comes to requirements and interpretation of requirements is report creation.  A common problem is that there are no defined or written requirements for the creation of a report.

T-SQL Tuesday #014 – RESOLUTIONS (January 2011)

Invitation and roundup.

My Article: TSQL Tuesday 14: Committed

This month was all about resolutions and goals. My list of goals was pretty good and entirely profession-based.

T-SQL Tuesday #015 – Automation in SQL Server (February 2011)

Invitation and roundup.

My Article: T-SQL Tuesday #15 DBA Automaton

I shared a script that will load a sample of data from every table in every database for every column and give you the length of the pertinent columns (I have excluded obvious columns such as numeric types and certain LOB types).

T-SQL Tuesday #016 – Aggregate Functions (March 2011)

Invitation and roundup.

My Article: T-SQL Tuesday #016: Aggregates and Statistics

Super geeky article on stats, quartiles and the like. For my data analysis and trending, I wanted to find a simple distribution across quartiles.

T-SQL Tuesday #017 – APPLY Knowledge (April 2011)

Invitation and roundup – unavailable on origin site.

My Article: T-SQL Tuesday #17 – APPLY Knowledge

In this article I take an unexpected twist in my application of the APPLY operator. In that twist, I show a quick and simple script to grab role members.

T-SQL Tuesday #018 – CTEs (May 2011)

Invitation and roundup.

My Article: T-SQL Tuesday #18 – CTEs

I shared a rather complex series of CTEs that are used in the same way a hierarchy and ancestry tree would be generated – except for Foreign Key relationships. That script was a load of fun.

T-SQL Tuesday #019 – Disasters and Recovery (June 2011)

Invitation and roundup.

My Article: T-SQL Tuesday #19 – Disasters & Recovery

Who needs a single disaster when you can enjoy multiple disasters in a single sitting? I share three different types of disasters. I am sure many have had experiences with all three and probably many more types of disasters.

T-SQL Tuesday #020 – T-SQL Best Practices (July 2011)

Invitation and roundup.

My Article: N/A

I missed the announcement or something like that.

T-SQL Tuesday #021 – A Day Late and Totally Full Of It. (Aug. 2011)

Invitation and roundup.

My Article: TSQL Tuesday 21 – FAIL FAIL FAIL

I don’t hide much as I share my feelings about crap code and then show my own crap code followed by a little bit of explanation on an improved version of the same code.

T-SQL Tuesday #022 – Data Presentation (September 2011)

Invitation and roundup

My Article: T-SQL Tuesday #22 – Data Presentation

I chose to touch on several aspects of data presentation – Performance, Accuracy, Display, and Business Requirements.

T-SQL Tuesday #023 – Joins (October 2011)

Invitation and roundup.

My Article: N/A

D’oh – missed another one in 2011.

T-SQL Tuesday #024 – Prox ‘n’ Funx (November 2011)

Invitation and roundup.

My Article: T-SQL Tuesday #024: Prox ‘n’ Funx

Big takeaway from this month was the value of a very good string splitter.

T-SQL Tuesday #025 – Invitation to Share Your Tricks (Dec. 2011)

Invitation and roundup.

My Article: T-SQL Tuesday #025 – Holiday Gifts

In the spirit of the Holidays, I wanted to share some tricks and tips.  They can be my gifts to you during the holidays.  And maybe they can help you give to somebody else.

T-SQL Tuesday #026 – Second Chances (January 2012)

Invitation and roundup. (Original site toasted).

My Article: TSQL Tuesday #26 or #23 – Identity Crisis

Having missed T-SQL Tuesday #23, and with the topic being about second chances, I chose to write about JOINs, sort of fulfilling the requirements for both T-SQL Tuesdays in one shot.

T-SQL Tuesday #027 – Invitation to The Big Data Valentine’s Edition (February 2012)

Invitation and roundup.

My Article: N/A

After that second chance I blew it the very next month. Luckily I have a good reason – Valentine’s Day!

T-SQL Tuesday #028 – Jack of All Trades or Master of None (March 2012)

Invitation and no roundup.

My Article: T-SQL Tuesday #028 – Jack of All Trades, Master of None?

This one brings up bad memories of when, as the DBA, I was also required to be the janitor.

 

More updates coming soon!

What is T-SQL Tuesday?

T-SQL Tuesday is a monthly blog party hosted by a different blogger each month. This blog party was started by Adam Machanic (blog|twitter). You can take part by posting your own participating post that fits the topic of the month and follows the requirements below. Additionally, if you are interested in hosting a future T-SQL Tuesday, contact Steve Jones via the tsqltuesday website – here.

How to Participate

  • Your post must be published between 00:00 GMT Tuesday and 00:00 GMT Wednesday.
  • Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
  • Trackbacks should work. But, please do add a link to your post in the comments section below so everyone can see your work.
  • Tweet about your post using the hash tag #TSQL2sDay.

T-SQL Tuesday 109: Influence Somebody Recap

Published on: December 18, 2018

How have you impacted somebody in the community?

This month I am in charge of the topic for TSQLTuesday. The invite was published here a couple of weeks ago. I knew I was picking something rather difficult for people in general and probably more difficult for technologists due to the notion of jumping head first into a discussion with somebody about yourself. I have been asked by previous employers to do my own personal annual job review. This idea is very much like that – but more personal.

Some say it eats at the core of being prideful to discuss this topic or to broach the “review” with others. I like to take a different angle on it. I see it as a means to personal growth that has nothing to do with bragging. I tend to find that people gifted with humility will ask others how they could improve professionally or personally. Being able to find where you are doing well or not so well is key to progress. I would hope those that participated found this difficult exercise helpful in uncovering a new angle on personal and/or professional improvement.

 

Discomfort

There is no hiding from the fact that this task provided a great deal of discomfort and probably angst, as demonstrated by this tweet.

 

Multiple participants, as you will see as you read their articles, shared their feelings on the level of discomfort and difficulty they had with the topic. I expected the discomfort, I felt it too.

As I read the articles, I could feel the discomfort alongside the author. I get it – it’s hard because none of us want to come off as bragging about ourselves. Take into account that discomfort and then the message that each of these people has to share with you!

WHO?

Steve Jones (b | t) – Influence. Steve is not comfortable writing about this topic, yet he dives in to share with us. As he does, he hits a very strong note for me. We, the SQL Community, give a lot of ourselves and that giving grows exponentially. We give of ourselves and many times we end up the recipient of this exponential experience.

It can be scary and intimidating to share knowledge with others publicly, but it is also immensely rewarding! -Steve Jones

I believe the same can be said of sharing a personal review publicly.

Wayne Sheffield (b | t) – Influence Somebody. I personally know that Wayne had a hard time with this topic and sharing an article. Wayne shares a couple of stories outlining some very small actions on his part. He was largely unaware of his influence and in turn ended up dumbfounded and awestruck at what his influence meant to other people.

Small actions. That’s all it takes to make a positive difference in a person’s life. Be interested, and encouraging. You never know what kind of effect you are truly having on people. And when you do find out, prepare to be astonished. – Wayne Sheffield

Malathi Mahadevan (b | t) – To Influence and be Influenced. Mala is a very humble person. Mala shares a personal experience about her influence on several other people as she worked on her very first book project this past year. The really cool part is the joy she got from seeing so many people delighted at the opportunity to be in the book.

It is a positive way to end the year on a tone of gratitude to people who have influenced you – and it is doubly positive/uplifting to see/hear of what you have done to other people too. – Mala

I am grateful for the friendship I have with Mala, she is a genuine and good person.

Shane O’Neill (b | t) – Influence Somebody. Shane shares how difficult this task was going in and even stumbles on a bit of writers block. Shane doesn’t go out daily to try and find somebody to profoundly impact. He goes about his day in an unassuming way as he just does what he does – shares knowledge just because it needs to be shared. Shane is aware of the impact as people have met him and thanked him personally for his contributions to the community. That’s a win win!

I’ve had some great surprises from people in the community about stuff they’ve seen on here.

It’s nice to know that when you’re shouting into the void, that the void sometimes shouts back.

Each token of appreciation, each expression of gratitude is a replenishment to my morale.

Just a way to say that all efforts put in, while not showing straight away, build up. – Shane O’Neill

I am sure many of us have felt that replenishment to morale from the various tokens of gratitude. The sincere and personal sharing of appreciation carries so much more weight and replenishes the morale so much more completely.

Rob Farley (b | t) – People I Influence. Rob is a humble, kind, and caring person! He struggled with the topic as he illustrated on twitter. All of that aside, he hit on a note I was hoping somebody would hit on – employees and coworkers. Employees are a reflection of leadership. If the leader is a class act, s/he will impact the employees in a positive manner for the better of the company and the employee.

That aside, Rob is hopeful of being an influence to other people outside of work. Rob is not comfortable with thinking about whether he is influential or not and I get it. I don’t think many of us think regularly about our sphere of influence – until somebody gives us that token of gratitude for some small action we performed.

For a while now, I’ve thought that I would rather employ MVPs than be one myself (although I hope this doesn’t actually cause me to lose my status), and I have often looked for opportunity to get my team in front of people.

As an employer who encourages his staff to be community-minded and on the path to being MVPs like Rob is, I think this is an outstanding level of influence that is merely demonstrative of the humility that resonates within Rob.

Jason Brimhall (me) – To Influence or be Influenced. I share a couple of experiences in my entry for the month. More important than the stories is the message that as one gives of themselves, there is a gain that will be returned to that person at some point that is likely exponential in nature to the little token of giving that happened in the first place. Give and you will receive! Steve hit on that same note.

And in case you are still curious what this TSQL Tuesday thing is, here are the details…

What is T-SQL Tuesday?

T-SQL Tuesday is a monthly blog party hosted by a different blogger each month. This blog party was started by Adam Machanic (blog|twitter). You can take part by posting your own participating post that fits the topic of the month and follows the requirements below. Additionally, if you are interested in hosting a future T-SQL Tuesday, contact Steve Jones via the tsqltuesday website – here.

How to Participate

  • Your post must be published between 00:00 GMT Tuesday, December 11, 2018, and 00:00 GMT Wednesday, December 12, 2018.
  • Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
  • Trackbacks should work. But, please do add a link to your post in the comments section below so everyone can see your work.
  • Tweet about your post using the hash tag #TSQL2sDay.

Synonyms in SQL Server – Good and Bad

When SQL Server 2005 was released, a nifty little feature was included called synonyms.  Despite the feature being around since SQL Server 2005, I think it is often under-utilized or, more importantly, implemented in a very bad way.

Today I want to share a couple of examples.  We will take a look at examples of both good and bad implementations of synonyms.

First, let’s take a look at the syntax for creating a synonym.  Per BOL (and for the fun of it, here is the 2005 link).
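The BOL snippet itself is not reproduced in this copy; the syntax is roughly:

CREATE SYNONYM [ schema_name_1. ] synonym_name
FOR [ server_name.[database_name].[schema_name_2]. | database_name.[schema_name_2]. | schema_name_2. ] object_name;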

So a sample implementation of a Synonym could be something like the following.
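The sample is missing from this copy; a simple stand-in (all names hypothetical, chosen to match the scenario later in this article) might be:

CREATE SYNONYM dbo.SynStageTable
FOR StagingDB_Dev.dbo.StageTable;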

Before we delve into that sample synonym, let’s look at an example of a really bad implementation.

The BAD

While working with a client, I received a request to look into why a linked server query was failing.  (This isn’t necessarily the bad part; just hang in there for a bit.)  The message was something like this:

The OLE DB provider “SQLNCLI10” for linked server “blahblahblah” indicates that either the object has no columns or the current user does not have permissions on that object.

The error message seems at least semi-descriptive and gives a starting point.  In this case, I decided to verify the linked server was created properly, verified that the permissions were done properly and even tested the linked server.  On the source (linked) and destination server (let’s call the Source server ServerA and the Destination server we will call ServerB), I verified that permissions were in place for each database to be touched.  Still no dice!

Well, let’s go take a look and see if that referenced table actually exists.  It did not!  Does it exist as a view?  It did not!  Alas, the table existed as a synonym.  This is where it gets wonky.  In looking at the definition of the synonym, I found that the table defined in the synonym had a linked server table as its source.  To top things off, the linked server was back on the originating server that was coming across the link in the first place.  So yes, ServerB initiated a query against ServerA, which in turn reached back across a linked server to ServerB to pull the data.  But the data needed (as defined by the vendor) was available on ServerA – supposedly.  Reality had that data actually sitting on ServerB the whole time.

At any rate, thanks to having a synonym for each and every table sitting on ServerA that referenced a table across a linked server on ServerB, we had mass confusion.  In the end, the query was far simpler to execute by just pulling it from the originating query server (ServerB).

This implementation of a synonym was not the best.  All it did was cause confusion, create documentation inaccuracies and delay the developer from accomplishing her task.  Do you really need 1000s of synonyms in your database?  Do you need all of them going across a linked server?  If you do, did you ever think about the potential for performance issues?  (The vendor in this case used those linked servers and synonyms to perform a data conversion that took 36 hrs each time for a rather small dataset – ouch!!!!).

On the other Hand

Imagine, if you will, two databases sitting on the same box.  One database will be for your user data, and the other for “staging” data for processes such as those related to ETL.  Imagine further that, for some inane reason, the naming standard of your databases must include the suffix denoting the environment of the database.

Now picture an SSIS package that must utilize tables from both databases in data sources somehow.  At least one of the tables has millions of records.  And both of the tables have nearly 100 columns.  Even just pulling in the minimum amount of data using a query from each source can cause memory issues.  Two separate data sources means you will likely have a sort transformation (for each source) as well as a join transformation.

Trying to reduce the amount of data in the larger table source could be done via TSQL.  But to reference a database in one environment versus another environment means a code change with each deployment (due to the TSQL – think three part naming).  So you have been hampered by the environment.  Or have you?

By using a synonym in this situation, the data can be joined in a T-SQL data source by referencing that synonym.  Let’s look back at the sample synonym posted earlier in this article.
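Restated with the hypothetical names from the earlier sketch, the per-environment synonyms and the single query that uses them might look like this:

-- Development server: synonym points at the Dev-suffixed database
CREATE SYNONYM dbo.SynStageTable
FOR StagingDB_Dev.dbo.StageTable;

-- Production server: same synonym name, different target database
CREATE SYNONYM dbo.SynStageTable
FOR StagingDB_Prod.dbo.StageTable;

-- The SSIS data source query stays identical in every environment
SELECT u.CustomerID
    , u.OrderDate
    , s.StageStatus
FROM dbo.Users u
INNER JOIN dbo.SynStageTable s ON u.CustomerID = s.CustomerID;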

You can see that this synonym follows the same sort of naming standards as was just laid out in the preceding scenario.  If I create a synonym in each environment by the same name, and referencing the appropriate environment named database, I have just opened up a performance tweak for my SSIS datasource.

By implementing this slight tweak, I have been able to gain a 10x performance improvement in package performance.  I am now requiring SSIS to ingest fewer records and thus chew up less memory.  Fewer transformations are required, and the package can fly straight into the transformations that matter, rather than tinkering with extra steps just to get the data into a usable state for those transformations.

There are other benefits within SSIS to using synonyms for databases on the same server as well.  Especially when dealing with this kind of naming standard that requires the databases to be named differently in each environment.

Conclusion

How you use a synonym can be a huge asset or it can be a significant dampener to performance.  There are benefits and uses for these nifty little things.  Check them out and let us know how you have been able to put synonyms to use to benefit you.

Check out some of these other articles on synonyms here and here.

Maintenance Plan Owner – Back to Basics

We all inherit things from time to time through our profession.  Sometimes we inherit some good things, sometimes we inherit some things that are not so good.  Other times we inherit some things that are just plain annoying.  Yet other times, we inherit things that may be annoying and we probably just haven’t discovered them yet.

Dizzying, I know.

Inheritance

Have you ever taken over a server that had several maintenance plans on it?  Have you ever really checked who the owner of those plans is?  Or, maybe you had a failing job relating to one of these maintenance plans and you changed the job owner, but did you really fix the root cause?  That could be one of those things that you inherited that could be annoying but you just don’t know it yet.

Step by Step

No this is not New Kids on the Block (I think I just threw up in my mouth thinking that).

Let’s create a generic maintenance plan and see what happens.

The first thing we do is navigate to Maintenance Plans under the Management menu in Management Studio.

 

Right-click the Maintenance Plans folder and select New Maintenance Plan… from the context menu.  This will prompt us with the following dialog box.

In this box, we can type a name for this Maintenance Plan that is to be created.  I chose MaintPlanOwner, since that is the topic of this article.

After clicking OK on this dialog box, you will be presented with a blank canvas with which to design your maintenance plan.  I have chosen a simple task for the purposes of this article.

I will create a subplan named Statistics and add the Update Statistics task to the canvas.

You can see this illustrated to the left.  I chose to update the statistics on all databases and left all other options as the default option – for simplicity of this article.

At this point, the only thing left to do is to save this Maintenance Plan.  Once the plan is saved, then we can move on to the next step – some fun with TSQL.

 

 

Fun with TSQL

This is the stage of the article where we get to play with TSQL and investigate at a high level the Maintenance Plan we just created.

Within the msdb database, we have some system tables that store information about SSIS packages, DTS packages, and Maintenance Plans.  We will be investigating from a SQL 2008 and SQL 2005 standpoint (it changed in 2005 and then again in 2008).

In SQL 2005, we can query the sysdtspackages90 and sysdtspackagefolders90 to gain insight into who owns these Maintenance Plans.  In SQL 2008 and up, we can query sysssispackages and sysssispackagefolders to gain the same insight.  These system tables are within the msdb database.

In SQL Server, we can use the following to find that I am now the owner of that maintenance plan we just created.
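The original query is not included in this copy; a sketch that follows the description below (dynamic SQL picking the correct msdb table based on version, joined to sys.server_principals for the owner name) would be:

DECLARE @sql NVARCHAR(MAX);

IF CONVERT(INT, PARSENAME(CONVERT(VARCHAR(32), SERVERPROPERTY('ProductVersion')), 4)) >= 10
BEGIN
    -- SQL 2008 and up
    SET @sql = N'SELECT p.name AS PlanName, sp.name AS PlanOwner
FROM msdb.dbo.sysssispackages p
INNER JOIN msdb.dbo.sysssispackagefolders f ON p.folderid = f.folderid
INNER JOIN sys.server_principals sp ON p.ownersid = sp.sid
WHERE f.foldername = N''Maintenance Plans'';';
END
ELSE
BEGIN
    -- SQL 2005
    SET @sql = N'SELECT p.name AS PlanName, sp.name AS PlanOwner
FROM msdb.dbo.sysdtspackages90 p
INNER JOIN msdb.dbo.sysdtspackagefolders90 f ON p.folderid = f.folderid
INNER JOIN sys.server_principals sp ON p.ownersid = sp.sid
WHERE f.foldername = N''Maintenance Plans'';';
END

EXEC sys.sp_executesql @sql;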

Notice that in this query, I reach out to the sys.server_principals catalog view.  I did this to retrieve the name of the owner of the package found in the sysdtspackages90 or sysssispackages table, respective to the version of SQL Server. I am also running a dynamic SQL query to support both tables depending on the version of SQL Server.  I figured this might be a tad more helpful than the previous version here. This query would yield the following result set for that new “Maintenance Plan” that was just created.

Caveat

Let’s assume that this package is scheduled via a SQL Agent job on a production server.  I then get moved to a different department and no longer have permissions on this particular production server.  The job will start failing due to the principal not having access.  One fix would be to change the owner of the job.

That will work.  However, there is a problem with that fix.  As soon as somebody opens and saves the Maintenance Plan, the owner of the job will revert back to the owner of the Maintenance Plan.  When that happens, then the job will fail again.

A permanent fix is needed.  The permanent fix is to change the owner of the Maintenance Plan.  The following will change the owner to ‘sa’ for both SQL 2005 and SQL 2008 (and up).

SQL 2005
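The statement itself is missing here; a common approach (an assumption on my part, and it does mean updating a system table in msdb, so use with care) is to set the owner SID directly:

UPDATE msdb.dbo.sysdtspackages90
SET ownersid = SUSER_SID('sa')
WHERE name = 'MaintPlanOwner';   -- the plan created earlier in this article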

SQL 2008
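And the equivalent against the SQL 2008 (and up) table, with the same caveat:

UPDATE msdb.dbo.sysssispackages
SET ownersid = SUSER_SID('sa')
WHERE name = 'MaintPlanOwner';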

Now if you run the code used earlier to investigate, you will find that the owner has indeed changed.  The results of that query should be similar to the following.

There you have it.  No more hair tugging over something as benign as the owner of a Maintenance Plan.  This is one of those things that should be looked at as soon as you inherit a new server.

The Wrap

In this article I took a rather long route to a simple fix. It’s easy to try each of the steps I showed in this article thinking it will help. It isn’t illogical to try some of those steps. They just don’t work unfortunately. In the end, getting to know the settings in the database and what the errors are really trying to get at is most helpful. Sometimes, it just takes a few more steps to get to the real meaning of the error.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.
