PowerShell to Backup XE Session

Quite some time ago, I shared a few articles that peeled back the top layer of how to use PowerShell (PoSh) with Extended Events (XEvents). Among those articles, I showed how to retrieve the metadata, the predicates and actions, and the targets (to name a few). Those articles cover some of the basics, which means there is plenty of room for deeper-dive articles involving both PoSh and XEvents. One topic that can help bridge us to the deeper end of the XEvents pool is how to generate scripts for our XEvent Sessions.

In this article, I will venture to show how to generate good backup scripts of our sessions using PoSh. That said, there are some caveats to using PoSh to generate these scripts and I will share those as well.

Wading to the Deeper End!

PoSh provides many methods and classes that allow us to more easily do various things. One of my favorite uses for PoSh is to automate tasks across the enterprise database server footprint. There is nothing like creating a script that can hit every server one manages in one fell swoop to minimize the effort and time spent touching each of those servers. After all, a prime directive of the effective DBA is to be lazy.

So, when I look at creating a script in PoSh, I also look at how I can make that script work for a single server just the same as it would for hundreds of servers. That will be one of the “nice to have” add-ons that you will see in this script. But how did we get there in the first place?

A critical component of using PoSh to generate the scripts for each of the XEvent Sessions on the server is the GetScript() method. GetScript() can be called for many different objects within PoSh. Another critical component is the set of XEvent SMO classes that were added as of SQL Server 2012. Take a mental note of that because it will come into play in a bit.
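
As a rough sketch of that pattern (assuming the SqlServer module is loaded and the instance is SQL Server 2012 or newer; the instance name and connection details below are placeholders, not the finished script):

    # Rough sketch only: connect to an instance, open the XEvent store, and script each session
    Import-Module SqlServer

    $instance = 'localhost'
    $sqlConn  = New-Object System.Data.SqlClient.SqlConnection("Data Source=$instance;Initial Catalog=master;Integrated Security=True")
    $sfcConn  = New-Object Microsoft.SqlServer.Management.Sdk.Sfc.SqlStoreConnection($sqlConn)
    $xeStore  = New-Object Microsoft.SqlServer.Management.XEvent.XEStore($sfcConn)

    foreach ($session in $xeStore.Sessions) {
        # ScriptCreate() returns a script object; GetScript() renders it as T-SQL text
        $session.ScriptCreate().GetScript()
    }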

In addition to these critical components, I used some of the techniques that I mentioned in the previous articles as well as in my article on XEvent management with PoSh. As I have said many times, these articles are building blocks and do rely upon many fundamentals divulged in previous articles.

Getting into the Script

Given this is a script that interacts with SQL Server, we need to ensure the SqlServer module is installed and loaded. If it is already installed, that is great. During my testing, I found that one of the servers I ran this on locally did not have it installed. With that discovery, I figured it was just easier to implement a check: if the module is not installed, then install it.
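
Something along these lines handles that check (a minimal sketch; adjust the scope and repository trust to your environment):

    # Install the SqlServer module from the PowerShell Gallery only if it is not already present
    if (-not (Get-Module -ListAvailable -Name SqlServer)) {
        Install-Module -Name SqlServer -Scope CurrentUser -Force
    }
    Import-Module SqlServer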

The next major component is how I fetch my list of servers. I prefer to have all of my enterprise SQL Servers listed within SQL Server itself. I keep a table with all of the servers and designate various attributes for them (e.g. prod or dev, business unit, decommissioned, etc.).
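
A simple query against that inventory table drives the rest of the script. Here is a hedged sketch of the lookup (the instance, database, table, and column names are all assumptions):

    # Hypothetical inventory lookup; swap in your own instance, database, table, and filters
    $serverList = Invoke-Sqlcmd -ServerInstance 'DBAUtility' -Database 'DBAAdmin' -Query "
        SELECT ServerName, InstanceName
        FROM   dbo.ServerInventory
        WHERE  IsDecommissioned = 0;"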

Once I know which servers need to be queried, I need to make sure I set my PoSh location properly.
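
With the SqlServer module loaded, the XEvent provider path is where the sessions live, so the location gets set per server. Roughly (a sketch that assumes the default instance):

    # Point the provider at the XEvent folder for each target server (default instance assumed)
    foreach ($srv in $serverList) {
        Set-Location "SQLSERVER:\XEvent\$($srv.ServerName)\DEFAULT\Sessions"
        # ...script the sessions from here...
    }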

With this script, I also have the ability to script out a specific XEvent session or to script every XEvent session deployed on the server. When scripting all sessions on the server, I opted to put all of the sessions into a single output script file. Since this script can be run against multiple servers, I name each output according to the server name and the instance on that server, followed by the XEvent session name (or all_sessions in the event all sessions are being returned).
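
The naming convention itself is easy enough to piece together. Roughly (the variable names and output folder here are mine, not the production script's):

    # Build the output file name per server/instance, per the convention described above
    $baseName = "$($srv.ServerName)_$($srv.InstanceName)"
    $outFile  = if ($SessionName) { "C:\Database\XE\${baseName}_${SessionName}.sql" }
                else              { "C:\Database\XE\${baseName}_all_sessions.sql" }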

Since I prefer to use a custom event_file path, I also wanted to ensure my script would produce a means of creating that custom file path. I needed to validate that this only gets added for those sessions that have an event_file target.
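
The check itself can be done against the session's target collection. Here is a hedged sketch of the idea (the property access follows the SMO target/field layout, but treat the details as assumptions):

    # Emit a directory-creation step only when the session actually has an event_file target
    $fileTarget = $session.Targets | Where-Object { $_.Name -match 'event_file' }
    if ($fileTarget) {
        $xelPath = Split-Path -Parent $fileTarget.TargetFields['filename'].Value
        if ($xelPath) { "EXECUTE master.dbo.xp_create_subdir N'$xelPath';" }
    }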

There are a few more nuggets throughout that help bring this whole thing together. This is what the completed script looks like.

If you follow along in that script, you will see that I concatenate the sessions and then force an overwrite of the output file. This ensures that I am not appending to an existing file and that I have the most current version of the XEvent session script. I configure this to output to the local server from which the script is executed.
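
In other words, something like the following at the end of the loop (a sketch; $sessionScripts stands in for the collected session scripts):

    # Join all scripted sessions with batch separators and overwrite any previous output file
    ($sessionScripts -join "`r`nGO`r`n") | Out-File -FilePath $outFile -Encoding UTF8 -Force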

Caveats

Using PoSh to generate scripts is extremely handy, especially when working across numerous servers. That said, there are certain problems of which one must be aware. First is that the XEvent SMO classes for PoSh were introduced with SQL Server 2012. These objects are not backwards compatible with older versions.

The second issue may be less of a concern to you than it is to me, but it goes well with concerns I mentioned in the article I wrote about how to do this same task in TSQL. The script generated via PoSh adds the events in alphabetical order. I have also seen some cases where the predicate is not in the exact order in which it was originally created. Predicate order is essential! Event order may not be essential for anything other than source control or for the demanding OCD DBA. Your mileage may vary.

Final Thoughts

Bringing PoSh to the world of XEvents can open your eyes to better automation and DBA efficiency as you manage your enterprise. PoSh can also help us dive deeper into the world of XEvents, as we will see in some upcoming articles.

Working with Extended Events will help you become a better DBA. Working with PoSh can also help you become a better DBA across many different tasks. Combine the two and you just might have a super weapon.

Interested in exploring the vast world of Extended Events? Check these out! The library of articles is pretty large and continues to grow.

PowerShell ISE Crashes

Working with PowerShell brings a lot of advantages and power to help manage a server. The more current your PoSh version, the more efficiently you will be able to manage your server. Sometimes getting to the current PoSh versions comes with a little pain such as ISE crashes.

I recently had the misadventure of working through some ISE crashes after bringing some systems up to PoSh 5.1 from either PoSh 2.0 or 3.0. It’s not a very fun situation to run a WMF update and then run into a crash of any type when testing whether it worked. Your first thought is that something terrible has happened.

As it stands, the problem is more of a nuisance than a critical failure. That said, it is enough of a problem that anyone who uses the ISE or .Net applications may experience a slight cardiac event.

Fonts

As you work to quickly recover from your missed heartbeat, you start digging through logs and then hitting good old trusty Google.

Diving through the logs, you might just happen across an error similar to the following:

Problem signature:
Problem Event Name: PowerShell
NameOfExe: PowerShell_ISE.exe
FileVersionOfSystemManagementAutomation: 6.1.7600.16385
InnermostExceptionType: System.Xml.XmlException
OutermostExceptionType: System.Reflection.TargetInvocation
DeepestPowerShellFrame: indows.PowerShell.GuiExe.Internal.GPowerShell.Main
DeepestFrame: indows.PowerShell.GuiExe.Internal.GPowerShell.Main
ThreadName: unknown

Maybe the first error you encounter might look like this one instead:

System.TypeInitializationException

“FileFormatException: No FontFamily element found in FontFamilyCollection
that matches current OS or greater: Win7SP1”.

Inner exception originates from: CompositeFontParser

Either way, the error points us back to the same fundamental problem: the ISE won’t load, you get an error message, and you can’t confirm that the WMF update was applied properly.

As you work your fingers faster and faster through the pages on Google, you discover that this problem is caused more explicitly by a patch for the .NET Framework and not necessarily by the work to upgrade your PoSh version. It only waited to manifest itself until after the upgrade.

That’s gravy and all, but how does one fix the problem? For me, the quickest and most reliable fix was to simply jump straight to the root of the problem – fonts. The ISE is a WPF application and it also requires a fallback font (if a character isn’t present in your font set, then the app chooses a substitute from the fallback font – or something like that).

The fix is extremely simple and really underscores why this is merely a nuisance issue and not a critical problem. Thus it shouldn’t cause any sort of sinking internal feeling. There are a few plausible fixes floating around out there. I recommend just doing a manual font replacement (sketched in PowerShell below). It is but three simple steps:

  1. Download GlobalUserInterface.CompositeFont
  2. XCOPY the font to %windir%\Microsoft.NET\Framework\v4.0.30319\WPF\Fonts
  3. XCOPY the font to %windir%\Microsoft.NET\Framework64\v4.0.30319\WPF\Fonts
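
The same three steps, sketched in PowerShell (this assumes the font file was downloaded to the current folder):

    # Copy the fallback font into both the 32-bit and 64-bit WPF font folders
    $font = '.\GlobalUserInterface.CompositeFont'
    Copy-Item $font "$env:windir\Microsoft.NET\Framework\v4.0.30319\WPF\Fonts\"
    Copy-Item $font "$env:windir\Microsoft.NET\Framework64\v4.0.30319\WPF\Fonts\"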

After you have copied the font to those two directories, all that needs to be done is to launch the ISE. I ran into the same problem on three or four servers and the fix took no more than five minutes on each of them.

Conclusion

I previously mentioned that I have been working more and more with PoSh to try and improve my skill set there. This is one of those very low-level trinkets that I ran into as I have been working to hone my skills in that tech. For other, possibly interesting, articles about my experiences with PowerShell, you can check out these articles.

Given this job is tightly related to the system_health black box sessions (sp_server_diagnostics and system_health xe session), I recommend fixing the job. In addition, I also recommend reading the following series about XE and some of those black box recorder sessions – here.

Powershell, Me and the MCM part II

Comments: No Comments
Published on: December 20, 2010

Last week I posted the first part of this series.  I thought it would be a good idea to give a little follow-up on how my foray into Powershell went as I attempted to download these videos.

Attempt 1:  Failed miserably.  I did not have enough disk space and thus only downloaded about half of the videos.  I tried to remedy this by changing the path to a different volume but that threw out some error messages.  The script works best when continuing to use the $env variable in the path for the location to dump the files.

Attempt 2:  From a different machine this was working beautifully until I noticed that every once in a while a file would error out and the download would resume with the next video in the list.  When I attempted to look up those particular videos, they were all present.  So I restarted thinking it was a local environment problem.

Attempt 3:  I figured out that attempt 2 was failing because my laptop was going to standby – duh!!!  I disabled the standby and finally got the download to work from start to finish without fail.

Now, I can place these videos on my Android and watch/listen from anywhere I have my phone.  Since I have an auxiliary port in my car, I can just plug the phone in, let it charge, and listen to the videos while I commute – through the car speakers.  It is nice to have such a high level of training material available for a quick refresher or for that first time through.  I recommend people get on board and take advantage of what Microsoft, Paul Randal, and others have given to the community through this training.  Since I pay out of pocket for my training – this is the right price for me.

Thanks for the Christmas Gift.

Powershell, Me and the MCM

Comments: 3 Comments
Published on: December 16, 2010

If you haven’t heard yet, Microsoft has made some significant changes to the MCM program.  The changes make the certification more accessible to the masses.

You can read more about that from sources such as the following:

  1. Grant Fritchey
  2. Jason Strate
  3. Glenn Berry
  4. Microsoft

This post is not so much about the changes.  Rather, it is about the information dissemination related to the MCM.  I saw on twitter that there were some videos available to watch concerning the MCM training.  I was curious where these videos were, so I decided to go looking for them.  In my search, I came across the blog post referenced above by Jason Strate.  In that post, he has a link to the videos.  I decided to check out the videos and start downloading them so I could use them as study aids as I work on the MCITP.

Downloading those videos one by one is somewhat tedious and time consuming.  Thus while some were in the process of downloading, I started working on a few other things and saw another twitter post come across about a Powershell script to download those MCM training videos.  This is great news – except I have never run a powershell script.

Well, no time like the present to learn.  First thing to do is to check out the blog article about the script – it has a few downloads.  The blog post is by Eric Humphrey, and can be found here.  After having read what he had to say about the script and downloading the necessary files that he lists, it was time to find out how to run a powershell script.  A little looking and I came across this article.

Excellent – I am well under way now to run this script.  After following some of the recommendations in that article (e.g. security settings), it was time to look at the script and see what I needed to know from there, such as paths required for add-in files or paths for destination files.  Some quick adjustments to the userprofile path and to the download directory, as well as copying the HTMLAgilityPack into the userprofile path – and we are all set.
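
For anyone else making their first run at a script, the security piece boils down to something like this (a sketch; pick the policy your environment allows, and the script name here is just a placeholder):

    # Allow locally created and signed remote scripts to run, then execute the downloader
    Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned
    .\Download-MCMVideos.ps1   # placeholder name for the downloaded script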

Now I have the script running and downloading the files – but it looks like it is going to be running for a long time.  This is a very useful tool for me at this moment.  This demonstrates another use for Powershell as well.  I haven’t yet decided that I will pursue the MCM; however, I will use these videos to improve my knowledge base on SQL Server.  I would recommend that people go check out these videos.  It was somewhat surprising to me to see that many of the pages had not been viewed very frequently at all.  This is great training material.  I recommend getting it and adding it to the reference material you may already have.

If you are just looking to browse the videos individually, you can find the site here.

Unicode, Emojis and Databases Oh My!

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: January 5, 2020

Over the past several (or pervious if you have been following along) articles, I have explored the use of special characters in certain database scenarios. Using these special characters brings certain fun and flair to your humdrum naming schemes. In addition to the fun, there come a few extra peculiarities to ensure everything works as expected.

While figuring out some examples, I found myself searching for good reliable sources constantly. I also found myself trying to find various emojis that would work well to represent one idea or another. Sometimes, the effort paid off, and sometimes it seemed frivolous.

If only there were an easy, comprehensive source to search through all of the unicode characters available to SQL Server. Then a thought occurred to me: why not create my own database with these characters?

Laying the Groundwork

After deciding that a local database would make it easier to research these characters, I quickly determined that I would need to figure out how to write a powershell script to scrape a list of characters from a website. That should be easy, right?

With a sample such as that, I just need to figure out how to pull the image for each character and the unicode value of each. The next task is to figure out where I can find some decent powershell examples on the web that do a similar task.
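
Had the scraping route gone any further, the starting point would probably have looked something like this (purely illustrative; the URL and the matching pattern are assumptions):

    # Illustrative only: pull a page and pick out code-point-looking strings from its links
    $page       = Invoke-WebRequest -Uri 'https://example.com/unicode-chart'
    $codePoints = $page.Links.href | Where-Object { $_ -match 'U\+[0-9A-Fa-f]{4,6}' }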

In addition to figuring out how to do the powershell end of this, there is the database side of things. What should the table design be for this kind of data? Which values and attributes should be stored? Am I overthinking this? Is this too much work?

Well, this is indeed far too much work. Especially given this awesome gem I found while trying to track down the exact reference link I was looking for in one of the pervious articles. You see, as it turns out, Solomon Rutzky had already done all (and I really do mean all) of the heavy lifting when he wrote a function that does exactly what I was looking to do – here. The function that Solomon created will return all 188,657 code points that are usable in SQL Server.

Now, instead of building out a script to import the data from somewhere else, I just need to employ the use of this fabulous script and move on with all the oodles of time I was spared.

Put a bow on it

Playing around with emojis in a database is a fun endeavor. Not only is it fun for personal growth, but it does have some business advantages. We live in an era where these emojis are popping up everywhere and are even being stored long term in many databases. I view it as a great opportunity to improve your skill set and better your career for the future.

Interested in learning about some deep technical information instead? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the twelfth article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

*Previous has been intentionally mis-spelled throughout this article as “pervious” as an ode to a fat finger mistake in a recent article.

Get a List of Files with Data

Categories: News, Professional, Scripts, SSC
Comments: No Comments
Published on: December 30, 2019

Suppose you have a directory on your server that houses over 300,000 csv files. These files are created automatically by a process related to your SQL Server operations, and that process runs on a schedule every 15 minutes.

Now suppose that you have been tasked with figuring out which of the files have data of value in them and which can be ignored. The files that can be ignored will all be either 0k or 1k in size; however, a 1k file can only be ignored if it contains nothing but a header. Every csv file should have a header. So, we must figure out a way to filter out those files that are 1k or less and contain only a header row.

Filtering out the files that meet these criteria will allow us to find the files that have useful data in them. Now why do you need to figure out which ones have useful data? Let’s just say that maybe some of the files were created with bad data in them, and you need to figure out which files may have bad data so those particular files can be regenerated. As luck would have it, the automated process does not have any sort of logging and does the bare minimum to create the files in the first place. (Sometimes you just inherit a flawed process where inadequate thought was given.)

Finding the Files

The hard part has been done at this point. At least you know some of the attributes that will help distinguish the wanted files from the unwanted ones. Sorting through these files by hand would be rather bothersome and cumbersome. The trick here is to find a way to search all of these files quickly and easily filter the bad from the good. Sounds like an opportunity for another automation script. I have just the powershell script for that.

With this script, I have a couple of things to help me find files of interest quickly. First, I have a filter in place that allows me to search a specific number of days’ worth of files. Second, I have a filter in place to ensure it only returns file names where there is more than just a header row present.
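
A minimal sketch of those two filters looks something like this (the path and the day count are assumptions):

    # Keep only recent csv files that contain more than just a header row
    $daysBack = 2
    $cutoff   = (Get-Date).AddDays(-$daysBack)

    Get-ChildItem -Path 'D:\CsvDrop' -Filter '*.csv' -File |
        Where-Object { $_.LastWriteTime -ge $cutoff } |                            # days-worth-of-files filter
        Where-Object { @(Get-Content $_.FullName -TotalCount 2).Count -gt 1 } |    # more than a header row
        Select-Object FullName, Length, LastWriteTime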

Now, instead of spending hours perusing files trying to find something of value, I have reduced my time spent to just a mere fraction of that. Next steps after this would be to go and add some additional logging and better robustness to the initial process to reduce the chance of bad data being put into the files in the first place.

Put a bow on it

Automated processes are fantastic. We as DBAs strive to have more tasks automated than not. That said, automation without planning is just creating a time sink later on in the job. At some point, you may need to employ some powershell script similar to this in order to find where your automated process has gone wrong (if you did not plan that process well enough from the start).

Interested in learning about some deep technical information instead? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the sixth article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

It’s an Honor: Microsoft MVP

Comments: No Comments
Published on: July 1, 2019

Today (July 1st) is the notification day for all Microsoft MVPs. Microsoft changed the process a couple of cycles back so that all notifications happen on the same day, bringing all candidates to a single emotional day. I received my notification early this morning that I was awarded the MVP again for the 2019-2020 cycle. This is my 6th award overall.

I am humbled to be among so many great professionals and honored at the same time for having been chosen. Are you curious about this MVP thing? Well, it’s sorta like being named MVP for a sports league while at the same time being a little bit different. Here is what Microsoft has to say about the MVP. I personally think you may get more out of reading about the MVP here (a little bias might be involved).

This is a community award for your efforts to promote and help others work with Microsoft products. I was awarded for my efforts in the Data Platform. Many others receive the award for their efforts in AI, Powershell, or Azure (as examples). Generally, the people that are awarded are seen as being passionate about the Microsoft Product. These people spend a lot of time, effort, energy, and sometimes money to help share their enthusiasm for the product.

Microsoft is doing a fantastic job at recognizing people for their efforts in promoting the MS brand. Not everybody that promotes that brand gets to be an MVP – sadly. If you didn’t make it this time, keep trying. I just advise that the goal be more about the community than the award. The MVP award is great, but giving to the community is the real reward.

I want to congratulate all those awarded a Microsoft MVP award past, present and future. I also want to thank all of those who have given to the community regardless of whether an award was received or not. There are many of you out there that have been a positive influence whether you know it or not – just for giving to the community and helping somebody else. THANK YOU!

Automating like an Enterprise DBA

Comments: 1 Comment
Published on: January 8, 2019

TSQL Tuesday

The second Tuesday of the month comes to us a little early this month. That means it is time again for another group blog party called TSQLTuesday. This party, started by Adam Machanic, has now been going long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a roundup of all TSQLTuesdays over here.

Automation

The theme as chosen by Garry Bargsley (b | t) is automation. Specifically, Garry has provided two requirements about automation for this month. As always, there is leeway in a post that participates in TSQL Tuesday.

One of the things that should seem very commonplace to a data professional is the effort to become a lazy DBA. A lazy DBA is not a bad thing. It just means the DBA works hard to automate the repetitive mundane tasks that may be tedious and/or time consuming. Time can always be better spent somewhere else, right?

If you are lacking in any ideas for what can be automated, here are a few TSQL Tuesday roundups from when we have talked about automation previously (yes it is a hot topic – ALWAYS!).

  1. August 2010 – Beach Time – what do you automate to earn beach time?
  2. February 2011 – Automation in SQL Server – Give your best tricks for making your life easier through automation.
  3. January 2014 – Automation – How much of it is the same?
  4. September 2015 – The Enterprise – How does one manage an enterprise of databases?
  5. September 2017 – PowerShell Automation – Find something and automate it.

In the past, I have written about automation a few times. Some of my favorites are automated restores, automation in the cloud, and my poor mans automated audit.

I automate many processes and have automated loads of tasks over the years. You see, automation means I can spend more time doing other tasks that require more time, more thought/concentration, more effort, and frankly more interest. So what have I automated recently that may be different from what I have previously written? This time, I have something that may seem utterly trivial but in the end it is rather tedious and time consuming to manually check over and over and over.

PowerShell

When I automate a task, I generally will try to use the tool that seems the most appropriate for the task: windows scheduler, SQL Agent, TSQL, SSIS, VB, C#, and now I am trying to add PoSh to that list. I don’t believe there is a one-size-fits-all automation tool. Sometimes, one has to be flexible enough to adapt other technologies into the tool-belt.

I have been working with a client to check their servers for SQL Server version, SSMS version, PoSH version and so on. All of this to try and get the appropriate updates installed on the server. Believe it or not, many of their servers were still running PoSH v2 and didn’t have any Service Packs installed for their database servers. OUCH!

Touching every single server (even if it is only 10 servers) is far too tedious and error prone. So, I spent a little time kludging something together with my neanderthal-level PoSH skills and found a way to retrieve various pieces of information from the servers and then store those data points in a database so I could report on the entire environment easily with TSQL. In addition, I could show change history and find approximately (at worst) when an update was installed.

Of all of the things I scripted to start tracking, the one I want to share this time can also be used to audit security on each of the database servers. I use the following script to audit the local admins on each of the database servers in the enterprise. In order to trap each local admin on the server, I also recurse through domain groups to find everybody that may have access. Here is a version of the script that is similar to what I use now.

Could I improve on the efficiency of this script? Most definitely I believe there is room for improvement. Remember, I am very novice at my PoSH skills. Scripting issues aside, it works and basically fetches a list of servers from a database, then iterates through each of those servers to fetch the complete list of local admins on each of the servers. Then the script writes out the complete list of admins for each server back to my database so I can generate a history of changes to the admins or report on who has admin access on the server.

For anybody that has admin access to a database server, the permission path (nested group path) is recorded in hierarchical form separated by the caret character (^). Using this script, I have been able to provide a report to domain admins to clean out various unwanted individuals whose access was not intended or necessary.
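
The production script is not reproduced here, but the general shape of the pattern is sketched below (every name – instance, database, tables, and columns – is an assumption, and the domain-group recursion is only hinted at in a comment):

    # Rough sketch: pull the server list, collect local Administrators on each, write results back
    $servers = Invoke-Sqlcmd -ServerInstance 'DBAUtility' -Database 'DBAAdmin' `
        -Query 'SELECT ServerName FROM dbo.ServerInventory WHERE IsDecommissioned = 0;'

    foreach ($srv in $servers) {
        # Get-LocalGroupMember requires PoSH 5.1; any domain groups returned here would still
        # need to be expanded (e.g. Get-ADGroupMember -Recursive) to build the ^-separated path
        $admins = Invoke-Command -ComputerName $srv.ServerName -ScriptBlock {
            Get-LocalGroupMember -Group 'Administrators' | Select-Object Name, ObjectClass
        }

        foreach ($admin in $admins) {
            Invoke-Sqlcmd -ServerInstance 'DBAUtility' -Database 'DBAAdmin' -Query "
                INSERT INTO dbo.LocalAdminAudit (ServerName, AccountName, ObjectClass, CaptureDate)
                VALUES (N'$($srv.ServerName)', N'$($admin.Name)', N'$($admin.ObjectClass)', GETDATE());"
        }
    }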

Wrapping it Up

Automation is an essential tool for every data professional. Wait, no, that’s not accurate. Automation is an essential tool in all facets of IT. Automation is a definitive method to work more efficiently and offload some of the mundane repetitive tasks that consume too much time.

Even if the task is not trivial but needs to be repeated and done so without error, the best tool is automation. Performing tasks over and over naturally leads to higher risk of error. The way to minimize that risk is to perform the task via some automation script or routine.

File Maintenance – Cleaning Up Old Files

Comments: 1 Comment
Published on: December 21, 2018

Using SSIS to Maintain the File System

We have all run into a need or a desire to clean up old stale files from the file system, whether it be to remove old backup files or flat files that are created by one process or another.  And based on this need/desire, we have all come up with a method to help achieve that goal.

Some of the methods might be to include a flag in a maintenance plan that may be used.  Other methods may be to use a SQL script employing xp_cmdshell and delete statements.  Yet another may utilize the sp_oa stored procs and DMO.  And still others may have ventured into powershell to accomplish the same task.  The point is, there are many methods.

I am adding yet another method to the mix.  Why?  I didn’t much like the option of using the sp_oa method or the xp_cmdshell route.  I am very novice with powershell and it would take a lot more tinkering to get that script working properly.  Also, I felt pretty comfortable with SSIS and had approval to try and get this done using that method.  And just because I am a novice with powershell does not mean that I will not circle back around to try and accomplish this task via that means.

Note: This article was originally written in 2011 and got stuck in an unpublished state. Things have changed since then, so I will definitely be circling back around for a powershell version.

Requirements

The method employed needs to be able to do the following:

  1. Remove multiple file types
  2. Be configurable
  3. Clean out files from numerous directories
  4. Remove files older than a specified number of days.

Setup

The solution I chose utilizes SSIS.  It also requires that there be a table in a database that helps to drive the package.

The table looks like the following.

The filepath column holds the FileSystem path for each directory that needs to be cleaned.  Supported paths are local (e.g. C:\temp) and UNC paths (e.g. \\machine\c$\temp).  I set this attribute to a length of 256, but if you have a longer path, you will want to adjust the length.

The Process column will hold a value describing what that path relates to, such as MaintainDirectory.  In my example, I am using MaintainDirectory to control which directories hold files that potentially need to be deleted.
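
For reference, a minimal version of that driver table could be created along these lines (the database name and the identity column are my own assumptions; only the two columns described above matter to the package):

    # Minimal sketch of the driver table; point it at whichever utility database holds it
    $createFilePaths = "
        IF OBJECT_ID('dbo.FilePaths') IS NULL
        CREATE TABLE dbo.FilePaths (
            FilePathID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
            FilePath   varchar(256) NOT NULL,  -- local or UNC path to clean
            Process    varchar(50)  NOT NULL   -- e.g. 'MaintainDirectory'
        );"
    Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'AdminDB' -Query $createFilePaths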

Here is an example of the contents of that table I am using currently.

The last piece of the setup before we start working on the SSIS package is the need for a string splitting function.  Pick the string splitter of your liking.  I have one that I like and am sure you have one that you prefer.  The SSIS package relies on the return field from the splitter being named “Item.”  If it is named something else, please make the adjustments in the package as necessary.

The Package

The package I created has been created in SSIS 2008.  To meet the requirements already set forth, I utilized the following objects: ADO.Net Data Source, 2 Execute SQL Tasks, 2 ForEach Loop Containers, a Script Task, and 8 variables.  Let’s take a look at these starting with the variables.

Variables

  • SQLServerName – The value held here is used in an Expression for the Data Source.  This will overwrite the ServerName value in the Data Source.
  • DatabaseName – Used alongside the SQLServerName variable in an Expression for the Data Source.  This value will overwrite the InitialCatalog value in the Data Source.  This should be the name of the database where the FilePaths table and String Split function exist.
  • DaysToKeep – This value is the cutoff point for which files to keep and which files will be deleted.  This variable is used as a ReadOnly variable in the Script Task.
  • obj_FileExtension – This object variable is used to store the result set from one of the Execute SQL tasks and the results of the string split function from the FileExtensionList variable.
  • FileExtensionList – This is a delimited list of file extensions that need to be evaluated for deletion.  It is important to note that the file extensions that are to be processed are case sensitive.  The extension must appear in this list as it appears in the file system.
  • FileExtension – to be used in one of the ForEach loops.  This variable will receive the FileExtension from the obj_FileExtension variable one at a time.
  • obj_ListOfDirectories – This variable will receive the result set of an Execute SQL Task to be later consumed by one of the ForEach loops.
  • DirectoryToMaintain – receives one at a time the Directory to process for file deletion.  The ForEach loop stores a value from obj_ListOfDirectories in this variable for processing.

Execute SQL Tasks

The two Execute SQL Tasks are simple in function.  One is to get the list of directories to maintain from the FilePaths table.  The other is strictly to split the string for the FileExtensionList variable.

The first is named “Get Directory List” and should receive the Full Result Set from the following query.

The Result Set tab of this task also needs to be modified.  It should look like this.

From this task, we flow to the next Execute SQL Task named “Split FileList.”  The setup of this task is very much like the previous task.  We want to receive the full result set.  We have a configuration to make on the result set tab.  We also need to map a parameter.  Let’s take a look at those real quick.

Parameter Mapping

Result Set

And this is the query that we will be executing.

Notice that the Parameter we named in the Parameter Mapping tab is being used in the function call.  I chose this method because I could see and understand how it works better.

ForEach Loops

The next stop in the flow is the ForEach Loop – Directory object.  As the name implies, this ForEach Loop is designed to work with the obj_ListOfDirectories variable/array.

With this first Loop container, we have two tabs that need to be configured in the properties.  Both Loop containers are similar in that they need the same tabs to be configured.  First, let’s talk about the Collection tab.

On the Collection tab, we need to set the Enumerator option to “ForEach ADO Enumerator.”  Then we need to select the obj_ListOfDirectories from the drop down labeled “ADO Source Object Variable.”  Your screen should look like the following image.

With this tab configured, we can focus our attention to the quick changes that need to be made on the Variable Mappings tab.  On this tab, we are telling the enumerator how to handle the data from the object variable.  We are mapping columns from the result set to variables for further consumption.  When configured, it should look like the following.

Inside of this ForEach loop container, we have another ForEach loop container.  This second ForEach loop container handles the file extensions that we listed out in delimited fashion in the FileExtensionList variable.  I have called this container “ForEach Loop – FileExtension” (just keeping it simple).

The collection tab follows the same configuration setup.  The difference of course being that this container will use the obj_FileExtension object from the source variable dropdown menu.

The variable mapping tab is also slightly different.  We will be mapping column 0 of the object to the FileExtension variable.  The explanation for the different number between the two loop container variable mappings is simple.  In obj_ListOfDirectories, we have multiple columns being returned.  In obj_FileExtension, we have but one single column being returned.

This inner Loop container will loop through each of the extensions for each of the directories that have been returned to the outer loop container.  The inner loop container has the remainder of the workload in it via the Script Task.

Script Task

It is via the script task that we actually get to start deleting files.  This was the most difficult piece of the entire package – though the script is not very large.

For the script task, I chose to implement it via the Visual Basic option (instead of C#).  I have three ReadOnlyVariables employed by the script.  Those variables are: User::DaysToKeep, User::DirectoryToMaintain, and User::FileExtension.

Once you have set those on the script tab, the next step is to click the Edit Script… button where we need to place the following script.

An important note of interest is the need for the Try…Catch.  Without this block, you could run into an issue where a file (such as those pesky temp files) may be in use by some process and cause the package to error.  The Try…Catch will move past that nasty error and delete the files that it can.

Inside this script, you will see that I am comparing the LastWriteTime to the PurgeDays and ensuring that the file extension matches one that is in the list.  Then we move into the try…catch and either delete the file that matches those criteria or throw an exception and move on to the next file.
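
Since the VB itself is not reproduced here (and a powershell version is on the docket anyway), the same delete logic sketched in PowerShell looks roughly like this – treat it as a sketch rather than the Script Task code, with variable names loosely mirroring the package variables:

    # Same logic as the Script Task: age check, extension match, and a forgiving delete
    $purgeDate = (Get-Date).AddDays(-$DaysToKeep)

    Get-ChildItem -Path $DirectoryToMaintain -Filter "*$FileExtension" -File |
        Where-Object { $_.LastWriteTime -lt $purgeDate } |
        ForEach-Object {
            $file = $_
            try   { Remove-Item -LiteralPath $file.FullName -ErrorAction Stop }
            catch { Write-Warning "Could not delete $($file.FullName): $($_.Exception.Message)" }
        }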

When all is said and done, your package should look something like this.

You should also have a variable list that looks like this.

Each variable that is not an Object has a value assigned to it at this point.  These values will be overwritten where applicable.

Next Steps

Having this package is a good start.  But unless you are prepared to manually run this on a daily basis, it needs to be added to a job and scheduled.  There are two ways to go about scheduling this package.

The first option is to configure the FileExtensionList and DaysToKeep variables and save the package with those values.  Then run this package through SQL Agent with those values every time.  The drawback to this method is that if you need to add or remove a file extension (as an example) then you need to edit the package and re-save it.

The alternative option is to pass the values through the job to overwrite those variables as the job runs.  Should you need to remove or add a file extension, it would just be done at the job definition level.

Let’s take a look at this second option.  I will skip past how to create the job as an SSIS job in SQL Server and we will look directly how to modify those variables from the job properties.

To configure these variables directly from the SQL Agent job, open the Job properties and click on the Set Values tab (assuming you have defined this job as an SSIS Job type).  You should get a screen similar to this (void of the set values shown in the pic).  Just add the parameters (variables) we have discussed to this point with appropriate values to fit your needs/environment.

I have chosen to only include the four variables shown above since the remaining variables are either objects or get overwritten in the ForEach loops during processing.  The only thing remaining now is to set the schedule for the job.  Once set, the job (and package) will take care of the rest.

Conclusion

I have now shown you how to maintain some of the directories on your system through the use of SSIS and SQL Server.  There are many methods to accomplish this goal; it is up to each of us to choose the best method for our environment and comfort level (by means of supporting the chosen solution).

If you would like to read more interesting stuff concerning SSIS, you might want to check any of these articles: lost password, expected range errors, and synonyms extending SSIS.

Azure Data Studio and XEvents

Azure Data Studio (ADS) is getting all sorts of love and attention these days. So much so that they have finally gotten around to adding Extended Events (XE) to the tool – sort of. Now we have the power to run traces on SQL Server via ADS.

The presence of XE in ADS comes via an extension and comes with a few other caveats. I will explore the extension for XE available in ADS in this article and discuss some of the caveats. As you read the article, it might be helpful to go ahead and download ADS if you do not already have it.

History

Roughly 10 years ago Microsoft felt it necessary to introduce a cool tool called Extended Events. Soon after, they decided to deprecate the features called “Profiler” and “Trace”. Unfortunately the page with the deprecation announcement is no longer available, but some evidence of how long it has been deprecated is available here.

The deprecation announcement remains in effect (and online) for all versions since SQL Server 2012. It just may be difficult to find the 2012 announcement as we roll into newer releases of SQL Server.

Now, we have XE Profiler (or XEvent Profiler depending on your release of SSMS – read more here). Profiler is deprecated, and now we have some confusion in SSMS as to what Profiler is, since we are now using that term with the “XE Profiler” feature.

Now enter ADS. XE is not included with ADS by default. You have to install an extension to gain access to the feature. So, the first thing you will need to do is visit the extensions node and then search for “SQL Server Profiler”. I can hear you right now. It isn’t even using any part of the real feature name anymore – they are just calling it the same exact thing as the deprecated feature.

And yes, my heart breaks a little more every time I see “SQL Server Profiler”. We have been teaching Database Professionals for years to use Extended Events and not SQL Server Profiler. And they have been adopting that change in rather large numbers. This just seems like it will cause so much more confusion. Nevertheless, once you have selected the extension, look to the right hand side and you will see a screen similar to this.

After installing the extension and reloading ADS, we are ready to start using it. For the remainder of this article, I will just refer to it as the “XE extension”.

Where did it go?

After the reload of ADS, finding the XE extension is not really that easy. If you read the info page where you clicked install, there is some info there on how to access it. In short, on a Windows machine Alt-P will be your friend. Where you use that key combination matters, though. If you are in a script, for instance, and hit that key combo, no connection will be made to your server – even if your script is connected.

In addition to that not-so-obvious message, there is a more obvious message box that pops up in the bottom right corner letting you know a connection could not be established. Unfortunately, the problem can’t be resolved from this screen. Just close the tab and try again from the instance connection as shown here.

While a bit annoying, I can manage with that little caveat – just as long as I remember between uses what I did. After the XE extension is open, you should see a screen similar to the following.

In the preceding picture, I show three indicators of a connection being established. In the top right corner of the example it is difficult to tell that there is a server name, but there is. In my example I am just using the shorthand notation to connect to my server, or “.” (a dot), which connects me to the localhost instance.

The top left indicator is a drop down list of all XE sessions I have on the server.

Beyond that, I don’t find it terribly useful. I can’t edit a session or script it from this tool yet.

Managing a session doesn’t appear to be possible at this point, so let’s try to create a new session and see what happens.

Sweet! The create session option does something useful: I can create a new session. Unfortunately, all I can do is use one of three templates, similar to the XE Profiler tool in SSMS. The full-featured XE GUI tool has had a much more complete list of templates and possibilities ever since SQL Server 2012. I documented that in this article.

Unfortunately, I have no use for the three default templates. So, for me, this tool drives me back to using TSQL to create my sessions if I want to use ADS. Here is the big takeaway from that statement: the use of TSQL has been the biggest detractor for most Data Professionals when using XE. They want a full-featured GUI. So, if you are using ADS and must create an XE session, you will need to pull out your TSQL skills and probably pull down some of my helper scripts to get you going. The alternative would be to use SSMS, where there is a full-featured GUI that is more powerful than you might think.
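
By way of example, "pull out your TSQL skills" just means writing the CREATE EVENT SESSION yourself – which can be done from a query tab or even pushed from PoSh. A hedged sketch (the session definition and file path are illustrative only):

    # Illustrative only: create a simple XE session with T-SQL pushed through Invoke-Sqlcmd
    Invoke-Sqlcmd -ServerInstance '.' -Query "
        IF NOT EXISTS (SELECT 1 FROM sys.server_event_sessions WHERE name = 'ads_demo')
        CREATE EVENT SESSION [ads_demo] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\ads_demo.xel')
        WITH (STARTUP_STATE = OFF);"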

Conclusion

There surely will continue to be more development around this idea of an XE-style profiler. More development generally means that the product will mature and get better over time. This article shows how more is being added to the feature to try and give you better control over the tool. We love control, so the addition of these options is actually a good thing. Is it enough to sway me away from using the already established, more mature, and high-performing tools that have been there for several generations? Nope! I will continue to use TSQL and the GUI tools available for XE that predate the XEvent Profiler.

Some say that data professionals really want the “Profiler” tool. In my opinion, that is certainly not the majority, and calling XE by the name “Profiler” is going to cause confusion at the least. Some say that maybe this tool needs to integrate a way to shred XML faster. To that, I say there are methods already available for that, such as Powershell, the live data viewer, the Target Data viewer, or even the tools I have provided in the 60-day series.
