PowerShell ISE Crashes

Working with PowerShell brings a lot of advantages and power to help manage a server. The more current your PoSh version, the more efficiently you will be able to manage your server. Sometimes, though, getting to a current PoSh version comes with a little pain, such as ISE crashes.

I recently had the misadventure of working through some ISE crashes after bringing some systems up to PoSh 5.1 from either PoSh 2.0 or 3.0. It is not a fun situation to run a WMI update and then hit a crash of any type while testing whether it worked. Your first thought is that something terrible has happened.

As it stands, the problem is more of a nuisance than a critical failure. That said, it is enough of a problem that anyone who uses the ISE or .Net applications may experience a slight cardiac event.

Fonts

As you work to quickly recover from your skipped heartbeat, you start digging through logs and then hitting good old trusty Google.

Diving through the logs, you might just happen across an error similar to the following:

Problem signature:
Problem Event Name: PowerShell
NameOfExe: PowerShell_ISE.exe
FileVersionOfSystemManagementAutomation: 6.1.7600.16385
InnermostExceptionType: System.Xml.XmlException
OutermostExceptionType: System.Reflection.TargetInvocation
DeepestPowerShellFrame: indows.PowerShell.GuiExe.Internal.GPowerShell.Main
DeepestFrame: indows.PowerShell.GuiExe.Internal.GPowerShell.Main
ThreadName: unknown

Or maybe the first error you encounter looks like this one instead:

System.TypeInitializationException

“FileFormatException: No FontFamily element found in FontFamilyCollection
that matches current OS or greater: Win7SP1”.

Inner exception originates from: CompositeFontParser

Either way, the error shoots us back to the same fundamental problem: the ISE won’t load, you get an error message, and you can’t confirm that the WMI patch was applied properly.

As you work your fingers faster and faster through the pages on Google, you discover that this problem is actually caused by a patch for the .NET Framework and not necessarily by the work to upgrade your PoSh version. It merely waited until after the upgrade to manifest itself.

That’s gravy and all, but how does one fix the problem? For me, the quickest and most reliable fix was to simply jump straight to the root of the problem – fonts. The ISE is a WPF application and it also requires a fallback font (if a character isn’t present in your font set, then the app chooses a substitute from the fallback font – or something like that).

The fix is extremely simple and really underscores why this is merely a nuisance issue and not a critical problem. Thus it shouldn’t cause any sort of sinking internal feelings. There are a few plausible fixes floating around out there, but I recommend just doing a manual font replacement. It is but three simple steps (a scripted version follows the list):

  1. Download GlobalUserInterface.CompositeFont
  2. XCOPY the font to %windir%\Microsoft.NET\Framework\v4.0.30319\WPF\Fonts
  3. XCOPY the font to %windir%\Microsoft.NET\Framework64\v4.0.30319\WPF\Fonts
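For those who prefer to script it, here is a minimal PowerShell sketch of those three steps. The download location is an assumption (adjust to wherever you saved the font), and the copies need an elevated session.

# Minimal sketch: copy the fallback font into both WPF font directories.
$font = 'C:\temp\GlobalUserInterface.CompositeFont'   # assumed download location
$targets = "$env:windir\Microsoft.NET\Framework\v4.0.30319\WPF\Fonts",
           "$env:windir\Microsoft.NET\Framework64\v4.0.30319\WPF\Fonts"
foreach ($dir in $targets) {
    Copy-Item -Path $font -Destination $dir -Force    # requires an elevated (admin) session
}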

After you have copied the font to those two directories, all that remains is to launch the ISE. I ran into the same problem on three or four servers, and the fix took no more than five minutes on each.

Conclusion

I previously mentioned that I have been working more and more with PoSh to improve my skillset. This is one of those low-level trinkets that I ran into as I worked to hone my skills in that tech. For other, possibly interesting, articles about my experiences with PowerShell, you can check out these articles.


Powershell, Me and the MCM part II

Comments: 1 Comment
Published on: December 20, 2010

Last week I posted the first part of this series.  I thought it would be a good idea to give a little follow-up on how my foray into Powershell went as I attempted to download these videos.

Attempt 1:  Failed miserably.  I did not have enough disk space and thus only downloaded about half of the videos.  I tried to remedy this by changing the path to a different volume, but that threw some error messages.  The script works best when continuing to use the $env variable in the path for the location to dump the files.

Attempt 2:  From a different machine this was working beautifully until I noticed that every once in a while a file would error out and the download would resume with the next video in the list.  When I attempted to look up those particular videos, they were all present.  So I restarted thinking it was a local environment problem.

Attempt 3:  I figured out that attempt 2 was failing because my laptop was going to standby – duh!!!  I disabled the standby and finally got the download to work from start to finish without fail.

Now, I can place these videos on my Android and watch/listen from anywhere I have my phone.  Since I have an auxiliary port in my car, I can just plug the phone in, let it charge, and listen to the videos through the car speakers while I commute.  It is nice to have such high-quality training material available for a quick refresher or for that first time through.  I recommend people get on board and take advantage of what Microsoft, Paul Randal, and others have given to the community through this training.  Since I pay out of pocket for my training – this is the right price for me.

Thanks for the Christmas Gift.

Powershell, Me and the MCM

Comments: 3 Comments
Published on: December 16, 2010

If you haven’t heard yet, Microsoft has made some significant changes to the MCM program.  The changes make the certification more accessible to the masses.

You can read more about that from sources such as the following:

  1. Grant Fritchey
  2. Jason Strate
  3. Glenn Berry
  4. Microsoft

This post is not so much about the changes.  It is more about the information dissemination related to the MCM.  I saw on twitter that there were some videos available to watch concerning the MCM training.  Curious where these videos were, I went looking for them.  In my search, I came across the blog post referenced above by Jason Strate, in which he links to the videos.  I checked them out and started downloading them so I could use them as study aids as I work on the MCITP.

Downloading those videos one by one is somewhat tedious and time consuming.  Thus, while some were in the process of downloading, I started working on a few other things and saw another twitter post come across about a Powershell script to download those MCM training videos.  This is great news – except I had never run a Powershell script.

Well, no time like the present to learn.  First thing to do is to check out the blog article about the script – it has a few downloads.  The blog post is by Eric Humphrey, and can be found here.  After having read what he had to say about the script and downloading the necessary files that he lists, it was time to find out how to run a powershell script.  A little looking and I came across this article.

Excellent!  I am well on my way to running this script.  After following some of the recommendations in that article (e.g. security settings), it was time to look at the script and see what I needed to know from there, such as paths required for add-in files or paths for destination files.  Some quick adjustments to the userprofile path and to the download directory, as well as copying the HTMLAgilityPack into the userprofile path – and we are all set.
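For the curious, the security-settings step was presumably the script execution policy – a hedged example, run from an elevated PowerShell prompt:

Set-ExecutionPolicy RemoteSigned   # allows local scripts to run; downloaded scripts must be signed or unblocked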

Now I have the script running and downloading the files – but it looks like it is going to be running for a long time.  This is a very useful tool for me at this moment, and it demonstrates another use for Powershell as well.  I haven’t yet decided that I will pursue the MCM; however, I will use these videos to improve my knowledge base on SQL Server.  I would recommend that people go check out these videos.  It was somewhat surprising to me to see that many of the pages had not been viewed very frequently at all.  This is great training material.  I recommend getting it and adding it to the reference material you may already have.

If you are just looking to browse the videos individually, you can find the site here.

Automating like an Enterprise DBA

Comments: No Comments
Published on: January 8, 2019

TSQL Tuesday

The second Tuesday of the month comes to us a little early this month. That means it is time again for another group blog party called TSQLTuesday. This party, started by Adam Machanic, has been going long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a roundup of all TSQLTuesdays over here.

Automation

The theme as chosen by Garry Bargsley (b | t) is automation. Specifically, Garry has provided two requirements about automation for this month. As always, there is leeway in a post that participates in TSQL Tuesday.

One of the things that should seem very commonplace to a data professional is the effort to become a lazy DBA. A lazy DBA is not a bad thing. It just means the DBA works hard to automate the repetitive mundane tasks that may be tedious and/or time consuming. Time can always be better spent somewhere else, right?

If you are lacking in any ideas for what can be automated, here are a few TSQL Tuesday roundups from when we have talked about automation previously (yes it is a hot topic – ALWAYS!).

  1. August 2010 – Beach Time – what do you automate to earn beach time?
  2. February 2011 – Automation in SQL Server – Give your best tricks for making your life easier through automation.
  3. January 2014 – Automation – How much of it is the same?
  4. September 2015 – The Enterprise – How does one manage an enterprise of databases?
  5. September 2017 – PowerShell Automation – Find something and automate it.

In the past, I have written about automation a few times. Some of my favorites are automated restores, automation in the cloud, and my poor man’s automated audit.

I automate many processes and have automated loads of tasks over the years. You see, automation means I can spend more time doing other tasks that require more time, more thought/concentration, more effort, and frankly more interest. So what have I automated recently that may be different from what I have previously written? This time, I have something that may seem utterly trivial but in the end it is rather tedious and time consuming to manually check over and over and over.

PowerShell

When I automate a task, I generally try to use the tool that seems most appropriate for the task: Windows Task Scheduler, SQL Agent, TSQL, SSIS, VB, C#, and now I am trying to add PoSh to that list. I don’t believe there is a one-size-fits-all automation tool. Sometimes, one has to be flexible enough to adapt other technologies into the tool-belt.

I have been working with a client to check their servers for SQL Server version, SSMS version, PoSH version, and so on – all of this to try and get the appropriate updates installed on each server. Believe it or not, many of their servers were still running PoSH v2 and didn’t have any Service Packs installed for their database servers. OUCH!

Touching every single server (even if it is only 10 servers) is far too tedious and error prone. So, I spent a little time kludging about with my Neanderthal-level PoSH skills and found a way to retrieve various pieces of information from the servers and then store those data points in a database, so I could report on the entire environment easily with TSQL. In addition, I could show change history and find approximately (at worst) when an update was installed.

Of all of the things I scripted to start tracking, the one I want to share this time can also be used to audit security on each of the database servers. I use the following script to audit the local admins on each of the database servers in the enterprise. In order to trap each local admin on the server, I also recurse through domain groups to find every member who may have access. Here is a version of the script that is similar to what I use now.
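What follows is a hedged sketch rather than my exact production script: the central instance, database, and table names (dbo.AuditServers, dbo.LocalAdmins) are stand-ins, and the group recursion leans on the ADSI WinNT provider with no cycle detection.

# Hedged sketch: collect local admins (recursing nested/domain groups) and store them centrally.
Import-Module SqlServer    # provides Invoke-Sqlcmd

$centralInstance = 'DBACentral'    # illustrative admin server
$centralDb       = 'DBA'

function Get-GroupMemberRecursive {
    param([ADSI]$Group, [string]$Path)
    foreach ($member in @($Group.Invoke('Members'))) {
        $entry = [ADSI]$member
        if ($entry.SchemaClassName -eq 'Group') {
            # Recurse into nested groups, extending the path with the caret separator
            Get-GroupMemberRecursive -Group $entry -Path ('{0}^{1}' -f $Path, $entry.Name)
        }
        else {
            [pscustomobject]@{ Account = [string]$entry.Name; PermissionPath = $Path }
        }
    }
}

$servers = Invoke-Sqlcmd -ServerInstance $centralInstance -Database $centralDb `
    -Query 'SELECT ServerName FROM dbo.AuditServers;'

foreach ($server in $servers.ServerName) {
    $admins = Get-GroupMemberRecursive -Group ([ADSI]"WinNT://$server/Administrators,group") `
                                       -Path "$server^Administrators"
    foreach ($admin in $admins) {
        # Quick-and-dirty insert; parameterized SQL would be better in production
        Invoke-Sqlcmd -ServerInstance $centralInstance -Database $centralDb -Query "
            INSERT dbo.LocalAdmins (ServerName, AccountName, PermissionPath, CaptureDate)
            VALUES (N'$server', N'$($admin.Account)', N'$($admin.PermissionPath)', GETDATE());"
    }
}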

Could I improve on the efficiency of this script? Most definitely, I believe there is room for improvement. Remember, I am very novice at my PoSH skills. Scripting issues aside, it works: it fetches a list of servers from a database, iterates through each of those servers to fetch the complete list of local admins on each, and then writes that list back to my database so I can generate a history of changes to the admins or report on who has admin access on each server.

For anybody that has admin access to a database server, the permission path (nested group path) is recorded in hierarchical form separated by the caret character (^). Using this script, I have been able to provide a report to domain admins to clean out various unwanted individuals whose access was not intended or necessary.

Wrapping it Up

Automation is an essential tool for every data professional. Wait, no, that’s not accurate. Automation is an essential tool in all facets of IT. Automation is a definitive method to work more efficiently and offload some of the mundane repetitive tasks that consume too much time.

Even if the task is not trivial but needs to be repeated and done so without error, the best tool is automation. Performing tasks over and over naturally leads to higher risk of error. The way to minimize that risk is to perform the task via some automation script or routine.

File Maintenance – Cleaning Up Old Files

Comments: 1 Comment
Published on: December 21, 2018

Using SSIS to Maintain the File System

We have all run into a need or a desire to clean up old stale files from the file system, whether it be to remove old backup files or flat files that are created from one process or another.  And based on this need/desire, we have all come up with a method to help achieve that goal.

Some methods might be to enable a cleanup flag in a maintenance plan.  Others may use a SQL script employing xp_cmdshell and delete statements.  Yet another may utilize the sp_oa stored procs and DMO.  And still others may have ventured into powershell to accomplish the same task.  The point is, there are many methods.

I am adding yet another method to the mix.  Why?  I didn’t much like the option of using the sp_oa method or the xp_cmdshell route.  I am very novice with powershell, and it would take a lot more tinkering to get the script working properly.  Also, I felt pretty comfortable with SSIS and had approval to try and get this done using that method.  And just because I am a novice with powershell does not mean that I will not circle back around to try and accomplish this task via that means.

Note: This article was originally written in 2011 and got stuck in an unpublished state. Things have changed since then, so I will definitely be circling back around for a powershell version.

Requirements

The method employed needs to be able to do the following:

  1. Remove multiple file types
  2. Be configurable
  3. Clean out files from numerous directories
  4. Remove files older than a specified number of days.

Setup

The solution I chose utilizes SSIS.  It also requires that there be a table in a database that helps to drive the package.

The table looks like the following.
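The original screenshot is unavailable, so here is a hedged approximation of the table (names, types, and lengths inferred from the descriptions that follow), created via Invoke-Sqlcmd for convenience:

Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'DBA' -Query @'
CREATE TABLE dbo.FilePaths (
    FilePathID int IDENTITY(1,1) PRIMARY KEY,
    FilePath   varchar(256) NOT NULL,   -- local or UNC path to clean
    Process    varchar(64)  NOT NULL    -- e.g. MaintainDirectory
);
'@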

The filepath column holds the file system path for each directory that needs to be cleaned.  Supported paths are local (e.g. C:\temp) and UNC paths (\\machine\c$\temp).  I set this attribute to a length of 256, but if you have a longer path, you will want to adjust the length.

The Process column will hold a value describing what that path relates to, such as MaintainDirectory.  In my example, I am using MaintainDirectory to control which directories hold files that potentially need to be deleted.

Here is an example of the contents of that table I am using currently.
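A couple of illustrative rows (your list will be whatever directories need cleaning):

Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'DBA' -Query @'
INSERT dbo.FilePaths (FilePath, Process)
VALUES ('C:\Database\Backups', 'MaintainDirectory'),
       ('\\FileServer\d$\Exports', 'MaintainDirectory');
'@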

The last piece of the setup before we start working on the SSIS package is the need for a string splitting function.  Pick the string splitter of your liking.  I have one that I like and am sure you have one that you prefer.  The SSIS package relies on the return field from the splitter being named “Item.”  If it is named something else, please make the adjustments in the package as necessary.

The Package

The package I created has been created in SSIS 2008.  To meet the requirements already set forth, I utilized the following objects: ADO.Net Data Source, 2 Execute SQL Tasks, 2 ForEach Loop Containers, a Script Task, and 8 variables.  Let’s take a look at these starting with the variables.

Variables

  • SQLServerName – The value held here is used in an Expression for the Data Source.  This will overwrite the ServerName value in the Data Source.
  • DatabaseName – Used alongside the SQLServerName variable in an Expression for the Data Source.  This value will overwrite the InitialCatalog value in the Data Source.  This should be the name of the database where the FilePaths table and String Split function exist.
  • DaysToKeep – This value is the cutoff point for which files to keep and which files will be deleted.  This variable is used as a ReadOnly variable in the Script Task.
  • obj_FileExtension – This object variable is used to store the result set from one of the Execute SQL tasks and the results of the string split function from the FileExtensionList variable.
  • FileExtensionList – This is a delimited list of file extensions that need to be evaluated for deletion.  It is important to note that the file extensions that are to be processed are case sensitive.  The extension must appear in this list as it appears in the file system.
  • FileExtension – to be used in one of the ForEach loops.  This variable will receive the FileExtension from the obj_FileExtension variable one at a time.
  • obj_ListOfDirectories – This variable will receive the result set of an Execute SQL Task to be later consumed by one of the ForEach loops.
  • DirectoryToMaintain – receives one at a time the Directory to process for file deletion.  The ForEach loop stores a value from obj_ListOfDirectories in this variable for processing.

Execute SQL Tasks

The two Execute SQL Tasks are simple in function.  One is to get the list of directories to maintain from the FilePaths table.  The other is strictly to split the string for the FileExtensionList variable.

The first is named “Get Directory List” and should receive the Full Result Set from the following query.
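The query did not survive the years in draft, but based on the table above it was something along these lines (testable outside SSIS via Invoke-Sqlcmd):

# Hedged reconstruction of the "Get Directory List" query.
# Multiple columns come back; the Variable Mappings tab later picks out the FilePath column.
Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'DBA' -Query @'
SELECT FilePathID, FilePath
  FROM dbo.FilePaths
 WHERE Process = 'MaintainDirectory';
'@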

The Result Set tab of this task also needs to be modified.  It should look like this.

From this task, we flow to the next Execute SQL Task named “Split FileList.”  The setup of this task is very much like the previous task.  We want to receive the full result set.  We have a configuration to make on the result set tab.  We also need to map a parameter.  Let’s take a look at those real quick.

Parameter Mapping

Result Set

And this is the query that we will be executing.
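Reconstructed as well: in the task itself the query is SELECT Item FROM dbo.fn_SplitString(?, ',') where the ? placeholder maps to the FileExtensionList parameter, and dbo.fn_SplitString stands in for whichever splitter you picked (output column named Item). Outside SSIS you can test the equivalent with a literal:

Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'DBA' -Query @'
SELECT Item FROM dbo.fn_SplitString('.bak,.trn,.txt', ',');
'@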

Notice that the Parameter we named in the Parameter Mapping tab is being used in the function call.  I chose this method because I could see and understand how it works better.

ForEach Loops

The next stop in the flow is the ForEach Loop – Directory object.  As the name implies, this ForEach Loop is designed to work with the obj_ListOfDirectories variable/array.

With this first Loop container, we have two tabs that need to be configured in the properties.  Both Loop containers are similar in that they need the same tabs to be configured.  First, let’s talk about the Collection tab.

On the Collection tab, we need to set the Enumerator option to “ForEach ADO Enumerator.”  Then we need to select the obj_ListOfDirectories from the drop down labeled “ADO Source Object Variable.”  Your screen should look like the following image.

With this tab configured, we can focus our attention to the quick changes that need to be made on the Variable Mappings tab.  On this tab, we are telling the enumerator how to handle the data from the object variable.  We are mapping columns from the result set to variables for further consumption.  When configured, it should look like the following.

Inside of this ForEach loop container, we have another ForEach loop container.  This second ForEach loop container handles the file extensions that we listed out in delimited fashion in the FileExtensionList variable.  I have called this container “ForEach Loop – FileExtension” (just keeping it simple).

The collection tab follows the same configuration setup.  The difference of course being that this container will use the obj_FileExtension object from the source variable dropdown menu.

The variable mapping tab is also slightly different.  We will be mapping column 0 of the object to the FileExtension variable.  The explanation for the different number between the two loop container variable mappings is simple.  In obj_ListOfDirectories, we have multiple columns being returned.  In obj_FileExtension, we have but one single column being returned.

This inner Loop container will loop through each of the extensions for each of the directories that have been returned to the outer loop container.  The inner loop container holds the remainder of the workload via the Script Task.

Script Task

It is via the script task that we actually get to start deleting files.  This was the most difficult piece of the entire package – though the script is not very large.

For the script task, I chose to implement it via the Visual Basic option (instead of C#).  I have three ReadOnlyVariables employed by the script.  Those variables are: User::DaysToKeep, User::DirectoryToMaintain, and User::FileExtension.

Once you have set those on the script tab, the next step is to click the Edit Script… button where we need to place the following script.
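The original Visual Basic script is not reproduced here; as a rough PowerShell equivalent of the logic it implements (values hard-coded for illustration – in the package they come from the three ReadOnly variables):

$DaysToKeep          = 14
$DirectoryToMaintain = 'C:\Database\Backups'
$FileExtension       = '.bak'               # extension matching is case sensitive (-ceq)

$cutoff = (Get-Date).AddDays(-$DaysToKeep)
Get-ChildItem -Path $DirectoryToMaintain -File |
    Where-Object { $_.Extension -ceq $FileExtension -and $_.LastWriteTime -lt $cutoff } |
    ForEach-Object {
        $file = $_
        try   { Remove-Item -LiteralPath $file.FullName -Force }
        catch { Write-Warning "Skipping in-use file: $($file.FullName)" }   # the Try...Catch discussed below
    }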

An important note of interest is the need for the Try…Catch.  Without this block as it is, you could run into an issue where a file (such as those pesky temp files) may be in use by some process and cause the package to error.  The Try…Catch will move past that nasty error and delete the files that it can.

Inside this script, you will see that I am comparing the LastWriteTime to the PurgeDays and ensuring that the file extension matches one that is in the list.  Then we move into the try…catch and either delete the file that matches those criteria or throw an exception and move on to the next file.

When all is said and done, your package should look something like this.

You should also have a variable list that looks like this.

Each variable that is not an Object has a value assigned to it at this point.  These values will be overwritten where applicable.

Next Steps

Having this package is a good start.  But unless you are prepared to manually run this on a daily basis, it needs to be added to a job and scheduled.  There are two ways to go about scheduling this package.

The first option is to configure the FileExtensionList and DaysToKeep variables and save the package with those values.  Then run this package through SQL Agent with those values every time.  The drawback to this method is that if you need to add or remove a file extension (as an example) then you need to edit the package and re-save it.

The alternative option is to pass the values through the job to overwrite those variables as the job runs.  Should you need to remove or add a file extension, it would just be done at the job definition level.

Let’s take a look at this second option.  I will skip past how to create the job as an SSIS job in SQL Server and we will look directly how to modify those variables from the job properties.

To configure these variables directly from the SQL Agent job, open the Job properties and click on the Set Values tab (assuming you have defined this job as an SSIS Job type).  You should get a screen similar to this (void of the set values shown in the pic).  Just add the parameters (variables) we have discussed to this point with appropriate values to fit your needs/environment.
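The picture is gone, so here is a hedged example of what those Set Values entries look like (SSIS 2008 property-path syntax; values are illustrative):

\Package.Variables[User::SQLServerName].Properties[Value]       MyServer
\Package.Variables[User::DatabaseName].Properties[Value]        DBA
\Package.Variables[User::DaysToKeep].Properties[Value]          14
\Package.Variables[User::FileExtensionList].Properties[Value]   .bak,.trn,.txt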

I have chosen to only include the four variables shown above since the remaining variables are either objects or get overwritten in the ForEach loops during processing.  The only thing remaining now is to set the schedule for the job.  Once set, the job (and package) will take care of the rest.

Conclusion

I have now shown you how to maintain some of the directories on your system through the use of SSIS and SQL Server.  There are many methods to accomplish this goal; it is up to each of us to choose the best method for our environment and comfort level (by means of supporting the chosen solution).

If you would like to read more interesting stuff concerning SSIS, you might want to check any of these articles: lost password, expected range errors, and synonyms extending SSIS.

Azure Data Studio and XEvents

Azure Data Studio (ADS) is getting all sorts of love and attention these days. So much so that they have finally gotten around to adding Extended Events (XE) to the tool – sort of. Now we have the power to run traces on SQL Server via ADS.

The presence of XE in ADS comes via an extension and comes with a few other caveats. I will explore the extension for XE available in ADS in this article and discuss some of the caveats. As you read the article, it might be helpful to go ahead and download ADS if you do not already have it.

History

Roughly 10 years ago, Microsoft felt it necessary to introduce a cool tool called Extended Events. Soon after, they decided to deprecate the features called “Profiler” and “Trace”. Unfortunately the page with the deprecation announcement is no longer available, but some evidence of how long it has been deprecated is available here.

The deprecation announcement remains in effect (and online) for all versions since SQL Server 2012. It just may be difficult to find the 2012 announcement as we roll into newer releases of SQL Server.

Now, we have XE Profiler (or XEvent Profiler depending on your release of SSMS – read more here). Profiler is deprecated, and now we have some confusion in SSMS as to what “Profiler” means, since we are now using that term for the “XE Profiler” feature.

Now enter ADS. XE is not included with ADS by default. You have to install an extension to gain access to the feature. So, the first thing you will need to do is visit the extensions node and then search for “SQL Server Profiler”. I can hear you right now. It isn’t even using any part of the real feature name anymore – they are just calling it the same exact thing as the deprecated feature.

And yes, my heart breaks a little more every time I see “SQL Server Profiler”. We have been teaching database professionals for years to use Extended Events and not SQL Server Profiler, and they have been adopting that change in rather large numbers. This just seems like it will cause so much more confusion. Nevertheless, once you have selected the extension, look to the right-hand side and you will see a screen similar to this.

After installing the extension and reloading ADS, we are ready to start using this extension. For the remainder of this article, I will just refer to it as the “XE extension”.

Where did it go?

After the reload of ADS, finding the XE extension is not really that easy. If you read the info page where you clicked install, there is some info there on how to access it. In short, on a Windows machine, Alt-P will be your friend. Where you use that key combination is another matter, though. If you are in a script, for instance, and hit that key combo, no connection will be made to your server – even if your script is connected.

In addition to that not-so-obvious message, there is a more obvious message box that pops up in the bottom right corner letting you know a connection could not be established. Unfortunately, the problem can’t be resolved from this screen. Just close the tab and try again from the instance connection as shown here.

While a bit annoying, I can manage with that little caveat – just as long as I remember between uses what I did. After the XE extension is open, you should see a screen similar to the following.

In the preceding picture, I show three indicators of a connection being established. In the top right corner of the example it is difficult to tell that there is a server name, but there is one. In my example I am just using the shorthand notation to connect to my server, or “.” (a dot), which connects me to the localhost instance.

The top left indicator is a drop down list of all XE sessions I have on the server.

Beyond that, I don’t find it terribly useful. I can’t edit a session or script it from this tool yet.

Managing a session doesn’t appear to be possible at this point, so let’s try to create a new session and see what happens.

Sweet! The create session option does something useful: I can create a new session. Unfortunately, all I can do is use one of three templates, similar to the XE Profiler tool in SSMS. The full-featured XE GUI tool has had a much more complete list of templates and possibilities ever since SQL Server 2012. I documented that in this article.

Unfortunately, I have no use for the three default templates. So, for me, this tool drives me back to needing to use TSQL to create my sessions if I want to use ADS. Here is the big takeaway from that statement. The use of TSQL has been the biggest detractor for most Data Professionals when using XE – they want a full-featured GUI. So, if you are using ADS and must create an XE session, you will need to pull out your TSQL skills and probably pull down some of my helper scripts to get you going. The alternative would be to use SSMS, where there is a full-featured GUI that is more powerful than you might think.
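If you need a starting point for the TSQL route, a session definition can be as small as this (an illustrative session; my helper scripts go much further):

Invoke-Sqlcmd -ServerInstance '.' -Query @'
CREATE EVENT SESSION QuickTrace ON SERVER
ADD EVENT sqlserver.sql_batch_completed
ADD TARGET package0.event_file (SET filename = N'QuickTrace')
WITH (STARTUP_STATE = OFF);
'@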

Conclusion

There surely will continue to be more development around this idea of an XE style profiler. More development generally means that the product will mature and get better over time. This article shows how there is more being added to the feature to try and give you better control over the tool. We love control so the addition of these options is actually a good thing. Is it enough to sway me away from using the already established, more mature, and high performing tools that have been there for several generations? Nope! I will continue to use TSQL and the GUI tools available for XE that predated the XEvent Profiler.

Some say that data professionals really want the “Profiler” tool. In my opinion, that is certainly not the majority, and now calling XE by the name “Profiler” is going to cause confusion at the least. Some say that maybe this tool needs to integrate a way to shred XML faster. To that, I say there are methods already available for that, such as Powershell, the live data viewer, the Target Data viewer, or even the tools I have provided in the 60-day series.

T-SQL Tuesday #108: New Horizons Beyond SQL Server

Comments: 1 Comment
Published on: November 13, 2018

There comes a point in one’s career when a change is requisite. Big or small there always seems to be a tipping point that mandates some sort of change. Maybe the change is an entirely new career field. Maybe the change is adapting to the ever improving features of a specific software or product. Maybe, that change means learning a tangential technology.

This is precisely the goal Malathi Mahadevan (b | t) seems to have envisioned for the 108th installment of TSQL Tuesday.

If you are interested in reading the original invite, you can find that here.

“So the challenge for this T-SQL Tuesday is – pick one thing you want to learn that is not SQL Server. Write down ways and means to learn it and add it as another skill to your resume. If you are already learning it or know it – explain how you got there and how it has helped you. Your experience may help many others looking for guidance on this.”

Personally, I am not one to settle, so learning and improving are important. New technologies, changes in technologies, new features, tangential technologies – they are ways to continue to learn and improve – most of the time. Sometimes, a new technology offers a good change of pace and offers an exit from something that is becoming too standard, while providing an entrance to something different, difficult, exciting and expanding.

Through the year (2018), I created a few goals for myself around some of these new or different technologies:

  1. Become proficient at MySQL (maybe even certify who knows)
  2. Become proficient at PowerShell
  3. Work towards the TCM (I got too busy with 1 and 2 to even accord any time towards this one)

Proficient is sort of a vague term because it can have a different meaning to different people. For me, I will describe what I have been doing to become proficient in both PoSh and MySQL.

PowerShell

A shout out is absolutely necessary for Adam Machanic (twitter) for picking the right blog meme that has been able to survive so long in the SQLFamily. This party has helped many people figure out fresh topics as well as enabled them to continue to learn.

I have dabbled in PowerShell over the years – just enough to “be dangerous” as some may say. I wouldn’t call it proficient and probably not even dangerous really. I was able to muddle my way through being able to create some basic scripts to perform certain tasks.

This rudimentary ability just isn’t enough to be able to put the skill on a resume (imho). It certainly wasn’t enough skill to be able to manage a large group of servers and perform various enterprise type tasks. My objective was to be able to create a set of tools for myself that I could use and have repeatable success at great ease.

I sat down with my first set of various tasks I wanted to be able to perform and worked on them as I set time aside to learn PoSh better – every day. It took several weeks and by no means am I on the same level as Rob Sewell (b | t) or Chrissy LeMaire (b | t) or Sean McCown (b | t). That said, I do feel I am far more capable in PoSh now than I was last year.

This is a skill that I plan to continue to hone. I am learning every time I pick it up and try to do something different, and it is good that I am able to keep learning. Next year, I plan on being able to say I am more proficient than I am now. I am also hopeful that I will become good enough to properly contribute to the dbachecks project, and I hope to share some of the scripts I have created on my blog as well.

Here are some of my first dabbles with powershell that I have integrated into other blog posts. Yes, they are very rudimentary.

MySQL

Learning MySQL is probably not too big of a stretch to be honest. I certainly think it is far more similar to SQL Server in many regards than PoSh. After all, it is just another little DBMS and does happen to be rather popular.

I wanted to pick up MySQL so I could support clients that have it installed here, there, and everywhere in their environments. Many clients have more than one DBMS platform, and it is good to understand and be able to administer multiple platforms with a high level of competence. Unfortunately, MySQL comes with a fair number of gotchas. There are serious limitations depending on version and flavor. Some clients may be stuck on a rather old version of MariaDB (akin to SQL 2000). This means developing a broad set of scripts and skills quickly and on the fly.

I have a ways to go in my learning due to the varied flavors of MySQL but I am getting there. I do feel pretty comfortable hopping in and troubleshooting performance issues and doing a quick health assessment at this point. I would call that proficient. Similar to what I said about PoSh, I plan on being able to say next year that I am more proficient. More opportunity with this platform is what lends itself to better proficiency.

The Wrap

I believe in continuous integration / improvement when it comes to personal growth and development. It is necessary to keep your personal skills sharp as well as keep yourself marketable.

Oh, and if you are interested in some of my community contributions (which according to Jens Vestergaard is an awesome contribution), read this series I have published.

T-SQL Tuesday #104: Just Can’t Cut That Cord

We all have our favorite scripts, tools or utilities. Those are the things that help make our jobs easier. Some of us may have an unhealthy relationship with some of those scripts (similar in nature to the relationship many have with their phone). Whether or not the need to cut that proverbial cord exists, today we are not discussing the health of that dependence. Suffice it to say, sometimes we simply need to upgrade our scripts. How else can we get better scripts or make our scripts better – by sharing them.

This is precisely the goal Bert Wagner (b | t) seems to have envisioned for the 104th installment of TSQL Tuesday.

If you are interested in reading the original invite, you can find that here.

“For this month’s T-SQL Tuesday, I want you to write about code you’ve written that you would hate to live without.

Maybe you built a maintenance script to free up disk space, wrote a query to gather system stats for monitoring, or coded some PowerShell to clean up string data.  Your work doesn’t need to be completely original either – maybe you’ve improved the code in some open source project to better solve the problem for your particular situation.”

There is a high probability that through the sharing of your script, somebody out there can benefit from that script. In addition, it is very likely that somebody will make a suggestion to help make your script better. Worst case (emphasis on worst case here), you have the script stored somewhere with half decent instructions on what it does and making it easily accessible for you to use again and again. Just in case you forget you have it out there – you can google for it again and find it on your own blog ;).

Personally, I have been able to find and re-use some of my older scripts. Not only do I get to re-discover them, but I also get to re-imagine a new use or improvement for the script.

Brief Intermission

A shout out is absolutely necessary for Adam Machanic (twitter) for picking the right blog meme that has been able to survive so long in the SQLFamily. This party has helped many people figure out fresh topics as well as enabled them to continue to learn.

Easy Access

While pondering the topic for today, I had the thought occur about how frequently I post a script on my blog already anyway. An easy out for this topic would have been to re-share one of those old scripts. For instance, I could easily redo a recent article about server access that has a couple scripts demonstrated in it. Or I could go back a few years to my articles about foreign keys (here or here) and space use (here or here). Even more intriguing could be to re-envision some of my articles on Extended Events. But where would the fun in that be?

Rather than take the easy road and rehash something, I have something different. This one goes hand in hand with the numerous articles and scripts I have previously provided on auditing – yet it is different.

Not every shop can afford third party software or even Enterprise edition, and so they have to come up with a different way to audit their database instances. One of the problems with a home grown solution is ensuring the data is not stored local to the server (there are lots of good reasons for that). Here is an example of what I did for one client that happened to have a developer who had found a back door giving him SA access to the SQL Server instance and was changing things and trying to cover his tracks – even after being warned.

First the query

This query will be run from a job on a different server that is restricted in access to just a select few people. I do rely on the use of the default trace in this query. I am also reliant upon a little bit of sneaky behavior: if I run this from a separate server, prying eyes are unlikely to find that it is running, which makes it easier to catch them red-handed. In addition, if they discover by some sort of trace, and by a lot of luck, that it is running, then they have no access to the remote server to alter anything that was captured.

The query does go out to the default trace and pull back any changes to permissions or principals on the server in question. The captured data is then stored in a database that is also restricted to a select few people. Lastly, the captured data can be routinely queried, or automated reports can be created to send email notifications of changes encountered.

The second part of the trickery here is that I am using a linked server to perform the queries (a slight change and I could also do this via powershell which will be shown in a future article). The linked server query uses the openquery format and sends the default trace query to the remote server. Since I am running this from a job on an administrative server that pulls a limited data set, I am not overly concerned with the linked server setup here.
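The query itself deserves a hedged reconstruction. The linked server, database, table, and event-class list below are illustrative (the IN list is my assumption of the permission/principal-change trace events); the shape of it is an OPENQUERY call that reads the remote default trace and lands the rows locally:

Import-Module SqlServer
$auditSql = @'
INSERT INTO dbo.SecurityChangeAudit (EventTime, EventClass, LoginName, TargetLoginName, ObjectName, DatabaseName)
SELECT rt.StartTime, rt.EventClass, rt.LoginName, rt.TargetLoginName, rt.ObjectName, rt.DatabaseName
FROM OPENQUERY([RemoteServer],
    'SELECT t.StartTime, t.EventClass, t.LoginName, t.TargetLoginName, t.ObjectName, t.DatabaseName
       FROM sys.traces tr
      CROSS APPLY sys.fn_trace_gettable(tr.path, DEFAULT) t
      WHERE tr.is_default = 1
        AND t.EventClass IN (102, 103, 104, 105, 106, 108, 109, 110)') rt;
'@
Invoke-Sqlcmd -ServerInstance 'AdminServer' -Database 'AuditDB' -Query $auditSql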

Storing It

Once I query the data, I need to put it somewhere on my administrative server. The table setup for that is very straightforward.
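Something along these lines, matching the pull above (names illustrative):

Invoke-Sqlcmd -ServerInstance 'AdminServer' -Database 'AuditDB' -Query @'
CREATE TABLE dbo.SecurityChangeAudit (
    AuditID         int IDENTITY(1,1) PRIMARY KEY,
    EventTime       datetime,
    EventClass      int,
    LoginName       nvarchar(256),
    TargetLoginName nvarchar(256),
    ObjectName      nvarchar(256),
    DatabaseName    nvarchar(128),
    CaptureDate     datetime DEFAULT (GETDATE())
);
'@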

After creating this table, I am ready to store the data. All I need to do is throw the audit query into an agent job and schedule it to run regularly. For my purposes, I usually only run it once a day.

The Wrap

This has been my diatribe about service and giving back to the community. When done properly, there is a natural born effect of enhancing one’s personal life equal in some way to the amount of effort given towards the community.

Oh, and if you are interested in some of my community contributions (which according to Jens Vestergaard is an awesome contribution), read this series I have published.

Single User Mode – Back to Basics

In a recent article, I took you on a trip through how to hack (ethically) a SQL Server to regain sysadmin access. In that article, I made quick mention of restarting SQL Server into single-user mode. It only makes sense to show quickly how to get into single-user mode.

Before getting into that, I do hope that there is something you will be able to learn from this basics article. If you are curious, there are more basics articles on my blog – here.

Single-User

So, what exactly is this single-user mode thing? Single-user mode is basically the official back door into SQL Server, needed for various reasons such as:

  • Somebody deleted all of the logins that were in the sysadmin role.
  • The sa account is disabled or the password has been forgotten.
  • Somebody deleted any Windows groups that were members of the sysadmin role.
  • All members of the sysadmin role are no longer with the company.
  • You need to restore the master database.
  • You want to keep SQL Server all to yourself because you are greedy!

These are some pretty solid reasons to need to be able to use the back door. But how exactly do we get to the back door?

Two Paths

As luck would have it, there are two ways to enable single-user mode. You can either get there by making some changes for the SQL Server service in Configuration Manager, or you can utilize a command prompt. I won’t cover the GUI path beyond the gentle reminder that you must remember to undo your change when using that method.

My preferred method is through the command line. Using my SQL Server 2017 as the experiment, I would navigate to the Binn directory for that instance – in this case, the path shown in the next image.

Before getting too far ahead of myself, I am going to stop my SQL Server.

Notice, I also queried to find all of my services related to SQL before stopping the MSSQLServer service via the net stop mssqlserver command. We will come back to some net start and net stop commands later.

With the service successfully stopped, I can now restart the service in single-user mode.
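Reconstructing the commands from the screenshots, the startup looks something like this (the Binn path is for a default SQL Server 2017 instance; adjust to yours):

cd 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Binn'
.\sqlservr.exe -m"SQLCMD"    # -m = single-user; "SQLCMD" restricts connections to the sqlcmd app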

And then the validation that we are indeed starting in single-user mode…

But wait, did you notice that bit of trickery on the startup command?

This is a pro-tip for when you must use single-user mode. Inevitably, somebody will steal the single-user connection and you will be locked out of the session. By using an app name after the single-user switch, you are telling SQL Server to only accept connections from that specific application. Since most apps will not be using sqlcmd, you will have far less contention for that connection and you will be able to complete your task much more easily.

You could also pass something like this instead…
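Reconstructed rather than screenshotted, that variant would be:

.\sqlservr.exe -m"Microsoft SQL Server Management Studio - Query"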

In this case, I would be limiting the connections to a query from SSMS (and not object explorer).

Now that I have a single-user connection, I can add a sysadmin or restore the master database or just sit on it and play devious. It all depends on what your objective for the single-user session happens to be.

More Command Line

Remember that reference to the NET commands? Well, it turns out we can also start SQL Server in single-user via net start. Let’s check it out.

The command is pretty simple:
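For a default instance it looks like this (a named instance would use MSSQL$InstanceName):

net start MSSQLSERVER /m"SQLCMD"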

The effect here is the same as navigating to the Binn directory and starting SQL Server with the sqlservr.exe executable. The big difference is considerably less typing and less verbose output of the service startup.

When using the net start method, you do need to know the service name of the SQL Server instance. To get that, I recommend the following PowerShell script.
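Any filter along these lines will do; mine (reconstructed) is about this simple:

Get-Service | Where-Object { $_.DisplayName -like '*SQL*' } |
    Select-Object Name, DisplayName, Status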

This will produce results similar to the following.

From the results, I can pick the SQL Server service and then pass that to the net start command fairly easily.

The Wrap

Starting SQL Server in single-user mode should be a tool every data professional holds in the bag. This is an essential tool that can be used in multiple scenarios and ensure you are able to fully maintain and control your server. I have shown how to get to single-user mode via two command line methods and mentioned a GUI method. The nice thing about the command line methods is that you don’t have to remember to undo the startup switch like you do with the GUI method.

If you feel the need to read more about single-user mode, here is an article and another on the topic.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Syspolicy Phantom Health Records

SQL Server comes with a default SQL agent job installed (for most installations) to help manage the collection of system health data. I would dare say this job is ignored by most people and few probably even know it exists.

This topic is not new to me. You may recall a previous article I wrote detailing one type of failure and how to resolve it. That article can be found here.

I recently ran into a variant of the problem outlined in that previous article that requires just a bit of a different approach. The errors turn out to be similar on the surface but really are different upon closer inspection.

Phantom Health Records

If you are unfamiliar with the topic, I recommend reading the previous article. Then after reading that article, maybe brush up a little bit on the SQL Agent. The failures we will be looking at are within the SQL Agent and come from the job called: syspolicy_purge_history.

For this latest bout of failure, I will start basically where I left off in the last article. The job fails on a server, and I have access to other servers of the same SQL version where the job works.

Before diving any further into this problem, let’s look at what the error is:

A job step received an error at line 1 in a PowerShell script.
The corresponding line is ‘set-executionpolicy RemoteSigned -scope process -Force’.
Correct the script and reschedule the job. The error information returned by PowerShell is:
‘Security error. ‘. Process Exit Code -1. The step failed.

Having the error in hand, and knowing that the job works elsewhere, my next logical step (again based on experience from the last article with this job) is to script the job from another server and use it to replace the job on the server where it fails. In principle this is an AWESOME idea.


Sadly, that idea was met with initial failure. As it turns out, the error remained exactly the same. This is good and unfortunate at the same time. Good in that I was able to confirm that the job was correctly configured with the following script in the job:
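For reference, this is the stock purge step as it ships (reconstructed from a working server, so treat it as approximate; the $(ESCAPE_SQUOTE(...)) and $(ESCAPE_NONE(...)) tokens are SQL Agent macros, not PowerShell):

if ('$(ESCAPE_SQUOTE(INST))' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} ELSE {$a = ''};
(Get-Item SQLSERVER:\SQLPolicy\$(ESCAPE_NONE(SRVR))$a).EraseSystemHealthPhantomRecords()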

Since the step fails from SQL Server, let’s see what else we can do to make it run. Let’s take that code and try it from a PowerShell ISE. So, for giggles, let’s cram that script into the ISE and see what blows up!

Now isn’t that a charming result! This result should be somewhat expected, since the code I just threw into the ISE is not entirely powershell. If you look closer at the code, you will notice that it is using sqlcmd-like conventions to execute a parameterized powershell script. Now, that makes perfect sense, right? So let’s clean it up to look like a standard PoSH script. We need to replace some parameters and then try again.
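With the Agent tokens swapped for real values (server name illustrative), the cleaned-up version is simply:

(Get-Item SQLSERVER:\SQLPolicy\MyServer\DEFAULT).EraseSystemHealthPhantomRecords()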

This will result in the following (resume reading after you scratch your head for a moment):

The key to this failure happens to be in the SQLSERVER: path. PoSH thinks we are trying to pass a drive letter when we are just trying to access the SQLServer provider. Depending on your version of Windows Server, SQL Server, and PoSH, you may need to do one of a couple of different things. For this particular client/issue, this is what I had to try to get the script to work.
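What worked for that client was loading the legacy module explicitly before touching the SQLSERVER: path – a hedged reconstruction (module names differ by version):

Import-Module SQLPS -DisableNameChecking    # legacy module; registers the SQLSERVER: provider
(Get-Item SQLSERVER:\SQLPolicy\MyServer\DEFAULT).EraseSystemHealthPhantomRecords()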

If you read the previous article, you may notice this command looks very much like the command that was causing the problems detailed in that previous article. Yes, we have just concluded our 180-degree return to where we started a few years back. Suffice it to say, this is expected to be a temporary fix until we are able to update the system to PoSH 5 and install the updated sqlserver module.

As is, this script is not quite enough to make the job succeed now. To finish that off, I created a ps1 file to house this script. Then from a new step (defined as a sqlcmd step type) in the syspolicy purge job, I execute that powershell script as follows:
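The exact step command did not survive the draft; hypothetically, whatever the step type, the invocation boils down to something like this (path illustrative):

powershell.exe -ExecutionPolicy Bypass -File "C:\DBA\Scripts\syspolicy_purge.ps1"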

Ta-da! The nuisance job failure alert is resolved and the system is functioning again.

Conclusion

I dare say the quickest resolution to this job is to probably just disable it. I have seen numerous servers with this job disabled entirely for the simple reason that it fails frequently and just creates noise alerts when it fails. Too many fixes abound for this particular job and too few resolve the failures permanently.

I would generally err on the side of fixing the job. Worst case, you learn 1000 ways of what not to do to fix it. 😉

Given this job is tightly related to the system_health black box sessions (sp_server_diagnostics and system_health xe session), I recommend fixing the job. In addition, I also recommend reading the following series about XE and some of those black box recorder sessions – here.
