Mass Backup All Sessions

Migrating Extended Event Sessions from one server to another should be a simple task. So simple, one would think there was no need to give it a second thought, right?

Well, I have previously written about this topic; you are welcome to read it here. The article discusses quite a bit about scripting out your XE Sessions. One thing lacking in that article is the ability to script out every session on a server.

If you are still not using Extended Events, I recommend checking out this library of articles that will cover just about all of the basics concerning Extended Events.

New and Improved

What about scripting out all of the sessions in SSMS? Surely there is an easy way to do that, right? Well, you might think that. Let me step through the problem that I have seen in SSMS (and unfortunately it is not consistent).

First, from Object Explorer Details (or F7), let's try to script a single session.

 

When scripting a single session from the “Object Explorer Details”, I have several sub-menus that allow me to script the session to a “New Query Editor Window”. Now, let’s see what happens when trying to script multiple sessions.

 

 

With several sessions selected, I try yet again to script the sessions and I get an unwanted result. Notice that the "Script Session as" option is grayed out and unusable. However, if I try it again (several times, or maybe just once; your mileage may vary, and it does not seem to be related to the version of SSMS), I may see something like this.

Tada! Luck was with me and it finally worked that time. So, what should I do to be able to consistently script all of the sessions? Well, that comes with an enhancement to the script I presented in the prior article here.

Let's just dive straight into the new script.
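The full script is not reproduced in this excerpt. As a minimal sketch of the core idea (and only a sketch), the following query walks the XE catalog views and generates a skeleton CREATE EVENT SESSION statement for every session on the server. Predicates, event and target parameters, and the WITH (...) session options are deliberately left out, so treat it as a starting point rather than the finished tool.

/* Minimal sketch (not the full script): build a skeleton
   CREATE EVENT SESSION statement for every XE session on the server.
   Predicates, event/target parameters, and session options are omitted. */
SELECT  ses.name AS session_name ,
        N'CREATE EVENT SESSION ' + QUOTENAME(ses.name) + N' ON SERVER'
        /* one ADD EVENT clause per event, comma separated */
        + STUFF(( SELECT NCHAR(10) + N',ADD EVENT ' + e.package + N'.' + e.name
                  FROM   sys.server_event_session_events e
                  WHERE  e.event_session_id = ses.event_session_id
                  FOR XML PATH(''), TYPE ).value('.', 'nvarchar(max)')
                , 2, 1, N'')
        /* one ADD TARGET clause per target, comma separated */
        + ISNULL(STUFF(( SELECT NCHAR(10) + N',ADD TARGET ' + t.package + N'.' + t.name
                         FROM   sys.server_event_session_targets t
                         WHERE  t.event_session_id = ses.event_session_id
                         FOR XML PATH(''), TYPE ).value('.', 'nvarchar(max)')
                       , 2, 1, N''), N'')
        + N';' AS create_statement
FROM    sys.server_event_sessions ses;

Run it in a query window and copy the create_statement column for the sessions you want to migrate; the value of the full script is in re-assembling the pieces this sketch omits.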

The full script is rather lengthy, so I won't explain the entire thing. That said, it will produce each XE session exactly as it was defined when you deployed it to the server. In addition, the script will ensure the destination directory for the event_file target is created as part of the script.

I can definitely hear the gears of thought churning as you ponder this whole scenario. Surely, you have all of your XE Sessions stored in source control, so there is no need whatsoever for this little script. Then again, that would be an ideal environment. Sadly, source control is seldom considered for XE Sessions. Thus, it is always good to have a backup plan.

Why

Sadly, I recently had the very real need to migrate a ton of sessions from one server to another, and the methods in SSMS just wouldn't work. There was no source control in the environment. Building out this little script saved me tons of time in migrating all of the sessions for this server and also provided me with a good script to place in source control.

Conclusion

In the article today, I have provided an excellent tool for backing up all of your XE sessions on the server. This script will help create the necessary scripts for all of your XE Sessions (or even just a single session if you like) in order to migrate the sessions to a new server or place them in source control.

To read more about Extended Events, I recommend this series of articles.

Event Log File Paths

How does one consistently find the correct path to the Extended Event Log file (XEL file)?

This is a topic that I ventured into some time ago. The previous article can be read here. In that article I covered some of the various trouble spots with capturing the file path for various XE log files. One of the main problems is that there is frequently an inconsistency in where XE logs may actually be stored.

Using what was shown in that previous article, I have some improvements and minor tweaks to fill some gaps I hadn't addressed in the previous script.

If you are still not using Extended Events, I recommend checking out this library of articles that will cover just about all of the basics concerning Extended Events.

New and Improved

First, let's just dive straight into the new script.
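The script itself does not appear in this excerpt. As a simplified, set-based sketch of the approach (the real script, as noted below, ends up using a cursor), the following resolves a path per session and labels it with the same "Phase1" / "Phase2" / "FailSafe" groupings discussed in a moment; the resolution rules here are my own approximation rather than the original logic, and the target_data XML shape may vary slightly by version.

/* Simplified sketch: resolve the XEL file path for each session.
   Phase1   = the event_file target was declared with an explicit path.
   Phase2   = the session is running, so ask the live target metadata.
   FailSafe = assume the error log directory (the default XEL location). */
DECLARE @ErrorLog nvarchar(260) = CAST(SERVERPROPERTY('ErrorLogFileName') AS nvarchar(260));
DECLARE @LogDir   nvarchar(260) = LEFT(@ErrorLog, LEN(@ErrorLog) - CHARINDEX(N'\', REVERSE(@ErrorLog)) + 1);

/* Phase1: full path declared in the session definition */
SELECT  ses.name AS session_name ,
        CAST(esf.value AS nvarchar(260)) AS xel_path ,
        'Phase1' AS resolved_by
FROM    sys.server_event_sessions ses
JOIN    sys.server_event_session_targets est
        ON est.event_session_id = ses.event_session_id
JOIN    sys.server_event_session_fields esf
        ON esf.event_session_id = est.event_session_id
       AND esf.[object_id] = est.target_id
WHERE   est.name = 'event_file'
  AND   esf.name = 'filename'
  AND   CAST(esf.value AS nvarchar(260)) LIKE N'%\%'

UNION ALL

/* Phase2: running sessions report the file they are actually writing */
SELECT  s.name ,
        CAST(t.target_data AS xml).value('(EventFileTarget/File/@name)[1]', 'nvarchar(260)') ,
        'Phase2'
FROM    sys.dm_xe_sessions s
JOIN    sys.dm_xe_session_targets t
        ON t.event_session_address = s.[address]
WHERE   t.target_name = 'event_file'

UNION ALL

/* FailSafe: no path declared, session not running - assume the log directory */
SELECT  ses.name ,
        @LogDir + CAST(esf.value AS nvarchar(260)) ,
        'FailSafe'
FROM    sys.server_event_sessions ses
JOIN    sys.server_event_session_targets est
        ON est.event_session_id = ses.event_session_id
JOIN    sys.server_event_session_fields esf
        ON esf.event_session_id = est.event_session_id
       AND esf.[object_id] = est.target_id
WHERE   est.name = 'event_file'
  AND   esf.name = 'filename'
  AND   CAST(esf.value AS nvarchar(260)) NOT LIKE N'%\%';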

One of the things I wanted to accomplish with this update was to find the correct path for all of the sessions on the server. As mentioned in the previous article, sometimes there are complications with that. Due to the way log files can be specified for an XE session, behaviors can be a bit funky when trying to parse the correct paths. Because of those problems, I couldn't quite short-cut the logic in the previous script and had to do the less desirable thing and create a cursor.

In addition to the cursor, I threw in a fix for when a full path is not declared for the session (at the time of creation) and the session was subsequently never started. In these odd cases, the script had been returning an empty result set and thus was not working properly. Now, it is fixed and here is an example of the output.

The third column in this result set is purely for informational purposes so I could determine at which point the file path was being derived. For the 30+ sessions running on my test instance, most paths are resolved via the first select. In the image, that is denoted by the label "Phase1" and circled in red. The system_health session happened to be running, but did not have a full path declared, so it fell into the "Phase2" resolution group and is circled in blue. The last group includes those cases where a path could not be resolved for any number of reasons, so they fall into the "FailSafe" grouping; an example is circled in green in the image.

Why

Truth be told, there is a method to shortcut this script and get the results faster, but I felt it would be less accurate. I could obviously just default to the "FailSafe" group automatically if a full path is not defined in the session creation. Would that be accurate though? Most of the time it would be, but then there are the edge cases where occasionally we forget that something has changed. One such case is when, after the session is created, somebody decides the SQL Server log files need to be moved from the default path (which is where the XEL files land if no path is defined).

I have run across multiple scenarios where the logs were required (both technical as well as political) to be moved from the default location. Ideally, this move occurs prior to server startup. When the log file path is changed, the logs are not moved automatically to the new location. This, for me, is a case where it is best to be thorough rather than snake bit. I also like to document these things so I can compare them later if necessary.

Alternatively, here is the "just good enough to pass muster" version of that script.
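That shortcut script is not included in this excerpt either; a minimal sketch of the idea, simply defaulting to the error log directory whenever no explicit path was declared, might look like this.

/* "Good enough" sketch: take the declared filename and, when no path
   was declared, assume the error log directory (the default XEL home). */
DECLARE @ErrorLog nvarchar(260) = CAST(SERVERPROPERTY('ErrorLogFileName') AS nvarchar(260));
DECLARE @LogDir   nvarchar(260) = LEFT(@ErrorLog, LEN(@ErrorLog) - CHARINDEX(N'\', REVERSE(@ErrorLog)) + 1);

SELECT  ses.name AS session_name ,
        CASE WHEN CAST(esf.value AS nvarchar(260)) LIKE N'%\%'
             THEN CAST(esf.value AS nvarchar(260))
             ELSE @LogDir + CAST(esf.value AS nvarchar(260))
        END AS probable_xel_path
FROM    sys.server_event_sessions ses
JOIN    sys.server_event_session_targets est
        ON est.event_session_id = ses.event_session_id
JOIN    sys.server_event_session_fields esf
        ON esf.event_session_id = est.event_session_id
       AND esf.[object_id] = est.target_id
WHERE   est.name = 'event_file'
  AND   esf.name = 'filename';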

 

Conclusion

In the article today, I have shown some of the internals of retrieving file paths for Extended Event Sessions. I dove into metadata to pull out the path for the session and discussed some concerns with some of these methods. In the end, you have a few viable options to help retrieve the file path in a more consistent fashion.

To read more about Extended Events, I recommend this series of articles.

Puzzles and Daily Trivia

TSQL Tuesday

The second Tuesday of the month comes to us a little early this month. That means it is time again for another group blog party called TSQLTuesday. This party, started by Adam Machanic, has now been going long enough that changes have happened (such as Steve Jones (b | t) managing it now). For a nice long read, you can find a roundup of all TSQLTuesdays over here.

The Why?

This month, Matthew McGiffen (b | t) invites us to a little puzzle party for TSQL Tuesday. Bring your favorite brain teaser, puzzle, question, or interesting and complex TSQL problem/solution.

This ties pretty nicely into the topic from last month (well at least for me). If you recall, last month Todd asked all of us to share some of our uses for databases in our personal lives. I submitted my article here, but forgot about one of my favorite uses for a database in my personal life – a daily trivia set about SQL Server.

So, why not elaborate on that database a bit today? It's almost like a two-for-one. However, there is one quick little departure I want to make first. My first puzzle solved with TSQL was written about many moons ago and can be found here.

TSQL Challenges

A long time ago, Jacob Sebastian ran regular challenges involving TSQL to get you to think about ways to solve problems using TSQL. The site is no longer available, but challenge #97 was about solving Sudoku puzzles. Here is my solution to that particular challenge. A little TSQL and a bit of the black arts and there is a nifty little solution. Even though I have this little trick in the bag, I still solve the Sudoku puzzles the hard way.

What a nice little stroll down memory lane there. That solution alone could satisfy the request for this TSQL Tuesday. Alas, we won’t stop there!

QOTD

Several years ago, I put together a little database to help with daily trivia questions. The database is a simple design, and its primary function is to help teach SQL Server facts and internals to those with varying levels of DBA experience, as well as to helpdesk members. That said, the questions don't have to be SQL specific; that just happens to be my primary use.

With just a few tables and a few procs, I have a database that I can use to create questions, track responses from participants, and email questions and answers to participants on a daily basis – automatically. If I run out of questions, I just add more to the table. Nothing super complex there.
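The actual design isn't shown in this excerpt, but a hypothetical minimal version of that kind of schema could look like the following; the table and column names are mine, not the original design's. A scheduled proc (or Agent job) would then pick the next unsent Question row, email it (for example via Database Mail), and log replies into Response.

/* Hypothetical minimal schema in the spirit of the description above;
   every name here is illustrative rather than the actual design. */
CREATE TABLE dbo.Question
(
    QuestionID   int IDENTITY(1,1) PRIMARY KEY ,
    QuestionText nvarchar(1024) NOT NULL ,
    AnswerText   nvarchar(1024) NOT NULL ,
    Category     nvarchar(128)  NOT NULL DEFAULT (N'SQL Server') ,
    DateSent     date           NULL    -- stamped when the daily email job sends it
);

CREATE TABLE dbo.Participant
(
    ParticipantID int IDENTITY(1,1) PRIMARY KEY ,
    DisplayName   nvarchar(128) NOT NULL ,
    EmailAddress  nvarchar(256) NOT NULL
);

CREATE TABLE dbo.Response
(
    ResponseID    int IDENTITY(1,1) PRIMARY KEY ,
    QuestionID    int NOT NULL REFERENCES dbo.Question (QuestionID) ,
    ParticipantID int NOT NULL REFERENCES dbo.Participant (ParticipantID) ,
    ResponseText  nvarchar(1024) NOT NULL ,
    IsCorrect     bit NULL ,
    ResponseDate  datetime2(0) NOT NULL DEFAULT (SYSDATETIME())
);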

What this offers me is a mechanism to mentor multiple people without the burnout and while also gauging their true interest level in improving their SQL skill set.

Wrapping it Up

One of my favorite database automations is to send trivia style questions daily. This helps me to mentor and assess other DBA talent within an organization without being too aggressive. A database can be used not only for this kind of automation but for numerous other automations as well. Beyond being highly useful for automation and training, there is also the possibility of using TSQL to solve puzzles like Sudoku.

 

Database Offline – Redux

I have written a couple of articles showing how to audit database offline events via the default trace as well as via Extended Events. These are great for a more proactive approach. Sometimes the proactive approach is just not plausible for the given circumstances. A good example of this is when you inherit servers (e.g. new client, new project, new job) and there are databases on the server that were taken offline prior to your inheritance.

In a situation like this, you need to find out when the databases were taken offline, but you're somewhat limited in the data available for your research. The default trace has rolled beyond the retention point for your server, so the events it trapped are gone. Setting up an Extended Events session would be less than helpful given the databases are already offline. So we must find an alternative for determining approximately when the databases were taken offline.

Alternatives

If your predecessor leaves you with a database that has been taken offline and no documentation about it, there is an alternative to find the approximate time it was taken offline – using TSQL. Granted, you could explore the file system and make the same sort of guess based on the file modified date. Data people, though, generally prefer some sort of script, and preferably one that is native to their language (TSQL).

So, let’s take a look at a viable option for figuring out the mystery behind your database that has been set offline with no good info surrounding it.

Running this query, I see the following on my test server:

Note that this query uses GETUTCDATE. The differential_base_time column in sys.master_files is stored in UTC, so in order to compare properly, we need to use a UTC datestamp for the comparison. Also of note, this query doesn't work if there is no full database backup for the database in question. Who doesn't back up their databases though, right? No, seriously, you really should.

Since this is my test server and I am well prepared, I just so happen to have an XEvent session running that tracks offline events, which I can use to compare the results.

Yes, there is a bit of variance between the actual database offline event and the differential_base_time value. This variance is due to the timing of the backup versus the actual offline event. Again, this is about finding an approximate time of when the database was taken offline, as a viable alternative when other data is not really available.

 

Conclusion

We all aspire to having a perfect database environment where nothing surprising or unexpected happens. Unfortunately, that is the stuff of dreams and fairy tales. The unexpected will happen. A database can be offline at the time that you take over responsibility for it. You will be asked when it was taken offline. This script will help you make a reasonable deduction about that offline event in the absence of better data.

Extended Events is a powerful tool to help in troubleshooting and tuning your environment. I recommend investing a little time in reading the 60 day series about Extended Events. This is not a short series but is designed to provide an array of topics to help learn the tool over time. Don’t forget to go back and read the companion article showing how to audit these events via the default trace.

Database Recovery Monitoring with XE

One of the greatest benefits of Extended Events (XE) is how the tool simplifies some otherwise rather difficult tasks.

Recently, I rewrote my database recovery progress report script. That script touches on both the error log and some DMVs, along with some fuzzy logic to join the data sets together. It may not be the most complex script out there, but it is far more complex than using the power of XE.

Database recovery (crash recovery) is a nerve-wracking situation under the wrong conditions. It can be as bad as a root canal, and at times just as necessary to endure that pain. When the business is waiting on you, and you are waiting on the server to finish crash recovery, you feel nervous at best. If you can be of some use and provide some information back to the business, that anxiety dissipates and the business becomes calmer as well. While the previous script can help you get that information easily enough, I want to introduce the easiest method currently available to capture that information.

If you are interested in a history lesson first, here are the first couple of versions of the aforementioned script (here and here).

Discovery First

As always, I like to explore the event repository to see if there is an event that may be applicable to my situation. This can be done via a TSQL script or from the XE GUI. I will almost always break out my scripts to figure out whether an event is there or not.
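The discovery query isn't included in this excerpt; a sketch of that kind of metadata search might look like this.

/* Sketch: search the XE metadata for events related to database recovery. */
SELECT  xp.name AS package_name ,
        xo.name AS event_name ,
        xo.description
FROM    sys.dm_xe_objects xo
JOIN    sys.dm_xe_packages xp
        ON xp.guid = xo.package_guid
WHERE   xo.object_type = 'event'
  AND   xo.name LIKE '%database_recovery%'
ORDER BY xo.name;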

This query will yield any events that match my description. In this case, I am looking for events related to “database_recovery”. This search will yield four relevant events we can use to track our database recovery progress. Those events are shown in the following image (with the event names being circled in green).

If I explore the events a little more, I will eventually come across an attribute in the database_recovery_progress_report event that leads to a map. This map is called recovery_phase. For me, that is an interesting attribute/map and makes me want to look at it further.
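To peek at that map, a quick query against the map metadata (a sketch, but the view and column names are standard) does the trick.

/* List the values exposed by the recovery_phase map. */
SELECT  mv.map_key ,
        mv.map_value
FROM    sys.dm_xe_map_values mv
WHERE   mv.name = 'recovery_phase'
ORDER BY mv.map_key;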

Things are coming together a little bit now. We all know (or should know) that there are the analysis, redo and undo phases to crash recovery. This aligns with that knowledge and throws in a couple more phases of the recovery process.

So, now we know there are four relevant events for us to use and that one of the events will tell us specifically which phase of recovery is currently processing. We have enough information that an event session can now be built.
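The session definition itself is not included in this excerpt. Here is a hedged sketch of what such a session could look like, pared down to two of the events surfaced by the discovery query (verify the event names against that query's output on your instance); the session name, the action list, the target file name, and the session options below are illustrative choices rather than the original definition.

/* Hedged sketch of a database recovery monitoring session.  Add the
   remaining discovered events in the same fashion and adjust to taste. */
CREATE EVENT SESSION [DBRecoveryMonitor]
ON SERVER
ADD EVENT sqlserver.database_recovery_progress_report
(   ACTION ( package0.event_sequence,
             sqlserver.client_app_name,
             sqlserver.client_hostname,
             sqlserver.database_id,
             sqlserver.database_name,
             sqlserver.nt_username,
             sqlserver.server_principal_name,
             sqlserver.session_id,
             sqlserver.sql_text )
),
ADD EVENT sqlserver.database_recovery_times
(   ACTION ( package0.event_sequence,
             sqlserver.database_id,
             sqlserver.database_name )
)
ADD TARGET package0.event_file
(   SET filename = N'DBRecoveryMonitor'   -- no path given: lands in the error log directory
)
WITH (  MAX_MEMORY = 4096 KB,
        EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
        MAX_DISPATCH_LATENCY = 5 SECONDS,
        TRACK_CAUSALITY = ON,
        STARTUP_STATE = ON );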

You may notice that I have thrown a lot of actions, including the kitchen sink, at this event session. Some of that is for consistency across sessions and some of it is simply for exploratory wants (and not needs). Feel free to add or remove actions from this list as you explore the use of this session in your environment.

Here is what that session produces on my test server with a simple stop/start of the SQL Server instance.

In the preceding image, I have the different events circled in red. I have also added the event_sequence action so I can see the relationship between these events as they progress from one to the next. If you note the items circled in green (and connected by a green arrow), you will see a couple of different things, such as the trace message, the database name, the database id, and the recovery time remaining. Circled in blue are the "destress" items that let us know that the recovery is 100% complete.

Wrap

SQL Server recovery is a safeguard to protect the data in the event of an unexpected failure. The recovery process is necessary and has several phases to roll through in order to bring the database back online.

Just because you need to have patience during the crash recovery process does not mean you have to work hard to get a status of the process or that you need to stress throughout the process. This XE event session will take a lot of work and stress out of the process. I would recommend having this lightweight session running on the server with the startup state set to enabled. This will make your job easier and definitely can make you look like a rockstar DBA.

This article has demonstrated the power of Extended Events. For a lot more reading on the topic, here is a list of over 100 articles.
