XEvent Session Supporting Cast

So far, in this series, I have discussed the means to view the metadata associated with Extended Events. I also demonstrated some quick and simple means to retrieve some metadata via the catalog views and DMVs.

Beyond the metadata in the catalog views and DMVs, there are a few more objects that come in handy when dealing with Extended Events. I call these the supporting cast. This cast comes in the form of yet another DMV and a couple of tables. Granted, these are not the last of the tools for dealing with Extended Events; the remaining tools would be classified differently and are going to be saved for another time.

Supporting Cast

Tables

First up in the supporting cast role is the group of tables. These are not your ordinary tables. These tables could be your best friend as you work to convert yourself away from server-side traces and Profiler traces. They contain conversion (and therefore static) data to help translate from server trace events to Extended Events events (yes, it is Extended Events events – sounds redundant, but that is what it is).

The tables you will come to love are:

trace_xe_event_map – One row per Extended Events event that is mapped to a SQL Trace event class.
trace_xe_action_map – One row per Extended Events action that is mapped to a SQL Trace column ID.

And, as you might have imagined, the quick and easy way to view which tables are related to Extended Events is fairly straightforward as well.
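
A minimal sketch of such a query, leaning on the trace_xe naming pattern:

SELECT o.name, o.type_desc
FROM master.sys.all_objects o
WHERE o.name LIKE 'trace_xe%'  -- the conversion tables share this prefix
ORDER BY o.name;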

If I query the trace_xe_event_map table, I will see the mapping of 138 trace event IDs to their Extended Events events, along with the package through which each event is exposed for Extended Events. Don’t worry about packages just yet; I will discuss those at a later time. If I do the same thing for the trace_xe_action_map table, I will see a similar result.
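
That mapping can be viewed with a join along these lines (a sketch of the standard map query, not necessarily the exact one behind the numbers above):

SELECT te.trace_event_id,
       te.name AS trace_event_name,
       em.package_name,
       em.xe_event_name
FROM sys.trace_events te
INNER JOIN sys.trace_xe_event_map em
    ON te.trace_event_id = em.trace_event_id  -- only mapped trace events are returned
ORDER BY te.trace_event_id;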

As I mentioned previously, these tables will be essential in converting any of those old Profiler or server-side traces to the XEvent platform.

DMV

Of the XEvent DMVs, all but one deal with the metadata of running sessions. The remaining DMV deals more with troubleshooting and session performance than it does with metadata. That DMV is sys.dm_os_dispatcher_pools.

Why is this DMV thrown down to the supporting cast role? Well, despite (current) documentation, this DMV is not used exclusively by Extended Events. You will see dispatcher pools and threads for various background tasks such as XEvents, In-Memory OLTP (Hekaton), and FILESTREAM – to name a few.

[Results of a query against sys.dm_os_dispatcher_pools]

The preceding results could be viewed using a query similar to the following:
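
Something like this minimal sketch would do it:

SELECT dp.name,
       dp.dispatcher_count,          -- threads currently in the pool
       dp.dispatcher_waiting_count,  -- threads sitting idle
       dp.queue_length               -- work items waiting to be processed
FROM sys.dm_os_dispatcher_pools dp
ORDER BY dp.name;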

If it seems there may be a problem with the server and you just don’t see events dumping into the target as quickly as you think they should, you may want to check this DMV. In it, one will find things like queue_length and dispatcher_waiting_count. A high dispatcher_waiting_count would indicate that no events are firing, while a high queue_length would indicate that many events are backing up while waiting to be processed.

Stay tuned for more quick tidbits as I continue this series.

Viewing Running XEvent Metadata

Published on: September 3, 2015

In the previous article, I wrote about getting into some of the metadata to take a peek at session metadata relevant to deployed Extended Events sessions. Today, I will continue talking about metadata. The difference is that today I will focus on diving into the metadata of running sessions.

Running session metadata, while it can still be viewed in the catalog views, is exposed via DMVs. Putting it a slightly different way: if I query the DMVs related to Extended Events, I will only see metadata for running sessions.

DMV

Similar to the catalog views, there is a consistent naming pattern and a simple query to help us figure out what is out there related to Extended Events.
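
A minimal sketch of such a query, leaning on the documented dm_xe naming pattern:

SELECT o.name, o.type_desc
FROM sys.system_objects o
WHERE o.name LIKE 'dm_xe%'  -- all XEvent DMVs share this prefix
ORDER BY o.name;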

That little query will show us a list of DMVs like the following – descriptions added.

sys.dm_xe_objects – List of objects exposed by an event package.
sys.dm_xe_object_columns – Schema information for the objects.
sys.dm_xe_packages – Lists all the packages registered with the Extended Events engine.
sys.dm_xe_sessions – Lists the running Extended Events sessions.
sys.dm_xe_session_targets – Information about session targets.
sys.dm_xe_session_events – Information about events tied to a running session.
sys.dm_xe_session_event_actions – Information about the “actions” tied to an event in a running session.
sys.dm_xe_map_values – Provides a mapping of internal numeric keys to human-readable text.
sys.dm_xe_session_object_columns – Shows the configuration values for objects that are bound to a session.

Since much of this is information about running sessions, it becomes very useful in querying the metadata for those sessions. However, if a session is not running and you don’t realize that certain DMVs (e.g. the sys.dm_xe_session* named DMVs) are only applicable to running sessions – you could end up being rather frustrated.

So, in a similar vein to the query presented in the previous article: if I want to query the DMVs to find where I might be storing session data, I would run something like the following.
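
A sketch of that query (the event_file filter is an assumption, narrowing the results to file-based targets):

SELECT s.name AS session_name,
       t.target_name,
       t.target_data  -- XML blob describing the target, including the file path
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets t
    ON s.address = t.event_session_address
WHERE t.target_name = 'event_file';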

This query will show us the file location for each of the running sessions that save data to a file on the operating system (as opposed to memory). But look at the results real quick. The target_data is more than just the file path. Sure, you can manually parse that information from the results. Or you could go back to just using the query provided in the previous article. Or, get daring and try a little XML parsing – with this query.
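
Here is one way of doing that parsing (a sketch; the EventFileTarget/File shape of target_data may vary between versions):

SELECT s.name AS session_name,
       CAST(t.target_data AS XML).value('(EventFileTarget/File/@name)[1]',
                                        'nvarchar(260)') AS file_path
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets t
    ON s.address = t.event_session_address
WHERE t.target_name = 'event_file';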

And here we have yet another one of those frustrating things when dealing with Extended Events. Not only do the DMVs just show the information as it pertains to running sessions, but you also have to deal with XML now. Don’t think this is the last of dealing with XML in XEvents – not by a long shot!

This is just one quick example of the type of data that can be viewed from the running session metadata. Imagine wanting to figure out how long the session has been running. Or maybe you want to know if there is something that might be blocking an event from firing. The data in these DMVs can help you access that kind of data – quickly!

Stay tuned for more XE content soon to come!

Viewing Deployed XEvent Metadata

Published on: September 2, 2015

Today will be the first of a few introductory level posts about Extended Events.

When dealing with Extended Events, there are a few ways to take a look at some of the metadata. Not all metadata is created equal when dealing with Extended Events. Some of the metadata is pertinent to a running XEvent session and some is pertinent to a deployed session.

If you have a session that has been configured/deployed to an instance of SQL Server, you will want to look at the Catalog Views – especially if that session is not running. Today, I will take a quick look at the catalog views. Yes – quick!

Catalog Views

First, I want to cover a quick query that will reveal what catalog views are available for your use. The query is very simple.
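
A minimal sketch of that query:

SELECT v.name, v.type_desc
FROM sys.system_views v
WHERE v.name LIKE 'server_event_session%'  -- the common XEvent catalog view prefix
ORDER BY v.name;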

Straightforward enough – catalog views for Extended Events have a very common naming pattern: server_event_session%. The rest of the predicate is just for eye candy.

Among the results returned by that query, we will see a list such as this:

Use the following catalog views to obtain the metadata that is created when you create an event session.

sys.server_event_sessions – Lists all editable, deployed event sessions.
sys.server_event_session_actions – List of actions on each event of an event session.
sys.server_event_session_events – List of each event in an event session.
sys.server_event_session_fields – List of customizable columns that were set on events and targets.
sys.server_event_session_targets – List of event targets for an event session.

Beyond just a list of the Catalog views, I find it useful to combine some of these to garner the information that would be useful in querying data from the session or even to possibly rebuild the event session if needed.

One may want to figure out where the data for an event session is being stored. The first inclination may be to look at the server_event_session_targets catalog view. Doing that could cause a little frustration since you will only see some binary data and the type of target attached to the session.

But, playing with the views a little more and becoming a little more familiar with the data they present, you may notice that server_event_session_fields suddenly becomes more attractive. Why? Well, because it contains the file paths that would be necessary to query session data – if the session is deployed to a file target. Suddenly, hope is not lost.

To get that data, one would need to write a query like this:
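
A sketch of such a query (the 'filename' field is the customizable field used by the event_file target):

SELECT es.name AS session_name,
       est.name AS target_name,
       esf.name AS field_name,
       esf.value AS field_value  -- holds the file path for event_file targets
FROM sys.server_event_sessions es
INNER JOIN sys.server_event_session_targets est
    ON es.event_session_id = est.event_session_id
INNER JOIN sys.server_event_session_fields esf
    ON est.event_session_id = esf.event_session_id
   AND est.target_id = esf.object_id  -- fields hang off the target via object_id
WHERE esf.name = 'filename';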

This demonstrates one of the more frustrating (though not by far the most frustrating) things about dealing with Extended Events: the entity-attribute-value model employed to store the metadata. This is nothing new within SQL Server (e.g. agent jobs, schedules, or even sysobjvalues – internally). Sadly, that doesn’t make dealing with the metadata terribly easy – but knowing can make it more manageable.

So, my recommendation here is to play around a bit and start to get to know the catalog views as they pertain to extended event metadata.

Stay tuned for more posts like this. As was mentioned, this is the first in what will be a long series on Extended Events.

Shredding Extended Event Actions

The other day I wrote about that torrid love/hate relationship DBAs tend to have with working with XML. In that same post, I promised I would have a follow-up post about XML in Extended Events.

Well, today I present to you another opportunity for you to renew your love/hate relationship with XML.

In the previous post (which you can read here), I discussed just one facet of shredding the XML related to Extended Events. Today, we have “actions” to discuss. I am not going to get into what an “action” is today. I will save that for a near future post.

For the shredding of the actions, I will use the TreeHuggerCPU Event Session I used in the last article (for the sake of consistency). The session doesn’t necessarily need to be running. I will just pull the actions related to the session from the metadata accessible via the system catalog views.

Should the session be running (and in my case it is for demonstration purposes), I could open the session data and view it as XML and see something like the following:

[Screenshot: session data XML with two types of nodes highlighted]

I have highlighted two different types of nodes available in the XML data. In the previous article, I discussed the “data” nodes, which are highlighted in red here. Today, we are talking actions, and those are highlighted in green this time around. It is the “action” nodes that we will be shredding via the script below.

With this script, I can search for all actions tied to an XE session, for the actions tied to one event within a single XE session, for all actions tied to a specific event across multiple sessions, or for all actions tied to any event in any event session deployed to the server.
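
What follows is a minimal sketch of the idea (not necessarily the exact script from this post): it generates a .value() shred line for each action from the catalog views, with NULL-able parameters (@SessionName and @EventName – illustrative names) providing the filtering options just described.

DECLARE @SessionName sysname = NULL,  -- a session name, or NULL for all sessions
        @EventName   sysname = NULL;  -- an event name, or NULL for all events

SELECT es.name AS session_name,
       ese.name AS event_name,
       -- build the shred statement for each action bound to the event
       'event_data.value(''(event/action[@name="' + esa.name
         + '"]/value)[1]'', ''varchar(max)'') AS [' + esa.name + ']' AS action_parse_sql
FROM sys.server_event_sessions es
INNER JOIN sys.server_event_session_events ese
    ON es.event_session_id = ese.event_session_id
INNER JOIN sys.server_event_session_actions esa
    ON ese.event_session_id = esa.event_session_id
   AND ese.event_id = esa.event_id
WHERE (es.name = @SessionName OR @SessionName IS NULL)
  AND (ese.name = @EventName OR @EventName IS NULL);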

Combine this with the previous script and suddenly all of that XML just got several times easier.

Shredding XML in XEvents

One of the biggest pains with Extended Events is the thing we love to hate – XML. XML is so foreign to many DBAs. It’s not relational and often brings nightmares of parsing and performance issues.

Despite that, Extended Events takes advantage of XML much like we have seen in so many other areas of SQL Server. You might be familiar with execution plans, SSRS, SSIS, or maybe even the ring buffer. If you look hard enough, you will find XML within the database engine. Whether you love or hate it, you still have to deal with it. Today, I want to dive into a means of dealing with XML, as far as Extended Events is concerned.

Mad Scientist Lab

Let’s head on over to the lab to dive into XML head first. I will be using, as an example, a session about which I have previously written – here.

If the session is already deployed – great. If not, you may need to create it to execute these scripts (successfully). Note that I am starting the session, then fetching some data, and then stopping the session. To see some data, you may want to wait a few cycles before stopping the event session.
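
A sketch of that flow, assuming the TreeHuggerCPU session from the earlier post is deployed:

ALTER EVENT SESSION TreeHuggerCPU ON SERVER STATE = START;

WAITFOR DELAY '00:00:30';  -- give the session a few cycles to collect events

SELECT s.name AS session_name,
       CAST(t.target_data AS XML) AS session_data  -- the raw XML we will shred
FROM sys.dm_xe_sessions s
INNER JOIN sys.dm_xe_session_targets t
    ON s.address = t.event_session_address
WHERE s.name = 'TreeHuggerCPU';

ALTER EVENT SESSION TreeHuggerCPU ON SERVER STATE = STOP;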

The sole purpose is just so I can take a look at the session data in XML format. Now that I have some data, I would see something that might look like the following:

For today’s lab, I just want to focus on the “data” node while saving the attributes of the event node, and the action node(s) for another discussion.

[Screenshot: event XML with the “data” nodes highlighted]

The “data” node happens to be the data that is directly tied to an Extended Event event. When looking at the event metadata, this would be called the event columns. Knowing that all of these columns follow a fairly standard format can make it a bit easier to figure out how to query this data. One of the daunting things with XML is figuring out how to query the XML data to make it more relational – a format we may be more accustomed to seeing (as DBAs).

Due to the daunting task of figuring out how to query the XML, and because it is in a pretty decent format for consumption, I decided to simplify the entire process. Why not write some code that will write the XML parsing code for me? And that is what we have here.
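
A minimal sketch of that generator (not necessarily the exact script from this post): it builds a .value() line for each “data” column of the events in a session, pulling the column names from sys.dm_xe_object_columns. The NULL-able @SessionName and @EventName parameters are illustrative.

DECLARE @SessionName sysname = NULL,  -- a session name, or NULL for all sessions
        @EventName   sysname = NULL;  -- an event name, or NULL for all events

SELECT es.name AS session_name,
       ese.name AS event_name,
       -- build the shred statement for each data column of the event
       'event_data.value(''(event/data[@name="' + oc.name
         + '"]/value)[1]'', ''varchar(max)'') AS [' + oc.name + ']' AS data_parse_sql
FROM sys.server_event_sessions es
INNER JOIN sys.server_event_session_events ese
    ON es.event_session_id = ese.event_session_id
INNER JOIN sys.dm_xe_object_columns oc
    ON oc.object_name = ese.name
   AND oc.column_type = 'data'  -- only the event payload columns
WHERE (es.name = @SessionName OR @SessionName IS NULL)
  AND (ese.name = @EventName OR @EventName IS NULL);

The generic varchar(max) conversions are deliberate; edit the types and aliases to taste after generating the statements.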

This script will take an event session name, an Extended Events event name, or a combination of both (imagine having multiple events tied to a session) to produce the XML parse strings automagically. The script only produces some pretty generic column aliases, so that part is left to the user of the script to edit after generating the XML parse statements.

With this script, I can quickly retrieve all of the XML parse statements for all of the data nodes within the session or event that I specify. This can significantly reduce the amount of time taken to produce a usable script to consume the event session data.

This is just the script to parse the event data. If there are actions tied to the session, this will not produce the statements for those actions. The script for that will be provided in a future article. Stay tuned!

Compressing Outcomes

Published on: August 24, 2015

You find yourself cruising along, just doing your thing as a database administrator. Then the CIO plops down into a spare chair in your cubicle to talk about the weather.

Well, not entirely true there. The CIO dropped in because the alerts for disk space have been blowing up everybody’s inbox. He informs you that the situation isn’t going to change anytime soon because new shelves for the SAN just can’t be ordered at the moment. In the end, you are given a directive – just without so many words. Fix the problem, and do it without more disk space.

Fantastic you think to yourself. What to do now. Then you get the Gru light-bulb and it is on!

Enter compression. Ok, compression isn’t something entirely new. It has been a feature shipped with SQL Server since 2008 (Enterprise edition) and can be just the thing to save the day (and everybody’s inbox before they start ignoring all alerts).

The decision has been made, and the biggest tables in the database are now compressed using page compression. But you are concerned that it didn’t work because the space savings aren’t what you had hoped they would be. Lucky for you, you are running SQL Server 2012 and have the ability to start diving into the internals to confirm whether your worry is fact or fiction.

Using this handy dandy query that you are about to copy and paste from the internet, you can quickly investigate to see if there are pages in the tables that were unaffected by your attempts to compress the table.

Thanks to an undocumented function called sys.dm_db_database_page_allocations, we can quickly investigate whether there are pages that failed to compress. We can also see which compression state they are in – if they are compressed at all.
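
A sketch of such a query (the function is undocumented, so column names may shift between versions, and is_page_compressed is only populated in DETAILED mode; the “considerations” column mentioned below is omitted here):

SELECT OBJECT_NAME(dpa.object_id) AS table_name,
       dpa.index_id,
       dpa.page_type_desc,
       dpa.is_page_compressed,  -- did page compression actually take effect?
       COUNT(*) AS page_count
FROM sys.dm_db_database_page_allocations(DB_ID(), NULL, NULL, NULL, 'DETAILED') dpa
WHERE dpa.is_allocated = 1
  AND dpa.page_type_desc = 'DATA_PAGE'
GROUP BY dpa.object_id, dpa.index_id, dpa.page_type_desc, dpa.is_page_compressed
ORDER BY table_name;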

Adding a script like this to your repository can be an easy aid in the struggle to verify whether your results match your expectations. This would save a bit of time and allow you to move on to bigger and badder things – like 2048.

In addition to looking at the compression status for each page, I have thrown in a little extra. Call it the “considerations” column. Based on activity hitting the table or index, you may want to consider a different level of compression. This additional data on the end of the output will help start you in that direction.

Tables, Columns and Defaults Oh My!

Published on: August 19, 2015

Picture this if you could. You inherit a new database either through the change of a job, or finding a black box database server, or maybe due to the acquisition of a new application.

No matter the cause, you are now responsible for maintaining that database and you also really need to become acquainted with it. Some of the acquaintance that is needed is learning how things are related and figuring out what data goes where. Not just what data, but what type of data.

I know that one of the things I am frequently asked to do is figure out what data type belongs with which column. Furthermore, does that same column exist in more than one table? It’s nice when things are named consistently and you are able to tie things together by column name.

There are also times when that consistently named column needs its data type confirmed across each of the implementations. I know I have run into cases where the column is defined differently in different tables.

So, what to do to tie this all together and get that intimate acquaintance with your database?

Luckily, I have a script that will help you figure out some of these connections.

First, the script.
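
A minimal sketch of the idea (the original script may differ): it finds every table carrying the column named in @ColName, along with the data type specification and any default constraint.

DECLARE @ColName sysname = 'AccountNumber';

SELECT SCHEMA_NAME(t.schema_id) AS schema_name,
       t.name AS table_name,
       c.name AS column_name,
       ty.name AS data_type,
       c.max_length,
       c.precision,
       c.scale,
       dc.definition AS default_definition  -- NULL when no default exists
FROM sys.columns c
INNER JOIN sys.tables t
    ON c.object_id = t.object_id
INNER JOIN sys.types ty
    ON c.user_type_id = ty.user_type_id
LEFT OUTER JOIN sys.default_constraints dc
    ON c.default_object_id = dc.object_id
WHERE c.name = @ColName
ORDER BY t.name;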

Now, let’s give it a trial run. See if you can spot the difference by running that against the AdventureWorks database using the value “AccountNumber” for the @ColName parameter.

[Screenshot: query results for the AccountNumber column across three tables]

Maybe there is a legitimate reason for that column to have a different data type specification than the other two. Maybe there isn’t. One thing is certain: you will be able to walk away with some questions to ask and, hopefully, gain a better understanding!

Compressing Encrypted Backups

A common requirement, whether born out of pure want or truly out of necessity, is to make a large, encrypted database backup file much smaller.

This was a knock on the early days of Transparent Data Encryption (circa SQL Server 2012). If TDE was enabled, then a compressed backup (though compression was available) was not really an option. Not only did compression in the 2012 implementation of TDE fail to make the database backup smaller, it occasionally caused it to be larger.

This was a problem. And it is still a problem if you are still on SQL 2012. Having potentially seen this problem, amongst many others, Ken Wilson (blog | twitter) decided to ask us to talk about some of these things as part of the TSQL Tuesday blog party. Read all about that invite here.

Encrypted and Compressed

Well, thankfully Microsoft saw the shortcoming as well. With SQL Server 2014, MS released some pretty cool changes to help us encrypt and compress our database backups at rest.

Now, instead of a database backup that could potentially get larger due to encryption and compression combined, we have a significant hope of reducing the encrypted backup footprint to something much smaller. Here is a quick example using the AdventureWorks2014 database.

In this little exercise, I will perform three backups. But before I can even get to those, I need to ensure I have a Master Key set and a certificate created. The encrypted backups will require the use of that certificate.

Do this in a sandbox environment please. Do not do this on a production server.
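
A sketch of that setup (the password and certificate name are illustrative – use your own):

USE master;
GO
-- The master key protects the certificate used for backup encryption.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Us3AStr0ngP@ssw0rdHere!';
GO
CREATE CERTIFICATE BackupDemoCert
    WITH SUBJECT = 'Backup encryption demo certificate';
GO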

In the first backup, I will back up the AW database using both encryption and compression. Once that is finished, I will take a backup that utilizes only the encryption feature. The last backup will be a compressed-only backup. If all goes well, the three backups should show the space savings and encryption settings of each backup. The compressed and encrypted backup should also show savings equivalent to the compression-only backup.
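
A sketch of those three backups (file paths and certificate name are illustrative):

BACKUP DATABASE AdventureWorks2014
    TO DISK = 'C:\Backup\AW2014_CompEnc.bak'
    WITH COMPRESSION,
         ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupDemoCert);

BACKUP DATABASE AdventureWorks2014
    TO DISK = 'C:\Backup\AW2014_Enc.bak'
    WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupDemoCert);

BACKUP DATABASE AdventureWorks2014
    TO DISK = 'C:\Backup\AW2014_Comp.bak'
    WITH COMPRESSION;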

With that script executed, I can query the backup information in the msdb database to take a peek at what happened.
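
Something along these lines will do it (a sketch; the MB aliases match the columns discussed below):

SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_size / 1048576.0 AS BackSizeMB,              -- logical backup size
       bs.compressed_backup_size / 1048576.0 AS CompBackSizeMB, -- actual size on disk
       bs.key_algorithm,
       bs.encryptor_type
FROM msdb.dbo.backupset bs
WHERE bs.database_name = 'AdventureWorks2014'
ORDER BY bs.backup_start_date DESC;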

This should produce results similar to the following:

[Screenshot: backup size and encryption details from msdb]

Looking at the results, I can see that the compression-only backup and the compression-with-encryption backup show very similar space savings. The compression-only backup dropped to 45.50MB and the compression-with-encryption backup dropped to 45.53MB. The encryption-only backup, interestingly, showed that the CompBackSizeMB (compressed_backup_size) got larger (for this backup, that column reflects the actual size on disk).

At any rate, the compression now works with an encrypted backup and your backup footprint can be smaller while the data is protected at rest. Just don’t go using the same certificate and password for all of your encrypted backups. That would be like putting all of your eggs in one basket.

With the space savings available in 2014, and if you are using SQL 2014, why not use encrypted backups?

What is DBCC?

If I ask you what DBCC is, what would your answer be?

I want you to think about that one for just a little bit. What comes to mind when you hear DBCC? Is it the default go-to used when talking about consistency checks? Is it something more involved? Maybe it is something entirely different?

I ponder this because I hear it from time to time used as a complete replacement for CheckDB. I have to stop and think for a minute, because as of SQL Server 2016 there are 35 documented DBCC statements of various uses. Never mind the bounty of undocumented statements (e.g. PAGE, IND).

So, I ask, what does DBCC mean to you?

In trying to come up with some sort of answer, I did a little bit of research. If you look in the documentation (including the SQL 2000 documentation), one will find this definition for DBCC:

The Transact-SQL programming language provides DBCC statements that act as Database Console Commands for SQL Server.

It seems the documentation says pretty clearly that DBCC is Database Console Commands. (And the SQL 2000 documentation says something similar.) But every now and then you hear somebody say it means Database Consistency Checker. The root of that seems to come from the SQL 6.5 or earlier days. I found this snippet in BOL from SQL 6.5:

Used to check the logical and physical consistency of a database, check memory usage, decrease the size of a database, check performance statistics, and so on. DBCC is the SQL Server “database consistency checker.” DBCC helps ensure the physical and logical consistency of a database; however, DBCC is not corrective. It is recommended that you make periodic checks to ensure the logical and physical consistency of your data.

This seems a little odd to me, since not all DBCC statements are used for database consistency checking as this definition would call it. It seems more a documentation bug that has taken hold than a legitimate acronym. Just think about it. Alongside that definition, one will also see a list of DBCC statements – many of which do not perform consistency checks. Let’s look at a couple of them.

For instance, DBCC PINTABLE is hardly useful for checking consistency. DBCC TRACESTATUS is another one that does not quite fit this acronym. That is likely why you will see the difference between the SQL 2000 and later documentation for DBCC.

Don’t be surprised if you hear me ask which DBCC statement is meant if I hear somebody say they ran DBCCs. It is just plain clearer to hear “CheckDB came up clean” over the alternative “DBCC came up clean”. There is little room for interpretation there.

In short, DBCC is an acronym for Database Console Commands, and calling it Database Consistency Checker seems to have been more of a documentation mistake.

Another interesting thought: how many people say DBCC command(s)? Think about that one for a minute – Database Console Command command(s). That one is less of an issue, imho, than the prior.

Just remember, think about what you mean to say when talking about a specific DBCC statement and if you really mean a consistency check, try saying CheckDB instead. I bet the clarity in the conversation will improve and there will be less hair tugging.

HealthySQL – A Review

How well do you know the health of your SQL Servers and databases? Ever wonder about the current health or even what the health was like a few months back? Sure, one could anecdotally say the databases and servers are healthy, but how do you quantify it?

Many have asked or pondered what to do to quantify a healthy SQL server. How do you get the hard numbers to show whether the server is or is not healthy? How do you get the numbers to show that the server is performing just as well or better than it did three months ago?

Now we have a way to answer those questions and more! Robert Pearl has finally published his book on Healthy SQL. In the book, Robert takes you on a journey from mapping out a plan, to capturing data, to storing that data, and then to reporting on that data. You will learn what to capture and what some of the various things actually mean (e.g. waits and indexes).

Throughout the book, you will see that Robert introduces you to various tools. These tools can either be native to SQL Server or third-party tools. You will get a good primer on what each tool is and how to quickly use it as you become acquainted with the internals and the health of your database and server.

As you progress through the book and learn about what makes a database healthy, you will encounter a chapter on how to store the data and make a repository to track your database health. You will want to create a repository so you can progress into the sections on reporting about your database health. Who doesn’t want to market to management how healthy the databases are?

With the tools in this book, and the repository you will create, audits and discussions will become much easier. You will be able to implement processes to help make you more aware of the environment and help keep you from getting that annoying page in the middle of the night.

Take a look at the book and enjoy.  You can get it from Amazon here.

If you are a young DBA, new to being a DBA, an accidental DBA, or just any DBA looking for a little extra help in figuring out how to check and track the health of your server, it is worth reading this book.
