Awesome SQL Server Feature

The second Tuesday of April 2016 is now upon us and you know what that means. Well, I hope you know what that means.

It is time for TSQL Tuesday. It is now the 77th edition of this monthly blog party. This month the host is Jens Vestergaard (blog | twitter) and he insists we do a little soul searching to figure out what it is about SQL Server that really makes our hearts go pitter-patter. Ok, so he didn’t really put it that way but you get the point, right? What is it about SQL Server that ROCKS in your opinion?

Well, I think there are a lot of really cool features in SQL Server that ROCK! It really is hard to pick just one feature because there are a lot of really good features that can make life so much easier as a database professional. Then again, there is that topic that bubbles to the top in my articles – a lot. If you haven’t followed my blog, here is a quick clue: click here.

Why is this feature so AWESOME?

Truth be told, there are a ton of reasons why I really like it. Before diving into the why, I need to share an experience.

A client using Microsoft Dynamics AX to manage the Point of Sale (POS) systems for their retail chain has been running into a problem with the POS database at each store. Approximately a year ago, this client had upgraded most of the store databases from SQL Server Express Edition to Standard Edition due to the size restriction of the Express Edition. This SKU upgrade was necessary because the database had grown to exceed 10GB. Most of this growth was explicitly related to the INVENTDIM table consuming 3.5GB of space in the data file.

Right here, you may be asking what the big deal is. Just upgrade the SKU to Standard Edition and don’t worry about the size of the database. I mean, that is an easy fix, right? Sure, that may be perfectly acceptable in an environment with one or maybe even a handful of servers. Imagine a retail chain with more than 120 stores and a database at each store. Now extrapolate Standard Edition licensing costs for all of those stores. Suddenly we are talking a pretty big expense just to upgrade. All of that because one table chews up 35% of the size limitation of a data file in SQL Server Express Edition.

What if there was an alternative with SQL Express to mitigate that cost and maintain the POS functionality? Enter the SYNONYM! You may recall from a previous post a thing or two that I have said about synonyms in SQL Server. There is good and bad to be had with this feature and most of the bad comes from implementation and not the feature itself.

Using a synonym, I can extend this database beyond the 10GB limitation – or at least that is the proposed theory. To make this work properly, the plan was to create a new database, copy the INVENTDIM table from the POS database to this new database, rename the old INVENTDIM table in the POS database, create a synonym referencing the new table in the new database, and then select from the table to confirm functionality. Sounds easy right? Here is the script that basically goes with that set of steps.
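A minimal sketch of that script follows. AxRetailDIM is the new database name (it surfaces in an error message later in this post), while AxDB is a stand-in for the actual POS database name:

USE master;
GO
-- New database to hold the relocated table
CREATE DATABASE AxRetailDIM;
GO
-- Copy the table into the new database
SELECT *
  INTO AxRetailDIM.dbo.INVENTDIM
  FROM AxDB.dbo.INVENTDIM;
GO
USE AxDB;
GO
-- Keep the original table around (renamed) as a failsafe
EXEC sp_rename 'dbo.INVENTDIM', 'INVENTDIM_OLD';
GO
-- Point a synonym at the relocated table
CREATE SYNONYM dbo.INVENTDIM FOR AxRetailDIM.dbo.INVENTDIM;
GO
-- Trim step (discussed below): delete dimension rows not tied to an inventory
-- item or barcode sold at this store. The actual join criteria are specific to
-- the engagement and are not reproduced here.

-- Confirm functionality
SELECT TOP (10) * FROM dbo.INVENTDIM;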

This seems to make a fair amount of sense. Querying the INVENTDIM synonym produces results just as would be expected. Notice that there is one additional step in the script which I did not mention. That step removes unnecessary rows from the INVENTDIM table based on an actual inventory item or barcode for the particular dimension variant related to the item. This helps trim the table down to the rows relevant to what the retail store actually has available for purchase. In addition, it serves as a failsafe to get the data down to less than 10GB in case the synonym approach fails.

Testing from within SQL Server proved very optimistic. The synonym was working exactly as desired. Next up was to test the change by performing various transactions through the POS.

The solution not only failed, it failed consistently and dramatically. It didn’t even come close. How is this possible? What is Dynamics AX doing that could possibly subvert the synonym implementation? Time to start troubleshooting.

I checked through the logs. Nothing to be found. I checked and validated permissions. No Dice! I checked the ownership chaining. Still no dice! What in the world is causing this failure?

What if I switch to a view instead of a synonym? I created a view with cross database ownership chains intact. Testing the application again still failed. What if I point the synonym at a table in the same database? Testing from the application suddenly brought success. Now the head-scratching gets a little more intense.

It is time to get serious. What exactly is the Dynamics AX POS application doing that leads to a failure that does not happen when we query directly from within Management Studio? The means to get serious is to now implement that awesome tool I alluded to previously – Extended Events (XE or XEvents).

With no clues being available from any of the usual sources (including application error messages), XE or profiler is about the only thing left to try and capture the root cause of this failure. Since this happens to be a SQL Server 2014 implementation (yeah I omitted that fact), the only real option in my opinion was to use XE. Truth be told, even on SQL Server 2008 R2, my go to tool is XE. In this case, here is what I configured to try and catch the problem:
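Here is a sketch of the sort of session that catches this kind of failure. The session name and file path are placeholders, and the error_reported event with these actions is representative rather than the exact production definition:

-- Trap reported errors along with enough context to tie them to the application
CREATE EVENT SESSION [TrapAXErrors] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (
        sqlserver.client_app_name,
        sqlserver.database_name,
        sqlserver.sql_text,
        sqlserver.username
    )
    WHERE (severity >= 16)  -- only errors severe enough to break the application
)
ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\TrapAXErrors.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = OFF);
GO
ALTER EVENT SESSION [TrapAXErrors] ON SERVER STATE = START;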

With the session running, I had the POS tests begin again. Bang! It failed again, but I expected it and wanted it to fail again. This time around, finding the problem turned out to be really easy. As soon as the error hit, I was able to check the trapped events and see what it was that had been missing and ultimately causing this string of failures.

[Screenshot: the error trapped by the XE session]

Using the GUI (yeah, a rare occasion for me with XE), I filtered the displayed events down to just those pertinent to the problem, making it easier to see what these tests had found. Here is the highlighted text a little larger and easier to see:

Snapshot isolation transaction failed accessing database ‘AxRetailDIM’ because snapshot isolation is not allowed in this database. Use ALTER DATABASE to allow snapshot isolation.

Wow! Light bulb shines bright and the clue finally clicks. The POS databases for this client are all set to allow snapshot isolation. Since this error is coming at the time when the failure occurs in the application, it stands to reason that this is the root cause. Time to test by changing the snapshot isolation setting.
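The change itself is a one-liner against the new database named in the error:

ALTER DATABASE AxRetailDIM SET ALLOW_SNAPSHOT_ISOLATION ON;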

That is a quick change and easy enough to test again. With the XE session still running, and the change in effect, it’s time to test via the POS application again. As expected, the application worked this time. This is good news! Time to test again and again and again to make sure it wasn’t a fluke and that it wasn’t only going to work the once.

Not a single failure after the change to allow snapshot isolation. One small change with such a big impact and so few clues to be found except in that super Awesome Super Hero feature of SQL Server called Extended Events!

Being able to quickly find the root cause of so much pain is why I enjoy working with the Extended Events feature. It is an efficient way to find a ton of information while causing little overhead to the server.

The bonus here is that XE allowed us to pinpoint a problem with the proposed solution to help save costs while extending a database beyond the 10GB limitation of SQL Express.

Note: I left some notes in the XE session script. These notes help to point out differences between implementing this particular session on SQL Server 2012 (or later) and SQL Server 2008 (or R2).

All about the Change

Comments: 1 Comment
Published on: January 12, 2016

The second Tuesday of January 2016 is now upon us and you know what that means. Well, I hope you know what that means.

It is time for TSQL Tuesday. It is now the 74th edition of this monthly blog party. This month the host is Robert Davis (blog | twitter) and he has asked us to “Be the change”. Whether the inspiration for this topic is the new year and resolutions, or Gandhi (you must be the change), or Caddyshack (be the ball), we will be discussing “Change.”

Specifically, Robert requested that we discuss data changes and anything relating to data changes. Well, I am going to take that “anything” literally and stretch the definition of changing data just a bit. It will all make sense by the end (I hope).

Ch-ch-changes

Changes happen on a constant basis within a database. Data will more than likely be changing. Yes, there are some exceptions to that, but the expectation that data is changing is not an unreal expectation.

Where that expectation becomes unwanted is when we start talking about the data that helps drive the configuration of the server. Ok, technically that is a setting or configuration option or a button, knob, whirlygig or thingamajig. Seldom do we really think about these settings as data. Think about it for a moment though. We can certainly derive some data about these changes (if these settings themselves are not actually data).

So, while you may call it settings changes, I will still be capturing data about the changes. Good? Good! Another term for this is auditing. And auditing applies to all levels including ETL processes and data changes etc. By that fortune, I just covered the topic again – tangentially.

How does one audit configuration changes? Well, there are a few different methods to do this. One could use a server side trace, SQL Audit, Extended Events, or (if somebody wants to) a custom solution not involving any of those, built on some variation of T-SQL and error log monitoring. The point is, there are options. I have discussed a few options for the custom solution path as well as, in a recently published article, the default trace path. Today I will dive into what it looks like via SQL Audit.

When creating an audit to figure out what changes are occurring within the instance, one would need to utilize the SERVER_OPERATION_GROUP audit action group (a minimal example follows the list below). This action group provides auditing of the following types of events:

  • Administer Bulk Operations
  • Alter Settings
  • Alter Resources
  • Authenticate
  • External Access
  • Alter Server State
  • Unsafe Assembly
  • Alter Connection
  • Alter Resource Governor
  • Use Any Workload Group
  • View Server State
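Here is a minimal sketch of an audit built on this action group; the audit name and file path are my own placeholders:

USE master;
GO
CREATE SERVER AUDIT ServerOps_Audit
    TO FILE (FILEPATH = N'C:\Database\Audit\', MAXSIZE = 100 MB, MAX_ROLLOVER_FILES = 10)
    WITH (ON_FAILURE = CONTINUE);
GO
CREATE SERVER AUDIT SPECIFICATION ServerOps_AuditSpec
    FOR SERVER AUDIT ServerOps_Audit
    ADD (SERVER_OPERATION_GROUP)
    WITH (STATE = ON);
GO
ALTER SERVER AUDIT ServerOps_Audit WITH (STATE = ON);
GO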

From this group of events, we can guess at the types of actions that might trigger one of these events to fire for the audit. Some of the possible actions would be:

  • Issue a bulk administration command: BULK INSERT TestDB.dbo.Test1 FROM 'c:\database\test1.txt';
  • Issue an alter connection command: KILL 66
  • Issue an alter resources command: CREATE RESOURCE POOL PrimaryServerPool WITH {}
  • Issue an alter server state command: DBCC FREEPROCCACHE
  • Issue an alter server settings command: run sp_configure followed by RECONFIGURE
  • Issue a view server state command: SELECT * FROM sys.dm_xe_session_targets
  • Issue an external access assembly command: CREATE ASSEMBLY SQLCLRTest FROM 'C:\MyDBApp\SQLCLRTest.dll' WITH PERMISSION_SET = EXTERNAL_ACCESS;
  • Issue an unsafe assembly command: CREATE ASSEMBLY SQLCLRTest FROM 'C:\MyDBApp\SQLCLRTest.dll' WITH PERMISSION_SET = UNSAFE;
  • Issue an alter resource governor command: ALTER RESOURCE GOVERNOR DISABLE
  • Authenticate: see “view server state” – the VSST class type occurs for authentication events
  • Use any workload group: see Resource Governor

This is quite a bit of interesting information. All of these events can be audited from the same audit group. The interesting ones of the bunch are those that indicate some sort of change has occurred – all but the “Authenticate”, “View Server State” and “Use Any Workload Group” events, though even those could be stretched to say something has changed as well.

With all of that in mind, I find the “alter server settings” event to be the most problematic. While it does truly capture that something changed, it does not completely reveal to me what was changed – just that a reconfigure occurred. If a server configuration has changed, I can capture the spid and that reconfigure statement – sure. Once that is captured, I now have to do something more to figure out which configuration was “reconfigured”. This is highly frustrating.

Here’s an example from the audit I created:

[Screenshot: audit results for the “alter server settings” event]

This is only a small snippet of the results. I can see who made the configuration change, the time, the spid, the source machine, etc. I just miss that nugget that tells me the exact change that was made. At least that is the case with changes made via sp_configure. There are fixes for that – as previously mentioned.

Here is another bit of a downside. If you have the default trace still running, a lot of this information will be trapped in that trace. Furthermore, some of the events may be duplicated via the object_altered event session (e.g. the resource governor events). What does this really mean? Extra tracing and a bit of extra overhead. It is something to consider. As for the extended events related events and how to do this sort of thing via XE, I will be exploring that further in a future post.

Suffice it to say that, while not a complete solution, the use of SQL Audit can be viable to track the changes that may be occurring within your SQL Server – from a settings point of view.

Auditing Needs Reporting

Comments: No Comments
Published on: October 13, 2015


Welcome to the second Tuesday of the month. And in the database world of SQL Server and the SQL Server community, that means it is time for TSQL2SDAY. This month the host is Sebastian Meine (blog / twitter), and he has requested that everybody contribute articles about auditing. Auditing doesn’t have to be just “another boring topic”; rather, it can be interesting and there is a lot to auditing.

For me, just like I did last month, I will be just doing a real quick entry. I have been more focused on my 60 Days of Extended Events series and was looking for something that ties into both the series and this month’s topic really well without necessarily being covered in the series. Since I have auditing scheduled for later in the series, I was hoping to find something that meets both the XE topic and the topic of auditing.

No matter the mechanism used to capture the data to fulfill the “investigation” phase of the audit, if the data is not analyzed and reports generated, then the audit did not happen. With that in mind, I settled on a quick intro on how to get at the audit data in order to generate reports.

Reporting

An audit can cover just about any concept, phase, action within a database. If you want to monitor and track performance and decide to store various performance metrics, that is an audit for all intents and purposes. If you are more interested in tracking the access patterns and sources of the sa login, the trapping and storing of that data would also be an audit. The data is different between the two, but the base concept boils down to the same thing. Data concerning the operations or interactions within the system is being trapped and recorded somewhere.

That said, it would be an incomplete audit if all that is done is to trap the data. If the data is never reviewed, how can one be certain the requirements are being met for that particular data trapping exercise? In other words, unless the data is analyzed and some sort of report is generated from the exercise, it is pretty fruitless and just a waste of resources.

There is a plenitude of means to capture data to create an audit. Some of those means were mentioned on Sebastian’s invite to the blog party. I want to focus on just two of those means because of how closely they are related – SQL Server Audits and Extended Events. And as I previously stated, I really only want to get into the how behind getting to the audit data. Once the data is able to be retrieved, then generating a report is only bound by the imagination of the intended consumer of the report.

SQL Server Audits

The Audit feature within SQL Server was introduced at the same time as Extended Events (with SQL Server 2008). In addition to being released at the same time, some of the audit metadata is recorded alongside the XEvents metadata. Even some of the terminology is the same. Looking deep down into it, one can even find all of the targets for Audits listed within the XEvents objects.

Speaking of Targets, looking at the documentation for audits, one will see this about the Targets:

The results of an audit are sent to a target, which can be a file, the Windows Security event log, or the Windows Application event log. Logs must be reviewed and archived periodically to make sure that the target has sufficient space to write additional records.

That doesn’t look terribly different from what we have seen with XEvents thus far. Well, except for the addition of the Security and Application Event Logs. But the Target concept is well within reason and what we have become accustomed to seeing.

If the audit data is being written out to one of the event logs, it would be reasonable to expect that one knows how to find and read them. The focus today will be strictly on the file target, with some very basic examples.

I happen to have an Audit running on my SQL Server instance currently. I am not going to dive into how to create the audit. Suffice it to say the audit name in this case is “TSQLTuesday_Audit”. This audit is being written out to a file with rollover. In order for me to access the data in the audit file(s), I need to employ the use of a function (which is strikingly similar to the function used to read XE file targets) called fn_get_audit_file. The name is very simple and task oriented – making it pretty easy to remember.

Using the audit I mentioned and this function, I would get a query such as the following to read that data. Oh, and the audit in question is set to track the LOGIN_CHANGE_PASSWORD_GROUP event.
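A basic form of that query, with the file path assumed from the audit’s FILEPATH setting, would be:

SELECT f.event_time,
       f.action_id,
       f.session_server_principal_name,
       f.server_instance_name,
       f.statement
FROM sys.fn_get_audit_file(N'C:\Database\Audit\TSQLTuesday_Audit*.sqlaudit', DEFAULT, DEFAULT) AS f;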

There are some tweaks that can be made to this, but I will defer to the 60 day XE series where I cover some of the tweaks that could/should be made to the basic form of the query when reading event files / audit files.

XE Audits

Well, truth be told, this one is a bit of trickery. Just as I mentioned in the preceding paragraph, I am going to defer to the 60 day series. In that series I cover in detail how to read the data from the XE file target. Suffice it to say, the method for reading the XE file target is very similar to the one just shown for reading an Audit file. In the case of XEvents, the function name is sys.fn_xe_file_target_read_file.
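For comparison, a bare-bones read of an XE file target looks like this (2012+ syntax; on 2008/R2 the metadata file arguments must also be supplied):

SELECT CAST(t.event_data AS xml) AS event_data_xml
FROM sys.fn_xe_file_target_read_file(N'system_health*.xel', NULL, NULL, NULL) AS t;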

Capturing data to track performance, access patterns, policy adherence, or other processes is insufficient for an audit by itself. No audit is complete unless data analysis and reporting is attached to the audit. In this article, I introduced how to get to this data which will lead you down the path to creating fantastic reports.

One Easy Strategy for the Database Enterprise

Comments: 1 Comment
Published on: September 8, 2015


Welcome to the second Tuesday of the month. And in the database world of SQL Server and the SQL Server community, that means it is time for TSQL2SDAY. This month the host is Jen McCown (blog / twitter), half of the MidnightDBA team, and the topic that she wants us to write about is: “Strategies for managing an enterprise”. Specifically, she wants to know “How do you manage an enterprise? Grand strategies? Tips and tricks? Techno hacks? Do tell.”

For me, this month, I will be just doing a real quick entry. I have been more focused on my 60 Days of Extended Events series and was looking for something that ties into the series really well, won’t necessarily be covered in the series, and might work well as an “Enterprise” worthy topic.

So, what I decided to land on was the system_health session.

Enterprise

Wait, isn’t the system_health session one of those things that is configured per Instance?

Yes it is!

The system_health session is a default Extended Events session that is running by default on every instance of SQL Server (keyword is default) since SQL Server 2008. Whether you want it to be running or not is an entirely different conversation. But by default it is running.

There is a small problem with that default though. That problem is in the 2008 and 2008 R2 flavors of SQL Server, where the session only dumps the events to the ring buffer. And if you are only dumping the events to the ring buffer, you can imagine this is not entirely that useful. Why? Well, the ring buffer is just a memory target and is considerably more volatile than writing the event session data out to a file. One need not try terribly hard to see why this can be frustrating (unless of course you didn’t even know it was there).

So what to do to help push this in a more enterprise friendly direction? The answer is to add a file target like was done in the 2012 (and up) flavors of SQL Server. The full system_health session definition can be found in u_tables.sql (the backup script of the session deployed to the install directory).
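Rather than reproduce the whole session here, this is a minimal sketch of the relevant change – adding a file target to the existing session, using the 2008/R2 target and option names:

-- 2008 / 2008 R2: the file target is package0.asynchronous_file_target and
-- requires a metadata file. On 2012 and later, use package0.event_file instead.
ALTER EVENT SESSION [system_health] ON SERVER
ADD TARGET package0.asynchronous_file_target (
    SET filename      = N'system_health.xel',
        metadatafile  = N'system_health.xem',
        max_file_size = 5  -- MB per file before rolling over
);
GO
-- The session may need a stop/start around the change; either way, make sure
-- it ends up running again (see the note at the end of this post).
ALTER EVENT SESSION [system_health] ON SERVER STATE = START;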

Now, with all of the session data going out to disk, you can also schedule a scraper to copy the files to a central log folder on the network. Unfortunately, placing the files directly on a UNC share (via mapped drive or via UNC naming) does not work in 2008 or R2. I have a few more configurations to run on that still, but it doesn’t look good.

At least by dumping the session data to an event file, you are closer to an enterprise worthy solution. Just remember to do it!

One last thing. After you alter the system_health session, make sure you start it again.

 

Compressing Encrypted Backups

A common requirement, whether it be based out of pure want or truly out of necessity, is to make a large, encrypted database backup file much smaller.

This was a knock against the early days of Transparent Data Encryption (circa SQL Server 2012). If TDE was enabled, then a compressed backup (though compression was available) was not really an option. Not only did compression in the 2012 implementation of TDE fail to make the database backup smaller, it occasionally caused it to be larger.

This was a problem.  And it is still a problem if you are still on SQL 2012. Having potentially seen this problem, amongst many others, Ken Wilson (blog | twitter) decided to ask us to talk about some of these things as a part of the TSQL Tuesday Blog party. Read all about that invite here.

Encrypted and Compressed

Well, thankfully Microsoft saw the shortcoming as well. With SQL Server 2014, MS released some pretty cool changes to help us encrypt and compress our database backups at rest.

Now, instead of a database backup that could potentially get larger due to encryption and compression combined, we have a significant hope of reducing the encrypted backup footprint to something much smaller. Here is a quick example using the AdventureWorks2014 database.

In this little exercise, I will perform three backups. But before I can even get to those, I need to ensure I have a Master Key set and a certificate created. The encrypted backups will require the use of that certificate.

Do this in a sandbox environment please. Do not do this on a production server.
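Here is a sketch of that prep work; the password and certificate name are placeholders:

USE master;
GO
-- The master key must exist before the certificate can be created
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'Use@StrongPasswordHere1!';
GO
-- Certificate that will protect the encrypted backups
CREATE CERTIFICATE BackupTestCert
    WITH SUBJECT = N'TSQL Tuesday backup encryption demo';
GO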

In the first backup, I will attempt to backup the AW database using both encryption and compression. Once that is finished, then a backup that utilizes the encryption feature only will be done. And the last backup will be a compressed only backup. The three backups should show the space savings and encryption settings of the backup if all goes well. The compressed and encrypted backup should also show an equivalent savings as the compression only backup.
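A sketch of those three backups, with file paths assumed:

-- 1: compression + encryption
BACKUP DATABASE AdventureWorks2014
    TO DISK = N'C:\Database\Backup\AW2014_CompEncr.bak'
    WITH COMPRESSION,
         ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupTestCert);

-- 2: encryption only
BACKUP DATABASE AdventureWorks2014
    TO DISK = N'C:\Database\Backup\AW2014_EncrOnly.bak'
    WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupTestCert);

-- 3: compression only
BACKUP DATABASE AdventureWorks2014
    TO DISK = N'C:\Database\Backup\AW2014_CompOnly.bak'
    WITH COMPRESSION;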

With that script executed, I can query the backup information in the msdb database to take a peek at what happened.
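A query along these lines will do, with aliases chosen to match the results below:

SELECT bs.database_name,
       bs.backup_start_date,
       bs.backup_size / 1048576.0            AS BackSizeMB,
       bs.compressed_backup_size / 1048576.0 AS CompBackSizeMB,
       bs.key_algorithm,
       bs.encryptor_type  -- encryption columns were added to backupset in SQL 2014
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'AdventureWorks2014'
ORDER BY bs.backup_start_date DESC;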

This should produce results similar to the following:

[Screenshot: backup size and encryption results from msdb]

Looking at the results, I can see that the compression only backup and the compression with encryption backup show very similar space savings. The compression only dropped to 45.50MB and the Compression with encryption dropped to 45.53MB. Then the encryption only backup showed that, interestingly, the CompBackSizeMB (compressed_backup_size) got larger (which is the actual size on disk of this particular backup).

At any rate, the compression now works with an encrypted backup and your backup footprint can be smaller while the data is protected at rest. Just don’t go using the same certificate and password for all of your encrypted backups. That would be like putting all of your eggs in one basket.

With the space savings available in 2014, and if you are using SQL 2014, why not use encrypted backups?

SQL Server and Defaults

What is that default setting?

SQL Server does a fantastic job of having numerous settings at the server level and at the database level predefined for you. Additionally, the OS has a bunch of settings that are predefined. These are notoriously the default settings. Or as some would say: “Set it and Forget it Settings.” Sadly, the set it part never really happens – except during the install.

Today is the second Tuesday of the month, and that means it is TSQL Tuesday. As I mentioned in a previous article, this month the topic is hosted by Andy Yun (blog | twitter). Andy has chosen to have everybody talk about default settings in SQL Server. You can read everything Andy has said in his invite for the month.


Defaults Defaults Defaults

I could ramble on about database settings that get changed, but I have already addressed the changing of database settings – here. While I did address the changing of the settings, I did not address that it is an awesome thing to use to find out who might be changing those default settings (at the database level) that you have already set and optimized for the instance.

Or I could go on about how the OS has various settings that are far less than optimal for SQL Server. A good example would be that tree hugger setting that I talked about last month. But rather than do that, I will advise that you just read that article – here.

Or we can belabor the point about a fantastic setting that equates to NOLOCK. But, that isn’t a default setting and one would need a pretty good reason to change the default setting to READ UNCOMMITTED. So, just read a bit about the coolness you can see in the execution plans when the NOLOCK directive is used – here.

There are just so many wonderful default settings within SQL Server (or within the Windows OS) that could be discussed or that have already been discussed.

About all of those settings, I will say this. Database Settings are not “One Size Fits ALL.” But that is what a default setting is trying to do. If you find default settings within a database, then you should probably evaluate the setting and make sure it is set appropriately.

What default to discuss then?

Rather than talk about a plethora of settings that could / should be changed, I want to speak about one I doubt most would consider. This is the default setting that can only be adjusted from between the ears!

This is a default behavior I have noticed with many DBAs. I have heard various names for this behavior and most are probably quite accurate. The default setting that I think should be changed is one that will lead to a longer more fulfilling career. Let me see if I can describe the behavior well enough.

Often times, companies will purchase monitoring software to help alert to problems within the environment. This is a good thing. A DBA can’t be available 24x7x365. So the software should be configured and tuned to each database to allow for different tolerance/alert thresholds. This is a good thing too – if it is done. When done properly, the DBA can get a good night’s rest as well as feel confident the environment is running properly.

If the software is not properly tuned and configured, the DBA will probably rectify that sooner rather than later. This is not the big behavior to change.

What I do see all too often is a complete reliance on the software that is monitoring – to a fault. I have seen hundreds of DBAs just sit and watch the pretty little dials and gauges on the screen of the software and only react when one of the items turns red. A career surely can’t be built off of this kind of behavior.

Rather than sit there and wait for something to fail, why not proactively check the servers? Why not try to build a script repository that will do everything (and more) that the monitoring software can do? While building that repository, think of the skills that will be gained and the knowledge that can be retained across jobs! In addition, I have been to some places where a script repository was able to replace the purchased software and saved hundreds of thousands of dollars per year in software maintenance costs.

One of my favorite statements is that a “Senior DBA should be able to script his own solutions!” Being able to create monitoring scripts to replace that canned app will certainly get you to that next level. It will also get you out of that default behavior of complete reliance on the canned software and imminent career stagnation.

Learn a little and grow your career.

Oh, and there are some great monitoring tools out there. They can provide a great asset to a company – if and when used properly.


Extended Events, Birkenstocks and SQL Server


I bring you yet another installment in the monthly meme called T-SQL Tuesday.  This is the 67th edition, and this time we have been given the opportunity to talk about something I really enjoy – Extended Events.

Props to Jes Borland (blog | twitter) for picking such an awesome topic. There is so much to cover with extended events, it’s like a green canvas ready for anything you can imagine.

Birkenstocks?

[Image: Birkenstocks]

I will save the explanation here for later when hopefully it all ties together for you (well, at least buckles up).

[Image: tree hugger]

While that is all fun and playful, let’s get down to the serious side now. One of my favorite quick fixes as a consultant is to come in and find that the server is set to “environment friendly” / “green” / “treehugger” mode. You can read more about power saving cpus from my friend Wayne Sheffield here.

That poor old cpu thing has been beat up pretty good. But how can we tell if the server is running in that mode if the only thing we can do is look in SQL Server (can’t install cpu-z, or don’t have adequate permissions on the server to see windows settings – just play along)? Lucky for us there is this cool thing called Extended Events.

In SQL Server we have this cool event called perfobject_processor. This particular event has some really cool metrics that it captures.  One such metric is the frequency. The frequency is an indicator to us whether the server has the cpu set to balanced, high performance, or power saver. Having that in mind, let’s create a session to trap this data and experiment a little with the cpu settings.
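A sketch of such a session follows. The session name and file path are mine, and the collection flag names are assumptions based on the event’s customizable columns (confirm against sys.dm_xe_object_columns):

CREATE EVENT SESSION [CPUFrequency] ON SERVER
ADD EVENT sqlserver.perfobject_processor (
    SET collect_numa_node_id = 1,  -- assumed flag names; verify on your build
        collect_cpu_id = 1
)
ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\CPUFrequency.xel')
WITH (STARTUP_STATE = OFF);
GO
ALTER EVENT SESSION [CPUFrequency] ON SERVER STATE = START;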

Well, that looks amazingly easy and straightforward. I am telling the session to trap the additional CPU information such as numa_node_id and cpu_id. You can eliminate those if you wish. They may be beneficial when trying to identify if there is an issue on a specific processor though.

To experiment, I will break out the age old argument provoker – xp_cmdshell. I will use that to cycle through each of the power saving settings and look at the results. Here is the bulk of the script all together.
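This is a sketch of the idea, assuming xp_cmdshell has been enabled; the GUIDs are the standard Windows power scheme identifiers:

-- Enable in a sandbox only:
-- EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
-- EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;

EXEC xp_cmdshell 'powercfg /setactive 381b4222-f694-41f0-9685-ff5bb260df2e'; -- Balanced
WAITFOR DELAY '00:00:30';  -- give the event time to fire under this scheme

EXEC xp_cmdshell 'powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c'; -- High performance
WAITFOR DELAY '00:00:30';

EXEC xp_cmdshell 'powercfg /setactive a1841308-3541-4fab-bc81-f71556f20b4a'; -- Power saver
WAITFOR DELAY '00:00:30';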

And now for the XE Parser.
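This is a sketch of a parser for the file target above; the percent_of_maximum_frequency field name matches the results discussed below:

SELECT x.event_data_xml.value('(event/@timestamp)[1]', 'datetime2(3)') AS event_time,
       x.event_data_xml.value('(event/data[@name="percent_of_maximum_frequency"]/value)[1]', 'int') AS pct_of_max_frequency
FROM (
    SELECT CAST(event_data AS xml) AS event_data_xml
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\CPUFrequency*.xel', NULL, NULL, NULL)
) AS x;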

If I parse through the extended event after each change of the power scheme, I would be able to see the effect of each scheme change in the event session as well as in a tool such as Resource Monitor. Here is what I was able to see with each of the changes.

Balanced Saver

From Resource Monitor:

[Screenshot: Resource Monitor CPU graph, Balanced scheme]

And the XE data:

[Screenshot: XE session data, Balanced scheme]

This is my default power scheme. On my laptop, this is ok. For a production SQL server, this will cause problems.

High Performance

[Screenshots: Resource Monitor CPU graph and XE session data, High Performance scheme]

Quickly, you should be able to spot that the blue line in the graph and the numeric values from the XE session correlate to the processor giving you everything it has. This is good for SQL Server.

Power Saver

[Screenshot: Resource Monitor CPU graph, Power Saver scheme]

See how that blue line falls off sharply?

 

[Screenshot: XE session data, Power Saver scheme]

 

Supporting that steep fall in the graph, we can see that the XE trapped the percent of max frequency as 36%. You might be lucky and attain 36%. Don’t be surprised if you see something even lower. Please don’t use this setting on a production box – unless you want to go bald.

We can see that we have great tools via Extended Events to help troubleshoot various problems. As I said, this is one of my favorites because it is a very common problem and a very easy fix.

SQL Server is not GREEN! Do not put birkenstocks on the server and try to turn the server into a tree hugger. It just won’t work out that well. Set your file servers or your print servers to be more power conscious, but this is something that will not work well on SQL Server.

Final thought. If you have not figured out the birkenstocks, well, it is a common stereotype in some areas that environmentalists wear woolly socks and birkenstocks.

No wool socks were harmed in the making of this blog post!

Monitoring SQL Server

Welcome to the fabulous world of blog parties, SQL Server and what has been the longest running SQL Server related meme in the blogosphere – TSQLTuesday.

This month we are hosted by Catherine Wilhelmsen (blog | twitter) from Norway. And interestingly, Catherine has asked for us to talk about monitoring SQL Server.  Wow! Talk about a HUGE topic to cover in such a short space. Well, let’s give it a go.

I am going to try and take this in a bit of a different direction, and we shall see if I have any success with it or not.

Direction the First

Monitoring is a pretty important piece of the database puzzle. Why? Well, because you want to try and find out before the end-users that something is happening. Or do you? It is a well established practice at many shops to allow the end-users to be the monitoring solution. How does this work, you ask?

It works by waiting for an end-user to experience an error or some unexpected slowness. Then the user will either call you (the DBA), your manager, the company CEO, or (if you are lucky) the helpdesk. Then, the user will impatiently wait for you to try and figure out what the problem is.

The pros to this solution involve a much lower cost of implementation.  The cons, well we won’t talk about that because I am trying to sell you on this idea. No, in all seriousness, the con to this approach could involve a lot of dissatisfaction, job loss, outages, delays in processing, delays in paychecks, dizziness, fainting, shortness of breath, brain tumors, and rectal bleeding.  Oh wait, those last few are more closely related to trial medications for <insert ailment here>.

If you are inclined to pursue this type of monitoring – may all the hope, prayers, faith and luck be on your side that problems do not occur.

New Direction

This methodology is also rather cheap to implement.  The risk is relatively high as well, and I have indeed seen this implemented. In this new approach, we will require that the DBA eyeball monitor the databases all day and all night.

At the DBA’s disposal is whatever is currently available in SQL Server to perform the monitoring.  It is preferred that only Activity Monitor and Profiler be used to perform these duties. However, the use of sp_who2 and the DMVs is acceptable for this type of duty.

The upside to this is that you do not incur any additional cost for monitoring over what has been allocated for the salary of the DBA. This is an easy and quick implementation and requires little knowledge transfer or ability.

The downside here is – well – look at the problems from the last section and then add the glassed over stoner look of the 80s from staring at the monitor all day.

If you have not had the opportunity to use this type of monitoring – consider how lucky you are.  This has been mandated by several companies (yes I have witnessed that mandate).

Pick your Poison

Now we come to a multi-forked path.  Every path at this level leads to a different tool set.  All of these tools bear different costs and require different levels of knowledge.

The pro here is that these come with lower risk to those suspicious symptoms from the previous two options. The con is that it will require a little bit more grey matter to configure and implement.

You can do anything you would like at this level so long as it involves automation.  You should configure alerts, you should establish baselines, you should establish some level of history for what has been monitored and discovered. My recommendation here is to know your data and your environment and then to create scripts to cover your bases.
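As one small example of a repository-style script, here is a sketch that snapshots wait stats into a history table for baselining; the table name is my own placeholder:

CREATE TABLE dbo.WaitStatsHistory (
    capture_time        datetime2(0)  NOT NULL DEFAULT (SYSDATETIME()),
    wait_type           nvarchar(60)  NOT NULL,
    waiting_tasks_count bigint        NOT NULL,
    wait_time_ms        bigint        NOT NULL,
    signal_wait_time_ms bigint        NOT NULL
);
GO
-- Run on a schedule (e.g. an Agent job every 15 minutes) and report on the deltas
INSERT INTO dbo.WaitStatsHistory (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0;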

One last thought, no matter what solution you decide to implement, you should also monitor the monitor. If the DBA collapses from long hours of eyeball monitoring, who will be there to pick him/her up to resume the monitoring?

If you opt to not implement any of these options, or if you opt to implement either of the first two options, I hope you have dusted off your resume!

Extended Events and Data Types

Comments: No Comments
Published on: April 14, 2015


Today is another one of those installments in the long-running TSQL Party held monthly (a.k.a TSQL2SDAY).

This month we have an open invitation from Mike Donnelly (blog | twitter), asking us to talk about something new we have learned and then to teach about it. You can read the invitation in Mike’s own words on his blog.

Coincidentally, the topic is both pretty straightforward and easy on the one hand, and somewhat difficult on the other.  Mike said: “The topic this month is straight forward, but very open ended. You must learn something new and then write a blog post explaining it.” I find the topic to be difficult because I will usually blog about a topic when I have learned something new. On the other hand, sharing new stuff is pretty straightforward and enjoyable.  Enter the brain split!

So, what have I learned recently?

Quite a bit.  But what would I really like to share on this occasion?

For today, I would like to share more information about extended events.  XEvents are great.  There is a wealth of information to be garnered from XEvents.  Furthermore, XEvents provide a great opportunity to keep learning.

While researching some events to help troubleshoot a specific issue, it dawned on me that there was some info that I had only looked at when I was looking at specific events.  I started wondering how much of that info was out there.  So here I will share some of this information that is available to you via queries within SQL Server.  Much of this info is attainable through the re-purposing of some scripts I shared previously – here.

Custom Data Types

This wasn’t too much of a surprise because I had seen them throughout and taken advantage of the custom data types to get better information.  But I might consider these custom data types to be more of the EAV model coming through than custom data types.  One can expose the custom data types through an evaluation of data in the map_values DMV.  Let’s take a look at a script that would lay the groundwork to see these data types.
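A sketch of such a groundwork query, joining the event columns to the map values by type name and package:

-- Which event columns use a mapped ("custom") data type, and what values does that type expose?
SELECT oc.object_name AS event_name,
       oc.name        AS column_name,
       oc.type_name   AS data_type,
       mv.map_key,
       mv.map_value
FROM sys.dm_xe_object_columns AS oc
INNER JOIN sys.dm_xe_map_values AS mv
        ON mv.name = oc.type_name
       AND mv.object_package_guid = oc.type_package_guid
ORDER BY oc.object_name, oc.name, mv.map_key;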

Evaluating this data, one will see that in addition to the “standard” datatypes such as integer, there will be a “wait_types” data type.  This data type will map to all of the wait types available through extended events.  Additionally, the event that is associated to each of these custom data types is exposed through this query.  When getting ready to use an extended event, knowing the kinds of data that will be exposed through a data point in the session will make the session data more usable.  Knowing there is a custom data type (yes, it is really just a key value pair), can be extremely helpful.

Collection Flags

Many of the available events have “customizable” collection flags exposed.  Understanding that these collection flags can be on or off is essential to saving some hair.  Not all data is automatically collected for all events.  Some prime examples of such events that do not automatically collect certain pieces of data are sp_statement_completed and object_created.  The nice thing about these flags is that they have a value of “customizable” in the column_type field.  Another good thing with these flags is that the description field gives a little documentation on what the behavior should be for the “on” and “off” states.
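Finding those flags is a simple query against the same DMV:

-- Customizable collection flags, their defaults, and the documented on/off behavior
SELECT oc.object_name  AS event_name,
       oc.name         AS flag_name,
       oc.column_value AS default_value,
       oc.description
FROM sys.dm_xe_object_columns AS oc
WHERE oc.column_type = 'customizable'
ORDER BY oc.object_name, oc.name;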

There is a good reason that some of those may be off by default.  The addition of this information may cause an additional load or may be information overload.  It is up to the consumer to determine if the data is going to be of significant importance.  Once determined, enable or disable the flag as appropriate.

These queries provide a good amount of information about the extent of custom data types as well as the collection flags that may be available to use when creating event sessions in SQL Server.  Understanding that this data and these options are there is important to capturing better event info.

Security as a Fleeting Thought

Comments: 6 Comments
Published on: February 10, 2015

Today we have another installment in what is known as TSQL Tuesday.  This month we have an invitation and topic given to us by the infamous Kenneth Fisher (blog | twitter).

Today, the invitation is for us to share our stories on how we like to manage security.  Or at least that is the request that was made by Kenneth.  I am going to take a bit of a twist on that request.  Instead of sharing how I like to manage security, I am going to share some interesting stories on how I have seen security managed.

Let’s just call this a short series on various case studies in how to manage your security in a very peculiar way.  Or as the blog title suggests, how to manage your security as an afterthought.

Case Study #1

We have all dealt with the vendor that insists the user account to be used for their database and application be one of two things.  Either it needs to be sa or it needs to be a member of the sysadmin fixed server role.  The ensuing discussion with those vendors is always a gem.  They insist the application will break, you as the diligent DBA prove otherwise, and then the senior manager sponsoring the application comes around with a mandate that you must provide the access the vendor is requesting.

Those are particularly fun times.  Sometimes, there is a mutual agreement in the middle on what security can be used and sometimes the DBA just loses.

But what about when it is not a vendor application that mandates such relaxed security for their application and database?  What if it happens to be the development group?  What if it happens to be a developer driven shop and you are the consultant coming in to help get things in order?

I have had the distinct pleasure of working in all of those scenarios.  My favorite was a client that hosted ~700 clients, each with their own database.  There were several thousand connections coming into the server and every single connection was coming in as ‘sa’.  Yes, that is correct.  There were no user logins other than the domain admins group on the server – which was also added to the sysadmin security role.  That is always a fun discussion to start and finish.  The look of color disappearing from the clients’ eyes as they realize the severity of the problem.

Please do not attempt this stunt at home.

Case Study #2

In a similar vein, another one that I have seen far too often is the desire to grant users dbo access within a database.  While this is less heinous than granting everybody sysadmin access – it is only a tad better.  Think about it in this way – does Joe from financing really need to be able to create and drop tables within the accounting database?  Does Marie from human resources need to be able to create or drop stored procedures from the HR database?  The answer to both should be ‘NO’.

In another environment, I was given the opportunity to perform a security audit.  Upon looking over things, it became very clear what the security was.  Somebody felt it necessary to add [Domain Users] to the dbo role on every database.  Yes, you read that correctly.  In addition to that, the same [Domain Users] group was added to the sysadmin server fixed security role.  HOLY COW!

In this particular case, they were constantly trying to figure out why permissions and objects were changing for all sorts of things within the database environment.  The answer was easy.  The fix is also easy – but not terribly easy to accept.

Please do not attempt this stunt at home.

Case Study #3

I have encountered vendor after vendor that has always insisted that they MUST have local admin and sysadmin rights on the box and instance (respectively).  For many this is a grey area because of the contracts derived between the client and the vendor.

For me, I have to ask why they need that level of access.  Does the vendor really need to be able to backup your databases and investigate system performance on your server?  Does that vendor need, or are they even engaged, to troubleshoot your system as a whole?  Or, do they just randomly sign in and apply application updates without your knowledge or perform other “routine” tasks unknown to you?

I have seen vendors change permissions and add back door accounts far too often.  They seldom if ever are capable of providing the level of support necessary when you are stuck with deadlocks by the second or blocking chains that tie up the entire server.  In addition, they are generally unavailable for immediate support when a production halting issue arises in their application – or at least not for a few hours.

This is specifically in regards to application vendors.  They are not your sysadmin and they are not your DBA.  If they must have RDP access or access to the database – put it under tight control.  Disable the account until they request access.  Then a request can be made and a note documented about why the access is needed.  Then the account can be enabled, monitored and disabled after a specified amount of time.

Please do not attempt this stunt at home.

This also changes when that vendor happens to be providing you IT functionality and is not specifically tied to an application.  Those relationships are a bit different and do require a little more trust to the person who is acting on your behalf as your IT staff.

Conclusion

I have shared three very dangerous stunts that are sometimes portrayed to be done by professionals.  Do not try this in your environment or at home.  It is dangerous to treat security with so little concern.  Security is not some stunt, and should be treated with a little more care and attention.

If you find yourself in any of these situations, an audit is your friend.  Create some audit process within SQL Server or on the Local server to track changes and accesses.  Find out what is going on and be prepared to act while you build your case and a plan for implementing tighter security.
