It’s an Honor: Microsoft MVP

Published on: July 1, 2019

Today (July 1st) is the notification day for all Microsoft MVPs. Microsoft changed the process a couple of cycles back so that all notifications go out on the same day, making it a single emotional day for every candidate. I received my notification early this morning that I was awarded the MVP again for the 2019-2020 cycle. This is my 6th award overall.

I am humbled to be among so many great professionals and, at the same time, honored to have been chosen. Are you curious about this MVP thing? Well, it’s sort of like being named MVP for a sports league, while at the same time a little bit different. Here is what Microsoft has to say about the MVP. I personally think you may get more out of reading about the MVP here (a little bias might be involved).

This is a community award for your efforts to promote and help others work with Microsoft products. I was awarded for my efforts in the Data Platform. Many others receive the award for their efforts in AI, PowerShell, or Azure (as examples). Generally, the people who are awarded are seen as being passionate about the Microsoft product. These people spend a lot of time, effort, energy, and sometimes money to share their enthusiasm for the product.

Microsoft is doing a fantastic job of recognizing people for their efforts in promoting the MS brand. Sadly, not everybody that promotes the brand gets to be an MVP. If you didn’t make it this time, keep trying. I just advise that the goal be more about the community than the award. The MVP award is great, but giving to the community is the real reward.

I want to congratulate all those awarded a Microsoft MVP award – past, present, and future. I also want to thank all of those who have given to the community, regardless of whether an award was received. There are many of you out there who have been a positive influence, whether you know it or not, just for giving to the community and helping somebody else. THANK YOU!

Detail Job History – Back to Basics

Recently, I covered the need to understand job failure frequency and to know the value of your SQL Agent jobs. You can read the specifics in the article – here.

Sometimes, just knowing the frequency of the job failure is good enough. Sometimes, more information is helpful. Having more information is particularly helpful when meeting with the business to discuss the validity of the job in question.

What do you do in times like this? The most basic answer to that question is – get more data. But that barely scratches the surface. The real question being asked there is how do you go about gathering that data?

There are two methods to gather the data – the hard way and the easy way. Do you like to work hard? Or would you rather work more efficiently?

Efficiency Matters

As was discussed in the previous article, I prefer to do things just a little bit less manually where possible. The consistency of a script matters, but a script is also just so much faster than doing things the hard, manual, iterative way. So let’s build a little bit on the script from the previous article.
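
A sketch of what that evolved script might look like follows. The column choices here are my own and the query assumes the msdb job history has not been purged; it may not match the original script exactly.

USE msdb;
GO

-- Failure frequency per job step, including the step command and subsystem.
SELECT j.name AS JobName
    , js.step_id
    , js.step_name
    , js.subsystem
    , js.command
    , CASE jh.run_status
        WHEN 0 THEN 'Failed'
        WHEN 1 THEN 'Succeeded'
        WHEN 2 THEN 'Retry'
        WHEN 3 THEN 'Canceled'
        ELSE 'Unknown'
      END AS RunStatus
    , COUNT(*) AS ExecutionCount
FROM dbo.sysjobs j
    INNER JOIN dbo.sysjobhistory jh
        ON j.job_id = jh.job_id
    INNER JOIN dbo.sysjobsteps js
        ON jh.job_id = js.job_id
        AND jh.step_id = js.step_id
WHERE jh.step_id <> 0 -- step 0 is the overall job outcome row; we want the individual steps
GROUP BY j.name, js.step_id, js.step_name, js.subsystem, js.command, jh.run_status
ORDER BY j.name, js.step_id;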

And here is a sample of the output.

With this script, I can quickly show which step is failing, the command for that step, what kind of process runs in that step, any password embedded in the command (as can happen with an SSIS package), and of course the failure frequency. This is golden information at the fingertips. There is no need to click through the GUI to gather this information. You can get it quickly and easily in one fell swoop.

The Wrap

An important part of any DBA’s job is to ensure database related jobs are running prim and proper. Sometimes that just doesn’t happen. When jobs are being overlooked, it is useful to be able to gather data about the consistency of job success or failure. This script will help you in your investigation efforts. In addition, I also recommend this article for your agent job audit efforts.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Database File Changes

Data professionals around the globe frequently find themselves occupied with figuring out why and when a file (data or log) for a database changed in size – whether that change was a growth or a shrink, and whether or not it was expected.

I have written previously about the need to monitor these changes and how to do it more efficiently (there is even a fail-safe that occasionally works). As SQL Server improves, so does our ability to capture these types of events.

Given that SQL Server has undergone many enhancements, let’s take a look at the enhancements related to capturing the database file size changes.

Database File Related Events

If you read the articles mentioned previously, you will have encountered a previous article that includes an Extended Events session for tracking file changes. That session is really good enough in most cases. One major change that I would suggest off the top is the asynchronous file target. While that target works on all editions of SQL Server since 2008, it was renamed to event_file in SQL Server 2012. No big deal there. The XEM file is also no longer necessary, so that piece can just be wiped out.

That said, what else has changed? Let’s cover some deprecated events that may frustrate you if you encounter them. The following events should be avoided because they do nothing (a couple of them were covered in this previous article).

  • sqlserver.databases_log_growth – Databases log growth
  • sqlserver.databases_log_file_size_changed – Databases log file size changed
  • sqlserver.databases_data_file_size_changed – Databases data file size change
  • sqlserver.auto_shrink_log – Auto shrink log ended.

Each of the preceding events has been replaced by a single event called database_file_size_change.

And then there is this one, which I can’t seem to get to generate any events but which might possibly still be valid. In short, don’t be surprised one way or the other if it does something.

  • sqlserver.auto_shrink_log_caught_exception – Auto shrink log caught an exception.

Great, we have some events we can avoid. Now let’s look at some events we should consider.

  • sqlserver.database_file_size_change – Occurs when any of the data or log files for a database changes size. Use this event to monitor changes to the size of database files.
  • sqlserver.databases_shrink_data_movement – Databases shrink data movement
  • sqlserver.databases_log_file_used_size_changed – Databases log file used size changed (this one gets noisy – use judiciously).
  • sqlserver.databases_log_shrink – Databases log shrink

Now that we have some events that are viable as well as some events to avoid, let’s put a session together.
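
Here is a sketch of such a session. The session name and output path are placeholders of my own choosing; adjust them for your environment.

-- Drop the session if it already exists (the name is a placeholder).
IF EXISTS (SELECT 1 FROM sys.server_event_sessions WHERE name = N'DBFileSizeChanges')
    DROP EVENT SESSION DBFileSizeChanges ON SERVER;
GO

CREATE EVENT SESSION DBFileSizeChanges
ON SERVER
    ADD EVENT sqlserver.database_file_size_change
    (
        ACTION (sqlserver.database_name, sqlserver.sql_text, sqlserver.session_id)
    ),
    ADD EVENT sqlserver.databases_shrink_data_movement
    (
        ACTION (sqlserver.database_name, sqlserver.sql_text)
    ),
    ADD EVENT sqlserver.databases_log_shrink
    (
        ACTION (sqlserver.database_name, sqlserver.sql_text)
    )
    -- databases_log_file_used_size_changed could be added too, but it gets noisy.
    ADD TARGET package0.event_file
    (
        SET filename = N'C:\Database\XE\DBFileSizeChanges.xel'
    )
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION DBFileSizeChanges ON SERVER STATE = START;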

All we need to do at this juncture is test the session.
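
A manual growth followed by a manual shrink is enough to exercise it, and the captured events can be read straight from the event_file target. The database and file names below are placeholders; point them at a sandbox database.

-- Trigger a growth and a shrink (TestDB and its file name are placeholders).
USE TestDB;
GO
ALTER DATABASE TestDB MODIFY FILE (NAME = N'TestDB', SIZE = 512MB);
DBCC SHRINKFILE (N'TestDB', 256);
GO

-- Read back what the session captured from the event_file target.
SELECT CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\DBFileSizeChanges*.xel', NULL, NULL, NULL);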

Let’s drill into the database_file_size_change event and take a closer look.

There are a few things going on with this event. As previously mentioned, it captures multiple types of size related events. If you now look at the sections that I have circled in red, you will note that there is a flag that tracks whether the size change was automatic. It doesn’t matter if it was a growth or a shrink; both can be automatic. We can tell it was a shrink rather than a growth by the negative value in the size_change_kb field.

Next up, we have the green highlighted section. I have three distinct timestamps circled and separated by dotted red lines. Note the time difference between the groups: each group of events is separated by 30 minutes. As it turns out, if you have autoshrink enabled on a database, it runs on a 30 minute timer. Yes! Autoshrink just so happened to be enabled on this database – for testing purposes.

Wrapping it Up

Extended Events is a powerful tool with plenty of ease of use and flexibility. This flexibility allows the DBA to capably monitor the server for any issue, be it small or large. This article demonstrated how to use Extended Events to monitor for file size changes, and the same principles can be applied to any of the events you may need to investigate. If your XE skills are in need of a little tune-up, I recommend reading a bit on Extended Events to get up to date. For other basics related articles, feel free to read here.

In addition, through the power of XE, we were able to capture a quick glimpse into some internals of the database engine. In this case, we were able to see autoshrink in action and discover the frequency at which it runs when enabled.

Job History – Back to Basics

How necessary is that SQL Server Agent job that you have running on the server? I ask that question of my clients on a routine basis.

Sometimes, I will ask that question as a routine part of a health check for the server. Other times, it might be due to a migration or upgrade that is required for the server. Generally, the answer to the question will be one of two things: a) “Yes, it is necessary.” or b) “What is that job doing?”.

Believe it or not, both answers will typically spawn more questions. You see, I don’t usually ask that question unless the job is failing on a regular basis. And you wouldn’t believe how many scheduled jobs exist out there that just fail every single time they run.

When I encounter a situation like this, it means it is time to have a discussion. To have that discussion, one needs to have done a bit of homework to better understand the situation. For me, part of that homework involves running the numbers and figuring out the frequency of the job’s failure or success.

Data Gathering

For me, I like to understand how often a job has executed and the frequency of failure across those executions. If I see a job that has not succeeded once in 60 consecutive executions, it is probably a safe bet that the job is not needed. Why? Well, if nobody has noticed the job hasn’t been working for that long, the likelihood of the job providing any use to the business is slim to none. In this case, I would present a case to the business as to why it should be removed.

But, how do I get to that point? Well, you could go through the job history for each job one by one and run some manual analytics. Or, you could take advantage of a script. I prefer the script route because it is faster, more reliable and a lot less mundane.
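
That script is essentially a grouping of the msdb job history by outcome. A minimal sketch of my rendition follows (it assumes job history retention has not been trimmed to nothing):

USE msdb;
GO

-- Success/failure counts per job, using the step 0 (job outcome) history rows.
SELECT j.name AS JobName
    , CASE jh.run_status
        WHEN 0 THEN 'Failed'
        WHEN 1 THEN 'Succeeded'
        WHEN 2 THEN 'Retry'
        WHEN 3 THEN 'Canceled'
        ELSE 'Unknown'
      END AS RunStatus
    , COUNT(*) AS ExecutionCount
FROM dbo.sysjobs j
    INNER JOIN dbo.sysjobhistory jh
        ON j.job_id = jh.job_id
WHERE jh.step_id = 0 -- step 0 rows record the overall outcome of each execution
GROUP BY j.name, jh.run_status
ORDER BY j.name;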

Running that script against my sandbox, I may see something such as the following.

Here you will note that the “wtf” job has two entries: one for “Succeeded” (in green) and one for “Failed” (in red). Each row shows the count of executions with that outcome.

This is the type of information I can use in a meeting to discuss with the business. This is no longer a discussion of opinion, but rather one that is now based on facts and data. It now becomes very easy to demonstrate to the business that a job has failed 60/60 times and nobody noticed it or cared enough about the consistent failures to do anything about it. Imagine if the failing job happens to be the database backups. I wonder what the action items for that job failure might include.

The Wrap

An important part of any DBA’s job is to ensure database related jobs are running prim and proper. Sometimes that just doesn’t happen. When jobs are being overlooked, it is useful to be able to gather data about the consistency of job success or failure. This script will help you in your investigation efforts. In addition, I also recommend this article for your agent job audit efforts.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Public Role Permissions – Back to Basics

Managing permissions in the database environment is an important task. Setting permissions correctly is paramount to a successful audit and is one of your best defenses against improper/unwanted access. Yet, in spite of that importance, security is often overlooked, neglected, improperly configured, or just flat out ignored. Let’s not forget the times that security is intentionally misconfigured so certain individuals are permitted backdoor access.

Security, just like performance tuning, is a perpetual (and sometimes iterative) task. There is no excuse for setting your security and forgetting it. It must be routinely reviewed.

While performing a script review for a client, I was reminded of the need to also review their security setup. The reminder was more of a slap in the face: I noticed that the developer had built some permissions assignments into the upgrade scripts. Unfortunately, we were not permitted to alter any of the scripts because they came from a third party vendor (and that vendor likewise refused to fix the problems with the scripts, but I digress).

What could be wrong with this?

I want you to digest that for just a moment. This is an example of the permissions this particular vendor insists on setting for the public role. What could possibly be wrong with that? Let’s examine a couple of the permissions like “Control” and “View Change Tracking”.

View Change Tracking

This is an elevated permission that is required in order to use the change tracking functions. It is considered a high privilege permission for the following reasons:

  1. Change tracking records contain the primary key values for rows that have been deleted. If sensitive information that a user should not be able to access was deleted, the user could reverse engineer the data from the change tracking records.
  2. A user may be denied access to a column that contains sensitive data. If data in that column is changed, the new values are stored in change tracking, and the user could determine what was updated in the sensitive column.

Control

I am going to take this one direct from the Microsoft documentation.

Confers ownership-like capabilities on the grantee. The grantee effectively has all defined permissions on the securable. A principal that has been granted CONTROL can also grant permissions on the securable. Because the SQL Server security model is hierarchical, CONTROL at a particular scope implicitly includes CONTROL on all the securables under that scope. For example, CONTROL on a database implies all permissions on the database, all permissions on all assemblies in the database, all permissions on all schemas in the database, and all permissions on objects within all schemas within the database.

Now digest that a bit. Once digested, consider what the public role does to user access in a database. The public role permissions are inherited by all users of the database, whether or not those users have been granted the permission individually. You should only grant permissions to the public role that you honestly believe ALL users should have. If you are serious about your role, then the number of times you grant permissions to the public role should be either a) never, b) when you want to have a data breach, or c) when you are testing in a sandbox to improve your skills.

Check for Perms

When you are uncertain which permissions have been assigned to the public role, or you just haven’t reviewed your permissions real estate in some time, it is best to pull out a script and start the process. As luck would have it, I have a few scripts that can help with that (here or here), and I have a new one that I am sharing now.

Let’s start with a basic query that will display all of the permissions assigned to the public role in a specific database.
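
Something along these lines will do. This is a simple sketch of my own against the catalog views; the column aliases are a matter of taste.

-- Permissions assigned to the public role in the current database.
SELECT dpr.name AS PrincipalName
    , dp.state_desc AS PermissionState -- DENY or GRANT
    , dp.permission_name
    , SCHEMA_NAME(o.schema_id) AS SchemaName
    , o.name AS ObjectName
FROM sys.database_permissions dp
    INNER JOIN sys.database_principals dpr
        ON dp.grantee_principal_id = dpr.principal_id
    LEFT OUTER JOIN sys.objects o
        ON dp.major_id = o.object_id -- database scoped permissions will show NULL object
WHERE dpr.name = 'public';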

There is nothing super special about this query. It pulls the permissions assigned specifically to the public role, shows whether each permission is a “Deny” or a “Grant”, and then lists the permission name, the schema, and the object.

Let’s take that script and evolve it now. I am going to plan for the worst and expect that some permissions have been applied that shouldn’t have been, courtesy of some vendor upgrade script (because – well, history). Since I am expecting the worst, I am going to add some script generating code that will revoke the unwanted permissions. And since the worst could also mean that revoking the permissions breaks something, I will add code that can generate the appropriate “Grant” statements as well.
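
A sketch of that evolution follows. It focuses on object level permissions (hence the inner join to sys.objects), and the COLLATE clauses are explained just below.

SELECT dpr.name AS PrincipalName
    , dp.state_desc AS PermissionState
    , dp.permission_name
    , SCHEMA_NAME(o.schema_id) AS SchemaName
    , o.name AS ObjectName
    -- Script to revoke the unwanted permission.
    , 'REVOKE ' + dp.permission_name
        + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) COLLATE Latin1_General_CI_AS_KS_WS
        + '.' + QUOTENAME(o.name) COLLATE Latin1_General_CI_AS_KS_WS
        + ' FROM public;' AS RevokeScript
    -- Script to re-grant it, in case the revoke breaks something.
    , 'GRANT ' + dp.permission_name
        + ' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) COLLATE Latin1_General_CI_AS_KS_WS
        + '.' + QUOTENAME(o.name) COLLATE Latin1_General_CI_AS_KS_WS
        + ' TO public;' AS GrantScript
FROM sys.database_permissions dp
    INNER JOIN sys.database_principals dpr
        ON dp.grantee_principal_id = dpr.principal_id
    INNER JOIN sys.objects o
        ON dp.major_id = o.object_id
WHERE dpr.name = 'public';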

That looks better. I have a way of identifying the unwanted permissions as well as an easy script I can execute to remove them. Note the use of COLLATE in the final two columns. As it turns out, permission_name in sys.database_permissions has a column collation of Latin1_General_CI_AS_KS_WS. Since I ran into some errors (shown below), it is easier to direct the database engine to use the collation that matches the permission_name column.

Msg 451, Level 16, State 1, Line 11
Cannot resolve collation conflict between “SQL_Latin1_General_CP850_CS_AS” and “Latin1_General_CI_AS_KS_WS” in add operator occurring in SELECT statement column 5.
Msg 451, Level 16, State 1, Line 11
Cannot resolve collation conflict between “SQL_Latin1_General_CP850_CS_AS” and “Latin1_General_CI_AS_KS_WS” in add operator occurring in SELECT statement column 6.

Alas, this is still not quite as efficient a script as I would like. I may have hundreds of databases on the instance and need to evaluate all of them. Time for the bigger guns.
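
One way to widen the net is sketched below using the undocumented (but long-lived) sp_MSforeachdb procedure; a cursor over sys.databases is the more dependable alternative if you prefer to avoid undocumented procedures.

-- Run the public role permissions check in every database on the instance.
DECLARE @sql nvarchar(2000) = N'USE [?];
SELECT DB_NAME() AS DatabaseName
    , dpr.name AS PrincipalName
    , dp.state_desc AS PermissionState
    , dp.permission_name
    , SCHEMA_NAME(o.schema_id) AS SchemaName
    , o.name AS ObjectName
FROM sys.database_permissions dp
    INNER JOIN sys.database_principals dpr
        ON dp.grantee_principal_id = dpr.principal_id
    INNER JOIN sys.objects o
        ON dp.major_id = o.object_id
WHERE dpr.name = ''public''
    AND o.is_ms_shipped = 0;';

EXEC sys.sp_MSforeachdb @sql;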

That will take care of all of the permissions for the public role in all of the databases, with a slight caveat: I am only checking objects that are not flagged as is_ms_shipped. Now, isn’t there also a public role at the server scope? Indeed there is! Let’s capture those permissions as well.
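
The server scoped equivalent is a quick query against the server catalog views (another simple sketch of mine):

-- Permissions granted to the server level public role.
SELECT spr.name AS PrincipalName
    , sp.state_desc AS PermissionState
    , sp.permission_name
    , sp.class_desc
FROM sys.server_permissions sp
    INNER JOIN sys.server_principals spr
        ON sp.grantee_principal_id = spr.principal_id
WHERE spr.name = 'public';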

Now, I feel many times better about what could possibly be going wrong with the public role.

If you are in a tightly controlled environment or you are just sick of people doing this sort of thing to your servers, there are more extreme measures that can be taken. You can read about it here or here.

The Wrap

It is amazing what some people will do that just doesn’t make sense. Granting permissions to the public role is one of these cases. That behavior also explains why there are documents and procedures for hardening the public role (here and here).

If necessary, I recommend locking down your public role. It will make your job a little easier and help you rest better at night.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.
