When Too Much is Not a Good Thing

In my previous article, I demonstrated some fun ways of locking down access for certain users so that they had no data access while still being allowed to review the database schema. The solution provided had the added benefit of reducing the chances that such a user could unknowingly cause a server outage.

In that solution, I showed the value of the “View Definition” permission. There are a couple of caveats to this permission that could lead to unwanted results.

It is these unwanted results that often occur due to a lack of attention to detail, the use of internet examples, or a misunderstanding of the requirements; and they always stem from granting way too much access. I am going to explore a couple of these erroneous methods and explain why they absolutely bug me.

You Get Everything!

Let’s start with the most common misstep that I see. A user requests access to the database and the stand-in DBA takes the easy route. The easy route here is to grant the requester sa access, dbo access, and every other database and server role there is, because it is easier to just tick the boxes.

This method of assigning permissions happens far too frequently. I have seen it in hundreds of databases for clients. I have also written about it here.

You see, once you grant somebody sa access, none of the rest of the permissions grants or denies matter. Sysadmin overrides everything! In this particular case, this would be way too much access for the user who just needs to see the schema. Be extremely careful about when and to whom you grant sysadmin access.

Next up in the realm of granting way too much access is the far too frequent practice of assigning permissions to the public role in each database. I have written about this practice as well – here and here. This one really raises my ire. Granting permissions to the public role crosses into the realm of high risk and downright negligence. Let’s see how some internet examples demonstrate the solution to our problem with regards to the public role.

First, using the test user and role we created in yesterday’s article, let’s confirm that we do not have access to view definitions outside of the assigned role permissions.
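The original check isn’t reproduced here, but a sketch along these lines (using the testvwdef user from this series) gets the idea across:

```sql
-- run as a sysadmin; impersonate the test user and list effective database permissions
EXECUTE AS USER = 'testvwdef';
SELECT entity_name, permission_name
  FROM fn_my_permissions(NULL, 'DATABASE');
REVERT;
```

If VIEW DEFINITION shows up in that list, the user is getting it from somewhere other than the role.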

After executing that query, I can confirm that the public role does not have the “View Definition” permission. Let’s now revoke the permission to the role and confirm no schema definitions could be viewed.
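A hedged sketch of that revoke (the role name BusinessViewDef and database DBA follow this series’ examples; dbo.SomeTable is a hypothetical stand-in table):

```sql
USE DBA;
GO
REVOKE VIEW DEFINITION FROM [BusinessViewDef];
GO
-- reconnected as testvwdef, this should now return NULL
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.SomeTable')) AS ObjDef;
```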

Now, knowing that the user and the role do not have permissions, let’s go ahead and assign permissions to the public role.
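The grant in question might look like this sketch:

```sql
USE DBA;
GO
-- the internet-example anti-pattern: granting to public grants to everybody
GRANT VIEW DEFINITION TO [public];
GO
```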

Just like magic, a user that should not be able to view schema definitions can now view information it is not supposed to see. This is a security problem. Just to confirm, let’s evaluate the principal permissions compared to the role permissions.

Keep this in mind as you go down the dark path of granting permissions to the public role. Just because it is easy doesn’t make it right. On a similar note, you should proceed cautiously with scripts from the internet that suggest granting permissions to the public role. They are wrong!

Finally, we move down into the realm of “only slightly better” but still way over the top for the requirements. This last method uses the “VIEW ANY DEFINITION” way of granting permissions. The “ANY” keyword actually scopes the permissions to every database on the instance. Let’s clean up the last example by revoking the permission from public so we have a blank slate.
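The cleanup might look like:

```sql
USE DBA;
GO
-- undo the public grant from the previous example
REVOKE VIEW DEFINITION FROM [public];
GO
```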

With permissions revoked, I will now grant View Any Definition to the test user. Why? Well, because I don’t have that role created in every database (more on the benefits of that approach later). This permission is server scoped, so make sure you are in the master database when setting it, else you chance seeing this error.

Msg 4621, Level 16, State 10, Line 22
Permissions at the server scope can only be granted when the current database is master
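To avoid that message, issue the grant from the master database. A sketch, using the testvwdef login from this series:

```sql
USE master;
GO
-- server scoped: this exposes object definitions in EVERY database on the instance
GRANT VIEW ANY DEFINITION TO [testvwdef];
GO
```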

Yup, that just made my blood curdle a bit. Let’s check out the perms now in the DBA database (reminder that the scope of the permission change was the master database and is a server scoped change).

As we can see here, all databases now have the View Definition permission assigned to testvwdef. If there happens to be a sensitive database, you may have exposed yourself to a risk by doing this.

Performing the same tests as previously done would reveal similar results – except on a broader scope.

There are some caveats to help reduce this risk, but they do not undermine the need to be responsible or the need to avoid using the “ANY” keyword. Recall that I granted View Any Definition to the individual user instead of the role? That was simply because I did not have that role in each database. If we grant permissions to roles, and the role does not exist in the master database while we are trying to grant permissions from that scope, then we see this nifty little message.

Msg 15151, Level 16, State 1, Line 25
Cannot find the login ‘BusinessViewDef’, because it does not exist or you do not have permission.

That is one measure to help prevent this over-permissioning problem. The next possible caveat is a minor stop-gap and can be easily overlooked should the user ever be added to the database in the future. For “Any” to work in its full splendor, the principal to which it was granted must have already been granted “connect” to the database.
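A sketch of what granting that connect looks like (SomeSensitiveDB is a hypothetical database name):

```sql
USE SomeSensitiveDB;
GO
-- mapping a user for the login implicitly grants CONNECT on the database
CREATE USER [testvwdef] FOR LOGIN [testvwdef];
-- or, granted explicitly:
GRANT CONNECT TO [testvwdef];
```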

 

The Wrap

This article has explored several options for minimizing the risk of over-extending permissions to users. With a little planning and care, we can find various ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort in trying to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Who needs data access? Not You!

Does this sound at all familiar? A user submits a request stating they must have elevated access to a database. In your attempt to gather more information about the request, you learn they “must” have access because so-and-so has elevated access and they are curious about the database.

The conversation doesn’t go quite as smoothly as that. In fact, it may take quite a lot of effort to squeeze that tiny bit of real information from the request.

After assembling that tidbit of information, you may find yourself choking back a scoff or three. “You need access because you want to explore the database?” Yeah I can hear the incredulity in your thoughts from here. At no point is this kind of “need” a legitimate reason to gain access to a database.

“I need elevated access!” Oh No you don’t!

Beyond the back and forth of the demand for access just to play around in the data and the DBA resisting that access in the best interest of the data and company, there may be a middle ground that can be reached. I am not talking about creating a dummy database with obfuscated data. Nor am I talking about a dummy database with fake data.

Every now and then, this kind of request may cross your desk and all the requester is really trying to do is find some means of becoming familiar with the database. This can be ferreted out with some extra warnings and more discussions with the requester. Maybe the request really has nothing to do with the data and everything to do with just finding a way to get to know the schema because there is no documentation.

Given the plausibility of this need, there are many ways to grant the request – some may require more resources than others. Let’s suppose the user has absolutely no understanding of TSQL. With that minimal knowledge, you know there are likely to be many potential issues that could cause the database to be brought down and made unusable. This is terrible if the affected database is in production. It is also very bad if the database is a non-prod database. Given this potential, you know you don’t want to grant any permission that implies read or write permissions in the database (if there is data present).

What if we could create a database without the data? Would that work? Absolutely that would work. This would in essence be a sandbox for the user, and all responsibility for database issues could be directed back to the specific user. This is a fabulous solution if you have adequate resources for more databases/instances/servers to be created. There are multiple ways to create an empty database, but to ensure the best end-user experience, considerations for database naming, resources, and potential impact to other databases must be acknowledged.

Personally, I prefer to maintain the same database name regardless of tier (dev, test, prod etc). This also applies to schema only types of databases where limited access would be granted. Changing database names on the users only breeds confusion and complicates code promotions. If you need to provide the same sort of solution for 30-40 or more databases, you can imagine what kind of confusion that would cause on a server where the actual data-filled databases actually exist (think dbcc clonedatabase types of solutions here where the cloned database must be named differently).

What to do? What to do?

Thankfully there is still at least one more entirely viable option. We can use the power of security to achieve some semblance of what the user wants while still preventing the person from viewing the data or causing performance issues.

Limiting access for prying eyes is fairly easy while still granting the opportunity to explore the database design. In SQL Server we have a permission called “View Definition” which will help us accomplish exactly this requirement. In preparation for this, one must determine the scope of the need. Generally speaking, this should be done on a per-database approach. There is a method to grant this for all databases on the server but that is a topic for another day. Once the scope is determined, it is time to proceed with the setup.

I prefer to take advantage of Roles within the database. This simplifies the future maintenance and management of the permissions. If a new principal needs the same permissions as Joe from accounting, then I just add the new user to the same role(s) as Joe – piece of cake.

With the Role created, user added to the role, and the “View Definition” permission granted to the role, we are ready to test.
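The setup just described might be sketched like this (the login/user testvwdef, role BusinessViewDef, and database DBA follow the examples in this series; the password is a placeholder):

```sql
USE DBA;
GO
-- create the login and map a database user for it
CREATE LOGIN [testvwdef] WITH PASSWORD = N'Sup3rS3cret!Placeholder';
CREATE USER [testvwdef] FOR LOGIN [testvwdef];
GO
-- create the role and add the user as a member
CREATE ROLE [BusinessViewDef];
ALTER ROLE [BusinessViewDef] ADD MEMBER [testvwdef];
GO
-- grant schema visibility to the role, not the individual user
GRANT VIEW DEFINITION TO [BusinessViewDef];
GO
```

If a new principal later needs the same access, a single ALTER ROLE ... ADD MEMBER is all it takes.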

First order of business is to connect with that new login we created (a SQL login was created and is used in this example, but the effects are the same with a Windows login) as shown in the preceding image. After connecting, let’s try some things.

The first test I want to perform is to confirm the user has no read access to data to ensure this permission does not grant extra access along the way.
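A sketch of that test (dbo.SomeTable is a stand-in for any table in the database):

```sql
-- connected as testvwdef; this should fail with a SELECT permission denied error
SELECT TOP (10) *
  FROM dbo.SomeTable;
```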

Great! Select access to objects from this permission alone does not work. Now, let’s test the ability to view object definitions. First, let’s try via the object designer.

First we see a warning about not having enough permissions to save changes, which is good. After clicking OK on that warning, a new message pops up to remind us that the table is “read only”. Perfect! The permission we gave this user does not allow them to alter objects – just view them (as we would expect).

Seeing these results should also help confirm that we can browse the table definition directly in SSMS. Let’s take a gander from a TSQL approach.
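That approach might be as simple as a call to a system procedure such as sp_help (dbo.SomeTable again standing in for a real table):

```sql
-- system procedure that returns columns, indexes, constraints, etc.
EXEC sp_help 'dbo.SomeTable';
```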

Great, we can explore the table definitions etc with TSQL. That is of course a system stored procedure, so let’s try one more method.
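One possible custom query of that sort, sketched against the catalog views:

```sql
-- list user tables with their columns and data types
SELECT o.name AS TableName,
       c.name AS ColumnName,
       t.name AS DataType,
       c.max_length
  FROM sys.objects o
  JOIN sys.columns c ON c.object_id = o.object_id
  JOIN sys.types   t ON t.user_type_id = c.user_type_id
 WHERE o.type = 'U'
 ORDER BY o.name, c.column_id;
```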

Running this query, I will see results similar to this.

Knowing that we can run custom queries to explore the schema is almost comforting. It does confirm that we are able to query system objects (yes a select works here but this is read only object definition data). But does this mean the user can also see system data? Let’s try!

Excellent! So it seems we have a plausible option to keep the footprint small and minimize access to the data.

The Wrap

This article has explored several options for reducing data access for users who “need” it. With a little planning and care, we can find various ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort in trying to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

It’s an Honor: Microsoft MVP

Published on: July 1, 2019

Today (July 1st) is the notification day for all Microsoft MVPs. Microsoft changed the process a couple of cycles back so that all notifications arrive on the same day, bringing all candidates to a single emotional day. I received my notification early this morning that I was awarded the MVP again for the 2019-2020 cycle. This is my 6th award overall.

I am humbled to be among so many great professionals and honored at the same time for having been chosen. Are you curious about this MVP thing? Well, it’s sorta like being named MVP for a sports league, while at the same time a little bit different. Here is what Microsoft has to say about the MVP. I personally think you may get more out of reading about the MVP here (a little bias might be involved).

This is a community award for your efforts to promote and help others work with Microsoft products. I was awarded for my efforts in the Data Platform. Many others receive the award for their efforts in AI, Powershell, or Azure (as examples). Generally, the people that are awarded are seen as being passionate about the Microsoft Product. These people spend a lot of time, effort, energy, and sometimes money to help share their enthusiasm for the product.

Microsoft is doing a fantastic job at recognizing people for their efforts in promoting the MS brand. Not everybody that promotes that brand gets to be an MVP – sadly. If you didn’t make it this time, keep trying. I just advise that the goal be more about the community than the award. The MVP award is great, but giving to the community is the real reward.

I want to congratulate all those awarded a Microsoft MVP award past, present and future. I also want to thank all of those who have given to the community regardless of whether an award was received or not. There are many of you out there that have been a positive influence whether you know it or not – just for giving to the community and helping somebody else. THANK YOU!

Public Role Permissions – Back to Basics

Permissions in the database environment are such an important topic. Setting permissions correctly is paramount to a successful audit and one of your best defenses against improper/unwanted access. Yet, in spite of the importance, security is often overlooked, neglected, improperly configured, or just flat out ignored. Let’s not forget the times that security is intentionally misconfigured so certain individuals are permitted backdoor access.

Security, just like performance tuning, is a perpetual (and sometimes iterative) task. There is no excuse for setting your security and forgetting it. It must be routinely reviewed.

While performing a script review for a client, I was reminded of the need to also review their security setup. The reminder was more of a slap in the face as I noticed that the developer had built some permissions assignments into some upgrade scripts. Unfortunately, we were not permitted to alter any of the scripts because they came from a third party vendor (and that vendor refused to fix the problems with the scripts as well, but I digress).

What could be wrong with this?

I want you to digest that for just a moment. This is an example of the permissions this particular vendor insists on setting for the public role. What could possibly be wrong with that? Let’s examine a couple of the permissions like “Control” and “View Change Tracking”.

View Change Tracking

This permission is an elevated permission that is required in order to use the change tracking functions. It is considered elevated for the following reasons:

  1. Change tracking records contain the PK value for rows that have been deleted. If sensitive information was deleted that a user should not be able to access, the user would be able to reverse engineer the data from the change tracking data.
  2. A user may be denied access to a column that contains sensitive data. If data in the column is changed, the data would be stored in the change tracking and a user can determine the values that were updated for the sensitive data.

Control

I am going to take this one direct from the Microsoft documentation.

Confers ownership-like capabilities on the grantee. The grantee effectively has all defined permissions on the securable. A principal that has been granted CONTROL can also grant permissions on the securable. Because the SQL Server security model is hierarchical, CONTROL at a particular scope implicitly includes CONTROL on all the securables under that scope. For example, CONTROL on a database implies all permissions on the database, all permissions on all assemblies in the database, all permissions on all schemas in the database, and all permissions on objects within all schemas within the database.

Now digest that a bit. Once digested, consider what the public role does to user access in a database. The public role permissions are inherited by all users of the database, whether the users have been granted the permission or not. You should only grant permissions to the public role that you honestly believe ALL users should have. If you are being serious in your role, then the number of times you grant permissions to the public role should either be a) never, b) when you want to have a data breach, or c) when you are testing in a sandbox to improve your skills.

Check for Perms

When you are uncertain of which permissions have been assigned to the public role, or you just haven’t reviewed your permissions real-estate in some time, it is best to pull out a script and start the process. As luck would have it, I have a few scripts that can help with that (here or here) and I have a new one that I am sharing now.

Let’s start with a basic query that will display all of the permissions assigned to the public role in a specific database.
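The exact script is not reproduced here, but a query along these lines covers those same columns:

```sql
USE DBA;  -- or whichever database you are reviewing
GO
-- grant/deny state, permission name, then schema and object for the public role
SELECT dp.state_desc,
       dp.permission_name,
       SCHEMA_NAME(o.schema_id) AS SchemaName,
       o.name AS ObjectName
  FROM sys.database_permissions dp
  LEFT JOIN sys.objects o ON o.object_id = dp.major_id
 WHERE USER_NAME(dp.grantee_principal_id) = 'public';
```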

There is nothing super special about this query. Looking at it, it is querying the permissions for the public role specifically. I display where the permission is a “Deny” or “Grant”. Then we list the permission name and then the schema and the object.

Let’s take that script and evolve it now. I am going to plan for the worst and expect that some permissions have been applied that shouldn’t have by some vendor upgrade script (because – well, history). Since I am expecting the worst, I am going to add some script generating code that will revoke the unwanted permissions. And still expecting the worst would be that revoking the permissions will break something, I will also add some code that can generate the appropriate “Grant” statements.
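A hedged sketch of that evolved script, including the collate fix:

```sql
USE DBA;
GO
SELECT dp.state_desc,
       dp.permission_name,
       SCHEMA_NAME(o.schema_id) AS SchemaName,
       o.name AS ObjectName,
       -- script to remove the unwanted permission
       'REVOKE ' + dp.permission_name + ' ON '
           + QUOTENAME(SCHEMA_NAME(o.schema_id)) COLLATE Latin1_General_CI_AS_KS_WS + '.'
           + QUOTENAME(o.name) COLLATE Latin1_General_CI_AS_KS_WS
           + ' FROM [public];' AS RevokeScript,
       -- script to put it back if the revoke breaks something
       'GRANT ' + dp.permission_name + ' ON '
           + QUOTENAME(SCHEMA_NAME(o.schema_id)) COLLATE Latin1_General_CI_AS_KS_WS + '.'
           + QUOTENAME(o.name) COLLATE Latin1_General_CI_AS_KS_WS
           + ' TO [public];' AS GrantScript
  FROM sys.database_permissions dp
  JOIN sys.objects o ON o.object_id = dp.major_id
 WHERE USER_NAME(dp.grantee_principal_id) = 'public';
```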

That looks better. I have a way of identifying the unwanted permissions as well as an easy script I can execute to remove the unwanted permissions. Note the use of the collate in the final two columns. As it turns out, permission_name from sys.database_permissions has a column collation of Latin1_General_CI_AS_KS_WS. Since I ran into some errors (shown below), it is easier to direct the DB engine to use the collation that matches the permission_name column.

Msg 451, Level 16, State 1, Line 11
Cannot resolve collation conflict between “SQL_Latin1_General_CP850_CS_AS” and “Latin1_General_CI_AS_KS_WS” in add operator occurring in SELECT statement column 5.
Msg 451, Level 16, State 1, Line 11
Cannot resolve collation conflict between “SQL_Latin1_General_CP850_CS_AS” and “Latin1_General_CI_AS_KS_WS” in add operator occurring in SELECT statement column 6.

Alas, this is still not quite as efficient of a script as I would like. I may have hundreds of databases on the instance and need to evaluate all of them. Time for the bigger guns.
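A sketch of the bigger guns, here leaning on the undocumented (but long-lived) sp_MSforeachdb procedure; a cursor over sys.databases would work just as well:

```sql
DECLARE @perms TABLE (DBName SYSNAME, StateDesc NVARCHAR(60), PermName NVARCHAR(128),
                      SchemaName SYSNAME NULL, ObjectName SYSNAME NULL);

-- run the public-role permissions query in every database on the instance
INSERT @perms
EXEC sys.sp_MSforeachdb N'USE [?];
SELECT DB_NAME(), dp.state_desc, dp.permission_name,
       SCHEMA_NAME(o.schema_id), o.name
  FROM sys.database_permissions dp
  LEFT JOIN sys.objects o ON o.object_id = dp.major_id
 WHERE USER_NAME(dp.grantee_principal_id) = ''public''
   AND (o.object_id IS NULL OR o.is_ms_shipped = 0);';

SELECT * FROM @perms ORDER BY DBName;
```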

That will take care of all of the permissions for the public role in all of the databases, with a slight caveat. I am only checking against objects that are not flagged as is_ms_shipped. Now, isn’t there also a public role at the server scope? Indeed there is! Let’s also capture those permissions.
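Capturing the server-scoped public permissions might look like:

```sql
-- permissions granted or denied to the public server role
SELECT pr.name AS PrincipalName,
       sp.state_desc,
       sp.permission_name
  FROM sys.server_permissions sp
  JOIN sys.server_principals pr ON pr.principal_id = sp.grantee_principal_id
 WHERE pr.name = 'public';
```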

Now, I feel many times better about what could possibly be going wrong with the public role.

If you are in a tightly controlled environment or you are just sick of people doing this sort of thing to your servers, there are more extreme measures that can be taken. You can read about it here or here.

The Wrap

It is amazing what some people will do that just doesn’t make sense. Granting permissions to the public role is one of these cases. That behavior also explains why there are documents and procedures for hardening the public role (here and here).

If necessary, I recommend locking down your public role. It will make your job a little easier and give you better rest at night.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Database Offline – Redux

I have written a couple articles showing how to audit database offline events via the default trace as well as via Extended Events. These are great for a more proactive approach. Sometimes the proactive approach is just not plausible for the given circumstances. A good example of this is when you inherit servers (e.g. new client, new project, new job) and there are databases on the server that were taken offline prior to your inheritance.

In a situation like this, you need to find out when the databases were taken offline, but you are somewhat limited by the data available to you for your research purposes. The default trace has rolled beyond the retention point for your server for events that were trapped. Setting up an Extended Events session would be less than helpful given the databases are already offline. So we must find an alternative to finding approximately when the databases were taken offline.

Alternatives

If your predecessor leaves you with a database that has been taken offline and no documentation about it, there is an alternative way to find the approximate time it was taken offline – using TSQL. Granted, you could explore the file system and make the same sort of guess based on the file modified date. Data people, however, generally prefer some sort of script, and preferably one that is native to their language (TSQL).

So, let’s take a look at a viable option for figuring out the mystery behind your database that has been set offline with no good info surrounding it.
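This is not the exact script from the post, but a sketch of the idea using the differential_base_time column:

```sql
-- approximate when each offline database went offline, based on its last full backup base
SELECT d.name AS DBName,
       mf.differential_base_time,
       DATEDIFF(DAY, mf.differential_base_time, GETUTCDATE()) AS ApproxDaysOffline
  FROM sys.master_files mf
  JOIN sys.databases d ON d.database_id = mf.database_id
 WHERE d.state_desc = 'OFFLINE'
   AND mf.differential_base_time IS NOT NULL;  -- NULL when no full backup exists
```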

Running this query, I see the following on my test server:

Note that this query uses GETUTCDATE. The differential_base_time column in sys.master_files is in UTC time. Thus, in order to compare properly, we need to ensure we use a UTC datestamp for comparison. Also, of note, this query doesn’t work if there is no full database backup for the database in question. Who doesn’t backup their databases though, right? No seriously, you really should.

Since this is my test server and I am well prepared, I just so happen to have an XEvent session running that tracks the offline events that I can use to compare the results.

Yes, there is a bit of variance in the actual database offline event and the differential_base_time column. This variance is due to timing of the backup and the actual offline event. Again, this is about finding an approximate time of when the database was taken offline as a viable alternative when other data is not really available.

Conclusion

We all aspire to having a perfect database environment where nothing surprising or unexpected happens. Unfortunately, that is the stuff of dreams and fairy tales. The unexpected will happen. A database can be offline at the time that you take over responsibility for the database. You will be asked when it was taken offline. This script will help you get a reasonable deduction for that offline event in the absence of better data.

Extended Events is a powerful tool to help in troubleshooting and tuning your environment. I recommend investing a little time in reading the 60 day series about Extended Events. This is not a short series but is designed to provide an array of topics to help learn the tool over time. Don’t forget to go back and read the companion article showing how to audit these events via the default trace.

Quick Permissions Audit

Whether it is for a client, an audit, or just for good housekeeping, DBAs will often need to figure out who has access to what.  In addition, they may need to know by what means people have access within your databases.

When that need arises, it is frequently adequate to just perform a quick audit and then dive deeper if the quick audit shows anything questionable. Today, I am sharing a means to perform this quick audit. I will start with the logins (server principals), peek at the users (database principals), then the server roles, and wrap it up with the database roles. What I don’t do is dive into the Windows group memberships. So if a Windows group exists within SQL Server as a login, I do not try to figure out who is a member of that group. That can be obtained, but is a separate discussion.

So, in the vein of a quick permissions audit, I will quickly get to the script to help perform this permissions audit.
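The full script is not reproduced here, but a condensed sketch of the approach (the input lists are hypothetical, DelimitedSplit8K by Jeff Moden is assumed to exist in the database, and role memberships stand in for the full permission detail) looks like this:

```sql
DECLARE @Logins VARCHAR(512) = 'testvwdef,SomeOtherLogin';  -- hypothetical inputs
DECLARE @DBs    VARCHAR(512) = 'DBA,AdventureWorks2017';

-- temp tables built from the delimited lists
SELECT Item AS LoginName INTO #Logins FROM dbo.DelimitedSplit8K(@Logins, ',');
SELECT Item AS DBName    INTO #DBs    FROM dbo.DelimitedSplit8K(@DBs, ',');

-- FOR XML builds a single delimited string of server role memberships per login
SELECT sp.name AS LoginName,
       STUFF((SELECT ',' + r.name
                FROM sys.server_role_members srm
                JOIN sys.server_principals r ON r.principal_id = srm.role_principal_id
               WHERE srm.member_principal_id = sp.principal_id
                 FOR XML PATH('')), 1, 1, '') AS ServerRoles
  FROM sys.server_principals sp
  JOIN #Logins l ON l.LoginName = sp.name;

-- cursor builds dynamic SQL to list users and database role memberships per database
DECLARE @DBName SYSNAME, @sql NVARCHAR(MAX);
DECLARE db_cur CURSOR FAST_FORWARD FOR SELECT DBName FROM #DBs;
OPEN db_cur;
FETCH NEXT FROM db_cur INTO @DBName;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'SELECT N''' + @DBName + N''' AS DBName, dp.name AS UserName,
       STUFF((SELECT N'','' + r.name
                FROM ' + QUOTENAME(@DBName) + N'.sys.database_role_members drm
                JOIN ' + QUOTENAME(@DBName) + N'.sys.database_principals r
                  ON r.principal_id = drm.role_principal_id
               WHERE drm.member_principal_id = dp.principal_id
                 FOR XML PATH(N'''')), 1, 1, N'''') AS DatabaseRoles
  FROM ' + QUOTENAME(@DBName) + N'.sys.database_principals dp
  JOIN #Logins l ON l.LoginName = dp.name;';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM db_cur INTO @DBName;
END
CLOSE db_cur;
DEALLOCATE db_cur;

DROP TABLE #Logins; DROP TABLE #DBs;
```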

As you look through this code, you will notice that I have done a few interesting things.  Here is a quick recap:

  • Build a temp table based on a delimited list of databases input into a variable
  • Build a temp table based on a delimited list of logins input into a variable
  • Used a string splitter known as DelimitedSplit8K by Jeff Moden (google it) or LMGTFY
  • Used FOR XML to build a delimited string of Server Role Permissions
  • Used FOR XML to build a delimited string of Database Role Permissions
  • Used a CURSOR to build a dynamic SQL string to execute for each database

Holy cow! That is a lot of stuff for such a simple quick audit.  A cursor? Seriously? Well yeah! It is OK in this type of scenario to use a cursor.  This is a well-performing cursor for an administrative function. In addition, I chose to use sub-queries to build my delimited string of permissions so I could keep those things together in a single record per login/role.

If I run that query on my instance of SQL Server, I would end up with something that looks like the following.

Now, obviously, unless you had these same databases, logins, and even the same permissions assigned to these logins, your results will be different. So, make sure you go and change those inputs for logins and databases to be queried.

For more articles on audits and auditing check here and here.

PowerShell ISE Crashes

Working with PowerShell brings a lot of advantages and power to help manage a server. The more current your PoSh version, the more efficiently you will be able to manage your server. Sometimes getting to the current PoSh versions comes with a little pain such as ISE crashes.

I recently had the misadventure of working through some ISE crashes after bringing some systems up to PoSh 5.1 that were either PoSh 2.0 or 3.0. It’s not a very fun situation to run a WMF update and then run into a crash of any type when testing whether it worked. Your first thought is that something terrible has happened.

As it stands, the problem is more of a nuisance than a critical failure. That said, it is enough of a problem that anyone who uses the ISE or .Net applications may experience a slight cardiac event.

Fonts

As you work to quickly recover from your missed heart beat, you start digging through logs and then hitting good old trusty google.

Diving through the logs, you might just happen across an error similar to the following:

Problem signature:
Problem Event Name: PowerShell
NameOfExe: PowerShell_ISE.exe
FileVersionOfSystemManagementAutomation: 6.1.7600.16385
InnermostExceptionType: System.Xml.XmlException
OutermostExceptionType: System.Reflection.TargetInvocation
DeepestPowerShellFrame: indows.PowerShell.GuiExe.Internal.GPowerShell.Main
DeepestFrame: indows.PowerShell.GuiExe.Internal.GPowerShell.Main
ThreadName: unknown.

Maybe the first error you encounter might look like this one instead:

System.TypeInitializationException

“FileFormatException: No FontFamily element found in FontFamilyCollection
that matches current OS or greater: Win7SP1”.

Inner exception originates from: CompositeFontParser

Either way, the error shoots us back to the same fundamental problem. The ISE won’t load, you get an error message, and you can’t confirm that the WMF patch was applied properly.

As you work your fingers faster and faster through the pages on google, you discover that this problem is caused more explicitly by a patch for the .Net framework and not necessarily the work to upgrade your PoSh version. It only waited to manifest itself after the upgrade.

That’s gravy and all, but how does one fix the problem? For me, the quickest and most reliable fix was to simply jump straight to the root of the problem – fonts. The ISE is a WPF application and it also requires a fallback font (if a character isn’t present in your font set, then the app chooses a substitute from the fallback font – or something like that).

The fix is extremely simple and really underscores why this is merely a nuisance issue and not a critical problem. Thus it shouldn’t cause any sort of sinking internal feelings. There are a few plausible fixes floating around out there. I recommend just doing a manual font replacement. It is but three simple steps:

  1. Download GlobalUserInterface.CompositeFont
  2. XCOPY the font to %windir%\Microsoft.NET\Framework\v4.0.30319\WPF\Fonts
  3. XCOPY the font to %windir%\Microsoft.NET\Framework64\v4.0.30319\WPF\Fonts
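Sketched as commands (run from an elevated prompt in the folder holding the downloaded font):

```batch
REM copy the fallback font into both the 32-bit and 64-bit WPF font folders
xcopy GlobalUserInterface.CompositeFont "%windir%\Microsoft.NET\Framework\v4.0.30319\WPF\Fonts\" /Y
xcopy GlobalUserInterface.CompositeFont "%windir%\Microsoft.NET\Framework64\v4.0.30319\WPF\Fonts\" /Y
```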

After you have copied the font to those two directories, then all that is needed to be done is launch the ISE. I ran into the same problem on three or four servers and the fix took no more than 5 minutes on each of the servers.

Conclusion

I previously mentioned that I have been working more and more with PoSh to try and improve my skillset there. This is one of those very low-level trinkets that I ran into as I have been working to hone my skills in that tech. For other, possibly interesting, articles about my experiences with PowerShell, you can check out these articles.

