SQL Server Principals – Back to Basics

Categories: News, Professional, Security, SSC
Comments: 1 Comment
Published on: January 28, 2016


Prelude in SQL Minor

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate. I hope I can meet the requirements and keep the posts to “basics”. Let’s just call this first post in the challenge my official acceptance.

SQL Server Principals

A fundamental component of SQL Server is the security layer. A principal player in SQL Server security comes via principals. SQL Server principals come in more than one flavor. (This is where a lot of confusion gets introduced.) The different types of principals are database and server. A database principal is also called a database user (or sometimes just user). A server principal is also called a server login, or login for short.

Server and database principals are quite different in function and come with different permission sets. The two terms are sometimes used interchangeably, but that is incorrect. Today I hope to unravel some of what each is and how to see permissions for each type of principal.

Let’s first look at definitions for each type of principal and how to query for basic information about each principal (e.g. name and creation date).

Principals

In general, principals are entities to which permissions are granted; these entities can request access to resources. As mentioned already, principals can be scoped to different levels, and those levels include the database and the server.

Server Principals

Server principals are the kind of principals that include logins and server roles. You may be familiar with some of the fixed server roles:

  • public
  • sysadmin
  • securityadmin
  • serveradmin
  • setupadmin
  • processadmin
  • diskadmin
  • dbcreator
  • bulkadmin

The logins can be created from a Windows login or group, or be created as a SQL login. These principals can also include custom-created server roles (in addition to the system-created server roles already listed). Once a principal is created, permissions may be granted to it. When a permission has been granted, the principal can then request the related resource and complete the corresponding task.

What permissions can be granted to a principal at the server scope? A list of permissions can be created via the following query:
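
A minimal version of such a query, using sys.fn_builtin_permissions, might look like this (a sketch; the exact script can certainly be dressed up further):

[codesyntax lang="tsql"]
/* Server scoped permissions available for grant */
SELECT bp.class_desc
    , bp.permission_name
    , bp.covering_permission_name
    , bp.parent_class_desc
    , bp.parent_covering_permission_name
FROM sys.fn_builtin_permissions(DEFAULT) bp
WHERE bp.class_desc = 'SERVER'
ORDER BY bp.permission_name;
[/codesyntax]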

And a sample of the results could look like this:

server permissions

An interesting note here is in the red highlighted permissions. If you look at the documentation for server permissions you will not find those two permissions (at least not as of this writing).

From the permissions returned by the query, you will see that these are all permissions related to server administration types of tasks. Note that these permissions do not grant the ability to perform tasks that belong to the database scope. For instance, the server permissions do not grant the explicit permission to create a reference, execute a procedure or create a table within a database. Those are all permissions reserved for the database scope.

Database Principals

Database principals are the type of principals scoped to the database level. These principals request resources from the database and, depending on the permissions granted to them, are able to perform various tasks within the database. The types of database principals include database roles, application roles, and database users. A SQL Server login can be mapped to a database user and thereby be granted access to the database as a database principal.

Since a database principal can include the database roles, here is a list of the potential database roles:

  • public
  • db_owner
  • db_accessadmin
  • db_securityadmin
  • db_ddladmin
  • db_backupoperator
  • db_datareader
  • db_datawriter
  • db_denydatareader
  • db_denydatawriter

These roles can significantly simplify security management within the database. One can easily assign a database principal to be a member of the db_backupoperator role and thus grant that principal the ability to back up the database without much more need to grant or deny permissions.

Considering the ease of role management, one thing that bugs me, and that I see frequently, is the addition of a database user to every database role in all databases. It makes little sense to add a user to the db_owner role and then also to the db_datareader and db_denydatareader roles. For one, db_owner already has the ability to read from the tables, thus negating db_datareader. Now consider db_denydatareader – it is the opposite of db_datareader. Why try to grant and deny read access to the same user? It makes no sense at all.

The next thing that pains me about these roles is the public role. I have written about the public role previously, but it needs stating again. Do not grant any additional permissions to the public role. This is like enabling the guest user and opening up the database to everybody. I have seen a rash of granting “alter server state” and “view server state” to the public server level role and it is painful to see. The same advice applies to the public role whether it is at the server or database scope.

Once a database principal has been created, it is time to proceed with granting the principal the necessary access. Here are some of the possible permissions that can be granted (along with a query to find even more):
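
Here is a sketch of one such query, leaning on sys.fn_builtin_permissions again but scoped to the database class:

[codesyntax lang="tsql"]
/* Database scoped permissions available for grant */
SELECT bp.class_desc
    , bp.permission_name
    , bp.covering_permission_name
FROM sys.fn_builtin_permissions('DATABASE') bp
ORDER BY bp.permission_name;
[/codesyntax]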

db_permissions

Within these results, it is apparent that a database principal can be granted the ability to perform backups, create procedures, execute procedures and even create encryption keys. Between the server scope and the database scope, there is a decent level of granularity to control access and resources within the SQL Server instance.

Finding Permissions for Each Principal

It is not uncommon to need to know, and report on, who has been granted what level of permissions within the database environment. If you have been with the database from conception onward, you probably have documentation on every permission that has been granted. If you inherited a database, your odds of having good documentation about the permissions are probably significantly lower. I have shared a comprehensive script previously to show all of the permissions. Sometimes a little less info is more than adequate for the current needs.

Here is a quick alternative, with just a little less info, that provides database permissions and server permissions based on the input of a specific list of users and databases.
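
The following is a rough sketch of that kind of script (not the exact one I use). It assumes a string splitter such as Jeff Moden's DelimitedSplit8K is already installed, and the principal and database names are just the test values mentioned below:

[codesyntax lang="tsql"]
DECLARE @PrincipalList varchar(8000) = 'testuser,Phantom,Gargouille,Garguoille'
    , @DatabaseList varchar(8000) = 'AdventureWorks2014,TestDB'
    , @sql nvarchar(max) = N'';

/* Split the principal list once into a temp table so it is visible in every database */
IF OBJECT_ID('tempdb..#Principals') IS NOT NULL
    DROP TABLE #Principals;

SELECT s.Item AS PrincipalName
INTO #Principals
FROM dbo.DelimitedSplit8K(@PrincipalList, ',') s;

/* Server scoped permissions for the listed principals */
SELECT sprin.name AS PrincipalName
    , sprin.type_desc AS PrincipalType
    , sperm.permission_name
    , sperm.state_desc
FROM sys.server_principals sprin
    INNER JOIN sys.server_permissions sperm
        ON sperm.grantee_principal_id = sprin.principal_id
    INNER JOIN #Principals p
        ON sprin.name = p.PrincipalName;

/* Database scoped permissions, one statement per requested database */
SELECT @sql = @sql + N'
USE ' + QUOTENAME(d.name) + N';
SELECT DB_NAME() AS DatabaseName
    , dprin.name AS PrincipalName
    , dprin.type_desc AS PrincipalType
    , dperm.permission_name
    , dperm.state_desc
FROM sys.database_principals dprin
    INNER JOIN sys.database_permissions dperm
        ON dperm.grantee_principal_id = dprin.principal_id
    INNER JOIN #Principals p
        ON dprin.name = p.PrincipalName;'
FROM sys.databases d
    INNER JOIN dbo.DelimitedSplit8K(@DatabaseList, ',') dl
        ON d.name = dl.Item;

EXEC sys.sp_executesql @sql;
[/codesyntax]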

Now, this script does require the use of a function to split strings. The one I am referencing was written by Jeff Moden and can be found here. In this example, I am looking at a few test principals that I created – testuser, Phantom, Gargouille and Garguoille (which is invalid). Running the script, I would receive results such as the following:

audit_output2

This is a pretty quick running script to gather report worthy data on principals and permissions.

All about the Change

Comments: 1 Comment
Published on: January 12, 2016

The second Tuesday of January 2016 is now upon us and you know what that means. Well, I hope you know what that means.

It is time for TSQL Tuesday. It is now the 74th edition of this monthly blog party. This month the host is Robert Davis (blog | twitter) and he has asked us to “Be the change”. Whether the inspiration for this topic is the new year and resolutions, or Gandhi (you must be the change), or Caddyshack (be the ball), we will be discussing “Change.”

Specifically, Robert requested that we discuss data changes and anything relating to data changes. Well, I am going to take that “anything” literally and stretch the definition of changing data just a bit. It will all make sense by the end (I hope).

Ch-ch-changes

Changes happen on a constant basis within a database. Data will more than likely be changing. Yes, there are some exceptions to that, but the expectation that data is changing is not an unreal expectation.

Where that expectation becomes unwanted is when we start talking about the data that helps drive the configuration of the server. Ok, technically that is a setting or configuration option or a button, knob, whirligig or thingamajig. Seldom do we really think about these settings as data. Think about it for a moment though. We can certainly derive some data about these changes (if these settings themselves are not actually data).

So, while you may call them settings changes, I will still be capturing data about the changes. Good? Good! Another term for this is auditing. And auditing applies to all levels, including ETL processes, data changes, etc. By that token, I just covered the topic again – tangentially.

How does one audit configuration changes? Well, there are a few different methods to do this. One could use a server-side trace, SQL Audit, Extended Events or (if somebody really wants to) a custom solution involving none of those and relying on some variation of T-SQL and error log monitoring. The point is, there are options. I have discussed a few options for the custom solution path as well as the default trace path (recently published article using…). Today I will dive into what it looks like via SQL Audit.

When creating an audit to figure out what changes are occurring within the instance, one would need to utilize the SERVER_OPERATION_GROUP audit action group. This action group provides auditing of the following types of events:

  • Administer Bulk Operations
  • Alter Settings
  • Alter Resources
  • Authenticate
  • External Access
  • Alter Server State
  • Unsafe Assembly
  • Alter Connection
  • Alter Resource Governor
  • Use Any Workload Group
  • View Server State
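
For reference, setting up an audit with that action group might look something like the following sketch (the audit name and file path here are just placeholders):

[codesyntax lang="tsql"]
USE master;
GO
/* Create the audit and point it at a file target */
CREATE SERVER AUDIT ServerOpsAudit
    TO FILE (FILEPATH = N'C:\Database\Audits\');
GO
/* Tie the SERVER_OPERATION_GROUP action group to that audit */
CREATE SERVER AUDIT SPECIFICATION ServerOpsAuditSpec
    FOR SERVER AUDIT ServerOpsAudit
    ADD (SERVER_OPERATION_GROUP)
    WITH (STATE = ON);
GO
/* Enable the audit */
ALTER SERVER AUDIT ServerOpsAudit
    WITH (STATE = ON);
GO
[/codesyntax]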

From this group of events, we can guess at the types of actions that might trigger one of these events to fire for the audit. Some of the possible actions would be:

  • Issue a bulk administration command – BULK INSERT TestDB.dbo.Test1 FROM 'c:\database\test1.txt';
  • Issue an alter connection command – KILL 66
  • Issue an alter resources command – CREATE RESOURCE POOL PrimaryServerPool WITH {}
  • Issue an alter server state command – DBCC FREEPROCCACHE
  • Issue an alter server settings command – perform sp_configure with RECONFIGURE
  • Issue a view server state command – SELECT * FROM sys.dm_xe_session_targets
  • Issue an external access assembly command – CREATE ASSEMBLY SQLCLRTest FROM 'C:\MyDBApp\SQLCLRTest.dll' WITH PERMISSION_SET = EXTERNAL_ACCESS;
  • Issue an unsafe assembly command – CREATE ASSEMBLY SQLCLRTest FROM 'C:\MyDBApp\SQLCLRTest.dll' WITH PERMISSION_SET = UNSAFE;
  • Issue an alter resource governor command – ALTER RESOURCE GOVERNOR DISABLE
  • Authenticate – see view server state; the vsst type occurs for auth events
  • Use any workload group – see Resource Governor

This is quite a bit of interesting information. All of these events can be audited from the same audit group. The interesting ones of this bunch are the ones that indicate some sort of change has occurred. That happens to be all but the “Authenticate”, “View Server State” and “Use Any Workload Group” events, even though even those could be stretched to say something has changed with them as well.

With all of that in mind, I find the “alter server settings” event to be the most problematic. While it does truly capture that something changed, it does not completely reveal to me what was changed – just that a reconfigure occurred. If a server configuration has changed, I can capture the spid and that reconfigure statement – sure. Once that is captured, I now have to do something more to figure out what configuration was “reconfigured”. This is highly frustrating.

Here’s an example from the audit I created:

audit_alterserversettings

This is only a small snippet of the results. I can see who made the configuration change, the time, the spid, the source machine, etc. I just miss that nugget that tells me the exact change that was made. At least that is the case with changes made via sp_configure. There are fixes for that – as previously mentioned.

Here is another bit of a downside. If you have the default trace still running, a lot of this information will also be trapped in that trace. Furthermore, some of the events may be duplicated via the object_altered event session (e.g. the Resource Governor events). What does this really mean? Extra tracing and a bit of extra overhead. It is something to consider. As for the related Extended Events and how to do this sort of thing via XE, I will explore that further in a future post.

Suffice it to say that, while not a complete solution, the use of SQL Audit can be viable to track the changes that may be occurring within your SQL Server – from a settings point of view.

Auditing Needs Reporting

Comments: No Comments
Published on: October 13, 2015


Welcome to the second Tuesday of the month. And in the database world of SQL Server and the SQL Server community, that means it is time for TSQL2SDAY. This month the host is Sebastian Meine (blog / twitter), and the topic that he wants us to write about is: “Strategies for managing an enterprise”. Specifically, Sebastian has requested that everybody contribute articles about auditing. Auditing doesn’t have to be just “another boring topic”, rather it can be interesting and there is a lot to auditing.

For me, just like last month, I will be doing a real quick entry. I have been more focused on my 60 Days of Extended Events series and was looking for something that might tie into both really well yet won’t necessarily be covered in the series. Since I have auditing scheduled for later in the series, I was hoping to find something that meets both the XE topic and the topic of auditing.

No matter the mechanism used to capture the data to fulfill the “investigation” phase of the audit, if the data is not analyzed and reports generated, then the audit did not happen. With that in mind, I settled on a quick intro on how to get to the audit data in order to generate reports.

Reporting

An audit can cover just about any concept, phase, action within a database. If you want to monitor and track performance and decide to store various performance metrics, that is an audit for all intents and purposes. If you are more interested in tracking the access patterns and sources of the sa login, the trapping and storing of that data would also be an audit. The data is different between the two, but the base concept boils down to the same thing. Data concerning the operations or interactions within the system is being trapped and recorded somewhere.

That said, it would be an incomplete audit if all that is done is to trap the data. If the data is never reviewed, how can one be certain the requirements are being met for that particular data trapping exercise? In other words, unless the data is analyzed and some sort of report is generated from the exercise, it is pretty fruitless and just a waste of resources.

There is a plenitude of means to capture data to create an audit. Some of those means were mentioned on Sebastian’s invite to the blog party. I want to focus on just two of those means because of how closely they are related – SQL Server Audits and Extended Events. And as I previously stated, I really only want to get into the how behind getting to the audit data. Once the data is able to be retrieved, then generating a report is only bound by the imagination of the intended consumer of the report.

SQL Server Audits

SQL Server Audit is a feature that was introduced at the same time as Extended Events (with SQL Server 2008). In addition to being released at the same time, some of the audit metadata is recorded with the XEvents metadata. Even some of the terminology is the same. When looking deep down into it, one can even find all of the targets for Audits listed within the XEvents objects.

Speaking of Targets, looking at the documentation for audits, one will see this about the Targets:

The results of an audit are sent to a target, which can be a file, the Windows Security event log, or the Windows Application event log. Logs must be reviewed and archived periodically to make sure that the target has sufficient space to write additional records.

That doesn’t look terribly different from what we have seen with XEvents thus far. Well, except for the addition of the Security and Application Event Logs. But the Target concept is well within reason and what we have become accustomed to seeing.

If the audit data is being written out to one of the event logs, it would be reasonable to expect that one knows how to find and read them. The focus today will be strictly on the file target, with some very basic examples.

I happen to have an Audit running on my SQL Server instance currently. I am not going to dive into how to create the audit. Suffice it to say the audit name in this case is “TSQLTuesday_Audit”. This audit is being written out to a file with rollover. In order for me to access the data in the audit file(s), I need to employ a function (which is strikingly similar to the function used to read XE file targets) called fn_get_audit_file. The name is very simple and task oriented – making it pretty easy to remember.

Using the audit I mentioned and this function, I would get a query such as the following to read that data. Oh, and the audit in question is set to track the LOGIN_CHANGE_PASSWORD_GROUP event.
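
Something along these lines does the trick (the file path is whatever was specified when the audit was created; mine is shown here as a placeholder):

[codesyntax lang="tsql"]
SELECT af.event_time
    , af.action_id
    , af.succeeded
    , af.session_server_principal_name
    , af.server_principal_name
    , af.target_server_principal_name
    , af.statement
FROM sys.fn_get_audit_file(N'C:\Database\Audits\TSQLTuesday_Audit*.sqlaudit'
    , DEFAULT, DEFAULT) af;
[/codesyntax]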

There are some tweaks that can be made to this, but I will defer to the 60 day XE series where I cover some of the tweaks that could/should be made to the basic form of the query when reading event files / audit files.

XE Audits

Well, truth be told, this one is a bit of trickery. Just as I mentioned in the preceding paragraph, I am going to defer to the 60 day series. In that series I cover in detail how to read the data from the XE file target. Suffice it to say, the method for reading the XE file target is very similar to the one just shown for reading an Audit file. In the case of XEvents, the function name is sys.fn_xe_file_target_read_file.
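
For the sake of completeness, the basic form is not much different (the file path here is just a placeholder):

[codesyntax lang="tsql"]
/* Raw event data from an XE file target; shredding the XML comes next */
SELECT CAST(fxe.event_data AS xml) AS event_data_xml
FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\MySession*.xel'
    , NULL, NULL, NULL) fxe;
[/codesyntax]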

Capturing data to track performance, access patterns, policy adherence, or other processes is insufficient for an audit by itself. No audit is complete unless data analysis and reporting is attached to the audit. In this article, I introduced how to get to this data which will lead you down the path to creating fantastic reports.

Database Settings Changes – Red Handed

Comments: 4 Comments
Published on: July 8, 2015

One of my pet peeves (and consequently a frequent topic of discussion) is finding that database settings (or any setting, for that matter) have changed without anybody knowing about it. Worse yet is finding that the change has occurred and nobody claims to have any knowledge of it or of having done the deed.

This happened again recently where a database was set to single_user and suddenly performance in the database tanked. Change the database back to multi_user and the performance issues are magically resolved.

Fortunately there is a means to combat this. Well, sort of. The default trace in SQL Server does capture the event that occurs when the database is set to single_user or read_only. Unfortunately, all that is captured is that an Alter Database occurred. There is no direct means of mapping that event to the statement or setting that changed.

This inadequacy got me to thinking. The default trace is looking at a set of specific “events”, so why wouldn’t that set of events be available within Extended Events? It only seems logical! So I decided to query the event catalog and lo and behold, I found just the event I was seeking – object_altered. Combine this with a recently used predicate (object_type = 'DATABASE') and we are well on our way to having just the trap to catch the source of these database changes red-handed.

It is easy enough to create this particular session. The event does not capture the “whodunnit” without a little extra prodding, so I added in a couple of actions to get that information – sqlserver.nt_username, sqlserver.server_principal_name, sqlserver.client_hostname. Additionally, the event does not explicitly tell me what setting changed – just that some setting changed. For this, I decided to add the sql_text action so I could correlate the event to the actual setting being changed. Then to cap it all off, I made sure the predicate specified that we only care about database settings changes, as previously mentioned.
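
A session along those lines might look like the following sketch (the session name and file path are just placeholders):

[codesyntax lang="tsql"]
CREATE EVENT SESSION DatabaseSettingsChanges
ON SERVER
ADD EVENT sqlserver.object_altered (
    ACTION (sqlserver.nt_username
        , sqlserver.server_principal_name
        , sqlserver.client_hostname
        , sqlserver.sql_text)
    WHERE (object_type = 'DATABASE')
)
ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\DatabaseSettingsChanges.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO
ALTER EVENT SESSION DatabaseSettingsChanges
ON SERVER
STATE = START;
GO
[/codesyntax]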

Running the session and then testing some settings changes should prove fruitful to capturing good info. Here are a few of the tests that I ran and the results of those tests (by querying the session data).
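
The tests were nothing fancy – just flipping the user access setting on a test copy of AdventureWorks2014, along these lines:

[codesyntax lang="tsql"]
ALTER DATABASE AdventureWorks2014 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE AdventureWorks2014 SET MULTI_USER;
GO
[/codesyntax]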

Now to try and look at the results.
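
Reading the session data can be done with a query similar to this sketch (the file path must match the event_file target defined above):

[codesyntax lang="tsql"]
SELECT x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
    , x.event_data.value('(event/action[@name="server_principal_name"]/value)[1]', 'varchar(128)') AS principal_name
    , x.event_data.value('(event/action[@name="nt_username"]/value)[1]', 'varchar(128)') AS nt_username
    , x.event_data.value('(event/action[@name="client_hostname"]/value)[1]', 'varchar(128)') AS client_hostname
    , x.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'varchar(max)') AS sql_text
FROM (
    SELECT CAST(fxe.event_data AS xml) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\DatabaseSettingsChanges*.xel'
        , NULL, NULL, NULL) fxe
) x;
[/codesyntax]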

DB Change Data

 

There you have it! I have just been caught red-handed changing my AdventureWorks2014 database to single_user and multi_user.

Bonus

For more ideas on settings and changes and so forth, Andy Yun (blog | twitter) has invited all to participate in TSQL2SDAY on this very topic. He has invited all to talk about their experiences with “default settings” and what you might change them to! You can read about it here. I have another article coming up that will fit just nicely with that. Let’s just call this a preview and maybe it can help you get those tsql2sday juices flowing.

Database Drops in SQL 2012

In the previous article on this topic (which can be read here), I discussed the problem of having a database get dropped and the need to find out who dropped the database and when they dropped it.  In that article, I demonstrated how the information (at least some of it) could be found through the use of Extended Events.

What I forgot to mention is the method I shared was for SQL Server 2014. While the events do exist for SQL Server 2012, there is a minor modification that needs to be made in order to avoid the dreaded error message that says something like:

Msg 25713, Level 16, State 23, Line 1
The value specified for event attribute or predicate source, “object_type”, event, “object_created”, is invalid.

I am sure many of us would rather not have to deal with such a terrible thing as an error when we want to do something that should just work. Well, here is the fix for that error if you tried to deploy that XE Session to a previous version (such as 2012).

Do you see that change? Instead of using the map_value in 2012, one must use the map_key. This was a good change in 2014 to allow us to use human-friendly terms instead of needing to look up the map_key for everything like in 2012.
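
If you need to track down the correct map_key on a 2012 instance, a quick query against the maps DMV will surface it (shown here for the object_type map):

[codesyntax lang="tsql"]
SELECT mv.map_key
    , mv.map_value
FROM sys.dm_xe_map_values mv
WHERE mv.name = 'object_type'
    AND mv.map_value = 'DATABASE';
[/codesyntax]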

In following the theme from the previous article, here is the rest of the setup for testing this XEvent session to see how it would trap that data and how to parse the data from the session.

Enjoy!

Nice and simple for a test. This should be enough to have trapped some data for our XEvent session.  Let’s parse the data and take a look.

With the XEvent session started, and a database that has been created and dropped, this last query will produce some data to help track who has been adding or removing databases from your SQL Instance.  If you have run the example code, you should see data very similar to what is shown in this next image.

xe_results

In the attached image, I have separated the pair of statements for the DB create from the pair of statements for the DB drop (recall that I mentioned each will fire twice).  I hope this serves you well in your attempts to rein in control of your environment and discover who may be creating rogue databases or even dropping the databases that are essential to the business.

Audit who Dropped the Database

Categories: News, Professional, Scripts, SSC
Comments: 12 Comments
Published on: April 7, 2015

In the first article on this topic (which can be read here), I discussed the problem of having a database get dropped and the need to find out who dropped the database and when they dropped it.  In that article, I demonstrated how the information (at least some of it) could be found through querying the default trace.  It is a good enough solution if you have not disabled the default trace.

On the other hand, what if you need to do something more flexible?  What if you wanted to track this down through a more robust tool such as extended events?  XEvents has what you need to be able to properly track these types of events.  In this article, I share how to use Extended Events to capture this kind of audit data.

There are a few things to note with this event session.  First is that I am trapping two separate events: sqlserver.object_deleted and sqlserver.object_created.  The next important note is that I have to enable a flag on both events to trap the database name.  That is done like this: SET collect_database_name = (1).  Last note is something that should be noticed after enabling the session and performing a couple of trials.  The events I am using will fire twice for every DROP or CREATE operation.  This happens due to the ddl_phase for each.  There is one event fired for the start of the event and then another event when the event commits or hits a rollback.  Because of this, I am outputting the ddl_phase in my query to read from the session data.  Let’s run a quick test and see how this data can be queried.
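
A sketch of the session just described would look something like this (the session name and file path are placeholders; adjust the path to something valid on your instance):

[codesyntax lang="tsql"]
CREATE EVENT SESSION AuditDatabaseDrops
ON SERVER
ADD EVENT sqlserver.object_created (
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name
        , sqlserver.client_hostname
        , sqlserver.sql_text)
    WHERE (object_type = 'DATABASE')
),
ADD EVENT sqlserver.object_deleted (
    SET collect_database_name = (1)
    ACTION (sqlserver.server_principal_name
        , sqlserver.client_hostname
        , sqlserver.sql_text)
    WHERE (object_type = 'DATABASE')
)
ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\AuditDatabaseDrops.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO
ALTER EVENT SESSION AuditDatabaseDrops
ON SERVER
STATE = START;
GO
[/codesyntax]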

Also of importance is to note the file path for the output file. If the path does not exist or you do not have permissions to the directory, you will get an error message and the session will not create.
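
To generate some data, a quick test is nothing more than creating and then dropping a throwaway database (the name here is arbitrary):

[codesyntax lang="tsql"]
CREATE DATABASE DropMe;
GO
DROP DATABASE DropMe;
GO
[/codesyntax]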

Nice and simple for a test. This should be enough to have trapped some data for our XEvent session.  Let’s parse the data and take a look.
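
Parsing the session data can be done with something along these lines (again, the path must match the event_file target):

[codesyntax lang="tsql"]
SELECT x.event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name
    , x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time
    , x.event_data.value('(event/data[@name="ddl_phase"]/text)[1]', 'varchar(50)') AS ddl_phase
    , x.event_data.value('(event/data[@name="database_name"]/value)[1]', 'varchar(128)') AS database_name
    , x.event_data.value('(event/action[@name="server_principal_name"]/value)[1]', 'varchar(128)') AS principal_name
    , x.event_data.value('(event/action[@name="client_hostname"]/value)[1]', 'varchar(128)') AS client_hostname
    , x.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'varchar(max)') AS sql_text
FROM (
    SELECT CAST(fxe.event_data AS xml) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\AuditDatabaseDrops*.xel'
        , NULL, NULL, NULL) fxe
) x
ORDER BY event_time;
[/codesyntax]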

With the XEvent session started, and a database that has been created and dropped, this last query will produce some data to help track who has been adding or removing databases from your SQL Instance.  If you have run the example code, you should see data very similar to what is shown in this next image.

xe_results

In the attached image, I have separated the pair of statements for the DB create from the pair of statements for the DB drop (recall that I mentioned each will fire twice).  I hope this serves you well in your attempts to rein in control of your environment and discover who may be creating rogue databases or even dropping the databases that are essential to the business.  Enjoy!!

Audit Schema Change Report

Comments: 4 Comments
Published on: March 11, 2015

In a recent article on SSG, I discussed how to use Extended Events to function in a Profiler-like fashion.  You can read about that here.  I recommend reading that article first because it helps to lay some of the groundwork for this article.

Within Management Studio, from a right-click context menu, there is an option to run a report called “Schema Changes History”.  The name of this report might indicate that there is some means to find when a change occurred in the schema and potentially what the change was, as well as who made the change.  It does say “Schema Changes History” after all.

If you run the report, you would be able to see a report similar to the following.

Change_Report

 

This looks like it could be a pretty useful report.  If I drill into the collapsed items, I can get better information.  But, how exactly is this report producing this kind of data?  I don’t have a database audit running to trap this information.  I also don’t have SSDT-BI installed, so I can’t export the report and check the report definition.

I could run a trace and refresh the report data and see what pops there.  I would really hate to use Profiler though, and I really don’t want to go through the hassle of creating a server side trace to figure out the source data definitions.  Then the lightbulb moment occurs.  I already have an XE session that can trace the SQL queries just like Profiler.  Why not use a better tool to do the job and satisfy my curiosity at the same time?

So, go read that article I wrote for SSG to get the XE session definition and follow along a little better.  With that XEvent session running, I can refresh the report data and start to evaluate what is happening in order to produce the Audit report.

By this point, you have probably thought to yourself that the source of the data must be the default trace.  If you thought that, then you are correct.  It is a good presumption based on knowledge of what kind of data is being tracked in the default trace.  But I really wanted to know for certain what the source of the data was.  As I perused the XEvent session data, I noticed several queries similar to the following in the results.

XE_Capture

Yes, I am using the GUI (*shrug*) in this image to explore the data from the session.  That is just for display friendliness.  But, as you look through those queries you start to see a few patterns and some obvious signs that the source of the data is the default trace.  Here is a cleaner look at the code being used to get the data-set.

Now, I must confess that I altered it a bit to make sure it was doing what I thought.  Just a little thing like swapping the missing table variable definition for a quick insert into a temp table.  But the reality is, it is pretty much the source code of the report.  It is not the prettiest of code for the source, but it works.

Due to the lack of prettiness of the code and the seemingly overly complex means of getting to the end result, I decided I wanted to “tune” it up a little bit.
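
My tuned version boils down to a straightforward read of the default trace, something along these lines (this is my rewrite, not the report's exact source):

[codesyntax lang="tsql"]
/* Event classes 46, 47 and 164 are Object:Created, Object:Deleted
   and Object:Altered in the default trace. */
DECLARE @TraceFile nvarchar(260);

SELECT @TraceFile = REVERSE(SUBSTRING(REVERSE(t.path)
        , CHARINDEX(CHAR(92), REVERSE(t.path)), 260)) + N'log.trc'
FROM sys.traces t
WHERE t.is_default = 1;

SELECT tt.StartTime
    , te.name AS EventName
    , tt.EventSubClass
    , tt.DatabaseName
    , tt.ObjectName
    , tt.ObjectType
    , tt.LoginName
    , tt.HostName
    , tt.ApplicationName
FROM sys.fn_trace_gettable(@TraceFile, DEFAULT) tt
    INNER JOIN sys.trace_events te
        ON tt.EventClass = te.trace_event_id
WHERE tt.EventClass IN (46, 47, 164)
    AND tt.ObjectType <> 21587 /* filter the auto-stats noise */
ORDER BY tt.StartTime DESC;
[/codesyntax]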

Now, I have something that is easier to read and maintain (my opinion) and works well.  It can also be used easily enough in an RDL should you wish to create an additional report different from the canned report in SSMS.  Or just continue to use it from within Management Studio and check the results without the overhead of the report presentation.

Auditing and Event SubClasses

Categories: News, Professional, Scripts, SSC
Comments: 2 Comments
Published on: May 28, 2014

A recent discussion got me to thinking about Auditing.  To be honest, it got started with a complaint about some documentation that seemed overly light about the various fields related to auditing as it stands in SQL Server.

In talking to the person who raised the valid concern about the lack of good documentation, I was curious why he suddenly had so many questions about auditing and its functionality within SQL Server.  Reflecting on the answers, it seems that it made good sense and it all kind of fell into place with the whole Audit Life Cycle.  I hadn’t previously considered the Life Cycle, but it makes sense to map it out.  Here is a simple rendition of what an audit Life Cycle might entail.

AuditCycle_trans

In order to audit anything, it is necessary to know what you really want to audit, why you want to audit it and how to achieve those goals within the tools given to you.  In that vein, it makes sense that one would need to study up on the topic to figure out what different things meant within the tool.

Of course, once you start collecting that data, then you also need to figure out how to measure it and then to determine if adjustments to the auditing plan need to be made.  In the end, it boils down to what is the data to be collected, what are you doing with that data and what does that data represent.

In our simple discussion, the data we were trying to understand was related to the Event Subclass field in this view (sys.trace_subclass_values) and in this Microsoft document (one of several).  The beauty of this field is that it is not just tied to Auditing; you will also find it in Profiler, server-side traces, and Extended Events.

With so little information to help understand what the field values represent, maybe it is better to just turn to the data to help understand what the values might represent or how to interpret them.  To do this, we can query a few catalog views as in the following query.

[codesyntax lang=”tsql”]
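/* A sketch of the kind of query described above: the trace catalog views
   joined together to expose events, categories and subclass values. */
SELECT tc.name AS CategoryName
    , te.name AS EventName
    , tsv.trace_column_id
    , tsv.subclass_value
    , tsv.subclass_name
FROM sys.trace_events te
    INNER JOIN sys.trace_categories tc
        ON te.category_id = tc.category_id
    LEFT OUTER JOIN sys.trace_subclass_values tsv
        ON tsv.trace_event_id = te.trace_event_id
WHERE te.name LIKE '%Audit%'
ORDER BY tc.name, te.name, tsv.subclass_value;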

[/codesyntax]

With the above query, I can filter down to just the Event Types that have Audit in the name.  Or I could add a different filter so I can try and better understand the different subclasses in a more focused effort.

I hope this helps in your efforts to provide a better auditing or “profiling” type of experience in your environment.

 

Audit Configuration Changes

Do you know the last time a Server Property was changed on your instances of SQL Server?

Are you wondering when the setting for max degree of parallelism was changed?

Do you know who changed the setting?

In some environments there are a lot of hands trying to help mix the pot.  Sometimes more hands can make light work.  This is not always the case though.  More hands in the mix can be a troublesome thing.  Especially when things start changing and the finger pointing starts but nobody really knows for sure who made the change or when the change was made.

I know, that is why there is this concept called change management.  If you make a change to a setting, it should be documented, approved and communicated.  Unfortunately the process does not always dictate the work flow and it may be circumvented.  This is why auditing is a tool that is in place and should be in place – like it or not.

Auditing can be a very good tool.  Like many things – too much of a good thing is not a good thing.  Too much auditing can be more of a hindrance than a help.  You don’t want to cause interference by auditing too many things.  You also don’t want so much data that the facts get blurred.  I hope that this script strikes more of a balance, with just the right amount of data being captured to be of use.

The basic principle to auditing server configs is to find which values changed, when they were changed and by whom.  There are ancillary details that can be helpful in tracking the source of the change, such as the hostname of the source computer making the change.  These are all things that we should capture.  But if a setting hasn’t changed – then we need not necessarily report that the setting was unchanged (it should go without saying).

So for this, I created a couple of tables and then a script that I can add to a job to run on a regular basis.  I can put the script in a stored procedure should I desire.  I’ll leave that to you as an exercise to perform.

[codesyntax lang=”tsql”]
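/* A simplified sketch of the approach described in this post (not the original
   script). Table and column names here are assumptions; adjust as needed. */
IF OBJECT_ID('dbo.ConfigSnapshot') IS NULL
BEGIN
    CREATE TABLE dbo.ConfigSnapshot (
          ConfigName  nvarchar(70) NOT NULL
        , ConfigValue sql_variant  NULL
        , CaptureDate date         NOT NULL
    );
END;

/* 1. Capture today's settings */
INSERT INTO dbo.ConfigSnapshot (ConfigName, ConfigValue, CaptureDate)
SELECT c.name, c.value_in_use, CAST(GETDATE() AS date)
FROM sys.configurations c;

/* 2. Compare today's capture against the previous day's capture */
IF OBJECT_ID('tempdb..#ChangedConfigs') IS NOT NULL
    DROP TABLE #ChangedConfigs;

SELECT cur.ConfigName
    , prev.ConfigValue AS PreviousValue
    , cur.ConfigValue  AS CurrentValue
INTO #ChangedConfigs
FROM dbo.ConfigSnapshot cur
    INNER JOIN dbo.ConfigSnapshot prev
        ON prev.ConfigName = cur.ConfigName
        AND prev.CaptureDate = DATEADD(DAY, -1, cur.CaptureDate)
WHERE cur.CaptureDate = CAST(GETDATE() AS date)
    AND ISNULL(CAST(cur.ConfigValue AS nvarchar(50)), '')
        <> ISNULL(CAST(prev.ConfigValue AS nvarchar(50)), '');

/* 3. Check the default trace for the who/when, using SUBSTRING to pull the
      configuration name out of the message text
      ("Configuration option 'xyz' changed from a to b.") */
DECLARE @TraceFile nvarchar(260);

SELECT @TraceFile = REVERSE(SUBSTRING(REVERSE(t.path)
        , CHARINDEX(CHAR(92), REVERSE(t.path)), 260)) + N'log.trc'
FROM sys.traces t
WHERE t.is_default = 1;

SELECT cc.ConfigName
    , cc.PreviousValue
    , cc.CurrentValue
    , tt.StartTime
    , tt.LoginName
    , tt.HostName
FROM sys.fn_trace_gettable(@TraceFile, DEFAULT) tt
    CROSS APPLY (SELECT CAST(tt.TextData AS nvarchar(max)) AS msg) m
    CROSS APPLY (SELECT CASE
            WHEN m.msg LIKE 'Configuration option%changed from%'
            THEN SUBSTRING(m.msg
                , CHARINDEX('''', m.msg) + 1
                , CHARINDEX('''', m.msg, CHARINDEX('''', m.msg) + 1)
                    - CHARINDEX('''', m.msg) - 1)
            END AS ParsedConfigName) p
    INNER JOIN #ChangedConfigs cc
        ON cc.ConfigName = p.ParsedConfigName;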

[/codesyntax]

Here I am trapping the config settings on a daily basis (as the script is written for now). I then cross reference the current settings against the previous settings.  Then I check the default trace to see if any of those settings exist in the trace file.

The default trace captures the data related to configuration changes.  On busy systems, it is still possible for these settings to roll out of the trace files.  For those systems, we may need to make some alterations to the script (such as running more frequently and changing the comparisons to account for smaller time intervals than a day break).

To ensure proper comparison between the audit tables and the trace file, note the substring function employed.  I can capture the configuration name and then join to the audit tables on configuration name.

This has proven useful to me so far in tracking who did what to which setting and when they did it.

I hope you will find good use for it as well.

Public Role and Security

Comments: 2 Comments
Published on: December 20, 2011

Having flown a fair amount lately, I was thinking about the various levels of security within an airport.  Part of that comes from seeing signs like the following all over the place.

 

These signs are placed in various places for very good reason.  But seeing a sign such as this made me wonder about the various access levels in an airport and how they might relate to the database world.

Let’s start with some of the zones that might be recognizable in an airport.  First there is the ticketing counter.  This is a general access area open to the public.  Anybody can approach a ticketing counter whether they are intending to purchase a ticket or not.  Another area similar to this is frequently the baggage claim area.  These are common areas and generally less secure than other areas.

Next, you might encounter the concourses.  Only ticketed passengers and authorized airport personnel may enter these areas after some degree of screening.  From these areas you have greater access to the airplanes.  You have been explicitly granted permission to enter an aircraft and are required to have a separate pass for each craft you desire to board.  If I were to correlate this to security in SQL server, this would most closely match the db_datareader database role – for which explicit permission has to be granted for the user in order to access each additional database.

From this same area you may witness that there are several aircraft crews.  Each member filling a specific role.  There is a pilot, copilot, flight attendants and ground crew.  Each role may have different access throughout the airport.  And in the case of the flight crew, they have to be given access to each plane they will board.  A United Airlines flight crew cannot go and pilot a British Airways craft for instance – they are responsible for specific flights belonging to UA.

Another potential role is that of the control tower.  The personnel manning the tower have access to quite a bit more than a pilot or passenger.  They have access to communications between all flights and the ground within their airspace.  They are coordinating efforts and trying to make the whole thing go smoothly.  These guys are much more like the specialized server roles in SQL Server.  They can be passengers and have public access.  They can also assist in the piloting of a craft (if you believe what you see in the movies) while giving instruction for flight path, landing and takeoff.

The point is, there are highly segregated roles in an airport and in the air when an aircraft is involved.  The same should be true in a database environment.  There are special server roles that include public, sysadmin, securityadmin, and diskadmin (amongst others).  Then there are specific database roles that come prepackaged, as well as the ability to create any number of specific roles that you need to run your environment.

Now let’s step back out again to the airport example and the public access areas.  These are the least secure areas.  Also, there is a group of people that we should call public.  I am a part of this particular group.  Every person that enters an airport is a member of this group.  The crew piloting a craft is a member of this public group, but they are also members of other more restricted groups.

Being a member of just the public group does not get me permission to enter the pilot’s cabin.  It does not grant me permission to enter the flight control tower.  It does not even grant me permission to stand behind the ticket counter.  You wouldn’t want just any old Joe Schmoe entering those particular areas – so they become more secure.  And the public group is denied access.

Back to the public server role in SQL Server.  This role is granted VIEW ANY DATABASE as well as CONNECT, but by default is limited to just those permissions.  Can that be changed?  Sure – just like I could walk behind the ticket counter or walk into a pilot’s cabin (the flight staff may occasionally allow a peek – typically for children, though).

Just because it can be changed doesn’t mean it should be done.  In the example of me taking a peek into the pilot’s cabin, that is a one-person permission being granted.  If I granted that permission to the public role in SQL Server, now everybody can do that same thing.  So think about it for a minute: do you really want everybody to be able to change the schema in your database if you decide to grant an ALTER ANY permission to public?  I really doubt it.

A good rule of thumb with the public role is to leave it be.  Do not add permissions to this role.  Add permissions on a per-database and per-group-of-users basis.  Create roles within the database and grant permissions to that role – in each database.  And remember the rule of least privilege – don’t grant more permissions to a user/role than are necessary to perform the job function.  Just the same as in an airport – everybody has their role and it is strictly defined.  If the user need not have access – then don’t grant the permissions.
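
As a quick illustration of that pattern, something along these lines (the database, schema, role and user names here are made up) beats piling permissions onto public:

[codesyntax lang="tsql"]
USE SomeDatabase;
GO
/* A purpose-built role instead of extra permissions on public */
CREATE ROLE SalesReader;
GO
GRANT SELECT ON SCHEMA::Sales TO SalesReader;
GO
EXEC sp_addrolemember @rolename = 'SalesReader', @membername = 'SomeUser';
GO
[/codesyntax]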

I want to re-iterate that point.  To help prevent unauthorized access, keep permissions in the public role to a minimum and create roles within the database to manage the different job functions as necessary/possible.
