CRM Data Source Connection Error

While working through some security modernization with a client recently, I ran into a fun little nugget. What we were trying to accomplish was to make the service accounts in use more secure through the use of Managed Service Accounts.
Making the change was fairly easy and all of our services seemed to start up swimmingly. SQL Server was running perfectly fine. Even when we got to SSRS and applied the change to the service account via SSRS Configuration Manager, things went smoothly.

After ensuring the services all started came the task of validation. From a sysadmin perspective, we often figure the validation process means looking through the tools we know; sometimes we don’t know how to validate functionality from within the application itself, so for that we more or less rely on the end-users. While waiting for the end-users to perform their validations, we resort to the tools known best to us – SSMS and the like.

In the case of testing reports, one may attempt to use the Report Manager URL (or the Report Server URL). When opening reports there, you may see that everything is working correctly. So, you continue to wait for the end-users to validate. Eventually they say it all looks good, only to report back a week later that nothing is working, and now you are stuck trying to figure out how to resolve the problem.

SSRS Service Account Needs PrivUserGroup

This is particularly interesting given that everything worked just fine before the service account change, and that the only thing changed was the service account (via Reporting Services Configuration Manager). Nothing changed with the execution account (in fact, an execution account was never employed in this configuration).

We also determined that nothing needed to be changed with CRM’s “SRS Data Connector”. What is the next best thing? Well, time to look into the logs and try to find some clues. Perusing the logs, I was able to find a couple of key indicators. Here are the two enlightening errors.

First error:

“Cannot create a connection to data source ‘CRM’.”

The better, more verbose exception that broke the case wide open:

“Immediate caller <SQL Server Reporting Services Service Account> has insufficient privilege to run report as user <SID>”

Googling these errors revealed a lot of information about cases where an Execution Account for SSRS was being used. That scenario didn’t apply here, but it did give a hint at what to look at next. Comparing the previous service account’s permissions in Active Directory to the new service account’s, we found that membership in the “PrivUserGroup” group was missing. This was the exact group mentioned in all of the other articles. Adding the new service account to that group immediately solved the issue for the end-users and their ability to run reports via the CRM application.

The Wrap

This article walked through a couple of CRM-related errors that surfaced after changing the service account to a more secure Managed Service Account. Despite the CRM reports working properly within Report Manager (via SSRS), the reports would fail in CRM.

Comparing the permissions of the old service account against those of the new service account shed a bright light on the problem: a missing group membership. PrivUserGroup is essential to proper report functionality and should be granted to either your execution account or your SSRS service account.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Where is that Mask?

Published on: December 28, 2019

It is always a pleasantly alarming situation when, all of a sudden, a client calls because an automated routine is broken. No matter what the routine is, it’s not the best news to get from a client – especially not if it is a routine you helped them migrate to a new server.

I am sure you can hear the klaxon-like sound in your head mimicking the panic and shrill anxiety of a department head saying “it has always worked in the past!”

Not the best news to receive, but also no cause for panic. You remain calm, cool, and collected as you get ready to start troubleshooting the problem.

During a recent client engagement, I had the pleasure of helping them solve a similar problem. Shortly after a server migration and after having proven the process to be properly working, the automated routine started doing something entirely different and new.

It’s Hidden

You see, what once was working suddenly started masking data for a critical component of the process. Due to this change in the data, bank accounts were no longer properly syncing, and it caused a problem for the accounting department, who could no longer balance the books. Luckily, the issue was found within a day and resolution started promptly; otherwise it could have been a nightmare.

First order of business, after discovering that the process was indeed running as expected and it was just a change of data, was to figure out what had changed. Was there a code change maybe? Maybe somebody with too many permissions decided to enable a feature on the database side?

We went through and validated everything we could think of. There were no code changes in the related procs, views, or otherwise. Then we validated any sort of database changes, such as the enabling of encryption for this key field. No encryption or other database feature had been enabled, but the behavior was eerily similar to some sort of data masking feature having been turned on.

I set out to confirm if data masking was enabled to any extent within the database. The easy way to do something like that is through the use of the following script.
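
Something along these lines will do the trick – a minimal sketch querying the sys.masked_columns catalog view (available in SQL Server 2016 and later), which lists any column with a Dynamic Data Masking function applied:

SELECT DB_NAME() AS database_name
    , s.name AS schema_name
    , t.name AS table_name
    , mc.name AS column_name
    , mc.masking_function
FROM sys.masked_columns mc
    INNER JOIN sys.tables t
        ON mc.object_id = t.object_id
    INNER JOIN sys.schemas s
        ON t.schema_id = s.schema_id;

Run it in each database of interest (or wrap it in your favorite foreach-database loop to sweep the entire server).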

In this particular case, the result of the query was an empty set. Nothing was enabled on the database side that could have caused this sudden change in data behavior. That said, at least we were able to truly confirm that data masking was not enabled within the database (or any database on the server, for that matter).

As it turned out, somebody from the finance team had enabled the masking feature from within the application. Since it was entirely application driven, the data was replaced outright with a masked version, so the critical values were overwritten and could not be un-masked. Due to this, a new revision of the process had to be created on the fly to get them back in business.

Put a bow on it

There may be times when data changes in ways that have negative effects on downstream automated processes. When these changes occur, how equipped are we to figure out the root cause? Today, I shared one easy script that can help eliminate one database feature as the culprit when anomalous data masking suddenly appears. The more tools you have at your fingertips, the better a data professional you can be. Add this one to your tool belt!

Interested in more Extended Events articles? Check these out!

Want to learn more about your indexes? Try this index maintenance article or this index size article.

This is the fourth article in the 2019 “12 Days of Christmas” series. For the full list of articles, please visit this page.

Ad Hoc Queries Disabled

DBAs and ad hoc queries sometimes go together like oil and water. While we may prefer to avoid the ad hoc queries, sometimes it is the best method to achieve a task.

What exactly is an ad hoc query, though? An ad hoc query is one that serves a single-use purpose (not always the case), is routinely unplanned, and quite possibly was untested (again, not always the case). Sometimes, however, the definitions of ad hoc don’t fully apply to queries in SQL Server. One such case is the use of dynamic queries and queries issued to linked servers via openrowset.

In my experience, a query issued against a linked server using the openrowset functionality is typically more of a prepared statement. However, it is treated as an ad hoc query, and as such you may encounter an error you were not expecting.

Msg 15281, Level 16, State 1, Line 161
SQL Server blocked access to STATEMENT ‘OpenRowset/OpenDatasource’ of component ‘Ad Hoc Distributed Queries’ because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of ‘Ad Hoc Distributed Queries’ by using sp_configure. For more information about enabling ‘Ad Hoc Distributed Queries’, search for ‘Ad Hoc Distributed Queries’ in SQL Server Books Online.

This article will show how to resolve this error so your queries may proceed as intended.

Ad Hoc

Let’s first take a look at a pretty typical type of query I use routinely to check various things on my lab servers.
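
Something like the following – a sketch in which the server name, connection style, and the DMV being queried are all placeholders for whatever you happen to be checking:

SELECT osq.name
    , osq.create_time
FROM OPENROWSET('SQLNCLI'
    , 'Server=LabServer01;Trusted_Connection=yes;'
    , 'SELECT s.name, s.create_time FROM sys.dm_xe_sessions s') AS osq;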

This is a query that I use (or something like it) to retrieve various Extended Events data from different lab servers. Notice, I use the openrowset method in order to query my linked server. If I do not have the server configured for ad hoc distributed queries then I will receive the error previously noted. The solution for that error is to enable the setting. We can do that via the following query.
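
The standard sp_configure sequence takes care of it. Note that “Ad Hoc Distributed Queries” is an advanced option, so “show advanced options” must be enabled first:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;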

Once enabled, the openrowset queries will work across linked servers. The setting needs to be enabled on the local server in order to send the ad hoc query across to the remote server. With the setting in place, running the query that threw the error (from above) now returns the following results.

The Wrap

This article took a look at an error that may occur depending on your use of linked servers and openrowset. Documentation states that this is something that should be done infrequently, and alludes to the issue being resolved through the use of linked servers. In my case, the error crops up precisely when using openrowset to query my linked servers. I find that using openrowset against the linked server is far more reliable, efficient, and better performing. YMMV.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

When Too Much is Not a Good Thing

In my previous article, I demonstrated some fun ways of locking down access for certain users to ensure they did not have data access while at the same time allowing them to review the database schema. The solution provided had the added benefit of reducing the chances that such a user could unknowingly cause a server outage.

In that solution, I showed the value of the “View Definition” permission. There are a couple of caveats to this permission that could lead to unwanted results.

These unwanted results often occur due to lack of attention to detail, blind use of internet examples, or misunderstanding of the requirements – and always from granting way too much access. I am going to explore a couple of these erroneous methods and why they absolutely bug me.

You Get Everything!

Let’s start with the most common misstep that I see. A user requests access to the database and the stand-in DBA takes the easy route: granting the requester sa access, dbo access, and every other database and server role there is, because it is easier to just tick all the boxes.

This method of assigning permissions happens far too frequently. I have seen it in hundreds of databases for clients. I have also written about it here.

You see, once you grant somebody sa access, none of the other permission grants or denies matter. Sysadmin overrides everything! In this particular case, this would be way too much access for a user who just needs to see the schema. Be extremely careful about when, and to whom, you grant sysadmin access.

Next up in the realm of granting way too much access is the far too frequent practice of assigning permissions to the public role in each database. I have written about this practice as well – here and here. This one really raises my ire. Granting permissions to the public role crosses into the realm of the high-risk and downright negligent. Let’s see how some internet examples “solve” our problem by way of the public role.

First, using the test user and role we created in yesterday’s article, let’s confirm that we do not have access to view definitions outside of the assigned role permissions.
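
A check along these lines (a sketch against the permissions catalog views) will show whether the public role holds the permission:

SELECT prin.name AS grantee
    , perm.state_desc
    , perm.permission_name
FROM sys.database_permissions perm
    INNER JOIN sys.database_principals prin
        ON perm.grantee_principal_id = prin.principal_id
WHERE prin.name = 'public'
    AND perm.permission_name = 'VIEW DEFINITION';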

After executing that query, I can confirm that the public role does not have the “View Definition” permission. Let’s now revoke the permission from the role and confirm that no schema definitions can be viewed.
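
The revocation itself is a one-liner (BusinessViewDef being the role from yesterday’s article):

REVOKE VIEW DEFINITION FROM [BusinessViewDef];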

Now, knowing that the user and the role do not have permissions, let’s go ahead and assign permissions to the public role.
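
Mimicking the internet examples, that grant looks like this:

GRANT VIEW DEFINITION TO [public];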

Just like magic, a user that should not be able to view schema definitions can now view information it is not supposed to see. This is a security problem. Just to confirm, let’s evaluate the principal permissions compared to the role permissions.
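
A side-by-side look at the user, the role, and public makes the inheritance plain (the principal names are from yesterday’s article):

SELECT prin.name AS grantee
    , prin.type_desc
    , perm.state_desc
    , perm.permission_name
FROM sys.database_permissions perm
    INNER JOIN sys.database_principals prin
        ON perm.grantee_principal_id = prin.principal_id
WHERE prin.name IN ('testvwdef', 'BusinessViewDef', 'public')
    AND perm.permission_name = 'VIEW DEFINITION';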

Keep this in mind as you go down the dark path of granting permissions to the public role. Just because it is easy doesn’t make it right. On a similar note, you should proceed cautiously with scripts from the internet that suggest granting permissions to the public role. They are wrong!

Finally, we descend into the realm of “only slightly better” but still way over the top for the requirements. This last method uses the “VIEW ANY DEFINITION” way of granting permissions. The “ANY” keyword scopes the permission to every database on the instance. Let’s clean up the last example by revoking the permission from public so we have a blank slate.
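
The cleanup:

REVOKE VIEW DEFINITION FROM [public];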

With permissions revoked, I will now grant View Any Definition to the test user. Why the user instead of the role? Because I don’t have that role created in every database (more on the benefits of that approach later). This permission is server scoped, so make sure you are in the master database when granting it; otherwise you chance seeing this error.

Msg 4621, Level 16, State 10, Line 22
Permissions at the server scope can only be granted when the current database is master
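
To avoid that, issue the grant from master (testvwdef being the test login):

USE master;
GO
GRANT VIEW ANY DEFINITION TO testvwdef;
GO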

Yup, that error message just made my blood curdle a bit. With the grant issued from master, let’s check out the perms in the DBA database (reminder: although the change was made in the master database, it is a server-scoped change).
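
One quick way to view the effective permissions – a sketch to be run while connected as testvwdef, which assumes that login has been granted access to the DBA database (see the “connect” caveat discussed below):

USE DBA;
GO
SELECT entity_name
    , permission_name
FROM fn_my_permissions(NULL, 'DATABASE');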

As we can see here, all databases now have the View Definition permission assigned to testvwdef. If there happens to be a sensitive database on the instance, you may have exposed yourself to risk by doing this.

Performing the same tests as previously done would reveal similar results – except on a broader scope.

There are some caveats that help reduce this risk, but they do not undermine the need to be responsible or the need to avoid using the “ANY” keyword. Recall that I granted the View Any to the individual user instead of the role? That was simply because I do not have that role in each database. If we try to grant the permission to a role that does not exist in the master database while granting from that scope, we see this nifty little message.

Msg 15151, Level 16, State 1, Line 25
Cannot find the login ‘BusinessViewDef’, because it does not exist or you do not have permission.

That is one measure to help prevent this over-permissioning problem. The next possible caveat is a minor stop-gap and can be easily overlooked should the user ever be added to the database in the future. For “Any” to work in its full splendor, the principal to which it was granted must have already been granted “connect” to the database.


The Wrap

This article has explored several options for minimizing the risk of over-extending permissions to users. With a little planning and care, we can find ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm, knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Who needs data access? Not You!

Does this sound at all familiar? A user submits a request stating they must have elevated access to a database. When you attempt to gather more information about the request, the justification boils down to this: they must have access because so-and-so has elevated access, and they are curious about the database.

The conversation doesn’t go quite as smoothly as that. In fact, it may take quite a lot of effort to squeeze that tiny bit of real information from the request.

After assembling that tidbit of information, you may find yourself choking back a scoff or three. “You need access because you want to explore the database?” Yeah, I can hear the incredulity in your thoughts from here. At no point is this kind of “need” a legitimate reason to gain access to a database.

“I need elevated access!” Oh No you don’t!

Beyond the back and forth between the demand for access just to play around in the data and the DBA resisting that access in the best interest of the data and the company, there may be a middle ground that can be reached. I am not talking about creating a dummy database with obfuscated data, nor a dummy database with fake data.

Every now and then, this kind of request may cross your desk and all the requester is really trying to do is find some means of becoming familiar with the database. This can be ferreted out with some extra warnings and more discussions with the requester. Maybe the request really has nothing to do with the data and everything to do with just finding a way to get to know the schema because there is no documentation.

Given the plausibility of this need, there are many ways to grant the request – some requiring more resources than others. Let’s suppose the user has absolutely no understanding of TSQL. With that minimal knowledge, you know there are likely to be many potential issues that could bring the database down and render it unusable. That is terrible if the affected database is in production, and still very bad if it is a non-prod database. Given this potential, you know you don’t want to grant any permission that implies read or write access to the database (if there is data present).

What if we could create a database without the data? Would that work? Absolutely. This would in essence be a sandbox for the user, and all responsibility for database issues could be directed back to that specific user. This is a fabulous solution if you have adequate resources to create more databases/instances/servers. There are multiple ways to create an empty database, but to ensure the best end-user experience, considerations for database naming, resources, and potential impact to other databases must be acknowledged.

Personally, I prefer to maintain the same database name regardless of tier (dev, test, prod, etc.). This also applies to schema-only databases where limited access would be granted. Changing database names on the users only breeds confusion and complicates code promotions. If you need to provide the same sort of solution for 30-40 or more databases, you can imagine the confusion it would cause on a server where the actual data-filled databases exist (think dbcc clonedatabase types of solutions, where the cloned database must be named differently).
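
For reference, a schema-only copy via that route looks like this (the database names are placeholders; note how the clone is forced to carry a different name):

DBCC CLONEDATABASE (SourceDB, SourceDB_Clone);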

What to do? What to do?

Thankfully, at least one more entirely viable option remains. We can use the power of security to achieve some semblance of what the user wants while still preventing the person from viewing the data or causing performance issues.

Limiting access for prying eyes is fairly easy while still granting the opportunity to explore the database design. In SQL Server we have a permission called “View Definition” which will help us accomplish exactly this requirement. In preparation for this, one must determine the scope of the need. Generally speaking, this should be done on a per-database basis. There is a method to grant this for all databases on the server, but that is a topic for another day. Once the scope is determined, it is time to proceed with the setup.

I prefer to take advantage of Roles within the database. This simplifies the future maintenance and management of the permissions. If a new principal needs the same permissions as Joe from accounting, then I just add the new user to the same role(s) as Joe – piece of cake.
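
The setup runs along these lines – a sketch using the principal names referenced later in this article; the database name and the password are placeholders:

USE master;
GO
CREATE LOGIN testvwdef WITH PASSWORD = 'SuperDuperStrongPassword1!';
GO
USE DBA;
GO
CREATE USER testvwdef FOR LOGIN testvwdef;
CREATE ROLE BusinessViewDef;
ALTER ROLE BusinessViewDef ADD MEMBER testvwdef;
GRANT VIEW DEFINITION TO BusinessViewDef;
GO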

With the Role created, user added to the role, and the “View Definition” permission granted to the role, we are ready to test.

First order of business is to connect with that new login we created (a SQL login was created and is used in this example, but the effects are the same with a Windows login), as shown in the preceding image. Once connected, let’s try some things.

The first test I want to perform is to confirm the user has no read access to data to ensure this permission does not grant extra access along the way.
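
Something as simple as this will do (the table name is a placeholder for any user table in the database):

SELECT TOP (1) *
FROM dbo.SomeUserTable;

Attempting that under the new login should be met with a “SELECT permission was denied” error.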

Great! Select access to objects from this permission alone does not work. Now, let’s test the ability to view object definitions. First, let’s try via the object designer.

First we see a warning about not having enough permissions to save changes, which is good. After clicking OK on that warning, a new message pops up to remind us that the table is “read only”. Perfect! The permissions we gave this user do not allow them to alter objects – just view them (as we would expect).

Seeing these results also confirms that the user can browse the table definition directly in SSMS. Let’s take a gander at a TSQL approach.
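
The old standby works nicely here (again, the table name is a placeholder):

EXEC sp_help 'dbo.SomeUserTable';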

Great, we can explore the table definitions and the like with TSQL. That is of course a system stored procedure, so let’s try one more method.
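
This time, a custom query written directly against the catalog views (a sketch):

SELECT t.name AS table_name
    , c.name AS column_name
    , ty.name AS data_type
FROM sys.tables t
    INNER JOIN sys.columns c
        ON t.object_id = c.object_id
    INNER JOIN sys.types ty
        ON c.user_type_id = ty.user_type_id
ORDER BY t.name, c.column_id;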

Running this query, I will see results similar to this.

Knowing that we can run custom queries to explore the schema is almost comforting. It does confirm that we are able to query system objects (yes, a select works here, but this is read-only object definition data). But does this mean the user can also see system data? Let’s try!
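
One way to probe that – a sketch using sys.dm_os_wait_stats, which requires the VIEW SERVER STATE permission that this user was never granted:

SELECT TOP (10) *
FROM sys.dm_os_wait_stats;

Under the restricted login, this should be met with a “VIEW SERVER STATE permission was denied” error rather than any data.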

Excellent! So it seems we have a plausible option to keep the footprint small and minimize access to the data.

The Wrap

This article has explored several options for reducing data access for users who “need” it. With a little planning and care, we can find ways to fulfill user requests without compromising the environment.

Every now and then, it will require extra effort to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm, knowing your environment is secure and that you are providing quality service to your customers.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

It’s an Honor: Microsoft MVP

Published on: July 1, 2019

Today (July 1st) is the notification day for all Microsoft MVPs. Microsoft changed the process a couple of cycles back so that all notifications go out on the same day, bringing all candidates to a single emotional day. I received my notification early this morning that I was awarded the MVP again for the 2019-2020 cycle. This is my 6th award overall.

I am humbled to be among so many great professionals, and honored at the same time for having been chosen. New here and curious about this MVP thing? Well, it’s sorta like being named MVP in a sports league, while at the same time a little bit different. Here is what Microsoft has to say about the MVP. I personally think you may get more out of reading about the MVP here (a little bias might be involved).

This is a community award for your efforts to promote and help others work with Microsoft products. I was awarded for my efforts in the Data Platform. Many others receive the award for their efforts in AI, Powershell, or Azure (as examples). Generally, the people that are awarded are seen as being passionate about the Microsoft Product. These people spend a lot of time, effort, energy, and sometimes money to help share their enthusiasm for the product.

Microsoft is doing a fantastic job at recognizing people for their efforts in promoting the MS brand. Not everybody that promotes that brand gets to be an MVP – sadly. If you didn’t make it this time, keep trying. I just advise that the goal be more about the community than the award. The MVP award is great, but giving to the community is the real reward.

I want to congratulate all those awarded a Microsoft MVP award past, present and future. I also want to thank all of those who have given to the community regardless of whether an award was received or not. There are many of you out there that have been a positive influence whether you know it or not – just for giving to the community and helping somebody else. THANK YOU!

Public Role Permissions – Back to Basics

Managing permissions in the database environment is such an important task. Setting permissions correctly is paramount to a successful audit and is one of your best defenses against improper/unwanted access. Yet, in spite of the importance, security is often overlooked, neglected, improperly configured, or just flat-out ignored. Let’s not forget the times that security is intentionally misconfigured so certain individuals are permitted backdoor access.

Security, just like performance tuning, is a perpetual (and sometimes iterative) task. There is no excuse for setting your security and forgetting it. It must be routinely reviewed.

While performing a script review for a client, I was reminded of the need to also review their security setup. The reminder was more of a slap in the face, as I noticed that the developer had built some permissions assignments into their upgrade scripts. Unfortunately, we were not permitted to alter any of the scripts because they came from a third-party vendor (and the vendor likewise refused to fix the problems with the scripts, but I digress).

What could be wrong with this?

I want you to digest that for just a moment. This is an example of the permissions this particular vendor insists on setting for the public role. What could possibly be wrong with that? Let’s examine a couple of the permissions like “Control” and “View Change Tracking”.

View Change Tracking

This is an elevated permission that is required in order to use the change tracking functions. It is considered elevated for the following reasons:

  1. Change tracking records contain the PK value for rows that have been deleted. If sensitive information was deleted that a user should not be able to access, the user would be able to reverse engineer the data from the change tracking data.
  2. A user may be denied access to a column that contains sensitive data. If data in the column is changed, the data would be stored in the change tracking and a user can determine the values that were updated for the sensitive data.

Control

I am going to take this one direct from the Microsoft documentation.

Confers ownership-like capabilities on the grantee. The grantee effectively has all defined permissions on the securable. A principal that has been granted CONTROL can also grant permissions on the securable. Because the SQL Server security model is hierarchical, CONTROL at a particular scope implicitly includes CONTROL on all the securables under that scope. For example, CONTROL on a database implies all permissions on the database, all permissions on all assemblies in the database, all permissions on all schemas in the database, and all permissions on objects within all schemas within the database.

Now digest that a bit. Once digested, consider what the public role does to user access in a database. The public role’s permissions are inherited by all users of the database, whether those users have been granted the permission directly or not. You should only grant the public role permissions that you honestly believe ALL users should have. If you are serious about your role, then the occasions on which you grant permissions to the public role should be either a) never, b) when you want to have a data breach, or c) when you are testing in a sandbox to improve your skills.

Check for Perms

When you are uncertain of which permissions have been assigned to the public role, or you just haven’t reviewed your permissions real-estate in some time, it is best to pull out a script and start the process. As luck would have it, I have a few scripts that can help with that (here or here) and I have a new one that I am sharing now.

Let’s start with a basic query that will display all of the permissions assigned to the public role in a specific database.
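
A sketch of such a query follows – state, permission, then schema and object, matching the description that comes next:

SELECT perm.state_desc
    , perm.permission_name
    , SCHEMA_NAME(obj.schema_id) AS schema_name
    , obj.name AS object_name
FROM sys.database_permissions perm
    INNER JOIN sys.database_principals prin
        ON perm.grantee_principal_id = prin.principal_id
    LEFT OUTER JOIN sys.objects obj
        ON perm.major_id = obj.object_id
WHERE prin.name = 'public';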

There is nothing super special about this query. It pulls the permissions for the public role specifically, displays whether each permission is a “Deny” or a “Grant”, and then lists the permission name followed by the schema and the object.

Let’s take that script and evolve it now. I am going to plan for the worst and expect that some permissions have been applied that shouldn’t have been, courtesy of some vendor upgrade script (because – well, history). Since I am expecting the worst, I am going to add some script-generating code that will revoke the unwanted permissions. And since the worst case would be that revoking the permissions breaks something, I will also add code that generates the corresponding “Grant” statements so anything revoked can be quickly restored.
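
A sketch of that evolution (note the COLLATE usage in the final two columns, discussed next):

SELECT perm.state_desc
    , perm.permission_name
    , SCHEMA_NAME(obj.schema_id) AS schema_name
    , obj.name AS object_name
    , N'REVOKE ' + perm.permission_name + N' ON '
        + QUOTENAME(SCHEMA_NAME(obj.schema_id)) COLLATE Latin1_General_CI_AS_KS_WS
        + N'.' + QUOTENAME(obj.name) COLLATE Latin1_General_CI_AS_KS_WS
        + N' FROM [public];' AS revoke_cmd
    , N'GRANT ' + perm.permission_name + N' ON '
        + QUOTENAME(SCHEMA_NAME(obj.schema_id)) COLLATE Latin1_General_CI_AS_KS_WS
        + N'.' + QUOTENAME(obj.name) COLLATE Latin1_General_CI_AS_KS_WS
        + N' TO [public];' AS grant_cmd
FROM sys.database_permissions perm
    INNER JOIN sys.database_principals prin
        ON perm.grantee_principal_id = prin.principal_id
    INNER JOIN sys.objects obj
        ON perm.major_id = obj.object_id
WHERE prin.name = 'public';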

That looks better. I have a way of identifying the unwanted permissions as well as an easy script I can execute to remove them. Note the use of COLLATE in the final two columns. As it turns out, permission_name in sys.database_permissions has a column collation of Latin1_General_CI_AS_KS_WS. Since I ran into some collation conflict errors (shown below), it is easier to direct the DB engine to use the collation that matches the permission_name column.

Msg 451, Level 16, State 1, Line 11
Cannot resolve collation conflict between “SQL_Latin1_General_CP850_CS_AS” and “Latin1_General_CI_AS_KS_WS” in add operator occurring in SELECT statement column 5.
Msg 451, Level 16, State 1, Line 11
Cannot resolve collation conflict between “SQL_Latin1_General_CP850_CS_AS” and “Latin1_General_CI_AS_KS_WS” in add operator occurring in SELECT statement column 6.

Alas, this is still not as efficient a script as I would like. I may have hundreds of databases on the instance and need to evaluate all of them. Time for the bigger guns.
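
One quick way to sweep every database – a sketch using the undocumented but ubiquitous sp_MSforeachdb (a cursor over sys.databases works just as well):

EXEC sp_MSforeachdb N'USE [?];
SELECT DB_NAME() AS database_name
    , perm.state_desc
    , perm.permission_name
    , SCHEMA_NAME(obj.schema_id) AS schema_name
    , obj.name AS object_name
FROM sys.database_permissions perm
    INNER JOIN sys.database_principals prin
        ON perm.grantee_principal_id = prin.principal_id
    INNER JOIN sys.objects obj
        ON perm.major_id = obj.object_id
WHERE prin.name = ''public''
    AND obj.is_ms_shipped = 0;';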

That will take care of all of the permissions for the public role in all of the databases, with a slight caveat: I am only checking against objects that are not flagged as is_ms_shipped. Now, isn’t there also a public role at the server scope? Indeed there is! Let’s capture those permissions as well.
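
The server-scoped counterpart is simpler (a sketch against the server-level catalog views):

SELECT perm.state_desc
    , perm.permission_name
    , perm.class_desc
FROM sys.server_permissions perm
    INNER JOIN sys.server_principals prin
        ON perm.grantee_principal_id = prin.principal_id
WHERE prin.name = 'public';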

Now, I feel many times better about what could possibly be going wrong with the public role.

If you are in a tightly controlled environment or you are just sick of people doing this sort of thing to your servers, there are more extreme measures that can be taken. You can read about it here or here.

The Wrap

It is amazing what some people will do that just doesn’t make sense. Granting permissions to the public role is one of those cases. That behavior also explains why there are documents and procedures for hardening the public role (here and here).

If necessary, I recommend locking down your public role. It will make your job a little easier and give you better rest at night.

This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.

Database Offline – Redux

I have written a couple of articles showing how to audit database offline events via the default trace as well as via Extended Events. These are great for a more proactive approach. Sometimes the proactive approach is just not plausible for the given circumstances. A good example of this is when you inherit servers (e.g. new client, new project, new job) and there are databases on the server that were taken offline prior to your inheritance.

In a situation like this, you need to find out when the databases were taken offline, but you are somewhat limited in the data available for your research. The default trace has likely rolled beyond the retention point for any events that were trapped. Setting up an Extended Events session would be less than helpful given that the databases are already offline. So we must find an alternative way of determining approximately when the databases were taken offline.

Alternatives

If your predecessor leaves you with a database that has been taken offline and no documentation about it, there is an alternative way to find the approximate time it was taken offline – using TSQL. Granted, you could explore the file system and make the same sort of guess based on the file-modified date, but data people generally prefer a script, and preferably one native to their language (TSQL).

So, let’s take a look at a viable option for figuring out the mystery behind your database that has been set offline with no good info surrounding it.
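
A sketch of that option follows. It leans on the differential_base_time column of sys.master_files for the offline databases, the premise being that this timestamp lines up closely with the last full backup taken before the database went offline:

SELECT d.name AS database_name
    , d.state_desc
    , mf.differential_base_time
    , DATEDIFF(DAY, mf.differential_base_time, GETUTCDATE()) AS days_since_base
FROM sys.databases d
    INNER JOIN sys.master_files mf
        ON d.database_id = mf.database_id
        AND mf.file_id = 1
WHERE d.state_desc = 'OFFLINE';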

Running this query, I see the following on my test server:

Note that this query uses GETUTCDATE. The differential_base_time column in sys.master_files is in UTC, so in order to compare properly, we need to use a UTC datestamp for the comparison. Also of note, this query doesn’t work if there has never been a full database backup for the database in question. Who doesn’t back up their databases though, right? No, seriously, you really should.

Since this is my test server and I am well prepared, I just so happen to have an XEvent session running that tracks the offline events that I can use to compare the results.

Yes, there is a bit of variance between the actual database offline event and the differential_base_time column. This variance is due to the timing of the backup relative to the actual offline event. Again, this is about finding an approximate time for when the database was taken offline, as a viable alternative when other data is not available.

Conclusion

We all aspire to a perfect database environment where nothing surprising or unexpected happens. Unfortunately, that is the stuff of dreams and fairy tales. The unexpected will happen. A database can already be offline at the time you take over responsibility for it. You will be asked when it was taken offline. This script will help you reach a reasonable deduction about that offline event in the absence of better data.

Extended Events is a powerful tool to help in troubleshooting and tuning your environment. I recommend investing a little time in reading the 60 day series about Extended Events. This is not a short series but is designed to provide an array of topics to help learn the tool over time. Don’t forget to go back and read the companion article showing how to audit these events via the default trace.

Quick Permissions Audit

Whether it is for a client, an audit, or just for good housekeeping, DBAs will often need to figure out who has access to what. In addition, they may need to know by what means people have access within their databases.

When that need arises, it is frequently adequate to just perform a quick audit and then dive deeper if the quick audit shows anything questionable. Today, I am sharing a means to perform this quick audit. I will start with the logins (server principals), peek at the users (database principals), move on to the server roles, and wrap it up with the database roles. What I don’t do is dive into Windows group memberships. So if a Windows group exists within SQL Server as a login, I do not try to figure out who is a member of that group. That can be obtained, but it is a separate discussion.

So, in the vein of a quick permissions audit, I will quickly get to the script to help perform this permissions audit.
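
What follows is a condensed sketch of that script. The login and database lists are placeholders you will want to change, and the DelimitedSplit8K function is assumed to already exist in the database you run this from:

DECLARE @Logins VARCHAR(8000) = 'sa,SomeLogin'
    , @Databases VARCHAR(8000) = 'master,msdb';

IF OBJECT_ID('tempdb..#Logins') IS NOT NULL DROP TABLE #Logins;
IF OBJECT_ID('tempdb..#Databases') IS NOT NULL DROP TABLE #Databases;

/* temp tables built from the delimited input lists */
SELECT ds.Item AS LoginName
    INTO #Logins
    FROM dbo.DelimitedSplit8K(@Logins, ',') ds;

SELECT ds.Item AS DBName
    INTO #Databases
    FROM dbo.DelimitedSplit8K(@Databases, ',') ds;

/* logins with their server roles flattened into a delimited string via FOR XML */
SELECT sp.name AS LoginName
    , sp.type_desc
    , STUFF((SELECT ',' + spr.name
            FROM sys.server_role_members srm
                INNER JOIN sys.server_principals spr
                    ON srm.role_principal_id = spr.principal_id
            WHERE srm.member_principal_id = sp.principal_id
            FOR XML PATH('')), 1, 1, '') AS ServerRoles
    FROM sys.server_principals sp
        INNER JOIN #Logins l
            ON sp.name = l.LoginName;

/* cursor builds dynamic SQL to fetch users and database roles per database */
DECLARE @DBName SYSNAME
    , @SQL NVARCHAR(MAX);

DECLARE dbcursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT DBName FROM #Databases;

OPEN dbcursor;
FETCH NEXT FROM dbcursor INTO @DBName;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @SQL = N'SELECT ''' + @DBName + N''' AS DBName
        , dp.name AS UserName
        , STUFF((SELECT '','' + dpr.name
                FROM ' + QUOTENAME(@DBName) + N'.sys.database_role_members drm
                    INNER JOIN ' + QUOTENAME(@DBName) + N'.sys.database_principals dpr
                        ON drm.role_principal_id = dpr.principal_id
                WHERE drm.member_principal_id = dp.principal_id
                FOR XML PATH('''')), 1, 1, '''') AS DatabaseRoles
        FROM ' + QUOTENAME(@DBName) + N'.sys.database_principals dp
            INNER JOIN #Logins l
                ON dp.name = l.LoginName;';

    EXEC sys.sp_executesql @SQL;

    FETCH NEXT FROM dbcursor INTO @DBName;
END

CLOSE dbcursor;
DEALLOCATE dbcursor;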

As you look through this code, you will notice that I have done a few interesting things.  Here is a quick recap:

  • Build a temp table based on a delimited list of databases input into a variable
  • Build a temp table based on a delimited list of logins input into a variable
  • Used a string splitter known as DelimitedSplit8K by Jeff Moden (google it) or LMGTFY
  • Used FOR XML to build a delimited string of Server Role Permissions
  • Used FOR XML to build a delimited string of Database Role Permissions
  • Used a CURSOR to build a dynamic SQL string to execute for each database

Holy cow! That is a lot of stuff for such a simple quick audit.  A cursor? Seriously? Well yeah! It is OK in this type of scenario to use a cursor.  This is a well-performing cursor for an administrative function. In addition, I chose to use sub-queries to build my delimited string of permissions so I could keep those things together in a single record per login/role.

If I run that query on my instance of SQL Server, I end up with something that looks like the following.

Now, obviously, unless you have these same databases, logins, and even the same permissions assigned to those logins, your results will differ. So, make sure you change those inputs for the logins and databases to be queried.

For more articles on audits and auditing check here and here.

PowerShell ISE Crashes

Working with PowerShell brings a lot of advantages and power to help manage a server. The more current your PoSh version, the more efficiently you will be able to manage your server. Sometimes, though, getting to a current PoSh version comes with a little pain – such as ISE crashes.

I recently had the misadventure of working through some ISE crashes after bringing some systems up to PoSh 5.1 from either PoSh 2.0 or 3.0. It’s not a fun situation to run a WMF (Windows Management Framework) update and then hit a crash of any type when testing whether it worked. Your first thought is that something terrible has happened.

As it stands, the problem is more of a nuisance than a critical failure. That said, it is enough of a problem that anyone who uses the ISE or .Net applications may experience a slight cardiac event.

Fonts

As you work to quickly recover from your skipped heartbeat, you start digging through logs and then hitting good old trusty Google.

Diving through the logs, you might just happen across an error similar to the following:

Problem signature:
Problem Event Name: PowerShell
NameOfExe: PowerShell_ISE.exe
FileVersionOfSystemManagementAutomation: 6.1.7600.16385
InnermostExceptionType: System.Xml.XmlException
OutermostExceptionType: System.Reflection.TargetInvocation
DeepestPowerShellFrame: indows.PowerShell.GuiExe.Internal.GPowerShell.Main
DeepestFrame: indows.PowerShell.GuiExe.Internal.GPowerShell.Main
ThreadName: unknown

Or maybe the first error you encounter looks like this one instead:

System.TypeInitializationException

“FileFormatException: No FontFamily element found in FontFamilyCollection
that matches current OS or greater: Win7SP1”.

Inner exception originates from: CompositeFontParser

Either way, the error points us back to the same fundamental problem: the ISE won’t load, you get an error message, and you can’t confirm that the WMF update was applied properly.

As you work your fingers faster and faster through the pages on Google, you discover that this problem is caused by a patch for the .NET Framework rather than by the work to upgrade your PoSh version – it merely waited until after the upgrade to manifest itself.

That’s gravy and all, but how does one fix the problem? For me, the quickest and most reliable fix was to simply jump straight to the root of the problem – fonts. The ISE is a WPF application and it also requires a fallback font (if a character isn’t present in your font set, then the app chooses a substitute from the fallback font – or something like that).

The fix is extremely simple and really underscores why this is merely a nuisance issue and not a critical problem, so it shouldn’t cause any sort of sinking feeling. There are a few plausible fixes floating around out there; I recommend just doing a manual font replacement. It is but three simple steps:

  1. Download GlobalUserInterface.CompositeFont
  2. XCOPY the font to %windir%\Microsoft.NET\Framework\v4.0.30319\WPF\Fonts
  3. XCOPY the font to %windir%\Microsoft.NET\Framework64\v4.0.30319\WPF\Fonts

After you have copied the font to those two directories, all that remains is to launch the ISE. I ran into the same problem on three or four servers, and the fix took no more than 5 minutes on each.

Conclusion

I previously mentioned that I have been working more and more with PoSh to try to improve my skillset there. This is one of those very low-level trinkets I ran into while honing my skills in that tech. For other, possibly interesting, articles about my experiences with PowerShell, you can check out these articles.

