SSRS Subscription Schedules – Enhanced

Reporting Services

 

Over the past couple of articles I have illustrated some of the fun that can be had when dealing with the scheduling capabilities of Reporting Services (SSRS). The first article covered how to create more advanced schedules (from the SSRS point of view). In another article, I showed how to retrieve scheduling information from the ReportServer database. In that last article, I also promised a follow-up article with more in-depth scheduling details.

SSRS provides the capability to review the scheduled reports (subscriptions) in a far more detailed fashion than shown in that previous article. That ability is held within the ReportServer database. This article will dive into the source of this scheduling information within the ReportServer database.

This dive will be a bit more detailed than the first time I dove into SSRS scheduling – here. That particular dive was missing an important set of data.

Deeper Dive

My first dive into building a report of the report schedules was pretty comprehensive, and I used it quite frequently. Many others also used it regularly in their environments. So the first attempt wasn’t bad by any stretch. The main problem (at least for now) is that the script does not account for any of the custom schedules that can be built. I have to be honest in that I hadn’t really considered that possibility. Time and experience change that perspective. When that perspective changes, it is time to dive back in and add coverage for the shortcoming in the script.

When I dove back in to fetch the custom scheduling information, I realized there was a discrepancy even in the old report in that I was not gathering Job information pertinent to the schedule. Recall that SSRS subscriptions are executed via the SQL Agent. With that in mind, it is reasonable that the Agent job information is germane to the report subscription and probably should be included in a report. Couple that with the methods for creating custom schedules for SSRS reports, and we have a resounding need to ensure that data is captured properly.

Due to this epiphany, I now have a more complete script that includes both the data from SQL Agent and the data from the ReportServer database with regard to subscriptions and schedules of reports.

Script

In pulling the data together from the two sources, I opted to return two result sets. Not just two disparate result sets, but rather two result sets that each pertained to both the agent job information as well as the ReportServer scheduling data. For instance, I took all of the subscriptions in the ReportServer and joined that data to the job system to glean information from there into one result set. And I did the reverse as well. You will see when looking at the query and data. One of the reasons for doing it this way was to make this easier to assimilate into an SSRS style report.
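
The full script is too long to reproduce inline, but a condensed sketch of the linkage between the two systems may help. This is a minimal sketch, not the complete script: it assumes the default ReportServer database name and relies on the convention that SSRS names each Agent job after the GUID of the schedule (the ScheduleID).

USE ReportServer;
GO
SELECT  c.Name AS ReportName,
        su.Description AS SubscriptionDescription,
        su.LastRunTime,
        sch.ScheduleID,
        sj.name AS AgentJobName,
        sj.enabled AS JobEnabled
FROM dbo.Subscriptions su
    INNER JOIN dbo.Catalog c
        ON su.Report_OID = c.ItemID
    INNER JOIN dbo.ReportSchedule rs
        ON su.SubscriptionID = rs.SubscriptionID
    INNER JOIN dbo.Schedule sch
        ON rs.ScheduleID = sch.ScheduleID
    LEFT OUTER JOIN msdb.dbo.sysjobs sj
        ON sj.name = CONVERT(nvarchar(36), sch.ScheduleID);  -- SSRS names its Agent jobs after the ScheduleID GUID
GO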

The full script is not a short one. By fair measure, it is also considerably more complex than the XML version that was recently posted (and mentioned earlier in this article). That said, it is many times more flexible and complete than the XML version as well. I do continue to use the bit math for figuring the schedules as I did in the first version of the script. When done this way, I can handle the custom schedules as well as get extensive details about the schedule from both the msdb and ReportServer databases.

Recap

SSRS provides built-in mechanisms to help report on the scheduled reports that have been deployed. This version of the report will help you retrieve the data from both a job perspective and from the report scheduler perspective. Through this series of articles, you should now be confident in creating custom schedules as well as accurately reporting on any reports that have specific subscriptions/schedules.

SSRS Subscription Schedules

Reporting Services

Reporting Services (SSRS) is a tool that permits you to create and deliver feature rich reports. The reports can be delivered in various formats and can even be scheduled to be delivered at various times. I even recently wrote about creating more advanced custom schedules.

The ability to create content delivery schedules, or to subscribe to report content delivery, imposes a requirement to also know when the various reports are scheduled to be delivered.

SSRS provides the means to be able to review the scheduled reports (subscriptions). That means is held within the ReportServer database. This article will help to uncover one of the sources of this scheduling information within the ReportServer database.

Review Schedules

When looking into the database for SSRS, I can see there are different means to review the report schedules. This article is going to cover just one of those methods. And if I am going to be entirely up front about this method, I don’t like it and I recommend that it not be used.

I can hear the moans now. “If you don’t like it, then why show it to us?” Well, that is a very good question and there is a very good reason for this decision. A lesson I learned a long time ago is sometimes you need to learn the hard way, or less desirable way, to do various things. One of my favorite Calculus teachers from years ago drilled this into my head over and over again. Why? Well, there are three good reasons that come to mind: a) it makes the more desirable method seem much easier, b) it helps you to appreciate the more desirable method all that much more, and c) because if all else fails, you will have another method to fall back to just in case.

Less Disérables

The least desirable method (at least of the methods I will share) is to parse XML from a field stored in the ReportServer database. If I look into the Schedule table within the ReportServer database, I will find a column called MatchData. Up front, this field is not very intuitively named. I would not think this field actually represented the schedule, but it actually does.

Before we start diving into parsing XML, we need an example of what this XML may look like. The following provides just such an example.
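
Here is a reconstructed sample of roughly what a MatchData value looks like for a weekly subscription. This is illustrative only; the exact elements vary with the recurrence pattern chosen for the schedule.

<ScheduleDefinition xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StartDateTime>2017-05-01T02:00:00.000-06:00</StartDateTime>
  <WeeklyRecurrence>
    <WeeksInterval>1</WeeksInterval>
    <DaysOfWeek>
      <Monday>true</Monday>
      <Wednesday>true</Wednesday>
      <Friday>true</Friday>
    </DaysOfWeek>
  </WeeklyRecurrence>
</ScheduleDefinition>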

Are your gears grinding yet? The XML is not terribly difficult to follow. I am sure you have realized the problem with this format by this point. If I query this to make it human readable in a tabular format (you know, DBA format), I will end up with a really wide table that is pretty ugly to look at (unless I get super creative and combine fields, etc.).

Let’s take a look at the query to parse something like the preceding XML example.
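
What follows is a condensed sketch of that parse rather than the full (much wider) query: it pulls only a handful of attributes for a weekly schedule, where the real version repeats the same .value() pattern for every recurrence type and every day of the week.

USE ReportServer;
GO
SELECT  c.Name AS ReportName,
        sd.MD.value('(ScheduleDefinition/StartDateTime)[1]', 'datetime') AS StartDateTime,
        sd.MD.value('(ScheduleDefinition/WeeklyRecurrence/WeeksInterval)[1]', 'int') AS WeeksInterval,
        sd.MD.value('(ScheduleDefinition/WeeklyRecurrence/DaysOfWeek/Monday)[1]', 'varchar(5)') AS OnMonday,
        sd.MD.value('(ScheduleDefinition/WeeklyRecurrence/DaysOfWeek/Friday)[1]', 'varchar(5)') AS OnFriday
FROM dbo.Subscriptions su
    INNER JOIN dbo.Catalog c
        ON su.Report_OID = c.ItemID
    INNER JOIN dbo.ReportSchedule rs
        ON su.SubscriptionID = rs.SubscriptionID
    INNER JOIN dbo.Schedule s
        ON rs.ScheduleID = s.ScheduleID
    CROSS APPLY (SELECT CONVERT(xml, s.MatchData) AS MD) sd;
GO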

And there we have that ugly query to produce a really wide ugly result set. The query is not difficult to write. It’s just extremely repetitive. In similar fashion, the results are very repetitive. This makes, in my eyes, this particular method less desirable.

I haven’t even gotten to the part about the shortcoming in scheduling reports through SSRS that I wrote about recently – here. In that article I discussed a workaround to overcome the limitations of the SSRS scheduling options. If you employ methods such as the one I discussed there, then this query will never fully cover the scheduling related to your reports. Because of that, I will be discussing the better solution in the next article.

Recap

SSRS provides built-in mechanisms to help report on the scheduled reports that have been deployed. While parsing the XML is less desirable than what I will be sharing in the near future, it is better than doing nothing at all. I recommend you start looking into the various report schedules you may already have in your environment. Also, stay tuned for the next article that will better show these schedules.

SSRS Custom Shared Schedule

Reporting Services

 

Reporting Services is a pretty feature rich tool for delivering reports to various consumers. There is plenty of power within Reporting Services (SSRS) giving one the ability to perform visualizations, render reports in various formats and even schedule reports to be delivered in different formats or on different schedules.

Unfortunately, the scheduling capability within SSRS is fairly weak. While it is true that one can accomplish a varied array of different schedules, the scheduling of SSRS is far below the power of SQL Agent (for instance).

I will explore the deficiency of the scheduling tool within SSRS in a very specific case. You may even be familiar with this deficiency already. Many organizations have the need to produce end of month reports that need to run on the last day of the month. If you are familiar with the scheduling tool within SSRS, you already know that this is not possible (at least through SSRS 2014). This article will help step you through how to schedule a report subscription to run on the last day of the month.

Schedule Options

Before diving into the custom schedules, let’s take a closer look at the options available for scheduling through SSRS.

While the tool does provide various options, the granularity certainly is not diverse enough to cover many legitimate scheduling needs – especially the “last day of month” requirement for many month end type of reports.

As you may be aware, SSRS subscriptions are actually run through the SQL Agent despite being set through SSRS. If I take a look at some of the scheduling options in SQL Agent, I can see the following.

Note here that there are various built-in options to schedule on a monthly basis, including the “last” option. If I wanted to look even closer at the available options I would see the following.

As you can see here, I can schedule for multiple different types of “last” options relative to the month. One of these options happens to be the last day. This helps to illustrate just how much more powerful and versatile the scheduling within SQL Agent is than what we get with the SSRS scheduler. All of this despite the fact that SSRS subscriptions are actually executed by the SQL Agent. Doesn’t that seem a bit odd?

Custom Schedule

Now that we better understand the limitations of the SSRS scheduler and given that SSRS subscriptions are executed through the SQL Agent, let’s move on to bigger, better and much more useful means of scheduling the SSRS reports.

The very first thing that you should do is to create a shared schedule. This should be a shared schedule that is created as a run-once schedule. Let the schedule run that one time and then proceed on to the following steps. If you need help in creating a shared schedule, here is an MSDN article. When you create the shared schedule, I recommend using a descriptive name that you can remember. This name will be useful in the next step. For the purposes of this article, my schedule is named “EndOfMonth”.

Once the schedule is created, the next thing to do is to query the ReportServer database. Make sure you know the name of your database – some people have changed the ReportServer database name from the default, and that is an important piece of information to remember. The query against the ReportServer database will be predominantly just to get the schedule id of the newly created schedule.
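
A minimal sketch of that query, assuming the default database name and the shared schedule name used in this article:

USE ReportServer;
GO
SELECT s.ScheduleID, s.Name
FROM dbo.Schedule s
WHERE s.Name = N'EndOfMonth';
GO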

When I run that query, I receive the following results.

I now want to take that scheduleid and use it to determine which SQL Agent job is actually related to that schedule so I can fetch some info from the job. I could skip this entirely and go straight to the subsequent step, but this helps illustrate what needs to be done there. So, from here let’s query the msdb database in SQL Server to fetch some info from the job system.
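
A sketch of that msdb query follows. The ScheduleID placeholder is hypothetical; substitute the value returned by the previous query.

USE msdb;
GO
DECLARE @ScheduleID nvarchar(36) = N'REPLACE-WITH-YOUR-SCHEDULEID';

SELECT sj.name AS JobName,
       sjs.step_name,
       sjs.command
FROM dbo.sysjobs sj
    INNER JOIN dbo.sysjobsteps sjs
        ON sj.job_id = sjs.job_id
WHERE sjs.command LIKE '%' + @ScheduleID + '%';  -- the ScheduleID appears inside the step command
GO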

The scheduleid is used within a command within a jobstep. By passing the scheduleid into this query and then comparing against the existing job steps, I am able to retrieve the job that is related to the SSRS shared schedule (subscription). When I run the preceding query, I receive results illustrated in the following image.

Take note of the items I highlighted in red. Within the job name, step name and command text for the step, I can find the scheduleid for that shared schedule that I created. Yes, I could easily have changed the query I used to compare to the job name and that would have worked just fine in this case. By querying the command, I can confirm that the schedule is actually being used. The most important piece of information in this result is the entire command text for the job. I will need to take this command text and use it to populate a brand new SQL Agent job. This is how I will get my custom schedule for the SSRS subscription.

From here, I just need to create a SQL Agent job that uses the options for a monthly schedule indicating last day from the two drop down menus illustrated previously in this article. Then all that is left is a sigh of relief and a boom bada bing.
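
For illustration, attaching that “last day of month” schedule to the new job can be scripted as well. This is a sketch under a couple of assumptions: the job name is hypothetical, and the job step should contain the AddEvent command text captured from the query above.

USE msdb;
GO
EXEC dbo.sp_add_jobschedule
    @job_name = N'SSRS - EndOfMonth Subscription',  -- hypothetical job created beforehand
    @name = N'LastDayOfMonth',
    @enabled = 1,
    @freq_type = 32,               -- monthly, relative to the month
    @freq_interval = 8,            -- "day"
    @freq_relative_interval = 16,  -- "last"
    @freq_recurrence_factor = 1,   -- every month
    @active_start_time = 210000;   -- 9:00 PM, adjust to taste
GO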

Recap

SSRS does not have the built-in capability for some of the more complex, and oftentimes regularly required, report schedules needed to meet various business requirements. By following the steps outlined in this article, you can circumvent that shortcoming and achieve the needed business requirements while looking like a hero to those who need the more advanced report schedules.

T-SQL Tuesday #089: The Cloud and Job Security

Comments: No Comments
Published on: April 11, 2017

The Cloud

Today I am doing a quick entry for my participation points in the monthly blog party called TSQL Tuesday. I have missed the past few opportunities for various reasons. Today when I saw the topic, I wanted to post a few quick thoughts. If you are interested, the host this month is Koen Verbeeck (blog | twitter) and the invite can be found here.

Koen invites us to explore the cloud, whether it be a stormy cloud or a silver lined cloud. Either way, explore it and how it relates to you. Here are some of the examples Koen posted:

  • What impact has this had on your job?
  • Do you feel endangered?
  • Do you have more exciting features/toys to work with?
  • Do you embrace the change and learn new skills?
  • Or do you hide in your cubicle and fear the robot uprising?

I guess the answer for me is “it depends” – buahaha. Just kidding.

The Future Is Bright

I think the cloud is a good thing for the data professional, when done right. I do not believe there is anything to fear with it, so I definitely don’t feel endangered. That said, I do proceed cautiously to the silver lined puffs of water in the air. It’s not from fear, but more of a caution to ensure it is the correct move for the business need in question. I don’t believe the cloud is the right answer for all business needs but it is an appropriate solution for many business requirements.

I like to ensure my clients are well informed of what the choices are and the implications may be when deciding to move to the cloud. I like to make sure they understand that a move to the cloud is not a knee jerk decision – it takes planning and considerable effort in many situations. I also like to remind them that the cloud is really just another data center hosting their data. Granted, some offerings from vendors like Amazon and Microsoft do permit considerable flexibility and the opportunity to move quickly to new demands or interests.

Playground

For me, one of the biggest benefits of the cloud is the constantly evolving sandbox that I get to use to learn and grow (obviously that means I get to play a lot). I don’t have the resources at my disposal (and most clients don’t either) to be able to stand up a brand new environment from scratch for a POC quickly and efficiently. If I want to play around with Machine Learning (ML), I can spin up an environment to help learn and evaluate my options. Should I decide I want to learn how to set up a multi-site, multi-node Windows cluster, I could spin up that environment very quickly and start learning with minimal hardware requirements on my part.

The cloud offers great learning potential for those interested. That said, it is obviously not free. There is cost for the cloud services and of course one must still invest personal time into the “sandbox” in order to learn the technology properly.

The Wrap

Personally, I see no threat from the cloud movement. Some may worry about the cloud automating them out of a job. The truth is, data professionals are always trying to automate things. Automation is not really entirely new and it seems there is always more to be automated.

The cloud offers new avenues to grow one’s career. The technology is getting more and more interesting. Is the cloud blowing past you and your career, or are you riding the Jet Stream through the clouds and into your future?

SQL Server Fixed Role Permissions

Roles and Permissions

Some of my recent articles have been focused on permissions and security. There is good reason for that – security is important and all too often it is misunderstood.

You can catch up with a couple of those articles here and here.

It is very important to understand who has what level of access within the server and databases on that server. Sometimes we see users being granted server or database access through the fixed roles available in SQL Server. How exactly do you know what permissions those individuals have via role membership? This article will help to reveal the permissions granted to the various roles and maybe a gotcha or two.

Finding Permissions

There is ample documentation on what the permissions are for each of the various fixed server and fixed database roles in SQL Server. Some of that documentation can be found here and here and here. With all of that documentation, you may be surprised to hear that it is not quite as easy to find the permissions of these roles via queries from within SQL Server – with a caveat. I am going to discuss some documented means to retrieve the permissions for the various roles and also discuss the pitfalls of these solutions.

What are the fixed roles again? Just in case you did not see them in the links from the previous paragraph, I will list the various fixed roles here.

Fixed Server Roles      Fixed Database Roles
public                  public
sysadmin                db_owner
securityadmin           db_accessadmin
serveradmin             db_securityadmin
setupadmin              db_ddladmin
processadmin            db_backupoperator
diskadmin               db_datareader
dbcreator               db_datawriter
bulkadmin               db_denydatareader
                        db_denydatawriter

These default roles do have a unique set of permissions for each role. As noted, the permissions are documented well enough. Sometimes, it is preferable to just query the system to retrieve a list of the permissions for each role. This is especially true if one is in need of providing documentation for an auditor on who has what permission.

When trying to query for a list of permissions, one may feel as though they have fallen off their rocker just as the granny in this pic to the left.

Never fear, however, for there is a method to find the permissions of the fixed roles. Let’s take a look at what it takes to query the permissions associated to each fixed role.

System Queries

Unlike most principals, where one can query the various system catalogs to retrieve the permissions assigned to the principal, the fixed roles do not expose the assigned permissions in the same way. With fixed roles, there are two stored procedures that have been created to retrieve the permissions. These stored procedures are sp_dbfixedrolepermission and sp_srvrolepermission.

Immediately I have a “rocker moment” for each of these stored procedures. In the documentation there is a note that states the following.

This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.

When peeking into the secret sauce behind each of these stored procedures, there is nothing extraordinary to how the permissions are retrieved. In fact, both procedures reference a particular object in addition to the catalog view specific to the type of role (e.g. sys.server_principals or sys.database_principals). This second object is called sys.role_permissions. Wait, I said there is no direct view or table to query for the permissions, right?

Let’s try to query that table.
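
Presumably something along these lines (the query fails over a standard connection; it only resolves over the DAC):

SELECT * FROM sys.role_permissions;
-- errors with an invalid object name on a standard connection;
-- connect via the DAC (admin:<servername>) to see the rows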

Boom! We have just had another “rocker” moment. As it turns out, this table is the secret sauce to the fixed role permissions being accessible via query. This table can be queried from the stored procedures and can be queried directly – if it is queried from a DAC connection. Most will probably not connect to the DAC just to query the role permissions, so what can we do?

Here is a basic script showing what I have done.
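
This is a minimal sketch of that idea: capture the output of both deprecated procedures into a temp table that can then be joined to principal and role membership queries.

CREATE TABLE #RolePerms (
    RoleName sysname,
    Permission nvarchar(256)
);

/* fixed server role permissions */
INSERT INTO #RolePerms (RoleName, Permission)
EXEC sp_srvrolepermission;

/* fixed database role permissions */
INSERT INTO #RolePerms (RoleName, Permission)
EXEC sp_dbfixedrolepermission;

SELECT RoleName, Permission
FROM #RolePerms
ORDER BY RoleName, Permission;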

In this script, I have taken the results from each of the stored procedures and dumped them into a temp table. Using this temp table, I can now join to this table to get a more complete list of the permissions in effect for various principals. Once that more complete list is made, then it can be handed to the auditors to satisfy them for at least a week before they ask again ;).

Now it is time for yet another “rocker” moment. Look carefully at the output from these stored procedures. Remember the notice that they will be removed (i.e. they are on the deprecation list)? It seems there is good reason to remove them from SQL Server – the permissions in sys.role_permissions are not being maintained. That is correct! There are permissions listed in the output of these procedures that are no longer applicable!

If the list is not entirely accurate, then what can be done to get an accurate list of permissions? As it turns out, it seems one may need to code a solution that has the permissions hard coded in the script – very similar to what these system stored procedures were doing.

Recap

Capturing fixed role permissions is possible through the use of two system stored procedures. Just like the red telephone booths, these stored procedures are soon to be a thing of the past. These stored procedures are deprecated and may be just as reliable as those old telephone booths.

Too bad there is not a better means to trap the permissions from these fixed roles. It would be really nice to be able to view them just the same as can be done with the other principals (users and logins).

Now go forth and Audit your roles.

PS

What is up with that weird granny pic? Well, it was a challenge from Grant Fritchey to use the image in a technical blog post. You can read the challenge invite over here. And yeah, I know it is some sort of Dr. Who thing.

SQL Server Permissions – Database Roles


EZ PZ Permission Squeezee

Given the critical level of importance related to permissions, one may think it is a concept that is well understood by all who are given the charge of protecting the data.

There is a great responsibility when the keys to the data kingdom are granted to a DBA. The DBA is responsible for ensuring the integrity and security of the data. To ensure the security of the data, the DBA has the responsibility of granting, revoking, or denying access, at various levels, to the data and objects within the database.

Despite the high visibility and critical nature of this concept, understanding permissions and assigning permissions does not seem to be as straightforward as it should be. Evidence of this is something I see far too frequently in the wild, as illustrated by the following image.

This screenshot is only a part of the problem that I wish to break down and discuss in this article.

SQL Server Permissions

A fundamental component of SQL Server is the security layer. A principal player in security in SQL Server comes via principals. In a previous article, I outlined the different flavors of principals while focusing primarily on the users and logins. You can brush up on that article here. While I touched lightly, in that article, on the concept of roles, I will expound on the roles a bit more here – but primarily in the scope of the effects on user permissions due to membership in various default roles.

Let’s reset back to the driving issue in the introduction. Frequently, I see what I would call a gross misunderstanding of permissions in the way people assign permissions and role membership within SQL Server. The assignment of role membership does not stop with database roles. Rather, it is usually combined with a misconfiguration of the server role memberships as well. This misunderstanding can really be broken down into one of the following errors:

  • The belief that a login cannot access a database unless added specifically to the database.
  • The belief that a login must be added to every database role.
  • The belief that a login must be added to the sysadmin role to access resources in a database.

The experienced professional will likely note that there is a direct conflict between a few of these beliefs. That said, all too often I see all three of these misconceptions implemented in every instance for nearly every user.

Let’s start looking at these misconceptions. To investigate these problems, I will create a login. After creating the login, I will add that login as a member to the sysadmin role. Once the login is added to the sysadmin role, I will then run some simple tests within my DBA database.

Sysadmin

The creation of a server principal (login) and adding the principal to the sysadmin role is fairly simple. The next couple of screenshots are followed by a quick script that will perform the same actions.

As was promised, here is the script that will do the same thing as illustrated in the GUI above.
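
A minimal sketch of that script (the password here is a throwaway placeholder):

USE master;
GO
CREATE LOGIN superuser
    WITH PASSWORD = N'SuperDuper$tr0ngPwd';  -- placeholder password
GO
ALTER SERVER ROLE sysadmin ADD MEMBER superuser;  -- use sp_addsrvrolemember on pre-2012 instances
GO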

With the user now in place, let’s test. The primary test is of the belief that a server principal cannot access a database when explicit database permissions have not been granted. Here is the first batch of tests that I will run.
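
A sketch of that first batch, assuming a database named DBA as used throughout this article:

EXECUTE AS LOGIN = 'superuser';
GO
USE DBA;
GO
/* test 1: can I see objects in the database? */
SELECT * FROM sys.objects WHERE is_ms_shipped = 0;
/* test 2: does a database principal exist for this login? */
SELECT * FROM sys.database_principals WHERE name = 'superuser';
/* test 3: what permissions does this principal hold? */
SELECT * FROM fn_my_permissions(NULL, 'SERVER');
/* test 4: what does sp_helprotect report for the account? */
EXEC sp_helprotect @username = 'superuser';
GO
REVERT;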

The first statement allows me to impersonate the superuser login. From the impersonated connection, I first check to see if I can query the sys.objects system catalog. Then I test the database_principals system catalog. Next in line is to check the list of permissions that have been granted to the superuser account. Each of these queries executes successfully without error. Here is a screen grab for these first three tests.

Notice the first two queries returned an empty set. This is not a failure, rather evidence that the select statement ran successfully. In the third result set, we can see that the superuser account has all sorts of server level permissions. In the result set there was not a single database level permission.

The last query that utilized sp_helprotect returned the following error:

Msg 15330, Level 11, State 1, Procedure sys.SP_HELPROTECT, Line 302
There are no matching rows on which to report.

This is confirmation that there is no database user called superuser.

So I can query a database without my server principal being given direct access to the database (it is worth reiterating here that this principal is in the sysadmin server role), but can I do other things such as create objects? Let’s test that with the following script.
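
A sketch of that object creation test, still impersonating the superuser login:

USE DBA;
GO
EXECUTE AS LOGIN = 'superuser';
GO
IF OBJECT_ID('dbo.MySuperTable') IS NOT NULL
    DROP TABLE dbo.MySuperTable;
GO
CREATE TABLE dbo.MySuperTable (
    SomeID int IDENTITY(1, 1) NOT NULL,
    SomeDate datetime NOT NULL DEFAULT (GETDATE())
);
GO
/* validate the table was created successfully */
SELECT name FROM sys.objects WHERE name = 'MySuperTable';
GO
REVERT;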

This script is straight forward. All it does is check for a table. If that table exists, then drop it and recreate it. The last little bit will check to confirm the successful creation of the table. This script succeeds as illustrated in the following image.

That should be pretty convincing that if you add a server principal to the sysadmin server role then that user has access to the databases. These tests have illustrated that it is not necessary to add a server principal as a database principal when that server principal is in the sysadmin role (an erroneous configuration often seen). If the database principal is not necessary in this case, then what will happen if a database principal does exist?

Database Principal in Every Database Role

The next logical step in the sequence is to create a database principal for the already created superuser server principal. Once created, we will test to see what effects if any can be observed by having this database principal in every database role as well as the sysadmin role. This will help to test the first two bullet items from the list of common configurations I have seen in the wild. Let’s start with the script that will help create the principal to be used during the next iteration of tests.
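
Here is a sketch of that script (on pre-2012 instances, sp_addrolemember replaces ALTER ROLE):

USE DBA;
GO
CREATE USER superuser FOR LOGIN superuser;
GO
ALTER ROLE db_owner ADD MEMBER superuser;
ALTER ROLE db_accessadmin ADD MEMBER superuser;
ALTER ROLE db_securityadmin ADD MEMBER superuser;
ALTER ROLE db_ddladmin ADD MEMBER superuser;
ALTER ROLE db_backupoperator ADD MEMBER superuser;
ALTER ROLE db_datareader ADD MEMBER superuser;
ALTER ROLE db_datawriter ADD MEMBER superuser;
ALTER ROLE db_denydatareader ADD MEMBER superuser;
ALTER ROLE db_denydatawriter ADD MEMBER superuser;
GO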

The script basically creates a database principal and then adds that principal to each of the default fixed database roles available in SQL Server. Those roles are easily viewed in the first image in this article and are also listed here for ease of reference (intentionally ignoring the public role).

  • db_owner
  • db_accessadmin
  • db_securityadmin
  • db_ddladmin
  • db_backupoperator
  • db_datareader
  • db_datawriter
  • db_denydatareader
  • db_denydatawriter

The tests for this round of changes will be just like in the previous section. Here is the script to be used for the first batch of tests.

The major differences between this version of the test and the previous iteration are that the table is still in existence (I did not drop it, but that will come shortly) and that I have created a database principal, so the first two queries will show a single row each instead of an empty result set. The next significant difference is the last query, which utilizes sp_helprotect. Instead of an error like the first time, this execution gives me the following results.

Next I will rerun the test to create an object with the following script:

This script will produce the same results as in the first example. The table, since it was already there, will be dropped and then recreated. After it is recreated, the validation script will find the table and return a single row.

This series of tests has just shown that a database principal tied to a login which is added to the sysadmin role and then added to all database roles will still be able to do pretty much everything a sysadmin can do. To this point, I have shown how bullet points one and three are not sound in reasoning. With that, there is still the test to prove (true or false) the final bullet point that a principal should be added to every database role.

Before testing the validity of the configuration, I want you to look closely at the list of database roles.

  • db_owner
  • db_accessadmin
  • db_securityadmin
  • db_ddladmin
  • db_backupoperator
  • db_datareader
  • db_datawriter
  • db_denydatareader
  • db_denydatawriter

Just reading the names of these roles should be a good indicator that adding a user to every database role is not a desired configuration. I am sure the question is bubbling up now: “but how was it working in the previous examples?” The response to that is very easy: “The user is a sysadmin!”

To test this last bit of the configuration, it is time to remove the server principal from the sysadmin role. I will do that via the following script.
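
Something along these lines:

USE master;
GO
ALTER SERVER ROLE sysadmin DROP MEMBER superuser;  -- sp_dropsrvrolemember on pre-2012 instances
GO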

With the user removed from the sysadmin role, it is time to repeat the tests.

The results are significantly different this time.

Msg 229, Level 14, State 5, Line 132
The SELECT permission was denied on the object ‘objects’, database ‘mssqlsystemresource’, schema ‘sys’.
Msg 229, Level 14, State 5, Line 136
The SELECT permission was denied on the object ‘database_principals’, database ‘mssqlsystemresource’, schema ‘sys’.

I just tried to query two objects and was refused due to a denial in each case. What if I try to query that table I created previously?
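
The query would be as simple as:

USE DBA;
GO
SELECT * FROM dbo.MySuperTable;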

This will give me the following error.

Msg 229, Level 14, State 5, Line 141
The SELECT permission was denied on the object ‘MySuperTable’, database ‘DBA’, schema ‘dbo’.

Well that is not the desired effect or is it? Remember, I added the superuser principal to every role and that includes the “deny” roles.

How about if I try to drop and recreate that table?

Again, no dice!

Msg 229, Level 14, State 5, Line 157
The SELECT permission was denied on the object ‘objects’, database ‘mssqlsystemresource’, schema ‘sys’.

With this constant blocker of being denied cropping up, let’s take a look at relevant permissions related to the database roles to which I added the superuser principal. To evaluate those permissions, I am going to use the following script that relies on the results of a system stored procedure called sp_dbfixedrolepermission. (Word of advice, this procedure is deprecated and I will be writing more on that in a future article.)
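
Here is a sketch of that script. It dumps the procedure output into a temp table and then joins it to the role memberships held by the superuser principal (run it from a connection with adequate permissions, not the impersonated one).

USE DBA;
GO
CREATE TABLE #DbRolePerms (
    DbFixedRole sysname,
    Permission nvarchar(256)
);

INSERT INTO #DbRolePerms (DbFixedRole, Permission)
EXEC sp_dbfixedrolepermission;

SELECT r.name AS RoleName,
       p.Permission
FROM sys.database_role_members drm
    INNER JOIN sys.database_principals r
        ON drm.role_principal_id = r.principal_id
    INNER JOIN sys.database_principals u
        ON drm.member_principal_id = u.principal_id
    INNER JOIN #DbRolePerms p
        ON p.DbFixedRole = r.name
WHERE u.name = 'superuser'
ORDER BY RoleName, p.Permission;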

Looking at the results of that script, one can start to see why we are having so many problems executing a select.

In this result set, one can see that by virtue of the db_owner role and the db_datareader and db_datawriter roles, we have permissions that are being GRANTed. Immediately after that, we hit the DENY. This behavior is to be expected. A DENY permission will override a GRANT except when the principal is a member of the sysadmin server role.

Seeing this in action, the next logical step is to remove the role membership from the two “deny” roles (in this scenario where all are checked) for starters. Once done, go ahead and re-run these tests; you will see that things work differently. While you are at it, take it a step further and rerun the tests after removing superuser from the db_owner role. Think about it: do you really need db_owner along with all of the “grant” related roles, given that db_owner grants all of those same permissions?

One last bit of homework is for you to run the tests another time from the point when the database principal was created. Only for this last batch of tests, impersonate the database principal instead of the server principal. You can do that by using the following:
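
Presumably the database principal impersonation looks something like this:

USE DBA;
GO
EXECUTE AS USER = 'superuser';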

Instead of using the following impersonation statement:
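
EXECUTE AS LOGIN = 'superuser';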

You may be interested in the results.

Recap

I have taken you on a journey through three common misconfigurations with principals and permissions. It really is quite unnecessary to add a principal to the sysadmin fixed server role as well as every fixed database role. Even if the principal is not a member of the sysadmin role, it is foolish to add it to every database role. There just is no good logic in setting up permissions in that manner.

Take a moment to focus and you can say EZ PZ permission squeezee.

Parse Dynamics AX Context Info

Dynamics AX and SQL Server

More and more I am seeing clients requiring assistance with the Microsoft Dynamics Suite. Each of the products in the suite comes with a different set of performance issues and gotchas. In this article I will only be discussing the AX product and an easy tweak to make troubleshooting that product much easier from the perspective of the database administrator. This tweak is to enable the context info from within the administration console.

Enable Context Info

Some may call this a critical setting that must be activated on every transaction heavy Dynamics AX AOS server. One of the most common reasons is that DAX user sessions frequently block one another. Occasionally the blocking may be uncomfortably long.

In order to enable this setting on each DAX AOS server, the following steps should be followed:

  • Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\DynamicsServer\6.01\Original (installed configuration). The last key, Original (installed configuration), is the key name for the current server configuration. If your system uses a different configuration than the original installed configuration, navigate to the currently active configuration.
  • Create a string registry value called ‘connectioncontext’ and set the value to 1.
  • Restart the AOS.

The steps to implement this change are not terribly difficult. The risk is very low as well, since there is very minimal cost for sending this additional info. Even on an extremely busy system for one client, we have yet to see a negative impact with this setting enabled.

Easy Troubleshooting for the DBA

Now that the context info is enabled within the application, troubleshooting AX performance issues from within the database becomes somewhat easier for the database administrator. Just enabling the setting doesn’t bring you to the promised land though. You still need to do a bit of work.

Once the setting is enabled, what actually happens is that the AX application starts to send an extra chunk of data along with each connection to the database server. This chunk of data is the context info. The context info that AX decides to send along is not straightforward to read, however. The AX context info is sent to SQL Server as a varbinary.

What does the varbinary format mean for you as the database administrator? It means that you still have a bit of work to do. Do you need to perform that extra work every time you look at the data? Well, the short answer is “it depends”! If you are smart about your tool-set (e.g. your set of administration scripts), then you will save this extra work there once. If you do not yet have a tool-set and rewrite your queries every time, you obviously fall at the far opposite end of the spectrum and will have much more work to do.

Whichever end of the spectrum you fall within, here is a script to integrate into your tool-set to help make your AX DBA work just a tad bit easier.
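
Here is a hedged sketch of such a script. It assumes the AX context info arrives as plain ASCII text in the form '<AX session id> <AX user id> …' (space separated); verify the exact layout in your own environment before relying on the parse.

SELECT  es.session_id AS SqlSpid,
        es.login_name,
        LEFT(ci.ContextText, CHARINDEX(' ', ci.ContextText + ' ') - 1) AS AxSessionId,
        LTRIM(SUBSTRING(ci.ContextText,
              CHARINDEX(' ', ci.ContextText + ' ') + 1, 128)) AS AxUserId
FROM sys.dm_exec_sessions es
    CROSS APPLY (SELECT LTRIM(RTRIM(CAST(es.context_info AS varchar(128)))) AS ContextText) ci
WHERE DATALENGTH(es.context_info) > 0
  AND es.program_name LIKE '%Dynamics%';  -- assumption: filter to the AX connections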

So What does it DO?

So what value does this query actually bring you? I have talked about it making life easier by enabling the context info from within AX, but I didn’t dive into any details on what it will provide.

Looking at the query I just provided, one can surmise that the context info will provide two significant pieces of information. The first bit is the Session ID. This is not the spid within SQL Server; rather, it is the session id from within AX. The second piece of information that is highly valuable is the User tied to that AX session. The AX application connects as the application’s service account for all spids within SQL Server, so the spid and spid user are fairly useless when trying to figure out who is causing what level of pain – all AX users in SQL Server appear to be the same user. The SQL spid is equally useless to the DAX admins because it will not match the DAX session id. Both of these factors lead to an extra amount of frustration between the DBA and the DAX admin in the middle of a performance slowdown.

Being able to extract the DAX User and Session ID from the context info will significantly reduce troubleshooting time when in the trenches trying to figure out who is running what from within the application. This reduces the chances of taking a guess and gives good solid evidence that can be taken back to the business users and try to improve their processes and the overall performance of the system.

User Contains Invalid Characters – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Logins and Users

It seems appropriate to re-introduce the concept of principals (aka Logins and Users). Rather than go into depth about principals here though, I will refer you to a recent article on the topic. The article in question was another “basics” article and can be found here.

With that out of the way, it should be conceded that creating principals is a common practice and possibly a frequent requirement of the data professional. While creating those principals, there is a good chance that one will run into an absurd error every now and then. Today, I want to discuss one absurd error. The fix for the error may seem just as absurd as the error, but it is really easy to implement.

Invalid Characters

Here is the error message that is quite possible to encounter while creating principals.

Msg 15006, Level 16, State 1, Line 6
‘SomeDOmain\jason’ is not a valid name because it contains invalid characters.

At first look, this error makes absolutely no sense. The error states there is an invalid character somewhere in the string “SomeDomain\jason”, yet every character in that string is supported and normal for the collation. This can be a head-scratcher for sure.

To better understand this error, let’s try to reproduce the error. First, we need to create a login.
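
A sketch of the login creation (as noted below, “SomeDOmain” stands in for the real domain name):

USE master;
GO
CREATE LOGIN [SomeDOmain\jason] FROM WINDOWS;
GO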

Here, I have used “SomeDomain” in lieu of my actual domain or local workstation name. This statement will complete successfully given the user exists within the domain or on the Windows workstation. Great so far!

The next step is to create a database user within the AdminDB (you can pick a database that exists in your environment) and map this user to the Login created in the previous step. This can be done with the following script:
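
Here is a sketch of that script, reproducing the fat finger discussed below (note the zero in the FOR LOGIN portion):

USE AdminDB;
GO
CREATE USER [SomeDOmain\jason]
    FOR LOGIN [SomeD0main\jason];  -- a 0 (zero) typed in place of the capital O
GO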

Bam! Executing the script produces:

Msg 15006, Level 16, State 1, Line 6
‘SomeDOmain\jason’ is not a valid name because it contains invalid characters.

This is where a close inspection of the script is required. Due to a fabulous fat finger, a 0 (zero) instead of an O (capital o) was typed in the second occurrence of “SomeDOmain”. This is easy enough to reproduce with a typo in any portion of a Windows login that already exists in SQL as a login principal.

The Fix

The fix is insanely easy once you figure out that “invalid character” actually means you mis-typed the Login portion of the Create User statement. The fix is to type the login correctly. Knowing is half the battle! Running into this error in the wild could cost you a few minutes trying to figure it out while prepping to throw something through the monitor.

Recap

In this article I have shown how a simple mistake can lead to a really obtuse error message that doesn’t seem to make much sense. A little care and attention to properly typing the login names will save you a bit of time and hair on the troubleshooting end of creating principals.

What Agent Job is Running – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Active Queries

If you are like me, you have had the opportunity on more than one occasion to try and figure out what is currently active on your SQL Server. The reason to figure this out is usually tied to some sort of performance issue that you have to dive in and troubleshoot.

When checking for current activity through sp_who2 or by querying the DMVs (sys.dm_exec_sessions, sys.dm_exec_requests, sys.dm_exec_connections), there is inevitably some sort of session that looks something like this:

[Screenshot: an active session whose program_name shows only “SQLAgent - TSQL JobStep (Job 0x… : Step 1)”]

This is not very helpful in this format. What I have seen most people do when they see this kind of result (and they care enough to know what is running) is to open up the “Job Activity Monitor” and then try to figure out manually what job is truly running. You can imagine the nightmare this becomes if there are more than a few jobs running.

Currently Running Agent Jobs

There is a significantly easier way to find the name and step of the agent job that is currently running when using your favorite dmv query to explore current activity. Let’s start with a simple query.
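
The simple query might look like this:

SELECT  es.session_id,
        es.login_name,
        es.program_name,
        es.status
FROM sys.dm_exec_sessions es
WHERE es.program_name LIKE 'SQLAgent - TSQL JobStep%';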

If you have jobs that are currently executing, then this query should return some results representative of the running jobs. Unfortunately, you only know that the source of the spid happens to be the SQL Agent. I am going to dirty up this simple query with quite a bit more code so it looks like the following:
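
The following is a sketch of that dirtied-up version. It assumes the default program_name format of 'SQLAgent - TSQL JobStep (Job 0x… : Step n)'.

SELECT  es.session_id,
        sj.name AS JobName,
        sjs.step_id,
        sjs.step_name,
        er.status,
        qt.query_text
FROM sys.dm_exec_sessions es
    INNER JOIN sys.dm_exec_requests er
        ON es.session_id = er.session_id
    INNER JOIN msdb.dbo.sysjobs sj
        ON SUBSTRING(es.program_name, 30, 34)
         = CONVERT(char(34), CAST(sj.job_id AS varbinary(16)), 1)
    INNER JOIN msdb.dbo.sysjobsteps sjs
        ON sj.job_id = sjs.job_id
       AND sjs.step_id = CAST(REPLACE(SUBSTRING(es.program_name,
               CHARINDEX(': Step ', es.program_name) + 7, 10), ')', '') AS int)
    CROSS APPLY (SELECT REPLACE(REPLACE(st.text, CHAR(13), ' '), CHAR(10), ' ') AS query_text
                 FROM sys.dm_exec_sql_text(er.sql_handle) st) qt
WHERE es.program_name LIKE 'SQLAgent - TSQL JobStep%';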

There is a good reason for how much I have complicated the simple version of the query. I could have left the query significantly less complicated if not for the fact that I also wanted the sql text in a well formatted manner. That accounts for the entire segment in the cross apply.

To retrieve the name of the job that is running, I actually only need this little piece of code right here:
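
In sketch form, with the program_name pulled from a running Agent session:

DECLARE @program_name nvarchar(128);

SELECT TOP (1) @program_name = es.program_name
FROM sys.dm_exec_sessions es
WHERE es.program_name LIKE 'SQLAgent - TSQL JobStep%';

/* style 1 renders the varbinary job_id as the same 0x-prefixed hex
   string that the Agent embeds in program_name */
SELECT sj.name AS JobName
FROM msdb.dbo.sysjobs sj
WHERE CONVERT(char(34), CAST(sj.job_id AS varbinary(16)), 1)
    = SUBSTRING(@program_name, 30, 34);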

This takes the varbinary representation of the job id and converts it to the human friendly string form, formatted so we can compare it to the actual job id and then finally get the job name. From there, I can then retrieve the job step to see exactly where in the process the job is presently executing.

Executing this query, I receive the following results for the job that I have executing right now.

[Screenshot: query results showing the decoded job name and step for the currently executing job]

If I take this code and then integrate it into my favorite query to check for running sessions while investigating issues on the server, I have become just that much more efficient as a DBA.

Recap

It is quite common to be required to investigate performance issues on the server. Even if not a performance issue, there are frequent needs that require us to know what queries are executing at various points in time throughout the day. A complication to this is the varbinary job id embedded in the program name reported for the session during many of these spot checks.

The means to circumvent this complication is a little extra code in your favorite script du jour used to investigate running sessions. I recommend adding a code segment, such as the code I have shown in this article, to help simplify your research tasks and help you look more like a rockstar. Of course, you could always resort to the other method touched on in this article – trial and error through a manual process of elimination via “Job Activity Monitor”.

Database Backups – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Database Backups

In my previous two articles, I touched on some data that is available to the data professional with regard to backups and database restores. In the article discussing the restore history data, I alluded to another topic related to restores that could have been discussed.

Today, I will be diving into that alternate direction. This direction is related to database backups. More specifically, how do you truly know if you have a successful backup? Here’s a hint: it has to deal with restores.

If you haven’t heard it before, here it is in simple terms: “you do not have a backup until you have restored the backup”. If you ponder that for a minute, it is quite logical and makes plenty of sense. The only way to confirm that you have a backup is to test the purpose of the backup, and that is to recover the database back to the state represented by the backup.

Testing Backups

Creating a database backup is a pretty straight-forward task. This is easy enough to do through the GUI or from a tsql script, or even, *shudder*, from a maintenance plan. Creating the backup is the easy part. If you are not creating backups, I hope there is a good reason for it (and yes, there are valid reasons for not creating a backup of certain databases).

This article will not explore the nuances of the ways to create a backup of a database. Suffice it to say, there are multiple options and methods. This article will focus on the second part of creating a successful backup – testing the backup. In other words, restoring the backup that was created.

So how does one go about testing a backup? The simple answer as already stated is to perform a restore. This means through the use of the restore command. But is it really that simple?

If I have the following backups available for my AdventureWorks2014 database, where should I test the validity of the backup?

[Screenshot: the available backups for the AdventureWorks2014 database]

Would it make much sense to take any of these backups and test the restore process on the same server as the source of the backup? To be honest, that is a question that may have an entirely different answer in each and every environment. It requires a little insight into the business needs, available resources, and the procedures and policies in place for the environment. For me, I typically like to create an automated restore process that will restore the previous night’s backup onto a test/stage/dev server that is not in production.
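
In its simplest form, that nightly restore boils down to something like the following sketch (the paths here are illustrative; confirm the logical file names first with RESTORE FILELISTONLY):

RESTORE DATABASE AdventureWorks2014_Test
FROM DISK = N'C:\Backups\AdventureWorks2014_Full.bak'  -- hypothetical path
WITH MOVE N'AdventureWorks2014_Data' TO N'C:\Data\AW2014_Test.mdf',
     MOVE N'AdventureWorks2014_Log'  TO N'C:\Data\AW2014_Test.ldf',
     RECOVERY,
     STATS = 10;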

I have written previously on how I do this sort of automated restore. You can read all about it from the original article here. In that article I provide a script to assist with the restore of these backups. You are welcome to test it out and play around with it while setting up your backup validation environment. In using the script, the user assumes all risk.

I recommend an automated restore system to ensure the backups are tested on a routine basis. The benefits of doing this are more than just a few. That said, here are a few of those benefits:

  • Confidence in backups
  • Quick recovery in the event of failure or disaster
  • A usable environment for quick data comparison
  • A passable environment for reporting

And the best benefit is that you can get a good night’s sleep knowing your backups are reliable.

Caveat

With automated restores, there may be an occasional failure. Heck, you may run into a bit of a head-scratcher here or there trying to get them to work in the first place. It’s not technically easy the first time. With some practice, it gets much easier. This method is intended to be a cheap solution. There may be a pricier solution out there that can be bought. If that is in your budget – go with it. The main point is to do it.

After you get these restores working, I recommend using the scripts in this article to check the restore history from time to time. Take it an extra mile and generate some reports from that data.

Recap

I have provided some information on how and why to restore a database. The primary reason is that you never know how good your backup is until you have restored it. Some say you do not have a backup until you have restored it. This article offers a solution and some opinions on how to do that efficiently.
