SSRS Custom Shared Schedule

Reporting Services


Reporting Services (SSRS) is a feature-rich tool for delivering reports to various consumers. There is plenty of power within SSRS, giving one the ability to build visualizations, render reports in various formats, and even schedule reports to be delivered in different formats or on different schedules.

Unfortunately, the scheduling capability within SSRS is fairly weak. While it is true that one can accomplish a varied array of schedules, the scheduling in SSRS falls far short of the power of SQL Agent, for instance.

I will explore the deficiency of the scheduling tool within SSRS in a very specific case; you may even be familiar with this deficiency already. Many organizations need to produce end-of-month reports that must run on the last day of the month. If you are familiar with the scheduling tool within SSRS, you already know that this is not possible (at least through SSRS 2014). This article will step you through how to schedule a report subscription to run on the last day of the month.

Schedule Options

Before diving into the custom schedules, let’s take a closer look at the options available for scheduling through SSRS.

While the tool does provide various options, the granularity is not diverse enough to cover many legitimate scheduling needs – especially the "last day of month" requirement of many month-end reports.

As you may be aware, SSRS subscriptions are actually run through the SQL Agent despite being set through SSRS. If I take a look at some of the scheduling options in SQL Agent, I can see the following.

Note here that there are various built-in options to schedule on a monthly basis, including the “last” option. If I wanted to look even closer at the available options I would see the following.

As you can see here, I can schedule for multiple different types of “last” options relative to the month. One of these options happens to be the last day. This helps to illustrate just how much more powerful and versatile the scheduling within SQL Agent is than what we get with the SSRS scheduler. All of this despite the fact that SSRS subscriptions are actually executed by the SQL Agent. Doesn’t that seem a bit odd?

Custom Schedule

Now that we better understand the limitations of the SSRS scheduler, and given that SSRS subscriptions are executed through the SQL Agent, let's move on to a bigger, better, and much more useful means of scheduling SSRS reports.

The very first thing you should do is create a shared schedule. This should be a shared schedule created as a run-once schedule. Let the schedule run that one time and then proceed to the following steps. If you need help creating a shared schedule, here is an msdn article. When you create the shared schedule, I recommend using a descriptive name that you can remember; the name will be useful in the next step. For the purposes of this article, my schedule is named "EndOfMonth".

Once the schedule is created, the next thing to do is to query the ReportServer database. Make sure you know the name of your database – some people have changed the ReportServer database name from the default, so this is an important piece of information to remember. The query against the ReportServer database is predominantly just to get the schedule id of the newly created schedule.
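
Something along these lines should do the trick (substitute your report server database name and schedule name as needed):

USE ReportServer;
GO

SELECT s.ScheduleID
     , s.Name
FROM dbo.Schedule s
WHERE s.Name = 'EndOfMonth';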

When I run that query, I receive the following results.

I now want to take that ScheduleID and use it to determine which SQL Agent job is actually related to that schedule so I can fetch some info from the job. I could skip this entirely and go straight to the subsequent step, but this helps illustrate what needs to be done there. So, from here, let's query the msdb database in SQL Server to fetch some info from the job system.
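
A query similar to the following will find the job. Paste in the ScheduleID value returned by the previous query; the placeholder text here is just that, a placeholder:

DECLARE @ScheduleID NVARCHAR(60) = 'REPLACE-WITH-YOUR-SCHEDULEID';

SELECT j.name AS JobName
     , js.step_name AS StepName
     , js.command AS CommandText
FROM msdb.dbo.sysjobs j
INNER JOIN msdb.dbo.sysjobsteps js
    ON js.job_id = j.job_id
WHERE js.command LIKE '%' + @ScheduleID + '%';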

The ScheduleID is used within a command in a job step. By passing the ScheduleID into this query and then comparing against the existing job steps, I am able to retrieve the job that is related to the SSRS shared schedule (subscription). When I run the preceding query, I receive the related job, its step, and the command text for that step.

Take note that the job name, the step name, and the command text for the step all contain the ScheduleID for the shared schedule that I created. Yes, I could easily have changed my query to compare against the job name instead, and that would have worked just fine in this case; by querying the command, however, I can confirm that the schedule is actually being used. The most important piece of information in this result is the entire command text for the job. I will need to take this command text and use it to populate a brand new SQL Agent job. This is how I will get my custom schedule for the SSRS subscription.

From here, I just need to create a SQL Agent job that uses the options for a monthly schedule indicating last day from the two drop down menus illustrated previously in this article. Then all that is left is a sigh of relief and a boom bada bing.
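
For reference, the command text copied from the SSRS-created job will look similar to the following (the GUID is the ScheduleID retrieved earlier and is shown here as a placeholder):

EXEC [ReportServer].dbo.AddEvent
      @EventType = 'SharedSchedule'
    , @EventData = '00000000-0000-0000-0000-000000000000';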

Recap

SSRS does not have the built-in capability for some of the more complex, and oftentimes regularly required, report schedules that various business requirements demand. By following the steps outlined in this article, you can circumvent that shortcoming and meet the business requirements while looking like a hero to those who need the more advanced report schedules.

Parse Dynamics AX Context Info

Dynamics AX and SQL Server

More and more I am seeing clients requiring assistance with the Microsoft Dynamics Suite. Each of the products in the suite comes with a different set of performance issues and gotchas. In this article I will only be discussing the AX product and an easy tweak to make troubleshooting that product much easier from the perspective of the database administrator. This tweak is to enable the context info from within the administration console.

Enable Context Info

Some may call this a critical setting that must be activated on every transaction heavy Dynamics AX AOS server. One of the most common reasons is that DAX user sessions frequently block one another. Occasionally the blocking may be uncomfortably long.

In order to enable this setting on each DAX AOS server, the following steps should be followed:

  • Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\DynamicsServer\6.01\Original (installed configuration). The last key, Original (installed configuration), is the key name for the current server configuration. If your system uses a different configuration than the original installed configuration, navigate to the currently active configuration.
  • Create a string registry value called ‘connectioncontext’ and set the value to 1.
  • Restart the AOS.

The steps to implement this change are not terribly difficult, and the risk is very low. There is very minimal cost to sending this additional info; even on an extremely busy system for one client, we have yet to see a negative impact with this setting enabled.

Easy Troubleshooting for the DBA

Now that the context info is enabled within the application, the pain of troubleshooting AX performance issues within the database becomes somewhat easier for the database administrator. Just enabling the setting doesn't bring you to the promised land though; you still need to do a bit of work.

Once the setting is enabled, the AX application starts to send an extra chunk of data along with each connection to the database server. This chunk of data is the context info. The context info that AX sends along is not straightforward to read, however, because it arrives in SQL Server as a varbinary.

What does the varbinary format mean for you as the database administrator? It means you still have a bit of work to do. Do you need to perform that extra work every time you look at the data? Well, the short answer is "it depends"! If you are smart about your tool-set (e.g. your set of administration scripts), then you will save this extra work there. If you do not yet have a tool-set and rewrite your queries every time, you obviously fall at the far opposite end of the spectrum and will have much more work to do.

Whichever end of the spectrum you fall within, here is a script to integrate into your tool-set to help make your AX DBA work just a tad bit easier.
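
Here is a minimal sketch of such a script. It assumes the AX context string is space-delimited text with the AX session id first and the AX user second; verify the exact layout on your own AOS version before relying on it:

SELECT es.session_id AS SqlSpid
     , es.login_name
     , ci.context_text
     -- first token: the AX session id
     , LEFT(ci.context_text, CHARINDEX(' ', ci.context_text + ' ') - 1) AS AXSessionID
     -- remainder: the AX user tied to that session
     , LTRIM(STUFF(ci.context_text, 1, CHARINDEX(' ', ci.context_text + ' '), '')) AS AXUser
FROM sys.dm_exec_sessions es
CROSS APPLY (SELECT LTRIM(RTRIM(CAST(es.context_info AS VARCHAR(128)))) AS context_text) ci
WHERE es.context_info <> 0x;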

So What does it DO?

So what value does this query actually bring you? I have talked about enabling the context info from within AX making life easier, but I didn't dive into any details on what it will provide.

Looking at the query I just provided, one can surmise that the context info provides two significant pieces of information. The first bit is the Session ID. This is not the spid within SQL Server; rather, it is the session id within AX, which is a different value. The second piece of highly valuable information is the User tied to that AX session. Within SQL Server, every AX spid will show the application service account as its user, so the spid and spid user are fairly useless when trying to figure out who is causing what level of pain – all AX users in SQL Server appear to be the same user. The SQL spid is equally useless to the DAX admins because it will not match the DAX session id. Both of these factors lead to an extra amount of frustration between the DBA and the DAX admin in the middle of a performance slowdown.

Being able to extract the DAX User and Session ID from the context info will significantly reduce troubleshooting time when in the trenches trying to figure out who is running what from within the application. It reduces guesswork and provides good solid evidence that can be taken back to the business users to help improve their processes and the overall performance of the system.

User Contains Invalid Characters – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Logins and Users

It seems appropriate to re-introduce the concept of principals (aka Logins and Users). Rather than go into depth about principals here though, I will refer you to a recent article on the topic. The article in question was another “basics” article and can be found here.

With that out of the way, it should be conceded that creating principals is a common practice and possibly a frequent requirement of the data professional. While creating those principals, there is a good chance that one will run into an absurd error every now and then. Today, I want to discuss one such absurd error. The fix for the error may seem just as absurd as the error itself, but it is really easy to implement.

Invalid Characters

Here is the error message that is quite possible to encounter while creating principals.

Msg 15006, Level 16, State 1, Line 6
‘SomeD0main\jason’ is not a valid name because it contains invalid characters.

At first look, this error makes absolutely no sense. The error states there is an invalid character somewhere in the string "SomeD0main\jason", yet every character in that string appears supported and normal for the collation. This can be a head-scratcher for sure.

To better understand this error, let’s try to reproduce the error. First, we need to create a login.
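
A minimal sketch of the login creation follows ("SomeDomain" stands in for a real domain or workstation name):

USE [master];
GO

CREATE LOGIN [SomeDomain\jason] FROM WINDOWS;
GO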

Here, I have used "SomeDomain" in lieu of my actual domain or local workstation name. This statement will complete successfully, given that the user exists within the domain or on the Windows workstation. Great so far!

The next step is to create a database user within the AdminDB (you can pick a database that exists in your environment) and map this user to the Login created in the previous step. This can be done with the following script:
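
Something along these lines reproduces the problem (AdminDB is just my example database; watch the FOR LOGIN clause closely):

USE [AdminDB];
GO

CREATE USER [SomeDomain\jason]
    FOR LOGIN [SomeD0main\jason];  -- a zero fat-fingered in place of the capital O
GO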

Bam! Executing the script produces:

Msg 15006, Level 16, State 1, Line 6
‘SomeD0main\jason’ is not a valid name because it contains invalid characters.

This is where a close inspection of the script is required. Due to a fabulous fat finger, a 0 (zero) instead of an O (capital o) was typed in the second occurrence of "SomeDomain". This is easy enough to reproduce with a typo in any portion of a Windows login that already exists in SQL Server as a login principal.

The Fix

The fix is insanely easy once you figure out that "invalid characters" actually means you mis-typed the login portion of the CREATE USER statement. The fix is to type the login correctly. Knowing is half the battle! Running into this error in the wild could cost you a few minutes of head-scratching and prepping to throw something through the monitor.

Recap

In this article I have shown how a simple mistake can lead to a really obtuse error message that doesn't seem to make much sense. A little care and attention to properly typing login names will save you a bit of time and hair on the troubleshooting end of creating principals.

What Agent Job is Running – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Active Queries

If you are like me, you have had the opportunity on more than one occasion to try to figure out what is currently active on your SQL Server. The reason to figure this out is usually tied to some sort of performance issue that you have to dive in and troubleshoot.

When checking for current activity through sp_who2 or by querying the DMVs (sys.dm_exec_sessions, sys.dm_exec_requests, sys.dm_exec_connections), there is inevitably some sort of session that looks something like this:

[Screenshot: an active session whose program name reads SQLAgent - TSQL JobStep (Job 0x… : Step 1)]

This is not very helpful in this format. What I have seen most people do when they see this kind of result (and they care enough to know what is running) is open the "Job Activity Monitor" and then try to figure out manually which job is truly running. You can imagine the nightmare this becomes if there are more than a few jobs running.

Currently Running Agent Jobs

There is a significantly easier way to find the name and step of the agent job that is currently running when using your favorite DMV query to explore current activity. Let's start with a simple query.
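
Something as simple as the following will expose the agent-spawned sessions (agent job steps report a program name beginning with "SQLAgent - TSQL JobStep"):

SELECT es.session_id
     , es.program_name
FROM sys.dm_exec_sessions es
WHERE es.program_name LIKE 'SQLAgent - TSQL JobStep%';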

If you have jobs currently executing, this query should return some results representing the running jobs. Unfortunately, you only know that the source of the spid happens to be the SQL Agent. I am going to dirty up this simple query with quite a bit more query so it looks like the following:
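
Here is one version of that query. The SUBSTRING offsets assume the standard program name format of "SQLAgent - TSQL JobStep (Job 0x… : Step N)", and TRY_CONVERT (SQL 2012+) guards against any session that does not match that format:

SELECT es.session_id
     , j.name AS JobName
     , js.step_id
     , js.step_name
     , qt.query_text
FROM sys.dm_exec_sessions es
INNER JOIN sys.dm_exec_requests er
    ON er.session_id = es.session_id
INNER JOIN msdb.dbo.sysjobs j
    ON j.job_id = TRY_CONVERT(UNIQUEIDENTIFIER,
            TRY_CONVERT(VARBINARY(16), SUBSTRING(es.program_name, 30, 34), 1))
INNER JOIN msdb.dbo.sysjobsteps js
    ON js.job_id = j.job_id
   AND js.step_id = TRY_CONVERT(INT, SUBSTRING(es.program_name
            , CHARINDEX(': Step ', es.program_name) + 7
            , LEN(es.program_name) - CHARINDEX(': Step ', es.program_name) - 7))
CROSS APPLY (SELECT REPLACE(REPLACE(st.text, CHAR(13), ' '), CHAR(10), ' ') AS query_text
             FROM sys.dm_exec_sql_text(er.sql_handle) st) qt
WHERE es.program_name LIKE 'SQLAgent - TSQL JobStep%';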

There is a good reason for how much I have complicated the simple version of the query. I could leave the query significantly less complicated if not for the fact that I wanted to also return the sql text in a well-formatted manner; that accounts for the entire segment in the cross apply.

To retrieve the name of the job that is running, I actually only need this little piece of code right here:
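
This is the piece that unlocks the job name (same assumption about the hex string starting at character 30 of the program name):

TRY_CONVERT(UNIQUEIDENTIFIER,
    TRY_CONVERT(VARBINARY(16), SUBSTRING(es.program_name, 30, 34), 1))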

This takes the hexadecimal string representation of the job id embedded in the program name and converts it into a human-friendly form with the appropriate format so we can compare it to the actual job id and finally get the job name. From there, I can then retrieve the job step to see exactly where in the process the job is presently executing.

Executing this query, I receive the following results for the job that I have executing right now.

[Screenshot: query results showing the running job's name, step, and query text]

If I take this code and then integrate it into my favorite query to check for running sessions while investigating issues on the server, I have become just that much more efficient as a DBA.

Recap

It is quite common to be required to investigate performance issues on the server. Even if not a performance issue, there are frequent needs that require us to know what queries are executing at various points in time throughout the day. A complication to this is the hexadecimal job id embedded in the program name that is reported for the session during many of these spot checks.

The means to circumvent this complication is a little extra code in your favorite script du jour used to investigate running sessions. I recommend adding a code segment, such as the code shown in this article, to help simplify your research tasks and help you look more like a rockstar. Of course, you could always resort to the other method touched on in this article – trial and guess through a manual process of elimination via the "Job Activity Monitor".

Database Backups – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Database Backups

In my previous two articles, I touched on some data that is available to the data professional with regards to backups and database restores. In the article discussing the restore history data, I alluded to another topic related to restores that could have been discussed.

Today, I will be diving into that alternate direction: database backups. More specifically, how do you truly know if you have a successful backup? Here's a hint: it has to do with restores.

If you haven't heard it before, here it is in simple terms: "you do not have a backup until you have restored the backup". If you ponder that for a minute, it is quite logical and makes plenty of sense. The only way to confirm that you have a backup is to test the purpose of the backup – recovering the database back to the state represented by that backup.

Testing Backups

Creating a database backup is a pretty straightforward task. This is easy enough to do through the GUI, from a T-SQL script, or even, *shudder*, from a maintenance plan. Creating the backup is the easy part. If you are not creating backups, I hope there is a good reason for it (and yes, there are valid reasons for not creating a backup of certain databases).

This article will not explore the nuances of the ways to create a backup of a database. Suffice it to say, there are multiple options and methods. This article will focus on the second part of creating a successful backup – testing the backup. In other words, restoring the backup that was created.

So how does one go about testing a backup? The simple answer as already stated is to perform a restore. This means through the use of the restore command. But is it really that simple?
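
At its most basic, the test is just a restore of the backup file under a non-conflicting database name. A minimal sketch follows; the paths, file names, and logical names are placeholders for your environment:

RESTORE DATABASE [AdventureWorks2014_Test]
FROM DISK = N'C:\Backups\AdventureWorks2014.bak'
WITH MOVE N'AdventureWorks2014_Data' TO N'C:\Data\AdventureWorks2014_Test.mdf'
   , MOVE N'AdventureWorks2014_Log' TO N'C:\Data\AdventureWorks2014_Test.ldf'
   , RECOVERY
   , STATS = 10;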

If I have the following backups available for my AdventureWorks2014 database, where should I test the validity of the backup?

[Screenshot: the available backups for the AdventureWorks2014 database]

Would it make much sense to take any of these backups and test the restore process on the same server as the source of the backup? To be honest, that is a question that may have an entirely different answer in each and every environment. That is a question that requires a little insight into the business needs, available resources and procedures and policies in place for the environment. For me, I typically like to create an automated restore process that will restore the previous night’s backup onto a test/stage/dev server that is not in production.

I have written previously on how I do this sort of automated restore. You can read all about it from the original article here. In that article I provide a script to assist with the restore of these backups. You are welcome to test it out and play around with it while setting up your backup validation environment. In using the script, the user assumes all risk.

I recommend an automated restore system to ensure the backups are tested on a routine basis. The benefits of doing this are more than just a few. That said, here are a few of those benefits:

  • Confidence in the backups
  • Quick recovery in the event of failure or disaster
  • A usable environment for quick data comparison
  • A passable environment for reporting

And the best benefit of all: you can get a good night's sleep knowing your backups are reliable.

Caveat

With automated restores, there may be an occasional failure. Heck, you may run into a bit of a head-scratcher here or there trying to get them to work in the first place. It's not terribly easy the first time, but with some practice, it gets much easier. This method is intended to be a cheap solution. There may be a pricier solution out there that can be bought; if that is in your budget, go with it. The main point is to do it.

After you get these restores working, I recommend using the scripts in this article to check the restore history from time to time. Go the extra mile and generate some reports from that data.

Recap

I have provided some information on how and why to restore a database, the primary reason being that you never know how good your backup is until you have restored it. Some say you do not have a backup until you have restored it. What I have shared here is a solution, and some opinions, on how to do that efficiently.

Database Backup History – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Database Backup History

As a data professional, there is little to no doubt that one of the most important tasks is to ensure the data is backed up regularly. We do this for several reasons. Whether it be to recover in the event of a problem or to compare yesterday's data to how it looks today, the backup is done.

When doing a backup, how often do you report on the backup success? How frequently do you refer to the backup history to compare multiple runs or even generate reports for interested parties? The fact of the matter is that this history should be referenced on a routine basis to generate reports that are read and not ignored.

Backup Reporting

Right about here you may be asking yourself why somebody would generate reports on the backups. After all, the job succeeds or fails and you respond appropriately, right? Does one really need to generate a report about the success of the backup job? Well, that is for you to decide. I would say it wouldn't hurt to generate a report about the success rate of your backups; management often likes to see reports concerning important processes. That said, that is not the reporting I am referencing here.

The reporting I believe is useful in this case is indeed related to the historical information of your backups, but it is less about the success of the job. Of great benefit from the backup history data is the information on the frequency and the size of the backups. These pieces of information can be quite useful when looking to forecast data growth. That is one example of a possible report when working with the stored backup history data.

Let’s take a look at a query that will help retrieve some of this backup history data.
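
A sketch of such a query follows. The parameter names are my own, and the version check is one way to handle the column that is missing on older instances:

DECLARE @DBName SYSNAME = NULL   -- NULL returns history for all databases
      , @NumDays INT = 30;       -- report window in days

DECLARE @Version INT = CONVERT(INT,
        PARSENAME(CONVERT(NVARCHAR(32), SERVERPROPERTY('ProductVersion')), 4));

DECLARE @sql NVARCHAR(MAX) = N'
SELECT bs.database_name
     , bs.type AS backup_type
     , bs.backup_start_date
     , DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_sec
     , bs.backup_size'
    + CASE WHEN @Version >= 10   -- compressed_backup_size was introduced in SQL 2008
           THEN N'
     , bs.compressed_backup_size'
           ELSE N'' END
    + N'
     , bmf.physical_device_name
FROM msdb.dbo.backupset bs
INNER JOIN msdb.dbo.backupmediafamily bmf
    ON bmf.media_set_id = bs.media_set_id
WHERE bs.type IN (''D'', ''I'')   -- FULL and DIFFERENTIAL only
  AND bs.backup_start_date >= DATEADD(DAY, -@NumDays, GETDATE())
  AND bs.database_name = ISNULL(@DBName, bs.database_name)
ORDER BY bs.database_name, bs.backup_start_date DESC;';

EXEC sp_executesql @sql
   , N'@DBName SYSNAME, @NumDays INT'
   , @DBName = @DBName
   , @NumDays = @NumDays;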

Off the top, yes, the script uses dynamic sql. I use dynamic sql because I use this script on multiple versions of SQL Server. One of the data points I gather is the compressed backup size, an attribute that is not available in SQL 2005. Due to that, and not wanting to maintain multiple scripts, I use dynamic sql to account for that missing column.

I have also set the script to accept a database name parameter. If a name is provided, then only the backup history for that database is returned. If the parameter is left NULL, then the backup history for all databases will be returned. Additionally, I added a number of days parameter to limit the scope of the report to a specific range of days.

Among the data points returned in this script, you will note there is the duration of the backup, the date, and even the size of the backup. All of these attributes can help me to forecast future storage requirements both for the backup storage as well as for the data volume. Additionally, by knowing the duration of the backup and the trend of that duration, I can adjust maintenance schedules accordingly.

Caveat

In this particular script, I only retrieve the backup history for two backup types: FULL and DIFFERENTIAL. A small adjustment can be made to include other backup types if necessary. The line to change is:
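
In the sketch above, that is this line (the quotes are doubled because it sits inside the dynamic sql string):

WHERE bs.type IN (''D'', ''I'')   -- FULL and DIFFERENTIAL only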

Other possible values for backups are:

  • D = Database
  • I = Differential database
  • L = Log
  • F = File or filegroup
  • G = Differential file
  • P = Partial
  • Q = Differential partial

If you are interested in learning more about backup information, you can read this msdn article – here.

Recap

I have provided a quick example of how to retrieve pertinent backup history information. This information can be used to assist in forecasting storage requirements, as well as in adjusting job schedules as backups begin to run longer and the data grows larger.

For a related article, check out this article on how to retrieve the restore history from the msdb database.

Last Restore of a Database – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Database Restore

As a data professional, one should be somewhat cognizant of backups with regards to the data. From the point of the creation of the backup, we can go in one of two directions with this article. Due to the importance of each direction, I will devote a basics article to each.

The order of publication of these articles in no way denotes a level of priority; I feel each is equally important. The two directions both deal with the restore of the backup that was created. One direction is in regards to reporting, and the other is the actual restore.

Last Restore Date

Something that I seem to be getting asked more and more frequently is how to determine when a database was last restored. Maybe it is just a hot topic and maybe it will be short-lived, but it sure seems to be something that is on the tip of people's minds of late. So, how does one determine the last time a database was restored? The answer is actually quite simple. Within the msdb database, there is a table that maintains the history of database restores, called restorehistory.

The restorehistory table stores information that will help you better understand what kind of restore was performed, who did the restore, when it was done, and even some information about the backup that was used to perform the restore. You can read more about the specifics of this table from the msdn article – here.

A quick means to query this table is via the following script:
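
A minimal sketch (substitute your own database name):

SELECT rh.destination_database_name
     , rh.restore_date
     , rh.restore_type
     , rh.user_name
FROM msdb.dbo.restorehistory rh
WHERE rh.destination_database_name = 'TestDB'
ORDER BY rh.restore_date DESC;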

Now, that is an extremely simple query looking at my TestDB database. The results of that query on my system are as follows:

[Screenshot: the restore history results for the TestDB database]

As shown in the preceding illustration, I have multiple restores and restore types for this particular database. This is a good start. Let's take this query up a notch. Not all production environments will have databases that are restored on the same server. What do we do to report on the databases restored on the production server vs. the databases that exist on the production server? That is where this next query comes into play.
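
Here is one possible version of that query. The parameter names are my own invention; adjust to taste:

DECLARE @DBName SYSNAME = 'AdventureWorks2014'  -- NULL returns all databases
      , @OnlyRestored BIT = 0;                  -- 1 = only databases with restore history

SELECT d.name AS DatabaseName
     , rh.restore_date
     , rh.restore_type
     , rh.user_name
     , bs.backup_start_date AS SourceBackupDate
FROM sys.databases d
LEFT OUTER JOIN msdb.dbo.restorehistory rh
    ON rh.destination_database_name = d.name
LEFT OUTER JOIN msdb.dbo.backupset bs
    ON bs.backup_set_id = rh.backup_set_id
WHERE d.name = ISNULL(@DBName, d.name)
  AND (@OnlyRestored = 0 OR rh.restore_history_id IS NOT NULL)
ORDER BY d.name, rh.restore_date DESC;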

In this latter query, I have added a few things to help with various reporting needs. You never know when the CIO or an auditor will come to you with a request to filter the data differently for restored databases. In this vein, I added parameters that will help filter the results down to a specific database, return all databases, or return only the databases that have a restore history on the server. In this case, I decided to use the AdventureWorks2014 database as my example database; you will want to change the parameter as needed. If you pass a null value for the database name, then all databases can be returned in the result set, depending on the values used for the remaining parameters.

Caveat

This is all fine and well if the databases are restored onto the same server as the source database, right? What about the case where the database is restored to a different server? Well, the query still works for reporting the restored databases on the server where the database is restored. The complexity comes when trying to correlate back to the original production server. My recommendation here is good documentation and an administrative linked server that is not accessible via the application accounts or any user but the DBA group.

Recap

I have provided two quick examples of how to retrieve the restore history for your databases. I recommend that the restore history be checked on a routine basis. You never know when an over-permissioned user may decide to restore a database for you and then have all of the rest of the users coming to you to determine what happened. This will also be essential when we routinely test our database backups. Stay tuned for the next article where I will explore that aspect of database restores.

Login from an Untrusted Domain – Back to Basics


Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Untrusted Domain

SQL Server is full of wonderful features and tools. One feature that is not wonderful and is blatantly frustrating is the error message that is supposed to help you understand why you cannot connect to SQL Server.

Login failed. The login is from an untrusted domain and cannot be used with Windows authentication.

Are you really sure the login is from an untrusted domain? I recently ran into this error and found the message less than helpful, though it seems basic enough in what it should represent. When I encountered the problem, there was little useful information available that pertained strictly to my situation.

Wait, am I saying that this error can be reported for various different legitimate login problems? Absolutely, yes! Let's recap some of my givens, because these facts are important to troubleshooting the real underlying problem – which, in turn, makes this more of a "basics" type of post.

  1. The server was joined to the domain.
  2. The user in question was granted sa permissions to the instance.
  3. The user can login to the instance locally without error.
  4. My personal domain account could login locally without error.
  5. My personal domain account could login remotely without error.
  6. The user in question could not login remotely. Error thrown was the untrusted domain error.
  7. The version of SQL Server was SQL Server 2014 Express edition.

Now talk about a head-scratcher from the get-go. The problem only seems to occur when trying to login remotely. So, as an example, here is what I was basically seeing. First, the tsql to create my test login.
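
A minimal sketch of that login (the account name is a placeholder for the task account in question; ALTER SERVER ROLE requires SQL 2012 or later):

USE [master];
GO

CREATE LOGIN [SomeDomain\TaskAcct] FROM WINDOWS;
ALTER SERVER ROLE [sysadmin] ADD MEMBER [SomeDomain\TaskAcct];
GO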

And with that user in place, let's try to connect from a remote server to see what will happen.

[Screenshot: the remote connection attempt fails with the untrusted domain error]

Well, that is far from useful. I can definitely see that my user is properly created. I have more than adequate permissions. If the user was truly from an untrusted domain, the creation of the login would have failed.

As it turns out, there is something specific to SQL Server Express with remote logins that generates this problem. Some have postured that the account needs to be added to the local administrators group. This is not accurate.

Better Solution

When you encounter this problem, there is an easy solution. Unfortunately, it is not really a SQL solution; rather, it is more of a sysadmin type of solution. The solution is to employ the use of group policy (either a local or a domain group policy). The specific policy that should be changed is called "Access this computer from the network".

[Screenshot: the "Access this computer from the network" policy in the group policy editor]

My advice here would be to use a domain-based group policy if you have a bunch of servers with this need. In my case, we needed to touch 120+ computers, and manually setting this GPO on 120+ servers would have been very tedious.

For a single server like this example, I added it manually as shown below:

[Screenshot: the account added to the "Access this computer from the network" policy]

With the account granted this permission, all that is left is simply to try and connect.

[Screenshot: the remote connection succeeds after the policy change]

This is a great result. Now the account can connect and perform the work it was intended to do. In this case, it is a task account that will serve as a proxy account to perform remote tasks such as performing backups.

Recap


Troubleshooting login failures is a core concept for the data professional. At times the cause of the login failure is far from intuitive. In this case, the failure is non-indicative of the actual problem. The real problem is that SQL Express does not like remote logins without a little bit of hoop jumping. Learning how to troubleshoot the problem is essential to becoming a better data professional.

Default Sessions – Back to Basics


Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Default Sessions

SQL Server is full of some pretty cool stuff. Some of it is rather basic. Some of it is a bit complex. Whether basic or complex, there is a lot that seems to get overlooked because it just might be hidden from plain sight or maybe it is overlooked just because it is already done for you.

Today, I am going to talk about some of the stuff that is already done for you. It is pre-baked and ready to roll. Since it is pre-baked, it may be a good idea to get familiar with it and understand what kinds of data it may offer to you when troubleshooting.

It is still somewhat surprising to me to discover how many people have never heard of the default trace in SQL Server. Slightly less surprising is how many have never heard of the default Extended Events system_health session. What if I told you there were a bunch more default sessions beyond the default trace and system_health? I wrote about some of those default sessions previously.

With the evolution of SQL Server, there is an evolution in Extended Events. New features in SQL Server also means there may be some additional default sessions. I want to touch on some of these other default sessions in brief.


Phone Home

First, let's start with an easy-to-find default session – telemetry_xevents. This session is visible and behaves like a user-defined session, with the caveat that it is installed with SQL Server 2016 as a system default session. I have another post queued up to discuss this session in more detail, so I will keep it brief here.

This session is there to capture specific data and phone it home to Microsoft. You can presume that based on the definition of the word telemetry, which

“is an automated communications process by which measurements and other data are collected at remote or inaccessible points and transmitted to receiving equipment for monitoring.”

Before you get your tin-foil hat in a crumple over sending data back to the mother-ship, understand that this data is not sensitive data. The type of data being gathered is usage metrics. Think of it in terms of gathering data about how many times a specific feature was used or how many times a group of errors occurred within the environment. This is in an effort to help improve the product.

While the session data is designed to be sent back to Microsoft, it could also be of use to the Enterprising DBA who is looking to get a better grasp of the environment or to troubleshoot various issues.

It’s a Stretch

With the release of SQL Server 2016, there is another default session that gets installed. This session is probably the most difficult to find, since it is private and hidden once installed. In addition, it is not present by default in SQL 2016 RTM, though it was present by default in various CTP versions. The session is specific to the new stretch feature and is called rdaxesession.

Breaking down this particular session reveals pretty easily what feature it is designed to support. The name rdaxesession breaks down to the following: rda = remote data archive, and the rest is self-explanatory. If I investigate the metadata for the session, I discover that the following events are traced in the session:

stretch_database_enable_completed
stretch_database_disable_completed
stretch_table_codegen_completed
stretch_table_remote_creation_completed
stretch_table_row_migration_results_event
stretch_table_validation_error
stretch_table_unprovision_completed
stretch_index_reconciliation_codegen_completed
stretch_remote_index_execution_completed
stretch_database_events_submitted
stretch_table_query_error

Well, that certainly clears it up! This session is all about the stretch feature. If you are on RTM and don’t use the stretch feature, then you won’t see this session even if you do query the metadata.
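
For the curious, on versions where a default session is visible in the metadata, a query such as this will list its traced events:

SELECT es.name AS SessionName
     , ese.name AS EventName
FROM sys.server_event_sessions es
INNER JOIN sys.server_event_session_events ese
    ON ese.event_session_id = es.event_session_id
WHERE es.name = 'rdaxesession';   -- or 'telemetry_xevents'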

Don’t be a HADR

The last of the default sessions I want to cover both can be combined into a category about HA and DR – loosely at least. One of the sessions deals with backups and the second of the sessions deals with clusters.

When dealing with a cluster, it is important to know about the hidden trace file that records events related to the cluster. The session target data can be found in “%PROGRAMFILES%\Microsoft SQL Server\MSSQLxx.MSSQLSERVER\MSSQL\Log” by default and cannot be seen within SQL Server metadata at all (not that I have been able to find). For a few details about how this trace works, I recommend reading this article. This session can be called the SQLDiag session.

The session related to backups is actually broken down into two log files and is similar in nature to the SQLDiag session. This is true in that the session is hidden, is found in the same directory, is read in the same fashion, and is equally as hidden from within SQL Server as the SQLDiag session. The backup session deals specifically with managed backup.

What is managed backup? Well, it is a type of backup performed to Azure and which relies upon the “smart_admin” procedures, functions and process.

If you don’t perform managed backups or you don’t have a cluster configured, you will not need either of these sessions (and potentially not see the smart_admin files depending on SQL Server version). If you have either of these features, you will definitely want to ensure you become acquainted with them and how to retrieve the data.

Recap

There are plenty of mechanisms in place to help troubleshoot some of the more complex features of SQL Server. Getting to know the tools will prepare you for that moment when problems arise and you are under the gun.

I did not dive deep into any of these sessions holding that for later articles. I wanted to keep this article to an introductory level. If you are interested in the more detailed articles, please stay tuned for those upcoming articles.

If you are interested in reading more about Extended Events, I recommend reading my series which can be found here.

SQL Server CTE – Back to Basics


Quick Flashback

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don't know. I do know that I think it is a good challenge and that I intend to participate. I hope I can meet the requirements and keep the posts to "basics". Let's just call this first post in the challenge my official acceptance.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

SQL Server CTE

Of all the fundamental concepts within SQL Server, this one drives me a bit batty from time to time. Think about that statement for just a moment. I am not going to go in depth about the syntax of a CTE.

I am not really interested in breaking down the semantics and uses of the CTE. Rather, I want to break down something commonly associated to the CTE that is far more basic. I would classify this as more of a myth and misconception than anything else.

The topic du jour happens to be around punctuation. More specifically, do we use terminators or do we use beginninators? Yes that is a made-up term, but you get the drift. Where exactly should the punctuation be placed? Better yet, is it mandatory to include that punctuation every single time?

Those are the items I wish to discuss and hopefully demystify with a few examples (if that many are even necessary).

Punctuation

Punctuation shouldn't be too big of a deal, right? There is so little punctuation within T-SQL that it really should be basic and easy to understand. That said, there is this perpetual myth that a semi-colon is required at the beginning of a CTE. Furthermore, and somewhat more accurate, is the notion that a semi-colon is a requirement in order for a CTE to work at all. Let's take a look at both of those scenarios.

Case the First

Let’s just take a look at the most basic of scripts. A simple select statement without a CTE.
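
Something like this, for instance (Person.Person is just a convenient sample table):

USE AdventureWorks2014;
GO

SELECT p.FirstName
     , p.LastName
FROM Person.Person p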

Straight away, we can see that I am accessing the AdventureWorks2014 database, which happens to be on my 2014 instance. I have not terminated the select statement with a semi-colon. It really is somewhat of a preference here, since the statement will work with or without the semi-colon just the same (and despite the previous notion that it was going to become mandatory to terminate all statements with a semi-colon, don't believe it until it actually happens).

Here is the SQL Server 2016 version:
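
Essentially the same statement pointed at a 2016 instance (assuming a 2016 copy of the sample database):

USE AdventureWorks2016;
GO

SELECT p.FirstName
     , p.LastName
FROM Person.Person p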

Now let’s go with a basic CTE example written the same way that we commonly see CTEs written on the internet.
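
Something like the following, with the semi-colon leading the CTE:

USE AdventureWorks2014;
GO

;WITH testcte AS (
    SELECT p.FirstName
         , p.LastName
    FROM Person.Person p
)
SELECT *
FROM testcte;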

Notice that I am using the semi-colon as a statement beginninator and terminator all in the same statement. The statement runs perfectly fine – that much is true. The usual justification for the leading semi-colon is that the CTE has to be preceded by one. Let me see how true that is with a minor tweak!
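
The tweak: drop the leading semi-colon entirely.

USE AdventureWorks2014;
GO

WITH testcte AS (
    SELECT p.FirstName
         , p.LastName
    FROM Person.Person p
)
SELECT *
FROM testcte;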

What do you expect to happen from the execution of this statement? Well, as it stands the statement will execute without error just the same as it did in the previous example.

[Screenshot: the tweaked CTE executes successfully without the leading semi-colon]

What? That worked? Yes. Yes it did. This is quite frankly due to the reality that a semi-colon is not required to precede a CTE. This statement was the only statement in the batch.

Case the Second

Based on those results, that must mean that a semi-colon is absolutely required when there is a statement preceding the CTE. Therefore, what is really required is that the statement immediately preceding the CTE must be terminated by a semi-colon. Not so fast there. Let's test this one too!

Let’s use the following basic example.
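
This time, a statement precedes the CTE within the same batch and is terminated with a semi-colon:

USE AdventureWorks2014;
GO

SET NOCOUNT ON;

WITH testcte AS (
    SELECT p.FirstName
         , p.LastName
    FROM Person.Person p
)
SELECT *
FROM testcte;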

That query works perfectly fine again. That must be due to the preceding semi-colon, right? Hold on to that thought. How about a rewrite as follows:
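
The rewrite terminates the batch separator itself with a semi-colon:

USE AdventureWorks2014;
GO;

WITH testcte AS (
    SELECT p.FirstName
         , p.LastName
    FROM Person.Person p
)
SELECT *
FROM testcte;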

Here we tweaked the query to represent the notion that all previous statements should be terminated with a semi-colon. What do you think will happen? This will result in an error: GO is a batch terminator and therefore does not require the semi-colon. Surely this reinforces the theory that the semi-colon should precede the CTE though, right? Let's rewrite that query again.
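
This time the batch separator keeps its usual, semi-colon-free form:

USE AdventureWorks2014;
GO

WITH testcte AS (
    SELECT p.FirstName
         , p.LastName
    FROM Person.Person p
)
SELECT *
FROM testcte;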

I removed the semi-colon, and now the code segment will execute without error. Wait, there is more!

I want to continue tweaking this query. Let’s try multiple selects along with this CTE.
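
Here is a sketch of one arrangement consistent with what follows (note that the first select is not terminated with a semi-colon, and the batch separator between the two statements is the detail to watch):

USE AdventureWorks2014;
GO

SELECT TOP 10 p.FirstName
     , p.LastName
FROM Person.Person p
GO

WITH testcte AS (
    SELECT p.FirstName
         , p.LastName
    FROM Person.Person p
)
SELECT *
FROM testcte;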

Now I have a query that has a preceding select that is not terminated with a semi-colon. I have a CTE created right after that select, and then I select from that CTE. What could possibly happen with this particular query?

The short of that story is that the query will execute without error and I will end up with two result sets.

Wait! Hold on two seconds there! Surely the semi-colon is an absolute requirement, because we see everywhere that it is mandatory.

The reality is that the semi-colon requirement is not entirely accurate. If the CTE happens to be in the same batch, then the previous statement in the batch must be terminated by a semi-colon. Take the following change for example:
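
For example, something along these lines, with the preceding select wrapped in a BEGIN…END block that is terminated with a semi-colon:

USE AdventureWorks2014;
GO

BEGIN
    SELECT TOP 10 p.FirstName
         , p.LastName
    FROM Person.Person p
END;

WITH testcte AS (
    SELECT p.FirstName
         , p.LastName
    FROM Person.Person p
)
SELECT *
FROM testcte;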

If I remove the semi-colon from after the END statement, then my query will terminate in an error. If I remove the BEGIN and END from around the first select statement, then the query will fail with an error. The error I receive is part of the reason for the huge misconception about when a semi-colon is required:

Msg 319, Level 15, State 1, Line 38
Incorrect syntax near the keyword ‘with’. If this statement is a common table expression, an xmlnamespaces clause or a change tracking context clause, the previous statement must be terminated with a semicolon.

As I have already shown, though, this is not accurate. Start thinking on a bigger scale than just semi-colons. Personally, I would prefer to see the statements preceding CTEs in internet examples encapsulated within a BEGIN…END instead of relying on a preceding semi-colon (beginninator).

Has this behavior changed with SQL Server 2016? That is easily shown by the following example:
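
Essentially the same pattern pointed at a 2016 instance (again assuming a 2016 copy of the sample database); the semi-colon rules have not changed:

USE AdventureWorks2016;
GO

BEGIN
    SELECT TOP 10 p.FirstName
         , p.LastName
    FROM Person.Person p
END;

WITH testcte AS (
    SELECT p.FirstName
         , p.LastName
    FROM Person.Person p
)
SELECT *
FROM testcte;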

Recap

These examples clearly demonstrate the inaccuracy of the myth that a semi-colon is required to precede a CTE. Despite the statement made in the error message, it is not the full story. A semi-colon is necessary to terminate the preceding statement within the same batch, but it is not required every time. If there is a batch separation between the preceding statement and the CTE, then no semi-colon is needed. And never is a semi-colon required to precede the CTE itself; it is only ever required to terminate a previous statement within the same batch.

Please don’t be led astray by some notions on the web that dictate a CTE must be preceded by a semi-colon.
