ArithAbort and SQL Agent Failures

Categories: News, Professional, Security, SSC
Comments: 1 Comment
Published on: January 7, 2019

ARITHABORT

I was coding along one day, working on rolling out some monitoring for a client—monitoring that I had used for previous clients.  The code was pretty straightforward and addressed a concern for the client.  Having used it for several previous clients, I felt pretty confident in the code.  The main purpose was simply to audit index definition changes.  It was rolled into a stored procedure and designed to be deployed to a “DBA” database.

With confidence pretty steady on this code, I deployed the stored procedure along with all of the supporting tables and structures.  Then I proceeded to deploy the SQL Agent job that controls the execution of the stored procedure.  When I deploy a job I like to test the job to ensure it will work.  As luck would have it, the job failed on the first test run.  I instantly became baffled.  Here is the error message.

Job Name DBA – Track Index Changes
Step Name exec proc
Duration 00:00:08
Sql Severity 16
Sql Message ID 1934
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted 0

Message
Executed as user: sa.

INSERT failed because the following SET options have incorrect settings: ‘ARITHABORT’.
Verify that SET options are correct for use with
indexed views
and/or indexes on computed columns
and/or filtered indexes
and/or query notifications
and/or XML data type methods
and/or spatial index operations.
[SQLSTATE 42000] (Error 1934). The step failed.

Immediately I started looking at my creation scripts for the tables.  Nope, no XML indexes, no spatial indexes, no filtered indexes, no indexes on computed columns (not even any computed columns),  and no query notifications.

Next I started checking the database settings.  Maybe the vendor for the application this client bought had set something for all of the databases regarding ARITHABORT.

Querying sys.databases, we can easily see that ARITHABORT is not enabled at the database level (just like previous implementations).  Well, this is a minor head scratcher at this point.  The code works in other environments, and the database setting is the same as in those environments.  I guess I could try setting ARITHABORT within the stored procedure and then re-test.
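For reference, a quick check of the database-level setting can be done with something like the following (a minimal sketch; trim the list down to the databases you care about):

-- is_arithabort_on = 0 means the database-level option is not enabled
SELECT d.name,
       d.is_arithabort_on
FROM sys.databases d
ORDER BY d.name;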

 

When I add the last line, "SET ARITHABORT ON;", to this stored procedure and then rerun the job, it runs without any error.  It is a simple fix, but the story doesn't end there.
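Here is roughly the shape of that fix (a minimal sketch only; the procedure and audit table names are hypothetical stand-ins for the client's actual objects):

ALTER PROCEDURE dbo.TrackIndexChanges
AS
BEGIN
    SET NOCOUNT ON;
    -- the added line: set the option explicitly inside the proc
    SET ARITHABORT ON;

    -- body trimmed: capture current index definitions into the audit table
    INSERT INTO dbo.IndexDefinitionAudit (CaptureDate, SchemaName, TableName, IndexName)
    SELECT GETDATE(), OBJECT_SCHEMA_NAME(i.object_id), OBJECT_NAME(i.object_id), i.name
    FROM sys.indexes i
    WHERE i.object_id > 100
        AND i.name IS NOT NULL;
END;
GO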

After making that change, I decided to go another round with the stored procedure and the ARITHABORT setting.  I removed it in the next round and decided to test the stored procedure directly.  Running the stored procedure in Management Studio with or without the ARITHABORT setting produces the same result.  That result is that both work as desired without any error.  For giggles, I ran the job again and discovered that the job still fails.  In the end, it appears to be something that the SQL Agent is setting as a part of its connection back to the database in this case.

In addition to this minor nuisance, you saw that the error outlines several possible causes for failure with regard to ARITHABORT.  One that I found can be of big concern is filtered indexes.  Check your connection settings from your application when dealing with any filtered indexes.  Filtered indexes have produced this error in quite a few cases I have been asked to help fix.  Just a thought for something you should monitor and check should you run into this error or if you are considering the use of filtered indexes.

Putting a Bow on it…

In conclusion, this can be a short-term head scratcher.  Pay close attention to what has changed in the environment.  Test alternatives.  And check those connection strings.
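As a quick sanity check (and separate from the Extended Events approach mentioned below), the session-level SET options that each connection is actually using can be seen in sys.dm_exec_sessions. That makes it easy to compare what the SQL Agent connection does versus what Management Studio does:

-- compare the ARITHABORT setting across current connections
SELECT s.session_id,
       s.program_name,
       s.host_name,
       s.arithabort,
       s.quoted_identifier,
       s.ansi_nulls
FROM sys.dm_exec_sessions s
WHERE s.is_user_process = 1;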

If you are curious, there are a few ways to check connection string settings. One of my favorites is with Extended Events. To read more about Extended Events, I recommend this resource – here. In addition, I recommend checking out some of my other articles showing basics in troubleshooting which can be found here.

SHUTDOWN SQL Server

Categories: News, Professional, Security, SSC
Comments: 4 Comments
Published on: December 3, 2018

Recently a friend by the name of Chris Bell (blog | twitter) wrote about an easy way to disrupt SQL Server.  That disruption comes in the form of the SHUTDOWN TSQL command.  You can read what Chris wrote from his article here.

Granted, you do need to have elevated permissions such as sysadmin, serveradmin or securityadmin.  I include securityadmin even though Chris did not because a securityadmin can create an account and grant that account sysadmin permissions.  And in Chris’ article he only discusses the threat to the SQL Server process. When I read the article, I wondered to myself if the threat stopped there. I also wondered to what extent could I cause disruption. Yes, the gremlin in me did start to come out.

When I say I was curious what level of disruption could be caused, I really wanted to know if I could reboot the server from within SQL Server or even if I could simply shut down the entire server. Well, you can certainly bounce the server from within a TSQL script – if you have adequate permissions (or know how to elevate your permissions).

The first step is rather simple: check to see if xp_cmdshell is enabled. If it is not, then enable it.
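A minimal sketch of that check/enable step looks like this (sp_configure requires the appropriate permissions, and enabling xp_cmdshell is itself a security decision to weigh carefully):

-- show advanced options, then enable xp_cmdshell
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;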

Now that xp_cmdshell is enabled, let’s bounce the server.
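Something along these lines is what I mean (a sketch only – "MySandboxServer" is a placeholder for my sandbox instance name, and the whole point of the IF check is to keep this from running anywhere else):

IF @@SERVERNAME = N'MySandboxServer'
BEGIN
    -- /r reboots the OS (swap in /s to power it off); /t 60 delays the action by 60 seconds
    EXEC master..xp_cmdshell 'shutdown /r /t 60';
END;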

Now this code will not work on your server as-is. I have coded it to check for the name of my sandbox server. That said, if the server name matches, you should get a prompt informing you that SQL Server will shut down. And that shutdown will happen after 60 seconds.

If you happen to see a message reporting “Access Denied”, that is also easy to circumvent—well, at least up to SQL Server 2016. Prior to SQL Server 2016, I could utilize xp_cmdshell to also add the service account (let’s say it is NT Service\MSSQLSERVER) through the use of net localgroup /add.  However, in SQL Server 2016, you will continue to get an access denied message (even if you try to use a proxy account). Here is an example of that  exploit as well (prior to 2016):
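A sketch of that pre-2016 exploit (assuming the default service account name of NT Service\MSSQLSERVER – adjust for your instance):

-- add the SQL Server service account to the local Administrators group
EXEC master..xp_cmdshell 'net localgroup administrators "NT Service\MSSQLSERVER" /add';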

Let's say you have done your duty and changed the service account from the default NT Service account to a domain account or a local account, but you decided to add that account to the local Administrators group. You have actually opened yourself up to plenty of other problems. Even in SQL 2016, if the service account for SQL Server is in the local Administrators group, then the shutdown SQL Server example shared here will work and force a shutdown.

So, in the end, please be mindful of the service account in use. And be mindful of the level of permissions that service account has been granted at the OS level. If you are not careful, you could have some rogue developer or intruder shutting you down before you know it.

Now would be a good time to go and review your security. Check for users with too much access. Here are some articles that can help with that (here and here).

Auditing when Database Files Change

As a Database Administrator, something that should be part of your database audit is monitoring the growth of files. That means tracking when the log file grows and tracking when the data file(s) grow(s). Many will argue that there is nothing worse than not knowing when a database-related file changed in size except maybe when it rapidly and constantly grows until the disk is filled.

Beyond just trapping when these events occur, the DBA usually also wants to know what caused the file to grow or shrink. This has been a problem that has plagued many a DBA over time. Sure there are plenty of methods to assist in the capture of such things. I even wrote about that quite some time ago here. In that article, I gave the tools to help track when a transaction log grew. Today, I am looking to share a different method on how to trap the executing SQL that causes a file to grow. And unlike that previous method, this method will also help to track when a file decreases in size.

I do want to point out that there is an alternate method to capture the shrink events if you wish to read more. I wrote previously about using the default trace to capture when a shrink occurs within the instance. This can be captured thanks to the default trace. You can read all about that over here. Despite the availability of the information within the default trace, I think the method I share here will be lightweight enough that it won’t hurt to have it running—at least occasionally.

Enter the Database Audit

If you have become accustomed to reading my articles, you will probably surmise that there is some degree of setup that needs to be done before I get to the meat of the matter. That holds true today. It just wouldn’t be right to share a method to do something without a harness to show how to get there. So, let’s get the setup out of the way.

Since we will want to create something that tracks when a file changes in size, we need a baseline of the file sizes for the newly created Sandbox2 database. Let's capture that with the following query. One small note on this query: I am including the tempdb-related file sizes. You don't need to include those, but they may be of interest for future testing.
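A simple baseline query might look like this (a sketch; it assumes the database is named Sandbox2, as described above):

-- file sizes in MB (size is stored in 8KB pages, so divide by 128)
SELECT d.name AS database_name,
       mf.name AS logical_name,
       mf.type_desc,
       mf.size / 128.0 AS size_mb
FROM sys.master_files mf
    INNER JOIN sys.databases d
        ON d.database_id = mf.database_id
WHERE d.name IN (N'Sandbox2', N'tempdb')
ORDER BY d.name, mf.file_id;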

That is all the setup that is needed at this point. We are now ready to implement a tool that can help us monitor these changes. This tool is lightweight and pretty efficient at capturing the info that would be essential for the exceptional DBA to keep on top of the changes occurring in his/her environment. This tool comes to us in the form of an extended event. Let’s do a little checking into what is available before we dive into the actual XE session itself.

First, when you are curious whether there is an extended event that may be sufficient for your needs, you should check the event store to see what is there. We can do that via a query similar to the following.
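Something like this, against the XE metadata views, will do the trick (the LIKE filter is simply my guess at a useful search term for this scenario):

-- list extended events whose names suggest file size activity
SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects o
    INNER JOIN sys.dm_xe_packages p
        ON p.guid = o.package_guid
WHERE o.object_type = 'event'
    AND o.name LIKE '%file_size%'
ORDER BY o.name;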

This query may return something along the lines of what we see in the following image:

[Image: list of file-size-related extended events on a SQL Server 2008 instance (xe_events_2k8)]

From that list, we can immediately see that we have some options to help us try and capture what is happening with our databases when our back is turned. From here, we can query to find what kind of data is captured with each of these events. To do that, we simply need to run a query such as the following:
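A sketch of that payload check (same search term as before):

-- show the data columns carried by each of the file-size-related events
SELECT oc.object_name AS event_name,
       oc.name AS column_name,
       oc.type_name,
       oc.column_type,
       oc.description
FROM sys.dm_xe_object_columns oc
WHERE oc.object_name LIKE '%file_size%'
ORDER BY oc.object_name, oc.name;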

Looking at the available data points in each of these sessions can prove promising. It is indeed promising enough that we can now proceed with the creation of an XE session.
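As a point of reference, here is a sketch of the kind of session I am describing. On the SQL Server 2008 instance used for this article, the session was built on the two *_file_size_changed events returned by the metadata query above and the package0.asynchronous_file_target. This sketch instead uses the consolidated database_file_size_change event and the event_file target available on SQL Server 2012 and later, with a hypothetical file path that must already exist:

CREATE EVENT SESSION [DBFileSizeChange]
ON SERVER
    ADD EVENT sqlserver.database_file_size_change
        (ACTION (sqlserver.database_name, sqlserver.sql_text))
    ADD TARGET package0.ring_buffer,
    ADD TARGET package0.event_file
        (SET filename = N'C:\Database\XE\DBFileSizeChange.xel')  -- the folder must exist
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [DBFileSizeChange]
ON SERVER
STATE = START;
GO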

I decided to just focus on the two events that included file_size_changed in the name of the event. You can also see that I chose to send this to two targets. You would be fine to just send this to an asynchronous file target. Just bear in mind that you do need to specify a path for the file_target that does exist or an error will be thrown. I have also specified that this session will restart on server startup and then I turned the session to the started state.

Does it work?

That is a really good question. Now that we have a test database and a session to trap events, all we need is to figure out how to test it. We would want to do a few different things to test—like grow and shrink a file in some way. And that is exactly what we are going to do at this point.

The first test, since we have a really small, empty database, is to try to force the database to grow by inserting some data. We will do that with this next query.
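A throwaway sketch for the insert is all that is needed; the table name here is purely hypothetical, and the wide filler column is just there to chew through pages quickly:

USE Sandbox2;
GO

CREATE TABLE dbo.GrowthDriver
(
    GrowthDriverID int IDENTITY(1,1) NOT NULL,
    Filler char(8000) NOT NULL
);
GO

-- roughly one row per page, so this should force several growth events
INSERT INTO dbo.GrowthDriver (Filler)
SELECT TOP (10000) 'x'
FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2;
GO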

I didn't post the numbers earlier for my files for the Sandbox2 database, but they were small. I had a 2MB data file and a 1MB log file. Now, when I look at the results from that last query that included the file size information, I will see a database that has grown a fair amount.

[Image: new file sizes for the Sandbox2 database after the insert]

That should be enough of a change to have triggered something in our extended event session. Let’s take a look at the session. To do that, I am going to pull out a query to help parse the XML and see what has happened. Here is that query:
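Here is a sketch of that shredding query. It reads from the event_file target path used in the session sketch above; adjust the path and the data element names to whatever the metadata query reported for your version (the database_id element in particular is an assumption here):

WITH xe AS
(
    SELECT CAST(fx.event_data AS xml) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\Database\XE\DBFileSizeChange*.xel', NULL, NULL, NULL) fx
)
SELECT xe.event_data.value('(event/@name)[1]', 'varchar(128)') AS event_name,
       xe.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time_utc,
       xe.event_data.value('(event/data[@name="database_id"]/value)[1]', 'int') AS database_id,
       xe.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text,
       xe.event_data
FROM xe
ORDER BY event_time_utc;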

Despite having two targets in my session, we will only cover the query that helps parse the data from the asynchronous file target. When I run that query against the files that exist for this session, I get a result set of 90 records on my system. That probably seems like a ton of results for such a small increase in the database. As it would happen, I left the growth settings at the default growth increments (don't do that in a production system), and this means I get a ton of growth activities at very small sizes (1MB for the data file and 10% for the log file). Here is a sample of the output:

[Image: sample extended event output for the captured growth events]

You can see how this might be helpful for when a process runs hog wild in the environment. What about going in the other direction though? What if we need to know about when the database is shrunk or when a file is shrunk? Well, let’s do that too. Let’s try the following query.
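A shrink sketch such as the following will do. It uses the file IDs rather than the logical file names, so it should work regardless of what the files are named; the target sizes are in MB:

USE Sandbox2;
GO

-- shrink the data file (file_id 1) back toward 2MB and the log file (file_id 2) toward 1MB
DBCC SHRINKFILE (1, 2);
DBCC SHRINKFILE (2, 1);
GO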

Again, I check for a baseline on the files to see if the file sizes changed. In this case, you can run that final query and compare or just trust me on it. Having shrunk both files in the Sandbox2 database, let’s check the XE session data again:

[Image: extended event output showing the captured shrink events]

Check that out! We have captured the shrink events too! There are multiple shrink events in this case only because I ran the shrink statements multiple times. This is excellent news for everybody that is trying to keep an eye on these database size changes. You can see in the XE session that I applied the sql_text() action. I have done this so I will be able to see what query caused the growth or shrink event to occur. Even better news is that this event session is perfect for those of you still on SQL 2008.

If you enjoyed this article, check out some of the follow-up articles: Data Growth Audits or Mysterious Growth.

T-SQL Tuesday #108: New Horizons Beyond SQL Server

Comments: 1 Comment
Published on: November 13, 2018

There comes a point in one's career when a change is requisite. Big or small, there always seems to be a tipping point that mandates some sort of change. Maybe the change is an entirely new career field. Maybe the change is adapting to the ever-improving features of a specific software or product. Maybe that change means learning a tangential technology.

This is precisely the goal Malathi Mahadevan (b | t) seems to have envisioned for the 108th installment of TSQL Tuesday.

If you are interested in reading the original invite, you can find that here.

"So the challenge for this T-SQL Tuesday is – pick one thing you want to learn that is not SQL Server. Write down ways and means to learn it and add it as another skill to your resume. If you are already learning it or know it – explain how you got there and how it has helped you. Your experience may help many others looking for guidance on this."

Personally, I am not one to settle, so learning and improving are important. New technologies, changes in technologies, new features, tangential technologies – they are ways to continue to learn and improve – most of the time. Sometimes, a new technology offers a good change of pace and offers an exit from something that is becoming too standard, while providing an entrance to something different, difficult, exciting and expanding.

Through the year (2018), I created a few goals for myself around some of these new or different technologies:

  1. Become proficient at MySQL (maybe even certify who knows)
  2. Become proficient at PowerShell
  3. Work towards the TCM (I got too busy with 1 and 2 to even accord any time towards this one)

Proficient is sort of a vague term because it can have a different meaning to different people. For me, I will describe what I have been doing to become proficient in both PoSh and MySQL.

PowerShell

A shout out is absolutely necessary for Adam Machanic (twitter) for picking the right blog meme that has been able to survive so long in the SQLFamily. This party has helped many people figure out fresh topics as well as enabled them to continue to learn.

I have dabbled in PowerShell over the years – just enough to “be dangerous” as some may say. I wouldn’t call it proficient and probably not even dangerous really. I was able to muddle my way through being able to create some basic scripts to perform certain tasks.

This rudimentary ability just isn't enough to be able to put the skill on a resume (imho). It certainly wasn't enough skill to manage a large group of servers and perform various enterprise-type tasks. My objective was to create a set of tools for myself that I could reuse with repeatable success and great ease.

I sat down with my first set of various tasks I wanted to be able to perform and worked on them as I set time aside to learn PoSh better – every day. It took several weeks and by no means am I on the same level as Rob Sewell (b | t) or Chrissy LeMaire (b | t) or Sean McCown (b | t). That said, I do feel I am far more capable in PoSh now than I was last year.

This is a skill that I plan to continue to hone. I am learning every time I pick it up and try to do something different. It is good that I am able to keep learning. Next year, I plan on being able to say I am more proficient than I am now. I am also hopeful to be able to be good enough to properly contribute to the dbaChecks project. I do also hope to share some of the scripts I have created on my blog as well.

Here are some of my first dabbles with PowerShell that I have integrated into other blog posts. Yes, they are very rudimentary.

MySQL

Learning MySQL is probably not too big of a stretch to be honest. I certainly think it is far more similar to SQL Server in many regards than PoSh. After all, it is just another little DBMS and does happen to be rather popular.

I wanted to pick up MySQL so I could support clients that have it installed here, there, and everywhere in their environments. Many clients have more than one DBMS platform, and it is good to understand and be able to administer multiple platforms with a high level of competence. Unfortunately, MySQL comes with a fair amount of gotchas. There are serious limitations depending on version and flavor. Some clients may be stuck on a rather old version of MariaDB (akin to SQL 2000). This means developing a broad set of scripts and skills quickly and on the fly.

I have a ways to go in my learning due to the varied flavors of MySQL but I am getting there. I do feel pretty comfortable hopping in and troubleshooting performance issues and doing a quick health assessment at this point. I would call that proficient. Similar to what I said about PoSh, I plan on being able to say next year that I am more proficient. More opportunity with this platform is what lends itself to better proficiency.

The Wrap

I believe in continuous integration / improvement when it comes to personal growth and development. It is necessary to keep your personal skills sharp as well as keep yourself marketable.

Oh, and if you are interested in some of my community contributions (which according to Jens Vestargaard is an awesome contribution), read this series I have published.

Audit Database Offline Events

Categories: News, Professional, Scripts, SQLBP, SSC
Comments: No Comments
Published on: November 13, 2018

Knowing when an event occurred within the database environment is a very important thing. Being able to act quickly when certain events occur is equally important. Sometimes we may not find out about an event for a few days or weeks, and then we are asked to figure out the who, when, why, and how of the event. Hopefully there was enough foresight in place to plan for such a request to travel back in time.

Recently, while networking at a large database-related conference, I had the chance to discuss some of this with a semi-desperate DBA. In his environment, apparently a database can be taken offline at random times without the involvement of the DBA team. He was looking for a solution to prevent it from happening. For me, the simplest solution is to revoke permissions from the person doing it. If you can't revoke the permissions, then at least capture the pertinent information to document who is taking the databases offline and when.

Auditing and Facts

I am a huge fan of auditing the database environment to gather facts. There is comfort to be gained in knowing that if a change happens to a database or server under your purview, you can report some facts back up the chain when a post-mortem is required. I recently shared another example of such an auditing scenario – here.

There is a technique in that article that may be of interest for you if you are restricted in how you can implement this kind of audit. I won't go into detail about the poor man's audit techniques in this article, so it is worth reading that previous article.

When looking to find data about a database being taken offline, one could capture the data from the default trace or from the SQL Server Error Log. While this article will show how to capture this data from both of those sources, it is important to understand that the data just might not be available in either location depending on settings and activity of the server combined with how old the offline event was.

Let's first take a look at the default trace. That one should be easy enough.
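A sketch of that default trace query follows. It reads the currently active trace file and keys off the Object:Altered trace event; the database name in the filter is a placeholder:

DECLARE @TracePath nvarchar(260);

-- path of the default trace (this reads the current rollover file)
SELECT @TracePath = t.path
FROM sys.traces t
WHERE t.is_default = 1;

SELECT te.name AS event_name,
       gt.DatabaseName,
       gt.LoginName,
       gt.SPID,
       gt.StartTime,
       gt.TextData
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) gt
    INNER JOIN sys.trace_events te
        ON te.trace_event_id = gt.EventClass
WHERE te.name = N'Object:Altered'
    AND gt.DatabaseName = N'Sandbox2'  -- placeholder database name
ORDER BY gt.StartTime DESC;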

And a snippet of some possible output looks like the following:

If you look closely at the output, you will probably notice that I don’t get a lot of information. I find out that there was an ALTER event to a database but I do not know what the event was nor do I see the text from the SQL statement that caused the change. So, at best, I get a partial view of the picture by looking at just the default trace.

With that in mind, let’s look at the error log and see if that provides better information.
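Here is a sketch of the error log scan. It leans on the undocumented (but long-standing) xp_enumerrorlogs and xp_readerrorlog procedures to walk every available log file looking for the word "offline":

DECLARE @ErrorLogs TABLE (ArchiveNo int, LogDate nvarchar(30), LogSizeBytes bigint);
DECLARE @Results TABLE (LogDate datetime, ProcessInfo nvarchar(64), LogText nvarchar(max));
DECLARE @LogNo int;

-- list every error log archive available to the instance
INSERT INTO @ErrorLogs (ArchiveNo, LogDate, LogSizeBytes)
EXEC master.sys.xp_enumerrorlogs;

DECLARE log_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT ArchiveNo FROM @ErrorLogs;

OPEN log_cursor;
FETCH NEXT FROM log_cursor INTO @LogNo;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- 1 = SQL Server error log, N'offline' = search string
    INSERT INTO @Results (LogDate, ProcessInfo, LogText)
    EXEC master.sys.xp_readerrorlog @LogNo, 1, N'offline';

    FETCH NEXT FROM log_cursor INTO @LogNo;
END

CLOSE log_cursor;
DEALLOCATE log_cursor;

SELECT LogDate, ProcessInfo, LogText
FROM @Results
ORDER BY LogDate DESC;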

This is pretty cool here. I am checking all of the log files available on SQL Server and scanning for a search term of “offline.”

Ok, this shows a little bit of promise. I can see that the database was indeed set to offline (or "option OFFLINE" was set to ON) with a spid and a timestamp. Unfortunately, the data shown here is far from being super useful in a post-mortem. The error log does not show who took the database offline.

What if there were a way to combine these two queries into a single result set and better correlate the results into something highly useful? Something like that seems possible, given we have a spid and a timestamp in each result set.

Let’s try it and see.
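A combined sketch might look like the following. It joins the default trace rows to the error log rows on the spid (parsed out of the ProcessInfo column) and on timestamps that land within a few seconds of each other – the five-second window is simply an assumption that can be tightened or loosened:

DECLARE @ErrLog TABLE (LogDate datetime, ProcessInfo nvarchar(64), LogText nvarchar(max));

-- current error log only, for brevity; loop the archives as shown earlier for a full sweep
INSERT INTO @ErrLog (LogDate, ProcessInfo, LogText)
EXEC master.sys.xp_readerrorlog 0, 1, N'offline';

DECLARE @TracePath nvarchar(260);
SELECT @TracePath = t.path
FROM sys.traces t
WHERE t.is_default = 1;

SELECT gt.StartTime,
       gt.LoginName,
       gt.HostName,
       gt.ApplicationName,
       gt.SPID,
       el.LogText
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) gt
    INNER JOIN sys.trace_events te
        ON te.trace_event_id = gt.EventClass
    INNER JOIN @ErrLog el
        ON TRY_CONVERT(int, REPLACE(el.ProcessInfo, 'spid', '')) = gt.SPID
        AND ABS(DATEDIFF(SECOND, el.LogDate, gt.StartTime)) <= 5
WHERE te.name = N'Object:Altered'
ORDER BY gt.StartTime DESC;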

If I use the timestamp from each result and the spid from each result set, I can surely join the two results together. As you can see, that is exactly what I did. I had to format the spid and timestamp a little bit – but that is ok. With that little bit of magic, I get a result set similar to the following now.

Now, I can easily see who put the database into offline mode. I can also tell you what the statement was when they executed the offline. I also know what the timestamp was when the database was put offline. This is usable data now when it comes time for a post-mortem or fact finding request.

The Wrap

Is this the most elegant method to trap this kind of data? Absolutely not, but it is usable. When it comes to something more sleek and modern, I would recommend a better tool such as Extended Events. I will show how to do this same task with greater ease and reliability through the use of XE in my next article.

Oh, if you are interested in any of my other articles about Auditing or Extended Events, I recommend you start here (for Auditing) or here (for XE).

T-SQL Tuesday #104: Just Can’t Cut That Cord

We all have our favorite scripts, tools, or utilities. Those are the things that help make our jobs easier. Some of us may have an unhealthy relationship with some of those scripts (similar in nature to the relationship many have with their phone). Whether or not the need to cut that proverbial cord exists, today we are not discussing the health of that dependence. Suffice it to say, sometimes we simply need to upgrade our scripts. How else can we get better scripts or make our scripts better than by sharing them?

This is precisely the goal Bert Wagner (b | t) seems to have envisioned for the 104th installment of TSQL Tuesday.

If you are interested in reading the original invite, you can find that here.

"For this month's T-SQL Tuesday, I want you to write about code you've written that you would hate to live without.

Maybe you built a maintenance script to free up disk space, wrote a query to gather system stats for monitoring, or coded some PowerShell to clean up string data.  Your work doesn’t need to be completely original either – maybe you’ve improved the code in some open source project to better solve the problem for your particular situation.”

There is a high probability that, through the sharing of your script, somebody out there can benefit from it. In addition, it is very likely that somebody will make a suggestion to help make your script better. Worst case (emphasis on worst case here), you have the script stored somewhere with half-decent instructions on what it does, making it easily accessible for you to use again and again. Just in case you forget you have it out there – you can google for it again and find it on your own blog ;).

Personally, I have been able to find and re-use some of my older scripts. Not only do I get to re-discover them, but I also get to re-imagine a new use or improvement for the script.

Brief Intermission

A shout out is absolutely necessary for Adam Machanic (twitter) for picking the right blog meme that has been able to survive so long in the SQLFamily. This party has helped many people figure out fresh topics as well as enabled them to continue to learn.

Easy Access

While pondering the topic for today, I had the thought occur about how frequently I post a script on my blog already anyway. An easy out for this topic would have been to re-share one of those old scripts. For instance, I could easily redo a recent article about server access that has a couple scripts demonstrated in it. Or I could go back a few years to my articles about foreign keys (here or here) and space use (here or here). Even more intriguing could be to re-envision some of my articles on Extended Events. But where would the fun in that be?

Rather than take the easy road and rehash something, I have something different. This one goes hand in hand with the numerous articles and scripts I have previously provided on auditing – yet it is different.

Not every shop can afford third-party software or even Enterprise edition, so they have to come up with a different way to audit their database instances. One of the problems with a home-grown solution is ensuring the data is not stored local to the server (there are lots of good reasons for that). Here is an example of what I did for one client that happened to have a developer who had found a back door giving him SA access to the SQL Server instance; he was changing things and trying to cover his tracks – even after being warned.

First the query
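Here is a trimmed sketch of the idea rather than the exact production query. The linked server name, the trace event filter, and the column list are all assumptions to adapt to your own environment:

-- run from the restricted administrative server; [AuditTarget] is a hypothetical linked server
SELECT EventName,
       ServerName,
       DatabaseName,
       LoginName,
       TargetLoginName,
       SessionLoginName,
       StartTime
FROM OPENQUERY([AuditTarget],
    'SELECT te.name AS EventName,
            gt.ServerName,
            gt.DatabaseName,
            gt.LoginName,
            gt.TargetLoginName,
            gt.SessionLoginName,
            gt.StartTime
     FROM sys.traces t
     CROSS APPLY sys.fn_trace_gettable(t.path, DEFAULT) gt
     INNER JOIN sys.trace_events te
        ON te.trace_event_id = gt.EventClass
     WHERE t.is_default = 1
       AND te.name LIKE ''Audit Add%''');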

This query will be run from a job on a different server that is restricted in access to just a select few people. I do rely on the use of the default trace in this query. I am also reliant upon a little bit of sneaky behavior. If I run this from a separate server, prying eyes are usually unlikely to find that it is running, which makes it easier to catch them red-handed. In addition, if they discover via some sort of trace, and by a lot of luck, that it is running, then they have no access to the remote server to alter anything that was captured.

The query does go out to the default trace and pull back any changes to permissions or principals on the server in question. The captured data is then stored in a database that is also restricted to a select few people. Lastly, the captured data can be routinely queried, or automated reports can be created to send email notifications of changes encountered.

The second part of the trickery here is that I am using a linked server to perform the queries (with a slight change, I could also do this via PowerShell, which will be shown in a future article). The linked server query uses the OPENQUERY format and sends the default trace query to the remote server. Since I am running this from a job on an administrative server that pulls a limited data set, I am not overly concerned with the linked server setup here.

Storing It

Once I query the data, I need to put it somewhere on my administrative server. The table setup for that is very straightforward.
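A sketch of that table (the name and column list simply mirror the select shown above and are assumptions, not the exact production definition):

CREATE TABLE dbo.AuditSecurityEvents
(
    AuditID          int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
    EventName        nvarchar(128) NOT NULL,
    ServerName       nvarchar(128) NULL,
    DatabaseName     nvarchar(128) NULL,
    LoginName        nvarchar(128) NULL,
    TargetLoginName  nvarchar(128) NULL,
    SessionLoginName nvarchar(128) NULL,
    StartTime        datetime NULL,
    CaptureDate      datetime NOT NULL DEFAULT (GETDATE())
);

The agent job then simply wraps the earlier OPENQUERY select in an INSERT INTO this table.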

After creating this table, I am ready to store the data. All I need to do is throw the audit query into an agent job and schedule it to run on a regular schedule. For my purposes, I usually only run it once a day.

The Wrap

This has been my diatribe about service and giving back to the community. When done properly, there is a natural born effect of enhancing one’s personal life equal in some way to the amount of effort given towards the community.

Oh, and if you are interested in some of my community contributions (which according to Jens Vestargaard is an awesome contribution), read this series I have published.

Linked Servers and Stats

Linked Servers

A linked server is a fabulous feature in SQL Server to help accomplish various data tasks between local and remote servers. There is a time and a place for the use of linked servers. Unfortunately, some are of the belief that a linked server is the solution to all problems including the way to end world hunger.

You may recall that I have written about linked servers on a few occasions. Usually that is due to the relationship between a poorly performing process and the linked server. There are prime examples here and here. On the other hand, here are some examples of using a linked server in a much better light: here and here.

Using a linked server is entirely up to you. If you choose to use a linked server, you will need to understand there are risks involved – beyond just the documentation and performance nightmares it can cause. Today, I will share one of those fabulous gotchas – mostly because it has cropped up more than once with my clients.

Remote Stats

On more than one occasion I have had an emergency request because everything was broken. The "everything" in almost every incident is an SSIS package that is failing with error messages. The error message will typically have text similar to the following:

Could not locate statistics ‘_WA_Sys_00000015_346C780E’ in the system catalogs.

Due to the error, the package fails processing and grinds to a halt. When diving into the package it is discovered that the missing stats happen to be coming from a linked server query. This raises a big bright blaring alarm for me. Why is the SSIS package accessing the data via a linked server? This is rather counter-productive and definitely contrary to what is desired from a performance perspective.

You can certainly see why this is a sudden emergency, right? Absolutely nothing is working anymore, right? Well, at least that is the way it seems for the person reporting the issue. Understanding that their realm of work has come to a halt is important in order to get to a resolution more quickly. Knowing that the point of failure is not as grand in scale as claimed is critical because it can help you keep your cool while assessing the real problem.

Since the problem presents itself when accessing a specific object or even set of objects across a linked server, we have our scope of the issue scaled down quite a bit already. Now, what do we do from here? Obviously the statistic with that name is missing, right? Don’t be so hasty to assume it is actually missing. Let’s verify that the stat is indeed missing first. Even better – let’s just eliminate as many variables as possible. We can do this by querying the affected objects directly from SSMS.

Easy Troubleshooting for the DBA

For me, it is essential to eliminate variables when we run into a failure. In this case, I have just a few major variables that can potentially be causing this problem. Those variables are:

  1. SSIS
  2. Linked Server
  3. The query itself
  4. Security
  5. Bad Plan
  6. The stat is legitimately missing

I can easily assess the legitimacy of each variable through various quick tests. To eliminate or validate the “query” variable, I can issue a very simple query to retrieve data from the affected object. So let’s eliminate variables 1 and 3 in one fell swoop.
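A simple test is enough here – just a plain four-part-name select across the linked server (the server, database, and table names below are placeholders for the affected objects):

-- bare-bones query across the linked server, outside of SSIS
SELECT TOP (100) *
FROM [RemoteSrv].RemoteDB.dbo.AffectedTable;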

This query, in this scenario, produces the very same "Could not locate statistics" error shown above:

This, while disappointing, is actually quite productive. It has eliminated two variables for me. I now know that SSIS is not related to the problem. I also know that query complexity is not related to the problem. There are still a few variables left to either validate or eliminate. Since I know the problem occurs when querying via the linked server, let's try querying the remote server directly (not via the linked server).

Well, while that does not entirely eliminate or validate any variables, it does tell me that the problem is still specifically related to the linked server. What if I try to use OPENQUERY instead of the traditional linked server query?
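The OPENQUERY variant of the same test looks roughly like this (again with placeholder names):

-- same data, but the statement is executed on the remote server itself
SELECT *
FROM OPENQUERY([RemoteSrv], 'SELECT TOP (100) * FROM RemoteDB.dbo.AffectedTable');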

Wow, look at that! Ok, so this is a bit of trickery because I have told SQL Server to execute that query on the linked server as if it were a "local" query on that server. This does work without error and definitely pushes the findings toward the conclusion that it is a linked server problem.

While the OPENQUERY version works, I do still need to eliminate some variables from the problem. One of the variables is security. Since the error mentions sp_table_statistics2_rowset, I googled around for that proc and found some mentions that maybe there are some column denies related to the stats in question that are giving the linked server some major fits. Or it could also be insufficient permissions to execute DBCC SHOW_STATISTICS. I highly doubt this to be an issue since the OPENQUERY version works while using the same pass-through authentication of the linked server that the traditional linked server query would use.

In order to eliminate security as a potential cause, the test is simple (while it could be more complex, I went for the jugular to just eliminate the theory as quickly as possible) – I will add my account as the pass through account (which is a sysadmin on the remote server) and then query the linked server all over again. Suffice it to say, there was no change in the result – the error persisted.

This does not just yet eliminate the security variable because there could be a cached plan somewhere. So, just to be sure, I chose to flush the cache on both the linked server and the local server. Running the simple “Select *” query all over again yielded no difference in the query results. I can now safely say that the problem is not related to a cached plan nor is it related to the security. At this point, I set the linked server security back to the way it was. I have effectively eliminated all variables but 2 and 6 (linked server and the stat is missing).

Let’s eliminate the missing stat variable right now.
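A sketch of that check, run on the remote (linked) server in the database that owns the table, with the drop statement built into the result set:

-- look for the stat named in the error and generate a ready-to-run drop statement
SELECT s.name AS stat_name,
       OBJECT_SCHEMA_NAME(s.object_id) AS schema_name,
       OBJECT_NAME(s.object_id) AS table_name,
       s.auto_created,
       'DROP STATISTICS ' + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + '.'
           + QUOTENAME(OBJECT_NAME(s.object_id)) + '.' + QUOTENAME(s.name) AS drop_stmt
FROM sys.stats s
WHERE s.name = N'_WA_Sys_00000015_346C780E';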

Undoubtedly you have noticed that I built a drop statement into the result set from this particular query. That aside, the green highlighted row is the very stat that was producing the error. This stat is most certainly available on the source server (linked server). This entirely eliminates the sixth variable because the stat is not missing.

This brings us to the final step – see what happens when we drop the stat and try the query again. I have the drop statement ready to roll, so let’s just run that and see what happens. My risk here is very small. This is an auto-generated stat and will be recreated if needed. After dropping the stat, I run the simple “Select *” query across the linked server again and guess what? It works perfectly fine.

In my opinion, we can just skip a few of these steps if the problem is on a system generated stat and just go straight to dropping the stat and trying again.

That said, the better solution in this case would be to do one of two things.

  1. Modify the Package to not use the linked servers in this way and create a data source connection for each of the different server connections.
  2. Modify the Package to use OPENQUERY style queries.

My preference between those two options would be to create specific data sources. It seems very silly to pull data across the linked server to then be consumed via SSIS package.

I have just illustrated what I would classify as a nuisance problem related to linked servers. Simplifying the approach as I described with the two potential solutions would alleviate this nuisance while also giving an inevitable performance boost.

Parting thought: One observation of the “fix” that involves dropping the stats is that on busy systems it is a very short lived kluge. I have seen the problem re-surface within five minutes on busy systems. The stats are not really a problem, it is just how the linked server connection interfaces with the stats.

Endpoint Owners – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server Basic. Some have hash-tagged this as #backtobasics. Here is the link to that challenge sent via tweet.

While the challenge may have been for the year 2016 and I haven't contributed in several months, it is still a worthwhile effort. In that vein, I am adding this article to that series.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Endpoints

You may have heard the term endpoints thrown around in technical discussion and wondered "what the heck is an endpoint?" Well, that is a good question. An endpoint, in the simplest form, is a connection or point of entry into SQL Server.

Another way of thinking about an endpoint is to look at the ends of a line. Generally speaking, there is a stop point at each end of the line. At this stopping point, the line may connect to something else and therefore be a point of entry into that “something else”.

When we deal with SQL Server, there are a handful of default endpoints and then a handful of other types of endpoints. Figuring out the default endpoints is pretty easy. The following query will expose the default endpoints.
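A sketch of that query (the endpoint_id filter is explained just below):

-- default (system) endpoints all have an endpoint_id below 65536
SELECT e.endpoint_id,
       e.name,
       e.protocol_desc,
       e.type_desc,
       e.state_desc
FROM sys.endpoints e
WHERE e.endpoint_id < 65536;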

Executing that query will produce results similar to the following:

Some of those might look pretty straightforward, and then you get to that VIA endpoint. Don't worry about that endpoint. The VIA endpoint is on the deprecation list. All of these default endpoints will be owned by sa and don't require you to do anything with them per se other than to understand they exist. You may have noticed that I filtered my results by looking only at endpoints with an id of less than 65536. Any endpoint id lower than this number is a default endpoint. All others are user defined endpoints.

Endpoint Owners

So far so good. This is pretty straightforward to this point. If you have implemented anything like mirroring, Availability Groups, or Service Broker, then you may be interested to know that you have also created additional endpoints in the instance. When you create an endpoint, did you know that you become the owner of that endpoint by default? It is very similar to when you restore a database or create a database – the default owner of that database will be the person that restored/created it.

Do you know who owns your endpoints? Did you create all of the endpoints? Do you know if you have any additional endpoints beyond the default endpoints? If you cannot answer yes to all of these questions, then you will probably want to figure out what the endpoints are and who owns those endpoints. Let’s try that with a slight modification to the previous query.
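The modified sketch below adds the owner and also lists every endpoint type (via a hard-coded list of the type descriptions) so that types with no endpoint defined still show up:

-- endpoint owners, including endpoint types that are not currently in use
SELECT et.type_desc AS endpoint_type,
       e.name AS endpoint_name,
       e.protocol_desc,
       e.state_desc,
       SUSER_NAME(e.principal_id) AS endpoint_owner
FROM (VALUES ('TSQL'), ('SERVICE_BROKER'), ('DATABASE_MIRRORING'), ('SOAP'), ('VIA')) et (type_desc)
    LEFT JOIN sys.endpoints e
        ON e.type_desc = et.type_desc
ORDER BY et.type_desc, e.name;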

This will yield results similar to the following:

I took the script a step further than necessary. I wanted to illustrate potential endpoint types that may not be in use. This script covers both in use endpoint types and those not used. In my results you may note that I have a DATABASE_MIRRORING endpoint. As reality would actually have it, it is an Availability Group endpoint but those are presented as DATABASE_MIRRORING endpoints.

Note that my mirroring endpoint (aka Hadr_endpoint) is owned by “YourDomain\DBAdmin”. What if the owner of that particular endpoint was no longer present in the organization and I wanted to change it to something more sustainable? Well, I could do the following:
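The change of ownership is a one-liner:

-- reassign the Availability Group / mirroring endpoint to sa
ALTER AUTHORIZATION ON ENDPOINT::Hadr_endpoint TO sa;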

In this case, the default endpoints are owned by sa, and it makes enough sense to assign sa as the owner of the mirroring endpoint as well. Take notice that the name of the endpoint is required in order to reassign the owner. The name of the endpoint follows the :: in the script. So, whatever your endpoint name happens to be, just replace "Hadr_endpoint" (the name following the ::) in my script with it.

Recap

Endpoints are a fundamental piece of the puzzle with SQL Server.  Getting to know your endpoints and the owners of those endpoints is an essential component of knowing your environment. Who knows, it may come to pass that the owner of an endpoint may no longer exist in your environment or possibly lose permissions along the way. Knowing who owns the endpoint may just save three or four grey hairs when that day comes.

SSRS Subscription Schedules – Enhanced

Reporting Services

 

Over the past couple of articles I have illustrated some of the fun that can be had when dealing with the scheduling capabilities of Reporting Services (SSRS). The first article covered how to create more advanced schedules (from the SSRS point of view). In another article, I showed how to retrieve scheduling information from the ReportServer database. In that last article, I also promised a follow-up article for more in-depth scheduling details.

SSRS provides the capability to review the scheduled reports (subscriptions) in a far more detailed fashion than shown in that previous article. That ability is held within the ReportServer database. This article will dive into the source of this scheduling information within the ReportServer database.

This dive will be a bit more detailed than the first time I dove into SSRS scheduling – here. That particular dive was missing an important set of data.

Deeper Dive

My first dive into building a report of the report schedules was pretty comprehensive, and I used it quite frequently. Many others also used it regularly in their environments. So the first attempt wasn't bad by any stretch. The main problem (at least for now) is that the script does not account for any of the custom schedules that can be built. I have to be honest in that I hadn't really considered that possibility. Time and experience change that perspective. When that perspective changes, it is time to dive back in and add coverage for the shortcoming in the script.

When I dove back in to fetch the custom scheduling information, I realized there was a discrepancy even in the old report in that I was not gathering Job information pertinent to the schedule. Recall that SSRS subscriptions are performed via the SQL Agent. With that in mind, it is reasonable that the Agent job information is pertinent and germane to the report subscription and probably should be included in a report. Couple that with the methods on creating custom schedules for the SSRS reports, and we have a resounding need to ensure that data is trapped properly.

Due to this epiphany, I now have a more complete script that includes both the data from SQL Agent and the data from the ReportServer database in regards to subscriptions and schedules of reports.

Script

In pulling the data together from the two sources, I opted to return two result sets. Not just two disparate result sets, but rather two result sets that each pertained to both the agent job information as well as the ReportServer scheduling data. For instance, I took all of the subscriptions in the ReportServer and joined that data to the job system to glean information from there into one result set. And I did the reverse as well. You will see when looking at the query and data. One of the reasons for doing it this way was to make this easier to assimilate into an SSRS style report.

As you can see, it is not a short script. By any fair measure, it is also considerably more complex than the XML version that was recently posted (and mentioned earlier in this article). That said, it is many times more flexible and complete than the XML version as well. I do continue to use the bit math for figuring the schedules as I did in the first version of the script. When done this way, I can handle the custom schedules as well as get extensive details about the schedule from both the msdb and ReportServer databases.
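While the full script is too long to reproduce here, a heavily trimmed sketch of the core join between the two sources looks something like the following. The ReportServer table and column names are from a default install and can differ between SSRS versions, and the join to msdb relies on the convention that SSRS names its Agent jobs after the ScheduleID – treat all of it as an assumption to verify in your environment:

SELECT c.Name         AS ReportName,
       c.Path         AS ReportPath,
       su.UserName    AS SubscriptionOwner,
       s.Description  AS SubscriptionDescription,
       s.LastStatus,
       s.LastRunTime,
       sch.Name       AS ScheduleName,
       j.name         AS AgentJobName,
       j.enabled      AS AgentJobEnabled
FROM ReportServer.dbo.Subscriptions s
    INNER JOIN ReportServer.dbo.Catalog c
        ON c.ItemID = s.Report_OID
    INNER JOIN ReportServer.dbo.Users su
        ON su.UserID = s.OwnerID
    INNER JOIN ReportServer.dbo.ReportSchedule rs
        ON rs.SubscriptionID = s.SubscriptionID
    INNER JOIN ReportServer.dbo.Schedule sch
        ON sch.ScheduleID = rs.ScheduleID
    LEFT JOIN msdb.dbo.sysjobs j
        ON j.name = CONVERT(nvarchar(128), rs.ScheduleID);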

Recap

SSRS provides built-in mechanisms to help report on the scheduled reports that have been deployed. This version of the report will help you retrieve the data from both a job perspective and from the report scheduler perspective. Through this series of articles, you should be confident in being able to now create custom schedules as well as accurately report on any reports that have specific subscriptions/schedules.

SSRS Subscription Schedules

Reporting Services

Reporting Services (SSRS) is a tool that permits you to create and deliver feature rich reports. The reports can be delivered in various formats and can even be scheduled to be delivered at various times. I even recently wrote about creating more advanced custom schedules.

The ability to create content delivery schedules, or to subscribe to report content delivery, imposes a requirement to also know when the various reports are scheduled to be delivered.

SSRS provides the means to be able to review the scheduled reports (subscriptions). That means is held within the ReportServer database. This article will help to uncover one of the sources of this scheduling information within the ReportServer database.

Review Schedules

When looking into the database for SSRS, I can see there are different means to review the report schedules. This article is going to cover just one of those methods. And if I am going to be entirely up front about this method, I don't like it, and I recommend that it not be used.

I can hear the moans now. “If you don’t like it, then why show it to us?” Well, that is a very good question and there is a very good reason for this decision. A lesson I learned a long time ago is sometimes you need to learn the hard way, or less desirable way, to do various things. One of my favorite Calculus teachers from years ago drilled this into my head over and over again. Why? Well, there are three good reasons that come to mind: a) it makes the more desirable method seem much easier, b) it helps you to appreciate the more desirable method all that much more, and c) because if all else fails, you will have another method to fall back to just in case.

Less Disérables

The least desirable method (at least of the methods I will share) is to parse XML from a field stored in the ReportServer database. If I look into the Schedule table within the ReportServer database, I will find a column called MatchData. Up front, this field is not very intuitively named. I would not think this field actually represented the schedule, but it actually does.

Before we start diving into parsing XML, we need an example of what this XML may look like. The following will provide that very example that we need.
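Since I can't paste the stored value here directly, the following is an illustrative stand-in for what a weekly schedule stored in MatchData looks like. The element names mirror a ScheduleDefinition as I have seen it, but treat the exact shape (and any namespaces) as something to confirm against your own ReportServer database:

DECLARE @MatchDataSample xml = N'
<ScheduleDefinition>
  <StartDateTime>2018-01-06T21:00:00</StartDateTime>
  <WeeklyRecurrence>
    <WeeksInterval>1</WeeksInterval>
    <DaysOfWeek>
      <Monday>true</Monday>
      <Wednesday>true</Wednesday>
      <Friday>true</Friday>
    </DaysOfWeek>
  </WeeklyRecurrence>
</ScheduleDefinition>';

SELECT @MatchDataSample AS MatchDataSample;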

Are your gears grinding yet? The XML is not terribly difficult to follow. I am sure you have realized the problem with this format at this point. If I query this to make it human-readable in a tabular format (you know, DBA format), I will end up with a really wide table that is pretty ugly to look at (unless I get super creative and combine fields, etc.).

Let’s take a look at the query to parse something like the preceding XML example.
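A trimmed sketch of that parsing query is below; the full version repeats the same .value() pattern for the daily, monthly, and minute-based recurrence elements as well. MatchData is stored as ntext in many ReportServer versions, hence the double CONVERT, and the XPath element names follow the illustrative sample above (some SSRS versions may also require a WITH XMLNAMESPACES clause):

SELECT s.ScheduleID,
       s.Name AS ScheduleName,
       x.md.value('(/ScheduleDefinition/StartDateTime)[1]', 'datetime') AS StartDateTime,
       x.md.value('(/ScheduleDefinition/WeeklyRecurrence/WeeksInterval)[1]', 'int') AS WeeksInterval,
       x.md.value('(/ScheduleDefinition/WeeklyRecurrence/DaysOfWeek/Monday)[1]', 'varchar(5)') AS Monday,
       x.md.value('(/ScheduleDefinition/WeeklyRecurrence/DaysOfWeek/Tuesday)[1]', 'varchar(5)') AS Tuesday,
       x.md.value('(/ScheduleDefinition/WeeklyRecurrence/DaysOfWeek/Wednesday)[1]', 'varchar(5)') AS Wednesday,
       x.md.value('(/ScheduleDefinition/WeeklyRecurrence/DaysOfWeek/Thursday)[1]', 'varchar(5)') AS Thursday,
       x.md.value('(/ScheduleDefinition/WeeklyRecurrence/DaysOfWeek/Friday)[1]', 'varchar(5)') AS Friday,
       x.md.value('(/ScheduleDefinition/WeeklyRecurrence/DaysOfWeek/Saturday)[1]', 'varchar(5)') AS Saturday,
       x.md.value('(/ScheduleDefinition/WeeklyRecurrence/DaysOfWeek/Sunday)[1]', 'varchar(5)') AS Sunday
FROM ReportServer.dbo.Schedule s
    CROSS APPLY (SELECT CONVERT(xml, CONVERT(nvarchar(max), s.MatchData)) AS md) x;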

And there we have that ugly query to produce a really wide ugly result set. The query is not difficult to write. It’s just extremely repetitive. In similar fashion, the results are very repetitive. This makes, in my eyes, this particular method less desirable.

I haven’t even gotten to the part about the shortcoming in scheduling reports through SSRS that I wrote about recently – here. In that article I discussed a workaround to overcome the SSRS scheduling options. If you employ methods such as I discussed there, then this query will never fully cover the scheduling related to your reports. Because of that, I will be discussing the better solution in the next article.

Recap

SSRS provides built-in mechanisms to help report on the scheduled reports that have been deployed. While parsing the XML is less desirable than what I will be sharing in the near future, it is better than doing nothing at all. I recommend you start looking into the various report schedules you may already have in your environment. Also, stay tuned for the next article that will better show these schedules.
