Last Restore of a Database – Back to Basics

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server basic. Some have hashtagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Database Restore

As a data professional, one should be somewhat cognizant of backups with regard to the data. From the point of the creation of the backup, we can go in one of two directions with this article. Due to the importance of each of the directions, I will devote a basics article to each.

The order of publication of these articles in no way denotes a level of priority for the topics. I feel each is equally important. The two directions I see both deal with the restore of the backup that was created: one direction is reporting, and the other is the actual restore.

Last Restore Date

Something that I seem to be getting asked more and more frequently is how to determine when a database was last restored. Maybe it is just a hot topic and maybe it will be short lived, but it sure seems to be something that is on the tip of people's minds of late. So, how does one determine the last time a database was restored? The answer is actually quite simple. Within the msdb database, there is a table called restorehistory that maintains the history of database restores.

The restorehistory table stores information that will help you better understand what kind of restore was performed, who did the restore, when it was done, and even some information about the backup that was used to perform the restore. You can read more about the specifics of this table from the msdn article – here.

A quick means to query this table is via the following script:
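Something along these lines will do the trick. The join to msdb.dbo.backupset (to pull the backup dates) is my own addition and is optional, and TestDB is simply the example database name, so adjust as needed.

USE msdb;
GO

SELECT rh.destination_database_name
        , rh.restore_date
        , rh.restore_type
        , rh.user_name
        , bs.backup_start_date
        , bs.backup_finish_date
    FROM dbo.restorehistory rh
        INNER JOIN dbo.backupset bs
            ON rh.backup_set_id = bs.backup_set_id
    WHERE rh.destination_database_name = 'TestDB'
    ORDER BY rh.restore_date DESC;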

Now, that is an extremely simple query looking at my TestDB database. The results of that query on my system are as follows:

[Screenshot: restore history results for TestDB]

As is shown in the preceding illustration, I have multiple restores and restore types for this particular database. This is a good start. Let's take this query up a notch. Not all production environments will have databases that are restored on the same server. How do we report on the databases that have been restored on the production server versus the databases that merely exist on the production server? That is where this next query comes into play.
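A sketch of such a query follows. The parameter names (@DBName and @OnlyRestored) and the extra joins to backupset and backupmediafamily are my own choices; the idea is simply to join sys.databases to the restore history so that databases without any restores can still show up.

DECLARE @DBName         sysname = 'AdventureWorks2014' -- NULL returns all databases
        , @OnlyRestored bit     = 0;                   -- 1 = only databases with a restore history

SELECT d.name AS DatabaseName
        , rh.destination_database_name
        , rh.restore_date
        , rh.restore_type
        , rh.user_name
        , bs.backup_start_date
        , bmf.physical_device_name AS BackupFile
    FROM sys.databases d
        LEFT OUTER JOIN msdb.dbo.restorehistory rh
            ON d.name = rh.destination_database_name
        LEFT OUTER JOIN msdb.dbo.backupset bs
            ON rh.backup_set_id = bs.backup_set_id
        LEFT OUTER JOIN msdb.dbo.backupmediafamily bmf
            ON bs.media_set_id = bmf.media_set_id
    WHERE ( d.name = @DBName OR @DBName IS NULL )
        AND ( rh.restore_history_id IS NOT NULL OR @OnlyRestored = 0 )
    ORDER BY d.name, rh.restore_date DESC;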

In this latter query, I have added a few things to help for various reporting needs. You never know when the CIO or an auditor is going to come to you with a request to filter the data differently for restored databases. In this vein, I added parameters that will help filter the results down to a specific database, return all databases, or return all databases that have a restore history on the server. In this case, I decided to use the AdventureWorks2014 database as my example database. You will want to change the parameter as needed. If you pass a null value to the database name, then all databases can be returned in the result set – depending on the values used for the remaining parameters.

Caveat

This is all fine and well if the databases are restored onto the same server as the source database, right? What about the case where the database is restored to a different server? Well, the query still works for reporting the restored databases on the server where the database is restored. The complexity comes when trying to correlate back to the original production server. My recommendation here is good documentation and an administrative linked server that is not accessible via the application accounts or any user but the DBA group.

Recap

I have provided two quick examples of how to retrieve the restore history for your databases. I recommend that the restore history be checked on a routine basis. You never know when an over-permissioned user may decide to restore a database for you and then have all of the rest of the users coming to you to determine what happened. This will also be essential when we routinely test our database backups. Stay tuned for the next article where I will explore that aspect of database restores.

Login from an Untrusted Domain – Back to Basics

Categories: News, Professional, SSC
Published on: November 8, 2016

Remember When…

Back in late December of 2015, a challenge of sorts was issued by Tim Ford (twitter) to write a blog post each month on a SQL Server basic. Some have hashtagged this as #backtobasics. Here is the link to that challenge sent via tweet.

I did not officially accept the challenge. Was an official acceptance required? I don’t know. I do know that I think it is a good challenge and that I intend to participate in the challenge. I hope I can meet the requirements and keep the posts to “basics”. Let’s hope this post holds up to the intent of the challenge.

With this being another installment in a monthly series, here is a link to review the other posts in the series – back to basics. Reviewing that link, you can probably tell I am a bit behind in the monthly series.

Untrusted Domain

SQL Server is full of wonderful features and tools. One feature that is not wonderful and is blatantly frustrating is the error message that is supposed to help you understand why you cannot connect to SQL Server.

Login failed. The login is from an untrusted domain and cannot be used with Windows authentication.

Are you really sure the login is from an untrusted domain? I recently ran into this problem and found the message to be less than helpful, even though it seems basic enough in what it should represent. When I encountered the problem, there was little useful information available that pertained strictly to my situation.

Wait, am I saying that this error can be reported for various legitimate problems dealing with logins? Absolutely, yes! Let's recap some of my givens, because these facts are important to troubleshooting the real underlying problem, which, in turn, makes this more of a "basics" type of post.

  1. The server was joined to the domain.
  2. The user in question was granted sa permissions to the instance.
  3. The user could log in to the instance locally without error.
  4. My personal domain account could log in locally without error.
  5. My personal domain account could log in remotely without error.
  6. The user in question could not log in remotely. The error thrown was the untrusted domain error.
  7. The version of SQL Server was SQL Server 2014 Express edition.

Now, talk about a head scratcher from the get-go. The problem only seems to occur when trying to log in remotely. So, as an example, here is what I was basically seeing. First, the TSQL to create my test login.
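The original account name is not something I can share, so the domain and login below are placeholders; the pattern, though, is simply a Windows login granted sysadmin, matching the givens listed above.

USE [master];
GO

-- Placeholder domain account; substitute the real domain\account being tested
CREATE LOGIN [SomeDomain\TaskAccount] FROM WINDOWS
    WITH DEFAULT_DATABASE = [master];
GO

-- Grant the login sysadmin permissions, per the givens above
ALTER SERVER ROLE [sysadmin] ADD MEMBER [SomeDomain\TaskAccount];
GO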

With that user in place, let's try to connect from a remote server to see what will happen.

[Screenshot: remote connection attempt failing with the untrusted domain error]

Well, that is far from useful. I can definitely see that my user is properly created. I have more than adequate permissions. If the user was truly from an untrusted domain, the creation of the login would have failed.

As it turns out, there is something specific to SQL Express with remote logins that generates this problem. Some have posited that the account needs to be added to the local administrators group. This is not accurate.

Better Solution

When you encounter this problem, there is an easy solution. Unfortunately, it is not really a SQL solution; rather, it is more of a sysadmin type of solution. The fix is to employ group policy (either a local or a domain group policy). The specific policy that should be changed is called "Access this computer from the network".

[Screenshot: the "Access this computer from the network" policy in the Group Policy editor]

My advice here would be to use a domain-based group policy if you have a bunch of servers with this need. In my case, we needed to touch 120+ computers, and manually setting this GPO on 120+ servers would have been very tedious.

For a single server like this example, I added it manually as shown below:

[Screenshot: the account added to the "Access this computer from the network" policy]

With the account granted this permission, all that is left is simply to try and connect.

[Screenshot: successful remote connection after the policy change]

This is a great result. Now the account can connect and perform the work it was intended to do. In this case, it is a task account that will serve as a proxy account to perform remote tasks such as backups.

Recap

 

Troubleshooting login failures is a core concept for the data professional. At times the cause of the login failure is far from intuitive. In this case, the failure is non-indicative of the actual problem. The real problem is that SQL Express does not like remote logins without a little bit of hoop jumping. Learning how to troubleshoot the problem is essential to becoming a better data professional.

Finding Deprecated Uses in SQL Server

 


How well do you know your environment? You probably know all of the jobs that are running, the frequency that indexes need to be rebuilt, and even which users have which level of access to each object in the SQL Server instance. Do you know that your applications are accessing deprecated datatypes over 300 million times a week? What if your TSQL constructs are a bit archaic? Do you know that the code needs to be updated? Do you know how to find deprecated uses in SQL Server?

In this article, I will explore how to use Extended Events to track feature use and abuse. To be more precise, I will share how this tool can help you better understand all of the ways that your applications have been abusing your database by continuing to employ the use of deprecated features, syntax, or constructs in general. In case you are a bit behind in your exploration of XEvents, I have the perfect solution for you – my series on the topic that is continually growing. You can explore the full list of articles in the series by visiting the table of contents – here.

Audit Deprecated Uses

I would dare say that most data professionals think there is some use of deprecated constructs, datatypes, or features within their environment. I would double down on that and say that most do not know just how bad it really may be. To find out just how bad it really is, we need to audit for the use and abuse of these deprecated items.

Right here would be a good time to point out that your mileage may vary. Some items that are deprecated are more painful than others. Some may have been on the deprecation list for 10+ years at this point. The point is, know your environment and then use good judgement to determine which items in your results need the most attention to fix and update. Why? Well, things really may break, especially if you are looking to upgrade to a new version of SQL Server. Just because an item is still available in your current version does not ensure it will still be available in a future release of SQL Server.

Now for the juicy stuff. As I mentioned, when it comes to finding when and where a deprecated feature or syntax is employed, there are a couple of neat little events within Extended Events that can help track each time one is accessed. How do we do that? Use the deprecation_announcement and deprecation_final_support events. To track these events, I have the following script to create an "audit" session that does exactly that.
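This version targets SQL Server 2012 and later. The session name, file path, and the list of actions are my own choices; trim or extend them to suit your environment.

CREATE EVENT SESSION [AuditDeprecatedUses]
ON SERVER
    ADD EVENT sqlserver.deprecation_announcement
    (   ACTION ( sqlserver.sql_text
                , sqlserver.database_name
                , sqlserver.client_app_name
                , sqlserver.client_hostname
                , sqlserver.username )
    ),
    ADD EVENT sqlserver.deprecation_final_support
    (   ACTION ( sqlserver.sql_text
                , sqlserver.database_name
                , sqlserver.client_app_name
                , sqlserver.client_hostname
                , sqlserver.username )
    )
    ADD TARGET package0.event_file
    (   SET filename = N'C:\Database\XE\AuditDeprecatedUses.xel'
            , max_file_size = ( 50 )
            , max_rollover_files = ( 4 )
    )
WITH ( MAX_MEMORY = 4096 KB
        , EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS
        , MAX_DISPATCH_LATENCY = 30 SECONDS
        , STARTUP_STATE = ON );
GO

-- Start the session so it begins collecting deprecation events
ALTER EVENT SESSION [AuditDeprecatedUses] ON SERVER STATE = START;
GO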

 

And just in case you are using 2008 or 2008R2, use this version instead.
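This 2008/R2 flavor is a close approximation: it swaps the event_file target for the older asynchronous_file_target (with its metadata file) and uses the database_id action, since the database_name action is not available on that version. Again, the session name and paths are placeholders.

CREATE EVENT SESSION [AuditDeprecatedUses]
ON SERVER
    ADD EVENT sqlserver.deprecation_announcement
    (   ACTION ( sqlserver.sql_text
                , sqlserver.database_id      -- no database_name action in 2008/R2
                , sqlserver.client_app_name
                , sqlserver.client_hostname
                , sqlserver.username )
    ),
    ADD EVENT sqlserver.deprecation_final_support
    (   ACTION ( sqlserver.sql_text
                , sqlserver.database_id
                , sqlserver.client_app_name
                , sqlserver.client_hostname
                , sqlserver.username )
    )
    ADD TARGET package0.asynchronous_file_target
    (   SET filename       = N'C:\Database\XE\AuditDeprecatedUses.xel'
            , metadatafile = N'C:\Database\XE\AuditDeprecatedUses.xem'
    )
WITH ( MAX_MEMORY = 4096 KB
        , EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS
        , MAX_DISPATCH_LATENCY = 30 SECONDS
        , STARTUP_STATE = ON );
GO

ALTER EVENT SESSION [AuditDeprecatedUses] ON SERVER STATE = START;
GO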

There are slight differences between these two sessions. First, in the 2008 version of the script, I have to provide database IDs instead of names; that is a shortcoming of the 2008 and 2008 R2 implementations of Extended Events. Additionally, the file target is different between the two (recall that the file target was renamed in 2012). And lastly, there are a few actions included in the 2012 version of the script that are not available in 2008 and R2.

With the session in place, I am now going to run through some sample scripts that will generate deprecation events. I am sticking with my 2014 instance for this segment. That is important to note because different events may occur on different versions of SQL Server. Additionally, the parse script I will share requires a slight change for 2008 and R2 (again related to the file target name).
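For example, statements such as the following should cause deprecation events to fire (TestDB here is just a scratch database, and the specific event raised for each statement can vary by version):

USE TestDB;  -- scratch database for the demo
GO

-- The text, ntext, and image data types are deprecated
CREATE TABLE dbo.DeprecationDemo
(
    DemoID  int IDENTITY(1, 1) NOT NULL
    , Notes text NULL
);
GO

-- Compatibility views such as sysobjects are deprecated
SELECT TOP 10 name, type
    FROM sysobjects;
GO

-- SET ROWCOUNT used with data modification statements is deprecated
SET ROWCOUNT 1;
DELETE FROM dbo.DeprecationDemo;
SET ROWCOUNT 0;
GO

DROP TABLE dbo.DeprecationDemo;
GO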

Now to take a peek at the data with this next script.
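A parse script along these lines reads the session files and shreds the event payload. The path matches the file name used in the session scripts above, and the payload fields (feature_id, feature, message) are the columns I recall these events exposing, so verify them against sys.dm_xe_object_columns on your instance. For 2008/R2, supply the .xem metadata file as the second argument to the function.

DECLARE @path nvarchar(260) = N'C:\Database\XE\AuditDeprecatedUses*.xel';

SELECT event_data.value('(event/@name)[1]', 'varchar(50)')                                     AS event_name
        , event_data.value('(event/@timestamp)[1]', 'datetime2')                               AS event_time
        , event_data.value('(event/data[@name="feature_id"]/value)[1]', 'int')                 AS feature_id
        , event_data.value('(event/data[@name="feature"]/value)[1]', 'varchar(256)')           AS feature
        , event_data.value('(event/data[@name="message"]/value)[1]', 'varchar(max)')           AS message
        , event_data.value('(event/action[@name="database_name"]/value)[1]', 'varchar(128)')   AS database_name
        , event_data.value('(event/action[@name="client_hostname"]/value)[1]', 'varchar(128)') AS client_hostname
        , event_data.value('(event/action[@name="client_app_name"]/value)[1]', 'varchar(256)') AS client_app_name
        , event_data.value('(event/action[@name="username"]/value)[1]', 'varchar(128)')        AS username
        , event_data.value('(event/action[@name="sql_text"]/value)[1]', 'varchar(max)')        AS sql_text
    FROM
    (   SELECT CAST(event_data AS xml) AS event_data
            FROM sys.fn_xe_file_target_read_file(@path, NULL, NULL, NULL)
    ) AS tab
    ORDER BY event_time;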

Now, at long last, I can see what kind of data I am generating (the results are really wide, so I am just posting a snip).

[Screenshot: sample deprecation event results]


With this, I can see the feature_id along with the description and even the tsql that generated the event. What you don’t see in this is that I also trap the source machine and the user name. If there is an application name included in the connection string, I also trap that. These pieces of data can prove critical to efficiently troubleshooting and finding the source of these events.

From here, one might wish to explore all of the events generated from this session in order to ensure the environment is properly prepared for upgrade. Most tools do not evaluate the code thoroughly enough to trap all of these events. Instead, they take a cursory look through stored procedures or at the data types. As we all should know, not every piece of SQL code is actually stored in the database, nor is it necessarily cached at the time of analysis. This is the type of thing that requires a long-running trace to prove that you are that rockstar DBA.

Conclusion

In the article today, I have shown how it is possible to see the deprecation alerts that may be generated in your environment. This data is what can help set you apart as a rockstar when it comes time for that migration. If you have yet to read my series on Extended Events, I highly recommend it. You can find that series here.

Seattle SQL Pro Workshop 2016

Categories: News, Professional, SSC
Published on: October 23, 2016


You may be aware of an event that some friends and I are putting together during the week of PASS Summit 2016. I have listed the event details within the EventBrite page here.

As we near the actual event, I really need to get the schedule published (epic fail in getting it out sooner).

So the entire point of this post is to publish the schedule and have a landing page for it during the event.

Session       | Start    | Duration (min) | Presenter       | Topic
Registration  | 8:30 AM  |                | All             |
Intro/Welcome | 9:00 AM  | 10             | Jason Brimhall  |
1             | 9:10 AM  | 60             | Grant Fritchey  | Azure with RG Data Platform Studio
Break         | 10:10 AM | 5              |                 |
2             | 10:15 AM | 60             | Tjay Belt       | PowerBI from a DBA
Break         | 11:15 AM | 5              |                 |
3             | 11:20 AM | 60             | Wayne Sheffield | SQL 2016 and Temporal Data
Lunch         | 12:20 PM | 60             |                 | Networking / RG
4             | 1:20 PM  | 60             | Chad Crawford   | Impact Analysis – DB Change Impact of that Change
Break         | 2:20 PM  | 5              |                 |
5             | 2:25 PM  | 60             | Gail Shaw       | Why are we Waiting?
Break         | 3:25 PM  | 5              |                 |
6             | 3:30 PM  | 60             | Jason Brimhall  | XEvent Lessons Learned from the Day
Wrap          | 4:30 PM  | 30             |                 | Swag and Thank You
END           | 5:00 PM  |                |                 | Cleanup

Database Dropped

What do you do when a developer comes to you and asks, "Where did the database go?  The database was there one minute, and the next it was not."  Only one thing could be worse than the feeling of losing a database on your watch, and that would be losing a production database.  It's like magic: it's there, and then it disappears.  To compound the issue, when asking people if they know what might have happened, all will typically deny, deny, deny.

What do you do when you run into that missing database situation and the inevitable denial that will ensue?  This is when an audit can save the day.  Through an audit, you can discover who dropped the database and when it happened.  Then you have hard data to take back to the team to again ask what happened.  Taking the info from a previous article of mine, we can alter the script I published there and re-use it for our needs here.
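A lightly altered version of that script might look like the following; it locates the default trace file from sys.traces and then filters the trace data down to database create and drop events.

DECLARE @path nvarchar(260);

-- Find the file currently in use by the default trace
SELECT @path = path
    FROM sys.traces
    WHERE is_default = 1;

SELECT tt.DatabaseName
        , tt.StartTime
        , tt.LoginName
        , tt.HostName
        , tt.ApplicationName
        , CASE tt.EventClass
            WHEN 46 THEN 'Object:Created'
            WHEN 47 THEN 'Object:Deleted'
          END AS EventName
    FROM sys.fn_trace_gettable(@path, DEFAULT) tt
    WHERE tt.ObjectType = 16964          -- 16964 = Database
        AND tt.EventClass IN ( 46, 47 )  -- create and drop events
    ORDER BY tt.StartTime DESC;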

This script will now query the default trace to determine when a database was dropped or created.  I am limiting this result set through the use of this filter: ObjectType = 16964.  In addition to that, I have also trimmed the result set down to just look for drop or create events.

This is the type of information that is already available within the default trace.  What if you wished to not be entirely dependent on the default trace for that information?  As luck would have it, you don't need to be solely dependent on the default trace.  Instead, you can use the robust tool called Extended Events.  If you would like to be able to take advantage of Extended Events to track this information, I recommend you read my follow-up article here.

This has been a republication of my original content first posted here.
