Who2 Article Published

Comments: No Comments
Published on: June 22, 2010

I just had my second article published on June 14, 2010 at SQLServerCentral.  I will be posting a follow-up to that article in the near future.  The follow-up is related to the discussion in the thread for the article.  The article is titled What, Where, When, How, Who2, and the discussion on it can be found here.  In the discussion there is a note about an error that some people are seeing.  There is also a note about a different proc (Who is Active) written by Adam Machanic (Blog | Twitter).  In the follow-up I will be looking specifically at that error and comparing what Adam has done to resolve it (since his proc handles it), with full attribution 😉.

Declare Scalar Variable

Comments: 2 Comments
Published on: June 22, 2010

Lately I have been going through a bunch of maintenance-style scripts and fixing them.  Along with fixing them, I have also been updating them to work more efficiently and to work in SQL 2008.  Most have been corrected and updated.  The most recent script I have been looking at is supposed to back up databases and delete old database backups from the file share, based on parameters passed to the stored procedures from a table.  The backups are working as expected; the deletion of files is not working so well.

This will be just a short entry on some of the frustration involved with getting this script to work properly.  That frustration revolves around the title of the post.

In the Beginning…

The procedure that runs the backups is a modularized script.  The basic workings depend on bit operations and bit math, which lets me store multiple options for the backup job within a single field.  Some of these options control compression, verification, and deletion of old backup files.  By performing the bit math on the values stored in that field, I knew that my settings were correct and that the various operations were supposed to be happening.  The logic inside the parent procedure that calls the delete module was also correct, so I figured the problem had to be inside that module.  It is in the delete module that we will pick up.
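As a rough illustration of that kind of bit math (the flag values here are hypothetical, not the actual values the procedure stores):

[codesyntax lang="tsql"]

-- Hypothetical option flags packed into a single integer value
DECLARE @BackupOptions INT;
SET @BackupOptions = 6;    -- e.g. 1 = compress, 2 = verify, 4 = delete old files

-- Bitwise AND tells us whether a given option is enabled
IF @BackupOptions & 4 = 4
	PRINT 'Delete of old backup files is enabled for this job';

IF @BackupOptions & 1 = 1
	PRINT 'Backup compression is enabled for this job';

[/codesyntax]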

Inside this delete module we have various operations to determine the OS version, loops to get file details and move that information from one temp table to another, and ultimately a delete statement.  The delete statement is run iteratively for each file that matches certain criteria (most notably that the file is older than the specified retention).  Each action is then logged to a log table.  Here is a snapshot of what the script does to find the file information.

[codesyntax lang=”tsql”]

[/codesyntax]
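A rough sketch of that style of file gathering, using hypothetical paths and object names rather than the procedure's actual ones, would be something like the following:

[codesyntax lang="tsql"]

-- Sketch only: shell out for a directory listing and capture it in a temp table
-- (assumes xp_cmdshell is enabled on the instance)
CREATE TABLE #FileList (FileDetail VARCHAR(1000) NULL);

DECLARE @SourceDir    VARCHAR(256),
        @SourceDirFOR VARCHAR(512);

SET @SourceDir    = 'D:\Backups\';                       -- hypothetical path
SET @SourceDirFOR = 'dir "' + @SourceDir + '" /b /o:d';  -- command string built from the variable

INSERT INTO #FileList (FileDetail)
EXEC master.dbo.xp_cmdshell @SourceDirFOR;

[/codesyntax]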

I didn’t much like this as it was originally written but copied it to a new query window so that I could debug it outside of the proc.  When I copied it, I ended up with something like the following:

[codesyntax lang=”tsql”]

[/codesyntax]

What the Heck¿

I tried running that snippet of code and failed miserably.  Why?  I started receiving an error message and was stumped.  The error was that I must declare the scalar variable @SourceDirFOR.  But I had that variable declared.  The size was appropriate, and I could add a print statement and see that the variable was getting a value assigned to it.  So the problem had to be inside the variable and the use of the FOR loop there.  In that case, I would just change it up and simplify this process a bit.  So let's try the following in lieu of the SourceDirFOR stuff.

[codesyntax lang=”tsql”]

[/codesyntax]
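The simplified version leaned on xp_dirtree instead; a minimal sketch of that pattern (hypothetical path and table names) looks like this:

[codesyntax lang="tsql"]

-- Sketch only: xp_dirtree-based file enumeration
DECLARE @SourceDir VARCHAR(256);
SET @SourceDir = 'D:\Backups\';                  -- hypothetical path

CREATE TABLE #DirTree
(
	SubDirectory VARCHAR(512),
	Depth        INT,
	IsFile       BIT
);

-- Depth of 1, and the final parameter of 1 asks for files as well as directories
INSERT INTO #DirTree (SubDirectory, Depth, IsFile)
EXEC master.sys.xp_dirtree @SourceDir, 1, 1;

SELECT SubDirectory
FROM #DirTree
WHERE IsFile = 1;

[/codesyntax]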

Alright, I was still getting this error message.  Some research and frustration later, I had checked the collation settings and checked to see what others have done.  None of it seemed to match.  What was going on?  (Mind you, this was being done while tired and needing a break.)  Finally I came across something in a forum (by Jeff Moden) that was similar to what I was doing, so I checked the code provided there (up to the point of running xp_dirtree) and ensured case similarity (though collation had been checked and a case-insensitive collation is in use).  Running that code yielded exactly the results I wanted.

What was the difference?  I compared the xp_dirtree command to mine, the table creation to mine, and the variable declaration to mine.  It was all the same, yet I was still getting different results.  This, by the way, was being done on the same server in different connections.  The SQL Server version is 2008 SP1 and the OS is Windows Server 2008 R2.  Finally, I resorted to verifying the differences line by line.  Here is the difference:

[codesyntax lang=”tsql”]

[/codesyntax]
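The effect is easy to reproduce: GO is a batch separator, so any variable declared before it no longer exists in the batch that follows.  For example:

[codesyntax lang="tsql"]

DECLARE @SourceDir VARCHAR(256);
SET @SourceDir = 'D:\Backups\';   -- hypothetical path
GO  -- the batch ends here, and @SourceDir goes out of scope

SELECT @SourceDir;  -- fails: Must declare the scalar variable "@SourceDir"

[/codesyntax]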

Conclusion

Do you see it?  One little GO statement was the cause of all of my scalar variable problems.  I don't know how I ended up with a GO statement in my script in that spot, but I overlooked it several times.  That is a really simple fix.  I'm fine with the result, though: the script works better now and I prefer the shorter xp_dirtree method.  The moral is that you should batch your statements, but you need to be careful about where you place your ";" or GO statements.

24 HOP Available

Comments: 1 Comment
Published on: June 16, 2010

Just a quick note.  PASS has made the 24 Hours of PASS available via streaming video.  You can check it out at 24 Hours of PASS.  There is a variety of topics and sessions from which to choose.

Enjoy

June S3OLV Wrapup

Categories: News, Professional
Comments: No Comments
Published on: June 11, 2010

Last night we had our monthly Chapter User group meeting in affiliation with PASS.  Honestly, the experience was quite humbling and embarrassing.  What could one do though given the circumstances?  You may be wondering what I am talking about right now.  Let me explain.

I had arranged for a couple of vendor presentations.  One of the vendors was Fusion IO and the other was a company from Utah called Kynetx.  Both were coming in from out of town to present to us.  I was the point of contact, and one had even shipped some material to me in preparation for the meeting.  I received the box of materials the day before and all was set.  Well, sometimes things just don't happen as they are planned.

Despite my efforts, I had to be at work in support of a current project and some production issues.  I was in an area where cell phone reception is inconsistent and the noise pollution is horrendous.  With those factors in play, I did not catch any of the calls the vendors placed to me.  To top it off, by the time I was able to get to the phone calls, the presenters were 20 minutes behind schedule and were ready to start by themselves, without any real introduction.

I arrived in the middle of the first presentation by Fusion IO and we were able to get the material handed out.  Some recovery, but embarrassment nonetheless.  At least the few of us in attendance got to hear some good stuff about some good products.  One person even won an iPod nano in a giveaway.

The presentations were great.  I would recommend that people check out Fusion IO and Kynetx and see what they have to offer.

Data Compression

We have come to that time of month again – blog party.  This time, the party is hosted by Jorge Segarra (aka SQLChicken) (Blog | Twitter).  The rules have also changed this month, though ever so slightly.  Now, we must insert a pic into our blog post rather than include the TSQL Tuesday in our blog title.  BTW, this month we are discussing features in SQL 2008 (supposed to be R2, but I am only discussing something that came out with 2008).

Pick your Poison…err Feature.

The feature that I have chosen is compression.  In SQL 2008 we have the option to compress backups as well as compress the data.  The data can be compressed via two methods as well.  I will just be discussing my experience with page-level compression, and will touch lightly on the differences between row-level and page-level compression.
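For reference, both flavors are applied through the same ALTER TABLE syntax; a minimal example against a hypothetical table:

[codesyntax lang="tsql"]

-- Page-level compression (includes row compression plus prefix/dictionary compression)
ALTER TABLE dbo.SalesHistory REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Row-level compression uses the same syntax with a different option
ALTER TABLE dbo.SalesHistory REBUILD WITH (DATA_COMPRESSION = ROW);

[/codesyntax]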

History

Do you recall a nifty utility that Microsoft gave us back in the glory days called DoubleSpace, which was later renamed to DriveSpace?  I do!!  Oh, the memories are still painful.  That little compression agent was renamed due to the inaccuracy of the name: you didn't truly get double the space on your hard drive by using it.  I remember numerous support calls related to compression, and all of them turned out ugly.  Something about compressing your drive and then losing everything because you used a drive overlay to access a larger disk than the BIOS supported and then used DoubleSpace to compress it.  Or another good one was losing the DoubleSpace bin files from a compressed drive.  You could also see heavy fragmentation issues.  All of these have created a heavy bias in me against mass compression utilities.

Ch Ch Ch Changes

Despite my heavy bias against compression, I have always liked the ability to selectively compress certain files or folders.  The reason for compressing in this manner, for me, has largely been archival.  There are several file compression utilities on the market for performing this.

How does that relate to database compression?  I see database compression, as offered with SQL 2008, to be more like these file compression utilities than DriveSpace.  Data compression in SQL 2008 is not an all or none implementation.  You get to pick and choose what gets compressed.  That is a big time bonus for me.

The Setup

After some research and having learned a bit about compression, I decided to test it out.  I have yet to test performance as I have only tested the disk savings that compressing could generate.  There is a good demonstration on performance by Jason Shadonix here, for those that are interested.  I will be baselining and testing performance at a later time – that is just a bit beyond the scope for this article.

I decided to use a database from our warehouse that currently resides on SQL 2000, on Windows 2003 32-bit.  I created a backup of that database and restored it to a test box running SQL 2008 on Windows 2008 R2 64-bit.  The starting database size was 164GB, and the database was left in SQL 2000 compatibility mode.  The selection criterion for tables to compress was any table larger than 1GB in size.  I used a script I showed in the table space series to determine which tables to target.  The script follows.

[codesyntax lang=”tsql”]

[/codesyntax]
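A comparable query (a sketch along the same lines rather than the exact script from that series) that dumps approximate table sizes into a temp table looks like this:

[codesyntax lang="tsql"]

-- Sketch: approximate used space per table, dumped into a temp table
SELECT  s.name AS SchemaName,
        t.name AS TableName,
        SUM(ps.used_page_count) * 8 / 1024.0 AS UsedSpaceMB
INTO    #TableSpace
FROM    sys.dm_db_partition_stats ps
        INNER JOIN sys.tables t
            ON t.object_id = ps.object_id
        INNER JOIN sys.schemas s
            ON s.schema_id = t.schema_id
GROUP BY s.name, t.name;

[/codesyntax]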

In this script, you will note that I am dumping the results into a temp table.  I wanted to be able to run a few different cases against the data to check results quickly without running the entire query over again.  Though, running this particular query would not take that long in most cases.  Consider the temp table a fail-safe to prevent long execution times of the query.

To determine which tables from the previous query I would target, I employed a script similar to the following.

[codesyntax lang=”tsql”]

[/codesyntax]
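A sketch of that idea, built on the hypothetical #TableSpace temp table above, filters to tables over 1GB and concatenates the compression statements:

[codesyntax lang="tsql"]

-- Sketch: pick the large tables and build the ALTER statements to run manually
SELECT  SchemaName,
        TableName,
        UsedSpaceMB,
        'ALTER TABLE ' + QUOTENAME(SchemaName) + '.' + QUOTENAME(TableName)
            + ' REBUILD WITH (DATA_COMPRESSION = PAGE);' AS CompressCmd
FROM    #TableSpace
WHERE   UsedSpaceMB > 1024
ORDER BY UsedSpaceMB DESC;

[/codesyntax]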

This is pretty straightforward.  I am only selecting those tables from the temp table that are larger than 1GB, and I am concatenating a string for each of those tables for manual execution.  I can now see which tables are "large" and compress them with page-level compression or not (note: the same sort of script would work for row-level compression).  After I was happy with the result set and which tables would be compressed, I proceeded with compressing the tables.  Recall that the database started at 164GB.  After compressing that handful of tables (30 of the 460 tables in the database), the used space was down to 84GB.  That is a reduction of roughly 49% in the space requirements for this database.  For me, that is a pleasing result.

What if…

Should you decide to run this process again, the tables that you have already compressed will be included in the result set.  If you proceed with running the table alter scripts again, you will not get any better compression, and the compression will remain in effect.  Alternatively, you could eliminate the tables that are already compressed by altering the script provided to also read from sys.partitions, as in the following script.

[codesyntax lang=”tsql”]

[/codesyntax]
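A sketch of that adjustment (again against the hypothetical #TableSpace table) checks sys.partitions and keeps only the tables that have no compression enabled:

[codesyntax lang="tsql"]

-- Sketch: exclude tables that already have row or page compression on any partition
SELECT  ts.SchemaName,
        ts.TableName,
        ts.UsedSpaceMB,
        'ALTER TABLE ' + QUOTENAME(ts.SchemaName) + '.' + QUOTENAME(ts.TableName)
            + ' REBUILD WITH (DATA_COMPRESSION = PAGE);' AS CompressCmd
FROM    #TableSpace ts
        INNER JOIN sys.schemas s
            ON s.name = ts.SchemaName
        INNER JOIN sys.tables t
            ON t.schema_id = s.schema_id
           AND t.name = ts.TableName
WHERE   ts.UsedSpaceMB > 1024
        AND NOT EXISTS ( SELECT 1
                         FROM sys.partitions p
                         WHERE p.object_id = t.object_id
                           AND p.data_compression <> 0 )  -- 0 = NONE
ORDER BY ts.UsedSpaceMB DESC;

[/codesyntax]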

Conclusion

Without hard and fast numbers concerning performance, a final conclusion cannot be made on whether or not to compress.  Having run a few dry runs to verify that processes still work, I am pleased with the performance (it seems faster, though I don't yet have the data to back that up).  Given the disk savings, and even with a stalemate on performance, I like the compression that is offered in SQL 2008.

June 2010 S3OLV Meeting

Categories: News, Professional
Comments: 1 Comment
Published on: June 7, 2010

The S3OLV monthly meeting is being held this Thursday.  We will be meeting at the same location and time as usual.  This month we have a couple of vendors coming to speak.  Fusion IO and Kynetx will both be in town giving presentations.  Come and enjoy the presentations and the Pizza.

Thursday June 10, 2010

The Learning Center at 777 Rainbow suite 250.

Defensive Database Programming Chapter 01

Today I am finally getting around to the review of the first chapter as promised.  I talked about the book a bit in the first post on the subject, which can be found here.  This book was written by Alex Kuznetsov (blog).

In this chapter, Alex discusses some basic defensive programming techniques.  That, in fact, is essentially the name of the chapter.  This is a pretty lengthy chapter – and with good reason.  Alex is laying the groundwork in this chapter with some basic samples and pretty much giving an introduction to the book.  The code samples are straightforward and designed to illustrate his points quite well.

During my current reading, a good takeaway from this chapter is the discussion of SET ROWCOUNT.  For me, it serves as a prime reminder of its deprecation status as of SQL Server 2008.  If you are using SET ROWCOUNT, stop and change your practices.  Alex suggests using TOP in lieu of SET ROWCOUNT.
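As a quick illustration of that suggestion (my own example against a hypothetical table, not one taken from the book):

[codesyntax lang="tsql"]

-- Older pattern: limit affected rows with SET ROWCOUNT (deprecated for data modification)
SET ROWCOUNT 100;
DELETE FROM dbo.StagingRows WHERE ProcessedDate < '20100101';
SET ROWCOUNT 0;

-- Suggested pattern: use TOP instead
DELETE TOP (100)
FROM dbo.StagingRows
WHERE ProcessedDate < '20100101';

[/codesyntax]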

With each case that Alex presents as a potential problem when building your SQL code, he offers up a solution on how to avoid that problem.  The composition of using problems and providing solutions is quite handy.

And in Alex’s own words (Chapter 1 Summary, Page 59)

“It is vital that you understand and document the assumptions that underpin your implementation, test them to ensure their validity, and eliminate them if they are not.”

SQLBlogs.Net

Categories: The Corner dep
Comments: No Comments
Published on: June 4, 2010

After the articles concerning the content on SQLBlogs.net, I have worked with Steinar Andersen to help get some stuff going on his site again.  Steinar has developed some rules for syndication for his site.  I have also created a Feed from my site to syndicate articles to him.  This should prove to be a worthwhile venture.

Thanks to Steinar for working with me to get some content sent his way.  Steinar will be setting up the feed pull from his side and then you should be able to start seeing some of my content pulled to SQLBlogs.net.

DTS and 2008

Comments: No Comments
Published on: June 4, 2010

Have you ever been required to update your SQL Server environment and needed to test it?  I know, rhetorical question.  I am currently in such a process.  We have the entire gamut of project killers in play as well: short deadlines, requirements to maintain current technologies (i.e. DTS) in the new environment, a push to consolidate rack space (OK, not really a project killer), and the wrong parts and equipment being shipped.  Through all of this we must test two different versions of the operating system to determine which environment to build the new servers on.  To accomplish this we had two test servers set up with SQL 2008, each on a different platform.  One is SQL 2008 (32-bit) and the other is SQL 2008 R2 (64-bit).  The testing of these two environments is explicitly to determine that both will work appropriately and produce desirable results, particularly in the realm of our DTS packages.

I am not going to go into details on how to test all of that.  Rather, I want to discuss a quick way to get such environments ready to go without the need to manipulate each of the packages to redirect them to entirely new servers.  Remember, the idea is to test these production packages in a test environment and ensure they will function properly, and the only way to do that is to mimic the environment.  When you create new test servers, you could be required to change log locations, connection strings, and possibly some queries inside the packages in order to make them work in the test environment.  If the production boxes were no longer in play and all we had were these new servers, the easiest solution by far would be to create a CNAME entry in DNS for the old server name and point it at the new server.  That does not work when you must test while the production server still exists.  There are other options as well, such as an isolated test network.  If that sort of environment exists, by all means use it.  We have neither the time nor the equipment to create such a setup (maybe someday; our dev environment is not a network separate from the production network either).

What to Do?

The first thing that came to mind to try to cover all aspects of this type of situation was to alter the hosts file on the servers.  In Server 2008 R2 that is a bit different to do, due to the need to launch Notepad in administrator mode first.  Once I had the hosts file on each of the servers altered to redirect to the local box whenever the prod server name was encountered, I set off to begin testing.  It did not take long to discover that this would not work.  When trying to create a connection in SSMS, an error was thrown in relation to the user being from an untrusted domain.  Well, that wasn't possible, since the user was from a legitimate domain and there is only one domain in play.  File shares were redirected as desired; SSMS just did not want to play.

Since that was a failure, the next idea I had was to create an alias on each of the servers and see if that worked any better.

Notice in this screen from the Server 2008 R2 environment (not SQL 2008 R2) that there are two client configuration sections.  In order to create an alias for use by SSMS, the 32-bit version must be configured.  Once this alias is in place, one is able to reference the alias name in SSMS and be redirected right back to the local databases.  This is progress!  When SQL Agent attempts to connect to a server name encountered in a connection (database) string in a DTS package, it will also use the alias.  This is real progress!

This does not cure all things for testing, though.  The alias and the hosts reconfiguration don't work in conjunction with one another.  Thus, if any logs are being written out to the file system, they will be written out to the actual server and not the aliased server.  This is true of logs attached to job steps and logs from within the DTS package.  So, if you are doing either of those things, some job and package editing may have to take place.  For the logs being written from job steps, I recommend not writing to the file system and instead writing the output to a table.  Writing to a table will also significantly reduce the changes required in packages and their Agent jobs, which will help you get to testing more quickly.

A final note about the aliases: if you try to change the alias in the SQL Native Client 10.0 configuration section, you will not get the desired results.  Be careful about where you make these sorts of changes.
