Scaring a Database Near You

Comments: No Comments
Published on: October 31, 2013

r2

Something I have a hard time bypassing is a good R2 unit.  I have R2 units in so many different forms, including a standing R2 cake one year for my birthday.  So when I came across this R2 unit, I just had to share it.

That is a pumpkin carved into the resemblance of R2-D2.  I think it is a mighty fine job too.  It’s amazing how many good Star Wars related pumpkin carvings there are out there.  You probably wouldn’t have too difficult a time finding three or four hundred if you tried a little google-fu.

Each year I try to have something for the Halloween holiday, such as this one or this one.  I failed to provide something in 2012, and this gets me back on the right track.

Despite how easy it is to find haunting Halloween effects related to SQL Server, I am amazed at how few people have even heard of “Halloween Protection,” which stems from the “Halloween Problem.”

I am not going to dive into the problem or its protection.  I think that has been covered plenty, and quite masterfully, by Paul White (blog | twitter).  I recommend that you read his four-part series on the topic starting here.

With all of the cosplay going on here in the States, I find some of the scarier things to be about stuff I have previously fixed, or about which I have written or tweeted, or all of the above.

Take for instance this article about the blatant disregard by some vendors and clients in regards to security.  I still can’t figure out why the public role would ever need to be dbo for a database – at least not a legitimate reason.

Or take the recent time I tweeted about a cursor that I fixed.  I took that scary cursor down from a 30+ hour run time to a mere 50 seconds.  Here is a segment of the execution plan (the plan is roughly 4 MB in size, to give a little scale) zoomed out to 1/5th.

optimized_segment

 

The query was much uglier than that originally.  Imagine that beast looping through on your server for 30 hours, and that is not even the entire thing.  It is little wonder that things started to drag on the server.

Another scary item I like is the effect of implicit conversions.  That is a topic that can be viewed pretty easily through the use of google-fu.  Here is a short demo on the implications of implicit conversions.

[codesyntax lang="tsql"]

[/codesyntax]

In this demo I have created three temp tables.  Each is pretty simple in nature and each receives 10,000 records.  The insert statement just inserts an integer into each field of each table through the while loop.  Notice that I intentionally named a column in #T3 SomeReal even though its data type is NVARCHAR.  This underscores a pet peeve of mine that I have seen over and over again: naming a column after a data type when the data type doesn’t even match.
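Since the code block above did not survive, here is a sketch of what the demo might look like, reconstructed from the description (all table, column, and variable names are assumptions on my part):

[codesyntax lang="tsql"]
-- Reconstructed sketch of the demo described above (names assumed)
SET NOCOUNT ON;

CREATE TABLE #T1 (SomeInt    INT);
CREATE TABLE #T2 (SomeBigInt BIGINT);
CREATE TABLE #T3 (SomeReal   NVARCHAR(10));  -- pet peeve: named for a type it isn't

DECLARE @i INT = 1;
WHILE @i <= 10000
BEGIN
    INSERT INTO #T1 (SomeInt)    VALUES (@i);
    INSERT INTO #T2 (SomeBigInt) VALUES (@i);
    INSERT INTO #T3 (SomeReal)   VALUES (@i);  -- INT silently converted to NVARCHAR
    SET @i += 1;
END;

SET STATISTICS TIME ON;

-- No implicit conversion: both sides are integer types
SELECT COUNT(*)
FROM #T1 AS t1
JOIN #T2 AS t2 ON t1.SomeInt = t2.SomeBigInt;

-- Implicit conversion: SomeReal must be converted to INT on every row,
-- which also defeats any useful seek on #T3
SELECT COUNT(*)
FROM #T1 AS t1
JOIN #T3 AS t3 ON t1.SomeInt = t3.SomeReal;

SET STATISTICS TIME OFF;
[/codesyntax]

Comparing the two SELECT statements in the execution plan should show a CONVERT_IMPLICIT on the join predicate of the second query.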

When this query runs, I get the following timing results.

timing

 

The thing that stands out to me is the huge difference in time between the conversion-free query and the query replete with an implicit conversion.  The implicit conversion query

grim

was about 930 times slower than the query free of implicit conversions.  Granted that query was against a cold cache, so let’s see what happens to an average of five runs each against a warm cache.

With a warm cache I see an average of 51ms for the conversion-free query.  On the other hand, the implicit conversion query runs at an average of 84,525ms.  That equates to about 1,644 times slower.

Sure, this was a contrived example.  But keep in mind the table sizes, the data types, and the number of records in each table.  Had this been a more true-to-life example with larger tables and millions of records, the implicit conversions could make the query far more devastating.  Let’s just call it the grim reaper of your database. (Grim Reaper from www.mysticalpassage.com/grim_reaper.html)

With these horrifying things haunting your database, I leave you with this new hope as you battle the dark side and the grim reaper lurking in your database.

swbattle

To DBA or Not to DBA…

Categories: News, Professional, SSC
Comments: No Comments
Published on: October 30, 2013

In my day-to-day operations I have the opportunity to work with people in various capacities where data is concerned.  Sometimes it is in the capacity of a mentor, sometimes a consultant, and sometimes just the dude that fixes the problem.

I enjoy working as a database professional.  There may be times when I want to scream or yell or pull out my teeth and hair.  Then there are times when I just bounce off the walls with joy and pleasure.  Some may call that a manic-depressive disorder.  They just don’t understand the true life of a data professional.

Reminiscing

In becoming a data professional, I took the long route to get where I am.  I made the decision to work with and learn SQL 17 years ago – precisely because I viewed it as a really difficult thing to learn, and I wanted that challenge.  Then again, back then I also enjoyed the challenge of learning to configure Cisco routers.

stay-tuned

 

This has been a short teaser for an article to be published on Tuesday, November 26, 2013 as part of a community project.  Please return then to read the rest of the article, along with the other articles from the project.

Summit 2013 – Part the First

Categories: News, Professional, SSC
Comments: No Comments
Published on: October 16, 2013

While this is not technically the first day of Summit, since the official start is yet to come, here are my thoughts on Sunday and Monday of PASS Summit 2013.

For many of us the week began Sunday evening.  For many others, the start was not until Tuesday.  Others still might have started Monday or Wednesday.  My first day of Summit 2013 was actually last week, as I began my travels to the East coast the Friday preceding SQL Saturday in Charleston, SC.

So, I want to share those experiences as a prelude to the events in Charlotte, NC.

sqlsat227_web

The event in Charleston was a first time event in more than one way.  This was the first time that SQL Saturday had ever been presented in Charleston.  It was also the first time that the organizer had ever a) attended a SQL Saturday and b) organized a SQL Saturday.

The event was successful, and that success can be attributed to many factors.  One, I think, was the quality of the presenters selected.  Another big factor was the responsiveness of the venue and the volunteers in resolving issues big or small, along with the volunteers’ general helpfulness.  And finally, the attendees were FANTASTIC.

One big issue that we ran into affected all presenters in the afternoon for one of the rooms.  The projector died!  I just so happened to have my projector with me.  When it became apparent that nobody would be able to get the projector to work, we hooked up my projector.  Whether it was a Surface tablet or a normal laptop, we got it working.  We had different connectors and just made things work.

As an attendee, I took the opportunity to see sessions by Andy Warren, Laerte Junior, Grant Fritchey, and David Klee.

As a presenter, I thoroughly enjoyed presenting on compression.  Even with Andy Warren (who is a really really good presenter) sitting in the room trying to offer distractions.

I also enjoyed taking part in the two different QA panels in two different rooms, as two of the three afternoon presenters no-showed.  The QA panels were a riot.  In one room we had Andy Warren and Shannon Lowder.  In the other room we had David Klee, Steve Jones, Mike Wells and Grant Fritchey.  I tried to split time between the two because there were different topics and questions in each.  The room with Grant, Steve et al ended up being largely entertaining as well as informative.  Yes, Grant is still preaching NO PANTS (even three days later).

From there, I rode with Wayne Sheffield back to Charlotte to start the actual events of Summit – with a few left hand detours on I-26.  You’ll have to tweet Wayne (blog | twitter) to get more details on the Charlotte motor speedway (I-26 and left hand turns).

Sunday was a slow day filled with mingling and talking to people who I haven’t seen in months or a year or so.  Combine that with the opportunity to network with people I have never met, and then those with whom I have only virtually met – and it was a good day overall.

Monday was the day to really start diving into the deep end of Summit 2013.  This is the day that I was supposed to get to “Color with Crayons” during the precon by Paul White. I am still waiting for my pack of crayons.  I know that was all just said in jest by Paul on twitter.  The session was an advanced look into the Optimizer and Execution Plans.  The session did not disappoint.  I have a lot of playing to do with the stuff learned from that precon event.

Monday evening was filled with a networking party where I met soooooo many of the SQLFamily with whom I have frequently chatted and talked.  That event was worth the price of admission and then some (cost of the meal ;) ).  It was shortly followed by a quick jaunt to the Friends of RedGate gathering a few blocks up the street, which was another fantastic opportunity to meet and greet more SQLFamily.

Tuesday is a bit of a different story.  Some people continued on with the precons.  Some people had various meetings to attend.  I was in the latter group.  I had meetings about SQLSaturday, volunteering, and a meeting for chapter leaders.

Tuesday was capped with the Quiz Bowl that seemed to be over before it even started.  I think a good majority didn’t even know it was happening when it happened or that it had completed when it did.  Many were busy socializing or in a different part of the Welcome Reception where you couldn’t hear the announcements.

After the welcome reception (where the Quiz Bowl happened), I spent my evening at a bowling alley called Strike City.  There was an event for volunteers and another party hosted by Linchpin People.  The evening was filled with great networking opportunities and was time well spent.

Carolina Whistle Stops

Comments: No Comments
Published on: October 10, 2013

The SQL Crazy train continues chugging along this month.  The next two whistle stops are back to back in the Carolinas.

The first stop is coming up in just a few short hours.  I will be stopping in for SQL Saturday in Charleston, South Carolina, where I have been selected to present on the topic of compression.

sqlsat227_web

 

The lineup is pretty rock solid for this event.  And there’s nothing like extending a bit of training to encompass a SQL Saturday if you already intended to be in town for the next whistle stop – Summit 2013.  I’ll talk about that in a bit.

There are a lot of speakers for this single day event that I would really like to see.  Looking at the schedule, I don’t know how I am going to pick which session to attend.  Every single time slot has at least two sessions that I would like to attend.  There are only 3 tracks!  At least this will give me practice in selecting which sessions to attend for Summit.

The next whistle stop is in Charlotte North Carolina.  Charlotte is the home of PASS Summit 2013.

logo_header

 

Even better is that this year, the summit really does have a train – of sorts.

(Image is a link to the actual source at SQL Sentry – the sponsors of the trolley.)

SQL Sentry has been kind enough to sponsor the SQL Sentry Shuttle to help Summit attendees get around and see some of Charlotte while in town.  You can read all about it here.

Yeah – I will be boarding that train/trolley.

I have a busy schedule coming up at Summit 2013.  But, I am looking forward to meeting people and talking about whatever may come up.

Will you be at any of these whistle stops?  If so, say hi.

Coast to Coast with SQL Saturday

Comments: No Comments
Published on: October 10, 2013

Last month I blogged about the SQL Crazy train making a whistle stop in Providence Rhode Island.  A couple of weeks prior to that, you may have read this article about the Salt Lake City event.

What you have not seen is a recap of either of those two events.  I just want to give a quick bit of feedback on both events at this time.

sqlsat246

 

The SLC event was held at the Adobe building down south in Utah County – just outside of Salt Lake City.  We’ll let it slide that it wasn’t even in the same county as the event name.

Overall, the event was good.  A lot of people attended.  The food was good (catered BBQ, with pizza in reserve in case there wasn’t enough BBQ).  The speakers were good, the facilities were mostly good, and the topics presented were good.

Above all of the good of the event comes the great stuff.  I particularly enjoyed the opportunity I had to chat with people like Kevin Boles and Argenis Fernandez.  I also had a great time mingling with people like Andre Dubois, Keith Tate, Aaron Cutshall, Jason Kassay and Reeves Smith.

I had met each of these people at one event or another and I had time to chat with them all over again.  It is always nice to reconnect with other community volunteers.

If you ever have the chance to attend the session about SysInternals Tools by Argenis, then DO IT!  I particularly enjoyed that session.

logo

 

Much like SQL Saturday in Salt Lake City, I enjoyed the Providence event.  I found myself bailing on sessions more in Providence than in SLC so I could talk tech.

I really enjoyed pulling off to a quiet corner to answer tech questions and to try and help resolve problems.  The questions were in part about my presentations, and in part had nothing to do with anything that I presented.

One thing that I thought worked rather well was the scheduling of my second session.  The session preceding mine was about a framework for SSIS.  My session was essentially about a framework for SSRS.

I left both of my sessions feeling that both audiences were engaged.  When I present, I monitor the group to try to determine whether they are engaged, and it seemed like the attendees were there and wanting more.

After the event, the speakers and volunteers gathered for a social evening at a local restaurant.  I thought that the concept worked out really well.  I have to admit that I was a bit skeptical at first with the layout.  I am happy to admit that the skepticism was squashed promptly.  The chance to meet with the volunteers and other speakers in a more informal gathering was ideal.  I enjoyed it even more than a formal dinner – in this particular occasion.  It is this kind of informal get together for speakers and volunteers to mix and socialize that I would be interested in seeing more frequently at SQL Saturdays.

T-SQL Tuesday #47: SWAG SWAG SWAG

Comments: No Comments
Published on: October 7, 2013

TSQL2sDay150x150

This month we probably have the easiest topic I have ever seen for a TSQL Tuesday blog party.  That isn’t a slight, but rather something of a good thing.

This is a very hectic time of year in the SQL community.  Add the usual stresses of work, family, and life, and few people have much free time.

It just so happens, that to start this month, I am trying once again to get back on the ball and blog more consistently.

Kendall Van Dyke (blog | twitter) has accepted the challenge of hosting the blog party this month.  You can read his invitation here.

The topic chosen is all about the SWAG.  This particular SWAG is the kind of freebies you might get from various SQL Activities.  This is not the SWAG that is associated with a Guess.

There are a lot of SWAG items that I have liked over the years.  I have received Buckyballs, stuffed animals (from Horton), stuffed cats (from SQLCAT), USB drives, and even a Lava Lamp or two.  The interesting thing is that SQLCAT has been rebranded (so probably no more plush SQL cats) and Buckyballs are extinct due to a sales ban by the CPSC.

I am not surprised by the extinction of Buckyballs.  They were one of the best geek SWAG items I ever got.  But I had to lock mine up (and not just in my office) because a certain child decided to find them (in my office) and swallow a few.  It just so happens to be the same child that sneezed an almond into her nose.  And yes, we did successfully retrieve all of the Buckyballs (four swallowed).

But none of that SWAG compares to the iPad that I won.  Of course, the children would say that it is their iPad.  At this point it serves a great purpose in trying to keep the children somewhat calm.

ipad


Transaction Log Management

Categories: Book Reviews, SSC
Comments: No Comments
Published on: October 7, 2013

I have had the distinct pleasure of being asked to review a book by RedGate.  The book is by a friend and respected community member, Gail Shaw (blog | twitter).  It is a Stairway-style book titled “SQL Server Transaction Log Management.”

sql-server-transaction-logs-200

 

I read this book with the intent to provide an in-depth analysis and technical review.  I was happy to take on the challenge of reviewing this book and happy with what I gleaned from the book.

Let’s start with my overall impression: this book is ideal for every junior DBA and every accidental DBA out in the workforce.  If I need to repeat that statement I will, or you can flash back and reread it as many times as you need.

My next observation about this book is that it seems obvious that it was written with a community undertone.  There are numerous references to community resources and individuals who are ever-present in the community.  That community tone can be an extension of RedGate as a whole.  Community appears to be fairly important to RedGate overall.

I enjoyed reading the book, and I enjoyed the demo scripts.  The demo scripts, which can be downloaded and sampled (see page 13 for code sample links), are an absolute must: they help illustrate the points that Gail and Tony are making.

I do have a minor critique, more along the lines of readability.  It may be a bit zealous, but I found it difficult to distinguish characters such as I and 1 due to the choice of font in the paragraph text.  The numeral 1 in paragraph text is a different font and looks like an I, versus the numeral 1 in the chapter headings and image text, which actually looks like a 1.  There are also a few cases where the font appears to change, making the text seem disjointed (an example of this font switching may be seen on pages 29 and 30).  Again, this is really a minor issue and nothing related to the technical content.

That said, something I like to do with books such as this is to take something of interest from each of the chapters.  So, here is a point of interest for each chapter.

Chapter 1

Many DBAs (part-time, accidental or even full time) along with Vendors (I am adding them to this group), do not backup their databases correctly.  Many in these groups “perform full backups on their databases, but they don’t perform transaction log backups” (page 21).
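The fix for that group is conceptually simple: pair the full backup with regular log backups.  A minimal sketch of the pattern might look like this (the database name and file paths are placeholders, not anything from the book):

[codesyntax lang="tsql"]
-- One full backup establishes the base of the log chain (names/paths assumed)
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB_full.bak'
WITH INIT, CHECKSUM;

-- Then log backups on a frequent schedule (e.g. every 15 minutes);
-- each one allows the log to truncate and extends the restore chain
BACKUP LOG [SalesDB]
TO DISK = N'D:\Backups\SalesDB_log.trn'
WITH CHECKSUM;
[/codesyntax]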

This book underscores that problem and helps show those groups how to implement transaction log backups and better manage their database transaction logs.

Chapter 2

Another item that many DBAs neglect is managing log file fragmentation.  “Log file fragmentation can degrade the performance of SQL Server processes that need to read the log” (page 41).  On some databases, I have seen this degradation cause delays of 20-30 minutes or worse when bringing a database online.  Do you want to take that risk?

Chapter 3

This chapter covers a topic that I see happening on a regular basis – even with “seasoned” DBAs.  That topic concerns databases that are in “pseudo-simple” mode: a database placed in the full recovery model without a full backup ever being taken.  This mode of operation is baffling to me.  I don’t quite understand why people place a database in full recovery model and then never take a full backup of it.  Until that first full backup is taken, the transaction log will continue to truncate as if the database were in the simple recovery model.

I also particularly enjoyed reading about some really easy ways to break a transaction log chain.  A few methods are listed, more as a caution against using them than as a how-to.  One such method is switching from full recovery to simple recovery to try to control a runaway log.

One method that wasn’t mentioned (at least not until Chapter 7) was backing up the transaction log to the NUL device.  Backing up to the NUL device is just as destructive to the log chain as any method mentioned in the book.

One thing this chapter does not delve into is the implications of breaking the log chain; by design, that is covered in Chapter 7.  One item that was not covered in great detail was log shipping.  Think of the implications for a mirrored or log-shipped database in the multi-terabyte range.  Do you want to be the one who has to copy a full backup cross-country to rebuild log shipping because you created a log backup to the NUL device?
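For illustration only, the NUL-device backup in question is a single statement.  It completes, reports success, and silently guts the restore chain (database name assumed):

[codesyntax lang="tsql"]
-- DO NOT run this against a database you care about: the log records are
-- written to the NUL device and discarded, so every subsequent log backup
-- depends on a backup file that does not exist.
BACKUP LOG [SalesDB] TO DISK = 'NUL';
[/codesyntax]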

Chapter 4

This chapter is summed up succinctly in the summary section of the chapter.  “If you wish to measure exposure to data loss in minutes rather than hours, don’t use SIMPLE model” (page 61).

Chapter 5

Have you ever heard the expression “measure twice, cut once”?  In this chapter we see a subtle reminder of the importance of double-checking these types of things.  Did you know that records in the backupset table can be manipulated?  Do you rely only on the results of queries against the backupset table to confirm that backups were taken?  That just might be a mistake.

This chapter shows the need to also query sys.database_recovery_status to confirm backups of the database log.  If you are not double-checking your log backups, please start doing so.
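A sketch of such a double-check might look like the following.  This is my own query, not one from the book; as I understand the DMV, a NULL last_log_backup_lsn for a database in full or bulk-logged recovery indicates no log backup since the last full backup (i.e. pseudo-simple):

[codesyntax lang="tsql"]
-- Flag databases claiming FULL/BULK_LOGGED recovery that have no log
-- backup anchored to the current log chain
SELECT d.name,
       d.recovery_model_desc,
       rs.last_log_backup_lsn
FROM sys.databases AS d
JOIN sys.database_recovery_status AS rs
    ON d.database_id = rs.database_id
WHERE d.recovery_model_desc <> 'SIMPLE'
  AND rs.last_log_backup_lsn IS NULL;
[/codesyntax]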

Chapter 6

This chapter covers the bulk-logged recovery model and some common uses for it.  Some concerns about using bulk-logged in various scenarios are also illustrated, along with a discussion of pros, cons, and best practice.  A major concern with bulk-logged is discussed, and for that you should read the book.

I particularly enjoyed the demo on space savings regarding the use of Bulk Logged in this chapter.  Have a run at the demo and enjoy what you learn from this chapter.

Chapter 7

Besides the script on page 141 for finding open-transaction information and the warnings about CDC, I found the section on mismanagement to be an absolute gem.
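I won’t reproduce the page-141 script here, but as a taste, the built-in starting points for hunting an open transaction that is pinning the log look something like this (database name assumed):

[codesyntax lang="tsql"]
-- Oldest active transaction in the database, with start time and SPID --
-- a common first step when the log refuses to truncate
DBCC OPENTRAN ('SalesDB');

-- Or via the transaction DMVs, joined back to the owning session
SELECT st.session_id,
       at.name,
       at.transaction_begin_time
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
    ON at.transaction_id = st.transaction_id
ORDER BY at.transaction_begin_time;
[/codesyntax]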

There is a lot of misinformation available on the internet, and misinformation concerning transaction logs can be quite dangerous.  This section discusses some of the more prominent problems that may be found through internet searches.  The debunking of some of these dangerous myths is a must-read.

Chapter 8

This chapter covers some of the golden rules of transaction log management.  A SQL Server database is NOT a set-it-and-forget-it type of venture.  By extension, a transaction log is even less of a set-it-and-forget-it undertaking.

One of the very first things debunked in this myth-busting chapter concerns the use of multiple transaction log files.  Simply put, don’t do it.  There is an edge case where adding a second transaction log file is helpful – but once the issue is resolved, the additional log file MUST be removed.

One thing you may find in your internet searches is the recommendation, deemed a best practice, to manage your transaction log growth explicitly.  This chapter discusses that and demonstrates the benefits of manually managing your transaction log.
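In practice, manual management usually means pre-sizing the log to its expected working size and setting a fixed growth increment instead of percentage autogrowth.  A sketch (the database, file name, and sizes are purely illustrative):

[codesyntax lang="tsql"]
-- Pre-size the log and grow in fixed chunks, so growth events are rare,
-- predictable, and create a sane number of VLFs
ALTER DATABASE [SalesDB]
MODIFY FILE (NAME = SalesDB_log, SIZE = 8GB, FILEGROWTH = 512MB);
[/codesyntax]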

In this chapter we also see a renewed discussion of log fragmentation, and more of the things that are affected by a highly fragmented transaction log.  To underscore the importance of properly managing your transaction log and the number of log fragments, there is a very good demo script.  Check it out.
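To see how fragmented a given log currently is, the usual quick check is to count the rows returned by DBCC LOGINFO – one row per virtual log file (VLF).  The command is undocumented, so the column list below reflects the SQL Server 2012-era output and is an assumption on my part:

[codesyntax lang="tsql"]
-- One row per VLF; thousands of rows means heavy log fragmentation
DBCC LOGINFO ('SalesDB');

-- Capture the output to count VLFs (column layout per SQL Server 2012)
CREATE TABLE #vlf (RecoveryUnitId INT, FileId INT, FileSize BIGINT,
                   StartOffset BIGINT, FSeqNo BIGINT, [Status] INT,
                   Parity INT, CreateLSN NUMERIC(38));
INSERT INTO #vlf EXEC ('DBCC LOGINFO (''SalesDB'')');
SELECT COUNT(*) AS vlf_count FROM #vlf;
[/codesyntax]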

Chapter 9

To top off all the great information, we conclude with something that is all too frequently overlooked by many (especially those in the groups defined earlier).  You must monitor your transaction log!

Conclusion

There is a lot of great information throughout this book.  There are also some great demo scripts throughout the book.

One final note concerns something that I often see vendors doing, which the book touches upon and warns against: storing backups of different types, or even of different databases, in the same backup file.  This is a practice that will almost certainly guarantee a larger-scale disaster.  What if one backup within that mass accumulation in the same file is corrupt?

I enjoyed this short book.  I recommend it be on every junior or accidental DBA’s bookshelf.  I’d also recommend it be in every employer’s library for use by all IT employees.




