March 2013 LV UG Meeting

Categories: News, Professional, SSC, SSSOLV
Comments: No Comments
Published on: March 11, 2013

Another month and we have another installment (meeting) for the Data Professionals of the Las Vegas area.

This month we will be getting back on schedule with the usual meeting time and place.  We had a slight divergence last month due to that heartsy holiday.

We are excited about some of the upcoming speakers and feedback we have been getting of late.  We are feverishly working to get some presenters on Big Data and Hadoop.  If you are in the market to present on these topics, ping me.

Here are some of the details for the meeting.

Presenting is Jason Kassay

Jason will be presenting to us on the topic of “The Accidental Programmer.”

Most people in the database world have heard the term “The Accidental DBA”: those programmers who have to work with databases. How about the other side of the coin? What about DBAs and Devs who have to write code or have to work closely with programmers? This presentation is a best practices guide for working with SQL Server in a .Net environment. You will learn how to recognize bad code that interacts with the database, how to track it down, and most importantly how to fix it. On top of that, you will also receive an introduction to object-oriented programming concepts such as data abstraction, encapsulation, tier architecture, and class objects so that you can better communicate with your programmers.


Jason has been a software developer for over 10 years and currently works at EZFacility, providing scheduling, management, full-service billing, and membership solutions for the sports, health, and fitness industries. He has worked primarily with .Net (C# and VB) and SQL Server, as well as HTML, CSS, and JavaScript. When he is not coding or spending time with his awesome family, you will find him either running (to train for the zombie apocalypse) or stopping hockey pucks as a goaltender.


LiveMeeting Information:

Attendee URL:
Meeting ID: DC769N



The meeting location has changed.  We will no longer be meeting at The Learning Center.  The new meeting location is M Staff Solutions & Training, 2620 Regatta Drive, Suite 102, Las Vegas, NV 89128.
We are also moving to meetup for meeting management. Please join us for free at

January S3OLV 2013

Comments: No Comments
Published on: January 9, 2013

I hope you are ready for some free SQL learning.  Coming up on Thursday, January 10th at 6:30 PM Pacific (GMT-8), we will have a stellar opportunity to do just that – Learn!

Edwin Sarmiento will be presenting for us at the upcoming monthly Las Vegas User Group meeting.  Edwin has chosen (well, he left it up to me to decide) to present on PowerShell.

Title: Windows PowerShell for the SQL Server DBA


Windows PowerShell is becoming the scripting language of choice for managing Microsoft servers and workstations. And while T-SQL is still the scripting language for DBAs to administer and manage SQL Server, there is plenty that can be done easily with Windows PowerShell. In this session, learn Windows PowerShell from the ground up and see how you can use it with SQL Server Management Objects (SMO) to administer and manage a SQL Server instance.

Here is the bonus behind all of this – we will be broadcasting the presentation to the virtual sphere.  Here are the details for joining the meeting via LiveMeeting.

Copy this address and paste it into your web browser:


Copy and paste the required information:

  • Meeting ID: JRW8JQ
  • Location:


On the Fifth Day…

Today we will have a quick change of pace.  This will be less about maintenance and more about internals.

Today’s topic is one of particular interest to me as well.  The topic is a small section of a presentation I like to give at User Groups and at SQL Saturdays.

Before we take this diversion from the under-carriage to something more related to the engine, let’s have a brief recap of the previous four days of pre-Christmas.

So far the gifts of pre-Christmas have included the following articles.

  1. Exercise for msdb – Day 4
  2. Backup, Job and Mail History Cleanup – 3rd Day
  3. Service Broker Out of Control – 2nd Day
  4. Maint Plan Logs – 1st Day

On the Fifth Day of pre-Christmas…

My DBA gave to me a much more compact database.  Surprisingly, it can be as little as 20% of the original size.

I know, I know.  I’ve heard it before, but this is not the compression of DoubleSpace and DriveSpace that we were given many years ago.  This really is much better.

And yes I have heard about the performance factor too.  That is a topic for another discussion.  As an aside, when it comes to performance, I always tell people that they must test for themselves because mileage will vary.

No, what I want to talk about is much different.  I want to talk about the CD array (at the page level) and the new data types specific to compression that you may encounter within the CD array.

CD Array Data Types

SQL Server introduces us to 13 data types that are used within the CD Array when Compression has been enabled.  Twelve of these data types can be seen when Row Compression has been enabled on an object.  The thirteenth data type is only applicable when page compression has been implemented.

There is no guarantee that any or all of these data types will be present on a page related to an object that has been compressed using either Row Compression or Page Compression.  If you want to find these data types on a compressed page, you may have to do a little hunting.

To demonstrate that these data types exist and that they can be found, I have a sample script.

[codesyntax lang=”tsql”]
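The script itself did not survive in this copy of the post. A minimal reconstruction, with hypothetical table, column, and database names chosen to match the CD array table below (the original script certainly differs in its details), might look like this:

```sql
-- Hypothetical reconstruction (the original demo script is not shown here).
-- Build a table with a mix of value sizes, row compress it, and then
-- locate a data page to examine with DBCC PAGE.
CREATE TABLE dbo.CDTypes (
    SomeNull  INT      NULL,
    SomeBit   INT      NOT NULL,  -- a zero value compresses to EMPTY
    Some1Byte INT      NOT NULL,
    Some2Byte INT      NOT NULL,
    Some4Byte INT      NOT NULL,
    Some5Byte BIGINT   NOT NULL,
    Some6Byte BIGINT   NOT NULL,
    SomeLong  CHAR(50) NOT NULL,
    SomeBit2  BIT      NOT NULL
);

INSERT INTO dbo.CDTypes
VALUES (NULL, 0, 200, 50000, 2000000000,
        549755813887, 140737488355327, REPLICATE('X', 50), 1);

ALTER TABLE dbo.CDTypes REBUILD WITH (DATA_COMPRESSION = ROW);

DBCC IND ('TestDB', 'dbo.CDTypes', -1);  -- note a page of type 1 (data page)
DBCC TRACEON (3604);                     -- send DBCC PAGE output to the client
DBCC PAGE ('TestDB', 1, 20392, 3);       -- substitute the page id you found
```

The values are sized so each column lands in a different CD array type (a one-byte value, a two-byte value, and so on).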


Now let’s take a look at the different data types, starting with the 12 available with Row Compression.

Type   Description       Column
0x00   NULL              SomeNull
0x01   EMPTY             SomeBit
0x02   ONE_BYTE_SHORT    Some1Byte
0x03   TWO_BYTE_SHORT    Some2Byte
0x05   FOUR_BYTE_SHORT   Some4Byte
0x06   FIVE_BYTE_SHORT   Some5Byte
0x07   SIX_BYTE_SHORT    Some6Byte
0x0a   LONG              SomeLong
0x0b   BIT_COLUMN        SomeBit2

If we look at page 20392 (yours will likely be different), we will find all of these data types present.  We will also note that this page should show (COMPRESSED) PRIMARY_RECORD – which indicates that the page is Row Compressed.  When hunting for these data types, it is a good idea to make sure the page is compressed first.  In the supplied table, you can see what data type matches to which column in the table we created via the script.  The table also provides a short description of what that data type represents (as you would see in the CD Array).

If we now want to explore and find the 13th data type, we need to look at the second table we created in the attached script – CDTypes2.  Notice that this table has been page compressed.  I even did that twice.  I did this to make sure the data was page compressed (occasionally when testing I could not easily find a page compressed page unless I page compressed a second time).
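That second step isn't shown in this copy either; a sketch of it, again with hypothetical names, could be:

```sql
-- Hypothetical sketch: build and page compress the second table, twice,
-- as described above.  Lots of duplicate values give the page compression
-- dictionary something to work with.
CREATE TABLE dbo.CDTypes2 (
    SomeID    INT      NOT NULL,
    SomeValue CHAR(50) NOT NULL
);

INSERT INTO dbo.CDTypes2
SELECT TOP (10000) 1, REPLICATE('A', 50)
FROM sys.all_columns a CROSS JOIN sys.all_columns b;

ALTER TABLE dbo.CDTypes2 REBUILD WITH (DATA_COMPRESSION = PAGE);
ALTER TABLE dbo.CDTypes2 REBUILD WITH (DATA_COMPRESSION = PAGE);

DBCC IND ('TestDB', 'dbo.CDTypes2', -1);  -- pick a data page to inspect
```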

Much the same as was done with Row Compression, we need to verify that a page is Page Compressed before searching up and down trying to find this 13th type.  To find a Page Compressed page, we need to look in the CompressionInfo section of the page for CI_HAS_DICTIONARY.  If this notation is present, the page is Page Compressed.

Here is a page snippet with the pertinent data type.

The CD array entry for Column 4 is what we are interested in here.  In this particular record, we see the thirteenth data type of 0x0c (ONE_BYTE_PAGE_SYMBOL).  This data type can appear for any of the columns in the CD array and can be different for each record on the page.  This is due to how Page Compression works (which is also a different discussion).

For this last example, I want to backtrack a bit and point out that I used page 24889 to query for the Symbol data type in the CD Array.  This page was different every time I re-ran the test script.  One thing that was consistent with how I set up the exercise is that the page compressed page containing this data type was always the second page of type 1 (data page) in the results from DBCC IND.


I hope this helped demystify compression in SQL Server – a little.  This is only one piece of compression but an interesting one in my opinion.  Use the provided script and play with it a little bit to get some familiarity.

Stay tuned for the 6th day of pre-Christmas.

Let’s Talk About Joins

Comments: 1 Comment
Published on: December 11, 2012

T-SQL Tuesday #37

This month, please join us in the T-SQL blog party that happens on the second Tuesday of the month.  This month it is hosted by Sebastian Meine (@sqlity).

Sebastian has a month-long blog series going this month, all about joins.  You can read all about that fun in his post titled A Join A Day – Introduction.


This is a good topic, and I pondered for a bit what to write this month.  I immediately went to a topic that I had on my to-blog list.  Unfortunately, I had already covered that topic once upon a time.  But with it being the first thing my mind went to when thinking of this topic, I feel it would be good to re-hash it.  I may change it up a bit, I may not – we’ll just have to see where this post takes us.  So jump on board the Crazy Train for a little Salt-N-Pepa remix – Let’s Talk About Joins.

Business Requirement

I have some tables that I need to query.  One of the tables has lookup information with a bitmask applied to the id field.  Another table references this table but the ids can be a direct match or an indirect match to the id of the lookup table.  For this case, they will always only differ by the value of 1 if the two values do not directly match.  No other match possibility is considered for this example.

Based on this need, the solution dictates some sort of bitwise math.  I have several examples of how a join could be written to accomplish this primary objective.  I am only going to show the possible ways of performing this.  In my environment these all yield the same results and the data is unique and large enough (4.2 million records).  I will compare performance of these different queries in a later post as I demonstrate a query tuning method to drive the query down from nine seconds to 100ms or less.  For the record, I would choose any of queries 5, 6, or 7 for this particular need based on plan cost and performance.


First up is the ANSI style INNER Join using addition in one of the conditions as well as an OR to fulfill the second portion of the business requirement.

[codesyntax lang=”tsql”]
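The query was stripped from this copy of the post; a sketch of the shape described, using hypothetical table and column names (only SourceID and RumorView appear in the original discussion), might be:

```sql
-- Hypothetical sketch: ANSI INNER JOIN using addition plus an OR to catch
-- both the direct match and the off-by-one bitmask match.
SELECT rv.RumorID, sl.SourceName
FROM dbo.RumorView rv
INNER JOIN dbo.SourceLookup sl
    ON  sl.SourceID = rv.SourceID
    OR  sl.SourceID = rv.SourceID + 1;
```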


This is probably the easiest to understand and it performs well enough.  Until running into this business requirement, I hadn’t considered putting an OR in the JOIN conditions.  But it makes sense considering that an AND can be used there.

Next is a NON-ANSI style of JOIN.

[codesyntax lang=”tsql”]
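Again the code is missing here; the NON-ANSI (comma-separated FROM) form of the same hypothetical query would be:

```sql
-- Hypothetical sketch: the NON-ANSI join form, with the join predicate
-- pushed into the WHERE clause.
SELECT rv.RumorID, sl.SourceName
FROM dbo.RumorView rv, dbo.SourceLookup sl
WHERE sl.SourceID = rv.SourceID
   OR sl.SourceID = rv.SourceID + 1;
```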


Through 2008 R2, this works just as well as the ANSI JOIN already shown.  I haven’t tested in SQL 2012 but I do know that the NON-ANSI syntax of *= (for example) no longer works.  I am not a big fan of this style JOIN because it is far too easy to end up with a Cartesian product.

Another type of JOIN that I like is the use of APPLY.

[codesyntax lang=”tsql”]
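With the original code absent, a sketch of the APPLY form (same hypothetical names as above) could be:

```sql
-- Hypothetical sketch: the same match expressed with CROSS APPLY.
SELECT rv.RumorID, sl.SourceName
FROM dbo.RumorView rv
CROSS APPLY (SELECT s.SourceName
             FROM dbo.SourceLookup s
             WHERE s.SourceID = rv.SourceID
                OR s.SourceID = rv.SourceID + 1) sl;
```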


This particular code segment is the equivalent of the first query shown.  This is the last in the set of using basic math and an OR in the JOIN conditions.  The remaining queries all rely on bitwise operations to perform the JOIN.  Again, until this particular need, I had never even considered using a bitwise operation in a JOIN.  First in this series is the NON-ANSI style JOIN.

[codesyntax lang=”tsql”]
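The missing query is described in the next paragraph (a COALESCE whose first value is a bitwise OR, with the OR applied on both sides of the equality); a hypothetical sketch consistent with that description:

```sql
-- Hypothetical sketch: NON-ANSI join using a bitwise OR on both sides of
-- the equality.  Since matching ids differ only in the low bit, OR-ing
-- both sides with 1 equalizes them.  The COALESCE is redundant here
-- (SourceID is not null) but is kept as a reference point, as noted below.
SELECT rv.RumorID, sl.SourceName
FROM dbo.RumorView rv, dbo.SourceLookup sl
WHERE (1 | sl.SourceID) = COALESCE((1 | rv.SourceID), rv.SourceID);
```

Note the parentheses: in T-SQL the bitwise `|` operator binds more loosely than `=`, so the OR expressions must be parenthesized.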


The big change here is in the where clause.  Notice the use of COALESCE and the first comparison value in that COALESCE.  This is called a BITWISE OR.  From MSDN: “The bits in the result are set to 1 if either or both bits (for the current bit being resolved) in the input expressions have a value of 1; if neither bit in the input expressions is 1, the bit in the result is set to 0.”

So I am comparing the bit values of 1 and the SourceID.  The SourceID from RumorView will create a match meeting the requirements put forth, thanks in large part to the BIT OR operation being performed on both sides of the equality in the WHERE clause.  It is also worth mentioning that the COALESCE is completely unnecessary in this query, but I am leaving it as a pseudo reference point for the performance tuning article that will be based on these same queries.

Next on tap is the CROSS Apply version.

[codesyntax lang=”tsql”]
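A hypothetical sketch of the CROSS APPLY version of the bitwise match:

```sql
-- Hypothetical sketch: the bitwise OR match wrapped in a CROSS APPLY.
SELECT rv.RumorID, sl.SourceName
FROM dbo.RumorView rv
CROSS APPLY (SELECT s.SourceName
             FROM dbo.SourceLookup s
             WHERE (1 | s.SourceID) = (1 | rv.SourceID)) sl;
```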


And the last two queries, which the optimizer equates to the same query.

[codesyntax lang=”tsql”]
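The last two queries aren't shown in this copy; hypothetical sketches of the pair (one with ISNULL, one without) would be:

```sql
-- Hypothetical sketches: the ISNULL version and the bare bitwise version.
-- As described below, the optimizer strips the ISNULL, producing the same
-- plan for both.
SELECT rv.RumorID, sl.SourceName
FROM dbo.RumorView rv
INNER JOIN dbo.SourceLookup sl
    ON (1 | sl.SourceID) = ISNULL((1 | rv.SourceID), rv.SourceID);

SELECT rv.RumorID, sl.SourceName
FROM dbo.RumorView rv
INNER JOIN dbo.SourceLookup sl
    ON (1 | sl.SourceID) = (1 | rv.SourceID);
```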


The query optimizer in this case is smart and eliminates the ISNULL.  These two queries use the same exact plan, have the same cost and the same execution statistics.  The version with COALESCE is considered more expensive and takes longer to run than these queries.  It is also important to note that the Cross Apply Join also produces the exact same plan as these two queries.


So there you have it.  Many different ways to write the JOIN for this little query.  Performance and results may vary.  It is good to have a few different ways of writing this particular JOIN.  During my testing, it was evident that various methods performed better under different circumstances (such as how the indexes were configured – which will be discussed in the follow-up article).

Yes, I did re-use my previous post on this particular topic.  That said, I want to add another tidbit.

This Post just Goes On and On

When Joining objects, you should take care as to the data type used in the Join.  Without going into implicit conversions in the Joins, I just want to discuss briefly the impact of data type choice in a Join.  This is where knowing your Data, the workload, and usage patterns is going to be an absolute necessity.

That said, you should not be surprised by a change in the performance of your queries if you were to change from a varchar() datatype to a varbinary() datatype.  I’m not saying that the query is going to perform better – but that the performance may change.  As an example, I was effectively able to turn a well performing query into a very poor performing query by changing from varchar to varbinary.  On the flipside, I have seen the reverse also become true.  It all boils down to proper data types for the data.

Differential Restore Fail

Categories: Corner, News, Professional, SSC, SSSOLV
Comments: 6 Comments
Published on: November 28, 2012

We have all heard that backup is not complete until you test it by restoring it.  We also know that part of a DR plan involves the testing of those backups through restores.

I recently had the opportunity to run through a test scenario and ran into a substantial failure.  It was nearly catastrophic due to the size of the database, size of the backups and location of the DR site where we were testing the recovery.  Let’s just fast forward to all of the fun rather than bore you with all of the minutia.

I began the test with the restore of the full database backup.  This restore took in excess of 16 hours.  Once complete, the next phase was to restore the latest differential and then log backups from there.

Msg 3136, Level 16, State 1, Line 1
This differential backup cannot be restored because the database has not been restored to the correct earlier state.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

This is where the fun began.  I sat there pondering this for a bit.  How could this be the case?  I verified that the Full backup was only run once via the regular schedule.  I even ran the following script to check.
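The script itself was lost from this copy of the post; a query along these lines against msdb (database name and exclusion pattern are illustrative) would perform the check:

```sql
-- Hypothetical sketch of the check: list recent full backups from msdb.
-- The NOT LIKE filter excludes devices named with GUIDs (third-party/VDI
-- backups) -- the kind of line whose removal is discussed next.
SELECT  bs.database_name,
        bs.backup_start_date,
        bmf.physical_device_name
FROM    msdb.dbo.backupset bs
JOIN    msdb.dbo.backupmediafamily bmf
        ON bmf.media_set_id = bs.media_set_id
WHERE   bs.type = 'D'                                -- D = full backup
  AND   bs.database_name = 'MyDB'                    -- illustrative name
  AND   bmf.physical_device_name NOT LIKE '{%}'     -- illustrative exclusion
ORDER BY bs.backup_start_date DESC;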



This code returned only the one full backup that I knew and expected should be there from within the past few days.  So I tried that differential restore again.  I also tried a new differential just in case.  All of that was futile.  Then I looked back to my script and removed the following line



Tada – that revealed the problem.  Suddenly a new full backup appeared for each day since the regularly scheduled full backup was taken.  Each new backup had the following nomenclature for the backup path.



And then it became apparent what was happening.  I had been excluding backups with paths following that type of naming convention due to CommVault in the past.  In this case I knew that CommVault was not a part of the picture.  I did know however that SAN snapshots were being taken on a nightly basis.

This was the aha moment – SAN snapshotting was causing a full backup to be registered in SQL Server.  And it makes perfect sense when looking at the backup history and sizes for each of the differentials that had been taken.  The differential backups were almost always the same size.  If you are running SAN Snapshots, you may want to reconsider your differential backup plan.

This script (just the revised version of the above) will help list the Full and Differential backups that have been taken.
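The revised script did not survive in this copy; a hypothetical sketch of what such a listing looks like:

```sql
-- Hypothetical sketch of the revised script: list Full and Differential
-- backups together, with device names, so snapshot "fulls" are visible.
SELECT  bs.database_name,
        CASE bs.type WHEN 'D' THEN 'Full'
                     WHEN 'I' THEN 'Differential' END AS BackupType,
        bs.backup_start_date,
        bs.backup_size,
        bs.is_snapshot,
        bmf.physical_device_name
FROM    msdb.dbo.backupset bs
JOIN    msdb.dbo.backupmediafamily bmf
        ON bmf.media_set_id = bs.media_set_id
WHERE   bs.type IN ('D', 'I')
ORDER BY bs.database_name, bs.backup_start_date DESC;
```

The `is_snapshot` column in msdb.dbo.backupset flags backups taken through the VDI/snapshot interface, which is another way to spot them.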



Summit 2012

Categories: News, Professional, SSC, SSSOLV
Comments: No Comments
Published on: November 1, 2012

In case you have been living under a rock (I know it’s not under my rock), here is your notice that Summit 2012 is just a few days away at this point.

I will soon be boarding a jet bound for Seattle for a week full of all things SQL.  When I say all things SQL – there isn’t much exaggeration to it.

During my week at summit, I will be busy from pre-dawn to post-dusk every day.  That is  not an atypical experience for Summit – but the reality hit me firmly as I finalized my daily schedule for next week.

Since I put it together, I decided to share it so others may see all that may be available to them.

I arrive Sunday.  That evening after getting settled in, I plan to join the MCM study group (and maybe have the Sunday night NFL game on in the background).

Monday is a free day of learning hosted by Red Gate and their SQL in the City series. You can see the agenda here. After the event there is the after event social party.  That party overlaps with the Networking Dinner hosted by Andy Warren (twitter) and Steve Jones (twitter) at Gordon Biersch.  This dinner is a pre-registration event (as is SQL in the City).  You can register here (if they have slots available).

Tuesday is a big day.  Tuesday I will be taking the MCM Lab exam in the morning.  Upon completion I hope to decompress for a few hours prior to the evening events.  In the evening we have the Welcome reception/Quiz Bowl sponsored by SQL Sentry.  Immediately following that, there are two events at the same time.  The first is the Speaker/Volunteer party.  The other is another Red Gate event – The Exceptional DBA Awards and SQLServerCentral party. For those from the US, this is also Election Tuesday.  I hope for those from out of town you voted early or are sending in an absentee ballot.

Wednesday we jump into the sessions that I planned on attending (XE, SQLOS and Perf Tuning on my docket).  But in addition to those sessions, I will be at the Apress booth (booth 342) from 12:30 – 1:00 with Wayne Sheffield (twitter), representing our book.  And before anything even gets going Wednesday, I will also be at the Summit Community Zone from 7AM to 8AM.  In the evening there is the Exhibitor Reception.  And for anybody into Karaoke – you can do either of the two Karaoke events at about 9PM that evening.  Don’t forget to wear your SQL Saturday shirt(s) on Wednesday.

Thursday I will be at the Community Zone again from 7AM to 8AM.  If you have a kilt – wear it Thursday.  I plan on attending the Query Tuning Mastery, Transaction Isolation Levels, and Recovering Lost Data sessions on Thursday. That brings us to the evening when we have the Community Appreciation Party that runs from 7PM to 10PM.

Friday we are on our last leg, wrapping things up and hoping to not be overloaded by this point.  The session I am most looking forward to is the last one of the day by Paul White (twitter). I mean, who wouldn’t want to end Summit by attending a deep dive session into the Query Optimizer?  In fact, I know Wayne Sheffield is presenting during the same time slot, and he was hoping nobody would come to his session so he could attend Paul’s session instead.  Of course he said that in jest, because he wants to do his presentation.  But at the same time he really wants to get to Paul’s session. At the conclusion of this session, my agenda is concluded, leaving Friday night my only open night.

I am looking forward to meeting you and chatting. If you see me, say hi. But please don’t try to interrupt my exam. 😉

SQLSat153 Recap

Comments: No Comments
Published on: October 24, 2012

You may not know that this past Saturday ( October 20, 2012) was SQLSaturday 153 in Salt Lake City.  If you knew about it, awesome.  If you attended – even better.  You probably don’t know that I was there presenting this year.  I was a huge slacker in getting the word out about presenting at SQLSat153.

I am hoping that this recap will help atone for that grievous mistake. 😉

Pat Wright (twitter) and company did a great job putting the event together and it appeared to run without problem.  I did what I could to help where possible.  I showed up early to try and help with setup and also to make sure demos and equipment would work.

Notice that I was there to make sure there wouldn’t be equipment issues.  I had the first presentation of the day and I wanted it to run smoothly.  I’ll also throw out there that it was my first time presenting that topic.  Well, I hope the feedback is as good as the turnout.

I had a packed house!  I had a couple of other presenters find me afterward because they had heard my session was packed.  A little razzing about stealing their attendees was to be had and it was welcomed.

The session that was packed (and the empty chair was a momentary absence so he could answer his phone) was on Extended Events in SQL 2012.  This was a fun session and the topic is really good.  Unfortunately we did not make it to the bonus material.  I think the bonus demos would have been very good to discuss.  I hope we can get to that material next time.

Despite not reaching the bonus material, I have included the material in the zip file that I uploaded to the event website.

I did present two sessions Saturday and was part of the open forum panel that consisted of Cindy Gross (twitter), Randy Knight (twitter), Ben Miller (twitter) and myself.    Now you may know that Cindy is an MCM.  You might also know that Randy and Ben have both passed the MCM Knowledge exam (Ben is taking the Lab as I type).  That made the panel full of SQL Knowledge (we joked that it was equal to two full MCMs giving Ben, Randy and myself 1/3 each for having passed the Knowledge Exam).  This Q&A session went over quite well too.

If you didn’t make it out, I hope it was a really good reason.  I also hope that you can make it next time because the sessions were outstanding and the opportunity to learn is top notch.

Lions and Tigers and Bears…

Categories: Corner, News, Professional, SSC, SSSOLV
Comments: No Comments
Published on: October 15, 2012

Last Thursday I found myself on a journey of sorts.  There weren’t any lions or tigers or bears, but there were plenty of “Oh My”‘s.

It all began on a dark and gloomy night.  Well, I am sure it was dark and gloomy somewhere.  In reality it was before dusk and the sun was shining.  I was heading out of town for a quick trip.  My family was to drop me off at the airport so we could have dinner together before I left.

We ended up at a McDonalds.  We let the kids play in the play area while we chatted with some of the other parents in the area.  For all intents and purposes this is where the journey began.

Our timing was pretty good to arrive at dinner.  We had just missed the commotion.  A family had just been robbed at gunpoint.  A rare thing on this side of the river in the woods.

Finishing up with the excitement and dinner we were back on our way to the airport.  When arriving at the airport, I found that my flight was to be delayed by about an hour. Apparently, the flight from San Francisco was delayed causing us some big delays.  In turn, this resulted in me getting to my hotel after midnight.

At the hotel after a few hours of rest, I was awake and prepping for the day.  The most nerve-racking thing was about to happen and I had no clue it was coming.  I had taken the elevator downstairs to get some breakfast.  After having eaten, and needing to leave within the next 20 minutes, I was back in the elevator on my way back up to the room.  Five of those minutes were consumed in that elevator as it decided to stop working between floors.  It was as if the elevator took a break or decided to reboot.  Needless to say, I took the stairs the next time – which was the last time.

Now why does any of this matter?  Well, it was certainly an interesting situation as I was heading to the Prometric test center to take the MCM Knowledge exam.  The exam was intense enough without the added complexity of travel.  For now, the only option (besides paying extra for remote delivery) for people such as myself living in Utah is to travel out of Utah.  This exposes us to that extra complexity and challenges such as getting stuck in an elevator.  I was a bit anxious thanks to that experience, but I made it to the center in time for my exam.  Thankfully I was planning on being there extra early.

Moral of the story – plan ahead.  I considered flying in and out the same day to take the exam and then reconsidered.  I didn’t want to take the risk of flight delays causing me to miss it.  Having that extra time, it made these lions and tigers and bears all the more bearable and I still was able to take the exam.

Do you Dream?

Comments: 4 Comments
Published on: October 11, 2012

You ever have those nights consumed by all things SQL?

I do.  It happens more than I might like sometimes.  Usually it happens when there is a work problem that I am trying to solve.

I don’t mind it sometimes because it does provide some results at times.  Other times it is of no use.

Sometimes when multi-tasking like this, I wish I could recall some of the code.  Others, I don’t care to recall the code.  And yet other times, the code sits fresh in my head until I sit at a keyboard.  Sometimes, that last one is the only way I am going to get some sleep – find the laptop and then bang out some code.

Of late though, my SQL dreams have been very relaxing.  I have been poring over books and thinking about topics I might want to study more in depth to become that more well-rounded DBA.  Of course, these studies also have another goal in mind.  I am working toward the MCM.

So as I study and think of those extra tidbits I want to learn, I also dream of exotic solutions and implementations.  I have been sleeping better and dreaming more about SQL.  Which is kind of weird come to think about it.  But hey, I will enjoy these SQL dreams and hopefully it will be more than just dreams.  I will soon know if I get to move on to the next exam as I will be taking my Knowledge exam tomorrow.


My question for you is: Do you Dream of SQL?  If you do, what kinds of SQL Dreams do you have?

Oh and if you practice this at your desk, do you just tell your boss that you are multi-tasking?

High Energy Plankton – T-SQL Tuesday #35

Comments: No Comments
Published on: October 9, 2012

High Energy Plankton

In 2022 humans will finally have become their own worst enemies/frenemies.  No matter how hard Gore works to convince us that the greenhouse effect is killing us, we won’t believe it until 2022.  It is at that time we will finally turn to alternative energy sources.  These alternative energy sources include the food source derived from high energy plankton.

You may already have heard of this product, but in case you have not, you can get a glimpse of it now and maybe prevent our impending doom.

Hurry now while supplies last – get your box of Soylent Green crackers.  This product is available for purchase here.

This high energy food source was the brainchild of the movie that is the source of this month’s blog party, hosted by Nick Haslam (T|B).  This blog party is also known as T-SQL Tuesday and you can read the invite here.

Now, Soylent Green may or may not be that cool granola type food you are looking for.  But I really hope the by-product of Soylent Green is nothing you want.

Good Intentions

“So, what I’d like to know is, what is your most horrifying discovery from your work with SQL Server?”

I thought about this quite a bit.  The reason being that I really wanted to include some code that would classify this entry as Soylent Green.  Many examples came to mind, but I am not sure I can share any of that code.  However, I do have a few stories to share.  In no particular order, here you go.

  1. Dealing with a particular application, we noticed that performance was far less than optimal (big surprise there, I know 😉 ).  While monitoring, and through running traces to find the pain points, I discovered something very peculiar.  The peculiarity came in the form of having multiple plans cached for the same procedure call.  That may or may not seem peculiar just yet – but when I add that the procedure call performed the login process for the application, that is what becomes peculiar.  Using the same parameters, the application had called the login stored procedure four different ways.
    They did a fine job by using a stored procedure to perform the logon process.  Unfortunately, they did not wrap that into an object within the application and instead called the logon process differently in each module that called it.
    You can see similar activity by doing the following against the Adventureworks database.
  2. Much earlier in my career I had the distinct pleasure of working for a bank as a DBA.  This was a short-lived experience, due in large part to what follows.  We were being audited and I was newly hired.  One of the first requests was for a temporary change to be made in the database while the auditor was present.  And the request also made it clear that it was to be changed back once that auditor left. My immediate response was “NO.”  There were many colorful arguments made as to why I should immediately make that change.  That was a great experience (shudder).
  3. More recently I inherited an environment that was experiencing pretty bad performance.  Hardware was thrown at it to try and alleviate the problems.  After some discovery, it was found that no clustered indexes could be found in any table in any database.  The reason given was that clustered indexes would change query results and they couldn’t take that risk.  After much work, many clustered indexes were placed into the environment with no negative impact.  I worked very hard on that environment to get it purring.  I found that after I left, it only took a few months for things to start falling apart.  Some of the problems include the DBAs refusing to perform the database backups, insisting that the systems admins do that for them.  OUCH!
  4. This last example is a prime example of Soylent Green.  I was recently called to assist with a production outage issue.  Tempdb was full but had not completely filled the drives.  People were unable to get work done.  To get things moving, I expanded tempdb (the log file) and then began querying to find what was going on.  Within minutes I had found multiple instances of the same query running that had been running for nearly 10hrs.  The query was causing significant spooling and should not have been this detrimental.  I killed the query in question and got a little more info on it.  In this case the developer decided to test against production.  He had taken a query that had been finishing in under five minutes, tweaked it, broke it and then brought down production.
    Some interesting figures besides the 10 hours to run and not complete is that it had read 2 billion records by the time it was killed and the largest table in the database is only 20 million records.  No table in this query exceeded one million records.
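The demo that went with the first story isn't included in this copy. A sketch of the same effect (procedure name from the AdventureWorks2012 sample; any procedure works) relies on the fact that ad hoc batch text is cached and matched character for character:

```sql
-- Hypothetical demo: the same procedure call submitted with different
-- casing/spacing caches as separate ad hoc shell plans, because ad hoc
-- batch text is matched exactly.  One Proc plan is shared underneath.
DBCC FREEPROCCACHE;   -- clears the whole plan cache: test instances only!

EXEC dbo.uspGetEmployeeManagers @BusinessEntityID = 1;
exec dbo.uspGetEmployeeManagers @BusinessEntityID = 1;
EXEC dbo.uspGetEmployeeManagers  @BusinessEntityID = 1;

-- Each distinct batch text above shows up as its own Adhoc entry.
SELECT cp.objtype, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%uspGetEmployeeManagers%'
  AND st.text NOT LIKE '%dm_exec_cached_plans%';
```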