March 2013 LV UG Meeting

Categories: News, Professional, SSC, SSSOLV
Comments: No Comments
Published on: March 11, 2013

Another month and we have another installment (meeting) for the Data Professionals of the Las Vegas area.

This month we will be getting back on schedule with the usual meeting time and place.  We had a slight divergence last month due to that heartsy holiday.

We are excited about some of the upcoming speakers and feedback we have been getting of late.  We are feverishly working to get some presenters on Big Data and Hadoop.  If you are in the market to present on these topics, ping me.

Here are some of the details for the meeting.


Presenting is Jason Kassay

Jason will be presenting to us on the topic of “The Accidental Programmer.”

Most people in the database world have heard the term “The Accidental DBA”: those programmers who have to work with databases. How about the other side of the coin? What about DBAs and Devs who have to write code or work closely with programmers? This presentation is a best-practices guide for working with SQL Server in a .Net environment. You will learn how to recognize bad code that interacts with the database, how to track it down, and most importantly how to fix it. On top of that, you will also receive an introduction to object-oriented programming concepts such as data abstraction, encapsulation, tier architecture, and class objects so that you can better communicate with your programmers.

BIO

Jason has been a software developer for over 10 years and currently works at EZFacility, providing scheduling, management, full-service billing, and membership solutions for the sports, health, and fitness industries. He has worked primarily with .Net (C# and VB) and SQL Server as well as HTML, CSS, and Javascript. When he is not coding or spending time with his awesome family, you will find him either running (to train for the zombie apocalypse) or stopping hockey pucks as a goaltender.

 

LiveMeeting Information:

Attendee URL: https://www.livemeeting.com/cc/UserGroups/join?id=DC769N&role=attend
Meeting ID: DC769N

 

VERY IMPORTANT

The meeting location has changed.  We will no longer be meeting at The Learning Center.  New meeting location is M Staff Solutions & Training / 2620 Regatta Drive Suite 102 Las Vegas, NV 89128.
We are also moving to meetup for meeting management. Please join us for free at http://www.meetup.com/SQL-Server-Society-of-Las-Vegas/

January S3OLV 2013

Comments: No Comments
Published on: January 9, 2013

I hope you are ready for some free SQL learning.  Coming up on Thursday, January 10th at 6:30 PM Pacific (GMT-8), we will have a stellar opportunity to do just that – Learn!

Edwin Sarmiento will be presenting at the upcoming monthly Las Vegas User Group meeting.  Edwin has chosen (well, he left it up to me to decide) to present on PowerShell.

Title: Windows PowerShell for the SQL Server DBA

Abstract

Windows PowerShell is becoming the scripting language of choice for managing Microsoft servers and workstations. And while T-SQL is still the scripting language for DBAs to administer and manage SQL Server, there are tons of stuff that can be easily done with Windows PowerShell. In this session, learn Windows PowerShell from the ground up and how you can use it with SQL Server Management Objects (SMO) to administer and manage a SQL Server instance.

Here is the bonus behind all of this – we will be broadcasting the presentation to the virtual sphere.  Here are the details for joining the meeting via LiveMeeting.

Copy this address and paste it into your web browser:

Copy and paste the required information:

 

On the Ninth Day…

It’s the end of the world as we know it.  And as the song goes…I feel fine!  But hey, we didn’t start the fire.

Those are a couple of songs that pop into my head every time somebody starts talking doomsday and doomsday prophecies.

If you are reading this, I dare say that the world is still turning.  And that is a good thing because that gives us a chance to talk about the 12 days of pre-Christmas.

Today we will be talking about a tool that can be at the DBA's disposal to help in tracking performance as well as problems.

There are a couple of housekeeping items first.  The first is that I only realized with this post that the post on the first day was wrong.  I had miscalculated my twelve days to end on Christmas Day.  That is wrong!  Counting down from that first post on the 13th means the 12th day will actually land on December 24th.  Maybe I will throw in a bonus 13th day post, or delay a post for a day, or just go with it.  It will be a mystery!

Second item is naturally the daily recap of the 12 days to date.

On the Ninth Day of pre-Christmas…

My DBA gave to me a Data Collection!

If only my DBA had told me that I would need to have Management Data Warehouse (MDW) preconfigured.  Well, that is not a problem.  We can handle that too.  For a tutorial on how to setup MDW, I am going to provide a link to one that has been very well written by Kalen Delaney.

The article written by Kalen covers the topic of MDW very well all the way from setting up the MDW, to setting up the Data Collectors, and all the way down to the default reports you can run for MDW.  The MDW and canned Data Collectors can provide some really useful information for your environment.

What I want to share though is a means to add custom data collections to your MDW.  To create a custom collection, we need to rely on two stored procedures provided to us by Microsoft.  Those stored procedures are: sp_syscollector_create_collection_item and sp_syscollector_create_collection_set.  Both of these stored procedures are found in the, you guessed it, msdb database.

Each of these stored procedures has a number of parameters to help in the creation of an appropriate data collection.  When creating a data collection, you will first create the collection set, and then you add collection items to that set.

There are a few notable parameters for each stored procedure that I will cover.  Otherwise, you can refer back to the links for each of the stored procedures to see more information about the parameters.

Starting with the sp_syscollector_create_collection_set stored procedure, I want to point out the @schedule_name, @name, and @collection_mode parameters.  The name is pretty obvious – make sure you have a distinguishable name that is descriptive (my opinion) or at least have good documentation.  The collection mode has two values: cached and non-cached.  As noted in the documentation, you will want to choose one value over the other depending on the intended use of this data collector.  If running continuously, just remember to run in cached mode.  And last is the schedule name, which helps determine how frequently the job runs.

Unfortunately, the schedule names are not found in the documentation; instead, you are directed to query the sysschedules table.  To help you find those schedules, here is a quick query.
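Something along these lines works (a minimal sketch; the filter on the name is an assumption about how the shared collector schedules are named on your instance):

[codesyntax lang="tsql"]
SELECT s.schedule_id, s.schedule_uid, s.name
FROM msdb.dbo.sysschedules s
WHERE s.name LIKE 'CollectorSchedule%'   -- assumption: shared collector schedules follow this naming pattern
ORDER BY s.name;
[/codesyntax]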

Now on to the sp_syscollector_create_collection_item stored procedure.  There are three parameters that I want to lightly touch on.  For the rest, you can refer back to the documentation.  The parameters of interest here are @parameters, @frequency, and @collector_type_uid.  Starting with the frequency parameter, this tells the collector how often to upload the data to the MDW if running in cached mode.  Be careful here to select an appropriate interval.  Next is the parameters parameter, which is really the workhorse of the collection item.  In the case of the custom data collector that I will show in a bit, this is where the T-SQL query will go.

The last parameter to discuss is the collector type uid.  Like the schedule for the previous proc, the documentation essentially refers you to a system view – syscollector_collector_types.  Here is a quick query to see the different collector types.
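A sketch of that lookup, trimmed to the two columns we actually need:

[codesyntax lang="tsql"]
SELECT ct.collector_type_uid, ct.name
FROM msdb.dbo.syscollector_collector_types ct
ORDER BY ct.name;
[/codesyntax]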

The collector type that I will be using for this example is Generic T-SQL Query Collector Type.  A discussion on the four types of collectors can be reserved for another time.

Let’s move on to the example now.  This custom data collector is designed to help troubleshoot deadlock problems.  I want to accomplish this by querying the system_health Extended Events session.

I can query for deadlock information directly from the system_health session using a query like the following.
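The original query did not survive in this copy of the post, so here is a minimal sketch of that kind of query; the XPath used to pick the deadlock node out of the event data is an assumption and may need adjusting for your build:

[codesyntax lang="tsql"]
DECLARE @target_data xml;

-- Grab the ring_buffer target of the system_health session
SELECT @target_data = CAST(t.target_data AS xml)
FROM sys.dm_xe_session_targets t
INNER JOIN sys.dm_xe_sessions s
    ON s.address = t.event_session_address
WHERE s.name = N'system_health'
  AND t.target_name = N'ring_buffer';

-- Shred out the xml_deadlock_report events and convert each graph to varchar(4000)
SELECT CONVERT(varchar(4000), xed.event_data.query('(data/value/*)[1]')) AS DeadlockGraph
FROM @target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS xed(event_data);
[/codesyntax]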

You may notice that I have converted to varchar(4000) from XML.  This is in large part to make sure the results will play nicely with the data collector.  Now to convert that to a query that can be used in the @parameters parameter, we get the following.
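A sketch of that conversion, with the query from above embedded in the parameter XML.  The systemhealthdeadlock output table name comes from later in the post; the surrounding element layout is the documented shape of the Generic T-SQL Query collector parameter as best I recall it, so treat it as an assumption:

[codesyntax lang="tsql"]
DECLARE @parameters xml;

SET @parameters = CONVERT(xml,
N'<ns:TSQLQueryCollector xmlns:ns="DataCollectorType">
  <Query>
    <Value>
      SELECT CONVERT(varchar(4000), xed.event_data.query(''(data/value/*)[1]'')) AS DeadlockGraph
      FROM (SELECT CAST(t.target_data AS xml) AS target_data
              FROM sys.dm_xe_session_targets t
              INNER JOIN sys.dm_xe_sessions s
                  ON s.address = t.event_session_address
             WHERE s.name = N''system_health''
               AND t.target_name = N''ring_buffer'') AS src
      CROSS APPLY src.target_data.nodes(''RingBufferTarget/event[@name="xml_deadlock_report"]'') AS xed(event_data)
    </Value>
    <OutputTable>systemhealthdeadlock</OutputTable>
  </Query>
</ns:TSQLQueryCollector>');
[/codesyntax]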

With this query, we are loading the necessary schema nodes that correlate to the Data Collector Type that we chose.  Since this parameter is XML, the schema must match or you will get an error.  We are now ready to generate a script that can create a deadlock data collector.
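The exact script is missing from this copy, so the block below is a sketch of how the two procedures fit together.  The collection set name, schedule, retention, and frequency are placeholders, and it assumes the @parameters XML from the previous snippet is still in scope:

[codesyntax lang="tsql"]
-- Assumes @parameters (the XML from the previous snippet) is declared in the same batch
DECLARE @collection_set_id int,
        @collection_set_uid uniqueidentifier,
        @collection_item_id int,
        @collector_type_uid uniqueidentifier;

-- Look up the uid for the Generic T-SQL Query Collector Type
SELECT @collector_type_uid = collector_type_uid
FROM msdb.dbo.syscollector_collector_types
WHERE name = N'Generic T-SQL Query Collector Type';

EXEC msdb.dbo.sp_syscollector_create_collection_set
     @name = N'SystemHealthDeadlock',                    -- placeholder name
     @collection_mode = 1,                               -- 1 = non-cached, 0 = cached
     @description = N'Deadlock graphs from the system_health session',
     @days_until_expiration = 14,                        -- retention in the MDW
     @schedule_name = N'CollectorSchedule_Every_15min',  -- assumption: pick one from the sysschedules query above
     @collection_set_id = @collection_set_id OUTPUT,
     @collection_set_uid = @collection_set_uid OUTPUT;

EXEC msdb.dbo.sp_syscollector_create_collection_item
     @collection_set_id = @collection_set_id,
     @collector_type_uid = @collector_type_uid,
     @name = N'SystemHealthDeadlock item',
     @frequency = 900,                                   -- seconds; only matters for cached mode
     @parameters = @parameters,
     @collection_item_id = @collection_item_id OUTPUT;

-- Starting the set enables the associated SQL Agent job(s)
EXEC msdb.dbo.sp_syscollector_start_collection_set
     @collection_set_id = @collection_set_id;
[/codesyntax]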

Upon creation, this will create a SQL Agent job with the defined schedule.  Since this is a non-cached data collector set, the Agent job will adhere to the schedule specified and upload data on that interval.

Now all we need to do is generate a deadlock to see if it is working.  It is also a good idea to introduce the table that will be created by this data collector.  Once we create this collector set, a new table is created in the MDW database; in the case of this collector set, it has the schema and name custom_snapshots.systemhealthdeadlock.

This new table will have three columns.  One column represents the DeadlockGraph as we retrieved from the query we provided to the @parameters parameter.  The remaining columns are data collector columns for the collection date and the snapshot id.

Now that we have covered all of that, your favorite deadlock query has had enough time to finally fall victim to a deadlock.  We should also have some information recorded in the custom_snapshots.systemhealthdeadlock table relevant to the deadlock information (if not, it will be there once the agent job runs, or you can run a snapshot from SSMS of the data collector).  With a quick query, we can start looking into the deadlock problem.
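A quick query along these lines does the trick (the MDW database name is a placeholder, and the collection_time and snapshot_id columns are the standard data collector columns mentioned above):

[codesyntax lang="tsql"]
USE MDW;   -- placeholder: substitute the name of your management data warehouse
GO
SELECT shd.DeadlockGraph,     -- varchar(4000); CAST(... AS xml) gives a clickable graph but can fail if truncated
       shd.collection_time,
       shd.snapshot_id
FROM custom_snapshots.systemhealthdeadlock shd
ORDER BY shd.collection_time DESC;
[/codesyntax]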

This query will give me a few entries (since I went overkill and created a bunch of deadlocks).  If I click the DeadlockGraph cell in the result sets, I can then view the XML of the DeadlockGraph, as in the following.

Code to generate deadlock courtesy of “SQL Server 2012 T-SQL Recipes: A Problem-Solution Approach” by Jason Brimhall, Wayne Sheffield et al (Chapter 12, pages 267-268).  If you examine the deadlock graph you will see the code that generated the deadlock.

Since this is being pulled from the ring_buffer target of the system_health session, it can prove useful to store that data in a table such as I have done.  The reason is that the ring_buffer can be overwritten, and with good timing on the data collector we can preserve this information for later retrieval and troubleshooting.  Deadlocks don't always happen at the most opportune time, and they are even less likely to occur when we are staring at the screen waiting for them to happen.

As you read more about the stored procedures used to create a data collector, you will see that there is a retention parameter.  This helps prevent the table from getting too large on us.  We can also ensure that an appropriate retention is stored for these custom collectors.

Conclusion

Creating a custom data collector can be very handy for a DBA, especially in times of troubleshooting.  These collectors are also quite useful for trending and analysis.  Consider this a tool in the chest for the poor man. 😉

Enjoy and stay tuned!

All scripts and references were for SQL 2008 R2.  The Deadlock script was pulled from the 2012 book, but the script runs the same for the 2008 version of the AdventureWorks database.

On the Fifth Day…

Today we will have a quick change of pace.  This will be less about maintenance and more about internals.

Today’s topic is one of particular interest to me as well.  The topic is a small section of a presentation I like to give at User Groups and at SQL Saturdays.

Before we take this diversion from the under-carriage to something more related to the engine, let’s have a brief recap of the previous four days of pre-Christmas.

So far the gifts of pre-Christmas have included the following articles.

  1. Exercise for msdb – Day 4
  2. Backup, Job and Mail History Cleanup – 3rd Day
  3. Service Broker Out of Control – 2nd Day
  4. Maint Plan Logs – 1st Day

On the Fifth Day of pre-Christmas…

My DBA gave to me a much more compact database.  Surprisingly, it can be as little as 20% of the original size.

I know, I know.  I’ve heard it before, but this is not the compression used by DoubleSpace and DriveSpace that we were given many years ago.  This really is much better.

And yes I have heard about the performance factor too.  That is a topic for another discussion.  As an aside, when it comes to performance, I always tell people that they must test for themselves because mileage will vary.

No, what I want to talk about is much different.  I want to talk about the CD Array (at the page level) and the new data types specific to compression that you may encounter within the CD Array.

CD Array Data Types

SQL Server introduces us to 13 data types that are used within the CD Array when Compression has been enabled.  Twelve of these data types can be seen when Row Compression has been enabled on an object.  The thirteenth data type is only applicable when page compression has been implemented.

There is no guarantee that any or all of these data types will be present on a page related to an object that has been compressed using either Row Compression or Page Compression.  If you want to find these data types on a compressed page, you may have to do a little hunting.

To demonstrate that these data types exist and that they can be found, I have a sample script.

[codesyntax lang=”tsql”]

[/codesyntax]
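The contents of that script did not carry over into this copy of the post, so the block below is a rough reconstruction under stated assumptions: the table and column names follow the table shown a little further down, while the data types and the inserted values are my own guesses at something that will surface each of the CD array entries.

[codesyntax lang="tsql"]
USE tempdb;   -- assumption: any scratch database will do
GO
IF OBJECT_ID('dbo.CDTypes') IS NOT NULL
    DROP TABLE dbo.CDTypes;
GO
CREATE TABLE dbo.CDTypes
(
    SomeNull  int          NULL,   -- intended to surface 0x00 NULL
    SomeBit   int          NULL,   -- 0x01 EMPTY (a zero stores no data bytes)
    Some1Byte bigint       NULL,   -- 0x02 ONE_BYTE_SHORT
    Some2Byte bigint       NULL,   -- 0x03 TWO_BYTE_SHORT
    Some3Byte bigint       NULL,   -- 0x04 THREE_BYTE_SHORT
    Some4Byte bigint       NULL,   -- 0x05 FOUR_BYTE_SHORT
    Some5Byte bigint       NULL,   -- 0x06 FIVE_BYTE_SHORT
    Some6Byte bigint       NULL,   -- 0x07 SIX_BYTE_SHORT
    Some7Byte bigint       NULL,   -- 0x08 SEVEN_BYTE_SHORT
    Some8Byte bigint       NULL,   -- 0x09 EIGHT_BYTE_SHORT
    SomeLong  varchar(50)  NULL,   -- 0x0a LONG
    SomeBit2  bit          NULL    -- 0x0b BIT_COLUMN
);
GO
ALTER TABLE dbo.CDTypes REBUILD WITH (DATA_COMPRESSION = ROW);
GO
-- Values chosen so the row compressed storage sizes roughly span 1 through 8 bytes
INSERT INTO dbo.CDTypes
VALUES (NULL, 0, 1, 300, 70000, 20000000,
        5000000000, 1300000000000, 300000000000000, 9000000000000000000,
        REPLICATE('a', 50), 1);
GO
-- Second table, page compressed (twice, as noted below); the duplicate rows give
-- the page dictionary something to work with
IF OBJECT_ID('dbo.CDTypes2') IS NOT NULL
    DROP TABLE dbo.CDTypes2;
GO
SELECT * INTO dbo.CDTypes2 FROM dbo.CDTypes;
INSERT INTO dbo.CDTypes2
SELECT TOP (2000) c.* FROM dbo.CDTypes c CROSS JOIN sys.all_objects o;
GO
ALTER TABLE dbo.CDTypes2 REBUILD WITH (DATA_COMPRESSION = PAGE);
ALTER TABLE dbo.CDTypes2 REBUILD WITH (DATA_COMPRESSION = PAGE);
GO
-- List the pages for each table; data pages are PageType = 1
DBCC IND ('tempdb', 'dbo.CDTypes', -1);
DBCC IND ('tempdb', 'dbo.CDTypes2', -1);
[/codesyntax]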

Now let’s take a look at the different data types, starting with the 12 available with Row Compression.

Value   CD Array Entry     Column
0x00    NULL               SomeNull
0x01    EMPTY              SomeBit
0x02    ONE_BYTE_SHORT     Some1Byte
0x03    TWO_BYTE_SHORT     Some2Byte
0x04    THREE_BYTE_SHORT   Some3Byte
0x05    FOUR_BYTE_SHORT    Some4Byte
0x06    FIVE_BYTE_SHORT    Some5Byte
0x07    SIX_BYTE_SHORT     Some6Byte
0x08    SEVEN_BYTE_SHORT   Some7Byte
0x09    EIGHT_BYTE_SHORT   Some8Byte
0x0a    LONG               SomeLong
0x0b    BIT_COLUMN         SomeBit2

If we look at page 20392 (yours will likely be different), we will find all of these data types present.  We will also note that this page should show (COMPRESSED) PRIMARY_RECORD – which indicates that the page is Row Compressed.  When hunting for these data types, it is a good idea to make sure the page is compressed first.  In the supplied table, you can see what data type matches to which column in the table we created via the script.  The table also provides a short description of what that data type represents (as you would see in the CD Array).
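For reference, the page inspection itself looks something like this (the file id of 1 and the page id are placeholders; use a PageType = 1 page from the DBCC IND output for CDTypes):

[codesyntax lang="tsql"]
DBCC TRACEON (3604);   -- route DBCC PAGE output to the client
GO
DBCC PAGE ('tempdb', 1, 20392, 3);   -- placeholder page id; yours will differ
-- Look for (COMPRESSED) PRIMARY_RECORD in the record header, then read the CD array entries
[/codesyntax]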

If we now want to explore and find the 13th data type, we need to look at the second table we created in the attached script – CDTypes2.  Notice that this table has been page compressed.  I even did that twice.  I did this to make sure the data was page compressed (occasionally when testing I could not easily find a page compressed page unless I page compressed a second time).

Much the same as was done with Row Compression, we need to verify that a page is Page Compressed before searching up and down trying to find this 13th type.  To find a Page Compressed page, we need to look in the CompressionInfo section of the page for CI_HAS_DICTIONARY.  If this notation is present, the page is Page Compressed.
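The check is the same DBCC PAGE call pointed at a data page of CDTypes2 (again, the page id is a placeholder):

[codesyntax lang="tsql"]
DBCC TRACEON (3604);
GO
DBCC PAGE ('tempdb', 1, 24889, 3);   -- placeholder: a PageType = 1 page from DBCC IND for CDTypes2
-- Confirm CI_HAS_DICTIONARY appears in the CompressionInfo section before hunting
-- for 0x0c (ONE_BYTE_PAGE_SYMBOL) entries in the CD array
[/codesyntax]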

Here is a page snippet with the pertinent data type.

The CD array entry for Column 4 is what we are interested in here.  In this particular record, we see the thirteenth data type of 0x0c (ONE_BYTE_PAGE_SYMBOL).  This data type can appear for any of the columns in the CD array and can be different for each record on the page.  This is due to how Page Compression works (which is also a different discussion).

For this last example, I want to backtrack a bit and point out that I used page 24889 to query for the Symbol data type in the CD Array.  This page was different every time I re-ran the test script.  One thing that was consistent with how I set up the exercise is that the page compressed page that had this data type was always the second page of type 1 (data page) in the results from DBCC IND.

Conclusion

I hope this helped demystify compression in SQL Server – a little.  This is only one piece of compression but an interesting one in my opinion.  Use the provided script and play with it a little bit to get some familiarity.

Stay tuned for the 6th day of pre-Christmas.

Let’s Talk About Joins

Comments: 1 Comment
Published on: December 11, 2012

T-SQL Tuesday #37

This month, please join us in the T-SQL blog party that happens on the second Tuesday of the month.  It is hosted this month by Sebastian Meine (@sqlity).

Sebastian has a month-long blog series going all about joins.  You can read all about that fun in his post titled A Join A Day – Introduction.

 

This is a good topic, and I pondered what to write this month for a bit.  I immediately went to a topic that I had on my to-blog list.  Unfortunately, I had already covered that topic once upon a time.  But since it was the first thing my mind went to when thinking of this topic, I feel it would be good to re-hash it.  I may change it up a bit, I may not – we’ll just have to see where this post takes us.  So jump on board the Crazy Train for a little Salt-N-Pepa remix – Let’s Talk About Joins.

Business Requirement

I have some tables that I need to query.  One of the tables has lookup information with a bitmask applied to the id field.  Another table references this table but the ids can be a direct match or an indirect match to the id of the lookup table.  For this case, they will always only differ by the value of 1 if the two values do not directly match.  No other match possibility is considered for this example.

Based on this need, the solution dictates some sort of bitwise math.  I have several examples of how a join could be written to accomplish this primary objective.  I am only going to show the possible ways of performing this.  In my environment these all yield the same results and the data is unique and large enough (4.2 million records).  I will compare performance of these different queries in a later post as I demonstrate a query tuning method to drive the query down from nine seconds to 100ms or less.  For the record, I would choose any of queries 5, 6, or 7 for this particular need based on plan cost and performance.

The JOINS

First up is the ANSI style INNER Join using addition in one of the conditions as well as an OR to fulfill the second portion of the business requirement.

[codesyntax lang=”tsql”]

[/codesyntax]
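The original code block is empty in this copy, so here is a sketch of that shape of join.  RumorView and its SourceID column are named later in the post; the lookup table (LkpSource) and the selected columns are hypothetical stand-ins:

[codesyntax lang="tsql"]
SELECT rv.SourceID, l.SourceName
FROM dbo.RumorView rv
INNER JOIN dbo.LkpSource l
    ON l.SourceID = rv.SourceID
    OR l.SourceID = rv.SourceID + 1;   -- the indirect match differs by exactly 1
[/codesyntax]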

This is probably the easiest to understand and it performs well enough.  Until running into this business requirement, I hadn’t considered putting an OR in the JOIN conditions.  But it makes sense considering that an AND can be used there.

Next is a NON-ANSI style of JOIN.

[codesyntax lang=”tsql”]

[/codesyntax]
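A sketch of the NON-ANSI equivalent, using the same hypothetical names as above:

[codesyntax lang="tsql"]
SELECT rv.SourceID, l.SourceName
FROM dbo.RumorView rv, dbo.LkpSource l
WHERE l.SourceID = rv.SourceID
   OR l.SourceID = rv.SourceID + 1;
[/codesyntax]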

Through 2008 R2, this works just as well as the ANSI JOIN already shown.  I haven’t tested in SQL 2012 but I do know that the NON-ANSI syntax of *= (for example) no longer works.  I am not a big fan of this style JOIN because it is far too easy to end up with a Cartesian product.

Another type of JOIN that I like is the use of APPLY.

[codesyntax lang=”tsql”]

[/codesyntax]
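A sketch of the APPLY form, again with the hypothetical lookup table:

[codesyntax lang="tsql"]
SELECT rv.SourceID, l.SourceName
FROM dbo.RumorView rv
CROSS APPLY (SELECT ls.SourceName
               FROM dbo.LkpSource ls
              WHERE ls.SourceID = rv.SourceID
                 OR ls.SourceID = rv.SourceID + 1) l;
[/codesyntax]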

This particular code segment is the equivalent of the first query shown.  This is the last in the set of using basic math and an OR in the JOIN conditions.  The remaining queries all rely on bitwise operations to perform the JOIN.  Again, until this particular need, I had never even considered using a bitwise operation in a JOIN.  First in this series is the NON-ANSI style JOIN.

[codesyntax lang=”tsql”]

[/codesyntax]
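A sketch of that query; the COALESCE and the bitwise OR on both sides of the equality follow the description below, while the table and column names remain the same stand-ins:

[codesyntax lang="tsql"]
SELECT rv.SourceID, l.SourceName
FROM dbo.RumorView rv, dbo.LkpSource l
WHERE COALESCE(1 | rv.SourceID, rv.SourceID) = (1 | l.SourceID);
[/codesyntax]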

The big change here is in the where clause.  Notice the use of COALESCE and the first comparison value in that COALESCE.  This is called a BITWISE OR.  From MSDN: “The bits in the result are set to 1 if either or both bits (for the current bit being resolved) in the input expressions have a value of 1; if neither bit in the input expressions is 1, the bit in the result is set to 0.”

So I am comparing the bit values of 1 and the SourceID.  The SourceID from RumorView will create a match meeting the requirements put forth, thanks in large part to the BIT OR operation being performed on both sides of the equality in the WHERE clause.  It is also worth mentioning that the COALESCE is completely unnecessary in this query, but I am leaving it as a pseudo reference point for the performance tuning article that will be based on these same queries.

Next on tap is the CROSS Apply version.

[codesyntax lang=”tsql”]

[/codesyntax]
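A sketch of the CROSS APPLY version of the bitwise join, same stand-in names:

[codesyntax lang="tsql"]
SELECT rv.SourceID, l.SourceName
FROM dbo.RumorView rv
CROSS APPLY (SELECT ls.SourceName
               FROM dbo.LkpSource ls
              WHERE (1 | ls.SourceID) = (1 | rv.SourceID)) l;
[/codesyntax]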

And here are the last two queries, which the optimizer treats as the same query.

[codesyntax lang=”tsql”]

[/codesyntax]
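Sketches of that final pair (same hypothetical names): one wraps the bitwise expression in ISNULL, the other does not.

[codesyntax lang="tsql"]
/* With ISNULL -- the optimizer strips it out */
SELECT rv.SourceID, l.SourceName
FROM dbo.RumorView rv
INNER JOIN dbo.LkpSource l
    ON ISNULL(1 | rv.SourceID, rv.SourceID) = (1 | l.SourceID);

/* Plain bitwise OR on both sides */
SELECT rv.SourceID, l.SourceName
FROM dbo.RumorView rv
INNER JOIN dbo.LkpSource l
    ON (1 | rv.SourceID) = (1 | l.SourceID);
[/codesyntax]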

The query optimizer in this case is smart and eliminates the ISNULL.  These two queries use the same exact plan, have the same cost and the same execution statistics.  The version with COALESCE is considered more expensive and takes longer to run than these queries.  It is also important to note that the Cross Apply Join also produces the exact same plan as these two queries.

Conclusion

So there you have it.  Many different ways to write the JOIN for this little query.  Performance and results may vary.  It is good to have a few different ways of writing this particular JOIN.  During my testing, it was evident that various methods performed better under different circumstances (such as how the indexes were configured – which will be discussed in the follow-up article).

Yes, I did re-use my previous post on this particular topic.  That said, I want to add another tidbit.

This Post just Goes On and On

When Joining objects, you should take care as to the data type used in the Join.  Without going into implicit conversions in the Joins, I just want to discuss briefly the impact of data type choice in a Join.  This is where knowing your Data, the workload, and usage patterns is going to be an absolute necessity.

That said, you should not be surprised by a change in the performance of your queries if you were to change from a varchar() datatype to a varbinary() datatype.  I’m not saying that the query is going to perform better – but that the performance may change.  As an example, I was effectively able to turn a well performing query into a very poor performing query by changing from varchar to varbinary.  On the flipside, I have seen the reverse also become true.  It all boils down to proper data types for the data.

Differential Restore Fail

Categories: Corner, News, Professional, SSC, SSSOLV
Comments: 8 Comments
Published on: November 28, 2012

We have all heard that backup is not complete until you test it by restoring it.  We also know that part of a DR plan involves the testing of those backups through restores.

I recently had the opportunity to run through a test scenario and ran into a substantial failure.  It was nearly catastrophic due to the size of the database, the size of the backups, and the location of the DR site where we were testing the recovery.  Let's just fast forward to all of the fun rather than bore you with all of the minutiae.

I began the test with the restore of the full database backup.  This restore took in excess of 16 hours.  Once complete, the next phase was to restore the latest differential and then log backups from there.

Msg 3136, Level 16, State 1, Line 1
This differential backup cannot be restored because the database has not been restored to the correct earlier state.
Msg 3013, Level 16, State 1, Line 1
RESTORE DATABASE is terminating abnormally.

This is where the fun began.  I sat there pondering this for a bit.  How could this be the case?  I verified that the Full backup was only run once via the regular schedule.  I even ran the following script to check.
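The script itself is missing from this copy, so the sketch below shows the kind of check I mean.  The database name is a placeholder, and the device-name exclusion on the last line is an approximation of the filter line I later removed:

[codesyntax lang="tsql"]
SELECT bs.database_name,
       bs.type,                                  -- D = Full, I = Differential
       bs.backup_start_date,
       bs.backup_finish_date,
       bs.backup_size,
       bs.is_snapshot,
       bmf.physical_device_name
FROM msdb.dbo.backupset bs
INNER JOIN msdb.dbo.backupmediafamily bmf
    ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = N'YourDatabase'         -- placeholder
  AND bs.type = 'D'
  AND bmf.physical_device_name NOT LIKE '{%}'    -- approximation: exclude GUID-named devices
ORDER BY bs.backup_start_date DESC;
[/codesyntax]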

 

 

This code returned only the one full backup that I knew and expected should be there from within the past few days.  So I tried that differential restore again.  I also tried a new differential just in case.  All of that was futile.  Then I looked back to my script and removed the following line

 

 

Tada – that revealed the problem.  Suddenly I had a new full backup appear for each day since the full backup that was taken through the regularly scheduled means.  Each new backup had the following nomenclature for the backup path.

 

 

And then it became apparent what was happening.  I had been excluding backups with paths following that type of naming convention due to CommVault in the past.  In this case I knew that CommVault was not a part of the picture.  I did know however that SAN snapshots were being taken on a nightly basis.

This was the aha moment – SAN snapshotting was causing a full backup to be registered in SQL Server.  And it makes perfect sense when looking at the backup history and sizes for each of the differentials that had been taken.  The differential backups were almost always the same size.  If you are running SAN Snapshots, you may want to reconsider your differential backup plan.

This script (just the revised version of the above) will help list the Full and Differential backups that have been taken.
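A sketch of that revised version (same placeholder database name, with the device-name exclusion removed and differentials included):

[codesyntax lang="tsql"]
SELECT bs.database_name,
       CASE bs.type WHEN 'D' THEN 'Full'
                    WHEN 'I' THEN 'Differential' END AS BackupType,
       bs.backup_start_date,
       bs.backup_size,
       bs.is_snapshot,                            -- SAN snapshot backups show up flagged here
       bmf.physical_device_name
FROM msdb.dbo.backupset bs
INNER JOIN msdb.dbo.backupmediafamily bmf
    ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = N'YourDatabase'          -- placeholder
  AND bs.type IN ('D', 'I')
ORDER BY bs.backup_start_date DESC;
[/codesyntax]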

 

 

Summit 2012

Categories: News, Professional, SSC, SSSOLV
Comments: No Comments
Published on: November 1, 2012

In case you have been living under a rock (I know it’s not under my rock), here is your notice that Summit 2012 is just a few days away at this point.

I will soon be boarding a jet bound for Seattle for a week full of all things SQL.  When I say all things SQL – there isn’t much exaggeration to it.

During my week at Summit, I will be busy from pre-dawn to post-dusk every day.  That is not an atypical experience for Summit – but the reality hit me firmly as I finalized my daily schedule for next week.

Since I put it together, I decided to share it so others may see all that may be available to them.

I arrive Sunday.  That evening after getting settled in, I plan to join the MCM study group (and maybe have the Sunday night NFL game on in the background).

Monday is a free day of learning hosted by Red Gate and their SQL in the City series. You can see the agenda here. After the event there is the after event social party.  That party overlaps with the Networking Dinner hosted by Andy Warren (twitter) and Steve Jones (twitter) at Gordon Biersch.  This dinner is a pre-registration event (as is SQL in the City).  You can register here (if they have slots available).

Tuesday is a big day.  Tuesday I will be taking the MCM Lab exam in the morning.  Upon completion I hope to decompress for a few hours prior to the evening events.  In the evening we have the Welcome reception/Quiz Bowl sponsored by SQL Sentry.  Immediately following that, there are two events at the same time.  The first is the Speaker/Volunteer party.  The other is another Red Gate event – The Exceptional DBA Awards and SQLServerCentral party. For those from the US, this is also Election Tuesday.  I hope for those from out of town you voted early or are sending in an absentee ballot.

Wednesday we jump into the sessions that I planned on attending (XE, SQLOS, and Perf Tuning on my docket).  But in addition to those sessions, I will be at the Apress booth (booth 342) from 12:30 – 1:00 with Wayne Sheffield (twitter), representing our book.  And before anything even gets going Wednesday, I will also be at the Summit Community Zone from 7AM to 8AM.  In the evening there is the Exhibitor Reception.  And for anybody into Karaoke – you can do either of the two Karaoke events at about 9PM that evening.  Don’t forget to wear your SQL Saturday shirt(s) on Wednesday.

Thursday I will be at the Community Zone again from 7AM to 8AM.  If you have a kilt – wear it Thursday.  I plan on attending the Query Tuning Mastery, Transaction Isolation Levels, and Recovering Lost Data sessions on Thursday. That brings us to the evening when we have the Community Appreciation Party that runs from 7PM to 10PM.

Friday we are on our last leg, wrapping things up and hoping not to be overloaded by this point.  The session I am most looking forward to is the last one of the day by Paul White (twitter).  I mean, who wouldn’t want to end Summit by attending a deep dive session into the Query Optimizer?  In fact, I know Wayne Sheffield is presenting during the same time slot, and he was hoping nobody would come to his session so he could attend Paul’s session instead.  Of course he said that in jest, because he wants to do his presentation.  But at the same time he really wants to get to Paul’s session.  At the conclusion of that session, my agenda is complete, leaving Friday night as my only open night.

I am looking forward to meeting you and chatting. If you see me, say hi. But please don’t try to interrupt my exam. 😉

SQLSat153 Recap

Comments: No Comments
Published on: October 24, 2012

You may not know that this past Saturday ( October 20, 2012) was SQLSaturday 153 in Salt Lake City.  If you knew about it, awesome.  If you attended – even better.  You probably don’t know that I was there presenting this year.  I was a huge slacker in getting the word out about presenting at SQLSat153.

I am hoping that this recap will help atone for that grievous mistake. 😉

Pat Wright (twitter) and company did a great job putting the event together, and it appeared to run without problem.  I did what I could to help where possible.  I showed up early to try and help with setup and also to make sure demos and equipment would work.

Notice that I was there to make sure there wouldn't be equipment issues on my end.  I had the first presentation of the day and I wanted it to run smoothly.  I'll also throw out there that it was my first time presenting that topic.  Well, I hope the feedback is as good as the turnout.

I had a packed house!  I had a couple of other presenters find me afterward because they had heard my session was packed.  A little razzing about stealing their attendees was to be had and it was welcomed.

The session that was packed (and the empty chair was a momentary absence so he could answer his phone) was on Extended Events in SQL 2012.  This was a fun session and the topic is really good.  Unfortunately we did not make it to the bonus material.  I think the bonus demos would have been very good to discuss.  I hope we can get to that material next time.

Despite not reaching the bonus material, I have included the material in the zip file that I uploaded to the event website.

I did present two sessions Saturday and was part of the open forum panel that consisted of Cindy Gross (twitter), Randy Knight (twitter), Ben Miller (twitter), and myself.  Now you may know that Cindy is an MCM.  You might also know that Randy and Ben have both passed the MCM Knowledge exam (Ben is taking the Lab as I type).  That made the panel full of SQL knowledge (we joked that it was equal to two full MCMs, giving Ben, Randy, and myself 1/3 each for having passed the Knowledge Exam).  This Q&A session went over quite well too.

If you didn’t make it out, I hope it was a really good reason.  I also hope that you can make it next time because the sessions were outstanding and the opportunity to learn is top notch.

Lions and Tigers and Bears…

Categories: Corner, News, Professional, SSC, SSSOLV
Comments: No Comments
Published on: October 15, 2012

Last Thursday I found myself on a journey of sorts.  There weren’t any lions or tigers or bears, but there were plenty of “Oh My”‘s.

It all began on a dark and gloomy night.  Well, I am sure it was dark and gloomy somewhere.  In reality it was before dusk and the sun was shining.  I was heading out of town for a quick trip.  My family was to drop me off at the airport so we could have dinner together before I left.

We ended up at a McDonalds.  We let the kids play in the play area while we chatted with some of the other parents in the area.  For all intents and purposes this is where the journey began.

Our timing was pretty good to arrive at dinner.  We had just missed the commotion.  A family had just been robbed at gunpoint.  A rare thing on this side of the river in the woods.

Finishing up with the excitement and dinner, we were back on our way to the airport.  When arriving at the airport, I found that my flight was to be delayed by about an hour.  Apparently the inbound flight from San Francisco was running late, causing our delay.  In turn, this resulted in me getting to my hotel after midnight.

At the hotel, after a few hours of rest, I was awake and prepping for the day.  The most nerve-racking thing was about to happen and I had no clue it was coming.  I had taken the elevator downstairs to get some breakfast.  After having eaten, and needing to leave within the next 20 minutes, I was back in the elevator on my way back up to the room.  Five of those minutes were consumed in that elevator as it decided to stop working between floors.  It was as if the elevator took a break or decided to reboot.  Needless to say, I took the stairs the next time, which was also the last time.

Now why does any of this matter?  Well, it was certainly an interesting situation as I was heading to the Prometric test center to take the MCM Knowledge exam.  The exam was intense enough without the added complexity of travel.  For now, the only option (besides paying extra for remote delivery) for people such as myself living in Utah is to travel out of Utah.  This exposes us to that extra complexity and challenges such as getting stuck in an elevator.  I was a bit anxious thanks to that experience, but I made it to the center in time for my exam.  Thankfully I was planning on being there extra early.

Moral of the story – plan ahead.  I considered flying in and out the same day to take the exam and then reconsidered.  I didn't want to take the risk of flight delays causing me to miss it.  Having that extra time made these lions and tigers and bears all the more bearable, and I was still able to take the exam.

Do you Dream?

Comments: 4 Comments
Published on: October 11, 2012

You ever have those nights consumed by all things SQL?

I do.  It happens more than I might like sometimes.  Usually it happens when there is a work problem that I am trying to solve.

I don’t mind it sometimes because it does provide some results at times.  Other times it is of no use.

Sometimes when multi-tasking like this, I wish I could recall some of the code.  Other times, I don't care to recall the code.  And yet other times, the code sits fresh in my head until I sit at a keyboard.  Sometimes, that last one is the only way I am going to get some sleep – find the laptop and then bang out some code.

Of late though, my SQL dreams have been very relaxing.  I have been poring through books and thinking about topics I might want to study more in depth to become a more well-rounded DBA.  Of course, these studies also have another goal in mind.  I am working toward the MCM.

So as I study and think of those extra tidbits I want to learn, I also dream of exotic solutions and implementations.  I have been sleeping better and dreaming more about SQL.  Which is kind of weird come to think about it.  But hey, I will enjoy these SQL dreams and hopefully it will be more than just dreams.  I will soon know if I get to move on to the next exam as I will be taking my Knowledge exam tomorrow.

 

My question for you is: Do you Dream of SQL?  If you do, what kinds of SQL Dreams do you have?

Oh and if you practice this at your desk, do you just tell your boss that you are multi-tasking?
