Cloud Nirvana

Comments: 1 Comment
Published on: November 12, 2013

[Image: T-SQL Tuesday logo]

T-SQL Tuesday is a recurring blog party started by Adam Machanic (Blog | @AdamMachanic). Each month a different blog hosts the party, and everyone that wants to can write a post about a specific subject.

This month the subject is “Cloud Atlas”. If you want to read the invite, please click the image below to go to the party-starter: Jorge Segarra (Blog | @SQLChicken).

For me, this will be a quick entry to participate.  I have a mixed bag of feelings about the Cloud and the buzz-wordiness it implies.

Disaster Struck

One of my biggest concerns with the cloud is the gloom and doom felt by a client after a bout of corruption struck their database.  The person on staff responsible for backups did a fantastic job of ensuring the database was backed up on a daily basis.  Sadly, the backups are not the doom and gloom part of this story.

This particular case turned sour after corruption hit the database.  The web application no longer connected to the database.  The data was corrupt.  The database was inaccessible.  The client was distraught and sweating bullets.  Neither the client nor the employee knew what to do, so they sought me out for help.

After an assessment, we determined that the database needed to be restored from backup.  All of the backups were done fastidiously – to the cloud.  The backups were only accessible through a specific application, and the process required copying the backup from the cloud down to the server via that application.  The process is pretty easy, right?

The recovery of the database took an entire week!  Copying the backup down from the cloud through the application was, to say the least, destructive to the business.  The company had little to no chance of surviving an outage of that length.

Nirvana

On the more positive side, there may be a silver lining to the cloud for many people.  I have done more than my share of successful Cloud migrations.  I have also seen many applications work very well from the Cloud.

One personal benefit of the Cloud is the ability to quickly spin up resources or even servers.  Much like a VM farm, I can request more resources for a server during specific workload times.  And I can also dial down the gauges a bit when the workload is less intense.

I really like the idea of spinning up a machine in a matter of a few clicks and minutes.  When CTP2 of SQL 2014 was released, this was fantastic.  I know several people took advantage of this ability and began playing with CTP2 almost immediately after the release was announced.

Is the cloud right for you?  I can’t say.  I have had success and failure related specifically to the cloud that might not have happened in a less “nimble” environment.  The decision to move to the Cloud is frequently made by somebody other than the technologist who must support both the technology and the decision.  If you are moving to the Cloud, you may succeed and enjoy it, or you may fail and hate it.  Your mileage will vary.

Scaring a Database Near You

Comments: No Comments
Published on: October 31, 2013

[Image: R2-D2 pumpkin carving]

Something I have a hard time bypassing is a good R2 unit.  I have R2 units in many different forms, including a standing R2 cake one year for my birthday.  So when I came across this R2 unit, I just had to share it.

That is a pumpkin carved into the likeness of R2-D2.  I think it is a mighty fine job, too.  It’s amazing how many good Star Wars related pumpkin carvings there are out there.  You probably wouldn’t have too difficult a time finding three or four hundred with a little google-fu.

Each year I try to have something for the Halloween holiday, such as this one or this one.  I failed to provide something in 2012, and this gets me back on the right track.

Despite how easy it is to find haunting Halloween effects related to SQL Server, I am amazed at how few people have even heard of “Halloween Protection,” which stems from the “Halloween Problem.”

I am not going to dive into the problem or the protection for it.  I think that has been covered plenty, and even quite masterfully, by Paul White (blog | twitter).  I recommend that you read his four-part series on the topic, starting here.

With all of the cosplay going on here in the States, I find some of the scarier things to be about stuff I have previously fixed, or about which I have written or tweeted, or all of the above.

Take for instance this article about the blatant disregard by some vendors and clients in regards to security.  I still can’t figure out a legitimate reason why the public role would ever need to be dbo for a database.

Or take the recent time I tweeted about a cursor that I fixed.  I took that scary cursor down from a 30+ hour run time to a mere 50 seconds.  Here is a segment of the execution plan (the plan is roughly 4 MB in size, to give a little scale), zoomed out to 1/5th.

[Image: zoomed-out segment of the optimized execution plan]

The query was originally much uglier than that.  Imagine that beast looping through on your server for 30 hours, and that is not even the entire thing.  It is little wonder that things started to drag on the server.

Another scary item I like is the effect of implicit conversions.  That is a topic that can be explored pretty easily with a little google-fu.  Here is a short demo on the implications of implicit conversions.

[codesyntax lang="tsql"]
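-- The original demo script was lost in a site migration; this is a minimal
-- sketch of what the text below describes. Table and column names (other
-- than SomeReal) are assumptions, and the timing gap will vary with schema,
-- indexes, and hardware.
SET NOCOUNT ON;

-- Three simple temp tables, each to receive 10,000 records
CREATE TABLE #T1 (SomeInt INT NOT NULL PRIMARY KEY);
CREATE TABLE #T2 (SomeInt INT NOT NULL PRIMARY KEY);
-- Pet peeve on display: column named SomeReal, datatype NVARCHAR
CREATE TABLE #T3 (SomeReal NVARCHAR(10) NOT NULL PRIMARY KEY);

DECLARE @i INT = 1;
WHILE @i <= 10000
BEGIN
	INSERT INTO #T1 (SomeInt)  VALUES (@i);
	INSERT INTO #T2 (SomeInt)  VALUES (@i);
	INSERT INTO #T3 (SomeReal) VALUES (@i); -- implicit INT-to-NVARCHAR convert
	SET @i += 1;
END;

SET STATISTICS TIME ON;

-- Query free of implicit conversions: INT joined to INT
SELECT COUNT(*)
FROM #T1 t1
INNER JOIN #T2 t2 ON t2.SomeInt = t1.SomeInt;

-- Query replete with an implicit conversion: NVARCHAR joined to INT
SELECT COUNT(*)
FROM #T1 t1
INNER JOIN #T3 t3 ON t3.SomeReal = t1.SomeInt;

SET STATISTICS TIME OFF;

DROP TABLE #T1, #T2, #T3;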

[/codesyntax]

In this demo I have created three temp tables.  Each is pretty simple in nature, and each receives 10,000 records.  The insert statement just inserts an integer into each field of each table through the while loop.  Notice that I intentionally named a column in #T3 SomeReal even though the datatype is NVARCHAR.  This underscores a pet peeve of mine that I have seen over and over again – naming a column in the table after a datatype that doesn’t even match.

When this query runs, I get the following timing results.

[Image: timing results]

The thing that stands out to me is the huge difference in time between the implicit-free query and the query replete with an implicit conversion.  The implicit conversion query was about 930 times slower than the query free of implicit conversions.  Granted, that query was against a cold cache, so let’s see what happens to an average of five runs each against a warm cache.

With a warm cache, I see an average of 51ms for the implicit-free query.  On the other hand, the implicit conversion query runs at an average of 84525ms.  That equates to about 1644 times slower.

Sure, this was a contrived example.  But keep in mind the table sizes, the datatypes, and the number of records in each table.  Had this been a more true-to-life example with larger tables and millions of records, we could be seeing a query that is far more devastating due to the implicit conversions.  Let’s just call it the grim reaper of your database. (Grim Reaper image from www.mysticalpassage.com/grim_reaper.html)

With these horrifying things to haunt your database, I leave you with this new hope as you battle the dark side and the grim reaper in your database.

[Image: Star Wars battle scene]

MCM on Life Support

Comments: No Comments
Published on: September 11, 2013

Good news, everybody.  October 1st is no longer the day that the MCM was prescribed to die.  The Microsoft Certified Master is now on life support until December 31st, 2013.  That means that on January 1st (for any interested in New Year’s resolutions), the Advanced Certifications at Microsoft will have been terminated, and any who possess the certification will instantly become myths, legends, and items of folklore.  Ok, so that last bit might be a bit of an exaggeration.

Here is a screen grab from the Advanced Certification Site here.  I added the nice red boxes and underline for emphasis.

[Image: screen grab showing the extended MCM exam dates]

While this bodes well for those that had committed to training rotations, or who had committed to hours of study and only had the Lab Exam remaining, the program is still being terminated.  Many are still unhappy.  Many have moved on through the grief cycle.  Polls have been taken, articles (here, here, and here) written and meetings held.  In the end, Microsoft sort of acknowledged that they alienated a huge faction of their supporting community.  Bumping the death date back to December 31st is a sign of that.

I am happy that a meeting was held after the community outcry.  It is far better than nothing at all.  I am also happy that the date has been bumped back.  I was not happy with the initial announcement and the manner in which it was done.  I am not yet satisfied that Microsoft will do anything to promote a Master level with their products.  In fact, as others have said (and I am sure many more have silently supported), I am uncertain I could advocate an advanced certification for Microsoft again.  I am uncertain that I could support an advanced certification for Microsoft from a 3rd party vendor.

If there is ever to be an advanced certification for Microsoft SQL Server, I will have to think long and hard about the value and worth of pursuing it.  All of that said, I value learning and I reiterate something that Paul Randal wrote on his survey results post:

Although the MCM has been removed as the focus point for learning goals, learning still goes on, and people will still aspire to make themselves better SQL Server professionals.

Don’t give up learning – it’s always worth it.

If you have passed your Knowledge Exam, I say go for the lab exam.  Test yourself and see if you can tame the beast, if for nothing else than to measure yourself and provide a bit of self-evaluation.  Don’t hesitate and leave yourself wondering if you could have accomplished that goal.

I know one person who has been pondering whether he should do it.  He had been planning on attempting the MCM exams next year.  Obviously the change from Microsoft has altered his plans and made him question the value of attempting.  My advice to him was to take the exams, especially now that the MCM is on life support.

[Image: MCM SQL Server logo]

It is far better to attempt the exams (for those still on the fence) than it is to live in regret and doubt.  Worst case, you fail the exam and gain a fair amount of introspection because of it.  Best case, you pass the exams, become an MCM for life, and still gain a fair amount of introspection because of it.

Rare Species, Extinction and the MCM

Comments: 8 Comments
Published on: September 2, 2013

[Image: MCM SQL Panda]

As of 11:04 PM MST (-6 UTC), the Microsoft Certified Master, along with the other certifications in the Advanced Certification series at Microsoft, joined the ranks of the Panda on the Rare and Endangered Species list.

The email is readily available on the web, and can be read here. I am not going to dive too deeply into the email other than to reiterate a comment that I made here about it.

“The communication on the extinction of the program really spewed a lack of respect and communication for the ‘vanguard.’”

I am not happy about the decision.  In part, that is why I have waited a few days before writing up an article on the whole shenanigan.  I was pretty ticked off about it in the first several hours after seeing the email.  The community as a whole also seems to share that same sentiment.

Today, I want to share some of my thoughts on the whole certification thing.  I want to share some of the community’s thoughts on the soon-to-be-extinct certification.  I also want to try to do all of this on an even keel.

First some back story. Let’s revive some of my thoughts and perspectives on certification and training from past articles.

MCJ (Microsoft Certified Journeyman)

When I first wrote about this, I called it a Stepping Stone Cert. That article can be found here. There is also a follow-up article about it here. This came about for two fundamental reasons: 1) the cost of the MCM, and 2) the lack of validity behind the MCITP/MCDBA/MCSE certifications.

As far as the SQL MCM track goes, the first fundamental reason for that blog series was corrected.  Microsoft opened up the SQL MCM to a broader audience by eliminating the 3-week rotation requirement and by lowering the cost (most of which was probably due to the 3-week rotation).  Suddenly the MCM became much more attainable for somebody like me.  I think that this barrier to entry is right-sized now.  I would probably still be happy with the cost if it were double the current cost (for somebody like me, the cost is travel plus exam fees).  Could Microsoft take it up another notch?  Probably.  I don’t think I would be as miffed by a decision to raise the bar another level as I was with the decision to try to force extinction on the MCM.

The second reason has not been addressed in any manner, in my opinion.  Since writing that series, I put myself to the challenge and took all four of the prerequisite exams for MCITP SQL Server 2008.  I took the exams without preparation and found them to still be excessively easy.  Even going out on the web today, I can find plenty of resources for those exams that look just like the exams I took.  To me, this validates my concern, as well as the concern of many who attained the MCM, that the MCSE/MCITP/MCDBA is nothing more than a paper certification that can be far too easily braindumped.  I will post some quotes along those very lines as well.

Bootcamp Certifications

I have also written about my disdain for many training facilities in the past.  One such article can be read here.  Part of the reason for that article was to try to make it to training by SQLskills.  They perform training that can be respected, unlike all of those MCSE farms out there that teach you how to pass the test and do nothing to teach you valid SQL skills.  Sadly, I did not make it to the training offered with that contest, and am still seeking my first opportunity to attend.  You can read my reasons for not having attended here.

To belabor this point: that was not the only run-in I have had with a paper-certified DBA.  On other occasions I have been interviewing and received stacks of resumes from these bootcamps.  I have received numerous resumes for a Senior DBA role with “truck driving” or “rope making” as the only prior experience.  While that doesn’t mean these people are not intelligent, they are in no way qualified to be a Senior DBA just yet.  I don’t care what the bootcamp promises these people; a bootcamp is not experience, and an MCSE/MCDBA/MCITP certification does not automatically equate to a Senior DBA.

However, if you had an MCM apply for the job, that should equate to a Senior DBA.  And no, being an MCM does not mean we know everything about SQL Server.  It does mean, however, that we are very good, seasoned, and can figure out quickly anything we don’t know.  And I will take a moment for a sidebar here as well.  Just because somebody is an MCM (or is presenting or training) does not mean you have an invitation to play stump the chump.  If you engage in behavior such as that, you are being pretty ignorant and crass.

With some of that background out of the way, we can now dive into some of the aftermath caused by this announcement.

Disdain for Paper Certs (MCITP/MCSE/MCDBA)

References for these citations are hyperlinks on the person’s name, or images of the citation in whole.

we definitely need something that distinguishes us from the horde of paper MCITPs/MCSEs, etc. – Luke Edson

Even though I’m not a MCM yet I sick of seeing paper MCP’s get ahead and the MCM gave me hope and a goal to differentiate myself as I’m sure it did to many others. - Michael Usov

As of last night, there are now 0 certifications that one can take that I will immediately respect. MCITP/MCTS/MCSA/MCSE tell me that either a person is interested in learning more, that they were required by their company to take the exams and did the absolute bare minimum memorisation or that they’re a cheat, which it is I have to find in the interview. - Gail Shaw (MCM, MVP)

I haven’t taken a single Microsoft certification. Now, since the only one I respect is gone, I don’t have to. - Grant Fritchey (MVP)

Without something like the MCSM/MCM to truly validate the knowledge & experience of those who take and pass the exam how can we differentiate ourselves from those that have just brain dumped or boot-camped their way to an MCSE? Without the MCSM/MCM program we will just return to the bad old days of the certifications not being worth having. - SQLServerMonkey (John Martin)

What this does is make the MCSE (eminently brain dumpable) the highest level of certification. There is nothing that anyone with real knowledge can do to differentiate themselves from someone that bought the cert out the back of a lorry. - Nic Cain (MCM)

Those are just a few of the sentiments.  Now let’s look at some of the comments from twitter on this affair.

[Images: a sampling of tweets about the announcement]

This is just a small sample of the activity and comments concerning this announcement.  Granted, it does not paint the entire picture, but the picture seems pretty uniformly opposed to this decision by Microsoft.  To put it as Mary Jo Foley said on ZDNet (emphasis added): “Microsoft’s surprise phase-out of its highest-level certification programs has angered a number of those who have trained or are in the midst of training to be ‘masters’ across a variety of the company’s products.”

It should have been understood that there would be an angry backlash when the “pinnacle” of Microsoft certification was abolished.

This is a sentiment that was echoed by Tim Sneath in his comments on the connect item when he called the MCM the vanguard.

“You are the vanguard of the community. You have the most advanced skills and have demonstrated it through a grueling and intensive program. The certification is a clear marker of experience, knowledge and practical skills.” - Tim Sneath

To put it another way, we have this from Simon Sharwood at The Register:

Microsoft has again enraged some of its most committed users, by “retiring” the Microsoft Certified Master (MCM), Microsoft Certified Solutions Master (MCSM), and Microsoft Certified Architect (MCA) certifications. 

It should be abundantly clear that this decision is driving a wedge between the “most committed users” and Microsoft – not just in the decision itself, but also in the way that it was executed.  Sending the news out at 10PM PST (-7 UTC) is being viewed as nothing more than a means to try to mute the criticism.  Instead, the criticism has boiled over, and many in the community are unhappy with Microsoft, Microsoft Learning, and this decision.

Furthering the backlash are not just the articles by the news outlets and the social media networks, but also several blogs that have been written on the topic. This news has not only angered some, but also disheartened them.  One such comment I received was the following:

I am changing tracks, after 14 yrs of prod dba work and with this, it is the end of the road for me

A DBA of 14 years is disheartened to the point of changing career paths and goals? Does Microsoft really want to push away its committed users in this fashion?

Cause of the Pause

Tim posted several possible reasons for the need to pause the MCM program in a post on the connect item.  There has been some feedback, some of it nastier than the rest.  Two of the big driving factors that seem relevant from his comments are: 1) traction and 2) cost.

It was also stated that there were months of deliberation on the topic.  I don’t think Tim is fooling anybody with that statement.  There is evidence that Microsoft continued to take money for rotations starting in October as late as last week.  At the very least, this seems to be a bit of back-pedaling and cover-up to try to take some heat off the decision.

As for the two semi-coherent reasons for canceling the advanced certifications, the only one that seems to hold water is cost.  I don’t buy into the barriers to entry (nor do others, per their responses on the connect item).  I don’t buy into the traction issue.

Tim stated that .08% of MCSEs go on to become MCM.  For Microsoft, that seems to be a bad thing.  For the rest of the world, it seems to be a good thing.  This is the peak, pinnacle, vanguard we are talking about here.  Can you have a peak that contains 10% of your certified user base?  That seems more like a saddle than a peak to me.

With .08% traction, that is a good thing in my mind.  I have seen plenty of potential clients demanding an MCM.  Sure, they may not have known what an MCM was last year, but we are in a changing landscape, and people now know what the certification represents.

If Microsoft wanted better traction, I think they should have done some of the things that Greg Low has pointed out in the connect item.

A first step in making the program more relevant would have been to make use of it within other Microsoft programs. For example, Microsoft Certified Trainers could have been required to have at least taken the knowledge exam. When I suggested that, it was met with howls of “but we’d lose most of the trainers”. While that’s partly true, what does that tell you about the existing trainers?

Instead of abandoning it, why not take quality seriously and see it applied throughout related programs. The MCT program is one but another would be the Gold Partner program. Is it really acceptable to have Gold Partners working (and promoted by Microsoft) on the largest projects, without a single MCM/MCSM on staff? Why not require real demonstrated ability in the Gold Partner program?

Speaking of the Gold Partner program, there was apparently some inconsistency there as well.  For instance, I know of cases where the Gold Partner status did not account for the MCMs on staff, and of other cases where the “partner” was informed that the MCM would count.  If it doesn’t count (for Gold Partners), how can one expect broader traction for the advanced certification?

The Community

If you are on twitter, I recommend you follow the hashtag #BringBackMCM or #SQLMCM.  I also recommend you read more of the opinions people have written on the topic.  Here is a good starter list:

Radi Atanassov – http://www.sharepoint.bg/radi/post/The-fall-of-the-Master.aspx

Wayne Sheffield – http://blog.waynesheffield.com/wayne/archive/2013/08/retiring-of-the-mcm-certifications/

Nic Cain – http://sirsql.net/blog/2013/8/30/so-long-mcm

Jon Gurgul – http://jongurgul.com/blog/master-certification-cancelled/

Robert Davis – http://www.sqlsoldier.com/wp/sqlserver/isthemastercertificationdeadorwillitbebornagain

Ed Leighton-Dick – http://www.edleightondick.com/2013/09/thoughts-about-microsofts-retirement-of-the-mcm/

Marc Anderson – http://sympmarc.com/2013/09/01/thoughts-on-the-discontinuation-of-the-microsoft-masters-program-by-microsoft-learning/

Tony Redmond – http://windowsitpro.com/blog/microsoft-learning-kills-mcm-and-mca-accreditations

Mala Mahadevan – http://diligentdba.wordpress.com/2013/08/31/finding-new-goals/

Is it just endangered or is it dead?

So far, it has been pretty much doom and gloom.  Some are crying out for a refund (some should, and some are just being pithy).  Microsoft gave us a pretty absurd announcement, and it was poorly executed.  Many of us would in no way be able to retain a client after an announcement executed in the same manner as the one from Microsoft Learning.

So far we have focused solely on the “supporter” side of the fence and how this announcement affects us.  Very little has been said concerning how this affects those at Microsoft Learning who received the news in similar fashion and who may be without a job in 30 days.

There is a possible light at the end of the tunnel.

“That’s why we’re taking a pause from offering this program, and looking to see if there’s a better way to create a pinnacle, WITHOUT losing the technical rigor. We have some plans already, but it’s a little too early to share them at this stage. Over the next couple of months, we’d like to talk to many of you to help us evaluate our certifications and build something that will endure and be sustainable for many years to come.” – Tim Sneath

I will remain cautiously optimistic about the program being revived.  I do not expect it to happen, but I will be happy if it comes back to life.  That comes with a warning though.  If the program is anything less than the current program, it will be met with disdain by the community, and I dare say by corporations globally.

Another huge fact is being lost in all of this fuss.  Every single person that has pursued the MCM/MCSM/MCA has engaged in education.  Anything that you have learned cannot be revoked by Microsoft.  The certification may be dead (Image snagged from twitter),

[Image: MCM tombstone]

but the knowledge and skills each MCM has gained are invaluable.  We can continue to serve our clients better, and maybe that will be in spite of Microsoft Learning.  Microsoft has said we will continue to be charter members.  That’s cool and all, but it probably won’t hold water with any client that checks the Microsoft Learning website and is unable to find MCM or MCSM anywhere.

I know the characteristics of many of the MCMs, and I am fairly certain there is one thing this announcement will not change.  Those who have the drive and passion for SQL Server and attained the MCM also have a drive and passion for continued education.  We will continue to dive deep into SQL Server.  We will continue to push ourselves.  We will strive to continue to provide top-notch service to our clients or employers.  That is just who we are, and the removal of this certification will not change that.

I do fear that the loss of this certification will perpetuate the big-brother psychosis in the DBMS world, though.  There will be a perpetual “Oracle has a certified Master” or “Oracle has this,” and Microsoft will be in Oracle’s shadow for a while to come because of it.  We don’t see Oracle abandoning its pinnacle certification.  We don’t see Cisco abandoning its pinnacle certification.  And I am certain those programs are no less costly, nor any more profitable, than Microsoft’s.

With that in mind, I wonder what is on the mind of this certification and Microsoft Learning as it approaches its deathbed. (Image snagged from twitter.)

[Image: the certification’s final days]

Is Microsoft Learning ready to latch tight to the MCSE, a certification that is poorly regarded throughout the community and corporations worldwide?

If relationships matter most, I would hope that Microsoft Learning does not abandon this relationship with the MCMs and its most staunch supporters/evangelists.

In concluding this long diatribe, I welcome all MCMs to show their support at Summit 2013.  We are planning on wearing the robes of the Jedi in tribute to the MCM.  The initial idea was Robert Davis’s, and we are trying to run with it now.

One thing remains certain for me: Microsoft cannot change my license plate, or the fact that I have earned my MCM!

[Image: MCM license plate]

The following two images are courtesy of Robert Davis.

[Images: MCM 4 Life]

T-SQL Tuesday #45: Follow the Yellow Brick Road

[Image: T-SQL Tuesday logo]

We have the distinct pleasure once again to have a party full of SQL enthusiasts.  The best part is that few if any of us are even in the same room or building for this party.

The party is known as TSQL Tuesday and this month is hosted by Mickey Stuewe (Blog | Twitter).  This month Mickey has come up with the notion that for some reason or another you may want to have something audited, tracked, traced or otherwise followed.  To quote Mickey from the invitation (which you can read here): “So, it is time to follow Dorothy and Toto down the yellow brick road and to share your experience with auditing data.”

Mickey did provide some viable options for auditing.  And I didn’t see any restrictions on what the auditing topic could be.  So I have a slightly different spin on the topic than the direction Mickey appeared to be headed with her examples and description in the invite.

That said, I will be discussing a topic along the vein of “finding out who did something stupid last week.”  The TSQL Tuesday topic once again works out well for me this month because I have a backlog of blog topics on this subject that I have been meaning to write.  So let’s get at least one of them knocked out of the way with this entry into the blog party of the month.

Behold the Great and Powerful Audit

Have you ever come into work and been blasted first thing with “everything is slow!”

Then you start to dig through your email reports, and you begin to wonder why 50% of your clustered indexes are jacked up to 99% fragmentation.  You know that sort of thing never happens suddenly in your environment because you have your finger on the pulse of all things DB related – or so you thought.

You check your index defrag log and see that it ran perfectly fine last night, and you even see the post-defrag fragmentation report.  You begin to really scratch your head.  You know that you have no process that does any bulk loads or ETL batches after the index defrag process.  Nothing is really making any sense.  Then a lightbulb goes off and you check your report on database sizes.  You see that the database is now several hundred GB smaller and there is no free space left in the data files – free space that you had intentionally left in the data files.

[Image: steam eruption]

Now you are steamed and about to blow your top like this geyser.  Who has done something so stupid as to mess around with your data files?  We senior DBAs are somewhat territorial with the databases we support, and we don’t like people making changes they haven’t discussed with us.  So it is perfectly understandable if you are a bit steamed by an event such as this.

The question now is: What are you going to do about it?  How are you going to find who jacked up your data files?

The answer is in the default trace.

[codesyntax lang="tsql"]
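-- The original query was lost in a site migration; this is a sketch of the
-- standard technique: read the default trace and filter to DBCC events
-- (the exact columns chosen here are assumptions).
DECLARE @path NVARCHAR(260);

-- Locate the current default trace file
SELECT @path = t.[path]
FROM sys.traces t
WHERE t.is_default = 1;

SELECT  tt.StartTime,
        tt.DatabaseName,
        tt.LoginName,
        tt.HostName,
        tt.ApplicationName,
        tt.TextData
FROM sys.fn_trace_gettable(@path, DEFAULT) tt
INNER JOIN sys.trace_events te
        ON tt.EventClass = te.trace_event_id
WHERE te.name = 'Audit DBCC Event'
  AND CAST(tt.TextData AS NVARCHAR(MAX)) LIKE '%SHRINK%'
ORDER BY tt.StartTime DESC;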

[/codesyntax]

The optimal solution may not be to rely on the default trace to capture an event that occurred in the past, but rather to proactively monitor for it through other means (an extended event, for example).  But in the event you were caught off-guard, you may be able to trap the information from the default trace.

When I run the above query, I get the following results (yes, I intentionally shrank my msdb database for this example).

[Image: query results showing who shrank the database]

Now, when you have a sysadmin, a junior DBA, or that manager with too many permissions doing things they shouldn’t be doing, you have some evidence to show them.  You can also go to the person that “did something really stupid” and offer them a chance to learn why it might have been a bad thing – but do it nicely.  You don’t want to be the one that looks like a jerk.

Lost that Single-User Connection?

You have changed your database to single_user mode to do a task.  As you go about your business, you lose track of which connection was the single_user connection.  You start closing connections and mistakenly close the session that was your single_user connection.  Now you are unable to start a new single_user session.  It would seem that somebody or something has taken your connection.

Today, I am going to discuss some things you may do to get around this problem.

The first thing that may come to mind when you encounter this is “Oh crap!”  Well, there is no need to get too terribly worried (unless you really hosed something up and are trying to fix it real quick before the boss notices).

The next thing you may think of is how to circumvent the single_user mode.  During that thought process, you may be thinking that single_user does not really mean single_user, so you might try something like starting a DAC session.  Well, let’s go through that and see what would happen in a DAC session if your single_user session is stolen.

I am going to skip the part of setting a database into single_user mode because we are presuming that the condition already exists.  To start a DAC session, I am going to point you to a previous article I did on the topic – here.
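For reference, putting a database into single_user mode generally looks something like the following (a minimal sketch, using the ClinicDB test database mentioned below).

[codesyntax lang="tsql"]
-- Put the test database into single_user mode, rolling back
-- any other open transactions immediately.
ALTER DATABASE ClinicDB
	SET SINGLE_USER
	WITH ROLLBACK IMMEDIATE;
[/codesyntax]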

To verify I am using a DAC session, I am going to issue the following query.  This will confirm I am in the right session and that the DAC is in use.

[codesyntax lang="tsql"]
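-- The original query was lost; this sketch confirms the current session
-- is using the DAC by checking which endpoint the session is attached to.
SELECT  ep.name AS EndPointName,
        es.session_id
FROM sys.dm_exec_sessions es
INNER JOIN sys.endpoints ep
        ON es.endpoint_id = ep.endpoint_id
WHERE es.session_id = @@SPID;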

[/codesyntax]

In my case, this results in an endpoint with the name “Dedicated Admin Connection” and a spid of 84.  Good, I am in the correct session for the rest of this test.  Next, I will issue a USE database statement.  I have created a test database called ClinicDB, so I will issue the following.

[codesyntax lang="tsql"]
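-- Switch context to the single_user test database
USE ClinicDB;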

[/codesyntax]

I get the following result.

[codesyntax lang="tsql"]
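-- The result (wording approximate, from memory of error 924):
-- Msg 924, Level 14, State 1, Line 1
-- Database 'ClinicDB' is already open and can only have one user at a time.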

[/codesyntax]

So, that blows that idea right out of the water.  It shouldn’t really have been a consideration in the first place because single_user really means just that – single_user.

Now What?

Well, what do you think we could do now to circumvent this little problem and get that single_user session back?

That requires a little investigative work.  It is time to find out who has taken the single_user session and politely ask them to give it up.  To make that task a little easier, we can modify the previous query to find out who has that single_user session (thus limiting how many people we have to ask).  I have modified the query to use sys.sysprocesses so I could limit the results to the ClinicDB.  This is a limitation of SQL 2008 R2 and older versions: getting the database reliably means using sysprocesses.  Despite the database_id being available in other related DMVs, it’s just not that easy.  One would think you could use sys.dm_exec_requests, but if a request is not active, an entry won’t exist for that session.  This problem is fixed in SQL 2012, since the sys.dm_exec_sessions DMV now has the database_id field.  Enough of that birdwalk and on to the query.

[codesyntax lang="tsql"]
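-- A sketch of the tweaked query: sys.sysprocesses lets us filter on the
-- database to see who holds the single_user connection (column choices
-- are assumptions; the original query was lost).
SELECT  sp.spid,
        sp.loginame,
        sp.hostname,
        sp.program_name,
        DB_NAME(sp.dbid) AS DatabaseName
FROM sys.sysprocesses sp
WHERE sp.dbid = DB_ID('ClinicDB');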

[/codesyntax]

I chose not to write an entirely new query, simply to demonstrate that it was possible with a very small tweak to what was already used.

Now that you know (in my case, I can see that I have a session with ID = 80 connected to that single_user database), I can walk over to the person (knowing his/her login id and computer name) and politely ask them to disconnect.

In the end, this is really an easy thing to resolve.  Sure it may take some people skills – but that doesn’t make the task too terribly difficult.  Next time this happens to you, just remember you can run a quick query to find who has sniped that single_user session.

MCM Road Less Traveled

Comments: 14 Comments
Published on: April 16, 2013

I began this post back in October of 2012, after learning that I had passed the SQL Server 2008 MCM Knowledge Exam.  I had set it aside in hopes of polishing it off after my first lab attempt at Summit 2012.  Notice I said first attempt?  I failed that first attempt.  This is a bit of a story about the journey through the exams and the results.

[Image: a long uphill road]

The first attempt was a little crushing.  A mix of emotions concerning the lab came over me, because I felt I knew the technology.  While the results were not desirable, the end effect was, and I am glad I did not succeed on that first attempt.

When I took the knowledge exam, I took my time and examined each question carefully.  I think that was a good method for that exam – I was trying to ensure I understood each question and didn’t miss anything that was stated.  Though I took it slowly, that is not an indictment of the exam’s difficulty.  I was well prepared and did not want to make any invalid assumptions.

Fast forward a little bit to the first attempt at the lab exam.  I had a plan going into the exam.  I felt confident in my skills.  I felt relaxed with the technology.  Then the exam started, and my plan went out the window.  I started making assumptions about the exam that I really should not have made.  I rushed a few scenarios that I should not have rushed, because I could have done them better.  In the end, I was my own worst enemy during the exam.  That is not a characteristic of an MCM.  Even when the pressure is on, doing what you know and being methodical is a wonderful asset.

Having failed the first time, I wondered if I was ready to be an MCM.  In retrospect, I was not.  Leading up to the exam, I did feel that I was ready.  I had read several articles, such as the following, to try to figure out if I was ready.  In the end, it’s more of a gut check and a leap of faith for some.

Gavin Payne http://gavinpayneuk.com/2012/05/01/knowing-when-youre-ready-to-attempt-to-become-an-mcm-of-sql-server/

Joe Sack http://blogs.msdn.com/b/joesack/archive/2010/11/21/how-do-you-know-if-you-re-ready-for-sql-mcm.aspx

I even perused some of the resource type of articles such as the following.

Nick Haslam http://nhaslam.com/blog/2011/11/19/mcmprep-88-970-sql-mcm-knowledge-exam-sqlmcm/

MCM Blog http://blogs.technet.com/b/themasterblog/archive/tags/sql+server+mcm/

Robert Davis Amazon MCM Reading List http://www.amazon.com/Official-Server-Certified-Master-Readiness/lm/R3RB13PQ7D8TKB

Brent Ozar MCM Articles http://www.brentozar.com/sql-mcm/

In the end, this post by Rob Farley sums it up nicely for me (http://sqlblog.com/blogs/rob_farley/archive/2012/12/23/the-mcm-lab-exam-two-days-later.aspx).  Rob references an article by Tom LaRock about some of what Tom did and didn’t do, as well as some theory on how to approach the exam at certain stages.  Of all those things, I think the best advice I read, and the best I could recommend, is to get a study buddy.

It’s not so much about having somebody to bounce ideas off of as it is about having somebody to try to teach.  I worked with Wayne Sheffield as we prepped for our retakes.  The biggest benefit was, as I said, that I could pick a topic and try to teach it to Wayne.  He did the same to me.  And then we could question each other with what-if questions about our selected topics.  Another benefit is having a heat-check, so to speak.  A study buddy can keep you from straying too far off into tangents, or from wasting too much time on a topic they feel you already understand exceptionally well.

If you are reading this far, you have probably figured out that I have taken the lab a second time.  I was debating whether to do it this soon due to life and family.  My wife continued to push me to do it (much as Nic Cain experienced here - http://sirsql.net/blog/2012/3/26/achievement-unlockedmcm-sql-server-2008.html).  Without Krista pushing me, I might have delayed even longer.  It was also helpful to have a co-worker and friend pushing me along (I’m a little competitive).  Thanks again, Wayne!  You can read his experiences here.

I received my notification email while driving home from Vegas last week.  Seeing that email pop up from boB Taylor gets your heart racing a little bit – especially after you have failed the lab previously.  I glanced at the email and only needed to read the first word – “CONGRATULATIONS!”

[Image: MCM SQL Server logo]

Oh yeah!  Vindication, satisfaction, elation, joy and general happiness set in quickly.  Memories of that old “Wide World of Sports” show from a long time ago spilled in with the words “Agony of Defeat, Thrill of Victory.”

I am glad I took the time to pursue this certification.  This adventure has helped me to learn and grow as a DBA.  I am also grateful to those that helped or pushed in some way or another – whether they knew it or not at the time.  Last but not least, I am thankful for my employer Ntirety.  They understood the importance of this for me and were very supportive in this endeavor.

As one final takeaway, I was recently shown this link with some interesting questions to help you decide if you are ready to take the exams.

Las Vegas April 2013 UG Meeting

Categories: Corner, News, Professional, SSC
Comments: No Comments
Published on: April 9, 2013

AlwaysOn – Finally, a Usable ‘Mirror’!

In the past, High Availability and Disaster Recovery (HADR) had many limitations. Clustering and Mirroring are great, but the mirror/secondary database is not very usable since it is not online. Finally, AlwaysOn solves this limitation by merging both multi-node Clustering and mirroring. AlwaysOn also allows the secondary database to remain ONLINE, so we can use it for reporting purposes without resorting to a Snapshot! Come see this lively session with extensive demos of setting up, configuring and testing AlwaysOn. We’ll also test automatic fail-over using a real web app to see how well this feature works.

 

Jim Murphy

[Image: Jim Murphy]

Jim Murphy has been using Microsoft SQL Server since version 6.0, back in the ’90s. He is the President/CEO of SQLWatchmen, LLC, a managed DBA service provider for smaller companies that do not need a full-time DBA. He runs the Austin SQL Server User Group, CACTUSS Central, and sits on the board of SQLPOSSE.com. He has been a professional DBA and developer for over 20 years and has run a consulting company full-time for over a decade.

LiveMeeting Information:

Attendee URL: https://www.livemeeting.com/cc/UserGroups/join?id=834ZNP&role=attend

Meeting ID: 834ZNP

T-SQL Tuesday #040: File and Filegroup Wisdom

Comments: 3 Comments
Published on: March 11, 2013

Backstory

Each month the SQL community comes together for an important party.  This is the blog party known as T-SQL Tuesday, the brainchild of Adam Machanic (Twitter).

The party is a very good collaboration among data professionals on a pre-determined topic.  This month, for T-SQL Tuesday #40, the topic is Files and Filegroups.  The host du mois is Jen McCown (Twitter).

This month, I had the luck of encountering something this past week that is right up this topic’s alley.  I love it when sysadmins help create learning opportunities for me (e.g., blog material).

Production Down

I was recently given the following concerning a client server issue:

The log file for database ‘xxx’ is full. Back up the transaction log for the database to free up some log space.

That was followed by a short description stating that the sysadmin had tried to expand the log file and that they also tried to run a full backup.  The output of the full backup was as follows.

BACKUP DATABASE [xxx] TO DISK = ’blah’ WITH NOFORMAT, NOINIT, NAME = N’blah’, SKIP, REWIND, NOUNLOAD, STATS = 10

failed with the following error:

“The backup of the file or filegroup ‘sysft_FTS’ is not permitted because it is not online. BACKUP can be performed by using the FILEGROUP or FILE clauses to restrict the selection to include only online data.”

Now this makes things more interesting.  The sysadmin at least tried to do a full backup and then handed off when it got too deep.

The client server is a SQL 2005 box.  Fulltext was enabled for the database on that box, and we have seen plenty of issues related to Fulltext in SQL 2005.  Somehow, I feel that none of them really pertained to this opportunity.  From all appearances, there was either a disk issue (no history in the logs, but the client said there was one) or somebody deleted the directory (there was a login at the time the issue started, and there was a service restart at that time).  In either case, the folder for the fulltext filegroup was no longer present.  But I am getting a little ahead of myself.

When querying the sys.database_files catalog view, I was able to confirm the directory path that should have been in place for the filegroup, and that the filegroup was indeed OFFLINE.  Query and results follow, with filepaths intentionally redacted.

[codesyntax lang="tsql"]
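-- A sketch of the catalog view query (the original was lost; the column
-- list is assumed from the results shown below).
SELECT  df.file_id,
        df.type_desc,
        df.name,
        df.state_desc,
        df.physical_name   -- confirms the directory path for the FT filegroup
FROM sys.database_files df;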

[/codesyntax]

file_id  type_desc  name            state_desc
1        ROWS       Somefile        ONLINE
2        LOG        Somefile_log    ONLINE
3        ROWS       Somefile_data   ONLINE
4        ROWS       Somefile_index  ONLINE
65537    FULLTEXT   sysft_FTS       OFFLINE

So, indeed, I do have a problem with the filegroup, and I need to get it back online in order to get this database fully usable and able to perform backups again.

Some suggestions out there would be to rebuild the fulltext catalog in order to bring it back online.  Well, the files were no longer present on the filesystem, so that didn’t work too well.

[codesyntax lang="tsql"]
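-- A sketch of the rebuild attempt (catalog name assumed from the
-- sysft_FTS filegroup file); the notes reflect the outcome described below.
ALTER FULLTEXT CATALOG FTS REBUILD;
-- Outcome: the rebuild failed because the catalog's folder
-- no longer existed on disk.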

[/codesyntax]

The notes in the code block represent the outcome.  And the output makes sense if you ask me.  But when trying to drop and recreate, I ran into some more fun.

[codesyntax lang="tsql"]
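-- A sketch of the drop/recreate attempt (catalog name assumed)
DROP FULLTEXT CATALOG FTS;
-- Outcome: the drop failed because fulltext indexes still
-- referenced the catalog.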

[/codesyntax]

Once again, that makes sense.  I had hoped that it would drop everything for me.  So, it was time to try dropping the indexes and recreating them.  For this, I took screenshots of each index in question, then tried to drop them.  Once again – another error.

property fulltextindexsize is not available

Despite that error, the indexes were gone and the catalog dropped.  Since I had disabled FT on the database, I needed to re-enable it in order to recreate the catalog and indexes (I had scripts for the catalog and screenshots for the indexes).

[codesyntax lang="tsql"]
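-- A sketch of re-enabling fulltext and recreating the catalog (catalog
-- name and path are assumptions; paths were redacted in this post)
EXEC sp_fulltext_database 'enable';

CREATE FULLTEXT CATALOG FTS
	IN PATH 'X:\SomePath\FTS';
-- The fulltext indexes were then recreated from the screenshots
-- taken before dropping them.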

[/codesyntax]

Now issuing a rebuild against that catalog works as expected.  Additionally, backups work as expected.  And to confirm that all is well, query sys.database_files once again to see that the filegroup is online.

file_id  type_desc  name            state_desc
1        ROWS       Somefile        ONLINE
2        LOG        Somefile_log    ONLINE
3        ROWS       Somefile_data   ONLINE
4        ROWS       Somefile_index  ONLINE
65537    FULLTEXT   sysft_FTS       ONLINE

Between the Lines

I breezed through what got this filegroup back online so database activity could resume.  One thing I skipped over was a step I took trying to recover without dropping and recreating.  Since the directory was not present, and there was a full backup from the same day that had the filegroup in a working state, I tried to recover the filegroup manually: restore the database, copy the folder structure into the appropriate filepath, and run an ALTER DATABASE statement.  Since it didn’t work, I am not going into deep detail on it.  The short of it is that once the structure disappeared off disk, there was some internal corruption related to it in the database.  That needed to be fixed, and in this case that meant dropping the indexes and catalog in order to recreate them.

Las Vegas February 2013 UG

Categories: Corner, News, Professional, SSC
Comments: No Comments
Published on: February 12, 2013

I know it is right in the middle of TSQL2SDAY when this post goes live.  If you don’t know what that is, you should go check out this month’s edition here.  The topic this month is good and should drive a lot of participation.

It also happens that this week we have a group meeting for the Las Vegas User Group. We normally meet on the second Thursday of each month.  It just so happens that this month that lands on the 14th of February.  If you can’t fathom why we changed our meeting date, maybe you should Google the date or something (just joking).

This month we have a first-time presenter.  I have been bugging her for months to get her to come out of her shell and present.  And now she is doing it, so be gentle and check out the meeting.

Presenting this month is Terrie White.  She will be presenting on Replication Technologies and High Availability solutions.

Here are some of the details for the meeting.


LiveMeeting Information:

Attendee URL: https://www.livemeeting.com/cc/UserGroups/join?id=WD4TSW&role=attend
Meeting ID: WD4TSW

 

VERY IMPORTANT

The meeting location has changed.  We will no longer be meeting at The Learning Center.  New meeting location is M Staff Solutions & Training / 2620 Regatta Drive Suite 102 Las Vegas, NV 89128.
We are also moving to meetup for meeting management. Please join us for free at http://www.meetup.com/SQL-Server-Society-of-Las-Vegas/
