T-SQL Tuesday #57 – SQL Family and Community

Comments: 1 Comment
Published on: August 12, 2014

Look at that, it is once again that time of the month that has come to be known as TSQL Tuesday.  TSQL Tuesday is a recurring blog party that occurs (generally) on the second Tuesday of the month.  This event was the brainchild of Adam Machanic (Blog | Twitter).

Anybody who desires to participate in this blog party is welcome to join.  Coincidentally, that open invitation is at the base of this month's topic – Family and Community.  The invitation for this month, issued by Jeffrey Verheul (blog | twitter), said the following.

This month I would like to give everyone the opportunity to write about SQL Family. The first time I heard of SQL Family, was on Twitter where someone mentioned this. At first I didn’t know what to think about this. I wasn’t really active in the community, and I thought it was a little weird. They were just people you meet on the internet, and might meet in person at a conference some day. But I couldn’t be more wrong about that!

Once you start visiting events, forums, or any other involvement with the community, you’ll see I was totally wrong. I want to hear those stories. How do you feel about SQL Family? Did they help you, or did you help someone in the SQL Family? I would love to hear the stories of support, how it helped you grow and evolve, or how you would explain SQL Family to your friends and family (which I find hard). Just write about whatever topic you want, as long as it’s related to SQL Family or community.

What is it?

We have all likely seen SQL Family thrown about here and there.  But what exactly is this notion we hear about so often?

I think we have a good idea about what family might be.  I think we might even have a good idea of what a friend is.  Lastly, I might propose that we know what a community is.  When we talk of this thing called SQL Family, I like to think that it is a combination of family, friends and community.


These are people who can come together and talk about things that span far beyond SQL Server.  We may only see each other at events every now and then.  Those events can be anything from a User Group meeting to a large conference or even a road race (5k, half marathon, marathon).

These are the people that are willing to help where help is needed or wanted.  That help can be anything ranging from well wishes and prayers, to teaching about SQL Server, to lending a vehicle, or anything along that spectrum.

I have seen this community go out of their way to help provide a lift to a hotel or to the airport.  These people will help with lodging in various circumstances when/if they can.  These are the people that have been known to make visits to hospitals to give well wishes for other people in the community.

Isn't that what friends and family really boil down to?  People who can talk to each other on an array of topics?  People who go out of their way to help?  Think about it for a minute or three.

Murder in Raleigh

I am about to set sail on a new venture with my next official whistle stop.  This year has been plenty full of whistle stops and I plan on continuing.  You can read (in full) about previous whistle stops and why they are called whistle stops here.

Suffice it to say at this point that it all started with a comment about a sailing train a few months back.


Time to sink or sail, so to speak.  SQL Saturday 320 in Raleigh will mark the next attempt at what I hope to be a repeat performance – many times.  I will be tag-teaming with Wayne Sheffield in this all-day workshop event.  The session is one of two all-day sessions for the event in Raleigh, NC.

If you are a DBA or a database developer, this session is for you.  If you are managing a database and are experiencing performance issues, this session is a must.  We will chat with attendees about a horde of performance killers and other critical issues we have seen in our years working with SQL Server.  In short, some of these issues are pure murder on your database, DBA, developer and team in general.  We will work through many of these things and show some methods to achieve a higher state of database Zen.

Description

Join Microsoft Certified Masters, Wayne Sheffield and Jason Brimhall, as they examine numerous crazy implementations they have seen over the years, and how these implementations can be murder on SQL Server.  No topic is off limits as they cover the effects of these crazy implementations from performance to security, and how the “Default Blame Acceptors” (DBAs) can use alternatives to keep the developers, DBAs, bosses and even the end-users happy.

Presented by:

Wayne Sheffield, a Microsoft Certified Master in SQL Server, started working with xBase databases in the late 80s. With over 20 years in IT, he has worked with SQL Server (since 6.5 in the late 90s) in various dev/admin roles, with an emphasis on performance tuning. He is the author of several articles at www.sqlservercentral.com, a co-author of SQL Server 2012 T-SQL Recipes, and enjoys sharing his knowledge by presenting at SQL PASS events and blogging at http://blog.waynesheffield.com/wayne



Jason Brimhall has 10+ years' experience and has worked with SQL Server from 6.5 through SQL 2012. He has experience in performance tuning, high-transaction environments, and large environments.  Jason also has 18 years of experience in IT working with the hardware, OS, network and even the plunger (ask him sometime about that). He is currently a Consultant and a Microsoft Certified Master (MCM). Jason is the VP of the Las Vegas User Group (SSSOLV).


Course Objectives

  1. Recognize practices that are performance pitfalls
  2. Learn how to remedy the performance pitfalls
  3. Recognize practices that are security pitfalls
  4. Learn how to remedy the security pitfalls
  5. Demos, demos, demos – scripts to demonstrate pitfalls and their remedies will be provided
  6. Have fun and discuss
  7. We might blow up a database


There will be a nice mix of real-world examples and some painfully contrived examples. All will have a good and useful point.

If you will be in the area and you are looking for high-quality content with a good mix of enjoyment, come and join us.  You can find registration information and event details at the Raleigh SQL Saturday site – here.  There are only 25 seats available for this murder mystery theater.  Reserve yours now.

The cost for the class is $110 (plus fees) up through the day of the event.  When you register, be sure to tell your coworkers and friends.

Wait, there’s more…

Not only will I be in Raleigh for this workshop, I hope to also be presenting as a part of the SQLSaturday event on Sep 6, 2014 (the day after the workshop, which is Sep 5, 2014).  I hope to update with the selected session(s) when that information becomes available.

You can see more details about the topics lined up for this event – here.

Shameless plug time

I present regularly at SQL Saturdays.  Wayne also presents regularly at SQL Saturdays.  If you are organizing an event and would like to fill some workshop sessions, please contact Wayne, me, or both of us about this session.

Top 10 Recommended Books…

Comments: No Comments
Published on: July 29, 2014

So the title says it all, right?  Well, only really partially.

Recently an article was published listing the top 10 most recommended books for SQL Server.  That’s the part the title doesn’t say.  It is really important to understand that we are talking about the top 10 recommended books for SQL Server.

The beauty of the top 10 list is that I have a book on that list.  It caught me by surprise.  That is very cool.

If you are interested in finding a book, I naturally recommend that you check out my book.  But just as importantly, have a look at the list.  This was a list published independently by SQL Magazine.  On the list you will find books by people like Kalen Delaney, Itzik Ben-Gan, and Grant Fritchey.


Check out the original list, here!

Murder In Denver

Comments: 1 Comment
Published on: July 14, 2014

I am about to set sail on a new venture with my next official whistle stop.  This year has been plenty full of whistle stops and I plan on continuing.  You can read (in full) about previous whistle stops and why they are called whistle stops here.

Suffice it to say at this point that it all started with a comment about a sailing train a few months back.


Time to sink or sail, so to speak.  SQL Saturday 331 in Denver will mark the next attempt at what I hope to be a repeat performance – many times.  I will be tag-teaming with Wayne Sheffield in this all-day pre-con / workshop event.  The session is one of three all-day sessions for the event in Denver, CO.

If you are a DBA or a database developer, this session is for you.  If you are managing a database and are experiencing performance issues, this session is a must.  We will chat with attendees about a horde of performance killers and other critical issues we have seen in our years working with SQL Server.  In short, some of these issues are pure murder on your database, DBA, developer and team in general.  We will work through many of these things and show some methods to achieve a higher state of database Zen.

Description

Join Microsoft Certified Masters, Wayne Sheffield and Jason Brimhall, as they examine numerous crazy implementations they have seen over the years, and how these implementations can be murder on SQL Server.  No topic is off limits as they cover the effects of these crazy implementations from performance to security, and how the “Default Blame Acceptors” (DBAs) can use alternatives to keep the developers, DBAs, bosses and even the end-users happy.

Presented by:

Wayne Sheffield, a Microsoft Certified Master in SQL Server, started working with xBase databases in the late 80s. With over 20 years in IT, he has worked with SQL Server (since 6.5 in the late 90s) in various dev/admin roles, with an emphasis on performance tuning. He is the author of several articles at www.sqlservercentral.com, a co-author of SQL Server 2012 T-SQL Recipes, and enjoys sharing his knowledge by presenting at SQL PASS events and blogging at http://blog.waynesheffield.com/wayne



Jason Brimhall has 10+ years' experience and has worked with SQL Server from 6.5 through SQL 2012. He has experience in performance tuning, high-transaction environments, and large environments.  Jason also has 18 years of experience in IT working with the hardware, OS, network and even the plunger (ask him sometime about that). He is currently a Consultant and a Microsoft Certified Master (MCM). Jason is the VP of the Las Vegas User Group (SSSOLV).


Course Objectives

  1. Recognize practices that are performance pitfalls
  2. Learn how to remedy the performance pitfalls
  3. Recognize practices that are security pitfalls
  4. Learn how to remedy the security pitfalls
  5. Demos, demos, demos – scripts to demonstrate pitfalls and their remedies will be provided
  6. Have fun and discuss
  7. We might blow up a database


There will be a nice mix of real-world examples and some painfully contrived examples. All will have a good and useful point.

If you will be in the area and you are looking for high-quality content with a good mix of enjoyment, come and join us.  You can find registration information and event details at the Denver SQL site – here.  There are only 30 seats available for this murder mystery theater.  Reserve yours now.

The cost for the class is $125 up through the day of the event.  When you register, be sure to choose Wayne’s class.

Wait, there’s more…

Not only will I be in Denver for the precon, I hope to also be presenting as a part of the SQLSaturday event on Sep 20, 2014 (the day after the precon, which is Sep 19, 2014).  I hope to update with the selected session(s) when that information becomes available.

You can see more details about the topics lined up for this event – here.

Shameless plug time

I present regularly at SQL Saturdays.  Wayne also presents regularly at SQL Saturdays.  If you are organizing an event and would like to fill some pre-con sessions, please contact Wayne, me, or both of us about this session.

Is your Team Willing to Take Control?


The calendar tells us that once again we have reached the second Tuesday of the month.  In the SQL Community, this means a little party, as many of you may already know.  This is the TSQL Tuesday Party.

This month represents the 56th installment of this party.  The institution was started by Adam Machanic (b|t) and is hosted by Dev Nambi (b|t) this month.

The topic chosen for the month is all about the art of being able to assume.

In many circles, to assume something carries a negative connotation.  From time to time it is less drastic, when you might have a bit of evidence to support the assumption.  In that case, it would be closer to a presumption.  I will not be discussing either of those connotations.

What is this Art?

Before getting into this art that was mentioned, I want to share a little background story.

Let’s try to paint a picture of a common theme I have seen in environment after environment.  There are eight or nine different teams.  Among these teams you will find multiple teams to support different data environments.  These data environments could include a warehouse team, an Oracle team, and a SQL team.

As a member of the SQL team, you have the back-end databases that support the most critical application for your employer/client.  As a member of the SQL team, one of your responsibilities is to ingest data from the warehouse or from the Oracle environment.

Since this is a well-oiled machine, you have standards defined for the ingestion, the source data, and the destination.  Right here we could throw out a presumption (it is well founded) that the standards will be followed.

Another element to consider is the directive from management that the data being ingested is not to be altered by the SQL team to make the data conform to standards.  That responsibility lies squarely on the shoulder of the team providing the data.  Should bad data be provided, it should be sent back to the team providing it.

Following this mandate, you find that bad data is sent to the SQL team on a regular basis and you report it back to have the data, process, or both fixed.  The next time the data comes it appears clean.  Problem solved, right?  Then it happens again, and again, and yet again.

Now it is up to you.  Do you continue to just report that the data could not be imported yet again due to bad data?  Or do you now assume the responsibility and change your ingestion process to handle the most common data mistakes that you have seen?

I am in favor of assuming the responsibility.  Take the opportunity to make the ingestion process more robust.  Take the opportunity to add better error handling.  Take the opportunity to continue reporting back that there was bad data.  All of these things can be done in most cases to make the process more seamless and to have it perform better.

By assuming the responsibility to make the process more robust and to add better reporting/logging to your process, you can only help the other teams to make their processes better too.
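As a rough illustration of that idea, here is a minimal T-SQL sketch (all table and column names are hypothetical, not from any environment described above): it diverts rows that fail a basic validation into a reject table so they can still be reported back, and loads only the clean rows.

-- Hypothetical staging, reject, and target tables; the pattern is the point:
-- log the bad rows for reporting, load the good rows without failing the run.
INSERT INTO dbo.IngestRejects (SourceRowID, RejectReason, RejectedAtUtc)
SELECT s.SourceRowID,
       N'Amount missing or non-numeric',
       SYSUTCDATETIME()
FROM staging.InboundData AS s
WHERE TRY_CONVERT(decimal(18, 2), s.Amount) IS NULL;

INSERT INTO dbo.Target (SourceRowID, Amount)
SELECT s.SourceRowID,
       CONVERT(decimal(18, 2), s.Amount)
FROM staging.InboundData AS s
WHERE TRY_CONVERT(decimal(18, 2), s.Amount) IS NOT NULL;

Because TRY_CONVERT (SQL Server 2012 and later) returns NULL for both missing and malformed values, the two statements cleanly partition the staged rows between the reject table and the target.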

While many may condemn assumptions, I say proceed with your assumptions.  Assume more responsibility.  Assume better processes by making them better yourself.  If it means rocking the boat, go ahead – these are good assumptions.

If you don't, you are applying the wrong form of assumption.  By not assuming the responsibility, you are assuming that somebody else will, or that the process is good enough.  That is bad in the long run.  That is the real burning "elephant in the room".


From here, it is up to you.  How are you going to assume in your environment?

T-SQL Tuesday #54 – Interviews and Hiring

Comments: 1 Comment
Published on: May 13, 2014


This month’s T-SQL Tuesday is hosted by Boris Hristov (blog|twitter) and his chosen topic is “Interviews and Hiring” – specifically interviewing and hiring of SQL Server Professionals.


This is a pretty interesting topic from a few different angles.  Boris proposed a few ideas, such as the following.


  1. The story of how you got hired for your latest position?
  2. The most interesting interview you ever had?
  3. How do you think an interview should be handled? What should it include?
  4. Any “algorithms” of how to find the perfect candidate?
  5. If you are the one that leads the technical interview – what do you focus on?
  6. What are the most important questions to ask for the various SQL Server positions out there?

Any one of these ideas would be good fodder for a blog article.  A combination of these topics might prove more interesting.  I think I will try something a little different.  I want to broach the topic of the use and abuse of interviews.


There are two interviews that come to mind that might be good examples.  The first is the infinite interview.

In the infinite interview, the candidate comes in for a full day of interviews (a surprise to the candidate).  If you were lucky you might have been informed in advance that the interview would be an all-day ordeal.

You arrive on-site and are shuttled from one interviewer to the next and so on throughout the day.  Most of these people will have absolutely nothing to do with your work queue or your job duties.  Most won’t be able to spell SQL other than maybe having a book that somebody might have given them.

In one such case, I had the opportunity to be grilled all day long.  The peak of the interview(s) occurred when their dev team sat down in an office, gave me chalk and an eraser, and required me to redesign the database that they had taken 6+ months to design and were still fixing bugs in.  Lots of memorization-based questions centered on developer (not database) terminology.

In short, this pretty much felt like a free consultation session for them.  Once finished, I got to show myself out the door; not by choice, but because they were too busy for it.  And in the end, not a word from the company.

The second kind of interview comes in the form of stump the chump.

This is another fun type of interview.  It can come in many forms.  Sometimes it can be in the form of free consultation.  Sometimes the interviewer just gets his rocks off trying to prove he is smarter or that you are not as qualified as you say you are.

In the type where it comes as free consultation, the interviewer has usually been trying to resolve a production issue for quite some time and just can't figure it out.  They will present a partial scenario to you and see if you can figure it out on limited info.  If you can't, they might come back with "We already tried that," or they may provide more info.  Again, this is all in an effort to resolve a problem that they couldn't.  Often it is to save face with the boss by showing that even an expert couldn't do it.

In the alternate style, the interviewer knows from the start that you may be overqualified but really just wants to prove they are as smart or smarter.  Often it just proves that they have some really erroneous understandings about SQL Server.  In one such interview, the person seemed to have a strictly Oracle background and wanted to get into the internals of SQL Server.  He wanted to get into index trees and tried to go down the path of I/O statistics for queries based on a bunch of unknowns.

There is really only one thing to do in these types of interviews.  Once you recognize what is going on (be it stump the chump or the never-ending interview), politely excuse yourself and look for a position somewhere else.

T-SQL Tuesday #051: Bets and Results

Comments: 2 Comments
Published on: February 18, 2014


The line for this month's TSQL Tuesday required that wagers be made concerning risks and bets that were either made or not made.

At close, we saw 17 people step up and place remarkable markers.  Today, we will recap the game and let you know who the overall winner from this week of game play in Vegas just happened to be.


This is about some bets, so we needed to understand some of the hands that might have won, right?

Let’s see the hands dealt to each of our players this past week.

Andy Galbraith (b|t) shared a full house of risk this month when talking about backups.  Do you really have a backup if you haven't tested it?

“without regular test restores, your backups do not provide much of a guarantee of recoverability.  (Even successful test restores don’t 100% guarantee recoverability, but it’s much closer to 100%).”


Boris Hristov (b|t) thought he was feeling lucky.  He couldn't imagine things getting worse.  He even kept reminding himself that it couldn't get worse.  He was dealt a hand and it was pretty good – and then everything just flushed down the drain.

A disaster with replication and with the storage system – ouch!


Chris Yates (b|t) wanted to push his hand a little further than Andy this week.  Chris went all-in on his backups.  At least he went all-in early in his career.

The gamble you ask?  Chris didn’t test the backups until after he learned an important lesson.

“I’ve always been taught to work hard and hone your skill set; for me backups fall right into that line of thinking. Always keep improving, learn from your mistakes.”


Doug Purnell (b|t) shares another risky move.  In this hand, Doug thought he could parlay maintenance plans into an enterprise-level backup solution.

What Doug learned is that maintenance plans don’t offer a checksum for your backups.  After learning that, he decided to stay and get things straight.


Jason Brimhall (b|t) took a different approach.  I took the approach of how these career gambles may or may not impact home, family, health, and career in general.

There is a life balance to be sought and gained.  It shouldn’t be all about work all the time.  And if work is causing health problems, then it is time for a change.

It’s important to have good health and enjoy life away from work.


Jeffrey Verheul (b|t) had multiple hands that many of us have probably seen.  I'd bet we would even be able to easily relate.

In the end, what stuck with me was how more than once we saw Jeffrey up the ante with a story of somebody who was not playing with a full deck.  If you don’t have a full deck, sometimes the best hand is not a very good one overall.


Joey D’Antoni (b|t) had a nightmare experience that he shared.  We have all seen too many employers like what he described.

The short of it is summed up really well by Joey.

“The moral of this story, is to think about your life ahead of your firms. The job market is great for data pros—if you are unhappy, leave.”


K. Brian Kelley (b|t) brought us the first four of a kind.  Not only did he risk life and limb with SQL 7, but he tried to do it over a WAN link that was out of his control.

When he bets, he bets BIG!  DTS failures, WAN failures, SQL 7, SQL 2000, low bandwidth, and somebody playing with the knobs and shutting down the WAN links while laughing devilishly at the frustration they were causing.


Kenneth Fisher (b) thought he would try to one-up Jeffrey by having employers that would not play with a full deck either.

From one POS time tracking system to another POS time tracking system to yet another.  Apparently, time tracking was doomed to failure and isn’t really that important.

That seems to be a lot of hefty wagers somebody is willing to lay down.


Matt Velic (b|t) brought his A-game.  He was in a take-no-prisoners kind of mood.

Matt decided he was going to reel you in, divert your attention, and then lay down the wood hard.  Don't try to get anything past Matt – especially if it reeks of shifty and illegal.

The way he parlayed his wagers this month was a riot.


Mickey Stuewe (b|t) was the only person willing to double down and even try to place a bet on snake eyes.  With the two-pronged attack at doubles, she was able to come up with two pairs.

To compound her doubles kind of wagers, she was laying down markers on functions.  Check out her casino wizardry with her display of code and execution plans.


Rob Farley (b|t) was a victim of his own early success.  He had a lucky run and then it seemed to peter out a bit.  In the end he was able to manage an Azure high hand.

Rob reminds us of some very important things with his post.  You can get lucky every now and again and be successful without a whole lot of foresight.  Be careful and try to plan and test for the what-if moment.


Robert Pearl (b|t), aka Bobby Tables, rolled the dice in this card game.  He was hoping for a pair of kings with his pair of clusters and the planned-but-unplanned upgrade.

There is nothing like a last minute decision to upgrade an “active-active” cluster.  In the end Bobby Tables had an Ace up his sleeve and was able to pull it out for this sweet pair.


Russ Thomas (b|t) asks: ever have the business buy some software and then thrust it on IT to have it installed last minute?

That is almost what happened in this story that had some interesting yet eventual results.

Russ weaves the story very well, but don't take your eye off the game at hand!


Sebastian Meine (b|t) brought needles to the table.  That is wicked crazy and leaves quite the impression.

Maybe he thought he was going to inject some cards into the game to improve his hand.  I was almost certain he had nothing going, but magically he was able to produce some favorable data.

Oh, that was the point of his post!  Have a weakness? It will be found, injected and exploited.


Steve Jones (b|t) had a crazy house going.  Imagine 2000 or so people all trying to help you make your bets and play your hand.  That is a FULL house.

Of course, his full house was more to deal with a misunderstood risk with the application and causing performance problems in the database.

In the end, they fixed it and it started working better.  A little testing would have gone a long way on this one!


Wayne Sheffield (b|t), in perhaps the most disappointing and surprising turn of events, ended up with a hand that could have won, but he folded.

Well, Wayne didn’t fold but there were some bets that resulted in people folding and maybe worse in the story that Wayne shares.  This can happen when you are betting on something you know nothing about and really should get somebody to help make the correct bets for you.



And to recap, the overall winner was…

the HOUSE.  With a winning hand of a royal flush.

Thanks to all of the SQLFamily for participating this month.  There were some really great experiences shared.  The posts were great and it was a lot of fun.  I hope you got as much enjoyment out of the topic and articles this month as I did.

Risking Health, Life and Family

Comments: 2 Comments
Published on: February 11, 2014

Since announcing the topic last week for T-SQL Tuesday, I have thought about many different possibilities for my post.  All of them would have been really good examples.  The problem has not been the quality but, in the end, just settling on my wager for this hand.

You see, this month's T-SQL Tuesday has the theme of risks: betting on a technology, solution or person, or flatly having had an opportunity and not taken it (that's a bet too, in a sense).  Sometimes we have to play it safe, and sometimes we have to take some degree of risk.

If you are interested, the invite for T-SQL Tuesday is here and the deadline for submission is not until Midnight GMT on 12 February.

It’s a Crapshoot

When all the dice finally settled, I decided it would be best for me to talk about some recent experiences in this Past Post.

First a little dribble with the back story.  Just don’t lose your focus on the price with this PK*.  Readers, please don’t Press and be patient during this monologue.

Over the past year I have been pushing hard with work and SQL.  I was working for a firm as a part of their remote DBA services offering.  As time progressed and I became more and more tenured with the firm, I found that I was working harder and harder.  Not that the work was hard, but that there was a lot of it.

Stress rose higher and higher (I must have been oblivious to it).  At one point I started getting frequent migraines.  I went to the doctor to try and figure things out.  I visited the chiropractor to try and figure things out.  The chiropractor proved to be useful and had some profound advice.  He asked me how many hours I would sit in front of the computer on a daily basis (since that was my job).  My reply to him shocked him pretty good.  I was putting in regular 20 hour days.

Having weekly chiropractor sessions helped somewhat with the migraines but it was not nearly enough.  I figured I would just have to deal with it since we couldn’t figure out what the root cause was (yeah we were trying to perf tune this DBA).

In addition to the chiropractor and traditional medicine to fight migraines, I also tried some homeopathic remedies.  Again, similar results: it seemed to help but was neither an overall nor a consistent solution.

Later in the year I found something that I hoped would help with the migraines too.  I started using Gunnars.  Sitting in front of a computer for 20 hours on most days, it made sense there might be some eye strain.  Wearing the Gunnars, I immediately felt less eye strain.  That was awesome.  Too bad it did not reduce the migraines.

After more than a year of having regular migraines, I found that the migraines started occurring more regularly (yes there was a baseline).  Near the end of 2013, I found that there was a period that I had eight straight migraine days.  These migraines typically lasted the duration of the day and there wasn’t much I could do outside of just dealing with it and making sure work got done.

Notice the risk?  What are all of the risks that might be involved at this point?  Yes, I was risking my health, family and work.

Russian Roulette

Near the end of the year 2013, I made a very risky decision.  I decided to part ways with the firm and pursue a consulting career.  This was as scary as could possibly be.  I was choosing to leave a "Safe" job knowing that I had a job and secured income – so long as the company did well.

Not only was I choosing to gamble with the job change and risking whether or not I would have work flowing in to keep me busy, I was also risking the well-being of my family.  With a family, there is the added risk of ensuring you provide for them.  This was a huge gamble for me.  Not to mention the concern with the migraines and whether I would be able to work this day or that based on the frequency and history of these things.

In this case, the bet on Green came up GREEN!  Over two months into this decision I have yet to have a migraine.  For my health this was the right decision.  I have also been lucky enough to be able to get myself into the right consulting opportunity at the right time with the right people.  Because of that, we have been able to keep me busy the whole time.

With all of that said, thanks to Randy Knight (@randy_knight) for bringing me in as a Principal Consultant at SQL Solutions Group.  With the change to consulting, Randy has helped to keep my hours down to less than 20 hours a day.


The thing about those 20-hour days is that there were several people trying to get me to back off.  They'd say things like "leave it for tomorrow" or "the work will still be there."  That may be true, but the firm's clients had certain expectations.  Learning when to back off and when to keep the foot on the gas pedal is something everybody needs to learn.  For me, I felt I had to do it because it was promised to the client.  Now, as a consultant, I feel I can better control when those deliverables are due.  Thanks to Wayne (@DBAWayne) for continuing to point this out as a symptom of "burnout."

In the end, it took making a risky change to avoid the burnout and get my health back under control.

*PK in this case is a term for a pick ‘em bet and not in reference to a Primary Key as is commonly used in SQL Server.

Day 12 – High CPU and Bloat in Distribution

This is the final installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New
  9. Queries Going Boom
  10. Retention of XE Session Data in a Table
  11. Purging syspolicy


Working with replication quite a bit for various clients, you might run across some particularly puzzling problems.  This story should shed some light on one particularly puzzling issue I have seen on more than one occasion.

In working with a multi-site, multi-package replication topology, the CPU was constantly running above 90% utilization and there seemed to be a general slowness, even in Windows operations.

Digging into the server, it took some time to find what might have been causing the slowness and high CPU.  Doing an overall server health check helped point in a general direction.

Some clues from the general health check were as follows.

  1. The distribution database was over 20GB.  This may not necessarily have been a bad thing, but the databases behind all the publications weren't that big.
  2. The distribution cleanup job was taking more than 5 minutes to complete.  Had the job been cleaning up records, this might not have been an indicator.  In this case, 0 records were cleaned up on each run.  (A sketch for checking both follows this list.)
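Two quick checks along those lines, as a sketch (the cleanup job name below is the default on most distributors; verify that it matches yours):

-- Size of the distribution database.
EXEC distribution.dbo.sp_spaceused;

-- Recent outcomes and durations of the distribution cleanup job.
SELECT TOP (10) j.name, h.run_date, h.run_time, h.run_duration, h.run_status
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
    ON j.job_id = h.job_id
WHERE j.name = N'Distribution clean up: distribution'
  AND h.step_id = 0   -- step 0 rows record the overall job outcome
ORDER BY h.instance_id DESC;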

The root cause seemed to point to a replication misconfiguration.  The misconfiguration could have been anywhere from the distribution agent to an individual publication.  Generally, though, the real problem lies with the configuration of an individual publication more than with any other setting.

When these conditions are met, it is a good idea to check the publication properties for each publication.  Dive into the distribution database and try to find whether any single publication is the root cause, retaining more replication commands than the others.  You can use sp_helppublication to check the settings for each publication, and you can check MSrepl_commands in the distribution database to correlate retained commands to publications.  A sketch of that correlation check follows.
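Something along these lines can surface the correlation; this is a sketch against the standard distribution metadata tables (MSarticles, MSpublications, and MSpublisher_databases are joined in addition to MSrepl_commands), so verify the column names on your build before relying on it:

-- Count retained commands per publication and article (sketch).
USE distribution;
GO
SELECT p.publication,
       a.article,
       COUNT_BIG(*) AS retained_commands
FROM dbo.MSrepl_commands AS rc
JOIN dbo.MSpublisher_databases AS pd
    ON pd.id = rc.publisher_database_id
JOIN dbo.MSarticles AS a
    ON a.article_id = rc.article_id
   AND a.publisher_db = pd.publisher_db
JOIN dbo.MSpublications AS p
    ON p.publication_id = a.publication_id
GROUP BY p.publication, a.article
ORDER BY retained_commands DESC;

A publication sitting at the top of this list with a disproportionate command count is the one whose properties deserve the closest look.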

Once you have checked all of this information, it's time to put a fix in place.  It is also time to do a little research before actually applying this fix.  Why?  Well, because you will want to make sure this is an appropriate change for your environment.  For instance, you may not want to try this in a peer-to-peer topology, in part because one of the settings can't be changed in a peer-to-peer topology.  I leave that challenge to you to discover in a testing environment.

The settings that can help are as follows.

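What follows is a hedged sketch rather than a definitive script: it assumes the two settings in play are immediate_sync and allow_anonymous, the pair most commonly tied to distribution bloat (and, matching the caveat above, immediate_sync is the kind of setting you cannot simply flip in a peer-to-peer topology).  The publication name is hypothetical.

-- allow_anonymous must be disabled first; immediate_sync cannot be
-- turned off while anonymous subscriptions are still allowed.
EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'allow_anonymous',
    @value = N'false';

EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'immediate_sync',
    @value = N'false';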

These settings can have a profound effect on the distribution retention, the cleanup process and your overall CPU consumption.  Please test and research before implementing these changes.

Besides the potential benefits just described, there are other benefits to changing these settings.  For instance, changing replication articles can become less burdensome once these settings are disabled.  Disabling them can help reduce the snapshot load and allow a single article to be snapshotted to the subscribers instead of the entire publication.
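For illustration (the names are hypothetical), that single-article scenario looks roughly like this once the settings above are disabled:

-- Add one new article to an existing publication...
EXEC sp_addarticle
    @publication = N'MyPublication',
    @article = N'NewTable',
    @source_object = N'NewTable';

-- ...then regenerate the snapshot; only the new article should be
-- snapshotted rather than the entire publication.
EXEC sp_startpublication_snapshot @publication = N'MyPublication';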

Day 11 – Purging syspolicy

This is the eleventh installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New
  9. Queries Going Boom
  10. Retention of XE Session Data in a Table


Did you know there is a default job in SQL Server created for the purpose of removing system health phantom records?  This job also helps keep the system tables related to policy-based management nice and trim if you have policy-based management enabled.  The job can fail for one of a couple of reasons, and when it fails it can be a little annoying.  This article discusses fixing one of the causes of that failure.

I want to discuss when the job will fail due to the job step related to the purging of the system health phantom records.  Having run into this on a few occasions, I found several proposed fixes, but only one really worked consistently.

The error that may be trapped is as follows:

A job step received an error at line 1 in a PowerShell script.
The corresponding line is ‘(Get-Item SQLSERVER:\SQLPolicy\SomeServer\DEFAULT).EraseSystemHealthPhantomRecords()’.
Correct the script and reschedule the job. The error information returned by PowerShell is:
‘SQL Server PowerShell provider error: Could not connect to ‘SomeServer\DEFAULT’.
[Failed to connect to server SomeServer. -->

The first proposed fix came from Microsoft at this link.  The article proposed that the root cause of the problem was the server name not being correct.  Now, that article is specifically for clusters, but I have seen this issue occur more frequently on non-clusters than on clusters.  Needless to say, the advice in that article has yet to work for me.

Another proposed solution I found was to try deleting the "\Default" from the agent job step that read something like this.

(Get-Item SQLSERVER:\SQLPolicy\SomeServer\Default).EraseSystemHealthPhantomRecords()

Yet another wonderful proposal from the internet suggested using Set-ExecutionPolicy to change the execution policy to UNRESTRICTED.

Failed "fix" after failed "fix" was all I was finding.  Then it dawned on me: I had several servers where this job did not fail.  I had plenty of examples of how the job should look.  Why not check those servers and see if something was different?  I found a difference, and ever since, I have been able to use the same fix on multiple occasions.

The server where the job was succeeding had this in the job step instead of the previously pasted code.

if ('$(ESCAPE_SQUOTE(INST))' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} ELSE {$a = ''};
(Get-Item SQLSERVER:\SQLPolicy\$(ESCAPE_NONE(SRVR))$a).EraseSystemHealthPhantomRecords()

That, to my eyes, is a significant difference.  Job steps changed to use this version have been running successfully for me without error.
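If you prefer to script the change rather than edit the job in SSMS, something like the following should work.  Treat it as a sketch: the step id assumes the PowerShell phantom-record step is step 3 of the syspolicy_purge_history job, so verify that on your instance first.

-- Update the phantom-record cleanup step of the purge job via T-SQL.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'syspolicy_purge_history',
    @step_id = 3,
    @command = N'if (''$(ESCAPE_SQUOTE(INST))'' -eq ''MSSQLSERVER'') {$a = ''\DEFAULT''} ELSE {$a = ''''};
(Get-Item SQLSERVER:\SQLPolicy\$(ESCAPE_NONE(SRVR))$a).EraseSystemHealthPhantomRecords()';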

I probably should have referenced a different server instead of resorting to the internet in this case.  And that stands for many things – check a different server, see if there is a difference, and see if you can get it to work there.  I could have saved time and frustration by simply looking at local "resources" first.

If you have a failing syspolicy purge job, check to see if it is failing on the phantom record cleanup.  If it is, try this fix and help that job get back to dumping the garbage from your server.
