Day 7 – Command ‘n Conquer

This is the seventh installment in the 12-day series of SQL tidbits for this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space

As DBAs, we sometimes like to shortcut things.  Not shortcutting a process or anything of importance; the shortcuts are usually about saving time, reducing the number of steps needed to perform a task, or automating a process altogether.

We seldom like to perform the same task over and over and over again.  Click here, click there, open a new query window, yadda yadda yadda.  When you have 100 or so servers to run the same script against, it can get quite tedious and boring.  When that script is a complete one-off, there probably isn't much sense in automating it either.

To do something like I just described, there are a few different methods to get it done.  The method I like to use is SQLCMD mode in SSMS.  Granted, if I were to use it against 100 servers, it would be a self-documenting type of script.  I like to use it when setting up little things like replication.

How many times have you scripted a publication and the subscriptions?  How many times have you read the comments?  You will see that the script has instructions to run certain segments at the publisher and then other segments at the subscriber.  How many times have you handed that script to somebody else to run and they just run it on the one server?

Using SQLCMD mode and then adding a :CONNECT command in the appropriate places could solve that problem.  The only thing to remember is to switch the query window to SQLCMD mode in SSMS.  Oh, and switching to SQLCMD mode is really easy; the process is even documented.  You can read all about that here.
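Here is a minimal sketch of what such a script might look like once SQLCMD mode is enabled.  The server names and the work done at each stop are hypothetical placeholders; the point is simply that :CONNECT lets one script hop between the publisher and the subscriber.

[codesyntax lang="tsql"]
-- run the publisher-side portion of the script at the publisher
:CONNECT PubServer01
SELECT @@SERVERNAME AS ran_on;   -- publisher-side replication commands would go here
GO

-- then switch to the subscriber and run its portion
:CONNECT SubServer01
SELECT @@SERVERNAME AS ran_on;   -- subscriber-side replication commands would go here
GO
[/codesyntax]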

And there you have it, yet another simple little tidbit to take home and play with in your own little lab.

Day 6 – Lost in Space

This is the sixth installment in the 12-day series of SQL tidbits for this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity

Credit: NASA/JPL

One of the more frequently recurring themes I see in my travel and work is the perpetual lack of space.

For instance, every time I fly there is inevitably a handful of people that have at least three carry-on items and at least one of those items is larger than the person trying to “carry” it on the plane.  Imagine watching these people trying to lift 100+ pound bags over their heads to put them into these small confined overhead storage compartments.  We are talking bags that are easily 2-3 times larger than the accepted dimensions, yet somehow this person made it on the plane with such a huge bag for such a tiny space.

Another favorite of mine is watching what appears to be a college student driving home in a GEO Metro.  A peek inside the vehicle might reveal 5 or 6 baskets of soiled laundry and linens.  A look at the vehicle as a whole might reveal a desert caravan’s worth of supplies packed onto the vehicle.  Watching the vehicle for a while you might notice that it can only lumber along at a top speed of 50 mph going downhill and good luck getting back up the hill.  It is just far too over-weighted and over-packed.  The vehicle obviously does not have enough room internally.

In both of these examples we have a limited amount of storage space.  In both of these examples we see people pushing the boundaries of those limitations.  Pushing those boundaries could lead to some unwanted consequences.  The GEO could break down leaving the college student stranded with dirty laundry.  The air-traveler just may have to check their dog or leave it home.

But what happens when people try to push the boundaries of storage with their databases?  The consequences can be far more dire than either of the examples just shared.  What if pushing those boundaries causes an outage and your database is servicing a hospital full of patient information (everything from diagnostics to current allergies – like being allergic to dogs on planes)?  The doctor needs to give the patient some medication immediately or the patient could die.  The doctor only has two choices: one of those could mean death, the other could mean life.  All of this is recorded in the patient records, but the doctor can't access those records because the server is offline due to space issues.

Yeah, that would pretty much suck.  But we see it all the time.  Maybe nothing as extreme as that case, but plenty of times I have seen businesses lose money, revenue, and sales because the database was offline due to space.  The company wants to just keep pushing those boundaries.

In one case, I had a client run themselves completely out of disk space.  They wouldn't allocate any more space, so it was time to start looking to see what could be done to alleviate the issue and get the server back online.

In digging about, I found that this database had 1TB of the 1.8TB allocated to a single table.  That table had a clustered index built on 6 columns.  The cool thing about this clustered index is that not a single query ever used that particular combination of columns.  Even better was that the database was seldom queried.  I did a little bit of digging and found that there really was a much better clustered index for the table in question.  Changing to the new clustered index reduced the table size by 300GB.  That is a huge chunk of waste.
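For illustration only (the table and index names here are hypothetical, and this assumes the existing clustered index is not backing a primary key or unique constraint), a swap like that can be done in one statement with DROP_EXISTING, which builds the new clustered index and drops the old one without leaving the table as a heap in between:

[codesyntax lang="tsql"]
-- replace the wide six-column clustered index with a narrower key
-- (keep the same index name; DROP_EXISTING requires it)
CREATE CLUSTERED INDEX CIX_BigTable
    ON dbo.BigTable (OrderID)
    WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON);
[/codesyntax]

Keep in mind that changing the clustering key also causes every nonclustered index on the table to be rebuilt, so this is best done in a maintenance window.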

Through similar exercises throughout the largest tables in the database, I was able to reduce index space waste by 800GB.  Disk is cheap until you can't have any more.  There is nothing wrong with being sensible about how we use the space we have been granted.
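If you want a rough first look at where that kind of space is going, a quick query against sys.dm_db_partition_stats will rank the indexes in the current database by reserved space.  This is just a back-of-the-envelope sketch, not a full analysis:

[codesyntax lang="tsql"]
-- approximate reserved space per index in the current database, largest first
SELECT  OBJECT_NAME(ps.object_id)                  AS table_name,
        i.name                                     AS index_name,
        SUM(ps.reserved_page_count) * 8 / 1024.0   AS reserved_mb
FROM    sys.dm_db_partition_stats AS ps
        JOIN sys.indexes AS i
          ON i.object_id = ps.object_id
         AND i.index_id  = ps.index_id
GROUP BY ps.object_id, i.name
ORDER BY reserved_mb DESC;
[/codesyntax]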

Thinking about that waste, I was reminded of a great resource that Paul Randal has shared.  You can find a script he wrote, to explore this kind of waste, from this link.  You can even read a bit of background on the topic from this link.

Day 5 – Peer Identity

This is the fifth installment in the 12-day series of SQL tidbits for this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker

In the digital age it seems we are constantly flooded with articles about identity crises, from identity theft to mistaken identity.  SQL Server is not immune to these types of problems and stories.  Whether SQL Server was housing the data that was stolen, leading to identity theft, or SQL Server is having an identity management issue of its own – SQL Server is definitely susceptible to identity issues.

The beauty of SQL Server is that these identity issues seem to be most prevalent when trying to replicate data.  Better yet is when the replication involves multiple peers set up in a Peer-to-Peer topology.

When these identity problems start to crop up, there are a number of things that can be done to try to resolve them.  Two possible solutions are to manually manage the identity ranges or to flip the “Not for Replication” attribute on the table's identity column.
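As a rough illustration (the table name and ranges here are made up), the first option amounts to carving out a separate identity block for each peer, and the second is usually baked in when the column is defined:

[codesyntax lang="tsql"]
-- defining the identity as NOT FOR REPLICATION up front lets the replication
-- agents insert explicit identity values without disturbing the local seed
CREATE TABLE dbo.Orders
(
    OrderID   int IDENTITY(1,1) NOT FOR REPLICATION NOT NULL
        CONSTRAINT PK_Orders PRIMARY KEY,
    OrderDate datetime NOT NULL
);

-- manual range management: give each peer its own block so new values cannot collide
DBCC CHECKIDENT (N'dbo.Orders', RESEED, 1000000);   -- e.g. peer A starts at 1,000,000
-- a second peer would be reseeded to 2,000,000, a third to 3,000,000, and so on
[/codesyntax]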

The identity crisis in replication gets more fun when there are triggers involved.  The triggers can insert into a table that is not replicated or can insert into a table that is replicated.  Or even better is when the trigger inserts back into the same table it was created on.  I also particularly like the case when the identity range is manually managed but the application decides to reseed the identity values (yeah that is fun).

In one particular peer-to-peer topology I had to resort to a multitude of fixes depending on the article involved.  In one case we flipped the “Not for Replication” flag because the tables acted on via trigger were not involved in replication.  In another we disabled a trigger because we determined the logic it was performing was best handled in the application (it was inserting a record back into the table the trigger was built on).  And there was that case where the table was being reseeded by the application.

In the case of the table being reseeded we threw out a few possible solutions but in the end we felt the best practice for us would be to extend the schema and extend the primary key.  Looking back on it, this is something that I would suggest as a first option in most cases because it makes a lot of sense.

In our case, extending the schema and PK meant adding a new field to the PK and assigning a default value to that field.  We chose @@ServerName as the default value.  This gave us a location identifier for each site and offered us a quick replication check to ensure records were getting between all of the sites (besides relying on replication monitor).
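A minimal sketch of that change, with hypothetical table and constraint names, might look like the following.  The last query is the kind of quick replication check mentioned above.

[codesyntax lang="tsql"]
-- add a site column that defaults to the local server name
ALTER TABLE dbo.Orders
    ADD SiteName sysname NOT NULL
        CONSTRAINT DF_Orders_SiteName DEFAULT (@@SERVERNAME);

-- rebuild the primary key to include the new column
-- (any foreign keys referencing the old PK would need to be handled first)
ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders;
ALTER TABLE dbo.Orders ADD CONSTRAINT PK_Orders
    PRIMARY KEY CLUSTERED (OrderID, SiteName);

-- quick sanity check that rows from every site are arriving
SELECT SiteName, COUNT(*) AS row_count
FROM dbo.Orders
GROUP BY SiteName;
[/codesyntax]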

When SQL Server starts throwing a tantrum about identities, keep in mind you have options.  It's all about finding a few possible solutions (or a mix of them), proposing them, and then testing and implementing them.

One of the possible errors you will see during one of these tantrums is as follows.

Explicit value must be specified for identity column in table ‘blah’ either when IDENTITY_INSERT is set to ON or when a replication user is inserting into a NOT FOR REPLICATION identity column.

Day 4 – Broken Broker

This is the fourth installment in the 12-day series of SQL tidbits for this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers

mini-Broker

Brokers

On a recent opportunity to restore a database for a client, I experienced something new.  

I thought it was intriguing and it immediately prompted some questions.  First, let’s take a look at the message that popped up during the restore and then on to what was done to resolve the problem.

 

Query notification delivery could not send message on dialog '{someguid}'. Delivery failed for notification 'anotherguid;andanotherguid' because of the following error in service broker: 'The conversation handle "someguid" is not found.'

My initial reaction was “Is Service Broker enabled?”  The task should have been a relatively easy, straightforward database restore followed by setting up replication.  My next question that popped up was “Is SB necessary?”

Well the answers that came back were “Yes” and “YES!!!”  Apparently without SB, the application would break in epic fashion.  That is certainly not something that I want to do.  There are enough broke brokers and broke applications without me adding to the list.

Occasionally when this problem arises it means that the Service Broker needs a “reset.”  And in this case it makes a lot of sense.  I had just restored the database and there would be conversations that were no longer valid.  Those should be ended and the service broker “reset.”

The “reset” is rather simple.  First a word of warning – do not run this on your production instance or any instance without an understanding that you are resetting SB and it could be that conversations get hosed.

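A sketch of that kind of reset (ending the stale conversations and giving the database a new broker identity) might look like this; the database name is a placeholder, and this is not necessarily the exact script used in my case.

[codesyntax lang="tsql"]
-- end any conversations left over from the restored copy of the database
DECLARE @handle uniqueidentifier;

DECLARE convs CURSOR LOCAL FAST_FORWARD FOR
    SELECT conversation_handle FROM sys.conversation_endpoints;
OPEN convs;
FETCH NEXT FROM convs INTO @handle;
WHILE @@FETCH_STATUS = 0
BEGIN
    END CONVERSATION @handle WITH CLEANUP;
    FETCH NEXT FROM convs INTO @handle;
END
CLOSE convs;
DEALLOCATE convs;

-- then give the database a brand new broker identity
ALTER DATABASE [MyRestoredDb] SET NEW_BROKER WITH ROLLBACK IMMEDIATE;
[/codesyntax]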

For me, this worked like a charm.  There was also substantial reason to proceed with it.  If you encounter this message, this is something you may want to research and determine if it is an appropriate thing to do.

Day 3 – Reviewing Peers

Published on: December 27, 2013

This is the third installment in the 12-day series of SQL tidbits for this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time

Remember back in the day when Napster was really popular?  I’m sure it is still popular now – just not in the same league as the early years.  Napster pretty much made some revolutionary changes in file-sharing across the internet.  Now the algorithms and the method have become more advanced and use a hive approach, but it still pretty much boils down to the setup that Napster used – Peer to Peer.

peer-pressure

In the P2P file-share world, every node had a part to play.  If you downloaded a file, your machine could also upload that file or other files to the network for other users.  This approach required full files in order to work.

p2p_net

In the Hive approach, the files are broken up into chunks.  So you still participate on a P2P basis, but you no longer need to have the complete file to participate.  (I am probably over-generalizing, but that is ok – the point is coming soon.)  This helped transfers be quicker and the P2P network/hive to be larger (in case you were wondering).

Now, let’s take that idea and move it down to a smaller chunk of data.  What if we did that with a database and only sent a record at a time to a partner and that partner could send a few records back to the first partner/peer?  Now we have something that could be pretty usable in many scenarios.  One such scenario could be to sync data from the same database at ten different locations (or maybe 100 different locations) so all locations would have current information.

Well, SQL Server does have that technology available for use.  Coincidentally enough, it is called Peer-to-Peer replication.  Truth be told, it is really more of a two-way transactional replication on steroids.  In SQL 2008, you had to set up transactional replication in order to create the P2P.  But in SQL 2012, there is now an option on the publication types for Peer-to-Peer.

Setting up P2P replication in SQL 2012 is pretty easy to do.  Here is a quick step-through on doing just that.  I will bypass the setup of the distributor and jump straight into setting up the publication through to the point of adding peers.  From that point, it will be left to you to determine what kind of subscription (push/pull) you use and to figure out how to configure those types.

Step-through

The first step is to expand the tree in SSMS until you see replication and then to expand that to see “Local Publications.”  From “Local Publications,” right click and select “New Publication.”

menu

Then it is almost as easy as following the prompts as I will show in the following images.  You need to select the database you wish to be included in the P2P publication.

db_selection

Then it is a matter of selecting the publication type.  Notice here that Peer to Peer has been highlighted.

repl_selection

Of course, no replication is complete without some articles to include in the replication.  In this case, I have chosen to just replicate a few of the articles and not every article in the database.  When replicating data, I recommend being very picky about what articles (objects) get included in the replication.  No sense in over-replicating and sending the entire farm across the wire to Beijing, London, Paris and Moscow.

table_selection

Once the articles are selected, it will be time to set up the agent security.  Again, this is pretty straightforward.  And in my contrived setup, I am just going to rely on the SQL Server Agent service account.  The screen will inform you that it is not best practice.  I will leave that as an exercise for you to explore.

agent_security

log_reader_security

With that last piece of configuration, the publication is ready.  Just click your way through to finish.

Once the publication is complete, it is time to add a subscriber to the publication.  That is easily accomplished by right clicking the publication.  Since this is a P2P publication, we need to select “Configure Peer-To-Peer Topology…”

p2p_topology_menu

Selecting that menu option will bring up the wizard.  The first step in the new wizard is to pick the publisher and the publication at that publisher that needs to have the topology configured.

publication_selection

After selecting the publisher and publication, I can add nodes to the P2P topology by right-clicking the “map” (as I like to call it) area.  Select “Add a New Peer Node” from the menu and then enter the appropriate details for the new subscriber.

add_node

It is here that I will conclude this short tutorial.  Configuring the topology is an exercise best left to each individual circumstance.  Configuring where the pull subscribers will be and where the push subscribers will be is almost an art.  Have fun with it.
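For those who prefer a script to the wizard, the same sort of publication can be stood up with the replication stored procedures.  The sketch below is a rough outline only: the database, publication, article, and server names are placeholders, and peer-to-peer publications are picky about their options, so treat it as a starting point rather than a turnkey script.

[codesyntax lang="tsql"]
-- run at the publisher, in the database to be published (names are placeholders)
EXEC sp_replicationdboption
     @dbname  = N'SalesDb',
     @optname = N'publish',
     @value   = N'true';

EXEC sp_addpublication
     @publication                  = N'P2P_Sales',
     @enabled_for_p2p              = N'true',
     @allow_initialize_from_backup = N'true',
     @status                       = N'active';

EXEC sp_addarticle
     @publication   = N'P2P_Sales',
     @article       = N'Orders',
     @source_owner  = N'dbo',
     @source_object = N'Orders';

-- each peer is then added as a subscriber of the other peers
EXEC sp_addsubscription
     @publication       = N'P2P_Sales',
     @subscriber        = N'PEERSRV02',
     @destination_db    = N'SalesDb',
     @subscription_type = N'push',
     @sync_type         = N'replication support only';
[/codesyntax]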

I have had the opportunity to use this kind of setup on a large multi-node setup across several sites.  It runs pretty smoothly.  Sometimes it can get to be a hair-raising event when a change gets introduced that borks the schema.  But those are the events that permit you to learn and grow and document what has happened and how to best handle the issues in your environment.

I have even taken a multi-site P2P setup and just added a one-way subscriber (as if it were a transactional publication) so the subscriber could just get the information and run reports without pushing changes back up into the rest of the topology.  That also works pretty well.  Document the design and be willing to change it up in case there appears to be latency and too much peer pressure.

12 Days of Christmas 2013 Day 2

Published on: December 26, 2013

This is the second installment in the 12-day series of SQL tidbits for this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement

burningtime

Recently I was able to observe an interesting exchange between a couple of key people at a client.  That exchange gave me a bit to ponder.  I wanted to recount a bit of that exchange here.  All names have been, well you know how that goes.

Accountant Joe came in early one wintry morning.  He was gung-ho and ready for the day ahead.  Joe had huge plans to finish counting all of the beans and get his task list done for the day.  You see, taskmaster Judy had been harping on him significantly over the past week to get his beans counted.

On this frosty morning, Joe was zipping along.  As more and more people filed into the office from the various departments, Joe was still contentedly counting his beans.  That only lasted for a few fleeting moments with everybody in the office though.

Suddenly Joe could no longer count the beans.  The beans Joe was counting were served up via the backend database.  And since the beans were running too slow, Joe called the helpdesk to have them fix the database.  A few moments later, Sally called the helpdesk too.  Sally was complaining about things being horribly slow as well.  Sally was trying to open the company calendar (Sally is the executive secretary).

More and more calls were coming in to the helpdesk from various departments and every user-base in the company.  The helpdesk was busy fighting this fire or that fire.  Finally, news of the slowness was escalated to the DBA, Dillon, so he could investigate why the beans were so slow on this frosty day.  As Dillon investigated, he noticed that IO stalls were off the charts.  He was seeing IO stalls in the hundred-second range instead of the millisecond range like normal.
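For the curious, the kind of numbers Dillon was staring at come straight out of sys.dm_io_virtual_file_stats.  A minimal sketch (the stall columns are cumulative since the files were opened, so trend them over time rather than reading them raw):

[codesyntax lang="tsql"]
-- per-file IO stalls; sustained values in the hundreds of seconds point at the storage layer
SELECT  DB_NAME(vfs.database_id)  AS database_name,
        mf.physical_name,
        vfs.io_stall_read_ms,
        vfs.io_stall_write_ms,
        vfs.io_stall              AS total_io_stall_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN sys.master_files AS mf
          ON mf.database_id = vfs.database_id
         AND mf.file_id     = vfs.file_id
ORDER BY vfs.io_stall DESC;
[/codesyntax]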

Like a diligent DBA, Dillon immediately escalated the issue to the sysops team who was responsible for the SAN (yeah, he notified his manager too).  Bill from sysops promptly responded.  Sadly, the response was “I am too busy at the moment.”

After much pestering, Bill finally became available and was ready to help – 4 hours later.

As it turns out, the SAN that housed all company shares, applications, databases and even Exchange was down to about 30GB free space.  Due to the lack of free space, the SAN degraded performance automatically to try and prevent it from filling up entirely.  Bill knew about this pending failure and had ordered extra storage – which sat on his desk for 2+ weeks.

The entire company was essentially down because Bill ended up being too busy (in a meeting).  Though the issue was eventually resolved – the sting has yet to fade.

When faced with an outage situation, let this story be your gift to remind you of how not to treat the outage.

12 Days of Christmas 2013 Day 1

Published on: December 25, 2013

Last year I did a mini-series themed around the 12 Days of Christmas.  I am going to do a similar series this year.  Over the next 12 days, I will share short tidbits on an array of topics.  The tidbit may be a tip to help with SQL Server, or it could be an announcement that is SQL related.  As an example, the announcement could be a short bit of information on how to get SQL training.

las-vegas-nv2

To kick things off, there is a pretty cool announcement from the Desert.  Down in Vegas we have been working pretty hard to bring some free SQL learning to the area.  And we have finally done it.  The inaugural SQL Saturday in Las Vegas is confirmed.

sqlsat295_web

 

The first SQL Saturday in Las Vegas is to be held on April 5, 2014.  You can register for the event and submit presentations via this link.

It was great to hear that we could get the venue on a good date.  The people at InNEVation are stoked to have this event in their building.

Many thanks to Stacia Misner and Pat Wright for pulling this together.

Always Waiting, Waiting Waiting

Categories: News, Professional
Published on: December 23, 2013

What’s all the Wait about…

I have been meaning to publish this post for a long, long time.  I have no idea what I have been waiting on.  As a DBA, that isn't necessarily a good thing.  We would generally like to know what is causing the delay, or what the wait is being caused by, and so on.

It's even a bit of a coincidence, because the topic today would have also worked very well for the T-SQL Tuesday topic this month.  Robert Davis invited all to participate by writing about waits in SQL Server in some fashion or another.  You can read a bit about that from his roundup, with all of the necessary links, here.

Today, I only hope to be able to do a minor justice to the topic.

Monitoring

Since DBAs really do not like to be caught off-guard, it is very common practice to monitor the waits on the server(s) under their domain.  If the waits are not monitored, then the DBA at least should know how to check the waits and determine what may be causing the delays and/or procrastination in SQL Server.
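Even without any tooling, a first look at the waits can be as simple as querying sys.dm_os_wait_stats.  A minimal sketch (the counters are cumulative since the last restart or stats clear, and the list of benign waits filtered out here is far from complete):

[codesyntax lang="tsql"]
-- top waits on the instance, heaviest first, with a few benign system waits filtered out
SELECT TOP (10)
        wait_type,
        waiting_tasks_count,
        wait_time_ms,
        signal_wait_time_ms
FROM    sys.dm_os_wait_stats
WHERE   wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TO_FLUSH',
                          N'XE_TIMER_EVENT', N'CHECKPOINT_QUEUE', N'LOGMGR_QUEUE',
                          N'REQUEST_FOR_DEADLOCK_SEARCH', N'DIRTY_PAGE_POLL', N'WAITFOR')
ORDER BY wait_time_ms DESC;
[/codesyntax]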

I want to share a tool that I have been impressed with for several years.  The tool should be pretty popular by now.  Not only do I want to share a bit about the tool, but I will also show how to become a bit more efficient with it by having it help you before you have to turn to it and start hunting.

Let’s just say this is a small gift from me to you for this Holiday Season.

What is it?

I was introduced to this tool 6+ years ago.  I was happy with it then and started to use it where I could at my employer.  After moving on, I have made a consistent recommendation with regards to it.  That said, I like the tool for the very precise design of monitoring and inspecting waits on the server.  That tool of course is – Ignite for SQL Server by Confio.  I will be writing about Ignite 8 and not as much about Ignite Central.

Before getting too far into it, I want to say that, like many worthwhile tools, Ignite gets better with each release.  For me, that speaks to the company and their willingness to listen to their constituents.  Take the feedback – make the tool better.  Know what you do, know what you do well, and continue to make it better.  I think Confio does a fine job at that.

waits

What you see now is a quick screenshot with a stacked bar chart showing some information that Ignite might present to you.  In this case, I have a monthly trend report for a specific server showing the top x waits and how each of those waits stacks up in the grand scheme of things.

Now, at a glance, this is great information.  It is enough to get you started.  You can see a trend, or maybe the absence of a trend.  You can identify at a glance which waits are reportedly problematic in your server.  From here you can even drill in and get more information.  You would do that by clicking a section of one of the stacked bars to determine what might be related to that wait type on the day related to the stacked bar you clicked.

That is all great.  It’s even better when in the middle of troubleshooting (you just have to remember to use the tool).

But what if you are off-site and can’t get to the server housing the reports?  What if you are a Consultant and don’t necessarily need/want to login to the client server each day just to check this information?  The simple solution is to have the report emailed, right?

Well, with Ignite, that is a possibility too.  Confio has created several canned reports that are (rare species here) useful out of the box.  To help make it easier for all of us, a link has been created in the application on the Home Page.  It is real easy to get to the reporting module and to see all of the possible reports that can be viewed.

With that, we are starting to get somewhere.  If you click the Reports link on the home page, you will be presented with two list boxes from which you can pick some reports.

report_list

I can run any of those reports from that prompt.  That's good news.  But that is not quite yet our final destination.  We want to have these reports run auto-magically and be emailed to us.  If you look around a bit more on the Reports screen, you will find a “Report Schedules” button.  Once the new page loads, you will find there is a Create Schedule button.  By clicking this button, you will give yourself the opportunity to create a schedule to email a report or group of reports automatically to a group of people or to just yourself – your choice.  Following the prompts is very straightforward and worth the five minutes or so it takes to create the schedule.

Here’s a bit of a caveat.  You must execute and save the report before you can add it to a schedule.  Once you have done these few simple steps, you can have access to the reports from your favorite tablet or mobile device.  Better yet, should you see something out of line, you could take an action on it (call somebody and have them fix it, or remote in and fix it for those taking vacation ;) ).

This was a bit of a short and sweet introduction into just one feature of a really good tool.  As a DBA, I like to automate what I can.  I also like to monitor what I can.  Then there is an aspect of automation and monitoring called reporting and free time.  If I can automate and implement a solution with minimal time that provides information that I need – I am usually in favor of that.  DBAs need reports on how SQL Server is performing.  Without those, you are just waiting to fight fires rather than be proactive.  So I hope this simple gift of automated reporting from a great SQL tool can give you more time in the future to be a better DBA.

DC SQLSat

Categories: News, Professional, SSC
Published on: December 11, 2013

This past weekend I had the opportunity to go visit Washington DC.  It was the first time I got to stay in the Nation's capital for more than just a few hours.  It is also the first time that I was able to see any of the monuments in the capital area.  Granted, I only saw them from the car or plane window in passing.  But that is far better than seeing them in photos or not at all.

The reason for the visit?  It was SQL Saturday 233.  I had written a little about the opportunity here, as it approached.  Now, I have the chance to recap the event and what I learned.

sqlsat233_web

Some articles have already been written at the time of this writing.  One article that I want to mention is by Ayman El-Ghazali (blog) that you can read here.  I had a good conversation with Ayman in the speaker room in between some of the sessions.  Ayman struck me as a very humble and appreciative person.  Those are traits that are important to have as a DBA these days.  Then to read the blog post by Ayman, it was refreshing to see those same traits echoed in his writing.  Check it out and give him a shout out.

Something that I found funny throughout the few days I was in town was the repeated looks and comments by the locals.  I wear shorts just about as frequently as I can.  The morning before I left to head to DC, the local temperature had warmed to -2 by late morning.  The day of this writing, I saw the local temperature at -8 in the morning (8 am) and -4 at 6 pm.  Just a couple days before those temperatures, the temperatures were well north of 50.  The temperatures in DC were in the 40’s and 50’s and it really felt more like 70’s and 80’s for me.

You can imagine the comments about the shorts and me thinking it was Vegas or something like that.  Well, it did feel rather warm – almost tropical.  But, since the locals were bundled in parkas, using umbrellas and generally bundled from head to toe (for the rather warm weather), I decided I needed to provide a frosty perspective on the SQL Saturday event / weekend.

dc_frost

That should help you feel chilly now that the photo is “iced” over.

Despite the weather or the pending doom and gloom of the weather (which is still happening with the ice storms), the event was great.  The event was well organized.  I think that is mostly due to Gigi Bell (twitter).  She is the wife of Chris Bell (twitter) and she whipped those boys into shape. ;)

There were some things that couldn’t be controlled necessarily.  But everybody came together and helped to make it work.  We had a couple of cancellations.  I was lucky enough to get an opportunity to present a second session thanks to one of these cancellations.  I enjoyed presenting to packed rooms and I enjoyed the feedback.  One comment came back saying “I learned so much more than I expected.”  That is GREAT!

I also had a great time seeing SQLFamily.  Talking with friends and enjoying everybody’s company.  I did make it to a few sessions outside of mine.  I took great pride in harassing Robert Pearl.  I learned some soft skills from Alan Hirt.  And I got to chat with attendees while trying to answer their questions in the halls.

I am looking forward to this event again next year.  And I hope everybody that attended my sessions learned at least one thing.

One last thing.  Thanks to all of the attendees.  To say “the attendees were great” at this event would be a gross understatement in my opinion.  The attendees were awake and engaged.  They invested their time and effort and I think they helped to make the event top notch.

To DBA or Not to DBA (DBA Jumpstart)

Published on: December 10, 2013
This post is part of the SQL Community Project started by John Sansom called #DBAJumpStart.
“If you could give a DBA just one piece of advice, what would it be?”
John asked 20 successful and experienced SQL Server professionals this exact question. I share my own thoughts with you below and you can find all our answers together inside DBA JumpStart, a unique collection of inspiring content just for SQL Server DBAs. Be sure to get your free copy of DBA JumpStart.

In my day to day operations I have the opportunity to work with people in various capacities in regards to data.  Sometimes it is in the capacity of a mentor, sometimes in the capacity of a consultant, and sometimes just in the capacity of the dude that fixes the problem.

I enjoy working as a database professional.  There may be times when I want to scream or yell or pull out my teeth and hair.  Then there are times when I just bounce off the walls with joy and pleasure.  Some may call that a manic-depressive disorder.  They just don’t understand the true life of a data professional.

Reminiscing

In becoming a data professional, I took the long route to get where I am.  I made the decision to work with SQL and learn about SQL 17 years ago.  I made the decision to learn about SQL because I viewed it as a really difficult thing to learn.  I wanted that challenge.  Then again, back then I also enjoyed the challenge of learning to configure Cisco routers.

Early on, I passed the Microsoft exams for SQL 6.5.  A couple of years later, I finally landed a job where I got to touch a database.  That was part of my duties working in one-man shops.  I worked in a few of those one-man shops for a while, where I had to be the Exchange admin, domain admin, DBA, and even janitor at one shop.  I don't miss the days of having to fix the plumbing in between troubleshooting performance issues and checking the router for DoS attacks.

Eventually I got an opportunity with a larger enterprise to be a production DBA.  All I had to do was work with SQL Server all day long.  It was fun designing metrics and monitors to alert on various thresholds while saving the company oodles of money.  I really thought I was learning something cool.  I thought I was doing pretty good too.

Fast forward a little more and a couple of job changes and I found myself living in Las Vegas and getting more involved in the community.  Boy did I learn quickly how little I actually knew about SQL Server.  Sure, there was reading of posts, books and forums before that.  But that just didn’t quite open my eyes like becoming involved.

I soon started applying myself even more so I could learn more about SQL Server and then be able to try and teach those things to the developers where I worked.  I also started working on trying to be good enough to be able to teach people at User Group meetings.  Throw in the efforts to answer questions on forums and writing articles – and it was an explosion of learning.

Now I present pretty regularly at User Group meetings.  I travel around the world to present at SQL Saturdays.  I have contributed articles and co-authored a book.  I also had (still have) the sweet opportunity to participate in the Mentoring project hosted by Andy Warren.  I even went so far as to challenge myself and attained the MCM.  Yet, I know that I have really only scratched the tip of the iceberg with SQL Server.  There is so much to learn about SQL Server still.  If I were to compare myself past to present, I would rate my skills in various areas lower now than I probably did back in the day.

Through the years, and more particularly the more recent years, I have observed many teammates and DBAs for clients.  These observations have revealed some good and some bad.  When I notice certain behaviors that need to be changed, I try to use it as a teaching opportunity.

Price of Rice

One thing I find myself doing on a frequent basis is trying to gauge whether I am treating my work as a 9-5 J O B or treating it like a career.  Am I just punching the clock or am I investing in myself and improving my skills?  Am I helping others improve their skills or am I hoarding the knowledge like an Oracle DBA?

As I observe others, I can't help but ponder some of those same questions.  For instance, if I encounter a veteran DBA of 10 or so years that can't perform a transaction log backup, I will wonder if being a DBA is just a J O B for that person.  The way you treat your work duties often reveals how much you care about the quality of the work you do, and how much you value your skills.

Take that same DBA that can't perform a log backup: I might start to wonder if there is a time investment outside of work to better their skills.  I might wonder why I have to show that person five or six times how to perform that log backup.  This may sound a tad judgmental, but it is not meant in that way.  Let's call it an informal assessment to try and figure out how to help that person become more efficient at performing their job duties.

As a data professional, I think it is an important thing to do.  Spend some time on introspection and try to determine just how much of a career the job is.  Find out if it is a career or if it is on the short end of the spectrum that points to it being just a J O B.

As a team lead, I like to give everybody on the team the task of taking 15-30 minutes each day (on the clock) to improve their skill-set in some way.  This is a tactic that does not work in all environments and with all employers – I get that.  But if that 15 minutes a day means that the teammate will be more efficient down the road, it is a good investment.  If that 15 minutes means there will be less time redoing some work, then it is time well spent.

As I mentioned earlier, there is plenty about SQL Server that I still need to learn.  An important component of learning is to invest some time.  It’s a matter of finding a topic and then taking the time to research.  I do my research by reading and then experimenting.  Once I feel comfortable with that research, I will typically write about the topic.  Why?  It helps to solidify or to disprove some of the principles just learned.  It also helps to cement that research into memory.  I also like to do it because it serves as a personal archive that I can refer back to at some future point (I have done that plenty of times).

Another thing I like to do after learning about something different in SQL Server is to present it to a group of people.  That group can be co-workers, a user group, or at a SQL Saturday (as a few examples).  The beauty of presenting on the topic is that it helps me to embed that knowledge a little further.  It also helps me to try and gain an even deeper understanding of the topic to be able to answer questions that may arise. Best of all is that it helps to disseminate knowledge to others.

Recap

For me, being a data professional equates to a career.  I get that for some it is just a J O B – and that is fine.  For some, it may just be a J O B because they have not figured out how to advance it into a meaningful career.  Those people don't want to just be clock punchers and want to make something more of their chosen profession.

As a data professional, I suggest the following practices to help turn your profession into a career.

  1. Regular introspection – check in with yourself on occasion to keep yourself headed in the right direction.
  2. Learn something new – Treat this like a cursor. Keep finding something new to learn and act on it.
  3. Give Back and Get Involved – When you learn something new, teach it to somebody or post it on a blog. This helps give back to the community and more people can learn and grow.

These three simple steps can help turn a J O B into a career.  Better yet is that these steps can help to invest in yourself.

 

 
