This month we are hosted by Cathrine Wilhelmsen (blog | twitter) from Norway. And interestingly, Cathrine has asked us to talk about monitoring SQL Server. Wow! Talk about a HUGE topic to cover in such a short space. Well, let’s give it a go.
I am going to try and take this in a bit of a different direction, and we shall see if I have any success with it or not.
Direction the First
Monitoring is a pretty important piece of the database puzzle. Why? Well, because you want to find out that something is happening before the end-users do. Or do you? It is a well-established practice at many shops to let the end-users be the monitoring solution. How does this work, you ask?
It works by waiting for an end-user to experience an error or some unexpected slowness. Then the user will either call you (the DBA), your manager, the company CEO, or (if you are lucky) the helpdesk. Then, the user will impatiently wait for you to try and figure out what the problem is.
The pros of this solution include a much lower implementation cost. The cons, well we won’t talk about those because I am trying to sell you on this idea. No, in all seriousness, the cons of this approach could involve a lot of dissatisfaction, job loss, outages, delays in processing, delays in paychecks, dizziness, fainting, shortness of breath, brain tumors, and rectal bleeding. Oh wait, those last few are more closely related to trial medications for &lt;insert ailment here&gt;.
If you are inclined to pursue this type of monitoring – may all the hope, prayers, faith and luck be on your side that problems do not occur.
This methodology is also rather cheap to implement. The risk is relatively high as well, and I have indeed seen it implemented. In this new approach, we require that the DBA eyeball-monitor the databases all day and all night.
At the DBA’s disposal is whatever is currently available in SQL Server to perform the monitoring. It is preferred that only Activity Monitor and Profiler be used to perform these duties. However, the use of sp_who2 and the DMVs is acceptable for this type of duty.
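For the eyeball monitor armed only with the DMVs, something like the following query is a reasonable starting point. It shows active requests along with their wait and blocking information, roughly the same picture as sp_who2 or Activity Monitor but with more detail. This is just a sketch; pick the columns that matter to your environment.

```sql
-- Active requests with wait and blocking info (a sketch; trim columns to taste)
SELECT r.session_id,
       s.login_name,
       s.host_name,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time,
       r.blocking_session_id,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;  -- exclude this monitoring query itself
```

Keep an eye on blocking_session_id in particular: a non-zero value there is often the first clue before the phone starts ringing.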
The upside to this is that you do not incur any additional cost for monitoring beyond what has been allocated for the salary of the DBA. This is an easy and quick implementation and requires little knowledge transfer or ability.
The downside here is – well – look at the problems from the last section and then add the glassed-over stoner look of the 80s from staring at the monitor all day.
If you have not had the opportunity to use this type of monitoring – consider how lucky you are. This has been mandated by several companies (yes, I have witnessed that mandate).
Pick your Poison
Now we come to a multi-forked path. Every path at this level leads to a different tool set. All of these tools bear different costs and require different levels of knowledge.
The pro here is that these come with a lower risk of those suspicious symptoms from the previous two options. The con is that they will require a little more grey matter to configure and implement.
You can do anything you would like at this level so long as it involves automation. You should configure alerts, you should establish baselines, you should establish some level of history for what has been monitored and discovered. My recommendation here is to know your data and your environment and then to create scripts to cover your bases.
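As one example of establishing a baseline, you could capture the server's wait statistics on a schedule and keep the history in a table. The table name and schema below are mine, not a standard; the INSERT would typically be run from a SQL Agent job every few minutes or hours.

```sql
-- Hypothetical baseline table (names are illustrative, not a standard)
CREATE TABLE dbo.WaitStatsHistory
(
    capture_time        datetime2    NOT NULL DEFAULT SYSDATETIME(),
    wait_type           nvarchar(60) NOT NULL,
    waiting_tasks_count bigint       NOT NULL,
    wait_time_ms        bigint       NOT NULL,
    signal_wait_time_ms bigint       NOT NULL
);
GO

-- Collection step: schedule this via a SQL Agent job
INSERT INTO dbo.WaitStatsHistory
    (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0;
```

With a history like this in place, "is the server slow right now?" becomes "how do the current waits compare to last Tuesday at this time?" – which is the whole point of a baseline.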
One last thought, no matter what solution you decide to implement, you should also monitor the monitor. If the DBA collapses from long hours of eyeball monitoring, who will be there to pick him/her up to resume the monitoring?
If you opt to not implement any of these options, or if you opt to implement either of the first two options, I hope you have dusted off your resume!