Day 12 – High CPU and Bloat in Distribution

This is the final installment in the 12-day series of SQL tidbits for this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New
  9. Queries Going Boom
  10. Retention of XE Session Data in a Table
  11. Purging syspolicy


If you work with replication quite a bit with your clients, you might run across some particularly puzzling problems.  This story should shed some light on one such issue that I have seen on more than one occasion.

While working with a multi-site, multi-publication replication topology, I found the CPU constantly running above 90% utilization, and there was a general slowness even in Windows operations.

Digging into the server, it took some time to find what might have been causing the slowness and high CPU.  An overall server health check helped point in a general direction.

Some clues from the general health check were as follows.

  1. The distribution database was over 20GB.  That is not necessarily a bad thing, but the databases behind all of the publications weren’t that big.
  2. The distribution cleanup job was taking more than 5 minutes to complete.  Had the job been cleaning up records, this might not have been an indicator.  In this case, 0 records were cleaned up on each run.

The root cause seemed to point to a replication misconfiguration.  The misconfiguration could have been anywhere from the distribution agent to an individual publication.  In my experience, though, the real problem is usually the configuration of an individual publication more than any other setting.

When these conditions are met, it is a good idea to check the publication properties for each publication.  Dive into the distribution database and try to find whether any single publication is the root cause and is potentially retaining more replication commands than the others.  You can use sp_helppublication to check the settings for each publication, and you can check MSrepl_commands in the distribution database to correlate retained commands to publications.
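As a rough sketch of that correlation step (the join path through MSpublisher_databases and MSarticles is one way to map commands back to a publication; adjust for your topology, and be aware this can be heavy on a bloated distribution database):

[codesyntax lang="tsql"]
USE distribution;
GO

-- Count retained commands per publication to spot the outlier
SELECT p.publication
     , COUNT(*) AS CommandsRetained
  FROM dbo.MSrepl_commands rc
  JOIN dbo.MSpublisher_databases pd
    ON pd.id = rc.publisher_database_id
  JOIN dbo.MSarticles a
    ON a.article_id = rc.article_id
   AND a.publisher_db = pd.publisher_db
  JOIN dbo.MSpublications p
    ON p.publication_id = a.publication_id
   AND p.publisher_db = a.publisher_db
 GROUP BY p.publication
 ORDER BY CommandsRetained DESC;
[/codesyntax]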

Once all of this information has been checked, it’s time to put a fix in place.  It is also time to do a little research before actually applying the fix.  Why?  Because you will want to make sure it is an appropriate change for your environment.  For instance, you may not want to try this in a peer-to-peer topology, in part because one of the settings can’t be changed in a peer-to-peer topology.  I leave that challenge to you to discover in a testing environment.

The settings that can help are as follows.

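The two publication properties typically implicated here are immediate_sync and allow_anonymous.  A minimal sketch of disabling them, assuming a hypothetical publication named MyPublication (allow_anonymous must be disabled before immediate_sync can be):

[codesyntax lang="tsql"]
-- Run at the publisher, in the published database.
-- 'MyPublication' is a placeholder; substitute your publication name.
EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'allow_anonymous',
    @value = 'false';

EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'immediate_sync',
    @value = 'false';
[/codesyntax]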

These settings can have a profound effect on distribution retention, the cleanup process, and your overall CPU consumption.  Please test and research before implementing these changes.

Besides the potential benefits just described, there are other benefits to changing these settings.  For instance, changing replication articles becomes less burdensome when these settings are disabled: disabling them helps reduce the snapshot load and allows a single article to be snapped to the subscribers instead of the entire publication.

Day 11 – Purging syspolicy

This is the eleventh installment in the 12-day series of SQL tidbits for this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New
  9. Queries Going Boom
  10. Retention of XE Session Data in a Table


Did you know there is a default job in SQL Server created for the purpose of removing system health phantom records?  This job also helps keep the system tables related to policy based management nice and trim, if you have policy based management enabled.  The job can fail for a couple of reasons, and when it fails it can be a little annoying.  This article discusses fixing one of the causes of that failure.

Specifically, I want to discuss when the job fails on the step related to purging the system health phantom records.  Having run into this on a few occasions, I found several proposed fixes, but only one that worked consistently.

The error that may be trapped is as follows:

A job step received an error at line 1 in a PowerShell script.
The corresponding line is '(Get-Item SQLSERVER:\SQLPolicy\SomeServer\DEFAULT).EraseSystemHealthPhantomRecords()'.
Correct the script and reschedule the job. The error information returned by PowerShell is:
'SQL Server PowerShell provider error: Could not connect to 'SomeServer\DEFAULT'.
[Failed to connect to server SomeServer. –>

The first proposed fix came from Microsoft at this link.  The article proposed that the root cause of the problem was the server name not being correct.  That article is specifically about clusters, but I have seen this issue occur more frequently on non-clusters than on clusters.  Needless to say, the advice in that article has yet to work for me.

Another proposed solution I found was to try deleting the “\Default” from the agent job step, which read something like this.

(Get-Item SQLSERVER:\SQLPolicy\SomeServer\Default).EraseSystemHealthPhantomRecords()

Yet another wonderful proposal from the internet suggested using Set-ExecutionPolicy to change the execution policy to UNRESTRICTED.

Failed “fix” after failed “fix” was all I was finding.  Then it dawned on me: I had several servers where this job did not fail, and plenty of examples of how the job should look.  Why not check those servers and see if something was different?  I found a difference, and I have since been able to apply the same fix on multiple occasions.

The server where the job was succeeding had this in the job step instead of the previously pasted code.

if ('$(ESCAPE_SQUOTE(INST))' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} ELSE {$a = ''};
(Get-Item SQLSERVER:\SQLPolicy\$(ESCAPE_NONE(SRVR))$a).EraseSystemHealthPhantomRecords()

That, to my eyes, is a significant difference.  Since changing the job step to use this version, the job has been running successfully for me without error.
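If you would rather apply the change via T-SQL than edit the step in the GUI, a sketch along these lines should work.  It assumes the default job name syspolicy_purge_history and that the PowerShell step is step 3, so verify both on your server first:

[codesyntax lang="tsql"]
USE msdb;
GO

-- Assumes the default job name and that the phantom-record
-- cleanup is step 3 of the job; check dbo.sysjobsteps first.
EXEC dbo.sp_update_jobstep
    @job_name = N'syspolicy_purge_history',
    @step_id = 3,
    @command = N'if (''$(ESCAPE_SQUOTE(INST))'' -eq ''MSSQLSERVER'') {$a = ''\DEFAULT''} ELSE {$a = ''''};
(Get-Item SQLSERVER:\SQLPolicy\$(ESCAPE_NONE(SRVR))$a).EraseSystemHealthPhantomRecords()';
[/codesyntax]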

I probably should have referenced a different server instead of resorting to the internet in this case.  That lesson applies to many things: check a different server, see if there is a difference, and see if you can get it to work there.  I could have saved time and frustration by simply looking at local “resources” first.

If you have a failing syspolicy purge job, check to see if it is failing on the phantom record cleanup.  If it is, try this fix and help that job get back to dumping the garbage from your server.

Day 10 – Retention of XE Session Data in a Table

This is the tenth installment in the 12-day series of SQL tidbits for this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New
  9. Queries Going Boom


Gathering event information is a pretty good thing.  It can do wonders for troubleshooting.  But what do you do if you need or want to review the captured information 3, or maybe 12, months from now?

Retaining the session data for later consumption is often an essential piece of the puzzle.  There is more than one way to accomplish that goal.  I am going to share one method that may feel more like a sledgehammer to some.  It requires that the Management Data Warehouse (MDW) be enabled prior to implementing it.

When using MDW to gather and retain session data, you create a data collector pertinent to the data being collected and retained.  In the following example, I have a data collector created to gather deadlock information from the system_health session.  In this particular case, I query the XML in the ring buffer to get the data that I want, and then tell the collector to gather that info every 15 minutes.  The collection interval is one of those things that needs to be adjusted for each environment: collect too often and you could end up with duplicated data; too seldom and you could miss some event data.  It is important to understand the environment and adjust accordingly.

Here is that example.

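Here is a sketch of what that collector can look like.  It uses the documented custom collection set procedures in msdb; the set name, item name, output table name, and retention period are all placeholders to adapt for your environment:

[codesyntax lang="tsql"]
USE msdb;
GO

DECLARE @collection_set_id int,
        @collection_set_uid uniqueidentifier,
        @collector_type_uid uniqueidentifier,
        @collection_item_id int;

-- Create the collection set on a built-in 15-minute collector schedule
EXEC dbo.sp_syscollector_create_collection_set
      @name = N'Deadlock Retention',
      @collection_mode = 1,   -- non-cached: collect and upload on the schedule
      @description = N'Retain system_health deadlock graphs in MDW',
      @days_until_expiration = 365,
      @schedule_name = N'CollectorSchedule_Every_15min',
      @collection_set_id = @collection_set_id OUTPUT,
      @collection_set_uid = @collection_set_uid OUTPUT;

-- The Generic T-SQL Query collector type runs our ring buffer query
SELECT @collector_type_uid = collector_type_uid
  FROM dbo.syscollector_collector_types
 WHERE name = N'Generic T-SQL Query Collector Type';

EXEC dbo.sp_syscollector_create_collection_item
      @name = N'Deadlock Graphs',
      @parameters = N'<ns:TSQLQueryCollector xmlns:ns="DataCollectorType">
<Query>
  <Value>
SELECT XEventData.XEvent.value(''(data/value)[1]'', ''varchar(4000)'') AS DeadlockGraph
  FROM ( SELECT CAST(st.target_data AS XML) AS TargetData
           FROM sys.dm_xe_session_targets st
           JOIN sys.dm_xe_sessions s
             ON s.address = st.event_session_address
          WHERE s.name = ''system_health''
            AND st.target_name = ''ring_buffer'' ) AS tgt
 CROSS APPLY tgt.TargetData.nodes(''RingBufferTarget/event[@name="xml_deadlock_report"]'') AS XEventData (XEvent)
  </Value>
  <OutputTable>deadlock_graphs</OutputTable>
</Query>
</ns:TSQLQueryCollector>',
      @collection_item_id = @collection_item_id OUTPUT,
      @frequency = 900,   -- seconds; only applies in cached mode
      @collection_set_id = @collection_set_id,
      @collector_type_uid = @collector_type_uid;

-- Start the collection set
EXEC dbo.sp_syscollector_start_collection_set @collection_set_id = @collection_set_id;
[/codesyntax]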

Looking this over, there is quite a bit going on.  The keys are the following parameters: @parameters and @interval.  The @parameters parameter stores the XML query to be used when querying the ring buffer (in this case).  It is important to note that the XML query here needs to ensure that the values node is capped to a max of varchar(4000), as shown in the following.

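The relevant fragment looks something like this (the column alias is illustrative):

[codesyntax lang="tsql"]
-- Cap the values node at varchar(4000) so the collector can store it
XEventData.XEvent.value('(data/value)[1]', 'varchar(4000)') AS DeadlockGraph
[/codesyntax]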

With this data collector, I have trapped information and stored it for several months so I can compare notes at a later date.

Day 9 – Queries Going Boom

This is the ninth installment in the 12-day series of SQL tidbits for this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New


Ever see an error like this?

The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.

Error 8623
Severity 16

That is a beautiful error.  The message is replete with information and gives you everything needed to fix the problem, right?  More than a handful of DBAs have been frustrated by this error.  And it’s not just DBAs that this message bothers; I have seen plenty of clients grumble about it too.

The obvious problem is that we have no real information as to which query caused the error to pop.  The frustrating part is that the error may not be persistent or consistent.

Thanks to the super powers of Extended Events (XE), we can now trap that information fairly easily.  As a bonus, trapping it is pretty lightweight as far as resource requirements go.

Without further ado, here is a quick XE session that could be set up to help trap this bugger.

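Here is a sketch of such a session; the session name and the two file paths are placeholders you will want to change:

[codesyntax lang="tsql"]
CREATE EVENT SESSION [QueryBoom] ON SERVER
ADD EVENT sqlserver.error_reported
(
    ACTION (sqlserver.sql_text, sqlserver.tsql_stack, sqlserver.database_id)
    WHERE ([error_number] = (8623))
)
ADD TARGET package0.event_file
(
    -- Change both paths to match your naming and directory structure
    SET filename = N'C:\Database\XE\QueryBoom.xel',
        metadatafile = N'C:\Database\XE\QueryBoom.xem'
)
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
      MAX_DISPATCH_LATENCY = 30 SECONDS,
      STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [QueryBoom] ON SERVER STATE = START;
[/codesyntax]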

And now for the caveats.  This session will only work on SQL Server 2012 and later.  The second caveat is that the two file paths defined in this session must be changed to match your naming and directory structure for the output files.

Should you try to create this session on SQL Server 2008 (or 2008 R2) instead of SQL Server 2012, you will get the following error.

Msg 25706, Level 16, State 8, Line 1
The event attribute or predicate source, "error_number", could not be found.

Now that you have the session, you have a tool to explore and troubleshoot the nuisance “complex query” error we have all grown to love.  From here, the next step is to explore the output.
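Exploring the output can be as simple as pointing sys.fn_xe_file_target_read_file at the files the session wrote (using the same hypothetical path as above):

[codesyntax lang="tsql"]
-- Read the session's output files and surface the raw event XML
SELECT CAST(event_data AS XML) AS EventData
  FROM sys.fn_xe_file_target_read_file(
         N'C:\Database\XE\QueryBoom*.xel', NULL, NULL, NULL);
[/codesyntax]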

Tools note

Categories: News, Professional, SSC
Comments: No Comments
Published on: March 24, 2011

I was asked a few months ago to allow some advertising on my blog.  I hadn’t given the idea a whole lot of thought but was reminded again today about the request.  For now, I think I will just give the requester a mention and add the tool he wants advertised to the growing list of tools that I have been compiling.

The tool looks like it has some merit and seems worth a trial.  The tool is called DBDefence, and you can find more about it here.  Its purpose is to encrypt your databases from outside the database and prevent people from prying too far into what they shouldn’t.  I haven’t yet tested the tool – but it seems quite interesting.

And since I have a captive audience already, I will add one more tool to the mix.  Like DBDefence, this other tool is security focused.  In SQL Server there is a bug that allows the savvy to view the unencrypted password of the SQL accounts that have logged in since the last service start.  One of the passwords that falls into this category is that of the sa account.  Sentrigo has a tool that will clear that password from memory so it can no longer be viewed in clear text.  The name of the tool is Passwordizer, and you can check it out from here.

SQL Server Startup

From time to time I see a question asking how to determine the last time a SQL Server was restarted, or when a database was last used.  For the first question there is a pretty common answer that works for SQL 2000 and up.  To answer both questions, though, I am going to show you three quick, similar scripts that can be used to find this information.  The catch is that these queries will not work on SQL 2000.  On the flip side, they should give you an idea of how to find the same info on SQL 2000.

TempDB

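A sketch of the tempdb check (SQL 2005 and up):

[codesyntax lang="tsql"]
-- tempdb is recreated at every service start, so its create_date
-- approximates the last SQL Server startup time
SELECT create_date AS ServerStartTime
  FROM sys.databases
 WHERE name = 'tempdb';
[/codesyntax]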

A common answer to the question of when SQL Server was last started is to check the create_date of the tempdb database.  Since tempdb is recreated every time SQL Server is restarted, it is a pretty accurate timestamp of the server start.  This query was written for SQL 2005 and above, but you can still find the creation date of tempdb in SQL 2000 by using a different query.
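On SQL 2000, the equivalent information lives in the crdate column of sysdatabases:

[codesyntax lang="tsql"]
-- SQL 2000 variant: sysdatabases exposes the creation date as crdate
SELECT crdate AS ServerStartTime
  FROM master.dbo.sysdatabases
 WHERE name = 'tempdb';
[/codesyntax]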

sys.dm_exec_requests

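A sketch of that query follows; the CTE aggregates the read and write activity per database, while the outer query grabs the start time of the first process id:

[codesyntax lang="tsql"]
WITH DBActivity AS (
    -- Most recent read and write activity per database
    SELECT DB_NAME(database_id) AS DatabaseName,
           MAX(COALESCE(last_user_seek, last_user_scan, last_user_lookup)) AS LastRead,
           MAX(last_user_update) AS LastWrite
      FROM sys.dm_db_index_usage_stats
     GROUP BY database_id
)
-- session_id = 1 is a system session whose start_time
-- approximates the service start
SELECT r.start_time AS ServerStartTime,
       d.DatabaseName,
       d.LastRead,
       d.LastWrite
  FROM sys.dm_exec_requests r
 CROSS JOIN DBActivity d
 WHERE r.session_id = 1;
[/codesyntax]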

In this query we access one of those Dynamic Management Views available in SQL 2005 and up.  In this case, we look at the first process id, similar to what one may have done with sysprocesses in SQL 2000.  Now is probably a good time to also note something happening inside the CTE.  Notice the COALESCE being used?  I am taking the MAX of the fields associated with read activity in a database.  This information is pulled from a DMV called sys.dm_db_index_usage_stats.  The query gets me the most recent read and write activity for each database (notice the GROUP BY).

Default Trace

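A sketch of the default trace approach; the path comes from sys.traces, and the startup event is the row with EventSequence = 1 and IsSystem = 1:

[codesyntax lang="tsql"]
DECLARE @path nvarchar(260);

-- Grab the current default trace file path
SELECT @path = [path]
  FROM sys.traces
 WHERE is_default = 1;

SELECT StartTime AS ServerStartTime
  FROM sys.fn_trace_gettable(@path, DEFAULT)
 WHERE EventSequence = 1
   AND IsSystem = 1;
[/codesyntax]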

The last one makes use of the default trace in SQL 2005, 2008, and 2008 R2.  The default trace records the time of certain events that occur in the instance, including when the server is starting up.  To find the server startup time in this case, I check the EventSequence and IsSystem fields for a value of 1.  As you can see, I also dump the file path for the default trace into a variable and use that in the function to get the data.

When comparing the performance of these three options, they came out pretty much even.  Each took its turn being the fastest in elapsed time.  However, the default trace method returned a more expensive execution plan every single time.  With how fast these queries run, I’m not sure that is conclusive, or that it should be a big concern: these queries are designed for the occasional run by the DBA rather than to be executed millions of times a day.  You decide which will be best for you.  Given the use of the function to get the default trace info, I was mildly surprised that the query performed that well.

Another thing to note is that each of these methods returns a slightly different timestamp.  For instance, the tempdb timestamp gives me 3:21:28, sys.dm_exec_requests produces 3:21:36, and the trace file shows 3:21:24.  I don’t have any hard foundation for why that is – just speculation that seems to make sense.  I will leave it at that.

In addition to these methods there is also the option of checking the system event logs to determine the SQL Service startup time.  This information should help you when investigating your databases and server and would be something handy to keep in your toolbox.

Database Tools Follow-up

Categories: News, Professional, SSC
Comments: No Comments
Published on: March 18, 2011

After posting a database tools list yesterday, I found that there were things I had forgotten and a couple of things I hadn’t even considered.  Thinking about this, I found myself trying to find a script that I knew I had seen and wanted to put on the list.

Trying to find the script got me a bit out of sorts.  I knew I could probably find it on my share at work with my scripts (which reminds me that I really need to get a Dropbox or put them all on a thumbdrive to keep with me).  But I wasn’t at work and wanted to locate this cool script that I knew was recommended by another prominent person (more about that later).  So I pulled out my googlefu.

I tried searching the web for this blog that had the script and tried several combinations of the key words I knew would help me find it.  In doing this search, I came across a few blogs that had created a list of tools that database professionals should check out.  Here is a short list of the blogs I found.

All of these lists are great compilations.  I also found one more list, but it was just a copy of Aaron’s list.  This was extremely disappointing.  Just reference the list or come up with one of your own.  A SQL Server MVP should never have to copy content from another MVP.  It’s not even that the list was similar – it was a copy of Aaron’s entire post.  I personally hold MVPs to a higher standard and find it disappointing and aggravating when someone esteemed as a major contributor to the community copies content.

None of these lists had the item for which I was looking.  The item I wanted was a script that Brent Ozar (blog) had given high compliments a few months back – a script that, he said, is “like my Blitz server takeover script, but even more advanced.  Love it!”  Too bad I couldn’t remember the person, the script, or the wording he used when I was trying to find it.  Better yet, the page with the reference popped up several times in my searches.  Needless to say, I pinged the community and Brent on twitter, and Brent came through.  Here are a few more items to add to the list of tools you should have or try.

  1. 60 Minute Blitz Script (Brent Ozar) – check it out.  It will be worth your time.  If you looked at the other lists you will also see that this script is there and highly recommended.
  2. Diagnostic Information Queries (Glenn Berry) – this was the script I was trying to find.  This is the script that Brent has given high praise – give it a look.
  3. sp_whoisactive (Adam Machanic) – The newest version as of this post is 10.76, listed as a beta version.
  4. SSMS ToolsPack (Mladen Prajdic)

So there you have it.  Between my two lists and these other great lists, you should have a great toolbox.

Database Tools

Categories: News, Professional, SSC
Comments: 5 Comments
Published on: March 17, 2011

Recently I posted a couple of articles about various database or SQL tools, for either maintenance or benchmarking.  Those posts work well in conjunction with a post from several months back that you can read here.  These posts made me think a bit harder about the tools that I use to do the job.

Thinking about those tools, I realized that I did not have a list of what I have used or currently use.  Thus, I went through the exercise of compiling my list.  Some of these tools are currently in use; some are tools I used in the past and was impressed by.  And someday I will finish compiling the list of all the tools I have encountered over the past year or two that I have intended to test.

Without further ado:

  1. SSMS Addins – This is available on codeplex.  One of the features that intrigues me is the ability to script out the data from the table.
  2. OpenDBDiff - This one seems to have promise.  Similar in function to Visual Studio 2010 or RedGate SQL compare tools, this one is free and compares the schema and objects.  If you are on a tight budget and need to be able to compare two databases, this just might be the tool to try.
  3. SQL Monitor – A tool to monitor various things in SQL Server like jobs and executing queries.  Kind of low-level, but I figured I would test this app out and see if it had some merit.
  4. SQL nexus – This is a tool to help evaluate performance issues with SQL Server.  You can evaluate wait stats along with PSSDiag files.
  5. SQL Powershell Extensions – I recently learned of this tool on Codeplex.  This is a high priority item for me to download and test.  This tool helps to create “intuitive functions around the SMO objects.”
  6. PowerShellPack - Download from Microsoft to enhance the powershell experience.
  7. Data Dictionary – This software is on my list to evaluate.  It is mostly out of curiosity because I have something in place to create data dictionaries already.  This tool allows you to update the extended properties from a GUI.
  8. US Census Data – I think this one is intriguing as a sample data set.
  9. SQL Schema Source Control – This is an SVN plugin
  10. ScriptDB4SVn - Another SVN Plugin to get your database projects into source control.
  11. SQL Source Control (RedGate) – Do you detect a theme going on now?  This is a commercial product to integrate into SVN or TFS.  It integrates into SSMS and has received many great reviews.  I have seen it in use and it is a good product.
  12. SQL Diagnostic Manager (Idera) – I used this tool a lot a few years back.  The tool has gotten better since.  I need to get another license for it and try it again.
  13. Confio Ignite – I was a part of a focus group testing this tool.  I was highly impressed by the tool.  Ignite allows you to gather waitstats and other diagnostic information to monitor the health of the server.  I would highly recommend this tool.
  14. TOAD (Quest Software) – I used this tool a few years ago and liked it.  This tool is useful for determining quickly the alternatives to writing your query in a few different ways and to view the performance impact of those changes.
  15. DBA Bundle and Developer Bundle (RedGate) – Alternatively, you could look for the Toolbelt by RedGate.  The Bundles are chock full of high value great tools to do the job.
  16. SQL Scripts Manager – This is a collection of Scripts from various contributors that has been made available for free by our friends at RedGate.
  17. Dr. DMV – Glenn Alan Berry has some awesome scripts for use on your 2005 and 2008 servers.  These scripts make great use of the DMVs in SQL Server.
  18. DBA Dashboard – This is a set of reports put together to help you identify resource usage and the source of that resource consumption.
  19. SQLPing3 – Security type tool to help you discover SQL Servers on the network.
  20. Discovery Wizard for SQL Server (Quest Software) – A tool to help discover SQL Instances on the network.
  21. SQLCentric – By Robert Pearl, this is a web-based monitoring and alerting tool for your SQL Servers.
  22. Power Architect – I used this tool largely for helping to document some data models.  This is a reasonably priced tool and it works quite well.
  23. SQLIO - This one is from our friends at Microsoft and I think the name explains it.
  24. SQLIOSim - Another tool from Microsoft whose name, I think, explains it.
  25. IOMeter – Another IO tool
  26. GeekBench - This tool will quickly measure processor and memory and provide some benchmarks.
  27. Plan Explorer (SQLSentry) – I find this tool extremely useful.  The execution plans are much easier to read in this tool than in SSMS.  I use both to compare and contrast and am able to more quickly ascertain the pain points of a query.  The readability of Plan Explorer is great and the additional features really help augment your abilities to query tune based on Execution Plans.

There you have it!  That is quite the list.  I don’t have all of these installed, but I would recommend trying some of them out and getting them into your toolbox.  A good recommendation is to install them into a virtual machine while you are testing the software.  One of my goals for the year is to get all of my preferred tools installed into a VM that I can carry with me.  That way, I would have them readily available with little to no downtime spent installing them.

If you have any tools that you think are great and should be in the DB Professional toolbox, please leave a comment or drop me a note.

SQL Server Standard Reports Winforms

In SQL Server, there is a good set of preloaded reports to help you monitor the instance.  These reports can be found under the Standard Reports and reached via the context menu.  To find them, right click your instance in Object Explorer from within SSMS, then navigate the context menu – much like the menus shown here.

You can see from the image that there is a decent number of good reports available to choose from.

When you run one of these reports, it is like a report you would run from SSRS.  However, SSRS does not need to be installed in order to run them.

Occasionally, you may encounter an error when trying to run these reports.  The error is as follows:

The file ‘Microsoft.ReportViewer.WinForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a’ cannot be opened.

This error message commonly pops up after an upgrade (CU or SP) to SQL Server is performed.  Sometimes, it can crop up without an upgrade having been recently performed.

In the event that you encounter this particular error, you should try installing BIDS.  Though it is not necessarily a prerequisite, it is a good idea to have it installed anyway, and doing so also resolves this error message.

Another method that has worked for me to resolve this error is to install Microsoft Report Viewer 2008 SP1.

Besides looking into these Standard Reports and hopefully circumventing the aforementioned error, there is another recommendation.  In SQL 2005, there is a download for the Performance Dashboard Reports, and in SQL 2008 there is a new feature called the Management Data Warehouse.

You can download the Performance Reports from here.

You can see how to configure the Management Data Warehouse here.

Oh, and if you want to retrofit SQL 2008 with the 2005 Performance Dashboard, you will have to tweak it a bit.  You can read up on that here.

DB Benchmarking

Comments: 3 Comments
Published on: March 14, 2011

As database professionals, we need to benchmark the performance of the database, its processes, and the system overall.  When benchmarking, it is preferable to get a baseline, then run the same benchmark tests on a periodic basis and compare those results to the baseline.

Recently I was reminded of a couple of tools that should be in every DB professional’s vocabulary and tool set.  Each is used for a different purpose.  Those tools are:

  1. CPU-Z
  2. TPC-E

CPU-Z is a freeware app that gathers information about the motherboard, CPU, and memory.  It helps you determine processor usage and whether you are running your hardware at an optimal level.

TPC-E simulates an OLTP workload.  It is designed to be representative of OLTP systems and is scalable.

Both of these tools can be of great use to the database professional.  Check them out and see what you can learn by using them.
