Day 12 – High CPU and Bloat in Distribution

This is the final installment in the 12-day series of SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New
  9. Queries Going Boom
  10. Retention of XE Session Data in a Table
  11. Purging syspolicy


Working with replication quite a bit with clients, you are bound to run across some particularly puzzling problems.  This story should shed some light on one such issue that I have seen on more than one occasion.

While working with a multi-site, multi-package replication topology, the CPU was constantly running above 90% utilization and there seemed to be a general slowness even in basic Windows operations.

Digging into the server to find what might have been causing the slowness and high CPU took some time.  Performing an overall server health check helped point in a general direction.

Some clues from the general health check were as follows.

  1. The distribution database was over 20GB.  That is not necessarily a bad thing on its own, but the databases behind all of the publications weren’t that big.
  2. The distribution cleanup job was taking more than 5 minutes to complete.  Had the job actually been cleaning up records, this might not have been an indicator.  In this case, 0 records were cleaned up on each run.
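For reference, the cleanup job runs the distribution cleanup procedure shown below.  Running it manually in the distribution database makes it easy to watch how many rows actually get removed (the retention values shown are the defaults used by the job):

```sql
-- Run in the distribution database.
-- 0 and 72 hours are the default min/max retention values
-- used by the "Distribution clean up" job.
EXEC dbo.sp_MSdistribution_cleanup
    @min_distretention = 0,
    @max_distretention = 72;
```

If this consistently reports 0 rows removed while MSrepl_commands keeps growing, that is the symptom described above.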

The root cause seemed to point to a replication misconfiguration.  The misconfiguration could have been anywhere from the distribution agent to an individual publication.  Generally, though, the real problem lies more in the configuration of an individual publication than in any other setting.

When these conditions are met, it is a good idea to check the publication properties for each publication.  Dive into the distribution database and try to determine whether any single publication is the root cause and is retaining more replication commands than the others.  You can use sp_helppublication to check the settings for each publication, and you can query MSrepl_commands in the distribution database to correlate the retained commands to their publications.
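One way to correlate retained commands to their publications, assuming the default distribution database name, is a query along these lines against the standard distribution tables:

```sql
USE distribution;

-- Count retained commands per publication to spot the outlier.
SELECT  p.publication,
        COUNT_BIG(*) AS command_count
FROM    dbo.MSrepl_commands        AS rc
JOIN    dbo.MSpublisher_databases  AS pd
        ON  pd.id = rc.publisher_database_id
JOIN    dbo.MSarticles             AS a
        ON  a.publisher_db = pd.publisher_db
        AND a.article_id   = rc.article_id
JOIN    dbo.MSpublications         AS p
        ON  p.publication_id = a.publication_id
GROUP BY p.publication
ORDER BY command_count DESC;
```

A single publication holding the overwhelming majority of the commands is a strong hint about where to focus.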

Once you have checked all of this information, it’s time to put a fix in place.  It is also time to do a little research before actually applying the fix.  Why?  Because you will want to make sure this is an appropriate change for your environment.  For instance, you may not want to try this in a peer-to-peer topology, in part because one of the settings can’t be changed in a peer-to-peer topology.  I leave that challenge for you to discover in a testing environment.

The settings that can help are as follows.

These settings can have a profound effect on distribution retention, the cleanup process, and your overall CPU consumption.  Please test and research before implementing these changes.

Besides the potential benefits just described, there are other benefits to changing these settings.  For instance, changing replication articles becomes less burdensome once these settings are disabled: disabling them helps reduce the snapshot load and allows a single article to be snapshotted out to the subscribers instead of the entire publication.

Day 5 – Peer Identity

This is the fifth installment in the 12-day series of SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker

In the digital age it seems we are constantly flooded with articles about identity crises, from identity theft to mistaken identity.  SQL Server is not immune to these types of problems and stories.  Whether SQL Server was housing the data that was stolen, leading to identity theft, or is having an identity management issue of its own, SQL Server is definitely susceptible to identity issues.

The beauty of SQL Server is that these identity issues seem to be most prevalent when trying to replicate data.  Better yet is when replication has multiple peers set up in a Peer-to-Peer topology.

When these identity problems start to crop up, there are a number of things that can be done to try to resolve them.  One can try to manually manage the identity ranges, or one can flip the “Not for Replication” attribute on the table, as two possible solutions.
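Flipping that attribute can be scripted as well as set through the article properties.  A sketch using the procedure replication setup uses internally, with a hypothetical table name:

```sql
-- Mark the identity column on dbo.Orders (hypothetical table) as
-- NOT FOR REPLICATION so the replication agent does not consume
-- identity values when applying commands at the subscriber.
DECLARE @objid int = OBJECT_ID(N'dbo.Orders');
EXEC sys.sp_identitycolumnforreplication @objid, 1;  -- 1 = on, 0 = off
```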

The identity crisis in replication gets more fun when there are triggers involved.  The triggers can insert into a table that is not replicated or can insert into a table that is replicated.  Or even better is when the trigger inserts back into the same table it was created on.  I also particularly like the case when the identity range is manually managed but the application decides to reseed the identity values (yeah that is fun).
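When you suspect the application has reseeded the identity, DBCC CHECKIDENT lets you inspect (and, if needed, correct) the current seed without inserting anything.  A sketch with a hypothetical table name:

```sql
-- Report the current identity value and the maximum value in the
-- column without changing anything.
DBCC CHECKIDENT (N'dbo.Orders', NORESEED);

-- If the seed has fallen behind (or jumped ahead of) the assigned
-- range, reset it explicitly to a value inside the correct range.
-- DBCC CHECKIDENT (N'dbo.Orders', RESEED, 500000);
```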

In one particular peer-to-peer topology I had to resort to a multitude of fixes depending on the article involved.  In one case we flipped the “Not for Replication” flag because the tables acted on via trigger were not involved in replication.  In another we disabled a trigger because we determined the logic it was performing was best handled in the application (it was inserting a record back into the table the trigger was built on).  And there was that case where the table was being reseeded by the application.

In the case of the table being reseeded we threw out a few possible solutions but in the end we felt the best practice for us would be to extend the schema and extend the primary key.  Looking back on it, this is something that I would suggest as a first option in most cases because it makes a lot of sense.

In our case, extending the schema and PK meant adding a new field to the PK and assigning a default value to that field.  We chose @@SERVERNAME as the default value.  This gave us a quick location identifier on each record and offered us a quick replication check to ensure records were getting between all of the sites (besides relying on Replication Monitor).
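A sketch of that schema change, with hypothetical table, column, and constraint names:

```sql
-- Add a site identifier that defaults to the local server name.
ALTER TABLE dbo.Orders
    ADD SiteName sysname NOT NULL
        CONSTRAINT DF_Orders_SiteName DEFAULT (@@SERVERNAME);

-- Rebuild the primary key to include the new column, so the same
-- identity value generated at two peers no longer collides.
ALTER TABLE dbo.Orders
    DROP CONSTRAINT PK_Orders;

ALTER TABLE dbo.Orders
    ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID, SiteName);
```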

When SQL Server starts throwing a tantrum about identities, keep in mind that you have options.  It’s all about finding a few possible solutions, or a mix of solutions, proposing them, and then testing and implementing them.

One of the possible errors you will see during one of these tantrums is as follows.

Explicit value must be specified for identity column in table ‘blah’ either when IDENTITY_INSERT is set to ON or when a replication user is inserting into a NOT FOR REPLICATION identity column.

Day 3 – Reviewing Peers

Published on: December 27, 2013

This is the third installment in the 12-day series of SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time

Remember back in the day when Napster was really popular?  I’m sure it is still popular now – just not in the same league as the early years.  Napster pretty much made some revolutionary changes in file-sharing across the internet.  Now the algorithms and the method have become more advanced and use a hive approach, but it still pretty much boils down to the setup that Napster used – Peer to Peer.


In the P2P file-share world, every node had a part to play.  If you downloaded a file, your machine could also upload that file or other files to the network for other users.  This approach required full files in order to work.


In the Hive approach, the files are broken up into chunks.  So you still participate on a P2P basis, but you no longer need to have the complete file to participate.  (I am probably over-generalizing, but that is ok – the point is coming soon.)  This helped transfers be quicker and the P2P network/hive to be larger (in case you were wondering).

Now, let’s take that idea and move it down to a smaller chunk of data.  What if we did that with a database and only sent a record at a time to a partner and that partner could send a few records back to the first partner/peer?  Now we have something that could be pretty usable in many scenarios.  One such scenario could be to sync data from the same database at ten different locations (or maybe 100 different locations) so all locations would have current information.

Well, SQL Server does have that technology available for use.  Coincidentally enough, it is called Peer-to-Peer replication.  Truth be told, it is really more of a two-way transactional replication on steroids.  In SQL 2008, you had to set up transactional replication in order to create the P2P topology.  But in SQL 2012, there is now a Peer-to-Peer option among the publication types.

Setting up P2P replication in SQL 2012 is pretty easy to do.  Here is a quick step-through on doing just that.  I will bypass the setup of the distributor and jump straight into setting up the publication through to the point of adding peers.  From that point, it will be left to you to determine what kind of subscription (push/pull) you use and to figure out how to configure those types.
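If you prefer a script to the wizard, the rough T-SQL equivalent starts with sp_addpublication and the P2P flag.  This is only a sketch with hypothetical names; note that P2P publications must be continuous and active:

```sql
-- Run at the publisher, in the database to be published.
EXEC sp_addpublication
    @publication       = N'P2P_Demo',
    @enabled_for_p2p   = N'true',
    @repl_freq         = N'continuous',
    @status            = N'active',
    @allow_push        = N'true',
    @independent_agent = N'true';
```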


The first step is to expand the tree in SSMS until you see replication and then to expand that to see “Local Publications.”  From “Local Publications,” right click and select “New Publication.”


Then it is almost as easy as following the prompts as I will show in the following images.  You need to select the database you wish to be included in the P2P publication.


Then it is a matter of selecting the publication type.  Notice here that Peer to Peer has been highlighted.


Of course, no replication is complete without some articles to include in the replication.  In this case, I have chosen to just replicate a few of the articles and not every article in the database.  When replicating data, I recommend being very picky about what articles (objects) get included in the replication.  No sense in over-replicating and sending the entire farm across the wire to Beijing, London, Paris and Moscow.
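The article selection can likewise be scripted with sp_addarticle; a minimal sketch with hypothetical publication and table names:

```sql
-- Add one table to the publication rather than the whole database.
EXEC sp_addarticle
    @publication   = N'P2P_Demo',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders';
```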


Once the articles are selected, it will be time to set up the agent security.  Again, this is pretty straightforward.  And in my contrived setup, I am just going to rely on the SQL Server Agent service account.  The screen will inform you that this is not best practice.  I will leave that as an exercise for you to explore.


With that last piece of configuration, the publication is ready.  Just click your way through to finish.

Once the publication is complete, it is time to add a subscriber to the publication.  That is easily accomplished by right-clicking the publication.  Since this is a P2P publication, we need to select “Configure Peer-To-Peer Topology…”


Selecting that menu option will bring up the wizard.  The first step in the new wizard is to pick the publisher, and the publication at that publisher, that needs to have the topology configured.


After selecting the publisher and publication then I can add nodes to the P2P topology by right-clicking the “map” (as I like to call it) area.  Select “Add a New Peer Node” from the menu and then enter the appropriate details for the new subscriber.


It is here that I will conclude this short tutorial.  Configuring the topology is an exercise best left to each individual circumstance.  Configuring where the pull subscribers will be and where the push subscribers will be is almost an art.  Have fun with it.

I have had the opportunity to use this kind of setup on a large multi-node setup across several sites.  It runs pretty smoothly.  Sometimes it can get to be a hair-raising event when a change gets introduced that borks the schema.  But those are the events that permit you to learn and grow and document what has happened and how to best handle the issues in your environment.

I have even taken a multi-site P2P setup and added a one-way subscriber (as if it were a transactional publication) so the subscriber could just receive the information and run reports without pushing changes back up into the rest of the topology.  That also works pretty well.  Document the design and be willing to change it up in case there appears to be latency and too much peer pressure.
