T-SQL Tuesday #051: Bets and Results

Comments: 2 Comments
Published on: February 18, 2014


The line for this month's TSQL Tuesday required that wagers be made concerning the risks and bets that have either been made or not made.

At close, we saw 17 people step up and place remarkable markers.  Today, we will recap the game and let you know who the overall winner from this week of game play in Vegas just happened to be.


 

This is about some bets, so we needed to understand some of the hands that might have won, right?

Let’s see the hands dealt to each of our players this past week.

Andy Galbraith (b|t) shared a full house of risk this month when talking about backups.  Do you really have a backup if you haven't tested it?

“without regular test restores, your backups do not provide much of a guarantee of recoverability.  (Even successful test restores don’t 100% guarantee recoverability, but it’s much closer to 100%).”

 

Boris Hristov (b|t) thought he was feeling lucky.  He couldn't imagine things getting worse.  He even kept reminding himself that it couldn't get worse.  He was dealt a hand and it was pretty good – and then everything just flushed down the drain.

A disaster with replication and with the storage system – ouch!

 

Chris Yates (b|t) wanted to push his hand a little further than Andy this week.  Chris went all-in on his backups.  At least he went all-in early in his career.

The gamble, you ask?  Chris didn't test the backups until after he learned an important lesson.

“I’ve always been taught to work hard and hone your skill set; for me backups fall right into that line of thinking. Always keep improving, learn from your mistakes.”

 

Doug Purnell (b|t) shares another risky move.  In this hand, Doug thought he could parlay maintenance plans into an enterprise-level backup solution.

What Doug learned is that maintenance plans don’t offer a checksum for your backups.  After learning that, he decided to stay and get things straight.

 

Jason Brimhall (b|t) took a different approach.  I took the approach of looking at how these career gambles may or may not impact home, family, health, and career in general.

There is a life balance to be sought and gained.  It shouldn’t be all about work all the time.  And if work is causing health problems, then it is time for a change.

It’s important to have good health and enjoy life away from work.

 

Jeffrey Verheul (b|t) had multiple hands that many of us have probably seen.  I'd bet we would even be able to easily relate.

In the end, what stuck with me was how more than once we saw Jeffrey up the ante with a story of somebody who was not playing with a full deck.  If you don’t have a full deck, sometimes the best hand is not a very good one overall.

 


Joey D’Antoni (b|t) had a nightmare experience that he shared.  We have all seen too many employers like what he described.

The short of it is summed up really well by Joey.

“The moral of this story, is to think about your life ahead of your firms. The job market is great for data pros—if you are unhappy, leave.”

 

K. Brian Kelley (b|t) brought us the first four of a kind.  Not only did he risk life and limb with SQL 7, but he tried to do it over a WAN link that was out of his control.

When he bets, he bets BIG!  DTS failures, WAN failures, SQL 7, SQL 2000, low bandwidth, and somebody playing with the knobs and shutting down the WAN links while laughing devilishly at the frustration they were causing.

 

Kenneth Fisher (b) thought he would try to one-up Jeffrey by having employers that would not play with a full deck either.

From one POS time tracking system to another POS time tracking system to yet another.  Apparently, time tracking was doomed to failure and isn’t really that important.

That seems to be a lot of hefty wagers somebody is willing to lay down.

 

Matt Velic (b|t) brought his A-game.  He was in a take-no-prisoners kind of mood.

Matt decided he was going to reel you in, divert your attention, and then lay down the wood hard.  Don't try to get anything past Matt – especially if it reeks of shifty and illegal.

The way he parlayed his wagers this month was a riot.

 

Mickey Stuewe (b|t) was the only person willing to double down and to even try to place a bet on snake-eyes.  With the two-pronged attack at doubles, she was able to come up with two pairs.

To compound her doubles kind of wagers, she was laying down markers on functions.  Check out her casino wizardry with her display of code and execution plans.

 

Rob Farley (b|t) was a victim of his own early success.  He had a lucky run and then it seemed to peter out a bit.  In the end he was able to manage an Azure high hand.

Rob reminds us of some very important things with his post.  You can get lucky every now and again and be successful without a whole lot of foresight.  Be careful and try to plan and test for the what-if moment.

 

Robert Pearl (b|t), aka Bobby Tables, rolled the dice in this card game.  He was hoping for a pair of kings with his pair of clusters and the planned but unplanned upgrade.

There is nothing like a last minute decision to upgrade an “active-active” cluster.  In the end Bobby Tables had an Ace up his sleeve and was able to pull it out for this sweet pair.

 


Russ Thomas (b|t) asks: ever have the business buy some software and then thrust it on IT to be installed at the last minute?

That is almost what happened in this story that had some interesting yet eventual results.

Russ weaves the story very well, but don't take your eye off the game at hand!!

 

Sebastian Meine (b|t) brought needles to the table.  That is wicked crazy and leaves quite the impression.

Maybe he thought he was going to inject some cards into the game to improve his hand.  I was almost certain he had nothing going, but magically he was able to produce some favorable data.

Oh, that was the point of his post!  Have a weakness? It will be found, injected and exploited.

 

Steve Jones (b|t) had a crazy house going.  Imagine 2000 or so people all trying to help you make your bets and play your hand.  That is a FULL house.

Of course, his full house had more to do with a misunderstood risk in the application that was causing performance problems in the database.

In the end, they fixed it and it started working better.  A little testing would have gone a long way on this one!

 

Wayne Sheffield (b|t), in perhaps the most disappointing and surprising turn of events, ended up with a hand that could have won but he folded.

Well, Wayne didn’t fold but there were some bets that resulted in people folding and maybe worse in the story that Wayne shares.  This can happen when you are betting on something you know nothing about and really should get somebody to help make the correct bets for you.

 


And to recap, the overall winner was…

the HOUSE, with a winning hand of a royal flush.

Thanks to all of the SQLFamily for participating this month.  There were some really great experiences shared.  The posts were great and it was a lot of fun.  I hope you got as much enjoyment out of the topic and articles this month as I did.

Risking Health, Life and Family

Comments: 2 Comments
Published on: February 11, 2014

Since announcing the topic last week for T-SQL Tuesday, I have thought about many different possibilities for my post.  All of them would have been really good examples.  The problem has not been the quality but in the end just settling on my wager for this hand.

You see, this month T-SQL Tuesday has the theme of risks, betting on a technology, solution or person, or flatly having had an opportunity and not taken it (that’s a bet too in a sense).  Sometimes we have to play it safe, and sometimes we have to take some degree of risk.

If you are interested, the invite for T-SQL Tuesday is here and the deadline for submission is not until Midnight GMT on 12 February.

It’s a Crapshoot

When all the dice finally settled, I decided it would be best for me to talk about some recent experiences in this post.

First, a little dribble with the back story.  Just don't lose your focus on the prize with this PK*.  Readers, please don't Press and be patient during this monologue.

Over the past year I have been pushing hard with work and SQL.  I was working for a firm as a part of their remote DBA services offering.  As time progressed and I became more and more tenured with the firm, I found that I was working harder and harder.  Not that the work was hard, but that there was a lot of it.

Stress rose higher and higher (I must have been oblivious to it).  At one point I started getting frequent migraines.  I went to the doctor to try and figure things out.  I visited the chiropractor to try and figure things out.  The chiropractor proved to be useful and had some profound advice.  He asked me how many hours I would sit in front of the computer on a daily basis (since that was my job).  My reply to him shocked him pretty good.  I was putting in regular 20 hour days.

Having weekly chiropractor sessions helped somewhat with the migraines but it was not nearly enough.  I figured I would just have to deal with it since we couldn’t figure out what the root cause was (yeah we were trying to perf tune this DBA).

In addition to the chiropractor and traditional medicine to fight migraines, I also tried some homeopathic remedies.  Again, similar results.  It seemed to help but wasn’t an overall solution and not a consistent solution.

Later in the year I found something that seemed to help a little with the migraines too.  I started using Gunnars.  Sitting in front of a computer for 20 hours a day on most days, it made sense there might be some eye strain.  Wearing the Gunnars, I immediately felt less eye strain.  That was awesome.  Too bad it did not reduce the migraines.

After more than a year of having regular migraines, I found that the migraines started occurring more regularly (yes there was a baseline).  Near the end of 2013, I found that there was a period that I had eight straight migraine days.  These migraines typically lasted the duration of the day and there wasn’t much I could do outside of just dealing with it and making sure work got done.

Notice the risk?  What are all of the risks that might be involved at this point?  Yes, I was risking my health, family and work.

Russian Roulette

Near the end of the year 2013, I made a very risky decision.  I decided to part ways with the firm and pursue a consulting career.  This was as scary as could possibly be.  I was choosing to leave a “Safe” job knowing that I had a job and secured income – so long as the company did well.

Not only was I choosing to gamble with the job change and risking whether or not I would have work flowing in to keep me busy, I was also risking the well-being of my family.  With a family, there is the added risk of ensuring you provide for them.  This was a huge gamble for me.  Not to mention the concern with the migraines and whether I would be able to work this day or that based on the frequency and history of these things.

In this case, the bet on Green came up GREEN!  Over two months into this decision I have yet to have a migraine.  For my health this was the right decision.  I have also been lucky enough to be able to get myself into the right consulting opportunity at the right time with the right people.  Because of that, we have been able to keep me busy the whole time.

With all of that said, thanks to Randy Knight (@randy_knight) for bringing me in as a Principal Consultant at SQL Solutions Group.  With the change to consulting, Randy has helped to keep my hours down to less than 20 hours a day.


The thing about those 20 hour days is that there were several people trying to get me to back off.  They’d say things like “leave it for tomorrow” or “the work will still be there.”  That may be true, but the firm's clients had certain expectations.  Learning when to back off and when to keep the foot on the gas pedal is something everybody needs to learn.  For me, I felt I had to do it because it was promised to the client.  Now, as a consultant, I feel I can better control when those deliverables are due.  Thanks to Wayne (@DBAWayne) for continuing to point this out as a symptom of “burnout.”

In the end, it took making a risky change to avoid the burnout and get my health back under control.

*PK in this case is a term for a pick ‘em bet and not in reference to a Primary Key as is commonly used in SQL Server.

Day 12 – High CPU and Bloat in Distribution

This is the final installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New
  9. Queries Going Boom
  10. Retention of XE Session Data in a Table
  11. Purging syspolicy


Working with replication quite a bit with some clients, you might run across some particularly puzzling problems.  This story should shed some light on one particularly puzzling issue I have seen on more than one occasion.

In working with a multi-site and multi-package replication topology, the CPU was constantly running above 90% utilization and there seemed to be a general slowness even in Windows operations.

Digging into the server took some time to find what might have been causing the slowness and high CPU.  Doing an overall server health check helped point in a general direction.

Some clues from the general health check were as follows.

  1. The distribution database was over 20GB.  This may not necessarily have been a bad thing, but the databases across all the publications weren’t that big.
  2. The distribution cleanup job was taking more than 5 minutes to complete.  Had the job been cleaning up records, this might not have been an indicator.  In this case, 0 records were cleaned up on each run.

The root cause seemed to point to a replication misconfiguration.  The misconfiguration could have been anywhere from the distribution agent to an individual publication.  Generally, it seems that the real problem lies more in the configuration of an individual publication than in any other setting.

When these conditions are met, it would be a good idea to check the publication properties for each publication.  Dive into the distribution database and try to find if any single publication is the root cause and potentially is retaining more replication commands than any other publication.  You can use sp_helppublication to check the publication settings for each publication.  You can check MSrepl_commands in the distribution database to find a correlation of commands retained to publication.
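As a rough sketch of that check (the join between MSrepl_commands, MSarticles, and MSpublications is my own assumption about the standard distribution tables, so validate it against your topology), something like this can show whether one publication is hoarding commands.

-- Count retained replication commands per publication in the distribution database.
-- Hedged sketch: the distribution database name and the joins may need adjusting for your setup.
USE distribution;
GO
SELECT p.publication,
    COUNT_BIG(*) AS retained_commands
FROM dbo.MSrepl_commands c
    INNER JOIN dbo.MSarticles a
        ON a.article_id = c.article_id
    INNER JOIN dbo.MSpublications p
        ON p.publication_id = a.publication_id
GROUP BY p.publication
ORDER BY retained_commands DESC;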

Once having checked all of this information, it’s time to put a fix in place.  It is also time to do a little research before actually applying this fix.  Why?  Well, because you will want to make sure this is an appropriate change for your environment.  For instance, you may not want to try this for a peer-to-peer topology.  In part because one of the settings can’t be changed in a peer-to-peer topology.  I leave that challenge to you to discover in a testing environment.

The settings that can help are as follows.

EXEC sp_changepublication
    @publication = 'somepub', -- put your publication name here
    @property = 'allow_anonymous',
    @value = 'false'
GO

EXEC sp_changepublication
    @publication = 'somepub', -- put your publication name here
    @property = 'immediate_sync',
    @value = 'false'
GO

These settings can have a profound effect on the distribution retention, the cleanup process and your overall CPU consumption.  Please test and research before implementing these changes.

Besides the potential benefits just described, there are other benefits to changing these settings.  For instance, changing replication articles can become less burdensome once these settings are disabled.  Disabling them can help reduce the snapshot load and allow a single article to be snapped to the subscribers instead of the entire publication.

Day 11 – Purging syspolicy

This is the eleventh installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New
  9. Queries Going Boom
  10. Retention of XE Session Data in a Table


Did you know there is a default job in SQL Server that is created with the purpose of removing system health phantom records?  This job also helps keep the system tables related to policy-based management nice and trim if you have policy-based management enabled.  This job could fail for one of a couple of reasons, and when it fails it could be a little annoying.  This article discusses fixing one of the causes of that failure.

I want to discuss when the job will fail due to the job step related to the purging of the system health phantom records.  Having run into this on a few occasions, I found several proposed fixes, but only one really worked consistently.

The error that may be trapped is as follows:

A job step received an error at line 1 in a PowerShell script.
The corresponding line is ‘(Get-Item SQLSERVER:\SQLPolicy\SomeServer\DEFAULT).EraseSystemHealthPhantomRecords()’.
Correct the script and reschedule the job. The error information returned by PowerShell is:
‘SQL Server PowerShell provider error: Could not connect to ‘SomeServer\DEFAULT’.
[Failed to connect to server SomeServer. -->

The first proposed fix came from Microsoft at this link.  The article proposed that the root cause of the problem was the server name not being correct.  Now, that article is specifically for clusters, but I have seen this issue occur more frequently on non-clustered instances than on clusters.  Needless to say, the advice in that article has yet to work for me.

Another proposed solution I found was to try deleting the “\Default” in the agent job step that read something like this.

(Get-Item SQLSERVER:\SQLPolicy\SomeServer\Default).EraseSystemHealthPhantomRecords()

Yet another wonderful proposal from the internet suggested using Set-ExecutionPolicy to change the execution policy to UNRESTRICTED.

Failed “fix” after failed “fix” is all I was finding.  Then it dawned on me: I had several servers where this job did not fail.  I had plenty of examples of how the job should look.  Why not check those servers and see if something is different?  I found a difference, and ever since I have been able to use the same fix on multiple occasions.

The server where the job was succeeding had this in the job step instead of the previously pasted code.

if ('$(ESCAPE_SQUOTE(INST))' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} ELSE {$a = ''};
(Get-Item SQLSERVER:\SQLPolicy\$(ESCAPE_NONE(SRVR))$a).EraseSystemHealthPhantomRecords()

That, to my eyes, is a significant difference.  After changing the job step to use this version, the job has been running successfully for me without error.
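If it helps to compare, the current command text for each step of the purge job can be pulled from msdb and checked against a server where the job succeeds (the job name below is the default syspolicy_purge_history; adjust it if yours was renamed).

-- List the steps of the purge job so the PowerShell command text can be compared
-- against a working server.  The job name is assumed to be the default.
SELECT j.name AS job_name,
    s.step_id,
    s.step_name,
    s.subsystem,
    s.command
FROM msdb.dbo.sysjobs j
    INNER JOIN msdb.dbo.sysjobsteps s
        ON s.job_id = j.job_id
WHERE j.name = N'syspolicy_purge_history'
ORDER BY s.step_id;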

I probably should have referenced a different server instead of resorting to the internet in this case.  And that advice stands for many things – check a different server, see if there is a difference, and see if you can get it to work there.  I could have saved time and frustration by simply looking at local “resources” first.

If you have a failing syspolicy purge job, check to see if it is failing on the phantom record cleanup.  If it is, try this fix and help that job get back to dumping the garbage from your server.

Day 10 – Retention of XE Session Data in a Table

This is the tenth installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New
  9. Queries Going Boom


 

Gathering event information is a pretty good thing.  It can do wonders for helping to troubleshoot.  What do you do if you need or want to be able to review the captured information in 3 months or maybe 12 months from now?

Retaining the session data for later consumption is often a pretty essential piece of the puzzle.  There is more than one way to accomplish that goal.  I am going to share one method that may be more like a sledgehammer for some.  It does require that Management Data Warehouse be enabled prior to implementing.

When using MDW to gather and retain the session data, you create a data collector pertinent to the data being collected and retained.  In the following example, I have a data collector that is created to gather deadlock information from the system health session.  In this particular case, I query the XML in the ring buffer to get the data that I want.  Then I tell the collector to gather that info every 15 minutes.  The collection interval is one of those things that needs to be adjusted for each environment.  If you collect the info too often, you could end up with a duplication of data.  Too seldom and you could miss some event data.  It is important to understand the environment and adjust accordingly.

Here is that example.

BEGIN TRANSACTION 
BEGIN Try 
DECLARE @collection_set_id_1 INT 
DECLARE @collection_set_uid_2 UNIQUEIDENTIFIER 
EXEC [msdb].[dbo].[sp_syscollector_create_collection_set] @name=N'system_health_deadlock', @collection_mode=1, 
@description=N'system_health_deadlock', @logging_level=1, @days_until_expiration=14, 
@schedule_name=N'CollectorSchedule_Every_15min', @collection_set_id=@collection_set_id_1 OUTPUT, 
@collection_set_uid=@collection_set_uid_2 OUTPUT 
 
SELECT @collection_set_id_1, @collection_set_uid_2 
 
DECLARE @collector_type_uid_3 UNIQUEIDENTIFIER 
SELECT @collector_type_uid_3 = collector_type_uid FROM [msdb].[dbo].[syscollector_collector_types] 
WHERE name = N'Generic T-SQL Query Collector Type'; 
DECLARE @collection_item_id_4 INT 
EXEC [msdb].[dbo].[sp_syscollector_create_collection_item] 
@name=N'system_health_deadlock'
, @PARAMETERS=N'<ns:TSQLQueryCollector xmlns:ns="DataCollectorType"><Query><Value>
 
SELECT CAST(
                  REPLACE(
                        REPLACE(XEventData.XEvent.value(''(data/value)[1]'', ''varchar(max)''), 
                        '''', ''''),
                  '''','''')
            AS XML) AS DeadlockGraph
FROM
(SELECT CAST(target_data AS XML) AS TargetData
from sys.dm_xe_session_targets st
join sys.dm_xe_sessions s on s.address = st.event_session_address
where name = ''system_health'') AS Data
CROSS APPLY TargetData.nodes (''//RingBufferTarget/event'') AS XEventData (XEvent)
where XEventData.XEvent.value(''@name'', ''varchar(4000)'') = ''xml_deadlock_report'' 
 
</Value><OutputTable>system_health_deadlock</OutputTable></Query></ns:TSQLQueryCollector>', 
@collection_item_id=@collection_item_id_4 OUTPUT
, @frequency=900 --#seconds in collection interval
,@collection_set_id=@collection_set_id_1
, @collector_type_uid=@collector_type_uid_3 
 
SELECT @collection_item_id_4 
 
COMMIT TRANSACTION; 
END Try 
BEGIN Catch 
ROLLBACK TRANSACTION; 
DECLARE @ErrorMessage NVARCHAR(4000); 
DECLARE @ErrorSeverity INT; 
DECLARE @ErrorState INT; 
DECLARE @ErrorNumber INT; 
DECLARE @ErrorLine INT; 
DECLARE @ErrorProcedure NVARCHAR(200); 
SELECT @ErrorLine = ERROR_LINE(), 
@ErrorSeverity = ERROR_SEVERITY(), 
@ErrorState = ERROR_STATE(), 
@ErrorNumber = ERROR_NUMBER(), 
@ErrorMessage = ERROR_MESSAGE(), 
@ErrorProcedure = ISNULL(ERROR_PROCEDURE(), '-'); 
RAISERROR (14684, @ErrorSeverity, 1 , @ErrorNumber, @ErrorSeverity, @ErrorState, @ErrorProcedure, 
@ErrorLine, @ErrorMessage); 
 
END Catch; 
 
GO

Looking this over, there is quite a bit going on.  The keys are the following parameters: @parameters and @frequency.  The @parameters parameter stores the XML query to be used when querying the ring buffer (in this case).  It is important to note that the XML query in this case needs to ensure that the value() call is capped to a max of varchar(4000), as shown in the following.

WHERE XEventData.XEvent.value(''@name'', ''varchar(4000)'')

With this data collector, I have trapped information and stored it for several months so I can compare notes at a later date.

Day 9 – Queries Going Boom

This is the ninth installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space
  7. Command ‘n Conquer
  8. Ring in The New


Ever see an error like this??

The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.

Error 8623
Severity 16

That is a beautiful error.  The message is replete with information and gives you everything needed to fix the problem, right?  More than a handful of DBAs have been frustrated by this error.  It’s not just DBAs that this message seems to bother.  I have seen plenty of clients grumble about it too.

The obvious problem is that we have no real information as to what query caused the error to pop.  The frustrating part is that the error may not be a persistent or consistent issue.

Thanks to the super powers of XE (extended events), we can trap that information fairly easily now.  Bonus is that to trap that information, it is pretty lightweight as far as resource requirements go.

Without further ado, here is a quick XE session that could be setup to help trap this bugger.

CREATE EVENT SESSION
    overly_complex_queries
ON SERVER
ADD EVENT sqlserver.error_reported
(
    ACTION (sqlserver.sql_text, sqlserver.tsql_stack, sqlserver.database_id, sqlserver.username)
    WHERE ([severity] = 16
		AND [error_number] = 8623)
)
ADD TARGET package0.asynchronous_file_target
(SET filename = 'C:\Database\XE\overly_complex_queries.xel' ,
    metadatafile = 'C:\Database\XE\overly_complex_queries.xem',
    max_file_size = 10,
    max_rollover_files = 5)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS)
GO
 
-- Start the session
ALTER EVENT SESSION overly_complex_queries
    ON SERVER STATE = START
GO

And now for the caveats.  This session will only work on SQL 2012.  The second caveat is that there are two file paths defined in this session that must be changed to match your naming and directory structure for the output files etc.

Should you try to create this session on SQL Server 2008 (or 2008 R2) instead of SQL Server 2012, you will get the following error.

Msg 25706, Level 16, State 8, Line 1
The event attribute or predicate source, “error_number”, could not be found.

Now that you have the session, you have a tool to explore and troubleshoot the nuisance “complex query” error we have all grown to love.  From here, the next step would be to explore the output.
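As a starting point for exploring that output, here is a hedged sketch that reads the files back with sys.fn_xe_file_target_read_file and shreds the XML for the pieces we care about (the paths assume the same directory used in the session definition above).

-- Read the captured events and pull out when the error fired, the message, and the SQL text.
-- Adjust the file paths to match your session definition.
SELECT x.event_data_xml.value('(event/@timestamp)[1]', 'datetimeoffset') AS event_time,
    x.event_data_xml.value('(event/data[@name="message"]/value)[1]', 'nvarchar(max)') AS error_message,
    x.event_data_xml.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM sys.fn_xe_file_target_read_file('C:\Database\XE\overly_complex_queries*.xel',
    'C:\Database\XE\overly_complex_queries*.xem', NULL, NULL) f
CROSS APPLY (SELECT CAST(f.event_data AS XML) AS event_data_xml) x;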

Day 7 – Command ‘n Conquer

This is the seventh installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity
  6. Lost in Space

As DBAs, we sometimes like to shortcut things.  Not shortcutting a process or something of importance.  The shortcuts are usually in the realm of trying to shortcut time, shortcut the number of steps to perform a task, or shortcut by automating a process.

We seldom like to perform the same task over and over and over again.  Click here, click there, open a new query window, yadda yadda yadda.  When you have 100 or so servers to run the same script against – it could be quite tedious and boring.  When that script is a complete one-off, there probably isn’t much sense in automating it either.

To do something like I just described, there are a few different methods to get it done.  The method I like to use is via SQLCMD mode in SSMS.  Granted, if I were to use it against 100 servers, it would be a self documenting type of script.  I like to use it when setting up little things like replication.

How many times have you scripted a publication and the subscriptions?  How many times have you read the comments?  You will see that the script has instructions to run certain segments at the publisher and then other segments at the subscriber.  How many times have you handed that script to somebody else to run and they just run it on the one server?

Using SQLCMD mode and then adding a :CONNECT command in the appropriate places could solve that problem.  The only thing to remember is to switch to SQLCMD mode in SSMS.  Oh, and switching to SQLCMD mode is really easy.  The process to switch to SQLCMD mode is even documented.  You can read all about that here.
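As a minimal sketch (server and database names are placeholders, and the replication statements are just stand-ins), a SQLCMD-mode script can switch connections inline so each segment runs where it belongs.

-- Run in SSMS with Query > SQLCMD Mode enabled.  Server and database names are placeholders.
:CONNECT MyPublisherServer
USE MyPublishedDb;
GO
-- publication setup statements (sp_addpublication, sp_addarticle, ...) would run here
PRINT 'Publication steps ran against ' + @@SERVERNAME;
GO

:CONNECT MySubscriberServer
USE MySubscriberDb;
GO
-- subscription setup statements (sp_addpullsubscription, ...) would run here
PRINT 'Subscription steps ran against ' + @@SERVERNAME;
GO

Hand that script to somebody else and the :CONNECT lines do the server hopping for them.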

And there you have it, yet another simple little tidbit to take home and play with in your own little lab.

Day 6 – Lost in Space

This is the sixth installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker
  5. Peer Identity

Credit: NASA/JPL

One of the more frequently recurring themes I see in my travel and work is the perpetual lack of space.

For instance, every time I fly there is inevitably a handful of people that have at least three carry-on items and at least one of those items is larger than the person trying to “carry” it on the plane.  Imagine watching these people trying to lift 100+ pound bags over their heads to put them into these small confined overhead storage compartments.  We are talking bags that are easily 2-3 times larger than the accepted dimensions, yet somehow this person made it on the plane with such a huge bag for such a tiny space.

Another favorite of mine is watching what appears to be a college student driving home in a GEO Metro.  A peek inside the vehicle might reveal 5 or 6 baskets of soiled laundry and linens.  A look at the vehicle as a whole might reveal a desert caravan’s worth of supplies packed onto the vehicle.  Watching the vehicle for a while you might notice that it can only lumber along at a top speed of 50 mph going downhill and good luck getting back up the hill.  It is just far too over-weighted and over-packed.  The vehicle obviously does not have enough room internally.

In both of these examples we have a limited amount of storage space.  In both of these examples we see people pushing the boundaries of those limitations.  Pushing those boundaries could lead to some unwanted consequences.  The GEO could break down leaving the college student stranded with dirty laundry.  The air-traveler just may have to check their dog or leave it home.

But what happens when people try to push the boundaries of storage with their databases?  The consequences can be far more dire than either of the examples just shared.  What if pushing those boundaries causes an outage and your database is servicing a hospital full of patient information (everything from diagnostics to current allergies – like being allergic to dogs on planes)?  The doctor needs to give the patient some medication immediately or the patient could die.  The doctor only has two choices: one could mean death, the other could mean life.  All of this is recorded in the patient records, but the doctor can’t access those records because the server is offline due to space issues.

Yeah, that would pretty much suck.  But we see it all the time.  Maybe nothing as extreme as that case, but plenty of times I have seen businesses lose money, revenue, and sales because the database was offline due to space.  The company wants to just keep pushing those boundaries.

In one case, I had a client run themselves completely out of disk space.  They wouldn’t allocate any more space, so it was time to start looking at what could be done to alleviate the issue and get the server back online.

In digging about, I found that this database had 1TB of its 1.8TB allocated to a single table.  That table had a clustered index built on 6 columns.  The cool thing about this clustered index is that not a single query ever used that particular combination of columns.  Even better was that the database was seldom queried.  I did a little bit of digging and found that there really was a much better clustered index for the table in question.  Changing to the new clustered index reduced the table size by 300GB.  That is a huge chunk of waste.

Through similar exercises throughout the largest tables in the database, I was able to reduce index space waste by 800GB.  Disk is cheap until you can’t have anymore.  There is nothing wrong with being sensible about how we use the space we have been granted.
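As a rough first pass at finding that kind of waste (this is my own sketch, not the script linked below), per-index reserved space can be pulled from sys.dm_db_partition_stats.

-- Approximate reserved space per index, largest first (run in the database being examined).
SELECT OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
    OBJECT_NAME(ps.object_id) AS table_name,
    i.name AS index_name,
    SUM(ps.reserved_page_count) * 8 / 1024.0 / 1024.0 AS reserved_gb
FROM sys.dm_db_partition_stats ps
    INNER JOIN sys.indexes i
        ON i.object_id = ps.object_id
        AND i.index_id = ps.index_id
WHERE OBJECTPROPERTY(ps.object_id, 'IsUserTable') = 1
GROUP BY ps.object_id, i.name
ORDER BY reserved_gb DESC;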

Thinking about that waste, I was reminded of a great resource that Paul Randal has shared.  You can find a script he wrote to explore this kind of waste at this link.  You can even read a bit of background on the topic from this link.

Day 5 – Peer Identity

This is the fifth installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers
  4. Broken Broker

In the digital age it seems we are constantly flooded with articles about identity crises – from identity theft to mistaken identity.  SQL Server is not immune to these types of problems and stories.  Whether SQL Server was housing the data that was stolen, leading to identity theft, or SQL Server is having an identity management issue of its own, SQL Server is definitely susceptible to identity issues.

The beauty of SQL Server is that these identity issues seem to be most prevalent when trying to replicate data.  Better yet is when replication has multiple peers set up in a Peer-to-Peer topology.

When these identity problems start to crop up, there are a number of things that can be done to try and resolve them.  Two possible solutions: one can try to manually manage the identity ranges, or one can flip the “Not for Replication” attribute on the table.
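As a hedged sketch of those two options (the table name and reseed value are placeholders, and sp_identitycolumnforreplication is normally called by replication itself, so test before using it directly):

-- Inspect and manually reseed the identity range (values are illustrative).
DBCC CHECKIDENT ('dbo.SomeReplicatedTable', NORESEED);
DBCC CHECKIDENT ('dbo.SomeReplicatedTable', RESEED, 500000);
GO
-- Flip the Not for Replication attribute on the identity column so replication agent
-- inserts do not consume the local identity range.
DECLARE @objid INT = OBJECT_ID('dbo.SomeReplicatedTable');
EXEC sys.sp_identitycolumnforreplication @object_id = @objid, @value = 1;
GO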

The identity crisis in replication gets more fun when there are triggers involved.  The triggers can insert into a table that is not replicated or can insert into a table that is replicated.  Or even better is when the trigger inserts back into the same table it was created on.  I also particularly like the case when the identity range is manually managed but the application decides to reseed the identity values (yeah that is fun).

In one particular peer-to-peer topology, I had to resort to a multitude of fixes depending on the article involved.  In one case we flipped the “Not for Replication” flag because the tables acted on via trigger were not involved in replication.  In another, we disabled a trigger because we determined the logic it was performing was best handled in the application (it was inserting a record back into the table the trigger was built on).  And there was that case where the table was being reseeded by the application.

In the case of the table being reseeded we threw out a few possible solutions but in the end we felt the best practice for us would be to extend the schema and extend the primary key.  Looking back on it, this is something that I would suggest as a first option in most cases because it makes a lot of sense.

In our case, extending the schema and PK meant adding a new field to the PK and assigning a default value to that field.  We chose for the default value to be @@ServerName.  This gave us a quick location identifier for each location and offered us a quick replication check to ensure records were getting between all of the sites (besides relying on replication monitor).
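As a minimal sketch of that change (table, column, and constraint names are made up for illustration), it looks roughly like this:

-- Add a location column defaulted to the local server name.
ALTER TABLE dbo.SomeReplicatedTable
    ADD LocationID NVARCHAR(128) NOT NULL
        CONSTRAINT DF_SomeReplicatedTable_Location DEFAULT (@@SERVERNAME);
GO
-- Rebuild the primary key so it includes the new column alongside the identity value.
ALTER TABLE dbo.SomeReplicatedTable
    DROP CONSTRAINT PK_SomeReplicatedTable;
GO
ALTER TABLE dbo.SomeReplicatedTable
    ADD CONSTRAINT PK_SomeReplicatedTable
        PRIMARY KEY CLUSTERED (SomeID, LocationID);
GO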

When SQL Server starts throwing a tantrum about identities, keep in mind you have options.  It’s all about finding a few possible solutions or mix of solutions and proposing those solutions and then testing and implementing them.

One of the possible errors you will see during one of these tantrums is as follows.

Explicit value must be specified for identity column in table ‘blah’ either when IDENTITY_INSERT is set to ON or when a replication user is inserting into a NOT FOR REPLICATION identity column.

Day 4 – Broken Broker

This is the fourth installment in the 12 day series for SQL tidbits during this holiday season.

Previous articles in this mini-series on quick tidbits:

  1. SQL Sat LV announcement
  2. Burning Time
  3. Reviewing Peers


On a recent opportunity to restore a database for a client, I experienced something new.  I thought it was intriguing and it immediately prompted some questions.  First, let’s take a look at the message that popped up during the restore and then on to what was done to resolve the problem.

 

Query notification delivery could not send message on dialog ‘{someguid}.’. Delivery failed for notification ‘anotherguid;andanotherguid‘ because of the following error in service broker: ‘The conversation handle “someguid″ is not found.’

My initial reaction was “Is Service Broker enabled?”  The task should have been a relatively easy, straightforward database restore and then setting up replication after that.  My next question that popped up was “Is SB necessary?”
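For reference, that first question can be answered quickly from sys.databases (the database name below is a placeholder).

-- Check whether Service Broker is enabled on the restored database.
SELECT name,
    is_broker_enabled,
    service_broker_guid
FROM sys.databases
WHERE name = N'SomeRestoredDb';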

Well, the answers that came back were “Yes” and “YES!!!”  Apparently, without SB, the application would break in epic fashion.  That is certainly not something that I want to do.  There are enough broken brokers and broken applications without me adding to the list.

Occasionally when this problem arises it means that the Service Broker needs a “reset.”  And in this case it makes a lot of sense.  I had just restored the database and there would be conversations that were no longer valid.  Those should be ended and the service broker “reset.”
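Before reaching for the reset, it can also be worth a quick look at what conversations survived the restore (a hedged check on my part, run in the restored database).

-- Review the conversation endpoints left behind by the restore.
SELECT conversation_handle,
    far_service,
    state_desc,
    dialog_timer
FROM sys.conversation_endpoints
ORDER BY state_desc;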

The “reset” is rather simple.  First a word of warning – do not run this on your production instance or any instance without an understanding that you are resetting SB and it could be that conversations get hosed.

ALTER DATABASE  <dbname> SET NEW_BROKER WITH ROLLBACK IMMEDIATE

For me, this worked like a charm.  There was also substantial reason to proceed with it.  If you encounter this message, this is something you may want to research and determine if it is an appropriate thing to do.
