T-SQL Tuesday #081: Recap

Comments: 4 Comments
Published on: August 18, 2016

Sharpen Something

In case you missed it (many did), TSQL Tuesday was a challenging event this month. I invited people to put a little more into writing a post than they usually might. There were some very good reasons for this. If you are interested, take a look back at the invite and see if you maybe want to give it a go outside the bounds of TSQL Tuesday. You can check out the original post here.

Before I get into the nitty gritty, I have a confession. This topic was as much a reminder for myself as it was a challenge to others – a nudge to help me continue to drive and improve in various areas as I see fit.

There are other dirty little secrets too. Some may become apparent as you read through the recap.

Recap of the Event

One of the tricks to becoming and staying a top-tier data talent or professional is a perpetual cycle to learn, adapt, change, and evolve. We must be in a continual cycle of self-evaluation and self-modification. Let's call this by something else – we must be agile. There, I said the five letter word. Think about it in broad strokes with your career – it is a development process with perpetual evaluation, review, and tweaks.

Now think about the invite and see how that fits with what I just said or with the, cough cough, agile flow. You start (albeit very basically) with a need for enhancement, then you plan which pieces of the enhancement you can accomplish, you then do the work (whether successful or not), then after you deliver the work you conclude with a retrospective (what went well and what needs to change). Yes! I do feel rather dirty for sneaking this on everybody like this. That said, when you think about the model and apply it in broad strokes to your career path – it has merit.

Another way of viewing this is to think in terms of the following flowchart to help improve your personal mindset or maybe improve your personal mental power. The process is repetitive and follows a natural course. Once you have acted on some plan, you must review the performance and results and then gauge where your mindset needs to go from there to improve.

mindset_improvement_560

  • Rob Farley
  • Steve Jones
  • Robert Davis
  • Kenneth Fisher
  • Kennie Nybo Pontoppidan
  • Mala Mahadevan
  • Wayne Sheffield
  • Jason Brimhall

In other Words

Did you just look at the picture or did you explore it? If you hover over the picture, you will find links to this month's participants. There were only eight, so not a ton of exploration is necessary.

Here are my thoughts on each of the posts submitted this month:

Wayne Sheffield (blog | twitter) – You can find his link in the big arrow that restarts the cycle. I put his link here because he ran into a ton of blockers during his experiment and he is at a spot of practically restarting – again. This is not the first time he has restarted in his quest to learn more about Availability Groups. Wayne fully admits he is deficient in AG and states near the end of the post that he had to humble himself going through this exercise. That is awesome! We could all use a little humility on a more regular basis.

Mala Mahadevan (blog | twitter) – You can find her link in the "Results" circle. The reason for this choice is that Mala discusses her midlife crisis – erm, career change. Mala held out for quite a while looking for just the right opportunity. When it came, she snatched it up. Along with that career change, she has implemented a plan to become more active in blogging and to learn more and more through various avenues. The increase in blogging and the ability to stick to her guns resulted in a new job/career she seems to be happy with at the moment.

Robert Davis (blog | twitter) – Robert found himself placed in the performance circle thanks to his article involving a third party backup utility that should be heavy on the performance side. Robert needed something interesting to push him to reacquaint himself with this tool. Once he found that project that required just a touch of ingenuity, performance and a way to avoid the GUI, Robert found himself right at home with a great solution for his environment.

Kennie Nybo Pontoppidan (blog | twitter) – Kennie landed in the Actions node mostly because he decided to take the challenge and act on his long-time desire to get better at the new temporal features. To do that, he decided to read a book by Snodgrass, which seriously sounded like something from Harry Potter to me. Kennie outlines a bunch of information that he learned from the book, such as tracking time-based data from either a transaction or valid-time perspective.

personal_growth_brain

Kenneth Fisher (blog | twitter) – I placed this one into the behavior node. Maybe it is a bit of a stretch, but it seems to make sense since he discussed some behavioral differences between Azure DB and SQL Server. Things just do not work exactly the same between the two. You will need to understand these differences if you find yourself in a spot where you must work with both.

Steve Jones (blog | twitter) – When looking through the image, you will find that Steve landed solidly in the mindset node. When I read his contribution, I got the full impression that his mind was 100% in the right place. He set out to learn something and try to get better at it. Additionally, he blogged about a topic that is near and dear to me – Extended Events. Have I mentioned before that I have a lot of content about XE? You can read a bunch of it here. Like Wayne, Steve was humble near the end of his article. He notes that he was clumsy as he started working with XE but that he is glad he did it as well. Read his article. He gave me a great idea of another use for XE and I am sure it may sound good to you too!

Rob Farley (blog | twitter) – I planted Rob firmly on the attitude node. It seems clear to me that Rob had loads of attitude throughout his article about Operational Analytics. The attitude I perceived was that of humility and yearning. Rob feels like he has a lot to learn and his attitude is in the right place it seems to keep him going while he tries to learn more in the field of Operational Analytics.

My Contribution link can be found by clicking on any spot in the image that is not already described. I wrote about my experiences with trying to pick up a little on JSON.

That is a wrap of all eight contributions. If you did not contribute this month, I recommend that you still try to do something with the challenge issued with this month's TSQL Tuesday.

Edit: Added links to the articles with each person's name in the event this page is being viewed with Firefox. There seems to be an issue with the links in the image map within Firefox.

T-SQL Tuesday #081: Getting Sharper

Comments: No Comments
Published on: August 16, 2016

Sharpen Something

This month I am the host of the TSQL Tuesday blog party. In the invite, which can be read here, I asked people to decide on something to work on, plan it out, and then report the success/failure.

Not only am I the host, but I am also a participant this month. In my invite (and the reminder) I provided a few examples of what I was really looking for from participants. It became apparent that the topic may have been overthought. So, for my contribution, I decided to do something extremely simple.

There is so much to SQL Server that it is not feasible, nor should it be expected, for any single person to know everything about the product. That said, within SQL Server alone, all of us have something to learn and improve upon within our skill set. If we extend out to the professional development realm, we have even more we can explore as a skill-sharpening experiment for this month.

I am going to keep it strictly within the SQL Server realm this month. I have chosen to develop my skills a little more with the topic of JSON. I should be an expert in JSON, but since it is spelled incorrectly – maybe I have something to learn. That said, I really do love being in the database now – haha.

JSON

Let's just get this out there right now – I suck at JSON. I suck at XML. The idea of querying a non-normalized document to get the data is not very endearing to me. It is for that reason that I have written utilities or scripts to help generate my XML shredding scripts – as can be seen here.

Knowing that I have this allergy to features similar to XML, I need to build up some resistance to the allergy through a little learning and a little practice. Based on that, my plan is pretty simple:

  1. Read up on JSON
  2. Find some tutorials on JSON
  3. Practice using the feature
  4. Potentially do something destructive with JSON

With that plan set before me, it is time to sharpen some skills and then slice, dice, and maybe shred some JSON.

Sharpening

Nothing in this entire process was actually too terribly difficult. That is an important notion to understand. My plan was very lacking in detail and really just had broad strokes. This helps me to be adaptable to changing demands and time constraints. I dare say the combination of broad strokes and a very limited scope also allowed me an opportunity for easier success.

Researching JSON was pretty straightforward. This really meant a few Google searches. There was a little bit of time spent reading material from other blogs, a little bit from BOL, and a little bit from MSDN. Nothing extravagant here. I did also have the opportunity to review some slides from a Microsoft presentation on the topic. Again, not terribly difficult or demanding in effort or time. This research covers both steps one and two of the plan.

Now comes the more difficult task. It was time to put some of what had been seen and read to practice. A little experimentation was necessary. I have two easy enough looking examples that I was able to construct to start experimenting with in my learning endeavors.

Here is the first example. This is a bit more basic in construct. (Updated to use an image since the json was messing with the rss feed and causing malformed xml.)

json_xmpl
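The query itself only survives in that image, so here is a minimal, plain-text sketch of one possible shape for that kind of basic JSON query using OPENJSON. The JSON document and column names below are made up for illustration – they are not the ones from the image.

-- Illustrative only: shred a simple JSON document into rows and columns
DECLARE @json NVARCHAR(MAX) = N'[
    {"FirstName":"Some","LastName":"Body","City":"Anywhere"},
    {"FirstName":"Another","LastName":"Person","City":"Somewhere"}
]';

SELECT FirstName, LastName, City
FROM OPENJSON(@json)
WITH (
    FirstName NVARCHAR(50) '$.FirstName',
    LastName  NVARCHAR(50) '$.LastName',
    City      NVARCHAR(50) '$.City'
);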

And some basic results:

basic_json

Pretty slick. Better yet is that this is many times easier than XML.

How about something a little different like the following:

Admittedly, this one is a bit more of a hack. In my defense, I am still learning how to work with this type of stuff. At any rate, I had an array of values for one of the attributes. The kludge I used reads up to three values from that array and returns those values as individual attributes. I am still learning in this area, so I can live with this for now.

array_json
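Again, the actual query lives in the image. An illustrative version of that kind of kludge, pulling the first three array entries out with hard-coded JSON_VALUE paths (the sample data here is invented), might look like this:

-- Illustrative only: return up to three array values as individual attributes
DECLARE @json NVARCHAR(MAX) =
    N'{"Item":"Widget","Colors":["Red","Green","Blue","Black"]}';

SELECT JSON_VALUE(@json, '$.Item')      AS Item,
       JSON_VALUE(@json, '$.Colors[0]') AS Color1,
       JSON_VALUE(@json, '$.Colors[1]') AS Color2,
       JSON_VALUE(@json, '$.Colors[2]') AS Color3;  -- anything past the third value is ignored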

The last part of the plan involved doing something destructive. Why? Well just for the fun of it. I was unable to get to this stage but it is still in the plans.

Report on The Successes and Failures

 

I have written about some of the successes and failures along the way thus far. Overall, I would rate this a successful endeavor. The big reason for calling it a success is that I feel I learned more about JSON within SQL Server than I knew prior to the experiment.

Taking a bite-sized chunk of learning and acting on it sure makes it a lot easier to learn a new concept or to learn more about such a vast topic as SQL Server.

*Note: This is a late publish because the post did not auto-post as scheduled. It is a tad late, but I discovered it as I was prepping the roundup.

T-SQL Tuesday #081: Sharpen Something – Reminder

Comments: No Comments
Published on: August 2, 2016

Sharpen Something

Last week I sent out the invite for the August TSQL Tuesday blog party. In that invite I promised to send out a reminder seven days prior to the event. Well – it is that time.

You are cordially invited to read the invite for TSQL Tuesday 81 and plan your entry for the party.

In the invite, I shared the details for the event including examples of what I am looking for as an entry for the event.

I hope we will be seeing you next Tuesday, August 9th in attendance at this month’s party. I am sure it will prove to be an interesting experience one way or another.

Bonus Example

In the original invite I provided a list of examples of what one could do for this TSQL Tuesday. Today, I am providing one more example in a slightly different format. Recall that the invite requested that participants set out to accomplish something, make a plan and report on that “goal”, the plan, and the outcome.

So, let's say I have discovered that I write too much in the passive voice. Based on that information, I would like to overcome the passivity in my writing voice; therefore, my goal would be to learn how to write more assertively (less passively). In order to accomplish that goal, I may need to read up on the topic and learn exactly what it means to write passively. Then I would need to examine articles that I have written. And then I would need to practice writing more assertively. After all of that is done, I may have somebody (or something) analyze a brand new article or two to determine if I have achieved my desire.

After having executed on that plan, I will write about the experience including what the initial goal and plan were and also on what worked or didn’t work while trying to reach that goal. To summarize, here is an outline of that example:

What I will Accomplish

I will learn how to write more assertively (or just Write more assertively)

How Will I do that

Research what it means to write passively

Research what it means to write assertively

Evaluate “assertively” written articles

Take Notes on how to write assertively

Evaluate my articles

Practice writing assertively

Write a new article and have it reviewed to judge the voice whether it seems too passive or not

Report on The Successes and Failures

Write about whether each step succeeded or failed.

Write if a step was unnecessary

Write about the experience and your thoughts about the experience.

Did you achieve or fail overall?

What is T-SQL Tuesday?

T-SQL Tuesday is a monthly blog party hosted by a different blogger each month. This blog party was started by Adam Machanic (blog|twitter). You can take part by posting your own participating post that fits the topic of the month and follows the requirements below. Additionally, if you are interested in hosting a future T-SQL Tuesday, contact Adam Machanic on his blog.

How to Participate

  • Your post must be published between 00:00 GMT Tuesday, August 9th, 2016, and 00:00 GMT Wednesday, August 10th, 2016.
  • Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
  • Trackbacks should work. But, please do add a link to your post in the comments section below so everyone can see your work.
  • Tweet about your post using the hashtag #TSQL2sDay.

T-SQL Tuesday #081: Sharpen Something

Comments: 11 Comments
Published on: July 27, 2016

Sharpen Something

It has now been 30 months since the last time I hosted a TSQL Tuesday – that was TSQL Tuesday 51. I recapped that event here, with the original invite here. I can't believe it has been that long since I last hosted. It seems like only yesterday.

Coming into the present day, we are now at TSQL Tuesday 81. For this month, I would like to try to up the ante a bit. Usually we only get about a week's notice prior to the event to think about the article to write for it.

This time, I want to invite everybody just a little bit sooner and will follow-up with a reminder seven days prior to the event. The reason I want to do this is because I think this may be a touch more difficult this time.

 

This month I am asking you to not only write a post but to do a little homework – first. In other words, plan to do something, carry out that plan, and then write about the experience. There is a lot going into that last sentence. Because of that, let me try to explain through a few examples of what I might like to see. Hopefully these examples will help you understand the intent and how this month the topic relates to “Sharpening Something“.

EXAMPLES

  1. You have learned about a really cool feature called Azure DevTest Lab. Having heard about it, you wish to implement this feature to solve some need in your personal development or corporate environment. Develop a plan to implement the feature and tell us the problem it solves and about your experiences in getting it to work from start to end. An example of how I might use this would involve the creation of a disposable, easy-to-set-up environment for precons, workshops, and various other types of training.
  2. There is a really awesome book about SQL Server you heard about and you decided to buy it. Plan to sit down and read the book. Take a nugget or two from the book and tell us how you can use that nugget of information within your personal or professional environment.
  3. You know you are extremely deficient at a certain SQL skill. Tell me what that skill is and develop a plan to get better at it. Report on the implementation of that plan and how you are doing at improving. Maybe that skill is Extended Events, PoSH, or Availability Groups.
  4. Similar to the skill deficiency, you know you do not understand a certain concept within SQL Server as well as you feel you should. Maybe that concept is indexing or statistics (for example). Create a two week plan to become more proficient at that concept. Follow that plan and report on your progress.

To recap, this is an invite to set a short-term goal covering the next two weeks. Tell everybody what that goal is (in your TSQL Tuesday post, of course), how you went about creating a plan for that goal, and how you have progressed during the two-week interval.

What is T-SQL Tuesday?

T-SQL Tuesday is a monthly blog party hosted by a different blogger each month. This blog party was started by Adam Machanic (blog|twitter). You can take part by posting your own participating post that fits the topic of the month and follows the requirements below. Additionally, if you are interested in hosting a future T-SQL Tuesday, contact Adam Machanic on his blog.

How to Participate

  • Your post must be published between 00:00 GMT Tuesday, August 9th, 2016, and 00:00 GMT Wednesday, August 10th, 2016.
  • Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
  • Trackbacks should work. But, please do add a link to your post in the comments section below so everyone can see your work.
  • Tweet about your post using the hashtag #TSQL2sDay.

SQL Server Desired Enhancements

Happy Belated Birthday

The monthly Data Professionals blog party has come and gone. It happens the second Tuesday of every month – or at least it is supposed to happen on that day. This month, the formidable Chris Yates (blog | twitter) has invited everybody to a birthday party – of sorts. As with many birthdays, there is always somebody who wishes you a happy belated birthday. For this party, it is my turn to offer up that belated birthday wish. It just so happens there was some coordination between Chris and myself for this belated birthday.

Read all about the invite from Chris’ blog. If you missed the link, here it is again – right here.

Plastic Surgery – Desired Enhancements

SQL Server 2016 has come with a ton of cool features, bells, whistles and, well, cool stuff (yes, redundant). That aside, what are some of the really cool features that I would love to see in SQL Server? Let's run through them (and yes, I will be a bit greedy – it is standard operating procedure when asking for gifts, right?).

Gift #1

I need some way of being able to reproduce a production database cleanly and efficiently in a different environment. Sure, I can script everything and develop an elaborate process to ensure I got an exact duplicate of the stats, stat steps, stats histogram, schema, procedures, indexes, etc etc etc. Being able to do all of that cleanly and efficiently is the key. This is a pretty big want from clients and could be extremely useful.

Microsoft has heard the pleas. Introduced with SP2 for SQL Server 2014, there is a new DBCC statement to do exactly that – DBCC CLONEDATABASE. Check out all the details here.
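For anybody wanting to try it, the statement is a one-liner. The database names below are placeholders, and the clone contains schema and statistics only – no data.

-- Illustrative only: requires SQL Server 2014 SP2 or later
DBCC CLONEDATABASE (ProductionDB, ProductionDB_Clone);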

Gift #1, let’s check that off the list.

Gift #2

Instant File Initialization is fantastic and a huge time saver. Unfortunately, it only applies to the data files. We need something like this implemented for the transaction log. Currently the transaction log must be "zeroed" or 0-stamped when new space is allocated. This mechanism can delay transactions and impact performance when a file growth is required, when manually growing the file, or even when restoring the database.

Believe it or not, Microsoft has addressed this request as well. Microsoft has changed how the transaction log is stamped for a significant performance improvement. This is a part of SQL Server 2016. Bob Dorr explains it very well in his blog post on the topic. You can read his blog post here.

Wow, two for two. We can check gift #2 off the list.

Gift #3

Availability Groups seem to get bogged down under heavy load. The redo and log send queues seem to get backed up and can have a significant impact on production operations. We need the log transport to be faster. No, check that – not just faster, it needs to be 2-3x faster.

SQL Server 2016 comes to the rescue again. Amped up on SQL Steroids, Availability Groups has seen a significant improvement in log transport speeds to the secondaries. Some report it as at least twice as fast. The bottleneck has been moved out of the SQL Engine and it has really amped things up from a performance perspective. Here is a supporting article by Jimmy May on the topic – though it doesn’t go deep into the specifics.

Mark another one off the gift registry. Think we can maintain this pace?

Gift #4

Statistics seem to become stale for smaller tables, which dramatically affects the performance of certain queries. These tables will not see 20% of their rows updated on the leading edge any time before the turn of the year, but they would likely change within the six months following the new year. We need to be able to force these stats to auto-update more regularly without extra intervention.

Fair enough, we already have a trace flag that can help with that (TF 2371). Maybe the environment or management is resistant to having trace flags implemented for something such as this. You never know what the political red tape may dictate.
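For reference, on a pre-2016 instance where the red tape can be cleared, enabling the flag globally is a single statement (it can also be set as a -T2371 startup parameter):

-- Illustrative only: enable the dynamic auto-update statistics threshold globally
DBCC TRACEON (2371, -1);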

SQL Server 2016 to the rescue again!!! SQL Server 2016 has this trace flag enabled automatically. You don't need to do anything extra special. What this means is that those stats on the smaller tables may actually get updated without intervention despite the lack of change to the rows in the table.

That is four for four. Should we take this birthday party to Vegas? Don't assume I have stacked the deck either! ;0)

Gift #5

I am getting very frustrated with the constant clearing of usage stats every time I rebuild an index. Just because I rebuild the index does not mean that I no longer need the usage stats from prior to the rebuild. I need to be able to see the usage stats for the time spanning before and after the rebuild without creating a custom process to capture that information. Sure, it may not be an insanely difficult task to perform, but it is an extra process I have to build out. It's the principle of the matter.

SQL Server 2016 to the rescue yet again. This age-old bug of usage stats being cleared is finally fixed. It is frustrating, to say the least, to have to deal with this kind of bug. It is a huge relief to have it fixed and to be able to get a consistent, clear picture of the usage information since the server has been up.
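For context, the usage numbers in question come from sys.dm_db_index_usage_stats. A quick, illustrative query against that DMV for the current database looks like this:

-- Illustrative only: cumulative index usage for the current database
SELECT OBJECT_NAME(us.object_id) AS TableName,
       i.name                    AS IndexName,
       us.user_seeks,
       us.user_scans,
       us.user_lookups,
       us.user_updates
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
    ON i.object_id = us.object_id
   AND i.index_id  = us.index_id
WHERE us.database_id = DB_ID();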

For more information, you could read this article by Kendra Little – here.

Cha-ching. We are now five for five.

Gift #6

Digging a little deeper on this one. I would really love to see an enhancement to Resource Governor. Not just any enhancement will do. I need it to be enhanced so it will also affect the reporting services engine and the integration services engine in addition to the database engine. I want to be able to use RG to prevent certain reports from over consuming resources within the SSRS engine. Or for that matter, I want to make sure certain SSIS packages do not consume too much memory. If I can implement constraints on resources for these two engines it would be a huge improvement.

We will have to wait for a while on this one. It is currently not scheduled for delivery.

Gift #7

This one is going to be a little tougher. It's not in place, but it would be a fantastic gift in my opinion. I would like some tool, such as Extended Events, to be able to monitor the workload and determine the best recommended trace flags to implement. There are many trace flags that are far from well known but could be extremely helpful to production environments based on the workload and internal workflow. Not all trace flags are built for all environments. An analysis through some automated tool for the best recommended flags to implement (again, solely at your discretion) would be fantastic.

Gift #8

Get Profiler out of Management Studio finally. Enough said there. There really is no good solid reason in my opinion to keep it around. It is deprecated. It is hardly helpful with 2014 or 2016 and it is just dead weight. Extended Events really is the better way to go here.

Last Request

 

Can we please fix the spelling of JSON? It really needs to be spelled correctly. That spelling is: JASON.

Awesome SQL Server Feature

The second Tuesday of April 2016 is now upon us and you know what that means. Well, I hope you know what that means.

It is time for TSQL Tuesday. It is now the 77th edition of this monthly blog party. This month the host is Jens Vestergaard (blog | twitter) and he insists we do a little soul searching to figure out what it is about SQL Server that really makes our hearts go pitter-patter. Ok, so he didn't really put it that way, but you get the point, right? What is it about SQL Server that ROCKS in your opinion?

Well, I think there are a lot of really cool features in SQL Server that ROCK! It really is hard to pick just one feature because there are a lot of really good features that can make life so much easier as a database professional. Then again, there is that topic that bubbles to the top in my articles – a lot. If you haven’t followed my blog, here is a quick clue: click here.

Why is this feature so AWESOME?

Truth be told, there are a ton of reasons why I really like it. Before diving into the why, I need to share an experience.

A client using Microsoft Dynamics AX to manage the Point of Sale (POS) systems for their retail chain has been running into a problem with the POS database at each store. Approximately a year ago, this client had upgraded most of the store databases from Express to SQL Server Standard Edition due to the size restriction of the Express Edition. This SKU upgrade was necessary because the database had grown to exceed 10GB. Most of this growth was explicitly related to the INVENTDIM table consuming 3.5GB of space in the data file.

Right here, you may be asking what the big deal is. Just upgrade the SKU to Standard Edition and don't worry about the size of the database. I mean, that is an easy fix, right? Sure, that may be perfectly acceptable in an environment with one or maybe even a handful of servers. Imagine a retail chain with more than 120 stores and a database at each store. Now extrapolate Standard Edition licensing costs for all of those stores. Suddenly we are talking a pretty big expense just to upgrade. All of that because one table chews up 35% of the size limitation of a data file in SQL Server Express Edition.

What if there was an alternative with SQL Express to mitigate that cost and maintain the POS functionality? Enter the SYNONYM! You may recall from a previous post a thing or two that I have said about synonyms in SQL Server. There is good and bad to be had with this feature and most of the bad comes from implementation and not the feature itself.

Using a synonym, I can extend this database beyond the 10GB limitation – or at least that is the proposed theory. To make this work properly, the plan was to create a new database, copy the INVENTDIM table from the POS database to this new database, rename the old INVENTDIM table in the POS database, create a synonym referencing the new table in the new database, and then select from the table to confirm functionality. Sounds easy right? Here is the script that basically goes with that set of steps.
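The original script is not reproduced here, but a simplified sketch of those steps would look something like the following. The new database name (AxRetailDIM) shows up again later in this post; the POS database name (POSDB) is a placeholder.

-- Illustrative sketch of the steps described above
CREATE DATABASE AxRetailDIM;
GO
-- Copy the table into the new database
SELECT *
INTO AxRetailDIM.dbo.INVENTDIM
FROM POSDB.dbo.INVENTDIM;
GO
USE POSDB;
GO
-- Keep the original table around as a fallback
EXEC sp_rename 'dbo.INVENTDIM', 'INVENTDIM_OLD';
GO
-- Point the original name at the table in the new database
CREATE SYNONYM dbo.INVENTDIM FOR AxRetailDIM.dbo.INVENTDIM;
GO
-- Confirm the synonym resolves (the row-trimming step mentioned below is omitted here)
SELECT TOP (10) *
FROM dbo.INVENTDIM;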

This seems to make a fair amount of sense. Querying the INVENTDIM synonym produces results just as would be expected. Notice that there is one additional step in the script which I did not mention. That step removes unnecessary rows from the INVENTDIM table based on an actual inventory item or barcode for the particular dimension variant related to the item. This helps to trim the table to specific rows related to the retail store available for purchase there. In addition, it serves as a failsafe to get the data down to less than 10GB in case of failure with the synonym.

Testing from within SQL Server proved very optimistic. The synonym was working exactly as desired. Next up was to test the change by performing various transactions through the POS.

The solution not only failed, it failed consistently and dramatically. It didn’t even come close. How is this possible? What is Dynamics AX doing that could possibly subvert the synonym implementation? Time to start troubleshooting.

I checked through the logs. Nothing to be found. I checked and validated permissions. No Dice! I checked the ownership chaining. Still no dice! What in the world is causing this failure?

What if I switch to a view instead of a synonym? I created a view with cross-database ownership chains intact. Testing the application again still failed. What if I use a synonym pointed at a table in the same database? Testing from the application, all of a sudden we have success. Now the head-scratching gets a little more intense.

It is time to get serious. What exactly is the Dynamics AX POS application doing that leads to a failure that does not happen when we query directly from within Management Studio? The means to get serious is to now implement that awesome tool I alluded to previously – Extended Events (XE or XEvents).

With no clues being available from any of the usual sources (including application error messages), XE or profiler is about the only thing left to try and capture the root cause of this failure. Since this happens to be a SQL Server 2014 implementation (yeah I omitted that fact), the only real option in my opinion was to use XE. Truth be told, even on SQL Server 2008 R2, my go to tool is XE. In this case, here is what I configured to try and catch the problem:
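The session definition itself is not reproduced here, but an illustrative error-trapping session along these lines would catch the error shown below. This uses 2012-and-later syntax (on 2008/R2 the file target is named asynchronous_file_target), and the file path is a placeholder.

CREATE EVENT SESSION [TrapAXErrors] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.client_app_name, sqlserver.username,
            sqlserver.database_name, sqlserver.sql_text)
    WHERE severity >= 11            -- skip purely informational messages
)
ADD TARGET package0.event_file (SET filename = N'C:\Database\XE\TrapAXErrors.xel');
GO
ALTER EVENT SESSION [TrapAXErrors] ON SERVER STATE = START;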

With the session running, I had the POS tests begin again. Bang! It failed again, but I expected it and wanted it to fail again. This time around, finding the problem turned out to be really easy. As soon as the error hit, I was able to check the trapped events and see what it was that had been missing and ultimately causing this string of failures.

xe_trappederror_ax

Using the GUI (yeah rare occasion for me with XE), I filtered the events down for display purposes only to make it easier to see what was found by running these tests that was pertinent to the problem. Here is the highlighted text a little larger and easier to see:

Snapshot isolation transaction failed accessing database ‘AxRetailDIM’ because snapshot isolation is not allowed in this database. Use ALTER DATABASE to allow snapshot isolation.

Wow! Light bulb shines bright and the clue finally clicks. The POS databases for this client are all set to allow snapshot isolation. Since this error is coming at the time when the failure occurs in the application, it stands to reason that this is the root cause. Time to test by changing the snapshot isolation setting.
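That change is the one the error message itself suggests, run against the new database that holds the relocated table:

ALTER DATABASE AxRetailDIM SET ALLOW_SNAPSHOT_ISOLATION ON;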

That is a quick change and easy enough to test again. With the XE session still running, and the change in effect, it's time to test via the POS application again. Just as I hoped, the application is working now. This is good news! Time to test again and again and again to make sure it wasn't a fluke that only worked the once.

Not a single failure after the change to allow snapshot isolation. One small change with such a big impact and so few clues to be found except in that super Awesome Super Hero feature of SQL Server called Extended Events!

Being able to quickly find the root cause of so much pain is why I enjoy working with the Extended Events feature. It is an efficient way to find a ton of information while causing little overhead to the server.

The bonus here is that XE allowed us to pinpoint a problem with the proposed solution to help save costs while extending a database beyond the 10GB limitation of SQL Express.

Note: I left some notes in the XE session script. These notes help to point out differences between implementing this particular session on SQL Server 2012 (or later) and SQL Server 2008 (or R2).

All about the Change

Comments: 1 Comment
Published on: January 12, 2016

The second Tuesday of January 2016 is now upon us and you know what that means. Well, I hope you know what that means.

It is time for TSQL Tuesday. It is now the 74th edition of this monthly blog party. This month the host is Robert Davis (blog | twitter) and he has asked us to "Be the change". Whether the inspiration for this topic is the new year and resolutions, or Gandhi (you must be the change), or Caddyshack (be the ball), we will be discussing "change."

Specifically, Robert requested that we discuss data changes and anything relating to data changes. Well, I am going to take that “anything” literally and stretch the definition of changing data just a bit. It will all make sense by the end (I hope).

Ch-ch-changes

Changes happen on a constant basis within a database. Data will more than likely be changing. Yes, there are some exceptions to that, but the expectation that data is changing is not an unreal expectation.

Where that expectation becomes unwanted is when we start talking about the data that helps drive the configuration of the server. Ok, technically that is a setting or configuration option or a button, knob, whirligig, or thingamajig. Seldom do we really think about these settings as data. Think about it for a moment though. We can certainly derive some data about these changes (if these settings themselves are not actually data).

So, while you may call it settings changes, I will still be capturing data about the changes. Good? Good! Another term for this is auditing. And auditing applies to all levels including ETL processes and data changes etc. By that fortune, I just covered the topic again – tangentially.

How does one audit configuration changes? Well, there are a few different methods to do this. One could use a server-side trace, SQL Audit, Extended Events, or (if somebody really wants to) a custom solution involving none of those – some variation of T-SQL and error log monitoring. The point is, there are options. I have discussed a few options for the custom solution path as well as, in a recently published article, the default trace path. Today I will dive into what it looks like via SQL Audit.

When creating an audit to figure out what changes are occurring within the instance, one would need to utilize the SERVER_OPERATION_GROUP audit action group (a minimal sketch of such an audit follows the list below). This action group provides auditing of the following types of events:

  • Administer Bulk Operations
  • Alter Settings
  • Alter Resources
  • Authenticate
  • External Access
  • Alter Server State
  • Unsafe Assembly
  • Alter Connection
  • Alter Resource Governor
  • Use Any Workload Group
  • View Server State
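As referenced above, a minimal sketch of an audit built on this action group might look like the following. The names and file path are placeholders, and this is not the exact audit used later in this post.

USE master;
GO
CREATE SERVER AUDIT [SettingsChangeAudit]
TO FILE (FILEPATH = N'C:\Database\Audits\');
GO
CREATE SERVER AUDIT SPECIFICATION [SettingsChangeAuditSpec]
FOR SERVER AUDIT [SettingsChangeAudit]
ADD (SERVER_OPERATION_GROUP)
WITH (STATE = ON);
GO
ALTER SERVER AUDIT [SettingsChangeAudit] WITH (STATE = ON);
GO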

From this group of events, we can guess at the types of actions that might trigger one of these events to fire for the audit. Some of the possible actions would be:

Action – Example

  • Issue a bulk administration command – BULK INSERT TestDB.dbo.Test1 FROM 'c:\database\test1.txt';
  • Issue an alter connection command – KILL 66
  • Issue an alter resources command – CREATE RESOURCE POOL PrimaryServerPool WITH {}
  • Issue an alter server state command – DBCC FREEPROCCACHE
  • Issue an alter server settings command – Perform sp_configure with reconfigure
  • Issue a view server state command – SELECT * FROM sys.dm_xe_session_targets
  • Issue an external access assembly command – CREATE ASSEMBLY SQLCLRTest FROM 'C:\MyDBApp\SQLCLRTest.dll' WITH PERMISSION_SET = EXTERNAL_ACCESS;
  • Issue an unsafe assembly command – CREATE ASSEMBLY SQLCLRTest FROM 'C:\MyDBApp\SQLCLRTest.dll' WITH PERMISSION_SET = UNSAFE;
  • Issue an alter resource governor command – ALTER RESOURCE GOVERNOR DISABLE
  • Authenticate – see view server state; the vsst type occurs for auth events
  • Use any workload group – see Resource Governor
This is quite a bit of interesting information. All of these events can be audited from the same audit group. The interesting ones of the bunch are those that indicate some sort of change has occurred. That happens to be all but the "Authenticate", "View Server State", and "Use Any Workload Group" events – though even those could be stretched to say something has changed as well.

With all of that in mind, I find the "alter server settings" event to be the most problematic. While it does truly capture that something changed, it does not completely reveal to me what was changed – just that a reconfigure occurred. If a server configuration has changed, I can capture the spid and that reconfigure statement – sure. Once that is captured, I now have to do something more to figure out which configuration was "reconfigured". This is highly frustrating.

Here’s an example from the audit I created:

audit_alterserversettings

This is only a small snippet of the results. I can see who made the configuration change, the time, the spid, the source machine, etc. I just miss that nugget that tells me the exact change that was made. At least that is the case with changes made via sp_configure. There are fixes for that – as previously mentioned.

Here is another bit of a downside. If you have the default trace still running, a lot of this information will be trapped in that trace. Furthermore, some of the events may be duplicated via the object_altered event session (e.g. the resource governor events). What does this really mean? Extra tracing and a bit of extra overhead. It is something to consider. As for the extended events related events and how to do this sort of thing via XE, I will be exploring that further in a future post.

Suffice it to say that, while not a complete solution, the use of SQL Audit can be viable to track the changes that may be occurring within your SQL Server – from a settings point of view.

Auditing Needs Reporting

Comments: No Comments
Published on: October 13, 2015

TSQL2sDay

 

Welcome to the second Tuesday of the month. And in the database world of SQL Server and the SQL Server community, that means it is time for TSQL2SDAY. This month the host is Sebastian Meine (blog / twitter), and the topic that he wants us to write about is: "Strategies for managing an enterprise". Specifically, Sebastian has requested that everybody contribute articles about auditing. Auditing doesn't have to be just "another boring topic"; rather, it can be interesting, and there is a lot to it.

For me, just like last month, I will be doing a real quick entry. I have been more focused on my 60 Days of Extended Events series and was looking for something that ties into both really well but won't necessarily be covered in the series. Since I have auditing scheduled for later in the series, I was hoping to find something that meets both the XE topic and the topic of auditing.

No matter the mechanism used to capture the data to fulfill the "investigation" phase of the audit, if the data is not analyzed and reports are not generated, then the audit did not happen. With that in mind, I settled on a quick intro to getting at the audit data in order to generate reports.

Reporting

An audit can cover just about any concept, phase, action within a database. If you want to monitor and track performance and decide to store various performance metrics, that is an audit for all intents and purposes. If you are more interested in tracking the access patterns and sources of the sa login, the trapping and storing of that data would also be an audit. The data is different between the two, but the base concept boils down to the same thing. Data concerning the operations or interactions within the system is being trapped and recorded somewhere.

That said, it would be an incomplete audit if all that is done is to trap the data. If the data is never reviewed, how can one be certain the requirements are being met for that particular data-trapping exercise? In other words, unless the data is analyzed and some sort of report is generated from the exercise, it is pretty fruitless and just a waste of resources.

There is a plenitude of means to capture data to create an audit. Some of those means were mentioned on Sebastian’s invite to the blog party. I want to focus on just two of those means because of how closely they are related – SQL Server Audits and Extended Events. And as I previously stated, I really only want to get into the how behind getting to the audit data. Once the data is able to be retrieved, then generating a report is only bound by the imagination of the intended consumer of the report.

SQL Server Audits

The Audit feature within SQL Server was introduced at the same time as Extended Events (with SQL Server 2008). In addition to being released at the same time, some of the metadata is recorded alongside the XEvents metadata. Even some of the terminology is the same. When looking deep down into it, one can even find all of the targets for Audits listed within the XEvents objects.

Speaking of Targets, looking at the documentation for audits, one will see this about the Targets:

The results of an audit are sent to a target, which can be a file, the Windows Security event log, or the Windows Application event log. Logs must be reviewed and archived periodically to make sure that the target has sufficient space to write additional records.

That doesn’t look terribly different from what we have seen with XEvents thus far. Well, except for the addition of the Security and Application Event Logs. But the Target concept is well within reason and what we have become accustomed to seeing.

If the audit data is being written out to one of the event logs, it would be reasonable to expect that one knows how to find and read them. The focus today will be on the file target. I’m going to focus strictly on that with some very basic examples here.

I happen to have an Audit running on my SQL Server instance currently. I am not going to dive into how to create the audit. Suffice it to say the audit name in this case is "TSQLTuesday_Audit". This audit is being written out to a file with rollover. In order to access the data in the audit file(s), I need to employ a function (strikingly similar to the function used to read XE file targets) called fn_get_audit_file. The name is simple and task-oriented – making it pretty easy to remember.

Using the audit I mentioned and this function, I would get a query such as the following to read that data. Oh, and the audit in question is set to track the LOGIN_CHANGE_PASSWORD_GROUP event.
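The query itself is not reproduced here, but a basic form of it would be something like this (the file pattern is a placeholder for wherever the audit files are written):

SELECT af.event_time,
       af.action_id,
       af.session_server_principal_name,
       af.server_instance_name,
       af.statement
FROM sys.fn_get_audit_file(
        N'C:\Database\Audits\TSQLTuesday_Audit*.sqlaudit',
        DEFAULT, DEFAULT) AS af;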

There are some tweaks that can be made to this, but I will defer to the 60 day XE series where I cover some of the tweaks that could/should be made to the basic form of the query when reading event files / audit files.

XE Audits

Well, truth be told, this one is a bit of trickery. Just as I mentioned in the preceding paragraph, I am going to defer to the 60 day series. In that series I cover in detail how to read the data from the XE file target. Suffice it to say, the method for reading the XE file target is very similar to the one just shown for reading an Audit file. In the case of XEvents, the function name is sys.fn_xe_file_target_read_file.
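For a rough comparison, an illustrative call of the XE counterpart looks like this (the path is a placeholder, and the event_data comes back as XML-shaped text to be shredded further):

SELECT CAST(xf.event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file(
        N'C:\Database\XE\*.xel', NULL, NULL, NULL) AS xf;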

Capturing data to track performance, access patterns, policy adherence, or other processes is insufficient for an audit by itself. No audit is complete unless data analysis and reporting is attached to the audit. In this article, I introduced how to get to this data which will lead you down the path to creating fantastic reports.

One Easy Strategy for the Database Enterprise

Comments: 1 Comment
Published on: September 8, 2015

TSQL2sDay

 

Welcome to the second Tuesday of the month. And in the database world of SQL Server and the SQL Server community, that means it is time for TSQL2SDAY. This month the host is Jen McCown (blog / twitter), half of the MidnightDBA team, and the topic that she wants us to write about is: “Strategies for managing an enterprise”. Specifically, she wants to know “How do you manage an enterprise? Grand strategies? Tips and tricks? Techno hacks? Do tell.”

For me, this month, I will be doing a real quick entry. I have been more focused on my 60 Days of Extended Events series and was looking for something that ties into it really well, won't necessarily be covered in the series, and might work well as an "Enterprise" worthy topic.

So, what I decided to land on was the system_health session.

Enterprise

Wait, isn’t the system_health session one of those things that is configured per Instance?

Yes it is!

The system_health session is a default Extended Events session that is running by default on every instance of SQL Server (keyword is default) since SQL Server 2008. Whether you want it to be running or not is an entirely different conversation. But by default it is running.

There is a small problem with that default though. That problem is in the 2008 and 2008 R2 flavors of SQL Server. The default behavior is that the session only dumps the events to the ring buffer. And if you are only dumping the events to the ring buffer, you can imagine this is not entirely that useful. Why? Well, the ring buffer is just a memory target and is considerably more volatile than writing the event session data out to a file. One need not try terribly hard to see why this can be frustrating (unless of course you didn't even know it was there).

So what to do to help push this in a more enterprise friendly direction? The answer is to add a file target like was done in the 2012 (and up) flavors of SQL Server. Here is the entire system_health session as defined in u_tables.sql (the backup script of the session deployed to the install directory):
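The full script from u_tables.sql is not reproduced here. As a minimal, illustrative alternative, the existing session can simply be altered to add a file target; the paths below are placeholders, the target name shown is the 2008/R2 one (on 2012 and later it is event_file), and the session is stopped first and started again afterward.

ALTER EVENT SESSION [system_health] ON SERVER STATE = STOP;
GO
ALTER EVENT SESSION [system_health] ON SERVER
ADD TARGET package0.asynchronous_file_target (
    SET filename      = N'C:\Database\XE\system_health.xel',
        metadatafile  = N'C:\Database\XE\system_health.xem',
        max_file_size = 5               -- MB per file
);
GO
ALTER EVENT SESSION [system_health] ON SERVER STATE = START;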

Now, with all of the session data going out to disk, you can also schedule a scraper to copy the files to a central log folder on the network. Unfortunately, placing the files directly on a UNC share (via mapped drive or via UNC naming) does not work in 2008 or R2. I have a few more configurations to run on that still, but it doesn’t look good.

At least by dumping the session data to an event file, you are closer to an enterprise worthy solution. Just remember to do it!

One last thing. After you alter the system_health session, make sure you start it again.

 

Compressing Encrypted Backups

A common requirement, whether born of pure want or true necessity, is to make a large, encrypted database backup file much smaller.

This was a knock against the early days of Transparent Data Encryption (circa SQL Server 2012). If TDE was enabled, then a compressed backup (though compression was available) was not really an option. Not only did compression combined with TDE in the 2012 implementation fail to make the database backup smaller, it occasionally caused it to be larger.

This was a problem. And it is still a problem if you are still on SQL 2012. Having potentially seen this problem, amongst many others, Ken Wilson (blog | twitter) decided to ask us to talk about some of these things as a part of the TSQL Tuesday blog party. Read all about that invite here.

Encrypted and Compressed

Well, thankfully Microsoft saw the shortcoming as well. With SQL Server 2014, MS released some pretty cool changes to help us encrypt and compress our database backups at rest.

Now, instead of a database backup that could potentially get larger due to encryption and compression combined, we have a significant hope of reducing the encrypted backup footprint to something much smaller. Here is a quick example using the AdventureWorks2014 database.

In this little exercise, I will perform three backups. But before I can even get to those, I need to ensure I have a Master Key set and a certificate created. The encrypted backups will require the use of that certificate.

Do this in a sandbox environment please. Do not do this on a production server.
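The key and certificate setup is not shown here, so as a sandbox-only sketch (the names and password are placeholders):

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'SomeStrongPassphrase$2014';
GO
CREATE CERTIFICATE BackupEncryptCert
    WITH SUBJECT = 'Certificate for encrypted backups';
GO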

In the first backup, I will attempt to back up the AW database using both encryption and compression. Once that is finished, a backup that utilizes the encryption feature only will be done. And the last backup will be a compression-only backup. The three backups should show the space savings and encryption settings of each backup if all goes well. The compressed and encrypted backup should also show savings equivalent to the compression-only backup.
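An illustrative version of those three backups (the file paths and certificate name are placeholders) might look like this:

BACKUP DATABASE AdventureWorks2014
    TO DISK = N'C:\Database\Backup\AW2014_EncryptComp.bak'
    WITH COMPRESSION,
         ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert),
         INIT;
GO
BACKUP DATABASE AdventureWorks2014
    TO DISK = N'C:\Database\Backup\AW2014_EncryptOnly.bak'
    WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert),
         INIT;
GO
BACKUP DATABASE AdventureWorks2014
    TO DISK = N'C:\Database\Backup\AW2014_CompOnly.bak'
    WITH COMPRESSION, INIT;
GO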

With that script executed, I can query the backup information in the msdb database to take a peek at what happened.
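The exact query is not shown, but an illustrative version against msdb, with aliases roughly matching the results below, would be:

SELECT bs.database_name,
       bs.backup_finish_date,
       CAST(bs.backup_size / 1048576.0 AS DECIMAL(10, 2))            AS BackSizeMB,
       CAST(bs.compressed_backup_size / 1048576.0 AS DECIMAL(10, 2)) AS CompBackSizeMB,
       bs.key_algorithm,
       bs.encryptor_type
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'AdventureWorks2014'
ORDER BY bs.backup_finish_date DESC;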

This should produce results similar to the following:

backup_results

Looking at the results, I can see that the compression-only backup and the compression-with-encryption backup show very similar space savings. The compression-only backup dropped to 45.50MB and the compression-with-encryption backup dropped to 45.53MB. The encryption-only backup, interestingly, showed that the CompBackSizeMB (compressed_backup_size) got larger (this is the actual size on disk of that particular backup).

At any rate, the compression now works with an encrypted backup and your backup footprint can be smaller while the data is protected at rest. Just don’t go using the same certificate and password for all of your encrypted backups. That would be like putting all of your eggs in one basket.

With the space savings available in 2014, and if you are using SQL 2014, why not use encrypted backups?
